The Lifecycle of an Eventbrite Webhook

At Eventbrite, we have a feature called webhooks.  Webhooks can be thought of as the opposite of an API call.  When using our API, developers either ask us for information or hand us information; both interactions are initiated by the developer.  With a webhook, we proactively notify developers (via an HTTP POST with a JSON body) when actions happen on our site.  The actions we currently support are as follows:

  • Attendee data is updated
  • An attendee is checked in via barcode scan
  • An attendee is checked out via barcode scan
  • An event is created
  • An event is published
  • An event is unpublished
  • Event data is updated
  • Venue data is updated
  • Organizer data is updated
  • An order is placed
  • An order is refunded
  • Order data is updated

Webhooks are relatively simple to create.  You can create/delete them in our admin web interface.


You can also create/delete them by using the API.

import requests
from pprint import pprint

# This sample creates and then immediately deletes a webhook
def create_webhook():
    response = requests.post(
        "https://www.eventbriteapi.com/v3/webhooks/",
        headers={
            "Authorization": "Bearer YOURPERSONALOAUTHTOKEN",
        },
        data={
            "endpoint_url": "http://www.malina.io/webhook",
            "actions": "",
            "event_id": "26081133372",
        },
        verify=True,  # Verify SSL certificate
    )
    pprint(response.json())
    return response.json()['id']

def delete_webhook(hook_id):
    response = requests.delete(
        "https://www.eventbriteapi.com/v3/webhooks/" + hook_id + "/",
        headers={
            "Authorization": "Bearer YOURPERSONALOAUTHTOKEN",
        },
        verify=True,  # Verify SSL certificate
    )
    pprint(response.json())

if __name__ == '__main__':
    hook_id = create_webhook()
    delete_webhook(hook_id)

When various actions occur within our system, there is a pipeline of infrastructure through which they flow in order to finally result in an HTTP POST to a webhook URL.  In this post, I’ll describe that pipeline in detail.

 

Step 1
Some action happens in Eventbrite.  Someone creates an event, or updates one. Someone buys a ticket, etc.  This could happen on eventbrite.com or through one of our mobile apps, or through our API.

 

Step 2
Dilithium detects a change in Eventbrite’s database. Let’s take the example of someone updating an event.  You might think that we have a place in the code where all updates of events happen, and that place in the code is also responsible for publishing to Kafka.  However, it turns out that it’s not that simple.  Due to event access happening in multiple parts of our codebase, and also due to our desire to *never* miss an action, we watch for them in the source of truth:  our database.  We do this via a piece of technology we call Dilithium.

Dilithium is an internal service that directly watches the replication logs of one of our databases.  When it sees “interesting” SQL statements (an insert of an event, an update of an event, etc.) it packages the relevant data (what happened, the ID of the objects, etc.) as JSON and sends it to Kafka.
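We can’t walk through Dilithium’s actual source, but the core idea (filter row-level replication events down to the interesting ones and emit a compact JSON message) can be sketched roughly like this; the table names, function names, and message shape here are illustrative, not our actual code:

```python
import json

# Tables whose inserts/updates should produce webhook actions (illustrative)
INTERESTING_TABLES = {"events", "orders", "attendees", "venues", "organizers"}

def handle_replication_event(table, operation, row):
    """Turn a row-level replication event into a Kafka message, or ignore it."""
    if table not in INTERESTING_TABLES:
        return None  # the vast majority of statements are not interesting
    message = {
        "table": table,
        "operation": operation,   # e.g. "insert" or "update"
        "object_id": row["id"],   # just the ID; consumers re-fetch fresh data
    }
    return json.dumps(message)

def publish_to_kafka(message):
    # In the real service this is a Kafka producer; a print stands in here.
    print(message)
```

Only the object's identity travels in the message; anything downstream that needs the full row re-reads it, which keeps the messages small and the pipeline simple.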

 

Step 3
Kafka receives a message from Dilithium.  Kafka is a widely used messaging system; you can find out more at kafka.apache.org.
For our purposes it can be thought of as a “pub-sub” pipeline.  Messages get published to it, and any number of interested consumers subscribe so they are notified when messages arrive.  Kafka is a good choice for the webhooks pipeline because the actions that cause webhooks to fire are also relevant to other systems at Eventbrite: maybe we need to set or invalidate a cache, update our data warehouse, and so on.
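Stripped of Kafka’s durability, partitioning, and network layer, the pub-sub shape we rely on looks like this toy in-memory version (purely illustrative; this is not how Kafka itself works):

```python
from collections import defaultdict

class ToyPubSub:
    """In-memory stand-in for a pub-sub topic with multiple consumers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber sees every message, independently of the others
        for callback in self.subscribers[topic]:
            callback(message)

bus = ToyPubSub()
received = []
# Two independent consumers of the same stream of database changes
bus.subscribe("db-changes", lambda m: received.append(("webhooks", m)))
bus.subscribe("db-changes", lambda m: received.append(("cache", m)))
bus.publish("db-changes", {"action": "event.updated", "id": "123"})
```

The key property is the fan-out: one published change reaches every subscribed system without the publisher knowing who those systems are.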

 

Step 4
The Webhook service receives an action from Kafka.  You’ll notice that nowhere in the pipeline up to now do we look at the events and try to match them to an actual webhook.  As a result, the webhook service receives many events (the vast majority of them) for which there is not a webhook registered.

The webhook service, which is part of our Django application, starts by checking whether there is a webhook registered for each message it receives.  It simply uses the same database we discussed before (with some caching provided by memcache) to do this.  When it actually finds a webhook, it creates a JSON payload and is ready to make the HTTP request to the third-party developer.
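Conceptually, that matching step reduces to a lookup plus payload assembly. Here is a rough Python sketch; the registry dict, function names, and in-memory lookup are illustrative stand-ins for the real database-plus-memcache query:

```python
# Illustrative registry: (user_id, action) -> registered webhooks
WEBHOOKS = {
    ("163054428874", "event.published"): [
        {"webhook_id": "147601", "endpoint_url": "http://www.malina.io/webhook"},
    ],
}

def find_webhooks(user_id, action):
    # The real service consults the database with memcache in front;
    # a dict lookup stands in for that here.
    return WEBHOOKS.get((user_id, action), [])

def build_payload(webhook, action, user_id, object_api_url):
    """Assemble the JSON-serializable payload sent to the endpoint URL."""
    return {
        "api_url": object_api_url,
        "config": {
            "action": action,
            "endpoint_url": webhook["endpoint_url"],
            "user_id": user_id,
            "webhook_id": webhook["webhook_id"],
        },
    }
```

Most messages match nothing and are dropped here; only the small fraction with a registered webhook proceed to the HTTP step.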

 

The payload is JSON sent in an HTTP POST

{
    "api_url": "https://www.eventbriteapi.com/v3/events/26081133372/",
    "config": {
        "action": "event.published",
        "endpoint_url": "http://www.malina.io/webhook",
        "user_id": "163054428874",
        "webhook_id": "147601"
    }
}

Let’s take a closer look at what this payload object is made of.  

The ‘api_url’ can be thought of as the address of the data that caused the webhook to fire.  You could take that URL, append your personal OAuth token, plug it into your browser, and view the data in Eventbrite’s API Explorer.

The ‘action’ represents the change that we saw in the database. In the example above, the publish column of the event table was changed. All possible actions can be found in the bulleted list at the beginning of this post; each of them represents a change in the database.

The ‘endpoint_url’ is the value provided by the developer who registered the webhook; it is the address to which we send this payload.

The ‘user_id’ is the Eventbrite user id of the user who created the webhook.

The ‘webhook_id’ is the unique id that is assigned to this webhook.
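On the receiving side, a consumer only needs to unpack these fields and decide what to fetch. A minimal sketch of such a handler (this is third-party consumer code, not part of Eventbrite’s service; the function name is ours):

```python
import json

def handle_webhook(raw_body, oauth_token):
    """Parse a webhook POST body and build the follow-up API request URL."""
    payload = json.loads(raw_body)
    action = payload["config"]["action"]
    # As described above: append your token to api_url to fetch
    # the full, current object from the API
    fetch_url = payload["api_url"] + "?token=" + oauth_token
    return action, fetch_url
```

Because the payload carries only identifiers, the consumer always re-fetches the object and therefore sees its latest state, even if the webhook delivery was delayed.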

 

Step 5
The final step is sending the actual HTTP request.  As you can imagine, this can be (and usually is) by far the slowest part of the pipeline.  These URLs are not ours, and we know nothing about them.  Maybe they will time out, maybe they will take 20 seconds to respond, maybe they will return a 500 error and we will want to retry.  Because of all these concerns, performing the actual HTTP request from inside the webhooks service is not feasible; we really need to do it asynchronously.  For that we use a common async framework called Celery.  We won’t talk about Celery in too much detail, but in brief, it implements a task queueing system that makes it very easy to take a section of code and run it asynchronously.  You simply provide Celery with a queueing mechanism (RabbitMQ or SQS, for example) and it takes care of the rest.  It’s this easy:

from proj.celery import app
import requests

@app.task
def http_post_request(url, payload):
    response = requests.post(
        url,
        data=payload,
        verify=True,  # Verify SSL certificate
    )
    _log_response_to_database(response)

>>> http_post_request.delay(some_url, some_payload)

The celery workers make HTTP requests and store information about the request/response in the webhooks database.  We do this so we have a complete record of all the external communications related to webhooks.

 

Step 6
Sometimes the request to a webhook URL fails.  In these cases we try again.  Since we store all requests and responses in the database, it is easy to determine whether a particular webhook request has failed (and how many times).  To implement our retry policy, we have a cron job that retries failed requests every 10 minutes, up to 10 times.  It is written as a Django management command and largely reuses the same code path to queue the requests.
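The selection logic for that cron job reduces to a filter over the stored request/response log. Sketched here with plain Python dicts standing in for the Django ORM rows (the field names are illustrative):

```python
from datetime import datetime, timedelta

MAX_ATTEMPTS = 10
RETRY_INTERVAL = timedelta(minutes=10)

def requests_to_retry(log, now):
    """Pick failed webhook requests that are due for another attempt.

    `log` is a list of dicts standing in for rows in the webhooks DB:
    {"failed": bool, "attempts": int, "last_attempt": datetime}
    """
    return [
        row for row in log
        if row["failed"]
        and row["attempts"] < MAX_ATTEMPTS          # give up after 10 tries
        and now - row["last_attempt"] >= RETRY_INTERVAL  # wait 10 minutes
    ]
```

Each selected row is simply re-queued through the same Celery task shown above, so retries and first attempts share one delivery path.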

 

Lastly, let’s take a look at some of the things that are made possible by our webhook system. Zapier is one of our Spectrum Partners and is the largest consumer of our webhooks system. Zapier alone has tens of thousands of webhooks registered, which allow tens of thousands of Eventbrite organizers to automate sending their data to any combination of over 500 CRM applications.

 

Eventbrite and SEO: How does Google find our pages?

One thing that took me by surprise when I started researching SEO was that when a user enters a search term, the results are gathered from Google’s representation of the web, not the entire web. For a page to be included in the index, Google must have already parsed and stored the page’s contents in its databases.

To do this, automated robots known as spiders or crawlers scan the internet for links leading to pages they can index. These crawlers begin by scanning one page, then follow the links they find to scan and index further pages.


This pattern repeats until the search engine has indexed a sizable representation of the web. It stores the meta information and text it finds on each page in its databases, and it is this data it uses to generate the search engine results pages displayed to users.

Having a website online will not guarantee Google will find your site and include all of its pages in its rankings. Google needs to find each page through inbound and internal links, the website’s own sitemap, or manual submission. Eventbrite relies on a mixture of these strategies to make sure our pages are included in Google’s index of the web.

Inbound Links

Inbound links are links from other domains that point to your website. Once Google’s crawlers land on a page, they quickly parse its content, including any links that do not specifically tell search engines to ignore them. If website A includes a link to website B, Google will follow that link to website B after it is done parsing website A. The more external sites that link to your site, the better the chance Google has of indexing your pages.

Inbound links also play a large part in increasing a site’s relevancy and authority. Google’s main aim is to treat each web page as a user would. Therefore it deems pages that have a lot of natural inbound links as popular and increases their ranking in relevant search results. These links must occur naturally, though, as Google is known to decrease a page’s rank, or remove it from the index entirely, if the majority of its inbound links are from low-authority or irrelevant pages.

The Sausalito Art Festival website links to its event page on Eventbrite

Links to our event pages are often included on our organizers’ own sites, which are indexed by Google. We also rely on press releases, news articles, and blogs to link to these event pages when covering the event. The more links we are able to accrue from outside sources, the higher our authority score is. This boosts all Eventbrite pages, as Google deems the site trustworthy and popular based on the pages linking to it.

Outbound Links

Once Google has landed on an Eventbrite page, we use internal linking to direct crawlers to other pages we want indexed. We utilize our most popular pages to point to other internal pages we want both users and Google to find. Our homepage is a popular entry point for users; therefore, Google views any internal links found on that page as important to parse and index. We take advantage of this by listing popular events and links to our category search pages.

We also take a lot of care curating the links within our footer, as they are shown on every page of our site and are a good indicator to Google that these links are important. Some of the footer links are dynamic depending on the top-level domain (TLD) visited. A user visiting eventbrite.com will see links to American cities in our footer, whereas users visiting eventbrite.com.au will see Australian cities.

Eventbrite Footer – US TLD

Eventbrite Footer – Australia TLD

We also use breadcrumbs on our public event pages to link to city and category directory pages. Not only does it provide another place for Google to find these pages, but it also allows users to jump quickly to other events similar to the current event page they are visiting.

Breadcrumb trail on Eventbrite event pages

Sitemap

A sitemap is a file, or set of files, that provides a map for search engines to find all of a site’s pages. While it doesn’t replace linking, it does help crawlers find pages they might have missed due to orphaning or an absence of interlinking. Sitemaps also pass along useful metadata about each URL, including when it was last modified and how often the page may change. While you will mainly see sitemaps as XML files, text and RSS file types are also accepted by Google.

For large sites, it is best to break up sitemaps, as Google has a limit of 50,000 URLs and a file-size limit of 10MB uncompressed per sitemap. You can then place the URLs of your smaller sitemaps into a sitemap index file. This is the approach we take at Eventbrite, as we have over 10 million pages and growing.
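The mechanics of splitting a large URL set into compliant sitemap files and referencing them from an index can be sketched as follows (a simplified illustration, not Eventbrite’s actual sitemap generator):

```python
MAX_URLS_PER_SITEMAP = 50000  # Google's per-file URL limit

def chunk_urls(urls, limit=MAX_URLS_PER_SITEMAP):
    """Split a flat URL list into sitemap-sized chunks."""
    return [urls[i:i + limit] for i in range(0, len(urls), limit)]

def sitemap_index(sitemap_urls, last_modified):
    """Render a minimal sitemap index referencing each sitemap file."""
    entries = "\n".join(
        "  <sitemap><loc>{}</loc><lastmod>{}</lastmod></sitemap>".format(
            url, last_modified)
        for url in sitemap_urls
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            + entries + "\n</sitemapindex>")
```

In practice you would also watch the 10MB uncompressed size limit, not just the URL count, and regenerate the files as pages are added.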

Our main sitemap index holds links to the sitemaps for event pages, directory pages, venue profile pages, and organizer pages, with information on when each sitemap was last modified. Each sitemap then has information on its priority, which gives Google an indication of how often it should come back to index new pages.

A snippet of Eventbrite’s sitemap index

Keep in mind that including a link in the sitemap will not guarantee that Google’s crawlers will index and parse that page. Sitemaps merely suggest links for search engines to index and should not replace linking practices.

Manual Submission

For new sites, it is unrealistic to expect Google’s crawlers to find their pages through inbound links alone. Google allows you to manually submit either a single page or a sitemap through the Google Search Console (formerly Webmaster Tools). Again, it is at Google’s discretion whether it will crawl and index these pages or not.

Google Crawl Budget

Google sets a crawl limit, also known as a budget, on each website. Every website has a different crawl budget closely linked to its page rank. This means the more Google deems your site as relevant and important the more time it will spend crawling and indexing your pages each time it visits your site.

Determining factors Google uses to set your crawl budget are your authority score, how often your site is updated, the frequency of new pages being added, and individual page speed and size. To increase the number of pages Google indexes on each visit, make sure you fix broken links, as they waste crawl time and leave the crawler with no further links to follow. You should also make sure there are no redirect loops, where page A redirects to page B, which then redirects back to page A; the crawler gets stuck in the loop when it could have been indexing other pages on your site.

Also utilize your robots.txt file: determine which pages are unimportant or low quality, and add rules to disallow crawlers from following and indexing those pages or directories. Eventbrite has over 10 million pages, but only 1.5 million are included in Google’s index. We pay close attention to pages with low-quality, spammy, or dated content and restrict Google from indexing them. We also place links we deem important close to the homepage or make them easily accessible from our global navigation. A well-thought-out site hierarchy is key to making sure priority pages are indexed and reindexed frequently.
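Robots.txt rules like the ones described here can be checked with Python’s standard library before deploying them; the paths below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt disallowing a low-quality directory
robots_txt = """User-agent: *
Disallow: /internal/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Crawlers may fetch public event pages but not the disallowed directory
print(parser.can_fetch("*", "https://example.com/e/some-event"))
print(parser.can_fetch("*", "https://example.com/internal/report"))
```

Running rules through a parser like this catches typos in Disallow lines before they accidentally block pages you want indexed.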

Wrap Up

With over 40 billion web pages on the internet, Google often needs a hand to find new websites and pages. By some estimates, Google indexes only around 10% of the pages on the web. It is important to remember that when a user enters a search term in Google, the pages searched are not the entire web but Google’s representation of the web. The results returned are those that Google has found and stored in its databases.

You should not rely on a single strategy to improve the chances of Google parsing and indexing all the pages on your site. A clear, well-thought-out site hierarchy is important, with every page linked internally at least once. Sitemaps are a great starting point for Google to find your pages, and manual submission is important for new pages that are high priority.

As your site grows and receives more inbound links, Google will prioritize indexing new pages, as it wants the most relevant and popular pages appearing in search results. Including content that will draw users to your site will also increase your presence on search engines. Here at Eventbrite we live by the motto that what is good for SEO should be good for user experience, too.

Learning ES6: Promises


Like clockwork, the Learning ES6 series continues, this time looking at promises. Promises are the first feature we’ve looked at in this series that is really more than syntactic sugar. But promises aren’t entirely new to JavaScript; they’ve existed for quite some time in helper libraries. ECMAScript 6 now brings native promise support to JavaScript via the Promise API. Let’s jump right in!

TL;DR

A promise represents the eventual result of an asynchronous operation. Instead of registering a callback in the call to an async function, the function returns a promise. The caller registers callbacks with the promise to receive either a promise’s eventual value from the async operation or the reason why the promise cannot be fulfilled.

// Creating a promise wrapper for setTimeout
function wait(delay = 0) {
    return new Promise((resolve, reject) => {
        setTimeout(resolve, delay);
    });
}

// Using a promise
wait(3000)
    .then(() => {
        console.log('3 seconds have passed!');
        return wait(2000);
    })
    .then(() => {
        console.log('5 seconds have passed!');
        x++; // ReferenceError triggers `catch`
    })
    .catch(error => {
        // output: ReferenceError
        console.log(error);
    })
    .then(() => {
        // simulate `finally` clause
        console.log('clean up');
    });

Did you notice the use of default parameters and arrow functions too? If you’re unfamiliar with those ES6 features, you should check out the articles detailing how they work. Interested in learning more about ES6 promises? Clone the Learning ES6 GitHub repo and take a look at the promises code examples page, which shows off the features in greater detail.

Well you’ve come this far. You might as well keep going!

Continue reading

The Realistic Code Reviewer, Part II

Once you have a strong foundation for being a realistic code reviewer, you’re finally ready to move into the actual code itself.

Rely on established patterns more than personal style.

A common mistake in a code review is recommending things you’re used to seeing rather than well-documented patterns. The problem with this approach, besides reflecting a lack of thoughtfulness or desire to find the best solution, is that it can create a lack of trust. If your ego and personal preferences get in the way, you’ll lose the trust and confidence of the author, and the code suffers as a result.

Never forget: the perspective we offer should be a helpful flag for the author, not simply an opinionated comment from someone who isn’t the one actually writing the code.

A good example we encountered at Eventbrite is the JavaScript switch statement.  Consider this piece of code:

// Given a state, we return our city of choice.
function getPreferredCitiesByState(state) {
  switch (state) {
    case "Florida":
      return "Tallahassee";
    case "Idaho":
      return "Boise";
    case "Arizona":
      return "Phoenix";
    case "South Carolina":
      return "Columbia";
    default:
      return "San Francisco";
  }
}

// We could instead separate logic from data, using a dictionary in C#
// or a dict in Python; in JS a plain object does the job.
function getPreferredCitiesByState(state) {
  var preferredCities = {
    "Florida": "Tallahassee",
    "Idaho": "Boise",
    "Arizona": "Phoenix",
    "South Carolina": "Columbia"
  },
  defaultCity = 'San Francisco';
  // Look up the matching key in the object; if none matches,
  // return the default city.
  return preferredCities[state] || defaultCity;
}

When I look at this code, I immediately think it can be more succinct, simplified, and written in a way that’s much easier to maintain and improve in the future. But how should I communicate that with an author?

“I don’t like that statement; use this piece of code instead.”

Sure, that might get us to a better piece of code for now, but at best it improves the code without improving the author and at worst, it becomes a frustrating debate of egos.

There are plenty of good alternatives for how to approach giving productive feedback:

  • Present the pattern by name (this technique is widely documented, usually referred to as hash maps or lookup maps in languages like Python or Java).
  • Apply a well-known principle such as “tell, don’t ask,” along with its proper justifications, since it is well documented.
  • Provide a generic solution that can serve as a model for resolving the specific problem. This is especially useful if the solution isn’t common or specific enough to point to directly. Plus, the generic model can be a useful tool in the future as well.
  • Ask the author questions that focus on the problem being solved (like “How are we thinking about maintenance/scalability?”) rather than the specific current solution. This can help expose actionable points.

 

Move away from the framework to gain perspective.

First off, let’s clear the air: this doesn’t mean we recommend writing as much code as possible outside of the chosen framework and tooling. On the contrary: we should know what is tightly coupled to the framework and what is not, and use that knowledge to create separation in our code.

This separation improves maintainability when the time comes to upgrade the framework, or even to switch frameworks without rethinking our algorithms.

When reviewing code, finding this clear line (and asking the author for a clear division, if possible) makes maintenance possible, or at least makes it easier to narrow down the actual problem when issues arise.

Another way of presenting this concept would be asking for a clear API in every minimal abstraction. To assess if this is being achieved, stop in any part of the code, and ask:

  • What is being transformed/altered here? (Ideally it should only be one thing.)
  • What is used to transform/alter the element? (Ideally this would be a clear API.)
  • How many flows can I see being developed here? (Ideally it should be one.)

Even though this technique produces really clean code, the risk is overdoing it and creating over-engineered code—arguably, rendering our entire effort useless.

 

Standardize repetitive pieces of code.

You might have heard quite a bit about “functional” or even “declarative” programming. (If not, DRY is a good starting point for learning the key concepts of these approaches.)

If we’re going through a code review and see several implementations that look alike but with tiny differences here and there, it can be really hard to keep track of them and how they affect the code.

At this point, we should start asking for utilities to simplify the code—or at least a way of finding the common denominator abstracted out. Once we isolate the abstraction, we can name it, which allows easy discussion/revisiting of the concept later on.

 

“Ship it!” with confidence.

Here at Eventbrite, a great review always ends with a celebratory “ship it!” from whoever reviewed the code. The review team’s collective consent and blessing should stand behind every part of our product.

Remember, code review always involves compromise; it’s a matter of making the right compromises. You want to get everyone to yes, but it might not always be an excited yes.

The crucial thing here is that disagreements can be voiced, explored, and resolved. If you find yourself uncomfortable shipping a piece of code as it stands, it’s your responsibility to dig into why (and the earlier in the review you deal with this concern, the better).

First, check your ego to ensure the discomfort is valid, not just you digging in your heels. If you objectively feel there’s still a problem, your job now is to explore it with the author and/or other reviewers. You can:

  • Go back to the core questions that informed the problem/solution earlier on.
  • Ask for testing on the area you’re concerned about.
  • Encourage the author to ask you additional questions about your feedback.

This is a conversation with someone whose instinct might be to protect what they’ve created, so don’t forget the importance of empathy. In the end, you should both have the goal of producing the best possible code—but it might be more difficult for them to critique it harshly.

As we grow as engineers, we discover new and fun ways to do this important work of creating the best code and experiences on the web. This is how we make sure we write excellent code and come out better every time. What are your techniques?

Learning ES6: Classes


We’re going from enhanced object literals that look a lot like classes to actual classes in ES6. We’ll learn, however, that these aren’t really classes, but syntactic sugar over the existing prototype functions in JavaScript. Let’s continue on with the Learning ES6 series!

TL;DR

ECMAScript 6 provides syntactic sugar over the prototype-based, object-oriented pattern in JavaScript. ES6 classes provide support for constructors, instance and static methods, (prototype-based) inheritance, and super calls. Instance and static properties are not (yet) supported.

// Define base Note class
class Note {
	constructor(id, content, owner) {
		this._id = id;
		this._content = content;
		this._owner = owner;
	}

	static add(...properties) {
		// `this` will be the class on which `add()` was called.
		// Increment the ID counter.
		++this._idCounter;

		let id = `note${this._idCounter}`;

		// construct a new instance of the note passing in the
		// arguments after the ID. This is so subclasses can
		// get all of the arguments needed
		let note = new this(id, ...properties);

		// add note to the lookup by ID
		this._noteLookup[id] = note;

		return note;
	}

	static get(id) {
		return this._noteLookup[id];
	}

	// read-only
	get id() { return this._id; }

	get content() { return this._content; }
	set content(value) { this._content = value; }

	get owner() { return this._owner; }
	set owner(value) { this._owner = value; }

	toString() {
		return `ID: ${this._id}
			Content: ${this._content}
			Owner: ${this._owner}`;
	}
}

// Static "private" properties (not yet supported in class syntax)
Note._idCounter = -1;
Note._noteLookup = {};

class ColorNote extends Note {
	constructor(id, content, owner, color='#ff0000') {
		// super constructor must be called first!
		super(id, content, owner);
		this._color = color;
	}

	get color() { return this._color; }
	set color(value) { this._color = value; }

	toString() {
		// Override `toString()`, but call the parent/super
		// version first
		return `${super.toString()}
			Color: ${this._color}`;
	}
}

// `add` factory method is defined on `Note`, but accessible
// on ColorNote subclass
let colorNote = ColorNote.add('My note', 'benmvp', '#0000ff');

// output: ID: note0
// Content: My note
// Owner: benmvp
// Color: #0000ff
console.log(`${colorNote}`);

// output: true
console.log(Note.get('note0') === colorNote);

This is just a quick example of how ES6 classes work. Be sure to clone the Learning ES6 GitHub repo and take a look at the classes code examples page, which shows off the features in greater detail.

The example also uses default parameters, rest parameters, and the spread operator so you may want to revisit the relevant articles if you’re not familiar. It also makes use of template strings for string interpolation, so you should read up on that as well.

Continue reading

The Realistic Code Reviewer, Part I

So you’re on board with presenting code for others to review. But what’s the flip side of this?

Code review isn’t always easy to get right. Like any form of communication, it’s often fraught with opportunities for miscommunication and confusion—but working through these challenges for an improved end-result is immensely valuable. We’re here to help.

'And when did you last see your father?', 1878 by William Frederick Yeames

Consider your ability to support the author before offering a critique.

Making regular, bite-sized code reviews a part of your normal development process is a crucial first step. But it’s more than just the first round of feedback that matters: supporting the developer you’re reviewing is just as critical as your initial technical review.

Continue reading

Escaping the Software Factories

Introduction

When we face the decision of where to pursue our professional development, whether in a country different from our own or at a different company, many questions tend to arise: Will we be able to grow professionally? Will someone lend us a hand to improve our professional level? Will the projects offered to us be appealing? And perhaps the hardest of all: how much are we willing to commit to these new projects?

These questions are the ghosts that follow us throughout our careers. Here, we will try to answer some of them.

One company, one possible world.

After 12 years in computing and programming, I understand (perhaps later rather than sooner) that what finally matters is the relationship between us and the company. This relationship is made of people, of course, but each person needs to, and must, fit into this complicated system. A good example is how our peers interact and the culture that creates: Are we all serious? Do we joke around? Do we talk about problems? Or are people so polite that they become passive-aggressive, or does everyone simply understand that they are alone in the world and must find a way to make themselves indispensable?

Continue reading

Help!ful Things YOU Can Do for New Developers

Last year, I wrote an article titled The Catch-22 of Being “Too Junior”. It detailed high-level conditions that companies should assess when considering hiring “junior” developers. This article follows up that post.

During my first two years as a burgeoning developer, I maintained a list of things veteran engineers did that helped me evolve as a software engineer. Here’s that list in all its glory:

Explain jargon that new developers might not know or understand.

I actually wrote an entire conference talk on this topic. Software engineering and web development have their own vocabulary, which is not easily understood. Keeping this in mind helps people in all roles of a company, from product managers to support agents and salespeople. It’s also a great way to practice empathy.

Notice that someone in your conversation circle looks confused? Don’t keep droning on! Clarify. Not everyone knows what dependency injection is.

Continue reading

Learning ES6: Enhanced Object Literals


Wow, we’re making some good progress covering ECMAScript 6 features in this Learning ES6 series. We most recently covered template literals and arrow functions. Now we zero in on the enhancements to object literals, another piece of ES6 syntactic sugar.

TL;DR

ECMAScript 6 makes declaring object literals even more succinct by providing shorthand syntax for initializing properties from variables and defining function methods. It also enables the ability to have computed property keys in an object literal definition.

function getCar(make, model, value) {
	return {
		// with property value shorthand
		// syntax, you can omit the property
		// value if key matches variable
		// name
		make,  // same as make: make
		model, // same as model: model
		value, // same as value: value

		// computed values now work with
		// object literals
		['make' + make]: true,

		// Method definition shorthand syntax
		// omits `function` keyword & colon
		depreciate() {
			this.value -= 2500;
		}
	};
}

let car = getCar('Kia', 'Sorento', 40000);

// output: {
//     make: 'Kia',
//     model:'Sorento',
//     value: 40000,
//     depreciate: function()
// }
console.log(car);

car.depreciate();

// output: 37500
console.log(car.value);

The enhanced object literals code examples page has many more examples showing off each feature in more detail. There are also some ES6 katas for testing your ES6 enhanced object literal knowledge.

Continue on for more details!

Continue reading

Eventbrite and SEO: The Basics

Search Engine Optimization is important for all sites, but at Eventbrite it’s critical to our business. Many of our pages are created by our organizers, and our ability to surface the events to relevant people is one of the ways we make sure our customers are successful. My name is Beck Cronin-Dixon and I am one of the Software Engineers at Eventbrite focusing on making sure our organizers’ events rank highly in Google and their customers are able to find the events they are looking for.

Before starting at Eventbrite, I was (of course) aware of the term SEO but didn’t fully understand its meaning or role in a company, especially in terms of programming. Through thorough digging I realized that any website that wants to put its services or products in front of customers must invest in the right SEO tactics. In this post, I cover the very basics of SEO that we incorporate into the Eventbrite site and that have helped us grow organic traffic year over year.

When Search Engine Optimization is mentioned I usually receive two kinds of reactions: the first is confusion, and the second is hesitancy. While most people know that SEO is the practice of improving a website’s ranking within search engines, it’s often not clear how it can and should be done.

The days of “build it and they will come” are over. Websites have to think not only about the quality of their content for users, but also about how it will be perceived by search engines. High-quality content that attracts a lot of eyes will always win on both fronts. Google heavily weighs in favor of sites with high authority and relevancy, which is often hard for sites that rely heavily on user-generated content. Here at Eventbrite, we are always trying to improve the rankings of our organizers’ events and other Eventbrite pages, and we have to stay on the lookout for user-generated content that could be deemed duplicate or thin by Google. Too many of these pages could result in a low authority score, or worse, a penalty that affects the whole site.

Continue reading