A Story of a React Re-Rendering Bug

As front-end developers, we often find ourselves getting into perplexing bugs when the page we build involves a lot of user interactions. When we find a bug, no matter how tricky it is, it means something is wrong in the code. There is no magic, and the code itself does not lie.

This blog post will take you on a short journey through how I fixed a particularly annoying bug in one of our products.

How to fix the ugly focus ring and not break accessibility in React


Creating beautiful, aesthetic designs while maintaining accessibility has always been a challenge in the frontend. One particular barrier is the dreaded “:focus” ring. It looks like this:

focus outline on a button

After clicking any button, the default styling in the browser displays a focus outline. This ugly outline easily mars a perfectly crafted interface.

A quick Stack Overflow search reveals an easy fix: just add a bit of CSS, outline: none;, to the affected element. It turns out that many websites use this trick to avoid the focus outline and keep their sites beautiful.

Design System Wednesday: A Supportive Professional Community

Design systems produce a lot of value by providing an effective solution for both design and engineering. Yet, they take considerable time and work to set up and maintain. Often, only a few people get tasked with this mammoth undertaking, and knowing where to begin is hard.

Design System Wednesday is a monthly community event where we welcome anyone working on, or wanting to learn about, design systems. These events provide a much-needed place to show off your system or tooling, or to pose a burning question to the group. You get a room full of incredible product designers, front-end engineers, and product managers whose insightful answers and battle stories directly apply to the work you’re doing.

Keep reading to learn about Design System Wednesdays. Our design system community meetings promote learning, cross-discipline partnership, and systems thinking.

Get input from other design system experts

As a design systems developer/designer, surrounding yourself with others facing the same challenge is incredibly beneficial. Most likely, you are one of a handful of designers and engineers dedicated to this vast undertaking. How daunting! Where do you begin? Have you found the most effective solution? How do you manage the balance between being too design or engineering centric? Design System Wednesday provides a space to bounce ideas off of others, ask for advice, or even crack some hilarious systems jokes!

We once had the pleasure of meeting a new design system lead whose company had charged her with starting its design system. She asked her design system questions and got advice from people from over 10 companies: how to get buy-in, recommendations on tech stacks, and what design tools to use. What better way to learn than from peers working on similar things? I remember everyone’s willingness to answer her questions and help steer her in the right direction.

Grow and collaborate

I attended my very first Design System Wednesday the second day at my new job. It was exciting meeting everyone and, at the same time, a little intimidating. Still, I remember people’s welcoming and open spirit. I now look forward to attending these every month. We have a different group of people join us and different companies graciously host us every session. The open dialog, hospitality, and open day structure foster a space for growth and collaboration.

Become part of a community

As a front-end engineer, I seem to always be around other engineers. How refreshing to meet people from other roles and responsibilities! A diverse group of people from companies of all sizes and disciplines comprises the Design System Wednesday community. You can usually find product designers, front-end engineers, and product managers all sitting around the same table. I get to hear how they approach problems and how they solve them.

I even get to foster new friendships over silly easter eggs their products have that I didn’t know about. One Design System Wednesday, some Atlassian designers showed me Jira Board Jr., their April Fools’ joke: a Jira board for kids, so they don’t miss out on the joy of building a Jira board! I find it very refreshing to step out of my bubble and build connections with peers outside my company and discipline.

Design System Wednesday at Zendesk, Aug 2018

Design System Wednesdays is a community event for the community, by the community. I love being part of this community and helping plan these events, the same way I love helping other design system-ers come together, collaborate, and inspire each other.

We enjoy community events here at Eventbrite, what about you? What are some ways you help your community come together and inspire each other? Drop us a comment below or ping me on Twitter @mbeguiluz.

Featured Image: Design System Wednesday at Zendesk – August 8, 2018

Why Would Webpack Stop Re-compiling? (The Quest for Micro-Apps)

Eventbrite is on a quest to convert our “monolith” React application, with 30+ entry points, into individual “micro-apps” that can be developed and deployed individually. We’re documenting this process in a series of posts entitled The Quest for Micro-Apps. You can read the full Introduction to our journey as well as Part 1 – Single App Mode outlining our first steps in improving our development environment.

Here in Part 2, we’ll take a quick detour to a project that occupied our time after Single App Mode (SAM), but before we continued towards separating our apps. We were experiencing an issue where Webpack would mysteriously stop re-compiling and provide no error messaging. We narrowed it down to a memory leak in our Docker environment and discovered a bug in the implementation for cache invalidation within our React server-side rendering system. Interest piqued? Read on for the details on how we discovered and plugged the memory leak!

A little background on our frontend infrastructure

Before embarking on our quest for “micro-apps,” we first had to migrate our React apps to Webpack. Our React applications originally ran on requirejs because that’s what our Backbone / Marionette code used (and still does to this day). To limit the scope of the initial switch to React from Backbone, we ran React on the existing infrastructure. However, we quickly hit the limits of what requirejs could do with modern libraries and decided to migrate all of our React apps over to Webpack. That migration deserves a whole post in itself.

During our months-long migration in 2017 (feature development never stopped by the way), the Frontend Platform team started hearing sporadic reports about Webpack “stopping.” With no obvious reproduction steps, Webpack would stop re-compiling code changes. In the beginning, we were too focused on the Webpack migration to investigate the problem deeply. However, we did find that turning off editor autosave seemed to decrease the occurrences dramatically. Problem successfully punted.

Also, the migration to Webpack allowed us to change our React server-side rendering solution (we call it react-render-server, or RRS) in our development environment. With requirejs, react-render-server used Babel to transpile modules on-demand with babel-register.

if (argv.transpile) {
  // When the `transpile` flag is turned on, all future modules
  // imported (using `require`) will get transpiled. This is
  // particularly important for the React components written in JSX.
  require('babel-register')({
    stage: 0,
  });

  reactLogger('Using Babel transpilation');
}
This code is how we were able to import React files to render components. It was a bit slow, but effective. However, because Node caches all of its imports, we needed to invalidate the cache each time we made changes to the React app source code. We accomplished this by using supervisor to restart the server every time a source file changed.

#!/usr/bin/env bash

./node_modules/.bin/supervisor \
  --watch /path/to/components \
  --extensions js \
  --poll-interval 5000 \
  -- ./node_modules/react-render-server/server.js \
    --port 8991 \
    --address \
    --verbose \
    --transpile \
    --gettext-locale-path /srv/translations/core \
    --gettext-catalog-domain djangojs

This addition, unfortunately, resulted in a poor developer experience because it took several seconds for the server to restart. During that time, our Django backend was unable to reach RRS, and the development site would be unavailable.

With the switch, Webpack was already creating fully-transpiled bundles for the browser to consume, so we had it create Node bundles as well. Then, react-render-server no longer needed to transpile on-demand.

Around the same time, the helper react-render library we were using for server-rendering also provided a new --no-cache option which solved our source code caching problem. We no longer needed to restart RRS! It seemed like all of our problems were solved, but little did we know that it created one massive problem for us.

The Webpack stopping problem

In between the Webpack migration and the Single Application Mode (SAM) projects, more and more Britelings were having Webpack issues; their Webpack re-compiling would stop. We crossed our fingers and hoped that SAM would fix it. Our theory was that before SAM we were running 30+ entry points in Webpack. Therefore if we reduced that down to only one or two, we would reduce the “load” on Webpack dramatically.

Unfortunately, we were not able to kill two birds with one stone. SAM did accomplish its goals, including reducing memory usage, but it didn’t alleviate the Webpack stoppages. Instead of continuing to the next phase of our quest, we decided to take a detour to investigate and fix this Webpack stoppage issue once and for all. Any benefits we added in the next project would be lost due to the Webpack stoppages. Eventbrite developers are our users so we shouldn’t build new features before fixing major bugs.

The Webpack stoppage investigations

We had no idea what was causing the issue, so we tried many different approaches to discover the root problem. We were still running on Webpack 3 (v3.10.0 specifically), so why not see if Webpack 4 had some magic code to fix our problem? Unfortunately, Webpack 4 crashed and wouldn’t even compile. We chose not to investigate further in that direction because we were already dealing with one big problem. Our team will return to Webpack 4 later.

Sanity check

First, our DevTools team joined in on the investigations because they are responsible for maintaining our Docker infrastructure. We observed that when Webpack stopped re-compiling, we could still see the source file changes reflected within the Docker container. So we knew it wasn’t a Docker issue.

Reliably reproducing the problem

Next, we knew we needed a way to reproduce the Webpack stoppage quickly and more reliably. Because we observed that editor autosave was a way to cause the stoppage, we created a “rapid file saver” script. It updated dummy files by changing imported functions at random intervals between 200 and 300 milliseconds. This script would update the file before Webpack finished re-compiling, just like editor autosave, enabling us to reproduce the issue within 5 minutes. Running this script essentially became a stress test for Webpack and the rest of our system. We didn’t have a fix, but at least we could verify one when we found it!

var fs = require('fs');
var path = require('path');

const TEMP_FILE_PATH = path.resolve(__dirname, '../../src/playground/tempFile.js');

// Recommendation: Do not set lower than 200ms
// File changes that quickly will not allow webpack to finish compiling
const REWRITE_TIMEOUT_MIN = 200;
const REWRITE_TIMEOUT_MAX = 300;

const getRandomInRange = (min, max) => (Math.random() * (max - min) + min);
const getTimeout = () => getRandomInRange(REWRITE_TIMEOUT_MIN, REWRITE_TIMEOUT_MAX);

const FILE_VALUES = [
    {name: 'add', content: 'export default (a, b) => (a + b);'},
    {name: 'subtract', content: 'export default (a, b) => (a - b);'},
    {name: 'divide', content: 'export default (a, b) => (a / b);'},
    {name: 'multiply', content: 'export default (a, b) => (a * b);'},
];

let currentValue = 1;
const getValue = () => {
    const value = FILE_VALUES[currentValue];

    if (currentValue === FILE_VALUES.length - 1) {
        currentValue = 0;
    } else {
        currentValue++;
    }

    return value;
};

const writeToFile = () => {
    const {name, content} = getValue();

    console.log(`${new Date().toISOString()} -- WRITING (${name}) --`);
    fs.writeFileSync(TEMP_FILE_PATH, content);
    setTimeout(writeToFile, getTimeout());
};

writeToFile();

With the “rapid file saver” at our disposal and a stroke of serendipity, we noticed the Docker container’s memory steadily increasing while the files were rapidly changing. We thought that we had solved the Docker memory issues with the Single Application Mode project. However, this did give us a new theory: Webpack stopped re-compiling when the Docker container ran out of memory.

Webpack code spelunking

The next question we aimed to answer was why Webpack 3 wasn’t throwing any errors when it stopped re-compiling. It was just failing silently, leaving the developer to wonder why their app wasn’t updating. We began “code spelunking” into Webpack 3 to investigate further.

We found out that Webpack 3 uses chokidar through a helper library called watchpack (v1.4.0) to watch files. We added additional console.log debug statements to all of the event handlers within (transpiled) node_modules, and noticed that when chokidar stopped firing its change event handler, Webpack also stopped re-compiling. But why weren’t there any errors? It turns out that the underlying watcher didn’t pass along chokidar’s error events, so Webpack wasn’t able to log anything when chokidar stopped watching.

The latest version of Webpack 4 still uses watchpack, which still doesn’t pass along chokidar’s error events, so it’s likely that Webpack 4 would suffer from the same silent failure. Sounds like an opportunity for a pull request!


This whole process was an interesting discovery and a super fun exercise, but it still wasn’t the solution to the problem. What was causing the memory leak in the first place? Was Webpack even to blame or was it just a downstream consequence?


We began looking into our react-render-server and the --no-cache implementation within react-render, the dependency that renders the components server-side. We discovered that react-render uses decache for its --no-cache implementation, clearing the require cache on every request for our app bundles (and their node module dependencies). This successfully allowed new bundles at the same path to be required; however, decache did not allow the references to the bundles’ raw text to be garbage collected.

Whether or not the source code changed, each server-side rendering request resulted in more orphaned app bundle text in memory. With app bundle sizes in the megabytes, and our Docker containers already close to maxing out memory, it was very easy for the React Docker container to run out of memory completely.

We found the memory leak!


We needed a way to clear the cache, and also reliably clear out the memory. We considered trying to make decache more robust, but messing around with the require cache is hairy and unsupported.

So we returned to our original solution of running react-render-server (RRS) with supervisor, but this time being smarter about when we restart the server. We only need to take that step when the developer changes the source files and has already rendered the app. That’s when we need to clear the cache for the next render. We don’t need to keep restarting the server on source file changes if an app hasn’t been rendered, because nothing has been cached. That’s what caused the poor developer experience before: the server was unresponsive because it was always restarting.

Now, in the Docker container for RRS, when in “dynamic mode”, we only restart the server if a source file changes and the developer has a previous version of the app bundle cached (by rendering the component prior). This rule is a bit more sophisticated than what supervisor could handle on its own, so we had to roll our own logic around supervisor. Here’s some code:

// misc setup stuff (imports, paths like RESTART_FILE_PATH, and helpers
// like safeReadJSONFile are defined here; omitted for brevity)

const createRequestInfoFile = () => (
    writeFileSync(
        RRS_REQUEST_INFO_PATH,
        JSON.stringify({start: new Date()}),
    )
);

const touchRestartFile = () => writeFileSync(RESTART_FILE_PATH, new Date().toISOString());

const needsToRestartRRS = async () => {
    const rrsRequestInfo = await safeReadJSONFile(RRS_REQUEST_INFO_PATH);

    if (!rrsRequestInfo.lastRequest) {
        return false;
    }

    const timeDelta = Date.parse(rrsRequestInfo.lastRequest) - Date.parse(rrsRequestInfo.start);

    return Number.isNaN(timeDelta) || timeDelta > 0;
};

const watchSourceFiles = () => {
    let isReady = false;

    chokidar.watch(SOURCE_FILES_PATH)
        .on('ready', () => (isReady = true))
        .on('all', async () => {
            if (isReady && await needsToRestartRRS()) {
                touchRestartFile();
            }
        });
};

const isDynamicMode = shouldServeDynamic();
const supervisorArgs = [
    '--extensions', extname(RESTART_FILE_PATH).slice(1),
    ...(isDynamicMode ? ['--watch', RESTART_FILE_PATH] : ['--ignore', '.']),
];
const rrsArgs = [
    '--port', '8991',
    '--address', '',
    '--request-info-path', RRS_REQUEST_INFO_PATH,
];

if (isDynamicMode) {
    watchSourceFiles();
}

spawn(
    SUPERVISOR_PATH,
    [...supervisorArgs, '--', RRS_PATH, ...rrsArgs],
    {
        // make the spawned process run as if it's in the main process
        stdio: 'inherit',
        shell: true,
    },
);
In short, we:

  1. Create __request.json and initialize it with a start timestamp.
  2. Pass the __request.json file to RRS to update it with the lastRequest timestamp every time an app bundle is rendered.
  3. Use chokidar directly to watch the source files.
  4. Check to see if the lastRequest timestamp is after the start timestamp when the source files change and touch a __restart.watch file if that is the case. This means we have the app bundle cached because we’ve rendered an app bundle after the server was last restarted.
  5. Set up supervisor to only watch the __restart.watch file. That way, we restart the server only when all of our conditions are met.
  6. Recreate and reinitialize the __request.json file when the server restarts, and start the process again.

All of our server-side rendering happens through our Django backend, which is where we receive timeout errors when react-render-server is unreachable. So, in development only, we also added 5 retry attempts, separated by 250 milliseconds, for requests that fail because Django couldn’t connect to react-render-server.
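The retry itself lives in our Django code, but the pattern is generic; here’s a hedged sketch of the same idea in JavaScript (all names hypothetical):

```javascript
// Hypothetical sketch of the retry pattern: retry a flaky request
// up to `attempts` times, waiting `delayMs` between tries.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const withRetries = async (makeRequest, {attempts = 5, delayMs = 250} = {}) => {
    let lastError;

    for (let i = 0; i < attempts; i++) {
        try {
            return await makeRequest();
        } catch (err) {
            lastError = err;
            await delay(delayMs);
        }
    }

    throw lastError;
};

// Simulate a render server that fails twice (e.g. mid-restart), then recovers
let calls = 0;
const flakyRender = async () => {
    calls++;
    if (calls < 3) {
        throw new Error('connection refused');
    }
    return '<div>rendered</div>';
};

withRetries(flakyRender).then((html) =>
    console.log(`succeeded after ${calls} attempts: ${html}`));
```

The 250ms spacing gives a restarting server a little over a second to come back before the request is abandoned for real.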

The results are in

Because we had the “rapid file saver”, we could use it to verify all of the fixes. We ran it for hours, and Webpack kept humming along without a hiccup. We analyzed Docker’s memory over time as we reloaded pages and re-rendered apps, and saw that the memory remained constant, as expected. The memory issues were gone!

Even though we were once again restarting the server on file changes, the react-render-server connection issues were gone. There were some corner cases where the site would automatically refresh and not be able to connect, but those situations were few and far between.

Coming up next

Now that we’ve finished our detour to fix a major bug, we’ll return to the next milestone towards apps that can be developed and deployed independently.

The next step in our goal towards “micro-apps” is to give each application autonomy and control with its own package.json and dependencies. The benefit is that upgrading a dependency with a breaking change doesn’t require fixing all 30+ apps at once; now each app can move at its own pace.

We need to solve two main technical challenges with this new setup:

  • how to prevent each app from having to manage its infrastructure, and
  • what to do with the massive, unversioned, shared common/ folder that all apps use

We’re actively working on this project right now, so we’ll share how it turns out when we’re finished. In the meantime, we’d love to hear if you’ve had any similar challenges and how you tackled the problem. Let us know in the comments or ping me directly on Twitter at @benmvp.

Photo by Sasikan Ulevik on Unsplash

How to Make Swift Product Changes Using a Design System

Redesigning an entire site is a daunting challenge for a frontend team, and developers approach extensive visual changes with caution. You might have to go through hundreds of stylesheets, updating everything from hex values to custom spacing. Did you use the same names for colors in all your files? No typos? Do your colors have accessible contrast? What a nightmare!

At Eventbrite, our design system helps our developers make those sweeping changes all while saving time and money. Keep reading to see how a design system can help your team with consistency, accessibility, and lightning-fast redesigns.

The Key to Consistency

A design system is a library of components that developers across teams can use as building blocks for their projects. A shared library allows everyone to use components, or reusable chunks of styling and code, that look and work the same way. You don’t want ten similar but different copies of the same thing, do you? Take custom file uploader components, for example. If each team builds their custom version of the component, not only does it create a confusing user experience, but it also means that developers across teams have to maintain and test all of them. No, thank you!

As part of the Frontend Platform team here at Eventbrite, my team and I maintain the Eventbrite Design System (EDS). Because we wrote EDS in React, some of our apps use EDS while legacy apps that use other JS frameworks do not. As we move more of our products over to React, adoption of our design system is increasing. Our user experiences across all of our platforms look and feel more cohesive than ever before. Every EDS file uploader looks and behaves the same way (with minor variations).

Accessibility for All

When everyone uses the same component, you can build accessibility features in one place, and everyone inherits them for free. Furthermore, you or a dedicated team can now thoroughly test each component to ensure they work for users of all abilities and needs. The result? People who navigate your site using screen readers or keystrokes can now use your product!

We love taking advantage of this benefit here at Eventbrite. We ensure the colors in our design system components have the right contrast ratios, which means that all Eventbrite pages are usable by people with colorblindness. Our color documentation page uses Chroma.js to help calculate the contrast ratios for our text and color combinations. We also use WCAG AA as our contrast standard.
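Chroma.js exposes this calculation as chroma.contrast(colorA, colorB); for illustration, the underlying WCAG 2.0 math is compact enough to sketch directly (a simplified version, assuming 6-digit hex colors):

```javascript
// Simplified WCAG 2.0 contrast-ratio calculation (6-digit hex only)
const channel = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
};

// Relative luminance of a hex color, per the WCAG 2.0 formula
const luminance = (hex) => {
    const n = parseInt(hex.replace('#', ''), 16);
    const [r, g, b] = [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
};

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1 to 21
const contrast = (hexA, hexB) => {
    const [hi, lo] = [luminance(hexA), luminance(hexB)].sort((a, b) => b - a);
    return (hi + 0.05) / (lo + 0.05);
};

console.log(contrast('#ffffff', '#000000').toFixed(2)); // 21.00
console.log(contrast('#ffffff', '#767676') >= 4.5);     // AA for normal text
```

WCAG AA requires a ratio of at least 4.5:1 for normal text (3:1 for large text), which is the bar each text/background pairing in a palette has to clear.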

A sample of one of our colors on the Eventbrite Design System colors documentation page. It includes the color name, hex, RGB, and Luma values along with the WCAG score.

We also strive for our components and our pages to work well with keyboards and screen readers. EDS has a Keyboard higher-order component (HOC) where we use react-hotkeys to help us set up our React pages for optimal keyboard accessibility. Eventbrite works towards having all our components be accessible to all. Thanks to our design system, when Frontend Platform doubles down on accessibility, all teams that use EDS inherit the accessibility improvements by keeping up with our latest version.

Quick Turn-Arounds and Fast Redesign

Now, back to the redesign scenario. If you’ve defined all your colors and variables in one place, your team no longer has to hunt down definitions for each component. One developer can change a hex value (say, from #DB5A2C to #F05537), and every app that uses your design system inherits all changes right away.
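In practice, that “one place” is a shared tokens file. A minimal sketch (the file layout and token names here are illustrative, not EDS’s actual structure):

```javascript
// tokens.js -- single source of truth for the design system's palette.
// Change a value here and every consuming component picks it up.
const colors = {
    primaryBrand: '#F05537', // was '#DB5A2C' before the rebrand
    neutralText: '#333333',  // hypothetical value
};

// Components never hard-code hex values; they reference tokens:
const buttonStyle = {
    backgroundColor: colors.primaryBrand,
    color: '#ffffff',
};

console.log(buttonStyle.backgroundColor); // #F05537
```

A rebrand then becomes a one-line diff in the tokens file rather than a grep through hundreds of stylesheets.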

In spite of all our planning and prep work, every once in a while our team faces a tight deadline. In our latest redesign, we made sweeping typography and color changes. While it seemed like a massive task, EDS enabled us to make many of these changes very quickly. We spent most of our time and energy making these changes to our products that don’t yet use EDS and thus require specific updates and quality assurance. Check out the results of the transformation below!

Search Results Page Before the Rebrand

Eventbrite Search Results Page Before Redesign
Search Results Page After the Rebrand

Eventbrite Search Results Page After Redesign

Home Page Before Rebrand

Eventbrite Home Page Before Redesign

Home Page After the Rebrand

Eventbrite Home Page After Redesign

While adopting, implementing, and maintaining a new design system took serious work, the benefits have been well worth it. A design system might save your team a lot of time and work, too. However, it is not a magic bullet, and it takes time to get right. Don’t despair if yours doesn’t look as fleshed out as some of the more popular and well-staffed design systems, like Google’s Material UI or Airbnb’s Design Language System. Start saving time and money by having a shared library to increase consistency, increase the accessibility of your product, and make broad changes safe. Create a design system as unique as your product and start reaping the benefits.

What about you? Is your team using a design system? Is it a custom built one? Drop us some lines in the comments below or ping me directly on Twitter @mbeguiluz.

The Quest for React Micro-Apps: Single App Mode

Eventbrite’s React applications are a single React app with many entry points. To improve the development experience for both backend and frontend engineers, we implemented a single application mode (codenamed SAM) in our local environments. Whenever the React Docker container boots, it downloads and statically serves a set of pre-built assets for all of the React applications so that Webpack compilation never has to run.

Using a settings file, developers can indicate that they would like to run only their app in an active development mode. Having this feature was another significant milestone towards the quest for micro-apps. Backend engineers no longer have to wait for Webpack to set up to compile and recompile files that they will never change, and frontend developers only need to run Webpack for their app.

The post you are reading is the second in a series entitled The Quest for Micro-Apps detailing how our Frontend Platform team is decoupling our React apps from themselves and our Django monolith application. We are going to do it by creating Micro-Apps so that we can develop and deploy independently. If you haven’t already, check out the Introduction that provided background and overall goals for the project.

A little background

Our React apps are universal apps: they render both client-side in the browser and server-side in Node. Also, as mentioned in the introduction, we have just one single React application with an entry point for every app, which is how we get the different bundles to use for the different apps.

We use Docker for our development environment, which runs many, many containers to spin up a local version of all of eventbrite.com. One of these containers is our React container, which contains all of the React apps. When the container starts, it spawns two Webpack processes that watch for source code changes. The first writes Node bundles to disk, which server-side render requests consume. The second is a webpack-dev-server process, which creates in-memory bundles and reloads the page once new changes are compiled.

The growth problem

This setup worked fine when we initially created this infrastructure over a year ago, and we had less than a dozen apps; the processes ran quickly and development felt very responsive. However, a year later, the number of apps had nearly tripled, and the development environment was starting to feel sluggish, not only for the frontend developers who are living in React-land but also for the backend developers who never touch our React stack.

Our backend engineers developing APIs, working on the monolith, or merely browsing the site locally were spawning those same two Webpack watchers even though they weren’t making any JavaScript changes. Our backend devs were also waiting for the Webpack processes to perform their initial compilation at container start, which wasted a good amount of time. The container was also eating up a lot of memory watching for file changes that would never happen. Backend devs didn’t need Webpack running at all, just for the local site to work.

It was not just the backend devs who were hurting. Because all of the React apps were just a single app with many entry points, we were recompiling the entire app every time a change happened. When a dev made a change to their app, Webpack had to follow all of the other 29 entry points to see if their Node and webpack-dev-server bundles needed to be recreated as well. Why should they have to wait when they only cared about changes to their app? Webpack is smart about knowing what has changed, but it was still doing a whole lot of unnecessary work. Furthermore, at the container start, we were still waiting for the initial Webpack compilation to build all of the other apps, in addition to the one we were working on.

Static apps to the rescue

Our proposed solution was to enable a “static mode” in our development environment. By default, everyone would load the same bundled assets that are used in our continuous integration (CI) server. In this case, we wouldn’t need webpack-dev-server running; we could use a simple static Express server for serving assets. This new approach would greatly benefit our backend engineers who weren’t writing React code.

A developer would have to opt-in to run their app(s) in “dynamic mode.” However, the Webpack processes would only watch specific app(s), significantly reducing the amount of work they would need to do. This approach would greatly benefit our frontend engineers who were working on only an app or two at a time.

Single Application Mode (codenamed SAM) also fit into our long-term strategy of micro-apps. We still want developers to be able to browse the entire site in their local development environment even when all of the React applications are independently developed and deployable. Enabling this mode means that most or all of the local site has to be able to run in “static mode,” similar to a quality assurance (QA) environment. So this milestone not only allows us to break up this mega project but also increases developer productivity while we journey towards the end goal.

How we made it happen

As mentioned in the introduction, this entire endeavor is about replacing the existing infrastructure while it’s still running. Our goal is zero downtime due to bugs or rollbacks. This means that we have to move in smaller phases than if we were just building it greenfield. Phase 1 of this project introduced the concept of “static mode,” but it was disabled by default and it was all-or-nothing; you couldn’t single out specific apps. Once we tested and verified everything was working, we enabled “static mode” by default in Phase 2. After that was successful in the wild, we added “single-application mode” (SAM) in Phase 3.

Phase 0: CI setup

Before anything began, we needed to augment our current CI setup in Jenkins. To run in “static mode,” we decided to use the production assets built for our CI server in our development environment. This way, developers could easily replicate the information in our QA environment within their development environments.

When the code is merged to master, a Jenkins job builds the production assets and uploads a tarball (a package of files compressed with gzip) to the cloud with the build id in its name. Every hour, the latest tarball is downloaded and unpacked on a specific QA machine to create our CI environment.

That tarball is massive because it includes every bit of CSS and JavaScript for the entire site. It takes many minutes to download and unpack the tarball, so we couldn’t use it to seed our development environment. Instead, we created a new tarball of just our React bundles for quicker downloading and unpacking.

Phase 1: All dynamic by default

Then we began building the actual system. It relies on a git-ignored settings.json file that has a configuration for how the system should work:

    "apps": null,
    "buildIdOverride": "",
    "__lastSuccessfulQABuildTime": "2018-06-22T21:31:49.361Z",
    "__lastSuccessfulQABuildId": "12345-master-cfda2b6"

Every time the React container starts, it reads the settings.json file, whose apps property indicates static versus dynamic mode. If the settings.json file doesn’t exist, it gets auto-created with null as the value for the apps property. One or more app names in the apps array means dynamic mode, an empty array means static mode, and null means use the default.
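That mode decision can be sketched as a small pure function (the helper name is ours, not the container's actual code):

```javascript
// Sketch of the mode decision described above:
// null -> use the default, [] -> static, non-empty array -> dynamic.
function resolveMode(apps, defaultMode) {
    if (apps === null) {
        return defaultMode;
    }

    return apps.length > 0 ? 'dynamic' : 'static';
}
```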

If the settings file indicates static mode, we retrieve the latest QA tarball stored in the cloud and unpack it locally where the Webpack compiled bundles would have been. We choose the latest build on QA instead of the HEAD of master so that what’s running locally will match what’s currently running on QA. The __lastSuccessfulQABuildTime and __lastSuccessfulQABuildId properties are logging information written out in static mode to help with later debugging.

Now, instead of running webpack-dev-server, we just run a static Express server to serve all of the static bundle assets. Because our server-side React renderer is already reading bundles written to disk by the second Webpack process, it doesn’t have to change at all because now those bundles just happen to come from the tarball.

Here’s the gist of the Docker start script:

(async () => {
    // create settings.json file w/ default settings if it doesn't exist yet

    // fetch prebuilt bundles from cloud, use `--no-fetch` to bypass
    if (!process.argv.includes('--no-fetch')) {
        try {
            await spawnProcess('yarn fetch:static');
        } catch(e) {
            // ignore fetch failures and fall back to previously downloaded bundles
        }
    }

    if (shouldServeDynamic()) {
        // run webpack in normal development mode
        spawnProcess('yarn dev');
    } else {
        // run static server to serve prebuilt bundles
        spawnProcess('yarn serve:static');
    }
})();
A developer can also select a specific tarball with the buildIdOverride property instead of using the most recent QA tarball. It’s a rarely used feature, but it comes in handy when you need to test a release candidate (RC) build (or any other build) locally.

The key with this phase was minimal disruption. To start things off, we defaulted to dynamic mode, the existing way things worked. If any app was listed (i.e. apps was non-empty), we would still run all the apps in dynamic mode, using Webpack to compile the changes.

When this was released, everything worked the same as before. Most folks didn’t even realize that the settings.json file was being created. We found some key stakeholders to explicitly enable static mode and worked out the kinks for about a week before moving on to Phase 2.

Phase 2: All static by default

After we felt confident that the static mode system worked, we wanted to make static mode the default, the huge win for the backend engineers. First we announced it in our weekly Frontend Guild meeting and asked all the frontend developers to start explicitly listing the names of their app(s) in the apps property within the settings.json file. This way when we flipped the switch from dynamic-by-default to static-by-default, their environment would continue to run in dynamic mode.

    "apps": ["playground"],
    "buildIdOverride": "",
    "__lastSuccessfulQABuildTime": "2018-06-22T21:31:49.361Z",
    "__lastSuccessfulQABuildId": "eventbrite-25763-master_16.04-c1d32bb"

It was at this point that we wished we had a feature flag or rollout system for our development infrastructure, like the feature flag system we have for the site where we can slowly roll out features to end users. It would’ve been nice to be able to turn on static-by-default to a small percentage of devs and slowly ramp up to 100%. That way we could handle bugs before they affected all developers.

Without such a system, we had to make the code change that enabled static mode as the default and just hope that we had adequately tested it! Now any developer who hadn’t specified an app name (or names) in their settings.json would get static mode the next time their React container restarted. We ran into a few edge case problems, but nothing major. After about a week or two, we resolved them all and moved on to Phase 3.

Phase 3: Single-application mode (SAM)

Single-application mode (codenamed SAM) was the actual feature we wanted. Instead of having to choose between all-dynamic or all-static, we started reading the apps property to determine which apps to run in dynamic mode while leaving the rest in static mode.

Before in all-dynamic mode, we determined the entry points by finding all of the subfolders within the src folder that had an index.js entry point. Now with single-application mode, we just read the apps property in settings.json to determine the entry points. All other apps are run in static mode.

/**
 * Returns an object with appName as key and appPath as string value,
 * to be consumed by the webpack entry key.
 */
const getEntries = () => {
    const appNames = getSettings().apps || [];
    const appPaths = appNames
        .map((appName) => path.resolve(__dirname, appName, 'index.js'))
        .filter((filePath) => fs.existsSync(filePath));

    if (_.isEmpty(appPaths)) {
        throw new Error('There are no legitimate apps to compile in your entries file. Please check your settings.json file');
    }

    const entries = appPaths
        .reduce((entryHash, appPath) => {
            const appName = path.basename(path.dirname(appPath));

            return {
                ...entryHash,
                [appName]: appPath,
            };
        }, {});

    return entries;
};
Before single-application mode, we ran a simple Express server for all-static and webpack-dev-server for all-dynamic. With SAM we have a mixture of both modes. However, we cannot run both servers on a single port. So we decided to only use webpack-dev-server and add middleware that would determine whether or not the incoming request was for an app running in dynamic or static mode. If it’s a static mode request, we just stream the file from the file system; if it’s a dynamic request we route to the appropriate webpack-dev-server using http-proxy-middleware.

const appNames = getSettings().apps || [];

// Object of app names and their corresponding ports to be run on
const portMap = appNames.reduce((portMap, appName, index) => ({
    ...portMap,
    [appName]: STARTING_PORT + index,
}), {});

// Object of proxy servers, used to route incoming traffic to the appropriate client dev server
const proxyMap = appNames.reduce((proxyMap, appName) => ({
    ...proxyMap,
    [appName]: proxyMiddleware({
        target: `${SERVER_HOST}:${portMap[appName]}`,
    }),
}), {});

// call each workspace's `yarn start` command to kick off their respective webpack processes
appNames.forEach((appName) => {
    spawnProcess(`yarn workspace ${appName} start ${portMap[appName]}`);
});

const app = express();

// Setup proxy for every appName in settings. All devMode content requests will be
// forwarded through these proxies to their corresponding webpack-dev-servers
app.use((req, res, next) => {
    const appName = path.parse(req.originalUrl).name.split('.')[0];

    if (proxyMap[appName]) {
        return proxyMap[appName](req, res, {});
    }

    return next();
});

// by default serve static bundles
app.use(ASSET_PATH, express.static(BUNDLES_PATH));

// start the static server
app.listen(PORT);


Issues are likely to arise with any significant change, and the change for developers to only run their app in dynamic mode was huge. Here are a couple of issues we encountered that you can hopefully avoid.

The Common Chunk

Because all of our different apps were just entry points in one big monolith app, we were able to leverage Webpack’s CommonsChunkPlugin to create a shared bundle that contains the common dependencies between all of the apps. That way, when our users moved between apps after visiting the first one, they would only have to download app-specific code. Even though this is a production optimization, we built the common chunk in our development environment with webpack-dev-server as well.

Unfortunately, the common chunk broke when multiple apps were specified. Although it’s called SAM (single-application mode), the system supports specifying multiple applications that developers would like to run in dynamic mode simultaneously. While we tested that multiple apps worked in SAM, we did the majority of our testing with just one application, which is the common use case.

We include this common chunk in the tarball that gets downloaded, unpacked, and read in static mode. However, when running two apps in dynamic mode, the local common chunk would only consist of the commonalities between the two apps, not all 30+. So using the statically built common chunk caused errors in those apps running in dynamic mode.

Our initial fix was to update the webpack-dev-server middleware to also handle requests for the common chunk. However, this swung the pendulum in the opposite direction. It fixed the common chunk problem for multiple dynamic apps, but now all of the static apps were no longer using the statically built common chunk. They were using the locally built dynamic common chunk. So now all the static apps were broken.

In the end, since the common chunk is a production optimization, we elected to get rid of it in dynamic dev mode. So now no matter how many apps a developer specifies in the apps property of the settings.json, they won’t get a common chunk. However, we still need to keep the common chunk for the static mode apps for now, since the QA environment builds the apps where the common chunk still exists.

“Which mode am I in?”

Another issue we ran into wasn’t a bug, but a consequence of introducing static mode: developers didn’t know which mode they were in. Some backend developers weren’t even aware there was a static mode to begin with; they would try to make changes to an app and wonder why their changes weren’t being reflected. The problem was exacerbated when we introduced SAM in Phase 3 because one app would update while another would not. The Frontend Platform team found itself troubleshooting a lot of issues that were ultimately rooted in the fact that the engineer didn’t know which mode they were in.

The solution was to add an overlay message to the base HTML template that all the apps share. It reads the settings.json file and determines which mode the currently displayed app is in, including the app name. If the app is in static mode, the overlay also mentions how long it has been since its last refresh.

If the app is in dynamic mode, it says “webpack dev mode.”

It turned out that mentioning the app name was also crucial because if a dev needed to work on a page that wasn’t their own, they wouldn’t always know which app needed updating.
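A minimal sketch of how such an overlay message could be derived from settings.json (the message strings and helper name are ours, not Eventbrite's exact implementation):

```javascript
// Derive the overlay text for a given app. Apps listed in `apps` run in
// dynamic mode; everything else is static and reports time since refresh.
function overlayMessage(settings, appName, now = Date.now()) {
    const apps = settings.apps || [];

    if (apps.includes(appName)) {
        return `${appName}: webpack dev mode`;
    }

    const lastBuild = Date.parse(settings.__lastSuccessfulQABuildTime);
    const minutesAgo = Math.round((now - lastBuild) / 60000);

    return `${appName}: static mode, refreshed ${minutesAgo} minutes ago`;
}
```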

The results are in

Our hypotheses about the benefits of the project panned out. We started hearing fewer and fewer issues from our backend engineers about the React container failing to boot. Less troubleshooting meant more time for development. Unfortunately, we don’t collect any metrics on individual engineers’ development environments, so we have no hard numbers on how much faster the container now boots or how much memory usage decreased.

The biggest win for the frontend engineers was the reduction in Webpack recompile time when making changes to files. Previously Webpack traversed through all of the entry points, and now it only has to look at one (or however many the developer indicates in settings.json). The rebuild time was 2x or 3x faster, and we received lots of positive feedback.

So even though the SAM project was just a milestone in the overall endeavor to enable Micro-Apps, we were able to deliver lots of value to teams in the interim.

Coming up next

Late last year we started hearing some mysterious, but sparse reports from one or two frontend engineers that at some point Webpack would stop rebuilding when they were making changes. Over time as the engineering team added more apps and more Docker containers, the problem grew to affect almost all frontend engineers. It was even happening to us on the Frontend Platform Team.

We suspected it to be a memory issue, but we weren’t sure of the source. We crossed our fingers hoping that the SAM project would fix the issue, but we were still able to trigger the problem even when running only a single app. Things were still on fire, and we realized that we couldn’t move forward with the quest for Micro-Apps until we resolved the instability issues. Any new features wouldn’t have the desired impact if the overall system was still unstable.

In the third post in the series, I will cover this topic in detail. In the meantime, have you ever managed a similar system? Did you face similar challenges? Different challenges? Let us know in the comments or ping me directly on Twitter at @benmvp.

The Quest for React Micro-Apps: The Beginning

Eventbrite’s site started as a typical mid-2000s monolithic server-rendered application. Although we recently moved to a React stack, we have experienced inflexibility, tight coupling, and scaling issues.

The Frontend Platform team wants to give developer teams autonomy, flexibility, and most importantly ownership of their apps so that they can move at the pace they need to provide value to our users. We have a vision: we want to get to a world where each React application can be both developed and deployed individually. In short, we want micro-apps. In this blog post series, we relate our quest for this vision, so keep on reading!

It’s been a long journey

Eventbrite built its website in the mid-2000s before the concept of a JAMstack (sites built solely on JavaScript, APIs, and Markup) was ever a thing. As a result, the site was a typical monolith application where the backend code (Python) rendered the frontend (HTML) to generate a website. In modern web architecture, we now create an entirely separate API/services layer so that there can be other data consumers, such as mobile apps or external developers.

Later on the frontend, we sprinkled in some jQuery for light client-side interactions. Once we needed more sophisticated experiences, we started using Backbone (and then Marionette). Then in early 2016, the Frontend Platform team added a React-based stack, with the hope of deprecating the legacy jQuery and Backbone apps over time.

Eventbrite isn’t one SPA (single-page application), but a collection of many applications. Sometimes an application is as big as a whole section of the site, like Event Creation/Management or Search & Browse, and other times it’s just a single admin page. In all cases, however, they are universal React apps rendered both server- and client-side.

If you’re interested in how we accomplished server-side rendering with our Django backend, take a look at a talk I gave last year on it.

Not always sunny

Although we’re moving more server-side logic into microservices accessible via the Eventbrite APIv3, our React apps are still tied to the core monolith in many unfortunate ways:

React Server-side rendering

We render server-side through our Django monolith (watch the video for more details), so the Django layer makes calls to the microservices directly to retrieve initial data. These calls are mimicked in JavaScript for subsequent client-side data retrieval.

Django HTML templates

The HTML templates used to hydrate the React apps initially are in Django-land, so all the data and environment information (locale and other context) have to come from the monolith.

Same repository

Because of the reasons above, to create a React application, you also need to create some Django scaffolding, including routing. As a result, the React apps live in the same repo as the core monolith so that developers wouldn’t have to try to keep two separate-yet-not-separate repositories in sync.

Shared package.json

Our React apps themselves aren’t truly separate. They are technically multiple entry points within a single React monolith that have a single package.json and shared bundling, transpilation, and linting configurations. If one team wants to change a dependency for their app, they need to ensure it doesn’t break the 29 others.

Cross-app dependencies

Because all of the apps come together under one single app, we can import components and utilities across applications. We’ve tried to actively discourage this, but it still happens. Instead, we’ve advised teams to put shared dependencies in the (unversioned) “common” folder.

Constant vigilance

The Frontend Platform team currently oversees the dependencies that all the apps use. We need to ensure development teams don’t accidentally back us into a corner with a library choice that prevents us from moving the platform forward in the future. We also need to make sure that those apps not actively being developed do not break with dependency changes.

Unscalable architecture

If the number of our development teams doubled, everything would probably grind to a halt. Eventbrite already has development teams in three continents across four time zones, so the status quo won’t scale.

We have a vision

We need to give teams autonomy, flexibility, and most importantly ownership of their apps so that they can move at the pace they need to provide value to our users.

We have a vision: we want to get to a world where each React application can be both developed and deployed individually; we want micro-apps. For development, devs wouldn’t need the rest of the site running. They could just build their app on their local machine talking to APIs running on our QA environment. Moreover, for deployment, the entire site wouldn’t need to be deployed to deliver new code to our users for a specific app. However, while the apps are independent, they must still feel cohesive and consistent with the rest of eventbrite.com for our end users.

Micro-apps aren’t a novel idea in the industry, but we believe that it will be immensely transformational for us.

Our quest

The thing is, the Frontend Platform team can’t just disappear for 6+ months and come back with a shiny new environment. It is too risky. It’s uncertain because the project is so massive. Moreover, it’s dangerous because it’s all or nothing. If at five months the company’s priorities change and we need to work on something more important, we would have five months of sunk cost.

So the plan is to rebuild the entire plane while it’s cruising at 36,000 feet. We’ll work on this project iteratively, breaking it down into smaller goals so that we can provide value frequently. It’d be like flying from SFO to JFK and midway through getting more legroom, free Wi-Fi, or lie-flat seats. We never want to be too far from a place where we can pause the project to work on something of greater importance. If all you got during the flight was the legroom and Wi-Fi, that would be better than having to wait for another flight to get all three.

You may have noticed that I haven’t been speaking in the past tense but in the present. That’s because we’re not done! We want to share our learnings as we go; not just the technology, but also the logistics and processes behind it. We want to share what worked, what didn’t, and what challenges we faced in hopes that you will be able to learn from what we’ve accomplished in real time.

We’re applying the same iterative approach to this series, so I’m not quite sure how many posts there will be. The team has a rough breakdown of the milestones that we want to hit and the value they provide. However, there may not be a one-to-one mapping between milestones and articles.

In any event, let’s kick things off with Part 1: Single App Mode.

Creating Flexible and Reusable React File Uploaders

The Event Creation team at Eventbrite needed a React-based image uploader that would provide flexibility while presenting a straightforward user interface. The image uploader components ought to work in a variety of scenarios as reusable parts that could be composed differently as needs arose. Read on to see how we solved this problem.

What’s a File Uploader?

In the past, if you wanted to get a file from your web users, you had to use a “file” input type. This approach was limited in many ways, most prominently in that it’s an input: data is only transmitted when you submit the form, so users don’t have an opportunity to see feedback before or during upload.

With that in mind, the React uploaders we will be talking about aren’t form inputs; they’re “immediate transport tools.” The user chooses a file, the uploader transports it to a remote server, then receives a response with some unique identifier. That identifier is then immediately associated with a database record or put into a hidden form field.

This new strategy provides tremendous flexibility over traditional upload processes. For example, decoupling file transportation from form submissions enables us to upload directly to third-party storage (like Amazon S3) without sending the files through our servers.
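The transport-then-identifier flow can be sketched independently of any particular backend (the names here are hypothetical; the transport function is injected, so it could POST to S3 or to a local API):

```javascript
// An "immediate transport tool": given a transport that uploads a file and
// resolves with a server response, return just the unique identifier that
// a database record or hidden form field will reference.
const createUploader = (transport) => (file) =>
    transport(file).then((response) => response.id);
```

The caller never sees the transport details; it only receives the identifier once the upload completes.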

The tradeoff for this flexibility is complexity; drag-and-drop file uploaders are complex beasts. We also needed our React uploader to be straightforward and usable. Finding a path to provide both flexibility and ease-of-use was no easy task.

Identifying Responsibilities

Establishing the responsibilities of an uploader seems easy… it uploads, right? Well sure, but there are a lot of other things involved to make that happen:

  • It must have a drop zone that changes based on user interaction. If the user drags a file over it, it should indicate this change in state.
  • What if our users can’t drag files? Maybe they have accessibility considerations or maybe they’re trying to upload from their phone. Either way, our uploader must show a file chooser when the user clicks or taps.
  • It must validate the chosen file to ensure it’s an acceptable type and size.
  • Once a file is picked, it should show a preview of that file while it uploads.
  • It should give meaningful feedback to the user as it’s uploading, like a progress bar or a loading graphic that communicates that something is happening.
  • Also, what if it fails? It must show a meaningful error to the user so they know to try again (or give up).
  • Oh, and it actually has to upload the file.

These responsibilities are just a short list, but you get the idea: things can get complicated very quickly. Moreover, while uploading images is our primary use case, there could be a variety of needs for file uploading. If you’ve gone to all the trouble to figure out the drag / drop / highlight / validate / transport / success / failure behavior, why write it all again when you suddenly need to upload CSVs for that one report?
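The validation responsibility from the list above can be isolated into a tiny pure function (the accepted types and size limit are made-up values, not our product's actual rules):

```javascript
// Validate a chosen file's MIME type and size before transport.
const ACCEPTED_TYPES = ['image/png', 'image/jpeg', 'image/gif'];
const MAX_BYTES = 10 * 1024 * 1024; // 10 MB, illustrative limit

const validateFile = (file) => {
    if (!ACCEPTED_TYPES.includes(file.type)) {
        return {valid: false, reason: 'type'};
    }
    if (file.size > MAX_BYTES) {
        return {valid: false, reason: 'size'};
    }

    return {valid: true};
};
```

Keeping validation out of the drag/drop machinery means the CSV uploader for that one report only swaps in a different list of accepted types.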

So, how can we structure our React Image Uploader to get maximum flexibility and reusability?

Separation of Concerns

In the following diagram you can see an overview of our intended approach. Don’t worry if it seems complicated; we’ll dig into each of these components below to see details about their purpose and role.

React Architecture illustration showing overview of several components that will be enumerated below.

Our React-based component library shouldn’t have to know how our APIs work. Keeping this logic separate has the added benefit of reusability: different products aren’t locked into a single API or even a single style of API. Instead, they can reuse as much or as little as they need.

Even within presentational components, there’s opportunity to separate function from presentation. So we took our list of responsibilities and created a stack of components, ranging from most general at the bottom to most specific at the top.

Foundational Components

Recap of the architecture illustration from above with "Foundational Components" highlighted.


UploaderDropzone

This component is the heart of the uploader UI, where the action begins. It handles drag/drop events as well as click-to-browse. It has no state itself, only knowing how to normalize and react (see what I did there?) to certain user actions. It accepts callbacks as props so it can tell its implementer when things happen.

Illustration showing a clear pane representing UploaderDropzone over a simple layout in React

It listens for files to be dragged over it, then invokes a callback. Similarly when a file is chosen, either through drag/drop or click-to-browse, it invokes another callback with the JS file object.

Hidden inside is one of those inputs of type “file” that I mentioned earlier, for users who can’t (or prefer not to) drag files. This functionality is important, and by abstracting it here, components that use the dropzone don’t have to think about how the file was chosen.

The following is an example of UploaderDropzone using React:

    <UploaderDropzone>
        Drag a file here!
        <Icon type="upload" />
    </UploaderDropzone>

UploaderDropzone has very little opinion about how it looks, and so has only minimal styling. For example, some browsers treat drag events differently when they occur on deep descendants of the target node. To address this problem the dropzone uses a single transparent div to cover all its descendants. This provides the needed experience for users that drag/drop, but also maintains accessibility for screen readers and other assistive technologies.


UploaderLayoutManager

The UploaderLayoutManager component handles most state transitions and knows which layout should be displayed for each step of the process, while accepting other React components as props for each step.
Illustration showing a block representing the UploaderLayoutManager and several smaller blocks representing layouts being flipped/shuffled

This allows implementers to think about each step as a separate visual idea without worrying about how and when each transition happens. Maintainers of this React component only have to think about which layout should be visible at a given time based on state, not how files are populated or how the layout should look.

Here is a list of steps that can be provided to the LayoutManager as props:

  • Steps managed by LayoutManager:
    • Unpopulated – an empty dropzone with a call-to-action (“Upload a great image!”)
    • File dragged over window but not over dropzone (“Drop that file here!”)
    • File dragged over dropzone (“Drop it now!”)
    • File upload in progress (“Hang on, I’m sending it…”)
  • Step managed by component that implements LayoutManager:
    • File has uploaded and is populated. For our image uploader, this is a preview of the image with a “Remove” button.

The LayoutManager itself has little or no styling and only displays visuals that have been passed as props. It’s responsible for maintaining which step in the process the user has reached and displaying some content for that step.

The only layout step that’s externally managed is “Preview” (whether the Uploader has an image populated). This is because the implementing component needs to define the state in which the uploader starts. For example, if the user has previously uploaded an image, we want to show that image when they return to the page.

Example of LayoutManager use:

    <UploaderLayoutManager
        dropzoneElement={<DropzoneLayout />}
        windowDragDropzoneElement={<WindowDragDropzoneLayout />}
        dragDropzoneElement={<DragDropzoneLayout />}
        loadingElement={<LoadingLayout />}
        previewElement={<PreviewLayout file={file} />}
    />



Resource-Specific Components

Recap of the React architecture illustration from above with "General Components" highlighted.


ImageUploader

The ImageUploader component is geared almost entirely toward presentation: defining the look and feel of each step and passing them as props into an UploaderLayoutManager. This is also a great place to do validation (file type, file size, etc.).

Maintainers of this tool can focus almost entirely on the visual look of the uploader. This component contains very little logic since state transitions are handled by the UploaderLayoutManager. We can change the visuals fluidly with very little concern about damaging the function of the uploader.

Example ImageUploader:

const DropzoneLayout = () => (
    <p>Drag a file here or click to browse</p>
);
const DragDropzoneLayout = () => (
    <p>Drop file now!</p>
);
const LoadingLayout = () => (
    <p>Please wait, loading...</p>
);
const PreviewLayout = ({file, onRemove}) => (
    <div>
        <p>Name: {file.name}</p>
        <Button onClick={onRemove}>Remove file</Button>
    </div>
);

class ImageUploader extends React.Component {
    state = {file: undefined};

    _handleRemove = () => this.setState({file: undefined});

    _handleReceiveFile = (file) => {
        this.setState({file});

        return new Promise((resolve, reject) => {
            // upload the file!
        })
            .catch(() => this.setState({file: undefined}));
    }

    render() {
        let {file} = this.state;
        let preview;

        if (file) {
            preview = (
                <PreviewLayout file={file} onRemove={this._handleRemove} />
            );
        }

        return (
            <UploaderLayoutManager
                dropzoneElement={<DropzoneLayout />}
                dragDropzoneElement={<DragDropzoneLayout />}
                loadingElement={<LoadingLayout />}
                previewElement={preview}
                onFileReceived={this._handleReceiveFile}
            />
        );
    }
}
Application-Specific Layer

Recap of the React architecture illustration from above with "Application-specific layer" highlighted.

The example above has one prominent aspect that isn’t about presentation: the file transport that happens in _handleReceiveFile. We want this ImageUploader component to live in our component library and be decoupled from API-specific behavior, so we need to remove that logic. Thankfully, it’s as simple as accepting a function via props that returns a promise which resolves when the upload is complete.

_handleReceiveFile(file) {
    // could do file validation here before transport. If the file fails
    // validation, return a rejected promise.
    let {uploadImage} = this.props;

    this.setState({file});

    return uploadImage(file)
        .catch(() => this.setState({file: undefined}));
}

With this small change, this same image uploader can be used for a variety of applications. One part of your application can upload images directly to a third party (like Amazon S3), while another can upload to a local server for a totally different purpose and handling, but using the same visual presentation.

And now because all that complexity is compartmentalized into each component, the ImageUploader has a very clean implementation:

<ImageUploader uploadImage={S3ImageUploadApi} />

With this foundation, applications can use this same ImageUploader in a variety of ways. We’ve provided the flexibility we want while keeping the API clean and simple. New wrappers can be built upon UploaderLayoutManager to handle other file types or new layouts.

In Closing

Imagine image uploaders purpose-built for each scenario, each containing only a few simple components made of presentational markup. They can each use the same upload functionality where it makes sense, but with a totally different presentation. Or flip that idea around: the same uploader visuals, but with totally different API interfaces.

In what other ways would you use these foundational components? What other uploaders would you build? The sky’s the limit if you take the time to build reusable components.

Doctor Python: Or How I Learned to Stop Worrying and Love ES6

Screen capture of Major Kong riding on top of a bomb falling from a plane in the film, Doctor Stangelove.

Have you learned ES6 yet? Oof. When people started asking me that, I’d feel a sense of pressure that I was missing out on something. What was this “ECMA” people kept talking about? I was worried.

But Python helped me learn ES6. Weird, right? Turns out a lot of ES6 syntax overlaps with that of Python, which I learned at Eventbrite. Much of the syntax is shared between the two languages, so they kind of go hand in hand. Kinda.

Without further ado, let’s talk about these two buddies.


Block Scope

When I first started learning JavaScript (back in “ancient” ES5 days), I assumed several things created scope. I thought that conditionals created scope and was quickly told that I was wrong.

“NO. Only functions create scope in JavaScript!”

So when I found out that with ES6, we now have block scope, I was like, “WAT”.

A massive inflatable rubber ducky floating in front of a pier and building.

With the addition of const and let, ES6 now has block scope! Wow! I felt like I’d predicted the future.

function simpleExample(value) {
  if (value) {
    var varValue = value;
    let letValue = value;
    console.log(varValue, letValue); // value value
  }

  // varValue is available even though it was defined
  // in if-block because it was "hoisted" to function scope
  console.log(varValue); // value

  // letValue is a ReferenceError because
  // it was defined within the if-block
  console.log(letValue); // Uncaught ReferenceError: letValue is not defined
}

What else creates scope in JavaScript, ES6, and Python? And what kind of scope do they use? Check out the following table:

                 JavaScript                                                 Python
Scope            Lexical                                                    Lexical
Namespace        Functions, Classes [ES6!], Modules [ES6!], Blocks [ES6!]   Functions, Classes, Modules
New Identifiers  Variables, Functions                                       Variables, Functions, Classes

Template Literals

I like to think of template literals as Mad Libs. Did you have them as a child? Sentences were missing words, and you could write anything you wanted into those spaces. You only had to conform to the specified word type: noun, pronoun, verb, adjective, exclamation.

Mad Libs that read "mothers sit around burmping. Last summer, my little brother fell in a/an hairdo and got poison palmtree all over his butt. My family is going to Winsconsin, and I will.."

Similarly, template literals are string literals that allow embedded expressions. They were originally called “template strings” in prior editions of the ES2015 specification.

Yup, these already exist in Python. I had actually learned about literal string interpolation in Python, which made it that much easier for me to understand in ES6. They are great because you no longer need the ridiculous concatenation found in older versions of JavaScript.

let exclamation = 'Whoa!';
let sentence = `They are really similar to Python.`;

console.log(`Template Literals: ${exclamation} ${sentence}`);
// Template Literals: Whoa! They are really similar to Python.

print '.format(): {} {}'.format('Yup.', 'Quite!')
# .format(): Yup. Quite!
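For contrast, here is the same output written with the pre-ES6 string concatenation mentioned above, which is exactly what template literals let you avoid:

```javascript
// The ES5 way: concatenation with + operators and quote juggling
var exclamation = 'Whoa!';
var sentence = 'They are really similar to Python.';

console.log('Template Literals: ' + exclamation + ' ' + sentence);
// Template Literals: Whoa! They are really similar to Python.
```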


Default Parameters

Yup. Python’s got ‘em too. Default parameters set a default for function parameters. This is most effective for avoiding bugs that pop up with missing arguments.

function nom(food="ice cream") {
  console.log(`Time to eat ${food}`);
}

nom(); // Time to eat ice cream
def nom(food="ice cream"):
  print 'Time to eat {}'.format(food)

nom() # Time to eat ice cream
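To see the bug the default guards against, here is the same function without one: in JavaScript a missing argument silently becomes undefined rather than raising an error.

```javascript
// Without a default parameter, calling with no argument
// quietly interpolates undefined into the string.
function nomNoDefault(food) {
  console.log(`Time to eat ${food}`);
}

nomNoDefault(); // Time to eat undefined
```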

Rest Parameters & *args

Rest parameter syntax allows us to represent an indefinite number of arguments as an array. In Python, they’re called *args, which again, I’d already learned! Are you sensing a pattern here?

Check out how each of the languages bundles parameters up in neat little packages:

function joke(question, ...phrases) {
  console.log(question);
  for (let i = 0; i < phrases.length; i++) {
    console.log(phrases[i]);
  }
}

let es6Joke = "Why does JS single out one parameter?"
joke(es6Joke, "Because it doesn't", 'really like', 'all the REST of them!');

// Why does JS single out one parameter?
// Because it doesn't
// really like
// all the REST of them!
def pirate_joke(question, *args):
  print question
  for arg in args:
    print arg

python_joke = "What's a Pyrate's favorite parameter?"

pirate_joke(python_joke, "*args!", "*arrgs!", "*arrrgs!")

# What's a Pyrate's favorite parameter?
# *args!
# *arrgs!
# *arrrgs!



Classes

Oh boy, we’re gonna talk about prototypal inheritance now! ES6 classes are actually syntactic sugar based on the prototype chain found in ES5 and previous iterations of JavaScript. So, what we can do with ES6 classes is not much different from what we do with ES5 prototypes.

Python has classes built in, allowing for quick and easy Object Oriented Programming (Python is down with OOP.). I always found the prototype chain extremely confusing in JavaScript, but looking at Python and ES6 classes side by side really hit home for me.

Let’s take a look at these ES6 “classes” based on the prototype chain:

class Mammal {
  constructor() {
    this.neocortex = true;
  }
}

class Cat extends Mammal {
  constructor(name, years) {
    super(); // required before using `this` in a derived class
    this.name = name;
    this.years = years;
  }

  eat(food) {
    console.log('nom ' + food);
  }
}

let fryCat = new Cat('Fry', 7);
class Mammal(object):
  neo_cortex = True

class Cat(Mammal):
  def __init__(self, name, years):
    self.name = name
    self.years = years

  def eat(self, food):
    print 'nom %s' % (food)

fry_cat = Cat('Fry', 7)

A big difference between ES6 Classes and ES5 Prototypes: you can inherit more easily with classes than with the prototype chain. This is very similar to Python’s structure. Neato!
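To make that difference concrete, here is a sketch of the same Cat/Mammal relationship wired up the ES5 way, with the prototype chain managed by hand instead of a single extends keyword:

```javascript
// The parent "class" is just a constructor function
function Mammal() {
  this.neocortex = true;
}

function Cat(name, years) {
  Mammal.call(this); // manually invoke the parent constructor
  this.name = name;
  this.years = years;
}

// Wire up the prototype chain by hand — the part `extends` does for you
Cat.prototype = Object.create(Mammal.prototype);
Cat.prototype.constructor = Cat;

Cat.prototype.eat = function (food) {
  console.log('nom ' + food);
};

var fryCat = new Cat('Fry', 7);
console.log(fryCat.neocortex); // true
```

Every line after the constructors is boilerplate that `class Cat extends Mammal` handles automatically.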

So there you have it. Five quick examples of Doctor Python helping me stop worrying and love ES6. It’s been many months now, and my ES6 usage is pretty explosive.

Screen capture of Major Kong riding on top of a bomb falling from a plane in the film, Doctor Strangelove.

6 Unsuspecting Problems in HTML, CSS, and JavaScript – Part 2

Welcome to part 2 of our series covering six unsuspecting problems and scenarios that one may come across in HTML, CSS, and JavaScript. In part 1 we talked about the Block Formatting Context and Margin Collapsing. In this post, we will be covering DOM reflow and event delegation, and how they affect the performance of your application.

DOM Reflow

DOM reflow is the drawing or redrawing of the layout of all or part of the DOM. It is an expensive process, but unfortunately an easy one to trigger, and it can cause noticeable performance degradation in a web app that requires a lot of user interactivity, e.g. drag & drop or WYSIWYG editors. If you are developing a highly interactive web app, you will most likely trigger a DOM reflow at some point.
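As a sketch of how easily reflow is triggered (assuming `els` is an array of DOM elements; the function names are illustrative), interleaving layout reads and style writes forces the browser to recompute layout on every iteration, while batching all the reads first avoids the repeated work:

```javascript
// Bad: each offsetHeight read after the previous style write forces
// a synchronous reflow on every iteration ("layout thrashing").
function growAllInterleaved(els) {
  els.forEach((el) => {
    el.style.height = (el.offsetHeight + 10) + 'px';
  });
}

// Better: do all the reads first, then all the writes, so the
// browser only has to recalculate layout once.
function growAllBatched(els) {
  const heights = els.map((el) => el.offsetHeight);
  els.forEach((el, i) => {
    el.style.height = (heights[i] + 10) + 'px';
  });
}
```

Both functions produce the same final styles; only the read/write ordering, and therefore the number of forced reflows, differs.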

Eventbrite’s switch to ReactJS is one answer to the poorly performing native DOM. ReactJS maintains its own virtual DOM, which it uses to determine which parts of the web page need to be re-rendered.

Continue reading “6 Unsuspecting Problems in HTML, CSS, and JavaScript – Part 2”