BriteBytes: Diego “Kartones” Muñoz

An Eventbrite original series, BriteBytes features interviews with Eventbrite’s growing global engineering team, shining a light on the people who build the technology that powers live experiences.

One of my favorite things about Eventbrite is getting to work with engineers from all over the world. In September, I had the pleasure of sitting down with Diego “Kartones” Muñoz, a Principal Engineer visiting Eventbrite’s headquarters in San Francisco from our Spain office. He joined Eventbrite through our Ticketea acquisition in May and works out of Madrid with the Ticketing and Registration Business Unit (TRBU) Mapache team. In this interview, he tells us about his path, what it’s like onboarding into a larger company, and what he likes most about working at Eventbrite.

Tamara Chu: How did you come to work for Ticketea/Eventbrite? What was your path as a software engineer?

Diego “Kartones” Muñoz: I started early in development and computers, so before entering university I already knew a bit and wasn’t sure if I wanted to study it or not. I started studying, then I quit after a few years because I thought it was boring [laughs]. I started working, and I felt I was learning way more by working. Since then I’ve switched a lot: I started consulting with .NET, then switched to PHP and more open-source stacks, then I switched to Ruby, and since 2015, Python, which I’m in love with.

In 2009, I switched from consulting for other companies to product development, and since then I have been in multiple different areas: social networks, web gaming portals, mapping tools, video generation tools, and now ticketing.

T: How long had you been at Ticketea before Eventbrite?

D: I joined in March 2017, so one year. In total it’s now been a year and a half between Ticketea and Eventbrite.

T: And did you like the culture of Ticketea compared to the other companies you’ve worked at?

D: Yes, that was probably the deciding factor. A friendlier company, not willing to jump on the startup unicorn hype but preferring to focus on a single product; not so worried about growing a lot, but about keeping the product stable when adding new features. Also, while Ticketea had taken investment, it was a small amount, and the company was profitable, so it was nice that we weren’t in such a hurry to always be generating lots of new users or lots of new revenue, just growing steadily, at a slower pace than other startups.

It’s not that that’s bad in itself, but other places I’ve been were just growing, growing, growing, and they didn’t care about quality as much.

T: Mm, like growth for growth’s sake, no matter what happens to the team or what kind of culture you’re building.

D: Yes, exactly, or when things are failing often because the platform is not stable enough.

T: Has the transition to Eventbrite felt natural? Or what was that shift like?

D: I think for us it has been quite natural, also because our stack at Ticketea was more or less similar; we already used most of the tech stack. [The shift] has been learning a new platform, adjusting to mostly everything in English, and the time difference.

T: Yeah, [the time difference] is a big one the teams are still figuring out. Was there anything about Eventbrite that surprised you when you joined?

D: The size and the scale of some things, like the size of some big events that [Eventbrite] has might be more than the total of what Ticketea sells in one year. And some parts of the technology, you can actually look at it and see that it has years of experience put into there, and [years of] thought evolving those parts. That’s something I appreciate a lot, spending time improving and making things better.

T: Was there something that excited you, like “oh cool, this is something new that I can look into?” Something specific?

D: Yes, for example, the way the APIs work — the internals of how to build and expand them and how they communicate between themselves — it was a problem that I’ve seen in the past but never solved as cleanly as here. I’m not an expert on API development, but here I think we have a good and elegant solution.

T: How were you doing it at Ticketea versus here?

D: Regarding API design, for example, ours were less advanced, built in the classical way of “load data, fetch all related entities, and return everything.” It was more manual work, without the EB API magic. We also didn’t have the same scale as Eventbrite, so performance usually wasn’t a problem; things would go slower, but they would still work. Ticketea also had just two technical teams, so it’s been a big jump to being part of a company with hundreds of engineers.

T: Was there anything from Ticketea that you wish had come over to Eventbrite?

D: The automated deployment, the quicker release cycles. As we didn’t have Ops, we were mostly developers, each of us part DevOps, handling our own infrastructure. That’s also why we were switching from AWS to GCP [Google Cloud Platform]: it removes an additional layer of complexity, so we could self-deploy without systems or release engineers. We had automatic deploys, canary releases, and simple traffic splitting, automated with a slider and one button. Here, with so many people and so many services, those things aren’t as quick.

T: What has been your favorite thing about working at Eventbrite?

D: Probably being able to work on such a big project. When you build something here, it’s not something that three or four people are going to use; it’s something that millions of people are going to use. Beyond that, I don’t know yet, because it has just been a few months [laughs].

T: [laughs] I’ll ask you again in another 6 months.

D: Yeah, let’s do that!

T: How about your least favorite thing?

D: Adapting, maybe, to the way of releasing things. We have lots of services with complex interactions, so you have to be careful and take additional steps to deploy services. Every change takes extra effort to update and release, etcetera, which I wasn’t used to due to our smaller scale and mostly automated platform.

T: Do you see opportunities to change that?

D: I think yes. I don’t know what the future holds for our team, but of course I feel there are opportunities to improve the way things are done. There’s PySOA (Eventbrite’s Python library for writing microservices and their clients), there are tools in place to migrate services, and there will probably be more alignment between product and tech: is this important, are there more pressing issues, or can we take advantage of doing something with a service to also separate it?

T: What are you most excited about?

D: All the things that I can learn from the platform. I am just grasping the tip of the iceberg of how everything works: the backend parts, learning React, how the tools we use work internally, DevOps, the infrastructure that we have, and the general learning opportunity of the architecture and the platform.

Diego has been an active part of Spain’s tech scene for many years, and it’s fantastic having him on the team. Learn more about him at https://kartones.net/. A big thank you to Diego for sharing his background and experience. We’re looking forward to hearing more from him and the rest of the team in the future, so stay tuned for more BriteBytes!

Rethinking quality and the engineers who protect it

Testing software is an important responsibility, but testing is not a synonym for quality. At Eventbrite, we are trying to dig deeper into quality and what it means to be a QA Engineer. This article is not just for QA engineers; it is for anyone who wants to better understand how to deliver higher-quality products and make better use of QA resources. If you don’t have QA resources, by the end of this article you will have a better idea of what to ask for when you look to add a QA Engineer to your team.

Rethinking the role

When I sat down to write an updated job description for our QA Engineering position, I started my research by looking at job listings from similar companies. Most of the listings agreed on one thing: QA Engineers test. The specifics varied, but the postings always included a range of automated and manual testing tasks.

While these testing tasks are worth doing, testing software doesn’t ensure that the output is a high-quality product. In practice, effective QA extends well beyond testing. QA Engineers should ensure teams develop products that work and address a targeted customer need.

The iron triangle

Being a strong advocate for quality requires understanding what could cause quality to suffer. I’d like to start by introducing the concept of “The Iron Triangle.” The triangle is a visualization sometimes used to describe the constraints of a project, but it also works as a model for the challenges of maintaining quality.

The idea here is that we constrain the quality of a project by its scope, deadline, and budget (among other factors). Changes to one of these constraints then require balancing adjustments to the others, or quality suffers.

External quality

The team can’t control all of these constraints, but it is critical that they monitor them. These constraints directly impact the quality of work. This sort of quality is external because it is quality as understood by the customer.

Some scenarios

  • A project has a broad scope. The timeline for the project is likely full of feature work, with limited time left for testing tasks. Intervention can mean working to carve out time to write and perform tests, advocating for a reduction in scope, or developing a testing approach that is lean without sacrificing too much coverage.
  • A project has a tight budget. This type of project is likely to have even less time to spend on quality. In these cases, my preference is to establish clear goals and expectations with stakeholders for quality in the planning step. This process enables the team to pack their limited QA time with more precise and targeted testing tasks without misrepresenting how hardened our code may be when we finish the work.
  • A project has an open timeline. This is less common but has its own challenges to quality. When we give plenty of time to projects, they naturally move more slowly. In these situations, it is essential to test along the way, because the closing days of this project can be hectic. I like to limit final testing before release as much as possible with incremental tasks and plenty of automated testing. That way, I can protect the development team from last-minute changes, complexity, and most major bugs.

External quality is linked directly to the success of the business and is everyone’s responsibility. All arms of the business are responsible for maintaining external quality and delivering functional products.

Beyond bugs

I loosely consider an issue a bug any time the software produces an incorrect or unexpected result or behaves in unintended ways. Bugs are going to happen, and minimizing their occurrence is why we test software. However, external quality only extends as far as our understanding of how the product will be used. You cannot write a test to cover a use case you don’t understand or know about.

If something works as expected but fails to meet the user’s need, it is still a quality issue; it is just not a bug. The QA team should bring knowledge of the product and the user to the entire development process. If QA is involved in both the planning and testing phases of development, they can help with more than just finding bugs: they can help ensure developers more thoroughly understand how users employ the products they are building.

Internal quality

That said, there is also an internal, procedural component to quality. Writing code and building products in a way that minimizes technical debt and mitigates risk maintains internal quality. Being good at managing external quality does not make an organization good at managing internal quality.

A new scenario

  • The development team is wrapping up a project and is ready to execute their test plan. Through testing, they uncover some bugs and edge cases that they didn’t think of when writing requirements for the project. To fix these issues, they need to add complexity to the code, increasing its cyclomatic complexity. This could reduce internal quality and has downstream effects on external quality too. This issue could have been curtailed by involving QA in the writing of product requirements, or by being more deliberate when considering edge cases and architecting the feature.

Balancing external and internal quality

Good external quality is not an indication of good internal quality. Since QA Engineers drive external quality, they need to be cognizant of increased complexity as an output of testing. Testing uncovers more than bugs; it also uncovers where the product we are building may be failing to meet user needs. Addressing these gaps is critical to quality but can have a significant impact on timeline, budget, and scope. The compromises we make are likely to produce technical debt.

Technical debt

Technical debt should be a conscious compromise. The development team can give up some internal quality to make the project work within other constraints. Future work to pay off that technical debt often competes for the same development time as work done to fix a bug, and both issues concern overall quality. This can be a confusing number of plates to keep spinning at once. We should neglect neither type of quality work for the other, and understanding their relation to one another is crucial to preserving high overall quality.

One final scenario

  • The business asks for a feature with very narrow scope, a small budget, and a tight deadline. The feature will require new development work on an old, neglected part of the codebase. The development team is worried about losing time to cleaning up technical debt around their integration points and bringing the old code in line with new standards and practices. Testing time for the new feature work is already tight, and the business wants the development team to prioritize keeping the existing feature set healthy. The team needs to make certain compromises to meet their target release date. One of those compromises is balancing investment in internal quality against the external quality of this new feature and the old code.

Protecting quality

While it is critical to be understanding and compromise during development, QA Engineers should remain biased toward quality. The organization has managers charged with protecting budget, scope, and deadlines – but quality should have an advocate too. QA Engineers should spend time encouraging and coaching development teams on bugs and testing tasks, but the real goal should be to encourage those teams to take ownership of quality.

When the user need and the gravity of testing are well communicated and well understood, developers write higher-quality code. Developers who understand their users write better tests that leverage user stories rather than the developer’s expectation of what their code does. Beyond testing functionality, they are making sure that what they have developed aligns with how the product addresses the targeted need.

Engaged developers make the best testers

To be clear, I am advocating that developers do their own testing and own their quality. Outsourcing your testing to automation engineers or manual testers is an option, but it comes with drawbacks. Developers bring vital skills for driving quality into the product at speed. Engineers are also uniquely positioned to solve problems with their code, and developers who write their own tests are more invested in fixing them when they fail.

The QA team can and should assist with this process. They can help developers deliver higher-quality products by making sure the project is testable upfront, and by making sure the approach to testing is thorough and considerate of other constraints on development. Beyond just saying that “quality should be high,” the team should set expectations for quality within the context of other constraints. These expectations serve two purposes. First, they help with estimation: if you fail to consider QA tasks during estimation, then you have not made time for quality. Second, they bind quality to the development process, fostering ownership within the team. Teams that take ownership of their work are more invested in delivering higher-quality products.

The new job description

QA Engineers should protect overall quality. They should work with teams to find the right balance of testing for each unique project. To do this, a good QA Engineer understands quality in the context of other constraints to development and is willing to compromise, but will never allow the business to concede quality. When a business delivers low-quality products, it fails.

New Job Listing for QA Engineer

What strategies do your teams use to assure quality? How do you leverage your QA team beyond testing? Tell us about it in the comments and drop me a line on Twitter @aqualityhuman.

The Quest for React Micro-Apps: Single App Mode

Eventbrite’s React applications are a single React app with many entry points. To improve the development experience for both backend and frontend engineers, we implemented a single application mode (codenamed SAM) in our local environments. Whenever the React Docker container boots, it downloads and statically serves a set of pre-built assets for all of the React applications so that Webpack compilation never has to run.

Using a settings file, developers can indicate that they would like to run only their app in active development mode. This feature was another significant milestone in the quest for micro-apps. Backend engineers no longer have to wait for Webpack to compile and recompile files they will never change, and frontend developers only need to run Webpack for their own app.

The post you are reading is the second in a series entitled The Quest for Micro-Apps, detailing how our Frontend Platform team is decoupling our React apps from one another and from our Django monolith application. We are going to do it by creating Micro-Apps so that we can develop and deploy independently. If you haven’t already, check out the Introduction, which provides background and the overall goals for the project.

A little background

Our React apps are universal apps: they render both client-side in the browser and server-side in Node. Also, as mentioned in the introduction, we have just one single React application with an entry point for every app, which is how we get the different bundles to use for the different apps.

We use Docker for our development environment, which runs many, many containers to spin up a local version of all of eventbrite.com. One of these containers is our React container, which holds all of the React apps. When the container starts, it spawns two Webpack processes that watch for source code changes. The first writes Node bundles to disk, which our server-side render requests consume. The second is a webpack-dev-server process, which creates in-memory bundles and reloads the page once new changes are compiled.
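To make this setup concrete, here is a rough sketch of what spawning the two watchers might look like (the config file names are illustrative, not our actual ones):

(() => {
    const { spawn } = require('child_process');

    // First process: watches source files and writes the Node (server-side render)
    // bundles to disk
    spawn('yarn', ['webpack', '--watch', '--config', 'webpack.node.config.js'], { stdio: 'inherit' });

    // Second process: webpack-dev-server keeps the client bundles in memory and
    // reloads the page once new changes are compiled
    spawn('yarn', ['webpack-dev-server', '--config', 'webpack.client.config.js'], { stdio: 'inherit' });
})();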

The growth problem

This setup worked fine when we initially created the infrastructure over a year ago and had fewer than a dozen apps; the processes ran quickly, and development felt very responsive. However, a year later, the number of apps had nearly tripled, and the development environment was starting to feel sluggish, not only for the frontend developers living in React-land but also for the backend developers who never touch our React stack.

Our backend engineers developing APIs, working on the monolith, or merely browsing the site locally were spawning those same two Webpack watchers even though they weren’t making any JavaScript changes. Our backend devs were also waiting for the Webpack processes to perform their initial compilation at container start, which wasted a good amount of time. The container was also eating up a lot of memory watching for file changes that would never happen. Backend devs didn’t need Webpack running at all, just for the local site to work.

It was not just the backend devs who were hurting. Because all of the React apps were just a single app with many entry points, we were recompiling the entire app every time a change happened. When a dev made a change to their app, Webpack had to follow all of the other 29 entry points to see if their Node and webpack-dev-server bundles needed to be recreated as well. Why should they have to wait when they only care about changes to their own app? Webpack is smart about knowing what has changed, but it was still doing a whole lot of unnecessary work. Furthermore, at container start, we were still waiting for the initial Webpack compilation to build all of the other apps in addition to the one we were working on.

Static apps to the rescue

Our proposed solution was to enable a “static mode” in our development environment. By default, everyone would load the same bundled assets that are used in our continuous integration (CI) server. In this case, we wouldn’t need webpack-dev-server running; we could use a simple static Express server for serving assets. This new approach would greatly benefit our backend engineers who weren’t writing React code.

A developer would have to opt-in to run their app(s) in “dynamic mode.” However, the Webpack processes would only watch specific app(s), significantly reducing the amount of work they would need to do. This approach would greatly benefit our frontend engineers who were working on only an app or two at a time.

Single Application Mode (codenamed SAM) also fit into our long-term strategy of micro-apps. We still want developers to be able to browse the entire site in their local development environment even when all of the React applications are independently developed and deployable. Enabling this mode means that most or all of the local site has to be able to run in “static mode,” similar to a quality assurance (QA) environment. So this milestone not only allows us to break up this mega project but also increases developer productivity while we journey towards the end goal.

How we made it happen

As mentioned in the introduction, this entire endeavor is about replacing the existing infrastructure while it’s still running. Our goal is zero downtime due to bugs or rollbacks. This means that we have to move in smaller phases than if we were just building it greenfield. Phase 1 of this project introduced the concept of “static mode,” but it was disabled by default and it was all-or-nothing; you couldn’t single out specific apps. Once we tested and verified everything was working, we enabled “static mode” by default in Phase 2. After that was successful in the wild, we added “single-application mode” (SAM) in Phase 3.

Phase 0: CI setup

Before anything began, we needed to augment our current CI setup in Jenkins. To run in “static mode,” we decided to use the production assets built for our CI server in our development environment. This way, developers could easily replicate the information in our QA environment within their development environments.

When the code is merged to master, a Jenkins job builds the production assets and uploads a tarball (a package of files compressed with gzip) to the cloud with the build id in its name. Every hour, the latest tarball is downloaded and unpacked on a specific QA machine to create our CI environment.

That tarball is massive because it includes every bit of CSS and JavaScript for the entire site. It takes many minutes to download and unpack the tarball, so we couldn’t use it to seed our development environment. Instead, we created a new tarball of just our React bundles for quicker downloading and unpacking.

Phase 1: All dynamic by default

Then we began building the actual system. It relies on a git-ignored settings.json file that has a configuration for how the system should work:

{
    "apps": null,
    "buildIdOverride": "",
    "__lastSuccessfulQABuildTime": "2018-06-22T21:31:49.361Z",
    "__lastSuccessfulQABuildId": "12345-master-cfda2b6"
}

Every time the React container starts, it reads the settings.json file and the apps property, which indicates static versus dynamic mode. If the settings.json file doesn’t exist, it gets auto-created with null as the value for the apps property. One or more app names within the apps array means dynamic mode, an empty array means static mode, and null means use the default.
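As a sketch, the mode check consumed by the start script below could look something like this (the helper names match the gist that follows, but the body is an illustration, not our exact implementation):

const fs = require('fs');

// reads and parses the developer's settings.json (path is illustrative)
const getSettings = () => JSON.parse(fs.readFileSync('./settings.json', 'utf8'));

// Phase 1 defaulted to dynamic mode; Phase 2 later flipped this default to static
const DEFAULT_TO_DYNAMIC = true;

const shouldServeDynamic = () => {
    const { apps } = getSettings();

    // null means "use the default"
    if (apps === null) {
        return DEFAULT_TO_DYNAMIC;
    }

    // one or more app names means dynamic mode; an empty array means static mode
    return apps.length > 0;
};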

If the settings file indicates static mode, we retrieve the latest QA tarball stored in the cloud and unpack it locally where the Webpack compiled bundles would have been. We choose the latest build on QA instead of the HEAD of master so that what’s running locally will match what’s currently running on QA. The __lastSuccessfulQABuildTime and __lastSuccessfulQABuildId properties are logging information written out in static mode to help with later debugging.

Now, instead of running webpack-dev-server, we just run a static Express server to serve all of the static bundle assets. Because our server-side React renderer was already reading bundles written to disk, it doesn’t have to change at all; those bundles now just happen to come from the tarball.
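That static server can be tiny; a minimal sketch with Express might look like this (the paths are illustrative):

const express = require('express');

const app = express();

// serve the unpacked tarball from the same path webpack would have written to
app.use('/react/dist', express.static('/srv/react/dist'));

app.listen(3030);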

Here’s the gist of the Docker start script (its helper functions and constants are defined elsewhere in the script):

(async () => {
    // create settings.json file w/ default settings if it doesn't exist yet
    await ensureJSONFileExists(SETTINGS_PATH, DEFAULT_SETTINGS);

    // fetch prebuilt bundles from cloud, use `--no-fetch` to bypass
    if (!process.argv.includes('--no-fetch')) {
        try {
            await spawnProcess('yarn fetch:static');
        } catch(e) {
            console.log(e.message);
            process.exit(e.statusCode);
        }
    }

    if (shouldServeDynamic()) {
        // run webpack in normal development mode
        spawnProcess('yarn dev');
    } else {
        // run static server to serve prebuilt bundles
        spawnProcess('yarn serve:static');
    }
})();

A developer can also select a specific tarball with the buildIdOverride property instead of using the most recent QA tarball. This is a rarely used feature, but it comes in handy when we need to test a release candidate (RC) build (or any other build) locally.
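As an illustration, the fetch step (yarn fetch:static in the gist above) might be implemented along these lines (the bucket URL and paths are stand-ins, not our real infrastructure):

const { execSync } = require('child_process');
const fs = require('fs');

// where webpack would normally write the bundles (illustrative path)
const BUNDLES_PATH = '/srv/react/dist';

// same settings.json reader sketched earlier
const getSettings = () => JSON.parse(fs.readFileSync('./settings.json', 'utf8'));

const fetchStaticBundles = () => {
    const { buildIdOverride } = getSettings();

    // prefer an explicit override; otherwise ask the bucket for the latest QA build id
    const buildId =
        buildIdOverride ||
        execSync('curl -sf https://builds.example.com/latest-build-id').toString().trim();
    const tarball = `react-bundles-${buildId}.tgz`;

    // download the tarball and unpack it where the bundles are served from
    execSync(`curl -sfo /tmp/${tarball} https://builds.example.com/${tarball}`);
    execSync(`tar -xzf /tmp/${tarball} -C ${BUNDLES_PATH}`);
};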

The key with this phase was minimal disruption. To start things off, we defaulted to dynamic mode, the existing way things worked. If any app was listed (i.e., apps was non-empty), we would run all the apps in dynamic mode, using Webpack to compile the changes.

When this was released, everything worked the same as before. Most folks didn’t even realize that the settings.json file was being created. We had some key stakeholders explicitly enable static mode and worked out the kinks for about a week before moving on to Phase 2.

Phase 2: All static by default

After we felt confident that the static mode system worked, we wanted to make static mode the default: the huge win for the backend engineers. First, we announced it in our weekly Frontend Guild meeting and asked all the frontend developers to start explicitly listing the names of their app(s) in the apps property within the settings.json file. This way, when we flipped the switch from dynamic-by-default to static-by-default, their environments would continue to run in dynamic mode.

{
    "apps": ["playground"],
    "buildIdOverride": "",
    "__lastSuccessfulQABuildTime": "2018-06-22T21:31:49.361Z",
    "__lastSuccessfulQABuildId": "eventbrite-25763-master_16.04-c1d32bb"
}

It was at this point that we wished we had a feature flag or rollout system for our development infrastructure, like the one we have for the site, where we can slowly roll out features to end users. It would’ve been nice to turn on static-by-default for a small percentage of devs and slowly ramp up to 100%. That way we could have handled bugs before they affected all developers.

Without such a system, we had to make the code change that enabled static mode as the default and just hope that we had adequately tested it! Now any developer who hadn’t specified an app name (or names) in their settings.json would get static mode the next time their React container restarted. We ran into a few edge case problems, but nothing major. After about a week or two, we resolved them all and moved on to Phase 3.

Phase 3: Single-application mode (SAM)

Single-application mode (codenamed SAM) was the actual feature we wanted. Instead of having to choose between all-dynamic or all-static, we started reading the apps property to determine which apps to run in dynamic mode while leaving the rest in static mode.

Before in all-dynamic mode, we determined the entry points by finding all of the subfolders within the src folder that had an index.js entry point. Now with single-application mode, we just read the apps property in settings.json to determine the entry points. All other apps are run in static mode.

const fs = require('fs');
const path = require('path');
const _ = require('lodash');

// local helper that reads and parses the developer's settings.json
const { getSettings } = require('./settings');

/**
 * Returns an object with appName as key and appPath as string value,
 * to be consumed by the webpack `entry` configuration
 */
const getEntries = () => {
    const appNames = getSettings().apps || [];

    // resolve each app name to its entry point, skipping apps that don't exist
    const appPaths = appNames
        .map((appName) => path.resolve(__dirname, appName, 'index.js'))
        .filter((filePath) => fs.existsSync(filePath));

    if (_.isEmpty(appPaths)) {
        throw new Error('There are no legitimate apps to compile in your entries file. Please check your settings.json file');
    }

    // build the {appName: appPath} hash that webpack expects for its entries
    return appPaths.reduce((entryHash, appPath) => {
        const appName = path.basename(path.dirname(appPath));

        return {
            ...entryHash,
            [appName]: appPath,
        };
    }, {});
};

Before single-application mode, we ran a simple Express server for all-static and webpack-dev-server for all-dynamic. With SAM, we have a mixture of both modes, but we cannot run both servers on a single port. So we decided to put everything behind a single server and add middleware that determines whether the incoming request is for an app running in dynamic or static mode. If it’s a static-mode request, we just stream the file from the file system; if it’s a dynamic request, we route it to the appropriate webpack-dev-server using http-proxy-middleware.

const express = require('express');
const path = require('path');
const proxyMiddleware = require('http-proxy-middleware');

// local helpers and constants (ports, hosts, bundle paths) defined elsewhere
const { getSettings } = require('./settings');
const { spawnProcess } = require('./utils');
const { STARTING_PORT, SERVER_HOST, SERVER_PORT, ASSET_PATH, BUNDLES_PATH } = require('./constants');

const appNames = getSettings().apps || [];

// Object of app names and their corresponding ports to be run on
const portMap = appNames.reduce((portMap, appName, index) => ({
    ...portMap,
    [appName]: STARTING_PORT + index,
}), {});

// Object of proxy servers, used to route incoming traffic to the appropriate client dev server
const proxyMap = appNames.reduce((proxyMap, appName) => ({
    ...proxyMap,
    [appName]: proxyMiddleware({
        target: `${SERVER_HOST}:${portMap[appName]}`,
    }),
}), {});

// call each workspace's `yarn start` command to kick off their respective webpack processes
appNames.forEach((appName) => {
    spawnProcess(`yarn workspace ${appName} start ${portMap[appName]}`);
});

const app = express();

// Set up a proxy for every appName in settings. All dev-mode content requests will be
// forwarded through these proxies to their corresponding webpack-dev-servers
app.use((req, res, next) => {
    const appName = path.parse(req.originalUrl).name.split('.')[0];

    if (proxyMap[appName]) {
        return proxyMap[appName](req, res, {});
    }

    next();
});

// by default serve static bundles
app.use(ASSET_PATH, express.static(BUNDLES_PATH));

// start the static server
app.listen(SERVER_PORT, SERVER_HOST);

Gotchas

Issues are likely to arise with any significant change, and the change for developers to only run their app in dynamic mode was huge. Here are a couple of issues we encountered that you can hopefully avoid.

The Common Chunk

Because all of our different apps were just entry points in one big monolith app, we were able to leverage Webpack’s CommonsChunkPlugin to create a shared bundle containing the common dependencies between all of the apps. That way, when our users moved between apps, after visiting the first app they would only have to download app-specific code. Even though this is a production optimization, we built the common chunk in our development environment with webpack-dev-server as well.
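For reference, a shared chunk of that era was configured roughly like this (a webpack 2/3-style sketch; the option values are illustrative):

const webpack = require('webpack');

module.exports = {
    entry: {
        // one entry point per app, e.g. checkout: './src/checkout/index.js'
    },
    plugins: [
        new webpack.optimize.CommonsChunkPlugin({
            name: 'common', // emit shared modules into a common bundle
            minChunks: 2,   // a module is "common" once two or more entries use it
        }),
    ],
};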

Unfortunately, the common chunk broke when multiple apps were specified. Although it’s called SAM (single-application mode), the system supports specifying multiple applications that developers would like to run in dynamic mode simultaneously. While we tested that multiple apps worked in SAM, we did the majority of our testing with just one application, which is the common use case.

We include this common chunk in the tarball that gets downloaded, unpacked, and read in static mode. However, when running two apps in dynamic mode, the local common chunk would only consist of the commonalities between the two apps, not all 30+. So using the statically built common chunk caused errors in those apps running in dynamic mode.

Our initial fix was to update the webpack-dev-server middleware to also handle requests for the common chunk. However, this swung the pendulum in the opposite direction. It fixed the common chunk problem for multiple dynamic apps, but now all of the static apps were no longer using the statically built common chunk. They were using the locally built dynamic common chunk. So now all the static apps were broken.

In the end, since the common chunk is a production optimization, we elected to get rid of it in dynamic dev mode. So now no matter how many apps a developer specifies in the apps property of the settings.json, they won’t get a common chunk. However, we still need to keep the common chunk for the static mode apps for now, since the QA environment builds the apps where the common chunk still exists.

“Which mode am I in?”

Another issue we ran into wasn’t a bug, but a consequence of introducing static mode: developers didn’t know which mode they were in. Some backend developers weren’t even aware there was a static mode to begin with; they would try to make changes to an app and wonder why their changes weren’t being reflected. The problem was exacerbated when we introduced SAM in Phase 3 because one app would update while another would not. The Frontend Platform team found ourselves troubleshooting a lot of issues that ultimately were rooted in the fact that the engineer didn’t know which mode they were in.

The solution was to add an overlay message to the base HTML template that all the apps share. It reads the settings.json file and determines which mode the currently displayed app is in, including the app name. If the app is in static mode, the overlay mentions how long it has been since its last refresh; if the app is in dynamic mode, it says “webpack dev mode.”

It turned out that mentioning the app name was also crucial because if a dev needed to work on a page that wasn’t their own, they wouldn’t always know which app needed updating.
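As a sketch, the overlay’s message can be derived from the same settings.json data (the function and message format here are illustrative):

// settings is the parsed settings.json; appName is the app rendering the page
const getModeMessage = (appName, settings) => {
    const dynamicApps = settings.apps || [];

    if (dynamicApps.includes(appName)) {
        return `${appName}: webpack dev mode`;
    }

    // static mode: report how stale the prebuilt bundles are
    const hoursOld = Math.round(
        (Date.now() - new Date(settings.__lastSuccessfulQABuildTime)) / 36e5
    );
    return `${appName}: static mode, bundles refreshed ${hoursOld}h ago`;
};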

The results are in

Our hypotheses about the benefits of the project panned out. We started hearing fewer and fewer issues from our backend engineers about the React container failing to boot. Less troubleshooting meant more time for development. Unfortunately, we don’t collect metrics on individual engineers’ development environments, so we don’t have hard numbers on how much faster the container now boots or how much memory usage dropped.

The biggest win for the frontend engineers was the reduction in Webpack recompile time when making changes to files. Previously, Webpack traversed all of the entry points; now it only has to look at one (or however many the developer indicates in settings.json). Rebuild times were 2x to 3x faster, and we received lots of positive feedback.

So even though the SAM project was just a milestone in the overall endeavor to enable Micro-Apps, we were able to deliver lots of value to teams in the interim.

Coming up next

Late last year, we started hearing mysterious but sparse reports from one or two frontend engineers that, at some point, Webpack would stop rebuilding when they were making changes. Over time, as the engineering team added more apps and more Docker containers, the problem grew to affect almost all frontend engineers. It was even happening to us on the Frontend Platform team.

We suspected a memory issue, but we weren’t sure of the source. We crossed our fingers hoping that the SAM project would fix the issue, but we were still able to trigger the problem even when running only a single app. Things were still on fire, and we realized that we couldn’t move forward with the quest for Micro-Apps until we resolved the instability issues. Any new features wouldn’t have the desired impact if the overall system was still unstable.

In the third post in the series, I will cover this topic in detail. In the meantime, have you ever managed a similar system? Did you face similar challenges? Different challenges? Let us know in the comments or ping me directly on Twitter at @benmvp.

The Quest for React Micro-Apps: The Beginning

Eventbrite’s site started as a typical mid-2000s monolith: a server-rendered application. Although we have since moved to a React stack, we have experienced inflexibility, tight coupling, and scaling issues.

The Frontend Platform team wants to give developer teams autonomy, flexibility, and most importantly ownership of their apps so that they can move at the pace they need to provide value to our users. We have a vision: we want to get to a world where each React application can be both developed and deployed individually. In short, we want micro-apps. In this blog post series, we relate our quest for this vision, so keep on reading!

It’s been a long journey

Eventbrite built its website in the mid-2000s before the concept of a JAMstack (sites built solely on JavaScript, APIs, and Markup) was ever a thing. As a result, the site was a typical monolith application where the backend code (Python) rendered the frontend (HTML) to generate a website. In modern web architecture, we now create an entirely separate API/services layer so that there can be other data consumers, such as mobile apps or external developers.

Later on the frontend, we sprinkled in some jQuery for light client-side interactions. Once we needed more sophisticated experiences, we started using Backbone (and then Marionette). Then in early 2016, the Frontend Platform team added a React-based stack, with the hope of deprecating the legacy jQuery and Backbone apps over time.

Eventbrite isn’t one SPA (single-page application), but a collection of many applications. Sometimes an application is as big as a whole section of the site, like Event Creation/Management or Search & Browse, and other times it’s just a single admin page. In all cases, however, they are universal React apps rendered both server- and client-side.

If you’re interested in how we accomplished server-side rendering with our Django backend, take a look at the talk I gave on it last year.

Not always sunny

Although we’re moving more server-side logic into microservices accessible via the Eventbrite APIv3, our React apps are still tied to the core monolith in many unfortunate ways:

React Server-side rendering

We render server-side through our Django monolith (watch the video for more details), so the Django layer makes calls to the microservices directly to retrieve initial data. These calls are mimicked in JavaScript for subsequent client-side data retrieval.

Django HTML templates

The HTML templates used to hydrate the React apps initially are in Django-land, so all the data and environment information (locale and other context) have to come from the monolith.

Same repository

Because of the reasons above, to create a React application, you also need to create some Django scaffolding, including routing. As a result, the React apps live in the same repo as the core monolith so that developers wouldn’t have to try to keep two separate-yet-not-separate repositories in sync.

Shared package.json

Our React apps themselves aren’t truly separate. They are technically multiple entry points within a single React monolith that have a single package.json and shared bundling, transpilation, and linting configurations. If one team wants to change a dependency for their app, they need to ensure it doesn’t break the 29 others.

Cross-app dependencies

Because all of the apps come together under one single app, we can import components and utilities across applications. We’ve tried to actively discourage this, but it still happens. Instead, we’ve advised teams to put shared dependencies in the (unversioned) “common” folder.

Constant vigilance

The Frontend Platform team currently oversees the dependencies that all the apps use. We need to ensure development teams don’t accidentally back us into a corner with a library choice that prevents us from moving the platform forward in the future. We also need to make sure that those apps not actively being developed do not break with dependency changes.

Unscalable architecture

If the number of our development teams doubled, everything would probably grind to a halt. Eventbrite already has development teams in three continents across four time zones, so the status quo won’t scale.

We have a vision

We need to give teams autonomy, flexibility, and most importantly ownership of their apps so that they can move at the pace they need to provide value to our users.

We have a vision: we want to get to a world where each React application can be both developed and deployed individually; we want micro-apps. For development, devs wouldn’t need the rest of the site running. They could just build their app on their local machine talking to APIs running on our QA environment. Moreover, for deployment, the entire site wouldn’t need to be deployed to deliver new code to our users for a specific app. However, while the apps are independent, they must still feel cohesive and consistent with the rest of eventbrite.com for our end users.

Micro-apps aren’t a novel idea in the industry, but we believe they will be immensely transformational for us.

Our quest

The thing is, the Frontend Platform team can’t just disappear for 6+ months and come back with a shiny new environment. It is too risky. It’s uncertain because the project is so massive. Moreover, it’s dangerous because it’s all or nothing. If, at five months, the company’s priorities changed and we needed to work on something more important, we would have five months of sunk cost.

So the plan is to rebuild the entire plane while it’s cruising at 36,000 feet. We’ll work on this project iteratively, breaking it down into smaller goals so that we can provide value frequently. It’d be like flying from SFO to JFK and midway through getting more legroom, free Wi-Fi, or lie-flat seats. We never want to be too far from a place where we can pause the project to work on something of greater importance. If all you got during the flight was the legroom and Wi-Fi, that would be better than having to wait for another flight to get all three.

You may have noticed that I haven’t been speaking in the past tense but in the present. That’s because we’re not done! We want to share our learnings as we go; not just the technology, but also the logistics and processes behind it. We want to share what worked, what didn’t, and what challenges we faced in hopes that you will be able to learn from what we’ve accomplished in real time.

We’re applying the same iterative approach to this series, so I’m not quite sure how many posts there will be. The team has a rough breakdown of the milestones that we want to hit and the value they provide. However, there may not be a one-to-one mapping between milestones and articles.

In any event, let’s kick things off with Part 1: Single App Mode.

Simple and Easy Mentorship with a Mentoring Agreement

Mentoring is hard. Mentors and mentees usually have a lot on their plates between work, personal projects, and their training paths. Learning opportunities are infinite, but the time available is not. How can we foster productive mentoring relationships without consuming our time communicating and aligning our expectations?

Read on to learn how a mentoring agreement can help you streamline the mentor-mentee relationship, making communication more efficient and setting the (sometimes hidden) expectations on both sides of the deal.

My struggles navigating the mentorship program

At Eventbrite, we run an engineer mentorship program. Over six months, developers and leaders both mentor and receive mentorship from their peers. A committee matches participants depending on the skills they want to learn or teach.

The program has run a couple of times already, and I have always had hardworking mentees and great mentors. However, during the initial cycle, I struggled with several aspects of the relationship. The first issue was accountability and commitment: how could I motivate my mentees to get things done and make the most of our time? And how could I continue to motivate without coming off as pushy or too demanding? Other challenges I faced were inefficient communication and a lack of clarity in goals and expectations. As a mentee myself, I assumed my mentors might be experiencing similar challenges.

With these issues in mind and eager to improve, I did some research and looked for solutions. Inspired by 6 Things Every Mentor Should Do and Kim Clayton’s talk Overcoming the Challenges of Mentoring, I arrived at a process that includes a mentoring kickoff meeting, where mentor and mentee discuss a mentoring agreement.

The mentoring kickoff meeting

The mentoring kickoff meeting is a quick gathering where mentor and mentee set goals and talk about how they will measure their achievement. In that meeting, you could also:

  • Set hourly commitments and cadence of meetings and communications.
  • Draft a plan of action for the whole mentorship period.
  • Arrange a review meeting later on, where you and your mentor/mentee can sit down to evaluate the relationship.

However, the most critical part of the kickoff meeting is to read, understand and clarify the points of the mentoring agreement.

What is a mentoring agreement?

A mentoring agreement is a reference document in which mentor and mentee agree on their commitments during the period they work together.

A mentoring agreement can enrich the mentor-mentee relationship with the following qualities:

  • Clear expectations. The agreement highlights what mentor and mentee are going to do, establishing a two-way relationship. The shared expectations make accountability an official part of the mentorship experience and help with identifying areas where either mentor or mentee needs extra support.
  • Honest communication. The agreement specifies how communication should happen between the two participants, establishing the channels you are going to use and striving for open and transparent communication.
  • Goals and deadline setting. Discussing what the mentee will do and agreeing to a timeline is an essential component of this document, especially in terms of keeping both parties on track and the overall experience productive. You need to know what success looks like to achieve it.

I like to keep the mentoring agreement short, with five to eight bullet points per role. Some points are intentionally vague, leaving room for interpretation and ongoing discussion.

My agreement

Here is the mentoring agreement that I propose to my mentors and mentees for a healthy and productive relationship:

A Mentor

  • Is there to offer support as a guide
  • Will push the mentee to produce their best work
  • Acknowledges the work put forward by the mentee
  • Prepares the mentee to become a mentor

A Mentee

  • Must finish homework on time and with quality
  • Will graduate after <agreed period>
  • Should let the mentor know if anything is not clear
  • Sets the meeting agenda and shares it with enough time for the mentor to prepare
  • Suggests activities and exercises to do together
  • Welcomes constructive criticism
  • Should keep the relationship going

Both Mentor and Mentee

  • Should be responsive and communicative
  • Should get to know each other

The value of a process

Subscribing to a mentoring agreement sets the expectations of the mentor-mentee relationship, streamlines communication and highlights the goals and deadlines of the interaction.

Although you could say this is all common sense, there is value in making the shared terms explicit. It is more efficient, as you compress several conversations into one. Moreover, you demonstrate the value you bring to the mentorship experience by running it like a pro.

In my first try, this agreement has worked well: it reduced communication overhead, and my relationships have been more productive. I will admit that from time to time I have let a deadline slide for fear of affecting the relationship. I know! I should stick to the agreement, but I guess that’s material for another blog post.

Would you add anything else to this agreement? Is there something you think is helpful to mention? Drop me some lines below or ping me on Twitter @golodhros.

Photo by Mimi Thian on Unsplash

How to Make Your Next Event App Remarkable with These 4 Mobile Navigation Gestures

Raquel was mindlessly browsing Instagram on her Pixel 3, her thumb repeating the same gesture over and over, until she found an image that intrigued her and tapped the hashtag #octoberfest in the comments. Three polka videos and 15 images of lederhosen later, she typed in “octoberfest in sf” and found a pic of her friend drinking out of a 2-liter glass boot.

Does Raquel’s Instagram browsing experience remind you of how you navigate your favorite app? There’s a reason for that. As Principal Product Manager for Eventbrite’s mobile app, I worked with my team to map out the primary navigation gestures used to discover events on mobile and to create a remarkable event app. Read on to learn how we did it.

It all starts with data

At Eventbrite, we have two sides to our marketplace-based business: the business (organizer) side and the consumer (attendee) side. With 3 million events published in 2017, there is a huge user experience problem in trying to show an ever-growing number of events on your mobile screen, all at the same time. So my team and I set out to improve the event discovery experience, starting with our home screen: a simple feed of upcoming events.

Our initial approach to the feed was to create a list of horizontal category buckets of events, but that limited the list to the number of categories, creating a “bottom” to the list view. However, with the task being to improve event discovery, my team asked, “Why limit consumers’ vertical experience subjectively?” So we ran an A/B test of the initial horizontal bucket feed against a (now) more traditional vertical feed with infinite scroll, so the user would never hit the “bottom.” To maintain as much of a control as possible, we kept the backend response to the client the same so that we could understand the impact on engagement and retention in our analytics tracking throughout the month-long test.

Initial Horizontal feed layout vs new Vertical feed layout

The analysis showed a clear winner:

Event clicks increased by more than 3% for the test group (B) compared to the control (A), along with a 3% lift in users tapping “Get Tickets.” This lift was mainly due to the exposure of the “See More Events Like This” suggested-search jump-off. “See More” searches increased by 57% during the test, suggesting that prompting users to search after exposing them to different topics may help them further consider events under suggested topics.

As the team paired this A/B test with qualitative user research, a funny thing happened: we mapped out the event discovery funnel and the key gestures and influences that interact to make consumers attend an event.

Attendance Probability

One reason we were so intrigued with understanding how users employed different navigation gestures in the funnel was our internal research, which referred to the event discovery experience as “a serendipitous moment.” This was understandable, as the focus of that research was to identify the decision-making process for attending an event once it has moved into the consideration state, but my team and I found the analysis extremely vague regarding discovery rituals.

The event discovery funnel

To understand how people physically wanted to explore events, the team first needed to understand the consideration process for attending an event. Luckily, I was able to pair our search patterns with the output of the qualitative research to identify three core influences that impact event purchase consideration:

  • Availability (time & location of the event)
  • Inventory (the main act or headliner, the type of event)
  • Attributes (friend’s availability, the venue, dress code, weather on event day, transportation logistics, price, etc.)

Understanding these three core influences is necessary to power the backend response and put the app in a position to show events with a high probability of conversion. While seemingly straightforward, the three influences are relational: if one of them is problematic (in regards to attendance), the others can overcome its adverse effect. To illustrate, let’s run through some examples:

Example #1: Friday night show at The Independent (San Francisco music venue)

Let’s assume that we are showing recommendations at noon on Friday, and there is an event that evening which starts at 9 pm. If I work or live near The Independent, it’s a lot easier to attend the event than if I worked on the edge of the city and commuted to San Jose. Still, even if I’m physically close to the location, I’m not set on attending until I validate who is performing (inventory) and then which of my friends can go (attributes).

Example #2: Burning Man

Let’s assume I’m living in Europe, but it’s October (i.e., the next Burning Man is 10 months in the future). Although the event is physically far away, the fact that it’s also far out in time helps (complementing location). The event itself is the draw (inventory), but the level of commitment needed to travel to Burning Man means that I need lots of good friends going to make it happen (attributes).

While it’s apparent how each influence impacts the other two, what’s more interesting is that the combination of the influences for each event produces a different outcome for a person’s probability of attending it.

As the team’s understanding of attendance probability became grounded in our qualitative research results, we began to re-examine the initial data from the A/B test. With a more experienced perspective, what the team and I now found most interesting was the consumer’s transition from swiping in the vertical browse experience to tapping in the suggested search experience, rather than the actual metrics themselves. We wondered whether the transition between the two mobile gestures (swiping and tapping) was fluid (back-and-forth) or funnel-based (one-way), and whether these gestures correlate with intent to attend an event.

Just Gestures?

When we talk about navigating or exploring in an app, what we’re doing is using our fingers to steer the app to the response we want. Qualitative assessments showed that the Home Feed (with its vertical infinite scroll) was used mainly with a swiping gesture. As users found events or canned searches that interested them, they would move to a tapping gesture to dive deeper. Moreover, once a user wanted to find a specific event, they would leverage the keyword input for search and begin typing. Top-level interactions with push notifications complemented this discovery experience.

Mapping these interactions to the level of involvement from the user results in a clear arc of effort to interaction:

  • Notifying: the user receives a push notification and is notified of an upcoming event. This is the least amount of effort a user can provide since it is virtually none and corresponds with zero intent.
  • Swiping: the user has opened the app and is browsing through the feed with one finger. This browse experience is a simple repetitive action that the user can do even if they are bored and looking for something to do (an everyday use case) and corresponds with a low level of intent.
  • Tapping: the user’s consideration has been triggered, and they are now tapping. It could be from a `see more events like this` link or a suggestion on another screen, but the suggestion is congruent enough with their consideration set that the user taps it, displaying a medium level of intent.
  • Typing: the user either knows what they want or has a clear sense of how to steer the app towards their consideration criteria. They take out their second hand and enable the keyboard, the maximum amount of physical effort, which corresponds with the highest level of intent.

These four gestures map back to the physical patterns expressed during serendipitous event discovery moments; without all four, a product is missing a fundamental discovery use case it needs to be successful.
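As a hedged illustration (the enum and mapping below are assumptions made for this sketch, not the app’s actual code), the effort-to-intent arc is simple enough to express as a data structure that a ranking backend could use to weight engagement signals:

```python
from enum import IntEnum

class Intent(IntEnum):
    """Illustrative intent level implied by each discovery gesture."""
    ZERO = 0    # notified, no action taken yet
    LOW = 1     # casually browsing the feed
    MEDIUM = 2  # considering a congruent suggestion
    HIGH = 3    # actively steering the app

# The four gestures, ordered by the physical effort they demand.
GESTURE_INTENT = {
    "notifying": Intent.ZERO,
    "swiping": Intent.LOW,
    "tapping": Intent.MEDIUM,
    "typing": Intent.HIGH,
}

assert max(GESTURE_INTENT.values()) == Intent.HIGH  # typing signals the most intent
```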

What does this mean for me?

If you’re a Product Manager or UX specialist, it’s easy to take on a project and carve out an MVP that doesn’t include all of the elements laid out in this post, since your users can still do the core task. However, once you look to improve upon that initial output, make sure to design the navigation elements of your user experience to correspond to the intent of the user. Then pair with qualitative research to understand the variables in users’ decision making and how they complement each other.

What’s next?

We’re trying to push the limits of how we can apply the four navigation gestures (notifying, swiping, tapping, and typing) to the mobile experiences at Eventbrite. If you have our iOS or Android apps, don’t be surprised if you get enrolled in an A/B test and are exposed to unique ways to explore more events! And if you don’t have our app, now’s a great time to check it out, available on Android or iOS.

So what do you think? Feel free to post your thoughts in the comments below, and make sure to pass this article along to any Product Managers or UX specialists you know. You can also read about my efforts to create the best ticket scanning experience for our fans.

Eventbrite Engineering at PyConES

PyConES is the main event for Python developers in Spain and a must-attend for the engineering team based in Madrid. We meet the Python community every year, so we have a chance to catch up with fellow developers from other parts of the country. We learn a lot from them as we share our most recent experiences, either through our sessions or while hanging out with a coffee in hand.

The Spanish Python community has an identity of its own: it’s very diverse, open, and most of all, inclusive. These values are essential to us, which is why we find it necessary not only to be present at this great event, but also to sponsor it!

The trip to Málaga

PyConES is held in a different location every year so that everybody gets to attend at least once, to share the hard work among various members of the community, and, why not, to showcase beautiful Spanish cities.

This year PyConES took place in Málaga. The city is 530 km south of Madrid, and because that’s too far for many people, Eventbrite chartered a bus so everybody could get there. That also allowed us to meet some attendees before the event.

The Python Conference

Eventbrite, like PyConES, has a firm commitment to diversity, so we didn’t want to miss the opportunity to join the Django Girls workshop. (Once at the conference, we were glad to see that 25% of the attendees and 33% of the speakers were female!) One of our Britelings, Federico Mon (@gnufede), was a mentor at the workshop and enjoyed it so much that we’re going to repeat the experience in Madrid on November 17th.

Our engineering team presented two talks:

  • Andrew Godwin (@andrewgodwin), who works as an SRE in San Francisco and is also a Django core developer, talked about his approach to making Django asynchronous.
  • Federico Mon (@gnufede), an engineering manager in Madrid, told us why Ticketea used Wagtail and how it can be employed beyond blogs and news sites.

While this was Eventbrite’s first time attending the conference, the Spanish team in Madrid had participated many times in past years. Our faces were very recognizable, so the obvious questions arose: “Where is Ticketea?”, “What’s Eventbrite?”. We were committed to satisfying the curiosity of everyone visiting our booth (and that was a lot of people, by the way). Eventually, people got to know the brand and gained interest in it not only as a platform, but also as a nice place to work in the center of Madrid. We met lots of people who want to come and visit us at our office!

When people visited our booth, we had the chance to chit-chat with many of them, give them some Eventbrite goodies (very cool ones, if I may say so), and discover their interests. We met many young people and a larger share of female attendees than at any other conference we attended this year. Among them were some non-devs who wanted to get started with programming, and many data scientists as well. We can assert that the Python community is, hands down, one of the healthiest ones out there.

First of all, we want to thank the organizers of Python Spain, the speakers, sponsors, and attendees for making it a great conference every year.

Also, we are hiring!

We are looking for passionate React and Django developers, as well as Site Reliability Engineers. We expect to open many more roles in the near future, so if you are interested in working with us, don’t miss out and check our open positions at eventbrite.com/careers.

For more about other conferences Eventbrite has attended, check this out!

Getting the most out of React Alicante

Fourteen Britelings arrived by way of planes, trains, and automobiles in the city of Alicante, where we joined close to 400 attendees at React Alicante, a two-day conference.

Keep reading to learn what’s new in the React ecosystem and more about our favorite talks at the event.

The atmosphere

Set in the Meliá hotel, overlooking the Mediterranean coast and its rocky shores, the conference venue’s location was enviable. Inside, a small group of sponsor tables lined the conference room lobby. Eventbrite’s table, stocked by our R+D Villena team, was the first one in the attendees’ line of sight as they made their way in to see the speakers. Our swag didn’t last long, but we made sure to keep the Eventbrite presence strong. We had 14 Britelings in the house and our very own Ben Ilegbodu speaking at the event.

The Lonely and Dark Road to Styling in React

Sara Vieira (@NikkitaFTW) walked us through the dark alleys of styling with CSS in ReactJS apps. Sara started her talk by reassuring us that CSS is hard. We know all about that here at Eventbrite, where we use a design system to speed up our development process, which often keeps us from having to walk the lonely and dark road to styling in React. Still, Sara’s talk gave us a lot to think about when it comes to styling in ReactJS apps. She walked us through the pros and cons of everything from link tags to BEM, and her main focus, styled-components.

The Year of Web Components

Dominik Kundel (@DKundel) reminded us, once again, that web components are out there and this is their moment! What are web components, and why should you care? Web components are a set of building blocks, defined as W3C web standards, that allow the browser to natively interpret the reusable components we frontend developers love so much. We often think of frameworks like ReactJS or VueJS when componentizing our code, but what if we could write reusable components that were framework agnostic?

Next Generation Forms with React Final Form

Here at Eventbrite, we know the joys and pains of working with forms in ReactJS. When Erik Rasmussen (@erikras) hit the stage to offer one Form to rule them all, our ears perked up. The author behind Redux Form went back to the drawing board and iterated on a new solution for forms. Unlike its predecessor, Final Form has no dependency on Redux and has a bundle size about five times smaller. Final Form by itself is framework agnostic, but Rasmussen also provides a thin wrapper library for interfacing with ReactJS, which works on a subscription model.

Help! My React app is slowwwww!

Last but not least, our very own Principal Software Engineer, Ben Ilegbodu (@benmvp), hit the stage and got our blood pumping with a quick workout. Thought you’d have to skip your workout during the conference? Ben’s got you covered. It was just what the crowd needed before he shared some insights he’s gathered while working on ReactJS right here at Eventbrite. As a premier ticketing platform, the last thing we want is for our website to feel slow to our users. Ben covered everything from breaking up component markup to combining Redux dispatch calls.

The best talk

Choosing from the long list of great talks is hard. If forced to pick one, I’d go with the talk that wasn’t on the official conference schedule: the conversations with my peers, which brought up insights and questions inspired by the talks.

Eventbrite's Team at React Alicante

Were you there? Which talk was your favorite? What’s the most valuable part of attending a conference for you? Drop us some thoughts in the comments or ping me on Twitter @pixelbruja.

Getting started with Unit Tests

“No amount of testing can prove a software right, but a single test can prove a software wrong.”— Amir Ghahrai

Many developers think that Unit Testing is like flossing. Everybody knows it’s good, but many don’t do it. Some even think that with good code, tests are unnecessary. For a small project you are working on, that might be ok. You know the definition, the implementation details, and the desired behavior because you created them. For any other scenario, it’s a slippery slope, to say the least.

Keep on reading to learn why unit tests are an essential part of your development workflow and how you can start writing them for your new and legacy projects. If you are unfamiliar with unit testing, you might want to start with a thoughtful article about what unit tests are.

Example code

An unexpected morning regression

Last week I came into the office, grabbed my first coffee, leaned back in my chair, and started sipping from my old Tarantino’s cup while reading a bunch of emails, as usual. Eventually, I opened the code and faced what I had left undone the day before. “What was I thinking?” I muttered as I started pounding the keyboard. “I managed to fix that!”

A few days later, we discovered a regression caused by that same line of code. Shame on me. “How could our unit tests allow this to happen?” Oops! No tests whatsoever, of any kind. Now, who’s to blame? Nobody in particular, of course. All of us are responsible for the codebase, even for the code we didn’t write, so it’s everybody’s fault. We need to prevent this from happening again. Since I usually forget what I broke (and especially what I fixed), these missing tests should be the first ones to write.

Here are a few steps I should have followed before crashing our codebase first thing in the morning:

  • If I change any code, I am changing its definition and the expected behavior for any other parts involved. Unit tests are the definition. “What does this code do?” “Here are the tests, read them yourself.”
  • If I create new code, I am assuming it works not just for my current implementation, but for others to come. By testing, I force myself to make it extendable and parameterizable, allowing me to think about any possible input and output. If I have tests that cover my particular case, it is easy to cover the next ones. By testing, we realize how difficult it could be for others to extend our first implementation. This way, our teammates won’t need to alter its behavior: they will inject it!
  • If I write complex code, I always encounter someone who puts me to the test: “Does this work?”, “Yes, here are the tests. It works.” Tests are proof that it works: your best friend and lawyer. Moreover, if someone messes up and your code is involved, chances are developers will summon you to illuminate the situation. Your tests will probably guide you in narrowing down the issue.
  • If I am making a new feature, I should code the bare minimum necessary for it to work. Writing tests first, before writing any real code, is the fastest and most reliable way to accomplish that (see the sketch after this list). I can estimate how much time I will spend writing tests. I cannot estimate how long I will spend in front of the debugger trying to figure out where things went south because I made the whole thing a little too complicated.
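As a minimal sketch of that test-first flow (the `slugify` helper below is hypothetical, invented for this example): write the tests that define the behavior, then write just enough code to make them pass.

```python
import unittest

def slugify(title):
    """Turn an event title into a URL-friendly slug (bare minimum to pass)."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    """These tests are the definition: 'What does this code do? Read them.'"""

    def test_lowercases_and_hyphenates_words(self):
        self.assertEqual(slugify("PyConES in Malaga"), "pycones-in-malaga")

    def test_collapses_repeated_whitespace(self):
        self.assertEqual(slugify("React   Alicante"), "react-alicante")

if __name__ == "__main__":
    unittest.main()
```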

Now I want to write unit tests. What’s next?

Let’s say I have convinced you that tests are not a dispensable part of our daily work, but your team does not believe in this. Maybe they think there is still too much work to do, or that if you were to write all the missing tests, it would take weeks, even months! How can you explain this to your Product Owner?

Here’s the thing: you won’t. If testing becomes a time-demanding task that has to be on a plan or a roadmap, it won’t likely ever take off. Instead, I want to offer you some tips to get started with testing that work both when you have a significant testing deficit and when you have just started a new project:

  • Write unit tests first if you don’t know where to start.
  • Only write tests for the code you made and understand.
  • Don’t test everything. Just ensure that you add a couple of tests every time you merge code.
  • Test one thing only. I’d rather maintain five simple tests than one complex one.
  • Test one level of abstraction. This means that when you test a component which affects others, you can ignore them. Make the component testable instead of testing everything around it.
  • If some new code is too complex to test, don’t. Rather, try to split it into smaller pieces and test each individually.
  • Don’t assume current locales or configuration. Run tests using different languages and time zones, for instance.
  • Keep them simple: arrange just one “System Under Test” (SUT), perform some action on it to retrieve an output, and assert the result is the one you want (see the sketch after this list).
  • Don’t import too much stuff into test suites. The fewer components involved, the easier it is to test yours.
  • Start testing the borders of the system, the tools, and utility libraries. Create compelling public APIs and test them. Ignore implementation details, as they are likely to change, and focus on input and outputs.

Remember, these tips work well for a codebase with no tests. The very first time you are about to fix, refactor, or change the behavior of any part of the code, write the tests first to ensure you are not breaking anything. Working with legacy code this way, you will see test coverage increase steadily as the code changes.

Conclusion

In this blog post, we included some pieces of advice taken from our own experience with unit testing. There are other types of tests, but if you and your team want to start testing, unit tests suit you best.

Unit tests are more “straight to the point” than any other kind, since they focus on validating single parts of a more complex codebase. If you are new to them, don’t panic: start from the smallest piece and build upon that. You’ll learn a lot along the way, and you’ll detect implicit dependencies or troublesome APIs you had previously overlooked.

One nice thing about testing is that you make a massive leap towards coding from the outside in, instead of from the inside out (which is usually better for the implementer, and rarely for the user), and that turns out to create more elegant, comprehensive, and extendable code. It goes without saying that manual testing is still a thing.

What’s your experience with testing? Is there any other tip you would suggest to newcomers? Drop us some lines in the comments or ping me directly on Twitter @Maquert.

Photos by Markus Spiske and Isis França on Unsplash.

The “Aha” Moments of Becoming an Engineering Manager

Making the leap to being a manager is one of the most challenging transitions an individual contributor (IC) can choose to make in their career. And, let me tell you, my initial transition from IC to manager was not especially graceful — I would lovingly describe the two years it took me to figure things out as a time of “epic failure.”

A couple of months back, I shared this story on the main stage at ELEVATE, a San Francisco-based event for engineering leaders. In front of an audience of peers and senior engineers – some friendly faces, but mostly all strangers – I shared how dropping the ball as a new manager was, in fact, an invaluable lesson. The “aha” moment I experienced has stuck with me throughout my career as a manager, as a leader of leaders, and, today, as the SVP of Platform at Eventbrite.

Three key tips I share with engineers or technical folks thinking about (or struggling with) transitioning from IC to manager are:

  • Know how you work best and make adjustments (ex: I personally don’t multitask well).
  • Understand that the skills that made you good at one level won’t necessarily translate to the next.
  • Learn how to embrace a growth mindset.

My presentation was graciously captured on video, so you can hear me expand on these points here:


Radical Transparency: Biggest Learnings From Transitioning to Management

About 10 minutes in, I flip to being the interviewer, navigating a fantastic discussion packed with insights between four esteemed panelists (Yi Huang, Sue Nallapeta, David Murray, and Hala Al-Adwan), who hold engineering leadership roles at Facebook, Zoosk, doctor.com, and Signal Sciences, respectively.

Ultimately, the lessons I learned as a result of my “epic failure” ended up shaping me into the engineering manager I am today and impacted the trajectory of my career. I hope others also benefit from my shared tale of woe when navigating their own transitions into the next level.

Have you experienced an “aha” moment in the face of a career challenge? Tell us about it in the comments.