What the Top Minds in Tech Communicated at Hopperx1 Seattle

Loretta (left) and Elizabeth (right) in Seattle

The second Hopperx1 Seattle conference was an impressive two-day event with 92 speakers, including us! We attended talks and events surrounded by a diverse group of 1,500 women. Here are some of our big takeaways.

Diversity and Inclusion is a Choice

The conference opened with Brenda Darden Wilkerson reviewing the history of women in technology. She asked why Katherine Johnson and Hedy Lamarr aren’t mentioned more often when we talk about who made modern technology possible. Recognizing women’s contributions to technology helps counter harmful narratives that technology isn’t the place for women. It also gives younger technologists access to a larger set of role models.

Sandy Carter continued the D&I focus with a keynote speech on how to build the most innovative products: diverse teams do a much better job! Representing more viewpoints on your development team introduces more perspectives and allows you to craft products for a broader demographic. Elizabeth Viera echoed this point in her talk the next day, declaring that diversity was one of her team’s greatest strengths when building products. Having different backgrounds made for some difficult conversations when people disagreed, but defending a position pushed team members to question their assumptions. In the end, their decisions were better thought through.

In a panel on diversity and inclusion, Ambika Singh, the CEO of Armoire, shared how her first angel investor was a woman who was using her company’s service. She encouraged the audience to rethink their ideas of who could be an investor, since investment firms control which ideas get funded. Increasing the diversity of venture capital firms can also increase the diversity of funded startups.

We Need to Pay Attention to the Human Side of Tech

On the first day, Jen Davies and Elizabeth Reifer walked us through wireframing techniques using decision diagrams. Our group undertook the task of designing a flow to allow a user to change their email address. Many questions arose: could this present a security issue? Should we do it like other apps we’ve seen, or should we try something new? Should the user be able to find the feature from several locations, or would that add redundant layers of complexity?

Later, we learned about developer user experience at Shraya Ramani’s talk on how Buzzfeed open-sourced their single sign-on authentication proxy. While this seemed like giving away the keys to the product, Shraya compared it to crowdsourcing a way to build a better lock.

Samantha Reynolds gave a talk deconstructing the hype around blockchain. She emphasized trust as the main benefit of blockchain and gave examples of uses for the technology in industries where trust is integral: agricultural, legal, and pharmaceutical sectors.

The conference wrapped up with an inspiring talk from the Fred Hutchinson Cancer Research Center about how they’re partnering with major tech companies to pioneer cancer research methods. Kathy Alexion introduced how natural language processing, data science, and wearable technologies are fast-tracking research, helping doctors select treatment plans, and preventing financially burdensome emergency room trips.

The Spotlight on Sponsors, Mentors, and Leaders

One of the most popular tracks at the conference was the set of career advice talks. Stephanie Szeto’s “The Art of an Effective 1:1” had multiple overflow rooms where the talk was being streamed, and Loretta filled an auditorium with her talk on leveling up your career. Many career talks highlighted the importance of having mentors and sponsors. Szeto spoke about how to have awkward conversations with your manager, advising that the best time to surface a desire for a promotion is the quarter before it happens. Loretta Stokes recommended having at least two mentors: one who looks like you and can help you navigate challenges that they once faced, and another from a different demographic who can give you a different perspective on how to react to adversity.

Loretta takes the stage to promote the importance of mentorship.

There was also a significant emphasis on sponsorship: how to get it and why we need it. A sponsor is a person senior to you who advocates on your behalf. The panel members of “The Future of Seattle Tech—A Diversity Perspective,” Alice Steinglass, Ambika Singh, and Nicole Buchanan, surfaced the idea of turning a mentor into a sponsor, especially if you can find mentors at your company a few levels above you. In her talk, Loretta Stokes explained that sponsors are essential to your career after a certain point: sponsors can find internal opportunities that might not be posted in public spaces and put your name up for consideration when you aren’t in the room to advocate for yourself.

Your Environment Matters, A Lot

Many of the talks acknowledged that some aspects of your career are not directly in your control, such as how you interact with your manager and team, and the culture of your company. Some talks addressed ways organizations could shape their culture to be better for employees: Elizabeth Viera’s talk on intrapreneurship programs discussed how lean, iterative, autonomous mini-startups at mid-sized companies allow employees to concentrate effort on the challenges that matter, like interesting technical problems or building a better product, rather than securing support or getting customers to notice you.

Elizabeth defines ‘intrapreneurship’ and points to the strength of diverse teams.

Sarah Henrikson surfaced the benefits of letting employees work in tech part-time, which allows them to take care of their mental health while remaining productive and engaged. Other industries already employ part-time workers, and the flexibility of part-time work can meet employees’ needs while allowing companies to retain people who might otherwise search for flexible options at other companies or in other industries.

On a personal level, Stephanie Szeto and Loretta Stokes both talked about how to know when it’s time to leave a team or company. Stephanie Szeto acknowledged you might just be incompatible with your manager, while Loretta Stokes emphasized the importance of being at a growing company to find those opportunities to move up. In her talk “Embracing the Rebuilding Years,” Carey Jenkins explained why setting boundaries with your company is important: it is a form of advocating for yourself and reminding others to value your time.

We’re Still Learning

Several times at the conference, speakers advocated for lifelong learning. They encouraged adopting new technologies and seeking the perspectives of users. We have many opportunities to grow into more thoughtful people and change our views of practices and institutions. Sarah Henrikson taught us that we should value part-time employees’ career ambitions and dedication at the same level as full-time employees’. Many of the reasons people want to move to part-time are circumstantial, such as caring for themselves or other people. Even if a person moves to part-time to pursue outside interests, they’re still learning and growing at their jobs and should be considered for raises and promotions accordingly.

Christina Lee presented apprenticeships as a pathway into the tech industry and encouraged us to see value in non-traditional tech backgrounds. There were many moments when we were urged to rethink the way we use language to be more inclusive. When one of the speakers, Erin Grace, pointed out the harm of using the language of “tribes” to describe friendship, other speakers were willing to listen and revise their talks accordingly. It was very encouraging that Hopperx1 created a climate of empathy and growth.

Loretta contemplates the collective brilliance of all the women at the conference.

Our Takeaways

We left Hopperx1 Seattle feeling inspired and empowered. The lineup was powerful and cohesive, but never repetitive. The environment was very encouraging, and it was refreshing to see auditoriums filled with women in tech. The speakers were all fantastic and their messages resonated. We hope to go back next year! Will we see you there?

Photo of Loretta by @shidoshi. Others by Loretta and Elizabeth.

8 Simple Tips for Better Communication with Customer-Facing Teams

Businesses often maintain documentation as part of their product. You might be thinking of help center articles, hover text, and little anthropomorphic paper clips. But the end user is not the only one who benefits from guidance around a product’s intended use. Customer-facing teams, like sales and customer support, also need thorough product communications.

Enabling customer-facing teams to support and sell your offerings is part of delivering a high-quality product. Development teams should be as cognizant of the needs of customer-facing teams as they are of the needs of their end users. In this post, I share 8 tips for communicating with customer-facing teams.

1. Real-time isn’t always best

Engineers being available for questions from customer-facing teams is a well-intentioned practice. Generally, this practice is healthy for the organization too. Yet, communication channels that enable this real-time support come with pitfalls.

If your company operates in several offices or timezones, communicating in real-time may be difficult. Remote employees commonly receive subpar communications.

It is also hard for development teams to track the work they do supporting customer-facing teams in real-time. Not being able to track this work can make planning difficult.

Real-time communications are rarely indexed or searchable. This means you will need to do a bit of digging when you want to reference old information later. Anyone not involved in the original correspondence will struggle to find that information. You are unlikely to get any answers from an email you don’t know about, or one in someone else’s inbox!

Because of their different responsibilities, engineers and customer-facing teams also have different contexts. It is often hard to know what someone else knows, and it is difficult to know what information would help someone without a sense of the challenges they face.

Real-time channels are still a valuable part of communicating with customer-facing teams. Pairing real-time conversation with written documentation and maintaining FAQs is a nice mix!

2. You need to translate some communications

If your team works in more than one language, translating communications is critical. Beyond that, these communications should always leverage language that promotes shared understanding.

Reusing documentation for many stakeholders is practical. Keep in mind that customer-facing teams may not be familiar with the same jargon. Those teams also have different areas of concern. Documentation that a development team prepares for product and engineering is unlikely to be as helpful for the sales team.

Write documentation in a way that serves more than one stakeholder. If that doesn’t make sense, create more than one document. Preparing communications explicitly for customer-facing teams enables them to support your user better. This is more work up front. Yet, in the long run, this may even reduce the time and cost of cross-functional communication.

3. Good content is useless if nobody can find it

Teams should use a mix of documentation and communication strategies. For coworkers who are working with a live user, searchable information is critical. Utilizing existing searchable tools for communication is great, but devs can think bigger! Consider building tooling that aggregates and indexes internal comms. Painstakingly crafted documentation has little value without good visibility.
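As a rough sketch of what such tooling could look like (the message shape and function names below are hypothetical, not an existing Eventbrite tool), even a naive in-memory inverted index makes a stream of messages keyword-searchable:

const index = new Map(); // term -> Set of message ids
const messages = new Map(); // id -> message

// Index a message so each word in its text points back to the message.
function addMessage(message) {
    messages.set(message.id, message);
    const terms = message.text.toLowerCase().split(/\W+/).filter(Boolean);
    for (const term of terms) {
        if (!index.has(term)) {
            index.set(term, new Set());
        }
        index.get(term).add(message.id);
    }
}

// Look up every message that contains the given term.
function search(term) {
    const ids = index.get(term.toLowerCase()) || new Set();
    return [...ids].map((id) => messages.get(id));
}

addMessage({id: 1, channel: '#support', text: 'How do refunds work for recurring events?'});
addMessage({id: 2, channel: '#sales', text: 'Refunds FAQ draft is ready for review'});
console.log(search('refunds')); // -> both messages

A real aggregator would pull from chat, email, and docs and persist the index, but the core idea is the same: once content is indexed, a coworker on a live call can find the answer without knowing where the conversation originally happened.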

4. The Agile Manifesto doesn’t suggest you stop writing documentation

“Working software over comprehensive documentation” is one of the core values of Agile. Some believe it prescribes prioritizing development work at the cost of documentation. This is a misinterpretation. Teams should create documentation that adds value and doesn’t hinder progress. The Agile value actually supports thinking of communication as a product! Instead of “comprehensive documentation”, try considering communication as part of building “working software.”

5. Demos, demos, demos

Eventbrite surveyed our customer-facing employees, and this tip comes straight from the data. Recorded demos are a valuable part of our communication strategy. The issue at Eventbrite wasn’t creating demos, but making sure we distributed them. We mostly shared our demos with product and engineering. We now know that those same demos can help empower our customer-facing teams. When you create informative content, think beyond the obvious stakeholders.

I also want to call out that you should record demos whenever possible. Live demos are valuable, but recording and distributing them can broaden their visibility. This is critical to serving customer-facing teams in other timezones and locales.

6. Slide out of the DMs

This tip is one of my favorites because you can apply it at a personal level without having to get your team on board. Discuss your product’s functionality or expected behavior in public channels. This is especially valuable for teams that use tools that make conversations searchable. Set clear expectations for what information is not ready for your customer. Then, have non-sensitive conversations where customer-facing teams can access them.

7. Consider the impact on peer support channels

Customer-facing teams are resourceful. These teams create spaces to ask their peers for help. These channels are very useful when trying to improve visibility for your communication. These forums are also a perfect place to learn what questions need answering. I recommend checking in from time to time! Use these channels to help build out FAQs.

8. Audit your processes

I would be remiss to not also share a more general tip for healthy business communication. Approach conversations with empathy and try to understand the other party’s perspective. Discuss and measure the success of your communications. Ask customer-facing folks what is working for them and where they need more support. Keep looking for opportunities to improve your communications as you do your product. Consider adding a mechanism for soliciting, recording, and leveraging feedback.

Now What?

Good cross-functional communication is often an amalgam of communication channels and strategies. The communication improvement initiative with the greatest impact will vary between organizations. What strategies have worked best for you when sharing information with customer-facing teams? Please add a comment with your insight, or reach out to me on Twitter to continue the conversation!

What is the best way to hire QA Engineers?

If you are a leader in engineering, hiring is one of your most important responsibilities. It can also be one of your most demanding tasks. It can be challenging and expensive to attract talented engineers who meet your qualifications and are a good fit for your organization.

Have you ever considered looking outside of your traditional hiring pipelines to find engineers, perhaps inside your own company? There are people you see every day who don’t yet have the title of engineer but are ready and willing to learn. Here at Eventbrite, we’ve had several folks move from our customer support teams to engineering. We’ve found that high-performing customer support representatives (CSRs) have skills well suited to engineering.

In January, I wrote How To Move From Customer Support to Engineering in 5 Steps. I promised to follow up with a post for leaders in engineering who want to support these career moves. Read on to discover how to find the best QA engineers for your team.

Customer Support Representatives make excellent QA Engineers

At Eventbrite, we’ve observed that roles in Quality Assurance (QA) are a good fit for those coming from customer support. Here are four reasons why you should take a closer look at your hidden talent pool.

Customer empathy

When I was in customer support, I would invite software engineers to sit next to me to shadow my calls with customers. One time, a customer called in to get help using a particular feature. During the call, the software engineer observed the customer’s difficulty with the feature because of a small UX flaw in the product. After the shadowing session, the engineer immediately went to his desk to fix the flaw. He saw firsthand the customer’s pain and did not want any others to suffer the same way.

This type of customer empathy helps build delightful, user-friendly products. However, encouraging customer empathy among development teams can be difficult. Most software engineers don’t have the time to talk to customers or to shadow your support team’s calls.

Folks from customer support speak with customers every single day to help them understand how to use your product. They have built a wealth of knowledge about all the various user flows, edge cases, and sticky situations your customers encounter. They have spent hours teaching a customer how to use the more difficult parts of your product, and have been on the receiving end of a tirade from an angry customer encountering a bad bug. Finally, they know what makes your customers happy and what makes them want to search for a competitor.

This profound amount of customer empathy makes people in customer support exceptionally well equipped to help improve your product. Why not give them a voice by hiring them into a software development team? There they can recommend product solutions and suggest changes in project requirements. Their inclusion will help to create delightful experiences the first time around, not only after hearing user feedback.

Product expertise

CSRs are complete experts in your product. Just as they know your customers, they also know your product through and through. They know all the quirks, nuances, and flaws, and how to work around them. Many CSRs have your product entirely memorized too. I once saw a CSR walking a first-time customer through their account. She was guiding them on setting up their first event page and how to use our advanced features. She did this all while walking around the office with no computer screen in front of her. She relied only on her memory to lead the customer step by step through links, pages, forms, and buttons to publish their event. She knew the product inside and out.

This deep product knowledge makes CSRs excellent at identifying test cases. They are especially good at finding test cases that your software development teams might otherwise have found as critical bugs in production.

Critical thinking skills

Most folks in customer support don’t have degrees in computer science, but what they do have is a unique strain of troubleshooting expertise. CSRs often have to think of solutions and workarounds to appease a customer on the spot. Picture this: a CSR is on the phone with a client who is experiencing a bug. A thousand people are waiting impatiently to get into the event. Even in this high-stress situation, the CSR can calm the customer while searching for a solution to the issue. This way of working requires a particular type of critical thinking skill that doesn’t bend under pressure.

Another benefit of this expertise is an unparalleled ability to anticipate problems proactively. CSRs have seen every manner of bug, design flaw, and user error in the past; they can help your development teams anticipate these problems when building new features so that you can develop a firm foundation for your product.

It’s easy to hire CSRs to engineering

Hiring someone from customer support into engineering is so much easier than hiring externally. You don’t have to go through a lengthy and expensive hiring cycle. You know that the candidate is a good fit at the company because they already work there. Moreover, you won’t have competing offers from other companies to entice your candidate away from you.

While it is easier to recruit internally, there are a few things you should consider when trying to attract talent from other parts of your organization into engineering.

Provide learning opportunities to those outside engineering

Many people outside of engineering want to learn coding basics, but they might not know how to get started. When I was in customer support, Eventbrite offered me the opportunity to take a very basic online HTML course, which sparked my interest in programming. It also recharged my commitment to building my career at Eventbrite, which I had felt slow down during my time in customer support.

There are many ways to provide learning opportunities to those outside of engineering. You could run a half-day workshop on the basics of Python, SQL, or JavaScript, or provide Udemy subscriptions for the team. Alternatively, you could offer shadowing sessions for others to learn what a “day in the life of an engineer” looks like. We’ve also seen success with educating non-technical folks at Eventbrite about our engineering processes. One of our engineering directors, Eyal Reuveni, recently hosted a popular series of talks that included “Software Engineering Concepts, Explained Non-Technically,” “How We Write, Test, and Release Code at Eventbrite,” and “How the Internet Works.” You’ll be able to identify and recruit those who were most excited by these learning opportunities.

Encourage tangential work

These are tasks that are just outside the scope of your regular assigned role. As a CSR, tangential work for me was anything that wasn’t answering phone calls and emails. I was able to triage bugs, which exposed me to our internal tools as well as SQL, databases, logging, and even the command-line interface. As a QA Engineer, my tangential work was finding ways to start making an impact in the code. I started to fix small bugs, write Python scripts to automate bug statistics, and pair program with my teams on building small features. This work wasn’t in the job description for my QA role, but it helped keep me engaged at work. These bite-sized opportunities were an excellent way for me to try out a software engineer role. Switching from customer support to full-time software engineer seemed impossible, but by taking on these bite-sized pieces of work, I was able to build experience and interest in the career path.

It’s also important to note that you should provide opportunities for people to do this work during their work hours rather than expect them to work overtime or outside of work. Reward high performers with 10% of their work hours spent on tangential work. By encouraging this type of behavior in your company culture, you’ll see greater retention of employees. High performers will choose to look for internal moves that align with their career growth rather than look externally.

Reach out to high performers in other roles

Your coworkers on the other side of the office may not know that opportunities in engineering exist for them. They may not know that you are willing to hire people into engineering without a technical degree; that was the case with me. I learned that you could be an engineer without a technical degree for the first time at a Girl Geek Dinner. At the event, I heard a software engineer speak about her experience moving into engineering from customer support at her company. My mind was blown; before this, it had never occurred to me that this was even a possibility.

Spread the word by proactively reaching out to high performers to gauge their interest in a career change. Assure them that they would be well supported in this transition along the way. Make sure you have a plan for how to onboard them and provide them with a mentor (check out our resources on the topic here, here, here, and here).

Final thoughts

You might be wondering if all of these benefits are worth the risk of hiring someone who doesn’t have previous experience in a QA role. I’ll let our VP of Engineering, Galen Krumel, sum up why it’s a lower risk to hire a QA Engineer internally from customer support than it is to hire one externally:

“Working on the front lines and helping customers solve their most difficult problems is a challenging job. You can only really be successful at this if you are passionate about the customer experience, understand the product deeply, and are driven to solve difficult problems. These are the exact same traits that we look for when hiring QA Engineers. And when they’ve already established a track record of getting things done, and have a strong set of relationships inside the company, it takes away nearly all of the risk of hiring an unknown entity from the outside.”

Beyond being less risky than hiring someone from the outside, looking inside your company to find engineers can be far easier and more cost-effective than looking externally. Also, as we’ve learned, customer support representatives hold an abundance of knowledge about your product and customers. High-performing CSRs will look elsewhere if their company doesn’t keep them challenged, and some studies even show that average CSR turnover is between 30% and 45%. If these CSRs leave your company, they take all of that invaluable knowledge about your customers with them (and potentially to your competitors!). Keep that knowledge with your company; even better, retain that customer-centric knowledge within your engineering team where it will be put to good use as you build your product.

Do you have experience hiring engineers from other departments at your company? Let us know in the comments below or reach out on Twitter (@snazbala).

Open Data: The what, why and how to get started

Have you tried looking for data about the events industry in other countries? Recently I started to investigate new markets, new countries, and new possibilities for our business. My first question was “how are others doing this?” When I searched, Google only showed a few isolated results, mostly tax reports or event announcements. So I thought, “All this research could be easier if we had a trustworthy data source.” Unfortunately, that data source doesn’t exist, so the only data I can play with is our own databases.

A good data source about events would help companies like Eventbrite improve their customer service. However, no such data source exists. Why isn’t there an open data set for the events industry? Customer-facing companies would benefit from sharing data with each other, using the concept of Open Data.

Read on to discover what Open Data is, why it’s important, and how you can get started.

What is Open Data?

The idea of Open Data has been around since the late ’90s but is only recently becoming widely implemented. According to the California Open Data Handbook, data must satisfy two essential principles to be entirely accessible: it must be technically and legally open.

  • Technically open: available in a standard machine-readable format, which means it can be retrieved and meaningfully processed by a computer application.
  • Legally Open: explicitly licensed in a way that permits commercial and non-commercial use and re-use without restrictions.

Defining a set of data as open requires that the data be presented within an application programming interface (API) so it can be accessed from outside its origin; we might also structure the data as a bulk download; and if it’s aimed at the average citizen, the data should be available without requiring software purchases.
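As a rough sketch of the “technically open” idea (the dataset, fields, and endpoints here are hypothetical, not an actual Eventbrite API), a few lines of Node with Express can expose the same data both as a queryable API and as a bulk download:

const express = require('express');

const app = express();

// Hypothetical open dataset: machine-readable records with an explicit license.
const dataset = {
    license: 'CC-BY-4.0',
    records: [
        {city: 'San Francisco', year: 2018, events: 1200},
        {city: 'Mendoza', year: 2018, events: 85},
    ],
};

// API access: applications can retrieve and filter the data programmatically.
app.get('/api/v1/events', (req, res) => {
    const {city} = req.query;
    const records = city
        ? dataset.records.filter((record) => record.city === city)
        : dataset.records;
    res.json({license: dataset.license, records});
});

// Bulk download: the whole set in one machine-readable file.
app.get('/downloads/events.json', (req, res) => {
    res.json(dataset);
});

app.listen(3000, () => console.log('Open data portal listening on :3000'));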

What can we expect from implementing Open Data?

Having shared data in our companies increases the likelihood of involvement by the average consumer, as well as potential customers. The availability of a public data set also makes it more likely that researchers, other companies, and other markets could help us to understand our own market fit and the possibilities.

I know what you’re thinking: “If I offer all my data, then what will I get in exchange?” Having your audience test your data and give you feedback could result in unexpected advantages:

  • Identify new features
  • Improve the existing process in your company
  • Discover new markets
  • Find an unexpected market fit
  • Have information about similar products around the world
  • More innovation

A success story for Open Data

One of the biggest successes for Open Data happened in Canada: two students helped expose one of the biggest tax frauds in Canada’s history and saved Canadian taxpayers $3.2 billion just by checking data from the Canada Revenue Agency (CRA). The two students were reviewing tax information from the 2005 annual reports that charities return to the CRA. After discovering that the most prominent charity foundation in Canada didn’t appear in the top 15 of this list, they informed the CRA about the irregular behavior and data inconsistency. The agency started an investigation that resulted in the shutdown of charity tax shelters and foundations that reported charitable donations that never existed. For more about it, see: https://eaves.ca/2010/04/14/case-study-open-data-and-the-public-purse/

Since 2012, we have seen the founding of the Open Data Institute, which focuses on data’s business value, and a benchmark McKinsey study that pegged Open Data’s annual value at $3 to $5 trillion in the U.S. We witnessed the sale of the Climate Corporation, a pioneering Open Data company, to Monsanto for about $1 billion. Lastly, we have seen the launch of the Open Data 500 and the Open Data Impact Map, which have documented the use of Open Data by thousands of companies worldwide.

We can find examples of Open Data portals in numerous cities across the U.S., such as San Francisco, New York, Los Angeles, Chicago, and Sacramento.

According to the European Data Portal, implementing Open Data in local governments has produced many benefits:

Benefits of using data in governments

How can we start?

Creating Open Data isn’t without its complexities. Many tasks need to happen before an Open Data project begins. For example, the first step is a full endorsement from leadership. Adding the project into the company’s workflow is another. After we set the foundation for Open Data, the handbook prescribes four steps:

  1. Choosing a set of data: This might sound pretty obvious, but choosing a data set is more complicated than you might think. The data should be a particular set chosen based on your unique goals.
  2. Attaching an open license: You can find tips for reference at Opendefinition.org, a site that has a list of examples and links to open licenses that meet the definition of open use (see the sketch after this list).
  3. Making it available through a proper format for your audience: We must package data in formats that all users can digest: developers, civic hackers, department staff, researchers, and citizens. This could mean creating or modifying APIs, text and zip files, FTP servers, and more. The file type and the system for download all depend on the audience that you want to reach.
  4. Ensuring the data is discoverable: The goal is to have one way to access all the formats and all the data. Maybe you think “we have all this data available, let’s promote this on all the websites that we can,” but that is a terrible idea. Having many sources could give the impression that the data is not trustworthy. It is better to have a single trusted site where the public can find all our available data.
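For step 2, one concrete (and hedged) way to attach a license is a data package descriptor. The sketch below writes a datapackage.json in the style of the Frictionless Data spec; the dataset name and files are hypothetical:

const fs = require('fs');

// Hypothetical descriptor: pairs the data files with an explicit open license
// (license IDs in the style of opendefinition.org, e.g. CC-BY-4.0).
const descriptor = {
    name: 'events-industry-stats',
    title: 'Events Industry Statistics',
    licenses: [
        {
            name: 'CC-BY-4.0',
            title: 'Creative Commons Attribution 4.0',
            path: 'https://creativecommons.org/licenses/by/4.0/',
        },
    ],
    resources: [
        {path: 'data/events.csv', format: 'csv'},
    ],
};

fs.writeFileSync('datapackage.json', JSON.stringify(descriptor, null, 2));

Shipping the license metadata alongside the files means every consumer of the data, human or machine, can check what re-use is permitted without contacting you.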

Conclusions

The reasons for opening our data come back to improving customer service. The ultimate rationale for sharing our data, as a top company in the events industry, is to find new ways to address users’ questions and requirements.

What possibilities might arise if we open our knowledge and ask other companies in the industry to do the same? What do you think? Drop us some lines in the comments or reach me on Twitter at @natuc_no.

Resources

  • California Open Data Handbook: A guide published by the Stewards of Change Institute. It explains what Open Data is, why it’s important and the technical nuances behind opening it up.
  • Sunlight Foundation, Open Data Guidelines: The Sunlight Foundation is a well-known open data advocate. These guidelines offer advice and best practices for governments that want to start an open data project.
  • Open Data Institute: In Europe and across the globe the ODI is making waves by linking open data with businesses and organizations. The organization offers tools, tips, and classes on open data use in addition to the certification of open data types.
  • The Data Transparency Coalition: A transparency lobbying group that has been working with legislators in Washington D.C. They have a website explaining and monitoring the issues around Open Data.
  • Open Data Definition: Do you want examples of “open” licenses that you can add to your data? This site has a collection of licenses for reference and use.
  • Data Collaboratives: An open data ecosystem for the private sector.

Photo by rawpixel on Unsplash

How to Craft a Successful Engineering Interview

Interviewing engineering candidates is hard. It takes a lot of practice to get comfortable with it. Moreover, interviewing is a necessary skill for your career as an engineer. You have a limited amount of time and there are many questions you can ask.

Read on to learn how to manage your time, and think about good interview questions and coding challenges.

Preparation

For many years I was usually as nervous as the candidate I was interviewing. A couple of years ago my manager gave our team training before a series of upcoming interviews. He gave us advice on how to prepare for the interview and how to manage our time. In this post, I’ll share the advice from my manager and a few of my insights from interviewing other engineers.

I cannot stress this enough: prepare for the interview in advance. Review their resume and formalize your questions in writing a day or two before the interview. You never know when you might be called to deal with some urgent matter or a critical bug. It could happen right before the interview, leaving you no time to prepare. This has happened to me, and it’s a bummer.

The hiring manager should inform the team about the role we want to fill and what skills each interviewer is evaluating. Each interviewer should focus on a different skill set: back-end, front-end, communication, etc. This separation gives the team a broad understanding of the candidate’s skills. If this is not clear the day before the interview, contact the hiring manager for guidance.

Time Management

Hourglass on rocky shore with blue sand flowing

Your primary goal is to walk out of the interview with a clear decision on whether to hire the candidate or not. It’s that simple. Don’t feel bad about judging them; that’s the deal here, and everyone understands it. Budget time to prepare for each interview in advance. During the interview, do your best to keep an eye on the clock and stay on schedule.

Here is how I set the agenda for a 45-minute interview:

  1. I introduce myself and offer them a bio-break (i.e., bathroom, water, coffee). You want the candidate to feel comfortable, so they perform their best. Maybe they had a tough coding challenge with the previous interviewer or hit bad traffic getting in. Give them the opportunity to regroup before diving in.
  2. I give a brief overview of my role, the team, what we do, and how we operate. I spend about 5 minutes on this. Sometimes questions come up that lead to further discussions; that’s ok, but keep an eye on the time. You may need to cut it short and move on.
  3. I ask specific questions about their resume. These 15-20 minutes are crucial to evaluating the candidate. We’ll discuss this more below.
  4. I give them a coding challenge. Another 15-20 minutes. This step likewise is critical.  We’ll discuss good coding challenges below.
  5. I close with a Q/A session. About 5 minutes on this depending on how much time we have. This period gives them an opportunity to ask any follow-up questions. Be sincere and honest with your answers. This is also about making sure the job is a good fit for them.
  6. End on a positive note. Introduce them to the next interviewer or escort them out.  They are your guest, treat them as such.

It’s not easy making this evaluation. You have a lot to cover in a short period. Keep an eye on the clock, stay on task and get good at politely interrupting. Sometimes we get caught up in a conversation, but it’s essential to cover all the bases.

The main things I’m looking to walk out of that room with are:

  • Does this person have programming skills?
  • Are they curious and can they learn?
  • Can they communicate effectively?

Resume Questions

Read through their entire resume; it contains nuggets of information. Find a few items that are interesting or relevant to the job and write down questions that allow you to assess the candidate’s skills in those areas. Be specific: “Explain how you used Technology X to connect these two applications.” Don’t be vague: “Tell me about your time at Company Foo.” Specific questions allow you to get into the details and gain insight into their understanding. Ask follow-up questions if necessary to assess their skill.

If there is anything on their resume that doesn’t seem quite right, make a note of it and ask them. For example, they have a bullet point saying a project took 2 years to ship, but you notice they were only at the company for 9 months. Ask them about it; often there is a simple explanation.

I always print out a copy of their resume and write directly on it, ideally in red Sharpie. I circle keywords and put question marks or notes in the margins. In the end, it looks like a graded homework assignment my kids bring home from school. I bring this marked-up resume into the interview and keep it in front of me so I can refer to it as we talk. It’s ok if they see it; I want this to be an open dialog.

I do bring a laptop into the interview for the programming challenge, but I keep it closed most of the time. I intend to speak directly with the candidate and do my best to listen.

Coding Challenge

Woman at whiteboard diagramming design of homepage to man

The candidate is interviewing for an engineering position: writing code is a must-have skill. They should write code on a laptop or diagram software architecture on a whiteboard. The format is up to you, but they need to be able to demonstrate this skill. Coming up with good coding challenges can be stressful, but try not to worry about it too much. The challenge doesn’t have to be super complex. It shouldn’t be. It also doesn’t need to be as trivial as FizzBuzz. Your goal is to make sure they can think through a problem and write sensible code to solve it. Do your best to come up with a good challenge. After the interview, re-evaluate how effective it was in measuring their skills.

Good coding challenges:

  • Are easy to set up and explain. The goal is to get them coding, problem-solving and talking through their solution. If you’re doing most of the talking, you need to simplify your challenge.
  • Lead to good conversations. These could be discussions about different possible solutions, architectural choices, or performance optimizations.
  • Relate to the work they would do if they get the job.
  • Ideally, start simple; you can always expand if the candidate completes the basics.

Focus on a real task, related to your team’s actual work, as best you can. I’ve even taken code snippets out of our repository, stripped them down, and asked the candidate to extend them to add some new functionality. Most candidates appreciate a real-world example. It gives them further insight into the work they could expect if hired.

Here is an example question I’ve given in the past:

“Our team needs to build a new back-end for storing user account information for people signing up on our platform. Tell me what information you would collect from each user and how you would store it in the database.”

Several questions come up during this exercise:

  • What data types do we store different fields as?
  • Do we store first and last names as separate columns?
  • What character length limits do we choose?
  • How do we validate fields?
  • Will this work for international users?

There are many design decisions required to implement the code. A data model and a 20-minute conversation give me valuable insight into their engineering skills.
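To make the exercise concrete, here is one possible shape of an answer, sketched as a knex migration. The table name, columns, and length limits are illustrative choices a candidate might defend, not a prescribed solution:

// One possible answer to the user-accounts exercise (illustrative only).
exports.up = (knex) =>
    knex.schema.createTable('users', (table) => {
        table.increments('id'); // auto-incrementing primary key
        // Separate name columns simplify sorting and personalized greetings,
        // though a single full_name column handles international names better.
        table.string('first_name', 100).notNullable();
        table.string('last_name', 100).notNullable();
        // 254 characters is the practical upper bound for an email address.
        table.string('email', 254).notNullable().unique();
        // Store only a password hash, never the plaintext password.
        table.string('password_hash', 255).notNullable();
        table.timestamps(true, true); // created_at / updated_at
    });

exports.down = (knex) => knex.schema.dropTable('users');

Each choice above (column lengths, separate name fields, the unique constraint) maps to one of the questions in the list, which is exactly the conversation the exercise is designed to start.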

Have a few interview questions prepared and tailored to each candidate’s skill set. I generally evaluate back-end programming skills, so I have SQL, Python, and some data modeling challenges. When reading their resume, I determine which is most appropriate based on their experience and skills.

Don’t be afraid to let them write code in a language they know better than you. This gives you an opportunity to ask detailed questions and evaluate their understanding of the language. Communication is an essential skill in any job, and this is an excellent way to evaluate that skill.

Be prepared to throw away your programming challenge after using it for a while. Experiment, try different questions and different approaches.

Soft Skills

Pay attention to soft skills too. Don’t just focus on problem-solving skills. Are they able to answer your questions sufficiently? Can they explain a complicated topic? Do they ask good follow-up questions? Do they have experience or skills they can teach you? The best teams are ones where every individual brings some expertise to the group.

Legal Stuff

Don’t ask questions about their personal life; these topics can get into a legal no-go zone. If they bring it up, it’s ok, but direct the conversation back to the purpose of the interview. For example, I have kids, and on a few occasions this has come up in an interview. Once, a candidate was late due to an issue with their child that morning. We exchanged a few words about the challenges of raising kids and having a career, standard small talk from two people with something in common. However, I then moved the discussion back to the interview. Again, they are your guest: be polite, be sincere, but also be professional and keep on task.

After the Interview

As soon as possible, write up your thoughts while the experience is still fresh. I find it difficult to be both an active listener and a good note taker, so I rarely take any notes during the interview. After the interview, I open up Vim (I said I was a back-end engineer) and do a complete brain dump. I don’t worry about spelling or complete sentences. I’m concerned with capturing my thoughts. Later that afternoon or the following day, I formalize these notes into Lever for feedback to the hiring manager.

Attend the interview debriefing with your team, if you have one. You will learn something from your peers: questions they asked that you didn’t think of, or observations they made that you didn’t notice. This meeting is a great opportunity to learn from your coworkers and refine your interviewing skills.

In Closing

Do your homework, prepare in advance and keep on task during the interview. Each interview is a learning experience, re-evaluate after each one and adjust your process. Also, remember to have fun with it and enjoy meeting new people!

I’d love to hear your experience with interviewing fellow engineers. How do you balance your time during the interview? What kind of programming challenges do you find most informative? Let me know in the comments below or reach out to me at @tophburns on Twitter.

Thanks to Nick Popoff for his interview training and to Sahar Bala for all her feedback on this post. And special thanks to Marcos Iglesias for all his work on our engineering blog.

Design System Wednesday: A Supportive Professional Community

Design systems produce a lot of value by providing an effective solution for both design and engineering. Yet, they take considerable time and work to set up and maintain. Often, only a few people get tasked with this mammoth undertaking, and knowing where to begin is hard.

Design System Wednesday is a monthly community event where we welcome anyone working on, or wanting to learn about, design systems. These events provide a much-needed place to show off your system or tooling, or to pose a burning question to the group. You get a room of incredible product designers, front-end engineers, and product managers whose insightful answers and battle stories directly apply to the work you’re doing.

Keep reading to learn about Design System Wednesdays, our design system community meetings that promote learning, cross-discipline partnership, and systems thinking.

Get input from other design system experts

As a design systems developer/designer, surrounding yourself with others facing the same challenge is incredibly beneficial. Most likely, you are one of a handful of designers and engineers dedicated to this vast undertaking. How daunting! Where do you begin? Have you found the most effective solution? How do you manage the balance between being too design- or engineering-centric? Design System Wednesday provides a space to bounce ideas off of others, ask for advice, or even crack some hilarious systems jokes!

We once had the pleasure of meeting a new design system lead whose company had charged her with starting its design system. She asked her design system questions and got advice from people at over 10 companies: questions on how to get buy-in, recommendations on tech stacks, and what design tools to use. What better way to learn than from peers working on similar things? I remember everyone’s willingness to answer her questions and help steer her in the right direction.

Grow and collaborate

I attended my very first Design System Wednesday on my second day at my new job. It was exciting meeting everyone and, at the same time, a little intimidating. Still, I remember people’s welcoming and open spirit. I now look forward to attending these every month. A different group of people joins us, and a different company graciously hosts us, every session. The open dialog, hospitality, and open day structure foster a space for growth and collaboration.

Become part of a community

As a front-end engineer, I seem to always be around other engineers. How refreshing to meet people from other roles and responsibilities! A diverse group of people from companies of all sizes and disciplines comprises the Design System Wednesday community. You can usually find product designers, front-end engineers, and product managers all sitting around the same table. I get to hear how they approach problems and how they solve them.

I even get to foster new friendships over silly easter eggs in their products that I didn’t know about. One Design System Wednesday, some Atlassian designers showed me Jira Board Jr., a Jira board for kids so they don’t miss out on the joy of building a Jira board (their April Fools’ joke)! I find it very refreshing to step out of my bubble and build connections with peers outside my company and discipline.

Design System Wednesday at Zendesk, Aug 2018

Design System Wednesday is a community event for the community, by the community. I love being part of this community and helping plan these events, the same way I love helping other design system-ers come together, collaborate, and inspire each other.

We enjoy community events here at Eventbrite, what about you? What are some ways you help your community come together and inspire each other? Drop us a comment below or ping me on Twitter @mbeguiluz.

Featured Image: Design System Wednesday at Zendesk – August 8, 2018

BriteBytes: Nam-Chi Van

An Eventbrite original series, BriteBytes features interviews with Eventbrite’s growing global engineering team, shining a light on the individuals whose jobs are to build the technology that powers live experiences.

Nam-Chi Van is a Senior Software Engineer who works out of Eventbrite’s San Francisco office. She has been a part of the Eventbrite team for 6 years, writing code while taking photos of skateboarders on the side. In this interview, she tells us about a critical point in her career and why she loves working at Eventbrite.

Delaine Wendling: What brought you to Eventbrite and engineering?

Nam-Chi Van: Well, I went to art school for web design and interactive media. My first job after school was with a web agency based out of San Diego. While I was there, a recruiter from Eventbrite reached out to me. They flew me to San Francisco to visit the office, and I was blown away. The agency in San Diego was business casual, and Eventbrite was a more casual environment where I felt like I could be myself. My first role at Eventbrite was as a content developer. I worked on building WordPress themes and hacking together landing pages like this one. Technically, I was on the engineering team, but I wasn’t doing anything super heavy.

At the time, I was also doing a lot of photography on the side. I would take time off and go to events like the X Games to photograph skateboarders. It didn’t take long for me to get burned out on this schedule so I sat down with my manager to figure out my life. I told him I didn’t love what I was doing at Eventbrite and was thinking about maybe pursuing photography full time. If I was going to stay at Eventbrite, I wanted to move into a more traditional engineering role so that I would be challenged. He helped me talk through my options and made an offer for me to move into a more challenging engineering role. I decided to take it and have been really happy with my decision. I still do photography on the side but not as intensely.

Samarria Brevard, Street League 2017

D: What has kept you at Eventbrite?

N: I love that I’m surrounded by a lot of smart and supportive people. When I first switched into a more traditional engineering role, I had a lot of impostor syndrome. My teammates were amazing though and never made me feel like I couldn’t do the work. They encouraged me and helped me learn the things I needed to learn. I love being in a supportive environment like that. Eventbrite offers a lot of opportunity for growth and working with new technologies, so I don’t feel like I’ll ever get bored. I also love working at a place that encourages me to be my authentic self.

D: What project has been your favorite at Eventbrite? What made it so great?

N: Eventbrite used to have some ugly landing pages, like the career listings page, about page, etc. During a hackathon at Eventbrite, a coworker and I decided to redesign all of those pages. I took a lot of photos to make these pages more welcoming and reflective of the Eventbrite culture. Many of these pages are still being used today, like the about page.

I enjoyed this project so much because it made a real impact on the company and was something I came up with on my own. I was also able to use my photography skills for the project, which was fun.

D: What is the most complex problem you have had to solve recently?

N: I guess I haven’t actually solved this problem, but tech debt is probably the most complex problem I’ve had to face. It’s something that’s always on my mind and can feel overwhelming. We are constantly trying to find a balance between writing code that is reusable and extensible and meeting deadlines. It’s a difficult balance to find and something I will continue to try to improve.

D: Do you have a role model? Who is it and why are they your role model?

N: Yeah, my mom. She is an amazing woman who taught me the importance of being myself and being independent. I wanted to skateboard when I was younger, but there weren’t a lot of girls doing that. My mom didn’t care and encouraged me to do it anyway. She said I could do anything I put my mind to.

D: What advice would you give a new female engineer starting?

N: Have confidence in yourself, don’t be afraid to fail, learn constantly, challenge yourself, and keep going. You’ve got this.

D: And, just because it’s fun: If you were a wrestler, what would be your theme song?

N: (laughs) Hmmm…probably some heavy metal Megadeth song. It would need to have something with a super heavy guitar riff.

Congratulations to Nam-Chi on her recent promotion to Senior Software Engineer! We are thankful to have her on the team. How has your experience been in the engineering world? Who inspires you? Share your comments with us; we would love to get to know you.

Why Would Webpack Stop Re-compiling? (The Quest for Micro-Apps)

Eventbrite is on a quest to convert our “monolith” React application, with 30+ entry points, into individual “micro-apps” that can be developed and deployed individually. We’re documenting this process in a series of posts entitled The Quest for Micro-Apps. You can read the full Introduction to our journey as well as Part 1 – Single App Mode outlining our first steps in improving our development environment.

Here in Part 2, we’ll take a quick detour to a project that occupied our time after Single App Mode (SAM), but before we continued towards separating our apps. We were experiencing an issue where Webpack would mysteriously stop re-compiling and provide no error messaging. We narrowed it down to a memory leak in our Docker environment and discovered a bug in the implementation for cache invalidation within our React server-side rendering system. Interest piqued? Read on for the details on how we discovered and plugged the memory leak!

A little background on our frontend infrastructure

Before embarking on our quest for “micro-apps,” we first had to migrate our React apps to Webpack. Our React applications originally ran on requirejs because that’s what our Backbone / Marionette code used (and still does to this day). To limit the scope of the initial switch to React from Backbone, we ran React on the existing infrastructure. However, we quickly hit the limits of what requirejs could do with modern libraries and decided to migrate all of our React apps over to Webpack. That migration deserves a whole post in itself.

During our months-long migration in 2017 (feature development never stopped by the way), the Frontend Platform team started hearing sporadic reports about Webpack “stopping.” With no obvious reproduction steps, Webpack would stop re-compiling code changes. In the beginning, we were too focused on the Webpack migration to investigate the problem deeply. However, we did find that turning off editor autosave seemed to decrease the occurrences dramatically. Problem successfully punted.

Also, the migration to Webpack allowed us to change our React server-side rendering solution (we call it react-render-server, or RRS) in our development environment. With requirejs, react-render-server used Babel to transpile modules on-demand with babel-register:

if (argv.transpile) {
  // When the `transpile` flag is turned on, all future modules
  // imported (using `require`) will get transpiled. This is 
  // particularly important for the React components written in JSX.
  require('babel-core/register')({
      stage: 0
  });

  reactLogger('Using Babel transpilation');
}

This code is how we were able to import React files to render components. It was a bit slow but effective. However, because Node caches all of its imports, we needed to invalidate the cache each time we made changes to the React app source code. We accomplished this by using supervisor to restart the server every time a source file changed:

#!/usr/bin/env bash

./node_modules/.bin/supervisor \
  --watch /path/to/components \
  --extensions js \
  --poll-interval 5000 \
  -- ./node_modules/react-render-server/server.js \
    --port 8991 \
    --address 0.0.0.0 \
    --verbose \
    --transpile \
    --gettext-locale-path /srv/translations/core \
    --gettext-catalog-domain djangojs

This addition, unfortunately, resulted in a poor developer experience because it took several seconds for the server to restart. During that time, our Django backend was unable to reach RRS, and the development site would be unavailable.

With the switch, Webpack was already creating fully-transpiled bundles for the browser to consume, so we had it create Node bundles as well. Then, react-render-server no longer needed to transpile on-demand.

Around the same time, the helper react-render library we were using for server-rendering also provided a new --no-cache option, which solved our source-code caching problem. We no longer needed to restart RRS! It seemed like all of our problems were solved, but little did we know that it created one massive problem for us.
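For context on what that cache is: Node stores every require()d module in require.cache, so re-importing a changed file returns the stale, already-evaluated version. Here is a minimal illustration of the mechanism (not RRS’s actual implementation, and ./Component is a hypothetical module) showing how a stale entry has to be evicted by hand:

// Node caches modules: repeated require() calls return the same object.
const componentPath = require.resolve('./Component');
let Component = require(componentPath);

// After the file changes on disk, the cached copy is stale. Deleting the
// cache entry forces the next require() to re-read and re-evaluate the file.
delete require.cache[componentPath];
Component = require(componentPath);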

The Webpack stopping problem

In between the Webpack migration and the Single Application Mode (SAM) projects, more and more Britelings were having Webpack issues; their Webpack re-compiling would stop. We crossed our fingers and hoped that SAM would fix it. Our theory was that because we were running 30+ entry points in Webpack before SAM, reducing that to only one or two would reduce the “load” on Webpack dramatically.

Unfortunately, we were not able to kill two birds with one stone. SAM accomplished its goals, including reducing memory usage, but it didn’t alleviate the Webpack stoppages. So instead of continuing to the next phase of our quest, we took a detour to investigate and fix the Webpack stoppage issue once and for all. Any benefits we added in the next project would have been undermined by the stoppages, and since Eventbrite developers are our users, we shouldn’t build new features before fixing major bugs.

The Webpack stoppage investigations

We had no idea what was causing the issue, so we tried many different approaches to discover the root problem. We were still running on Webpack 3 (v3.10.0 specifically), so why not see if Webpack 4 had some magic code to fix our problem? Unfortunately, Webpack 4 crashed and wouldn’t even compile. We chose not to investigate further in that direction because we were already dealing with one big problem; our team will return to Webpack 4 later.

Sanity check

First, our DevTools team joined in on the investigations because they are responsible for maintaining our Docker infrastructure. We observed that when Webpack stopped re-compiling, the source file changes were still reflected within the Docker container, so we knew it wasn’t a Docker file-syncing issue.

Reliably reproducing the problem

Next, we needed a way to reproduce the Webpack stoppage quickly and reliably. Because editor autosave seemed to trigger the stoppage, we created a “rapid file saver” script. It updated dummy files by changing imported functions at random intervals between 200 and 300 milliseconds, updating each file before Webpack finished re-compiling, just like editor autosave. This enabled us to reproduce the issue within five minutes. Running the script essentially became a stress test for Webpack and the rest of our system. We didn’t have a fix yet, but at least we could verify one when we found it!

const fs = require('fs');
const path = require('path');

const TEMP_FILE_PATH = path.resolve(__dirname, '../../src/playground/tempFile.js');

// Recommendation: do not set lower than 200ms; faster file changes
// will not allow Webpack to finish compiling between writes
const REWRITE_TIMEOUT_MIN = 200;
const REWRITE_TIMEOUT_MAX = 300;

const getRandomInRange = (min, max) => (Math.random() * (max - min) + min);
const getTimeout = () => getRandomInRange(REWRITE_TIMEOUT_MIN, REWRITE_TIMEOUT_MAX);

const FILE_VALUES = [
    {name: 'add', content: 'export default (a, b) => (a + b);'},
    {name: 'subtract', content: 'export default (a, b) => (a - b);'},
    {name: 'divide', content: 'export default (a, b) => (a / b);'},
    {name: 'multiply', content: 'export default (a, b) => (a * b);'},
];

// Cycle through the contents so every write is a real change
let currentValue = 1;
const getValue = () => {
    const value = FILE_VALUES[currentValue];

    if (currentValue === FILE_VALUES.length - 1) {
        currentValue = 0;
    } else {
        currentValue++;
    }

    return value;
};

const writeToFile = () => {
    const {name, content} = getValue();

    console.log(`${new Date().toISOString()} -- WRITING (${name}) --`);
    fs.writeFileSync(TEMP_FILE_PATH, content);
    setTimeout(writeToFile, getTimeout());
};

writeToFile();

With the “rapid file saver” at our disposal, and with a stroke of serendipity, we noticed the Docker container’s memory steadily increasing while the files were rapidly changing. We had thought the Single Application Mode project solved our Docker memory issues, but this observation gave us a new theory: Webpack stopped re-compiling when the Docker container ran out of memory.

Webpack code spelunking

The next question we aimed to answer was why Webpack 3 wasn’t throwing any errors when it stopped re-compiling. It was failing silently, leaving developers to wonder why their app wasn’t updating. We began “code spelunking” into Webpack 3 to investigate further.

We found out that Webpack 3 uses chokidar through a helper library called watchpack (v1.4.0) to watch files. We added additional console.log debug statements to all of the event handlers within (transpiled) node_modules, and noticed that when chokidar stopped firing its change event handler, Webpack also stopped re-compiling. But why weren’t there any errors? It turns out that the underlying watcher didn’t pass along chokidar’s error events, so Webpack wasn’t able to log anything when chokidar stopped watching.

The latest version of Webpack 4 still uses watchpack, which still doesn’t pass along chokidar’s error events, so it’s likely that Webpack 4 would suffer from the same silent failures. Sounds like an opportunity for a pull request!

For those wanting to nerd out, the full rabbit hole runs from Webpack, through watchpack, down to chokidar and its underlying file watchers.

This whole process was an interesting discovery and a super fun exercise, but it still wasn’t the solution to the problem. What was causing the memory leak in the first place? Was Webpack even to blame, or was it just a downstream consequence?

Aha!

We began looking into our react-render-server and the --no-cache implementation within react-render, the dependency that renders the components server-side. We discovered that react-render uses decache for its --no-cache implementation, clearing the require cache on every request for our app bundles (and their node module dependencies). This successfully allowed new bundles with the same path to be required; however, decache was not enabling garbage collection of the references to the bundles’ raw source text.

Whether or not the source code changed, each server-side rendering request left more orphaned app bundle text in memory. With app bundle sizes in the megabytes and our Docker containers already close to maxing out memory, it was very easy for the React Docker container to run out of memory completely.

We found the memory leak!

Solution

We needed a way to clear the cache and also reliably free the memory. We considered trying to make decache more robust, but messing around with the require cache is hairy and unsupported.

So we returned to our original solution of running react-render-server (RRS) with supervisor, but this time we were smarter about when to restart the server. We only need to restart when the developer changes source files after having already rendered the app; that’s when the cache holds stale code. If no app has been rendered, nothing has been cached, so there’s no reason to restart on every source file change. That constant restarting is what caused the poor developer experience before: the server was unresponsive because it was always restarting.

Now, in the Docker container for RRS, when in “dynamic mode”, we only restart the server if a source file changes and the developer has a previous version of the app bundle cached (by rendering the component prior). This rule is a bit more sophisticated than what supervisor could handle on its own, so we had to roll our own logic around supervisor. Here’s some code:

// misc setup stuff (paths, imports, etc.)

// Initialize the request info file with only a start timestamp;
// RRS updates it with a lastRequest timestamp every time it renders
const createRequestInfoFile = () => (
    writeFileSync(
        RRS_REQUEST_INFO_PATH,
        JSON.stringify({start: new Date()}),
    )
);

// Touching this file is what triggers supervisor to restart RRS
const touchRestartFile = () => writeFileSync(RESTART_FILE_PATH, new Date());

// A restart is only needed when a bundle has been rendered (and thus
// cached) since the server last started
const needsToRestartRRS = async () => {
    const rrsRequestInfo = await safeReadJSONFile(RRS_REQUEST_INFO_PATH);

    if (!rrsRequestInfo.lastRequest) {
        return false;
    }

    const timeDelta = Date.parse(rrsRequestInfo.lastRequest) - Date.parse(rrsRequestInfo.start);

    return Number.isNaN(timeDelta) || timeDelta > 0;
};

// Watch the source files with chokidar; on any change, trigger a
// restart only if a stale bundle is sitting in the require cache
const watchSourceFiles = () => {
    let isReady = false;

    watch(getFoldersToWatch())
        // ignore the flurry of events from chokidar's initial scan
        .on('ready', () => (isReady = true))
        .on('all', async () => {
            if (isReady && await needsToRestartRRS()) {
                touchRestartFile();
                createRequestInfoFile();
            }
        });
};

const isDynamicMode = shouldServeDynamic();
const supervisorArgs = [
    '--timestamp',
    '--extensions', extname(RESTART_FILE_PATH).slice(1),

    ...(isDynamicMode ? ['--watch', RESTART_FILE_PATH] : ['--ignore', '.']),
];
const rrsArgs = [
    '--port', '8991',
    '--address', '0.0.0.0',
    '--verbose',
    '--request-info-path', RRS_REQUEST_INFO_PATH,
];

if (isDynamicMode) {
    createRequestInfoFile();
    touchRestartFile();
    watchSourceFiles();
}

spawn(
    SUPERVISOR_PATH,
    [...supervisorArgs, '--', RRS_PATH, ...rrsArgs],
    {
        // make the spawned process run as if it's in the main process
        stdio: 'inherit',
        shell: true,
    },
);

In short, we:

  1. Create __request.json and initialize it with a start timestamp.
  2. Pass the __request.json file path to RRS so that it can update the lastRequest timestamp every time an app bundle is rendered.
  3. Use chokidar directly to watch the source files.
  4. When the source files change, check whether the lastRequest timestamp is after the start timestamp, and touch a __restart.watch file if so. A later lastRequest means an app bundle has been rendered (and therefore cached) since the server last restarted.
  5. Set up supervisor to watch only the __restart.watch file, so the server restarts only when all of our conditions are met.
  6. Recreate and reinitialize the __request.json file when the server restarts, and start the process again.

All of our server-side rendering happens through our Django backend, which is where we’d been seeing timeout errors whenever react-render-server was unreachable. So, in development only, we also added five retry attempts, separated by 250 milliseconds, for requests that fail because Django can’t connect to react-render-server.

The results are in

The “rapid file saver” that helped us reproduce the problem also let us verify the fixes. We ran it for hours, and Webpack kept humming along without a hiccup. We monitored Docker’s memory over time as we reloaded pages and re-rendered apps, and the memory remained constant, as expected. The memory issues were gone!

Even though we were once again restarting the server on file changes, the react-render-server connection issues were gone. There were still corner cases where the site would automatically refresh and fail to connect, but those were few and far between.

Coming up next

Now that we’ve finished our detour to fix a major bug, we’ll return to the next milestone on the way to apps that can be developed and deployed independently.

The next step in our goal towards “micro-apps” is to give each application autonomy and control with its own package.json and dependencies. The benefit is that upgrading a dependency with a breaking change doesn’t require fixing all 30+ apps at once; now each app can move at its own pace.

We need to solve two main technical challenges with this new setup:

  • how to prevent each app from having to manage its infrastructure, and
  • what to do with the massive, unversioned, shared common/ folder that all apps use.

We’re actively working on this project right now, so we’ll share how it turns out when we’re finished. In the meantime, we’d love to hear if you’ve had any similar challenges and how you tackled the problem. Let us know in the comments or ping me directly on Twitter at @benmvp.

Photo by Sasikan Ulevik on Unsplash

How To Move From Customer Support to Engineering in 5 Steps

When I explain that I moved from customer support to a full-time software engineering role at Eventbrite, I’m often met with dubious looks: “Wait, what? How is that even possible? How did you do that?” They are even more surprised to learn that I didn’t go back to school or even take a coding boot camp.

With the right strategy, you don’t need a technical degree to become a software engineer. Read on to learn about several steps you can take to move from a customer-facing role at your company into engineering.

A pipeline to Engineering within Eventbrite

I’m not the first to move from customer support into software engineering. At Eventbrite alone, eight people have moved from our customer support team into technical roles in engineering. This pipeline has also benefited our dev teams in many ways. For instance, we’ve seen an increase in customer empathy when a former customer support representative joins, which usually helps to boost quality in our product development. In fact, roles in quality assurance (QA) are an especially good fit for those coming from customer support, and they can be a good stepping stone for those looking to later move into full-time software engineering roles. (For more info on Eventbrite’s QA philosophy, check out Andrew’s post on rethinking quality.)

As a high performer in customer support, you too can move from a customer-facing role at your company into software engineering. However, you won’t get there by continuing to do only your assigned role. You need to take action to put yourself in a position to succeed.

A step by step approach

Imagine this conversation: an engineering manager is chatting with her team about a new role she’s opening up for a QA engineer to join. What if at the moment she announced this, her team immediately piped up with “We should hire {insert name} for that role, {he/she} would be fantastic at that!”? How do you guarantee that your name is the one brought up?

For me, the five steps outlined below were crucial to being recognized when a hiring opportunity for a QA engineer position came up. I was later able to transition again, to a full-time software engineering position, because I kept up these practices of putting myself in a position to succeed.

Step 1: Be a top performer in your day job

Before everything else, dedicate yourself to excellence in your core role. You want to be recognized as a highly qualified individual, so maintain a high customer satisfaction rating while still answering a high volume of customer queries. Your company will likely be more willing to provide you with new opportunities in engineering if you are a top performer in your current role. Top performers are a lower risk for lateral moves, and no company wants to lose high-potential talent to another company.

Step 2: Build relationships in engineering

You’ll need to get friendly with engineering so that your name is top of mind when new opportunities open up. Grab a 1:1 lunch with individual developers and ask them about their path to software engineering. I talked to a mix of engineers: QA engineers, senior software engineers, and junior software engineers who had gone through coding boot camps.

Gain some name recognition by leading a hackathon team and presenting your team’s work to the company. You don’t need engineering experience to do this. In fact, I led a project with a cross-functional team of support members, engineers, and marketers while having no technical expertise at all myself. It was a small project, but it allowed me to work with engineers and to show the company my interest in engineering projects. Plus, plenty of engineering leaders watching the project demos recognized my name afterward.

Step 3: Leverage your product expertise

Your product and customer expertise are invaluable to product and engineering. Leverage this knowledge by sharing it with your engineering teams and advocating for your customers. Reach out to engineers to ask for help when a customer encounters a bug. Alternatively, tell a product manager about your ideas for small product improvements that would enhance the customer experience.

The first time I did this was intimidating, but I was surprised to find that the engineers on the other side were more than happy to help. By doing this, you’ll establish yourself as a trusted customer expert. Engineers and product managers will begin to turn to you when they have questions or ideas for how to build the product, and later they will want to have your expertise on their teams full-time.

Step 4: Invest in your technical learning

Prepare yourself for a transition to engineering by learning the basics of whatever programming language your company uses. There are abundant resources for learning new technical skills; I started with Python, JavaScript, and SQL by taking free Codecademy classes online. If your company has a good learning culture (check out Randall’s post on supporting junior engineers), ask to attend peer-led training or to participate in a mentorship program to supplement your learning. Show everyone around you that you invest in your learning by spending time outside of work developing these new skills; even a consistent 30 minutes per day can be very effective. By demonstrating a growth mindset and dedicating time to improving yourself, you will also build trust with engineering leaders, who will be more willing to look past your lack of formal technical education.

Step 5: Advocate for yourself

Carefully look for situations that might help you, today or later on. Even bite-sized opportunities can be beneficial in the long run, but you must advocate for yourself to take them on and reap the rewards.

While I was still in customer support, I looked for an opportunity to get involved with our Support Triage team, whose responsibility is to investigate incoming bug reports and send them to engineering. It wasn’t an official position, but I saw that they were overwhelmed with their workload, and I volunteered my help. I contributed by investigating bugs, but I also got to learn about our bug process, try out new tools, and talk to engineers. Through this work, I built a reputation for submitting well-investigated and detailed bug tickets, which helped me stand out when a QA position later opened up.

Another example of advocating for myself happened after I was in QA for a few months. I asked for help from my manager to learn how to fix small bugs I reported, resulting in dedicated pair programming time. After that, I asked for small feature projects that I could pair on with my team’s developers to continue building programming skills. Some time later, I asked my leaders in engineering to move me to a full-time software engineering position. They helped me make the transition with little hesitation. Even though I have no formal education in computer science, I had proven to them that I was invested in my learning and capable of being a software engineer.

Final thoughts

My collegiate track and field coach’s favorite piece of advice to me was to “Put yourself in a position to succeed.”

In the running world, this meant pushing hard during practice sessions to get a little bit faster, stronger, and better every day. That way, you set yourself up to be successful on race day, when it matters most, because you’ve already put in miles of effort and hours of mental practice to support a personal best at the finish line. The same strategy applies to lateral career moves. Take the time now to prepare and put yourself in a position to succeed so that you are ready for new opportunities when they arise.

I hope that the steps above will help you get closer to achieving your career dreams of moving to software engineering from a customer support position. Of course, beyond these five steps, there are many other details to discuss such as communication strategies, technical learning tips, and how to create a support system.

If you have any questions or want to chat more about how I made this career move, feel free to reach out. Leave a comment below, or you can also reach me at @snazbala on Twitter or through my website at saharbala.com.

P.S.: Engineering leaders, keep an eye out for a follow-up post. I’ll cover why you should hire QA engineers from customer support and how you can create a supportive culture for these lateral career moves.

Boosting Big Data workloads with Presto Auto Scaling

The Data Engineering team at Eventbrite recently completed several significant improvements to our data ecosystem. In particular, we focused on upgrading our data warehouse infrastructure and improving the tools used to drive Eventbrite’s data-driven analytics.

Here are a few highlights:

  • Transitioned to a new Hadoop cluster. The result is a more reliable, secure, and performant data warehouse environment.
  • Upgraded to the latest version of Tableau and migrated our Tableau servers to the same AWS infrastructure as Presto. We also configured Tableau to connect via its own dedicated Presto cluster. The data transfer rates, especially for Tableau extracts, are 10x faster!
  • Upgraded Presto and fine-tuned its resource allocation (via AWS Auto Scaling) to make the environment optimal for Eventbrite’s analysts. Presto is now faster and more stable; our daily Tableau dashboards and our ad-hoc SQL queries are running 2 to 4 times faster.

This post focuses on how Eventbrite leverages AWS Auto Scaling for Presto using Auto Scaling Groups, Scaling Policies, and Launch Configurations. This update has allowed us to meet the data exploration needs of our Engineers, Analysts, and Data Scientists by providing better throughput at a fraction of the cost.

High level overview

Let’s start with a high-level view of our data warehouse environment running on AWS.

Auto Scale Overview

Analytics tools: Presto, Superset and Tableau

We’re using Presto to access data in our data warehouse. Presto is a tool designed to query vast amounts of data using distributed queries. It supports the ANSI SQL standard, including complex queries, aggregations, and joins. The Presto team designed it as an alternative to tools that query HDFS using pipelines of MapReduce jobs. It connects to a Hive Metastore allowing users to share the same data with Hive, Spark, and other Hadoop ecosystem tools.

We’re also using Apache Superset, packaged alongside Presto. Superset is a data exploration web application that enables users to process data in a variety of ways, including writing SQL queries, creating new tables, and downloading data in CSV format. Among other tools, we rely heavily on Superset’s SQL Lab IDE to explore and preview tables in Presto, compose SQL queries, and save output files as CSV.

We’re exploring the use of Superset for dashboard prototyping although currently the majority of our data visualization requirements are being met by Tableau. We use Tableau to represent Eventbrite’s data in dashboards that are easily digestible by the business.

The advantage of Superset is that it’s open-source and cost-effective, although we have performance concerns due to its lack of caching, and it’s missing some features we would like to see (triggers on charts, tooltips, support for non-SQL functions, scheduling). We plan to continue to leverage Tableau as our data visualization tool, but we also plan to expand Superset usage in the future.

Both Tableau and Superset connect to Presto, which retrieves data from Hive tables stored on S3 and HDFS, commonly in Parquet format.

Auto scaling overview

Amazon EC2 Auto Scaling enables us to follow the demand curve for our applications, reducing the need to manually provision Amazon EC2 capacity in advance. For example, we can use target tracking scaling policies to select a load metric for our application, such as CPU utilization, or scale on custom metrics such as Presto query counts.

It’s critical to understand the terminology for AWS Auto Scaling: “Launch Configuration,” “Auto Scaling Group,” and “Auto Scaling Policy” are the vital components shown below. Here is a diagram of the relationship between the main components of AWS Auto Scaling. As an old-school data modeler, I tend to think in terms of entities and relationships via the traditional ERD model 😀

Auto Scaling ERD

Presto auto scaling

We’re using AWS Auto Scaling for our Presto “spot” instances based on (I) CPU usage and (II) number of queries (only used for scaledown). Here is an overview of our EC2 auto-scaling setup for Presto.

Auto Scaling with Presto

Here are some sample policies:

Policy type: Simple scaling (I)

Execute policy when: CPU utilization >= 50 for 60 seconds.

Take the action: Add 10 instances (provided by EC2).

Policy type: Simple scaling (II)

Execute policy when: running queries <= 0 for 2 consecutive periods of 300 seconds.

Take the action: Set to 0 instances.

Note: Eventbrite’s Data Engineering team developed a custom Python script to handshake with CloudWatch during scaledown. It handles the race condition where another query comes in while the scaledown is in progress. We’ve also added “termination protection,” which leverages this Python script (running as a daemon) on each Presto worker node: if the script detects a query currently running on the node, the node won’t scale down.

Tableau scheduled actions

We’re using “Scheduled Scaling” for our Tableau Presto instances, as well as for the “base” instances used by Presto. We scale the instances up in the morning and down at night, matching predictable workloads such as Tableau’s.

“Scheduled Scaling” requires configuration of scheduled actions, which tell Amazon EC2 Auto Scaling to act at a specific time. For each scheduled action, we specify the start time and the new minimum, maximum, and desired size of the group. Here is a sample setup for scheduled actions:

Auto scale actions

Cloudwatch

We’ve enabled Auto Scaling Group metrics so we can identify capacity changes via CloudWatch alarms. When a threshold is breached, an alarm causes the Auto Scaling group to execute its scaling policy. In some cases we use the built-in EC2 metrics, and in others we push custom metrics to CloudWatch through Python scripts.

Sample CloudWatch alarms

Multiple Presto clusters

We’ve separated Tableau connections from ad-hoc Presto connections by giving each its own Presto cluster. This isolation keeps ad-hoc query usage from competing with Tableau usage.

EMR

Our Presto workers read data that is written by our persistent EMR clusters.  Our ingestion and ETL jobs run on daily and hourly scheduled EMR clusters with access to Spark, Hive and Sqoop. Using EMR allows us to decouple storage from computation by using a combination of S3 and a custom HDFS cluster. The key is we only pay for computation when we use it!

We have multiple EMR clusters that write the data to Hive tables backed by S3 and HDFS. We launch EMR clusters to run our ETL processes that load our data warehouse tables daily/hourly. We don’t currently tie our EMR clusters to auto-scaling.

By default, EMR stores Hive Metastore information in a MySQL database on the master node. It is the central repository of Apache Hive metadata and includes information such as schema structure, location, and partitions. When a cluster terminates, we lose the local data because the node file systems use ephemeral storage. We need the Metastore to persist, so we’ve created an external Metastore that exists outside the cluster.

We’re not using the AWS Glue Data Catalog. The Data Engineering team at Eventbrite is happy managing our Hive Metastore on Amazon Aurora; if something breaks, as has happened in the past with Presto race conditions when writing to the Hive Metastore, we’re comfortable fixing it ourselves.

The Data Engineering team created a persistent single-node EMR “cluster” used by Presto to access Hive. Presto is configured to read from this cluster to access the Hive Metastore, and the Presto workers communicate with it to learn where the data lives, how it’s partitioned, and how tables are structured.

The end

In summary, we’ve focused on upgrading our data warehouse infrastructure and improving the tools used to drive Eventbrite’s data-driven analytics. AWS Auto Scaling has allowed us to improve efficiency for our analysts while saving on cost. Benefits include:

Decreased Costs

AWS Auto Scaling allows us to only pay for the resources we need. When demand drops, AWS Auto Scaling removes any excess resource capacity, so we avoid overspending.

Improved Elasticity

AWS Auto Scaling allows us to dynamically increase and decrease capacity as needed. We’ve also eliminated the productivity that was lost to queries failing at non-trivial rates due to capacity issues.

Improved Monitoring

We use metrics in Amazon CloudWatch to verify that our system is performing as expected. We also send metrics to CloudWatch that can be used to trigger AWS Auto Scaling policies we use to manage capacity.

All comments are welcome, or you can message me at ed@eventbrite.com. Thanks to Eventbrite’s Data Engineering crew (Brandon Hamric, Alex Meyer, Beck Cronin-Dixon, Gray Pickney and Paul Edwards) for executing on the plan to upgrade Eventbrite’s data ecosystem. Special thanks to Rainu Ittycheriah, Jasper Groot, and Jeremy Bakker for contributing/reviewing this blog post.

You can learn more about Eventbrite’s data infrastructure by checking out my previous post, Looking under the hood of the Eventbrite data pipeline.