Automated Cross-Browser Testing for WebGL — It’s Not Going to Happen

Apologies to the folks who found this post while searching for “automated WebGL testing,” “how to write cross-browser WebGL tests,” or similar. I’ve been there, and it is not my favorite part of the job. Sadly I do not know a magic recipe for writing cross-browser acceptance tests for web apps that integrate WebGL canvas interactions as part of a larger user flow. This post offers a look into how the Reserved squad at Eventbrite uses Rainforest QA to test complex WebGL flows.

I’m a frontend software engineer on the Reserved squad, which recently (at the time of writing) launched an end-to-end experience for reserving seats within Eventbrite’s embedded checkout flow. While we were developing this feature, we ran into a roadblock: how could we write reliable acceptance tests for our WebGL-dependent flows? Furthermore, how could we reliably test our user flows without sinking hundreds of additional engineering hours into coercing Selenium to click on the precise canvas coordinates necessary to reserve a seat? We decided to try testing some of our user flows with a crowdsourced quality assurance (QA) platform called Rainforest QA, and have been quite happy to ship the results.

WebGL: What it’s good at, and one unfortunate consequence

WebGL is useful for rendering complex 2D and 3D graphics in the client’s web browser. It’s natively supported by all major browsers and, under the hood, interfaces with the OpenGL API to render content in the canvas element. Because it allows code to run on the client’s GPU, it delivers significant performance benefits when you need to render and listen for actions on hundreds or thousands of elements.

My squad at Eventbrite uses WebGL (with help from Three.js, which you can learn more about in an earlier blog post) to render customizable venue maps that allow organizers to determine seat selling order. Once the organizer publishes the event, we allow attendees to choose the location of their seat on the rendered venue map. Because WebGL draws the venue maps in the canvas element rather than needlessly generating DOM elements for every seat, we can provide a relatively performant experience, even for maps with tens of thousands of seats. The only major drawback is that there is no DOM element to target in our acceptance tests when we want to test what happens when a user clicks on a seat.

The code to render a seat map using Three.js looks roughly like this:

// Initialize scene and camera values based on the client browser width
const {scene, camera} = getSceneAndCamera();

// Render into the existing canvas element
const element = document.getElementById('canvas');
const renderer = new THREE.WebGLRenderer({canvas: element});

// Add objects like seats, stage, etc. to the scene, then render it
addObjectsToScene(scene);
renderer.render(scene, camera);

This code renders content in the canvas element:

But when we inspect the generated markup, this is all that we see:

<canvas width="719" height="656"></canvas>

Because the canvas element does not contain targetable DOM elements, simulating a seat click using WebDriver or other test scripting frameworks requires specifying exact coordinates within the canvas element.

How did Rainforest solve our testing problem?

For several months, my squad had been working in a green pasture of unreleased code as we made steady progress on new pick-a-seat features. Throughout the development process, we maintained test coverage with unit tests, integration tests, and page-level JS functional tests using enzyme and fetch-mock. However, our test coverage contained a glaring hole: we had not yet written tests that fully verified our user stories.

Acceptance tests are black-box tests that formally describe a user story and that we run at the system level. An acceptance test script might load a URL in a virtual machine (VM), automate some user actions, and confirm that the user can complete a flow (such as checkout) as expected. Eventbrite engineers rely on acceptance tests to ensure that our user interfaces don’t break when squads across the organization push code to our shared, continuously deployed repositories. Most acceptance tests at Eventbrite are written using Selenium WebDriver and often look something like this:

    def test_checkout_widget_free_event(self):
        """Verify it is possible to purchase a free ticket."""
        
        # Go to the test page
        self.checkout_widget.go_to_widget_test_page()

        # Select a ticket and click the checkout button
        self.checkout_widget.select_ticket_quantity(free_ticket.id, 1)
        self.checkout_widget.click_checkout_button()

        # Verify the purchase confirmation page is displayed
        self.checkout_widget.verify_purchase_confirmation_page_rendered()

But when targeting a canvas element, clicking on a seat looks a bit more like this:

    # Move the cursor to the seat's exact pixel coordinates and click
    action = ActionChains(webdriver_instance)
    action.move_by_offset(seat_px, seat_py)
    action.click()
    action.perform()

In other words, we need to know the exact x and y coordinates of the seat within the canvas element. Even after the chore of automating clicks on precise coordinates within the canvas, we knew that minor style changes might require us to revisit each test and hunt down updated coordinates.

As the projected release date loomed near, we considered our options and determined that it would require several dedicated sprints to write the tests needed to thoroughly cover all of our new features. What if, instead of wrangling data and coordinates, we could write out test plans that could be quickly verified by human QA testers?

Enter Rainforest! Rainforest is a crowdsourced QA solution that puts our flow in front of real users. Because testers access sessions through a VM, we can specify which browsers they need to test, and they can run the tests against our staging environment. The Rainforest app runs the test suite on a customizable schedule, and the entire test run is parallelized and completed in less than 30 minutes. We wrote out all of our as-yet-untested user story test cases (in plain English) and got the system up and running.

Our Rainforest tests look like this:

We write each step of the test as a direction, followed by a yes-or-no question for the tester to answer. During a testing session, the tester follows the instructions, such as: “Click ‘Buy on Map’ located on the right-hand side.” Next, they mark the step as passed if the click caused the rendered map to zoom to the two highlighted seats.

Our key to Rainforest success: one-step event creation

Once we decided to proceed with this approach, our squad invested some time into developing an API that would allow us to automate a critical step of this workflow. When Rainforest testers log into their VMs, we provide them a URL that will, upon load, create a new QA user account with an event that is in the exact state needed to test the features covered by the test. A tester loading this URL is analogous to an acceptance test run instantiating the factory classes that generate test data for our WebDriver tests.

The endpoint accepts URL parameters that define relevant features of the event:

/testing/create_event/?redirect=checkout&map_size=medium&num_ticket_types=4

Loading this URL creates a new QA user with restricted permissions, builds an event with a medium-sized seat map and four ticket types (authored by the new user), and then redirects to the embedded checkout test URL for the given event.
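
For illustration, here is a minimal sketch of how an endpoint like this could be put together as a Django view. The helper functions (create_qa_user, create_seated_event) and the redirect path are hypothetical stand-ins for our internal tooling, not the actual implementation.

    # Hypothetical sketch of a one-step event creation endpoint.
    # create_qa_user() and create_seated_event() are illustrative placeholders.
    from django.http import HttpResponseRedirect

    def create_event_for_testing(request):
        # Read the desired event configuration from the query string
        map_size = request.GET.get('map_size', 'medium')
        num_ticket_types = int(request.GET.get('num_ticket_types', 1))
        redirect_target = request.GET.get('redirect', 'checkout')

        # Create a throwaway QA user with restricted permissions, then build
        # a published event (seat map included) owned by that user
        qa_user = create_qa_user()
        event = create_seated_event(
            organizer=qa_user,
            map_size=map_size,
            num_ticket_types=num_ticket_types,
        )

        # Drop the tester directly into the flow under test
        return HttpResponseRedirect(f'/{redirect_target}/test/{event.id}/')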

Without this tool, Rainforest testing would require dozens of clicks and page refreshes from a manual tester to create an event, design a venue map, publish the event, and finally reach the checkout flow. Eventbrite engineers have already covered all of these actions with automated acceptance tests elsewhere; when we are testing the seat reservation flow, we want to focus on precisely that. One-step event creation gets testers into the correct state to access our flow with a single page load.

Additionally, because we have configured Rainforest to run against our staging environment, Rainforest QA testers catch bugs for us before they are released. While unit and integration tests give us confidence that our code works at a more granular level, Rainforest has given us an additional layer of security, assuring that the features we already built are still working so that we can move on to the next challenge.

Universal takeaways

Yes, Rainforest does cost money, and I’m not here to tell you how your company should spend its money. (If you’re curious about Rainforest, you can always request a demo). It’s also not the only solution in this space. Rainforest works very well for us, but a related platform such as Testlio, GlobalAppTesting, TestingBot, or UseTrace may be a better fit for your team.

Here are some takeaway learnings from our case study that might still come in handy:

  • Cross-browser testing pays off. If your current acceptance suite only runs tests against one browser, it might be worth re-evaluating. (If you’re doing your own cross-browser QA, Browserstack is indispensable.)
  • When you automate testing user stories as part of your continuous integration (CI) flow, you ensure that your system reliably meets product requirements.
  • Don’t stop writing automated tests, but do consider how much time you are spending writing and maintaining tests that could be more reliably tested by a human QA tester.
  • You can get the most out of your testing and QA by automating critical steps of the process.

For my squad, Rainforest has been an excellent solution and has helped us catch many browser-specific and complex multi-page bugs before they made their way to the release branch. While we are still working on improving its visibility in our CI flow so that newly introduced bugs are surfaced earlier in the development cycle, automated test runs assure us that our features remain stable across all major browsers. As a developer, I love that I get to spend my time building new features rather than writing and maintaining fussy WebDriver tests.

Have you found another way to save time writing acceptance tests for complex WebGL flows? Do you have questions about our Rainforest experience that I didn’t cover? Do you want to have a conversation about the ethics of crowdsourcing QA work? Let me know what you think in the comments below or on Twitter.

Rethinking quality and the engineers who protect it

Testing software is an important responsibility, but testing is not a synonym for quality. At Eventbrite, we are trying to dig deeper into quality and what it means to be a QA Engineer. This article is not just for QA engineers; it is for anyone who wants to better understand how to deliver higher-quality products and make better use of QA resources. If you don’t have QA resources, by the end of this article you will have a better idea of what to ask for when you look to add a QA Engineer to your team.

Rethinking the role

When I sat down to write an updated job description for our QA Engineering position, I started my research by looking at job listings from similar companies. Most of the listings agreed on one thing: QA Engineers test. The specifics varied, but every posting included a range of automated and manual testing tasks.

While these testing tasks are worth doing, testing software doesn’t ensure that the output is a high-quality product. In practice, effective QA extends well beyond testing. QA Engineers should ensure teams develop products that work and address a targeted customer need.

The iron triangle

Being a strong advocate for quality requires understanding what could cause quality to suffer. I’d like to start by introducing the concept of “The Iron Triangle.” The triangle is a visualization sometimes used to describe the constraints of a project, but it also works as a model for the challenges of maintaining quality.

Illustration by Sarah Baran

The idea here is that we constrain the quality of a project by its scope, deadline, and budget (among other factors). Changes to one of these constraints then require balancing adjustments to the others, or quality suffers.

External quality

The team can’t control all of these constraints, but it is critical that they monitor them. These constraints directly impact the quality of work. This sort of quality is external because it is quality as understood by the customer.

Some scenarios

  • A project has a broad scope. The timeline for the project is likely full of feature work, with limited time left for testing tasks. Intervention can mean working to carve out time to write and perform tests, advocating for a reduction in scope, or developing a testing approach that is lean without sacrificing too much coverage.
  • A project has a tight budget. This type of project is likely to have even less time to spend on quality. In these cases, my preference is to establish clear goals and expectations with stakeholders for quality in the planning step. This process enables the team to pack their limited QA time with more precise and targeted testing tasks without misrepresenting how hardened our code may be when we finish the work.
  • A project has an open timeline. This is less common but has its own challenges to quality. When we give plenty of time to projects, they naturally move more slowly. In these situations, it is essential to test along the way, because the closing days of this project can be hectic. I like to limit final testing before release as much as possible with incremental tasks and plenty of automated testing. That way, I can protect the development team from last-minute changes, complexity, and most major bugs.

External quality is linked directly to the success of the business and is everyone’s responsibility. All arms of the business are responsible for maintaining external quality and delivering functional products.

Beyond bugs

I loosely consider an issue a bug any time the software produces an incorrect or unexpected result or behaves in unintended ways. Bugs are going to happen, and minimizing their occurrence is why we test software. However, testing can only protect external quality as far as we understand how the product will be used. You cannot write a test to cover a use case you don’t understand or know about.

If something works as expected but fails to meet the user’s need, this is still an issue of quality. However, it is not a bug. The QA team should bring knowledge of the product and the user to the entire development process. If QA is involved in the planning phase as well as the testing phase of development, they can help with more than just finding bugs. They can help ensure developers more thoroughly understand how users employ the products they are building.

Internal quality

That said, there is also an internal, procedural component to quality. Writing code and building products in a way that minimizes technical debt and mitigates risk maintains internal quality. Being good at managing external quality does not make an organization good at managing internal quality.

A new scenario

  • The development team is wrapping up a project and is ready to execute their test plan. Through testing, they uncover some bugs and edge cases that they didn’t think of when writing requirements for the project. To fix these issues, they need to add cyclomatic complexity. This could reduce internal quality and has downstream effects on external quality too. This issue could have been curtailed by involving QA in the writing of product requirements, or by being more deliberate when considering edge cases and architecting the feature.

Balancing external and internal quality

Good external quality is not an indication of good internal quality. Since QA Engineers are driving external quality, they need to be cognizant of increased complexity as an output of testing. Testing uncovers more than bugs; it also uncovers where the product we are building may be failing to meet user needs. Addressing these gaps is critical to quality, but it can have a significant impact on timeline, budget, and scope. The compromises we make are likely to produce technical debt.

Technical debt

Technical debt should be a conscious compromise. The development team can give up some internal quality to make the project work within other constraints. Future work to pay off that technical debt often competes for the same development time as work done to fix a bug, and both issues concern overall quality. This can be a confusing number of plates to keep spinning at once. We should neglect neither type of quality work for the other, and understanding their relation to one another is crucial to preserving high overall quality.

One final scenario

  • The business asks for a feature with very narrow scope, a small budget, and a tight deadline. The feature will require new development work on an old, neglected part of the codebase. The development team is worried about losing time to cleaning up technical debt around their integration points and bringing the old code in line with new standards and practices. Testing time for the new feature work is already tight, and the business wants the development team to prioritize keeping the existing feature set healthy. The team needs to make certain compromises to meet their target release date. One of those compromises is balancing investment in internal quality against the external quality of this new feature and the old code.

Protecting quality

While it is critical to be understanding and compromise during development, QA Engineers should remain biased toward quality. The organization has managers charged with protecting budget, scope, and deadlines – but quality should have an advocate too. QA Engineers should spend time encouraging and coaching development teams on bugs and testing tasks, but the real goal should be to encourage those teams to take ownership of quality.

When the user need and the importance of testing are well communicated and well understood by developers, they write higher-quality code. Developers who understand their users write better tests, grounded in user stories rather than in the developer’s expectation of what their code does. Beyond testing functionality, they make sure that what they have developed aligns with the need the product is meant to address.

Engaged developers make the best testers

To be clear, I am advocating that developers do their own testing and own their quality. Outsourcing your testing to automation engineers or manual testers is an option, but it comes with drawbacks. Developers bring vital skills for driving quality into the product at speed. Engineers are also uniquely positioned to solve problems with their code, and developers who write their own tests are more invested in fixing them when they fail.

The QA team can and should assist with this process. They can help developers deliver higher-quality products by making sure the project is testable up front and that the approach to testing is thorough and considerate of other constraints on development. Beyond just saying that “quality should be high,” the team should set expectations for quality within the context of other constraints. These expectations serve two purposes. First, they help with estimation: if you fail to consider QA tasks during estimation, then you have not made time for quality. Second, they bind quality to the development process, fostering ownership within the team. Teams that take ownership of their work are more invested in delivering higher-quality products.

The new job description

QA Engineers should protect overall quality. They should work with teams to find the right balance of testing for each unique project. To do this, a good QA Engineer understands quality in the context of other constraints to development and is willing to compromise, but will never allow the business to concede quality. When a business delivers low-quality products, it fails.

New job listing: SQA Quality Engineer

What strategies do your teams use to assure quality? How do you leverage your QA team beyond testing? Tell us about it in the comments and drop me a line on Twitter @aqualityhuman.

Getting started with Unit Tests

“No amount of testing can prove a software right, but a single test can prove a software wrong.” — Amir Ghahrai

Many developers think that Unit Testing is like flossing. Everybody knows it’s good, but many don’t do it. Some even think that with good code, tests are unnecessary. For a small project you are working on, that might be ok. You know the definition, the implementation details, and the desired behavior because you created them. For any other scenario, it’s a slippery slope, to say the least.

Keep on reading to learn why unit tests are an essential part of your development workflow and how you can start writing them for your new and legacy projects. If you are unfamiliar with unit testing, you might want to start with a thoughtful article about what unit tests are.

Example code

An unexpected morning regression

Last week I came into the office, grabbed my first coffee, leaned back in my chair, and started sipping from my old Tarantino’s cup while reading a bunch of emails, as usual. Eventually, I opened the code and faced what I had left undone the day before. “What was I thinking?” I muttered as I started pounding the keyboard. “I managed to fix that!”

A few days later, we discovered a regression caused by that same line of code. Shame on me. “How could our unit tests allow this to happen?” Oops! No tests whatsoever, of any kind. Now, who’s to blame? Nobody in particular, of course. All of us are responsible for the codebase, even for the code we didn’t write, so it’s everybody’s fault. We need to prevent this from happening again, and since I usually forget what I broke (and, especially, what I fixed), those missing tests are the first ones I should write.

Here are a few steps I should have followed before crashing our codebase first thing in the morning:

  • If I change any code, I am changing its definition and the expected behavior for any other parts involved. Unit tests are the definition. “What does this code do?” “Here are the tests, read them yourself.”
  • If I create new code, I am assuming it works not just for my current implementation, but for others to come. By testing, I force myself to make it extendable and parameterizable, allowing me to think about any possible input and output. If I have tests that cover my particular case, it is easy to cover the next ones. By testing, we realize how difficult it could be for others to extend our first implementation. This way, our teammates won’t need to alter its behavior: they will inject it!
  • If I write complex code, I will eventually encounter someone who puts me to the test: “Does this work?” “Yes, here are the tests. It works.” Tests are proof that it works; they are your best friend and lawyer. Moreover, if someone messes up and your code is involved, chances are developers will summon you to illuminate the situation, and your tests will probably help you narrow down the issue.
  • If I am making a new feature, I should code the bare minimum necessary for it to work. Writing tests first, before writing any real code, is the fastest and most reliable way to accomplish that (a minimal sketch of this follows the list). I can estimate how much time I will spend writing tests. I cannot estimate how long I will spend in front of the debugger trying to figure out where things went south because I made the whole thing a little too complicated.
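
To make the test-first point concrete, here is a minimal, hypothetical example using pytest. The pricing helper and its numbers are made up; the point is that the tests read as the definition of the behavior, and a possible implementation is included only so the snippet runs.

    # Hypothetical example: the tests double as the definition of apply_discount().
    import pytest

    def apply_discount(price, percent):
        """Return the price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError('percent must be between 0 and 100')
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_reduces_price():
        assert apply_discount(100.0, 25) == 75.0

    def test_apply_discount_rejects_invalid_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)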

Now I want to write unit tests. What’s next?

Let’s say I have convinced you that tests are not a dispensable part of our daily work, but your team does not believe in this. Maybe they think there is still too much work to do, or that if you were to write all the missing tests, it would take weeks, even months! How can you explain this to your Product Owner?

Here’s the thing: you won’t. If testing becomes a time-demanding task that has to be put on a plan or a roadmap, it will likely never take off. However, I want to offer some tips for getting started with testing that work both if you have a significant deficit in tests and if you have just started a new project:

  • Write unit tests first if you don’t know where to start.
  • Only write tests for the code you made and understand.
  • Don’t test everything. Just ensure that you add a couple of tests every time you merge code.
  • Test one thing only. I’d rather maintain five simple tests than one complex one.
  • Test one level of abstraction. This means that when you test a component which affects others, you can ignore them. Make the component testable instead of testing everything around it.
  • If some new code is too complex to test, don’t. Rather, try to split it into smaller pieces and test each individually.
  • Don’t assume current locales or configuration. Run tests using different languages and time zones, for instance.
  • Keep them simple: Arrange just one “System Under Test” (SUT), perform some action on it to retrieve an output, and assert that the result is the one you expect (see the sketch after this list).
  • Don’t import too much stuff into test suites. The fewer components involved, the easier it is to test yours.
  • Start by testing the borders of the system, the tools, and utility libraries. Create compelling public APIs and test them. Ignore implementation details, as they are likely to change, and focus on inputs and outputs.
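
As a rough sketch of the last few tips (one SUT, one level of abstraction, Arrange-Act-Assert), here is a hypothetical example that stubs out the SUT’s collaborator with unittest.mock so only the component under test is exercised. The ReceiptFormatter class and its price service are made up for illustration.

    # Hypothetical example: one SUT, one behavior, collaborators stubbed out.
    from unittest.mock import Mock

    class ReceiptFormatter:
        """The system under test: formats a total fetched from a price service."""

        def __init__(self, price_service):
            self.price_service = price_service

        def format_total(self, order_id):
            total = self.price_service.get_total(order_id)
            return 'Total: ${:.2f}'.format(total)

    def test_format_total_renders_amount_with_currency():
        # Arrange: stub the collaborator so we test only the formatter
        price_service = Mock()
        price_service.get_total.return_value = 42.5
        formatter = ReceiptFormatter(price_service)

        # Act
        result = formatter.format_total(order_id=123)

        # Assert
        assert result == 'Total: $42.50'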

Remember, these tips work well for a codebase with no tests. The very first time you are about to fix, refactor, or change the behavior of any part of the code, you must write the tests first to ensure you are not breaking anything. When working with legacy code, you will likely see test coverage increase as the code changes.

Conclusion

In this blog post, we included some pieces of advice taken from our own experience with unit testing. There are other types of tests, but if you and your team want to start testing, unit tests suit you best.

Unit tests are more “straight to the point” than any other kind, since they focus on validating single parts of a more complex codebase. If you are new to them, don’t panic: start from the smallest piece and build upon that. You’ll learn a lot along the way and detect implicit dependencies or troublesome APIs you had previously overlooked.

One nice thing about testing is that it pushes you to make a massive leap toward coding from the outside in, instead of from the inside out (which is usually better for the implementer, and never for the user), and that turns out to produce more elegant, comprehensive, and extendable code. It goes without saying that manual testing is still a thing.

What’s your experience with testing? Is there any other tip you would suggest to newcomers? Drop us some lines in the comments or ping me directly on Twitter @Maquert.

Photos by Markus Spiske and Isis França on Unsplash.

8 Reasons Why Manual Testing is Still Important

The rise of test automation adoption has unjustly framed manual testing as an archaic and unnecessary practice. After watching an automation suite swiftly execute an entire library of test cases, it can be easy to develop tunnel vision on the great benefits of automation. However, the value of manually executing your tests should not be understated; here are a few reasons why manual testing is still as relevant as ever.

Tape 1: Cycle Times

There’s no way around it; initial automation requires an increased investment in both time and resources. You are setting up a foundation to benefit from continually in your future testing endeavors. However, in some cases, automation will not be the ideal solution for your testing. Attempting to introduce automation close to the end of your testing cycle would be a wasted effort; the time it takes to set up (and the sudden resource shift) means you’ll be nearing your release date before you can start running reliable, core automated tests. During that same timeframe, you could focus your testing resources on manual execution. Because the majority of that time goes directly into test case validation, the end result is more coverage within your test cycle.

Tape 2: Even Your Automation Has Errors

Like any piece of code, your automation will contain errors (and fail). An error-filled automation script may be misinterpreted as failed functionality in the application under test, or (even worse) your automation script will interpret an error as correct functionality. Manually testing your core, critical-path functionality ensures that your test case passes from a user perspective, with no room for misinterpretation.

Tape 3: UI Validations

The advent of automated testing platforms for responsive and UI testing has provided a much-appreciated convenience. However, it should be a boost to your UI testing efforts, not a crutch. These programs validate your test cases by checking element distances, image placement, and the alignment of elements in relation to each other. Because of this, there are more than a dozen ways that something such as the alignment between a menu and a logo can be misinterpreted; a manual tester would immediately be able to catch something that looked “off” and fail the test case.

Tape 4: Un-Automatable Scenarios

Some scenarios are simply not feasible to automate; they are either impossible due to technological limitations and the complexity of the scenario, or the resource cost of automating them greatly outweighs the cost of a simple manual test. Case in point: we recently had a customer who needed to test the tap-and-pay function of their mobile wallet app. Developing a way to automate this scenario is not worth it compared to manually testing it with the device in hand.

Tape 5: (Short-Term) Cost

Over time, automation leads to cost savings, faster execution, and continuous testing. In the immediate short term, however, there is an investment cost (and a learning curve for the unfamiliar) that can be a situational disadvantage. The cost of setting up and running your initial automation framework can range anywhere from 5 to 15 times the cost of your manual testing efforts. And as discussed earlier, implementing automation while crunched for time toward the end of a test cycle will not allow you to enjoy automation’s full potential. Choosing to conduct manual testing at this stage provides an immediate, tangible result from your testing resources.

Tape 6: Exploratory Testing

Exploratory testing describes the process of freely testing the application for the purpose of finding defects and subsequently designing new test cases. Defects found through exploratory testing are often the result of testing complex scenarios that would not have been addressed by your predefined test cases. Having a foundation of core, repeatable tests automated frees up time to devote resources to exploratory testing.

Tape 7: Skills

While the end result of automation is ease, setting up the framework and developing the scripts are no easy tasks. An effective automation engineer has a foundation of programming skills as well as an inherent understanding of test design. These skills are learned over years of experience in both QA and development, and acquiring somebody with this specific skill set (especially on short notice) is not a simple process. On the other hand, the majority of manual test cases are simple to execute and can easily be taught: follow the steps in the test case, and validate that your actual results are consistent with the expected results.

Tape 8: Agile

In the context of Agile testing, automation is of great benefit. Having a library of tests that can be executed quickly and reliably truly helps with test completion and coverage during a tight sprint. By the same token, manual testing is a quick way to cover any test cases that are not yet automated. There may be no time to build automation for new features introduced in the current build, making manual testing the best option for test completion.

In conclusion, the need for increased test coverage across an ever-growing range of software and devices has made test automation more important than ever. As automation continues to grow, it can be easy to forget about the wide spectrum of benefits manual testing still has to offer. Appreciating the value of both approaches will make for a well-rounded testing experience.

Cowboys and Consultants Don’t Need Unit Tests

As a developer, my understanding of and respect for software testing has been slow in coming, because in my previous work as an engineer and a consultant it wasn’t yet obvious how important testing really is. But over the past year I have finally gained an appropriate respect and appreciation for testing, and it’s even improving the way I write code. In this post I will explain where I’ve come from and how far I’ve traveled in my testing practices. I’ll then list some of the more important principles I’ve picked up along the way.

Engineers are cowboys … and cowboys don’t need no stinkin’ tests.
I got my start as an Aerospace engineer. And as an engineer, if you do any programming at all, testing is probably not part of it. Why? Because engineers are cowboy coders. As engineering students, we are taught just enough to implement whatever algorithm we have in mind, make some pretty graphs, and then we graduate.

It wasn’t much better at my first job. I had shown an interest in software development, and so, on one particular project, I was given the task of reworking and improving the project’s codebase. We were developing autonomous aircraft control algorithms, and it soon became apparent that, after months of work, no one had thought to run the simulation using different starting conditions. After finally trying different starting conditions, we found that our control system was generally better at crashing the plane than flying it. This should have been the biggest hint in my early career that testing might be important. But it would still be quite a while before I learned that lesson.


Readable JavaScript Tests with Object Builders

Building objects purely for testing purposes is often tedious. Let’s walk through the wondrous world of using design patterns to improve your JavaScript tests.

Over the past few weeks we’ve been porting features to our new Event pages. Events are complicated, and the front end of the event page that supports them is feature-rich and has to deal with all sorts of different states an event can be in, in addition to all the types of tickets an event can have. As you can imagine, writing tests to cater for all these different scenarios can get tricky fast.


Tracking Method Calls During Testing

Our automated testing is broken into two broad areas: unit tests and integration tests. Unit tests are where we test the domain logic for our models, with few dependencies. The tests may hit a MySQL database to exercise Django ORM-related logic, but the test runner can’t access external services or things like Cassandra. (We’re using Django and the Django test runner, which creates test databases during setup. You may object that hitting the database means these aren’t “unit” tests. I agree. Nonetheless, we call them unit tests.) Our integration tests, on the other hand, are run against full builds of the site, and have access to all of the services that our site normally does.
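
For a rough idea of what a model-level “unit” test like this can look like, here is a minimal, hypothetical sketch using Django’s test framework; the app, model, and publish() method are invented for illustration and are not our actual code.

    # Hypothetical sketch: a Django "unit" test that hits the ORM against the
    # test database the Django test runner creates during setup.
    from django.test import TestCase

    from events.models import Event  # hypothetical app and model

    class EventDomainLogicTest(TestCase):
        def test_publish_marks_event_as_live(self):
            event = Event.objects.create(name='Test Event', status='draft')

            event.publish()  # hypothetical domain method under test

            self.assertEqual(Event.objects.get(pk=event.pk).status, 'live')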

Smarter Unit Testing with nose-knows

No one likes to break unit tests. You get all stressed about it, feel like you’ve let your peers down, and sometimes even have to get everyone donuts the next day. Our production Python codebase is complex, and the smallest changes can have an unexpectedly large impact; this is only complicated by the fact that Python is a dynamic language, making it hard to figure out what code touches what.

Enter nose-knows, a plugin for the nose unit test runner (and py.test, experimentally). It traces your code while unit tests are running, and figures out which files have been touched by which tests. Now, running your full test suite with code tracing turned on is expensive, so we have a daily Jenkins job that does it and creates an output file. It can also do the converse, as it knows how to leverage this file to run specific tests.
