
React Full Stack Tests with Nightwatch

Parsable Team

Plenty of great ink has been spilled over unit testing React components and Redux reducers et al, but we had a lot of difficulty finding meaty posts about setting up proper full-stack integration tests for the modern JS stack.

This is article 1 in a 4-part series.

Our setup was particularly tricky because we have a handful of fully decoupled backend services and two React apps sharing a lot of code. If you’re going down a similar road, hopefully this series saves you a couple steps.

Nightwatch and API Calls for Clean, Tight, Isolated Full Stack Tests

The goal is to write a set of tests that actually pressure-test how the front end apps and back end services work together. What's more, since we have users around the world using Chrome, Firefox, and IE on Windows and Mac, we want to be certain we don't break on any combination of browser, OS, and screen size we commit to supporting.

Here’s a debug preview of what we execute via CI on every push to GitHub, on every branch:

Critical note: The above tests are running without mocks. All backend services are available for these tests.

Trap A: I have often seen full stack “smoke tests” written for Phantom or Selenium wrappers that run a long chain of commands in series, each building on the previous, logging in and then stepping through the scope of an application’s features to make sure everything works. Every time you need to change or add something, it breaks something downstream. No thanks.

Trap B: Similarly, I have seen engineers build macros of sorts that perform common actions, and these actions are then performed repeatedly in the browser to create common states. Effective but slow, particularly if those steps involve a lot of UI.

This time around, we avoided both problems by leaning on API utilities to set up tests precisely as we need them:
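Here is a condensed sketch of the pattern (the helper module path, the helper names beyond createTeam, the fixture keys, and the page object are placeholders for illustration):

```js
// tests/templateSets.js: a condensed sketch. Each helper hits the real API
// and returns a Promise, so the before hook can chain the whole setup and
// hand the tests a freshly provisioned team.
const helpers = require('../helpers/api'); // hypothetical helper module
const { assign, fixtures } = require('../helpers/fixtures'); // see the assign() sketch below

module.exports = {
  before(browser, done) {
    helpers
      .createTeam(browser) // new team + user, then logs in (see below)
      .then(assign('team'))
      .then(() => helpers.getApiToken(fixtures.team))
      .then(assign('token'))
      .then(() => helpers.createTemplate(fixtures.token, { name: 'Template A' }))
      .then(assign('templateA'))
      .then(() => helpers.createTemplateSet(fixtures.token, [fixtures.templateA]))
      .then(assign('templateSet'))
      .then(() => done())
      .catch((err) => done(err));
  },

  'template sets page lists the seeded set': function (browser) {
    const page = browser.page.templateSets(); // hypothetical page object
    page.navigate().waitForElementVisible('@setList', 5000);
    page.expect.element('@setList').text.to.contain(fixtures.templateSet.name);
    browser.end();
  },
};
```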

In the full example, seven API calls must be executed to tee up a test of “Template Sets” after a fresh user and team have been set up. The automation suite can then navigate directly to the page under test to confirm behavior.

There are a couple useful tricks buried in these initial Promises. helpers.createTeam uses our API to create a new team and user (hat tip to faker.js), then uses the generated credentials both to log in with Nightwatch…

…and then separately to retrieve an API token it can use for subsequent requests:
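(The sketch below condenses both steps into one hypothetical helpers module; the endpoints, payload shapes, and login selectors are assumptions, and it leans on Node 18's global fetch, while faker and the Nightwatch login commands are the real thing.)

```js
// helpers/api.js (excerpt): a sketch only. Endpoints, payloads, and CSS
// selectors are placeholders; requires Node 18+ for the global fetch.
const faker = require('faker');

const API_BASE = process.env.API_BASE || 'http://localhost:8080'; // hypothetical

async function createTeam(browser) {
  const team = {
    name: faker.company.companyName(),
    email: faker.internet.email(),
    password: faker.internet.password(),
  };

  // 1. Create the team and user through the real backend API.
  const res = await fetch(`${API_BASE}/v1/teams`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(team),
  });
  const created = await res.json();

  // 2. Use the generated credentials to log in through the real UI, and wait
  //    for the queued Nightwatch commands to finish before resolving.
  await new Promise((resolve) => {
    browser
      .url(`${browser.launchUrl}/login`)
      .waitForElementVisible('input[name=email]', 5000)
      .setValue('input[name=email]', team.email)
      .setValue('input[name=password]', team.password)
      .click('button[type=submit]')
      .waitForElementVisible('.app-shell', 10000) // post-login marker is illustrative
      .perform(() => resolve());
  });

  return Object.assign({}, created, team);
}

// 3. Separately, trade the same credentials for an API token that subsequent
//    helpers can send as a header on their own requests.
async function getApiToken(team) {
  const res = await fetch(`${API_BASE}/v1/auth/token`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email: team.email, password: team.password }),
  });
  const body = await res.json();
  return body.token;
}

module.exports = { createTeam, getApiToken };
```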

The simple assign() function stores away the returned objects for subsequent use.
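(A minimal version, sketched here as the small shared module the setup chain above pulls from.)

```js
// helpers/fixtures.js: a tiny store for objects created during setup.
const fixtures = {};

// assign('team') returns a callback that stashes a Promise's resolved value
// under the given key, and passes the value through so the chain continues.
function assign(key) {
  return function (value) {
    fixtures[key] = value;
    return value;
  };
}

module.exports = { assign, fixtures };
```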

This means that for each and every block of tests, we create a new team and user, as well as all of the data objects we will need for the tests. That may seem excessive, except that we now have automated tests we can be confident are isolated…and it also means they can be run in parallel. More on that to come.

Aligning Nightwatch Page Objects with React Router

Nightwatch Page Objects implement an idea proposed by Martin Fowler: you can separate the code that hooks into a given page from the tests that verify its functionality.

We have found this to be huge for test code reuse, and it helps us think logically about functionality we are building, particularly for non-trivial features.

The benefit is particularly apparent when reviewing a heavier pull request. At a glance, it is quite clear what’s going on:
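(A representative sketch; the page object, element names, and template title that follow are illustrative.)

```js
// tests/templates.js: the test reads as intent, not raw selectors and clicks.
module.exports = {
  'archiving a template removes it from the list': function (browser) {
    const templates = browser.page.templates();

    templates
      .navigate()
      .waitForElementVisible('@templateList', 5000)
      .archiveTemplate('Routine Maintenance'); // the SweetAlert confirm happens inside

    templates.expect.element('@templateList').text.to.not.contain('Routine Maintenance');

    browser.end();
  },
};
```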

Then you can view the page object for a sense of how we use UI to achieve what the tests are asking for:
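(Again a sketch: the selectors and the sweetalert page object's confirm() command are assumptions; the overall shape, named elements plus commands that return this, is standard Nightwatch.)

```js
// page_objects/templates.js: a hypothetical page object sketch.
const templateCommands = {
  archiveTemplate: function (name) {
    // Locate the row for the named template and click its archive button.
    // A real implementation would match whatever the markup provides.
    this.api
      .useXpath()
      .click(`//tr[td[contains(text(), "${name}")]]//button[@data-action="archive"]`)
      .useCss();

    // Accept the SweetAlert confirmation via the shared sweetalert page object,
    // so tests never have to know a confirmation is involved.
    this.api.page.sweetalert().confirm();

    return this; // keep the page object chainable
  },
};

module.exports = {
  url: function () {
    return this.api.launchUrl + '/templates';
  },
  elements: {
    templateList: { selector: '.template-list' },
  },
  commands: [templateCommands],
};
```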

You may notice that archiving a template requires accepting a SweetAlert confirmation dialog. We have concepts such as modals and sweetalert captured as their own Nightwatch page objects, so accounting for this step is trivial, and since it is built into the command on the page object, the test doesn’t need to know that a confirmation is required.

Once you’ve finished reviewing the tests and page objects, it’s a small jump to view the React components that implement the page object and the Redux/Thrift machinery that handles what the React components need. If you have ever had your eyes blur trying to consume a larger PR, believe me, this is an incredible step forward.

Since React Router gives us such a clean view into our site map, we can ensure that the page objects remain in sync with our routes for both apps, and we are good to go.
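(One way to enforce that alignment is to have the page objects read their URLs from the same place the router does; the sketch below assumes a hypothetical shared/routes.js module.)

```js
// page_objects/templateSets.js: a sketch. It assumes a hypothetical
// shared/routes.js module that exports the same path strings the React
// Router config uses, e.g. { templateSets: '/template-sets', ... }.
const routes = require('../shared/routes');

module.exports = {
  url: function () {
    // Same path string that backs <Route path={routes.templateSets} ... />
    return this.api.launchUrl + routes.templateSets;
  },
  elements: {
    setList: { selector: '.template-set-list' }, // illustrative
  },
};
```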

Upgrade Nightwatch with Power Tools like Drag-and-Drop

NightwatchJS is a Selenium wrapper that provides, out of the box, most of the UI navigation functionality we need. But it also allows you to inject scripts to level up your testing capabilities.

In our app, we make extensive use of Dan Abramov’s fabulous React DnD library. To test it, Aaron set up a Nightwatch command that wraps Andy Wermke’s Drag Mock thusly:
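(A sketch of one way to wrap it; it assumes drag-mock is already loaded in the page under test and exposed as window.dragMock, for example by bundling it into a test build, and the selectors are plain CSS.)

```js
// custom_commands/dragAndDrop.js: a hedged sketch of such a command.
exports.command = function dragAndDrop(sourceSelector, targetSelector) {
  this.execute(
    function (source, target) {
      var sourceEl = document.querySelector(source);
      var targetEl = document.querySelector(target);

      // drag-mock simulates the HTML5 drag event sequence, which is what
      // React DnD's HTML5 backend listens for. Assumes window.dragMock exists.
      window.dragMock
        .dragStart(sourceEl)
        .dragEnter(targetEl)
        .dragOver(targetEl)
        .drop(targetEl);
    },
    [sourceSelector, targetSelector]
  );

  return this; // keep the command chainable
};
```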

This allows us to include drag actions on page objects:
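(A sketch; the page object, element names, and selectors are illustrative.)

```js
// page_objects/builder.js: surfaces the custom command as a semantic action.
const builderCommands = {
  dragStepToTrash: function () {
    // Resolve the named elements to selectors and delegate to dragAndDrop().
    this.api.dragAndDrop(
      this.elements.firstStep.selector,
      this.elements.trashDropZone.selector
    );
    return this;
  },
};

module.exports = {
  url: function () {
    return this.api.launchUrl + '/builder';
  },
  elements: {
    firstStep: { selector: '.step-card:first-child' },
    trashDropZone: { selector: '.trash-drop-zone' },
  },
  commands: [builderCommands],
};
```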

Which gives us a simple and semantic way to define drag behaviors in tests:
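(Reusing the hypothetical builder page object from the previous sketch.)

```js
// tests/builder.js: the test reads almost like the requirement itself.
module.exports = {
  'dragging a step to the trash removes it': function (browser) {
    const builder = browser.page.builder();

    builder
      .navigate()
      .waitForElementVisible('@firstStep', 5000)
      .dragStepToTrash();

    builder.assert.elementNotPresent('@firstStep'); // illustrative assertion

    browser.end();
  },
};
```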

Just like using JSX has opened the door to our designers submitting more design PRs, these testing techniques allow us to tag in pretty junior QA automation engineers to help write high quality tests for us, and manual QA testers on the team can identify faulty interpretation of requirements early in the process. Everyone wins.

The Role of Full Stack Integration Tests

Mind you, none of these techniques obviate the need for solid unit tests. We use Mocha to test Redux reducers/actions/etc., and we use Enzyme to test React components in isolation. We also fail a test run if the source doesn’t pass ESLint.

But I place far more value in these full stack tests, because they confirm for us that when you take the mocks away and plug in all the backend services, we still have all the functionality we promise our users, and not just in Chrome on a Cinema Display in SoMa, but in the wild on a busted old Windows laptop running IE11 on an oil rig. That’s where our software needs to deliver, and Nightwatch helps give us confidence that we are there.