Strategies For Simplifying Integration Testing For React Apps

We've got a little setup work to do before we get to the actual business of writing our tests.

Project Source Code

Get the project source code below, and follow along with the lesson material.

Download Project Source Code

To set up the project on your local machine, please follow the directions provided in the README.md file. If you run into any issues with running the project source code, then feel free to reach out to the author in the course's Discord channel.

This lesson is part of The newline Guide to Modernizing an Enterprise React App course.

  • [00:00 - 00:12] Before we can get to writing our first tests in our app, we have to do a little configuration and tooling setup. Never fear, though, this is nothing like the early lessons where we set up Volta or ESLint or anything like that.

    [00:13 - 00:28] This will be much quicker. In this lesson, we'll discuss strategies for integration testing and add some new npm scripts to Hardware Handler to make running our integration tests and generating interactive code coverage files a breeze.

    [00:29 - 00:51] As always, if you need a copy of the sample app before we start setting up our automated testing, it can be downloaded here in the lesson. While there's no one way to do anything in React, it's an accepted best practice to write integration tests using RTL and Jest, starting with the smaller components and working up to the larger, more complex ones.

    [00:52 - 00:59] I want to stress this though, this is just one way to do it. It is by no means the only way to do it.

    [01:00 - 01:16] Think about it though. It's simpler to test a component with a few functions and API calls first, than to try to test it when it gets injected into a larger component with three or four other sibling components and test that they all work together correctly, and so on and so forth.

    [01:17 - 01:37] Sounds better than starting with the parent component and having to test that parent's functionality plus all the child components' functionality in one giant test file, right? While this won't always apply, test smaller to larger when possible to avoid duplicating effort and testing more than necessary in the bigger, more complex components.

    [01:38 - 01:56] Don't worry, you'll start to get a feel for it soon enough and find your own best way of testing. Since we're using Create React App as the basis for our Hardware Handler application, we have some test tooling set up for us in this project, in addition to many of the testing libraries we'll be working with already being installed.

    [01:57 - 02:12] But let's go through both aspects of it in case you run across a React project that doesn't take care of these things for you ahead of time. First, we'll take a look at the libraries that we'll be utilizing to write all of our integration tests.

    [02:13 - 02:20] Switch over to your VS Code instance at this point. If you open up your app's client folder in your IDE and navigate to the package.json file,

    [02:21 - 02:31] you should see the following libraries under your dependencies: @testing-library/jest-dom, @testing-library/react, and @testing-library/user-event.

    [02:32 - 02:42] So these three libraries are going to be critical to our tests and will make up the bulk of what we'll lean on to write them. There are a couple of things that we should probably do before we move on though.

    [02:43 - 03:08] For some reason, these test libraries, which are not central to our app's functionality when it is running in production, are listed under its dependencies instead of its devDependencies, which is where all libraries not essential to the prod app should go so they don't get bundled into the final production build. Move these three libraries to the beginning of our list of devDependencies in our package.json file.

    [03:09 - 03:34] This won't affect our local development at all because the libraries are already installed locally, but if we bundled this app for deployment into a cloud environment, production or otherwise, it would impact the final bundle size. So copy all three of these, delete them from here, scroll down to your devDependencies, add a comma, and paste them back in.
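After the move, the relevant part of package.json might look something like this sketch (the version numbers are illustrative, and the rest of your devDependencies would follow these entries):

```json
{
  "devDependencies": {
    "@testing-library/jest-dom": "^5.11.4",
    "@testing-library/react": "^11.1.0",
    "@testing-library/user-event": "^12.1.10"
  }
}
```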

    [03:35 - 03:39] Okay, much better. You might be wondering, why does this matter?

    [03:40 - 03:59] For us, it really doesn't matter that much, but for our users who don't have internet connections as reliable or machines as powerful as the ones developers typically work on, it can matter a great deal. The smaller the final bundle size, the quicker our site will load and be interactive for our users, which is always a good thing.

    [04:00 - 04:10] Okay, that looks better already. So one more thing that I want to do is add another testing library that we're going to need when we write tests for our custom hooks.

    [04:11 - 04:19] It's aptly named @testing-library/react-hooks. Open up a terminal and, inside of the client folder, run the following.

    [04:20 - 04:32] yarn add @testing-library/react-hooks --dev. With that dependency added, we can turn our attention elsewhere in this file.
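Spelled out as a command, that's the following (a sketch — run it from inside the client folder; the version that gets installed may differ from the one used in the course):

```shell
# installs the hooks testing library as a devDependency
yarn add @testing-library/react-hooks --dev
```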

    [04:33 - 04:50] With our testing libraries installed, we've still got a few scripts to add, so we don't have to manually type all the flags into the command line every time we want to check our automated tests. Thanks to Create React App, we already have one test command in the scripts section of our package.json file.

    [04:51 - 04:55] Right there. There is more that we can do here, though.

    [04:56 - 05:05] We are going to add two more scripts right underneath the standard test. You are more than welcome to copy these straight out of the lesson, but I will write them out.

    [05:06 - 05:19] The first one that we're going to add is ci-test, and in there we will have CI=true react-scripts test --verbose.

    [05:20 - 05:38] The second one that we're going to add is called coverage. For that, we will have react-scripts test --verbose --coverage --watchAll=false.

    [05:39 - 05:45] And don't forget the comma. The first script that we wrote, ci-test, will run the same tests as the original test command.
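Put together, the scripts section might end up looking roughly like this (a sketch that assumes the standard Create React App scripts are already in place):

```json
"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "test": "react-scripts test",
  "ci-test": "CI=true react-scripts test --verbose",
  "coverage": "react-scripts test --verbose --coverage --watchAll=false",
  "eject": "react-scripts eject"
}
```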

    [05:46 - 05:57] However, the CI=true part stands for "continuous integration is true." This would typically be used in an instance where a CI/CD pipeline was in place for builds.

    [05:58 - 06:08] By default, yarn test runs a test watcher with an interactive Jest CLI. Basically, the tests will rerun every time something changes in your IDE.

    [06:09 - 06:31] However, you can force it to run the tests once and finish the process by setting the flag CI=true in the command line. The --verbose flag is one that works with the Jest CLI, which is our test runner, and will print out individual test results with the test suite hierarchy displayed, which does not happen by default.

    [06:32 - 06:53] With this command, all the descriptions in its describe, it, or test blocks will be printed out into the terminal, which can make debugging failing tests easier. The second script, coverage, is very similar to the first, except it removes the CI=true and it includes the --coverage flag.

    [06:54 - 07:06] I removed the CI=true because I like how Jest formats and color codes file code coverage when this flag is not included in the script. You'll see this when we actually run some tests later on in this lesson.

    [07:07 - 07:24] The --coverage flag indicates that test coverage information should be collected and reported in the output, and it generates a pretty decent code coverage printout. I'm more partial to the interactive report that we can open up and view in a browser, but I'll show you how to access either later in this lesson.

    [07:25 - 07:38] The inclusion of the --watchAll=false flag is required for the whole code coverage report to generate correctly. It's a known issue in Create React App that dates back to 2018 and still hasn't been fixed.

    [07:39 - 07:46] So let's check out how our new scripts work. Lucky for us, we already have a single test in our application.

    [07:47 - 07:57] It lives in the containers/App/test folder. This is my preferred method for writing tests, by the way.

    [07:58 - 08:10] When it comes to React components, I like to keep their test files in the same folder as the component that they're testing. It's nested inside of a test folder for that little extra bit of separation and organization.

    [08:11 - 08:31] For integration tests, Hardware Handler doesn't actually have to be running for Jest to be able to run its tests, so we don't even need to start the app to run them (though running the app can be helpful when debugging tricky tests by checking actual application functionality). This is not the case for end-to-end tests, but that's for the module after this one.

    [08:32 - 08:47] So instead, just navigate into your client folder in a terminal and run the following command: yarn test. And you should see the following info print out to the terminal.

    [08:48 - 09:18] Just so you know, I updated the text in this test so that it would pass after the changes I made to the App.js file, but I neglected to update the test description, a slight oversight that's easily rectified. Go ahead and open up the App.test.js file, and let's change the test text to read "renders hardware handler without crashing" instead of "renders learn react link."
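For reference, the renamed test might look something like this sketch (it assumes App renders some "Hardware Handler" text somewhere — adjust the assertion to whatever your App component actually renders):

```javascript
import React from 'react';
import { render, screen } from '@testing-library/react';
import App from '../App';

test('renders hardware handler without crashing', () => {
  render(<App />);
  // hypothetical assertion -- match it to your app's actual markup
  expect(screen.getByText(/hardware handler/i)).toBeInTheDocument();
});
```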

    [09:19 - 09:27] It won't stay like this forever. We'll update all sorts of things about how we structure these test files in the next lesson, but it will do for now as an example.

    [09:28 - 09:39] I want to show you a couple of cool things that you can do while running tests before this lesson is over. One thing that I'd like to demonstrate is how to run a single test file instead of all of them.

    [09:40 - 10:09] As I stated earlier, if we just run the default yarn test command in the npm scripts, the Jest CLI test watcher will be watching for any changes in the app, test files or otherwise, and will start running all of the tests over and over again every time a code change is made. While this can be helpful, it can also be time consuming when there are hundreds or thousands of unit tests to rerun, and believe me, they can get into the thousands easily as apps grow and evolve.

    [10:10 - 10:21] If you're working on one particular set of test files, sometimes you just want to run those tests to see the effect that your changes have. But don't sweat it, Jest makes this easy to do.

    [10:22 - 10:36] If we want to run just a single test, we'll start off with the same yarn test command that we used to run all of our test files. But once the Jest test runner starts up in the terminal, tap a key to bring up the Jest CLI options.

    [10:37 - 10:46] It doesn't really matter what key, just any key to get the CLI watcher's attention so it will display the common options menu. So let's try that now.

    [10:47 - 10:54] We're going to type a key. Once this menu is displayed, you'll see a list of the most common options people reach for.

    [10:55 - 11:11] But the one that I want to focus on is the pattern matcher that's invoked by typing the p key. When the pattern matcher is invoked, you can start typing in a particular test file that you want to run, and Jest will do its best to find any and all files that match what you're typing in.

    [11:12 - 11:36] It works with regex searching too, but I usually just rely on typing in a specific test file name that I know I want to run. Since we've only got one test file, there isn't much to do yet, but as soon as I start typing "App" in the pattern-matching input, Jest starts to search all of our test files for files that meet that criteria, and it looks like we have just the one.

    [11:37 - 11:47] So once your file pops up in the options, just use your arrow keys to navigate to it and hit enter to run that file. And trust me, this comes in handy in day-to-day development.
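As a quick reference, the watch-mode flow looks roughly like this (a sketch — the exact menu Jest prints can vary by version):

```shell
yarn test
# once the watcher is running, press a key to see the menu, then:
#   p  -> filter by a filename regex pattern (what we use here)
#   t  -> filter by a test name regex pattern
#   a  -> run all tests
#   q  -> quit watch mode
```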

    [11:48 - 12:01] So next up, let's run just one test within a suite, which is extremely useful for debugging a broken test. There are two ways that we can do this, and which testing syntax you're using will determine which one you'll reach for.

    [12:02 - 12:18] The first is test.only. If you've got tests written using the test keyword, such as this one, you can append the .only keyword to test, and Jest will only run the tests with that added to them.

    [12:19 - 12:30] For demonstration purposes, here's a second test that I will add, change it slightly, and let it run. And we see that only one test was run and one test was skipped.

    [12:31 - 12:51] If you're using the test syntax that includes describe blocks for suites of tests within a file, or the it keyword to write individual tests, you can place the letter f in front of your test, like fdescribe or fit, and only tests that have that indicator on them will run. So here's an example of that.

    [12:52 - 13:06] I will change test.only to fit, and I will change the second one to it and save, and you can see our tests are rerunning. And once again, just one was run; the other one was skipped.
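To make both approaches concrete, here's a sketch (these hypothetical tests assume they live in a Jest test file, where test, it, fit, and expect are globals provided by the runner):

```javascript
// Option 1: with the test keyword, append .only
test.only('this test runs', () => {
  expect(1 + 1).toBe(2);
});

test('this test is skipped while a sibling has .only', () => {});

// Option 2: with the describe/it syntax, prefix an f (for "focus")
fit('this focused test runs', () => {
  expect(true).toBe(true);
});

it('this test is skipped while a sibling is focused', () => {});
```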

    [13:07 - 13:24] The f stands for focus. You can also run more than one test in more than one file with either of these test syntaxes. If you have two or more test files that need to run, you can drop a .only or a focus on whatever tests you need to run in those files.

    [13:25 - 13:33] Jest is totally fine with running certain tests in more than one file. Once again, this sort of trick is very useful for me on a daily basis.

    [13:34 - 13:50] Now, let's move on to see code coverage for our application, another important and highly useful thing. So I will remove this test, and I will undo our change here and put that back to test for now.

    [13:51 - 14:02] So this is the time that we'll run our second new npm script, the coverage script. This is a big one because it will run all the test files present in the app and show you the total code coverage for the app.

    [14:03 - 14:14] So go ahead and kill your test watcher and instead run yarn coverage in your terminal. Let's see what this printout looks like.

    [14:15 - 14:26] Okay, so if I expand this and scroll up, check out the level of detail that is included here. This is so helpful to me.

    [14:27 - 14:39] You can see from the console exactly which lines of your code for the App.js file are already covered by tests. You can see the percentage of lines covered, the exact lines still uncovered, and so on and so forth.

    [14:40 - 14:55] This is great. However, if there's still a great deal of code that needs to be covered, perhaps when you've written new code but haven't started the tests to match them yet, and you're still trying to ascertain what needs to be done, it could be helpful to have a more in-depth look at the code.

    [14:56 - 15:03] Well, I have a solution for this, and it's the next section of our lesson: the browser-based code coverage report.

    [15:04 - 15:22] The interactive, browser-based code coverage report is my go-to when checking what my tests cover. I appreciate its detailed visual representation of which lines still need to be tested, instead of having to match the line numbers printed out in the console with the file that I'm looking at in my IDE.

    [15:23 - 15:47] Plus, sometimes there are more lines uncovered than the console printout can handle, and when that happens, it just lists the maximum number of lines it can before trailing off with a dot dot dot. If you've already run all the tests like we have in this lesson, you'll notice in your IDE's list of folders inside of our client folder, there's a new auto-generated one called coverage right here.

    [15:48 - 16:00] This is where the code coverage report that we'll open in the browser lives. Open up the file inside of the lcov-report folder, which is named index.html, in the browser.

    [16:01 - 16:14] If you're using VS Code as your IDE like I am, I'll show you a cool trick to open up this coverage report from your terminal's command line. Open a new terminal instance inside of it and type the following command.

    [16:15 - 16:25] open coverage/lcov-report/index.html. Once it's open in the browser, you should see something similar to this.
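That's just macOS's open command pointed at the generated report (a sketch — the path assumes you're running it from inside the client folder, where the coverage folder was generated):

```shell
# macOS; on Linux try xdg-open, on Windows try start
open coverage/lcov-report/index.html
```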

    [16:26 - 16:44] All the files in our project that have tests, like App.js, will be displayed here along with a high-level overview of their code coverage. After the code coverage report is first generated locally, you can click into any of these files and see a more detailed view of the code and its coverage.

    [16:45 - 16:53] Go ahead and click on the App.js file and we'll check out its contents. And this is where the gold is.

    [16:54 - 17:09] Here we can see our actual production code and exactly which lines are tested already and which are not. In this particular instance, the majority of the lines are already showing as tested, but we can see that the setCheckoutUpdated boolean state is not.

    [17:10 - 17:21] Not only that, but it looks like the if statement on line 21 has only one of its two possible scenarios tested. That's what the little black and yellow "I" icon next to the if stands for.

    [17:22 - 17:31] And I can verify this by looking at the branches percentage at the top of the file. It says 50%, one of two branches, as of now.

    [17:32 - 17:43] One thing to note: if you already have an instance of the code coverage report open in your browser and you update your tests and run the coverage script again, then refresh it, you'll see the updated code coverage here.

    [17:44 - 17:49] No need to open the report all over again. And that's about all there is to this.

    [17:50 - 18:09] One thing to note is that Jest doesn't care if the project uses React Testing Library or Enzyme or both. Something I want to make clear in this lesson is that if you're working on a project that already has some integration testing written using Enzyme or some other testing framework besides React Testing Library, it doesn't matter.

    [18:10 - 18:26] Jest will run all the integration tests regardless of what they're written with, and it will combine the results for the coverage report. This is extra good news because it means that if you're adding new functionality to an existing application, you won't need to rewrite all the unit tests just to use RTL.

    [18:27 - 18:51] Instead, you can rewrite the tests as you revisit older components, or not at all if it turns out that you're not touching them again, and you can leave the older tests in place; they'll continue to run and be factored into the overall code coverage report with no issues. In the next lesson, we'll begin writing our first few integration tests using Jest and React Testing Library for some of our container components.