- Introduction
- Migration Background
- Migration Guides
- Philosophy
- The Path Forward
- Deciding to Migrate a Test's Approach
- Migrating to Use Case Testing
- Documentation References
This document focuses on giving context around why migrating away from OverReact Test is necessary, how the broader migration guide is structured, and how the new testing philosophy affects Workiva's test patterns.
First, the big question - do we have to migrate?
The only time you need to migrate is if the test relies on a class component instance. This is because the new norm is to build components using the modern patterns of hooks, ref forwarding, and all the other tools that function components expose. OverReact Test does not fully support function-based components because its APIs and patterns rely on querying component instances, and function components have no instances for those APIs to find. Consequently, as we transition UI to function components, the tests will need to be migrated as well. A minimal function component is sketched below for reference.
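This sketch assumes react-dart's `registerFunctionComponent` and the `package:react/hooks.dart` API; the `Counter` component itself is hypothetical and exists purely for illustration:

```dart
import 'package:react/react.dart' as react;
import 'package:react/hooks.dart';

// A function component: there is no backing class instance, so
// instance-based OverReact Test APIs have nothing to query.
final Counter = react.registerFunctionComponent((props) {
  final count = useState(0);

  return react.button(
    {'onClick': (_) => count.set(count.value + 1)},
    'Count: ${count.value}',
  );
}, displayName: 'Counter');
```

A test can only observe a component like this from the outside, through its DOM and its callbacks, which is exactly the vantage point RTL is built around.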
However, all component tests can be migrated to gain the simplicity and maintainability offered by React Testing Library (RTL). RTL makes testing much more delightful, and by migrating our tests to it, we will be able to:
- Test components in a way that reflects how a user will actually interact with them.
- Avoid the pitfalls of testing implementation details.
- Encourage our components to be accessible.
Migration guides are split into four parts that reflect how a test is set up:
```dart
import 'package:react/react.dart' as react;
import 'package:react_testing_library/matchers.dart' show isChecked;
import 'package:react_testing_library/react_testing_library.dart' as rtl;
import 'package:react_testing_library/user_event.dart';
import 'package:test/test.dart';

void main() {
  test('checkbox can be checked by clicking it', () {
    // [1] Render the component.
    final view = rtl.render(react.input({'type': 'checkbox'}));

    // [2] Query for relevant nodes to test.
    final checkbox = view.getByRole('checkbox');

    // [3] Interact with the component.
    UserEvent.click(checkbox);

    // [4] Verify the expected result.
    expect(checkbox, isChecked);
  });
}
```
- Migration Guide for Component Rendering
- Migration Guide for Queries
- Migration Guide for Component Interactions
- Migration Guide for Expectations
Each guide gives insight into the new mentality behind testing with RTL, along with an overview of the APIs available to facilitate testing. Before diving into each of those though, make sure you understand RTL's philosophy and best practices (discussed below)!
React Testing Library's guiding philosophy is different from what many are used to coming from OverReact Test. Before migrating, take time to read this section and the connected blog posts. Then, as you are migrating tests, keep in mind that the migration may be adjusting the goal of a test as a whole in addition to the underlying APIs being used.
RTL's philosophy, in its simplest form, is:
> The more your tests resemble the way your software is used, the more confidence they can give you.

- Kent C. Dodds
A key takeaway from that quote is not to test implementation details. This is crucial because RTL is opinionated towards avoiding implementation details, making the right thing the easy thing. On the flip side, tests that historically deviated from this path become a lot more challenging to rewrite in RTL. That means migrating tests is also about making sure they align with this philosophy and avoid implementation details. Otherwise, the migration will be more difficult and will not provide the confidence it should.
When it's said not to test implementation details, what does that mean? Dodds addresses that question. His short answer is:
> Implementation details are things which users of your code will not typically use, see, or even know about.
That means implementation details can be somewhat insidious. It's not just about avoiding using certain getters or patterns, but also about understanding what an API can reveal to a user and only testing those possibilities. Within the scope of migrating from OverReact Test, there are some code smells that are indicative of implementation details being tested. Most of them stem from grabbing the component instance itself and utilizing the class APIs to check data on that instance (props, state, children, etc).
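To illustrate the alternative, the sketch below verifies a toggle through the DOM rather than by reading state off an instance. The `Disclosure` component is hypothetical, built with react-dart's function component and hooks APIs purely so the example is self-contained:

```dart
import 'package:react/react.dart' as react;
import 'package:react/hooks.dart';
import 'package:react_testing_library/react_testing_library.dart' as rtl;
import 'package:react_testing_library/user_event.dart';
import 'package:test/test.dart';

// A hypothetical component that reveals content once its trigger is clicked.
final Disclosure = react.registerFunctionComponent((props) {
  final isOpen = useState(false);

  return react.div({}, [
    react.button({'key': 'trigger', 'onClick': (_) => isOpen.set(true)}, 'Show details'),
    if (isOpen.value) react.p({'key': 'content'}, 'The details!'),
  ]);
}, displayName: 'Disclosure');

void main() {
  test('clicking the trigger reveals the details', () {
    final view = rtl.render(Disclosure({}));

    // Instead of asserting `state['isOpen']` on an instance, assert what
    // the user can (and cannot) see in the DOM.
    expect(view.queryByText('The details!'), isNull);

    UserEvent.click(view.getByRole('button'));

    expect(view.getByText('The details!'), isNotNull);
  });
}
```

Because the test never touches the component's internals, refactoring how `Disclosure` tracks its open state would not break it.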
If you want to dive deeper into what implementation details are, Kent's blog post is a great source.
Use case testing is the answer to how to avoid testing implementation details. The goal of a test is to increase confidence that your software works as expected. As Dodds notes, it needs to work as expected for two groups:
- Other programmers who will use the code
- Users of the application
These two groups make up the actual users of the code. RTL is opinionated towards verifying use cases for those two groups, as opposed to traditional line or branch coverage. Kent's blog post articulates this further, but the tl;dr is that a line of code exists because a use case requires it to. To test that line of code, write a test for the use case it supports.
It will be important to know what a test's use case is before migrating it, so we want to establish what a use case actually is. Predictably, Dodds' article helps with that. 😄
Summarizing from the article, a use case is a scenario that causes a change that a user will notice. When we're defining a use case for an application user, it's likely an interaction that changes the UI. When the user is a developer, the scenario could be the creation of any side effect the component is capable of (event emissions, HTTP calls, etc.).
Note that a use case is defined by something coming out of the component. Application users will be seeing the DOM that a component created. Developers will see data or events that leak outside the walls of the component. If a use case seems like it needs to look inside the component's instance to verify the outcome, that's an indication it's testing implementation details and not a real user use case.
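To make the developer-facing case concrete, here is a minimal sketch that treats a callback emission as the observable outcome. It uses only the APIs shown in the checkbox example above; the test name and the `changeCount` counter are illustrative:

```dart
import 'package:react/react.dart' as react;
import 'package:react_testing_library/react_testing_library.dart' as rtl;
import 'package:react_testing_library/user_event.dart';
import 'package:test/test.dart';

void main() {
  test('toggling the checkbox notifies the consumer via onChange', () {
    var changeCount = 0;

    final view = rtl.render(react.input({
      'type': 'checkbox',
      // The onChange emission is the outcome that "leaks outside the
      // walls" of the component for a consuming developer to observe.
      'onChange': (_) => changeCount++,
    }));

    UserEvent.click(view.getByRole('checkbox'));

    // Verify the developer-facing outcome, not the checkbox's internals.
    expect(changeCount, 1);
  });
}
```

Nothing in this test knows how the checkbox tracks its checked state; it only verifies what escapes the component's boundary.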
Before migrating, it is important to check whether a test verifies a use case whenever that is ambiguous. Otherwise, migrating to RTL may feel like fitting a square peg into a round hole. When a test relies on implementation details to verify an outcome, it is easier to first correlate those details with a use case. Once the use case is known, migrate the test with that use case in mind. That approach gives the insight needed to know what code must be converted, removed, or added to fit the new philosophy.
Identifying a use case from an existing test is really just about figuring out what outcome the test is checking for the user. However, for some tests that may be non-trivial. Below are a couple of steps to help guide the process:

1. Spend a little time understanding which parts of the implementation are important for this test and how those parts interact. Here are some guiding questions:

    - What are the implementation details being asserted against?
    - What effect do those details have on the user? What does the component do to show the user (application user or a developer) that this is happening?
    - How is the test currently interacting with the component to create the final result that is asserted against?
    - Are there any interactions that don't align with what is being asserted against? Does it seem like assertions could be missing?

2. Using the analysis of the implementation details, determine the scenario (i.e., "use case") being tested.

    There may be a temptation here to rely on the test description (the first parameter of the `test` function) to decide the use case being tested. That's another valuable data point, but over time, the test's focus may have shifted. Instead, keep it in mind but use the analysis from the previous step to determine what the scenario is.

    With that understanding of the important outcomes being tested, think through how the component would get into this circumstance out in the wild. What data needs to exist? What would the user have to do? Convert this into a single, easily stated scenario. That's the use case!
Now that you understand RTL's philosophy, use case testing, and how to identify a use case, you can start your migration! As you work through tests, the related guides for rendering, querying, interacting, and expecting can be referenced.
The rest of this guide serves as a reference in the event that you encounter a test that is asserting against implementation details. If that happens, remember how to identify a use case and work through the decision tree below. Then, if it's necessary to migrate the test's approach as a whole, the Migrating to Use Case Testing section is dedicated to that effort.
In the case that you have encountered a test that is asserting against implementation details, use this section to decide what the best next step is. See "Identifying a Test's Use Case" if it is not clear what the test's use case is.
NOTE: The first check below ("Is the test asserting against implementation details?") is very specific. If you are using implementation details to query or interact, the corresponding migration guides walk through adjusting that! This bigger "migrating approaches" effort only applies if the expectation part of the test is so reliant on implementation details that the test needs to be rethought.
The outcomes are one of the following:
- A test is not verifying implementation details. From there, the migration should be a relatively simple API swap.
- A test is verifying implementation details but in an attempt to verify a use case. From here, the test should be refactored to assert outcomes that a user would notice, as opposed to the underlying implementation. This is the focus of the Migrating to Use Case Testing section.
- A test is verifying implementation details for the sake of verifying a specific internal behavior. In this case, the value the test actually adds needs to be weighed. It's important to note that testing implementation details can be considered an anti-pattern, ultimately hurting the codebase. With that in mind, the best path forward is to determine if the test can be reworked to verify a use case. If that use case already has coverage, it may be possible to remove the test entirely. There are exceptions to every rule, but those should be rare.
This section focuses on giving a framework for rethinking tests that are asserting against implementation details.
This step is best after verifying in the decision tree that:
- The test is asserting implementation details in an attempt to verify a use case
- No other tests already verify this use case
Remember that unnecessary tests, especially those which rely on implementation details, can cause developers grief! Once a test is deemed necessary and in need of refactoring, we can take it through a few steps:
1. Determine if the existing test strategy is the most appropriate. For example:

    - If this is testing in isolation, would an integration environment be more appropriate for this test?
    - Or vice versa, if it's an integration test, would isolation be better?

    Defining the use case may have changed the test's paradigm enough that the strategy should be revisited. Consider how the user would exercise this use case and whether the current strategy is the most appropriate for that scenario. If the component is tightly coupled with another when used in the real world, the interaction may be difficult to reproduce when tested in isolation. On top of that, switching to an integration test may create more confidence because it is closer to how the user would actually interact with the code.

    On the flip side, integration tests are often more complex, possibly involving mocking and lots of setup. In the case that the behavior to be tested does not require all that complexity, testing in isolation may be better. This is less likely, as integration tests tend to add more confidence than isolated tests, but if the test's paradigm has shifted enough, it may be a chance to simplify.

2. Decide what the correct expectations are.

    This step doesn't need to involve code. Instead, given what you now know about the test, imagine what the benchmarks for the user are when exercising this use case. What are the specific behaviors that the user should notice? This should not include any thought about the implementation details. Instead, what does the user see at the conclusion of the scenario, and what are the important, noticeable steps prior to that outcome? Those are the expectations. If you need inspiration to know what the possibilities are, browse the matcher section in the expectations guide to see how RTL supports implementation-detail-free `expect` statements (there is also a short sketch after these steps)!

    If it seems like there are multiple use cases being tested, use the expectations as a guide to differentiate them. If the use cases were grouped originally, they may be closely related. Answering why they're so closely related and which expectations they should share (and not share) can help inform how the test should be broken apart.

3. Remove anything that doesn't support the new expectations.

    The goal here is to remove as much cruft from the existing test as possible. The test may have assertions or lines that aren't necessary for the actual use case being tested. If there do seem to be multiple use cases being tested, create a new test for each use case and move any important logic into the new test body. Those can be revisited after migrating the current test.

    Depending on the test and your preference, it may even be worth removing relevant interaction or expectation statements. Instead of leaving code that may just get in your way, add code comments outlining the steps that existed before, along with any important details.

    In the end, the test should feel like a clean (not blank) slate that is ready to be reworked to match the new, implementation-detail-free expectations.

4. Begin the migration!

    From here, the test can be migrated like one that never relied on implementation details. As noted, there is a guide for each major test section (rendering, querying, interacting, expecting). Since the original test is slimmed down, there may be gaps to fill in, but those guides each give examples of ways to use RTL to do that!
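To make step 2 concrete, here is a minimal sketch of implementation-detail-free expectations, reusing the checkbox example from the top of this document. The test name and flow are illustrative; `isChecked` is the matcher shown earlier, and `isNot` comes from `package:test`:

```dart
import 'package:react/react.dart' as react;
import 'package:react_testing_library/matchers.dart' show isChecked;
import 'package:react_testing_library/react_testing_library.dart' as rtl;
import 'package:react_testing_library/user_event.dart';
import 'package:test/test.dart';

void main() {
  test('the user can toggle the checkbox on and back off', () {
    final view = rtl.render(react.input({'type': 'checkbox'}));
    final checkbox = view.getByRole('checkbox');

    // Each expectation is a benchmark the user could notice, not a peek
    // at props or state.
    expect(checkbox, isNot(isChecked));

    UserEvent.click(checkbox);
    expect(checkbox, isChecked);

    UserEvent.click(checkbox);
    expect(checkbox, isNot(isChecked));
  });
}
```

Every `expect` here asserts something the user could observe in the rendered DOM, so refactoring the checkbox's internals would not break the test.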
This section includes links to the articles and APIs mentioned in this document.
- Kent C. Dodds Testing Reference
- Testing Implementation Details
- How to Know What to Test
- Write Tests
- Common RTL Mistakes
- React Testing Library (JS)
- React Testing Library (Dart)