Isolate Tests #244

Merged: 5 commits merged into main from feature/isolate-tests on Nov 3, 2022

Conversation

@zonotope (Contributor) commented Nov 3, 2022

This started out on a different branch, but in the spirit of releasing lots of smaller patches more quickly instead of one huge branch all at once, I spun these test changes out here because they are self-contained.

My overall goal is to eliminate, or at least limit, our use of mutable state and to rely instead on immutable, persistent data structures within a more functional, data-driven architecture. That architecture will help us write more idiomatic Clojure code and let us reap all the benefits Clojure was designed to provide.

The first thing here is a small change removing some unused code. I described my rationale for that change in commit 9e5a7bb.

I have also added a dev-resources path to store our sample/test data and context, and I've set this resource path up for easy loading both at the repl and in the tests.
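As a rough sketch of that wiring (the :dev alias shape and the sample file name here are assumptions for illustration, not the repository's actual configuration):

```clojure
;; deps.edn (sketch): expose dev-resources to the dev REPL and the test runner
{:aliases
 {:dev     {:extra-paths ["dev-resources"]}
  :cljtest {:extra-paths ["dev-resources" "test"]}}}
```

With the directory on the classpath, sample data can be loaded the same way at the REPL and in tests:

```clojure
(require '[clojure.java.io :as io])

;; "sample/context.edn" is a hypothetical file name for illustration
(slurp (io/resource "sample/context.edn"))
```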

The rest of the changes here refactor the tests to run in isolation instead of relying on globals created by fixtures. I consider clojure.test fixtures an anti-pattern to be used sparingly because they rely on global state. Using these globals (especially when the globals themselves hold mutable state) couples our tests together and makes it difficult or impossible to run individual tests in isolation. When different tests share the same state, later tests tend to evolve to rely on the state set up by earlier tests. Not being able to reliably run individual tests in isolation slows down the development process considerably.

The author of the clojure.test namespace, Stuart Sierra, has voiced similar concerns about use-fixtures.

Instead of relying on fixtures to set up global state, we should do all the setup and teardown an individual test needs explicitly within that test itself.
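As a minimal sketch of that style (memory-conn and create-ledger below are hypothetical stand-ins for the project's real connection and ledger constructors, not its actual API):

```clojure
(ns example.isolated-test
  (:require [clojure.test :refer [deftest is]]))

;; Placeholder constructors standing in for the real ones; each call returns a
;; brand-new value, so no state is shared between tests.
(defn memory-conn []
  {:storage (atom {})})

(defn create-ledger [conn ledger-alias]
  {:conn conn :alias ledger-alias})

(deftest stages-data-in-isolation-test
  ;; all setup happens inside the test, so nothing leaks in from fixtures or
  ;; from tests that happened to run earlier
  (let [conn   (memory-conn)
        ledger (create-ledger conn "test/isolated")]
    (is (= "test/isolated" (:alias ledger)))))
```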

This watch was just a placeholder for future watching functionality, but it's not currently used. The watching functionality itself also encourages propagating mutable state and complects indexing with the ledger update operation. I think we should use explicit function calls with persistent values, rather than mutable state, to implement this functionality when we eventually need it.
This allows us to run only the integration tests with:

```
clojure -M:cljtest --focus-meta :integration
```
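That flag matches the metadata-focus selector exposed by test runners like kaocha, which implies the integration tests are tagged with ^:integration metadata on their deftest vars. A minimal, hypothetical example:

```clojure
(ns example.integration-test
  (:require [clojure.test :refer [deftest is]]))

;; only vars carrying ^:integration metadata are selected by
;; --focus-meta :integration
(deftest ^:integration create-and-query-ledger-test
  (is true)) ; placeholder body for illustration
```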
zonotope requested a review from a team on November 3, 2022 at 17:57
@cap10morgan (Contributor) commented:

I'm sure none of the below is news to you, just want to make sure it's all been thought through and measured:

One nice thing about :once fixtures is that they execute, well, once no matter how many tests you're running. That's a nice performance win over repeating the same setup work in each test. And it should support running individual tests in isolation just fine. Were you running into issues doing that?

I'm all for making sure the test environment is clean and decoupled in each test. But if there is setup work that can be done once for the whole suite, it's nice not to repeat it in each test. I'm not sure if something like this is applicable here, but in the idiomatic ledger tests I compromised at running a ledger server process once (slow) and creating a randomly-named ledger in each test (fast).

But perhaps the performance here is fast enough that we can push further towards isolation? Or if it's specifically use-fixtures you don't like, perhaps some of the setup process could be cached (if its performance warrants it)?

@zonotope (Contributor, Author) commented Nov 3, 2022:

> I'm sure none of the below is news to you, just want to make sure it's all been thought through and measured:
>
> One nice thing about :once fixtures is that they execute, well, once no matter how many tests you're running. That's a nice performance win over repeating the same setup work in each test. And it should support running individual tests in isolation just fine. Were you running into issues doing that?
>
> I'm all for making sure the test environment is clean and decoupled in each test. But if there is setup work that can be done once for the whole suite, it's nice not to repeat it in each test. I'm not sure if something like this is applicable here, but in the idiomatic ledger tests I compromised at running a ledger server process once (slow) and creating a randomly-named ledger in each test (fast).
>
> But perhaps the performance here is fast enough that we can push further towards isolation? Or if it's specifically use-fixtures you don't like, perhaps some of the setup process could be cached (if its performance warrants it)?

I totally agree, and I didn't mean to imply that we should never use use-fixtures, but that we should just use it sparingly. An "anti-pattern" to me isn't something that's necessarily wrong; it's just something that requires close inspection before using it.

If we have an expensive, time-consuming process that the whole test suite needs, then we can use a :once fixture as a cache so we don't repeat that step over and over and slow down the tests. The Stuart Sierra post I linked talks about using fixtures in a similar way.
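A sketch of that caching use of a :once fixture (the server helpers here are hypothetical stand-ins for a genuinely expensive resource; each test still builds its own cheap, isolated data):

```clojure
(ns example.cached-setup-test
  (:require [clojure.test :refer [deftest is use-fixtures]]))

;; hypothetical expensive component; in a real suite this might be a server
;; process or some other resource that is slow to start
(defonce shared-server (atom nil))

(defn start-server [] {:port 58090})
(defn stop-server [_server] nil)

;; a :once fixture used purely as a cache: the expensive work runs one time
;; per namespace, wrapped around all of the tests
(defn with-server [test-fn]
  (reset! shared-server (start-server))
  (try
    (test-fn)
    (finally
      (stop-server @shared-server)
      (reset! shared-server nil))))

(use-fixtures :once with-server)

(deftest cheap-isolated-data-test
  ;; the per-test data is cheap and unique, so tests stay independent
  (let [ledger-name (str "test/ledger-" (java.util.UUID/randomUUID))]
    (is (some? (:port @shared-server)))
    (is (string? ledger-name))))
```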

The problem with our tests as they currently are is that we're using fixtures to dynamically update a stateful connection var and a map of stateful ledgers.

Creating a new memory connection and ledger is fast, so we don't need to worry about caching anything right now. Both the connection and ledger objects are currently stateful, so if we load these vars up and rely on them in our tests, any state changes we make to those objects in early tests will be reflected in what later tests see. That means that as we add new tests, we'll be writing them implicitly against that updated state, and that's what will eventually make it harder to run the tests in isolation. Allowing different tests to share state opens up the possibility of implicit dependencies developing between those tests, and the more tests we add, the harder those dependencies will be to untangle later.

I'd like to take the opportunity to put us on the decoupled path now, while we haven't yet added many tests that rely on the connection and ledgers already being initialized and populated. I think our default should be to write tests without fixtures. We can definitely reach for fixtures later if we add a new, essential component that every test needs but that is expensive to initialize; we just don't have any such components yet.

@cap10morgan (Contributor) commented:

I read that Stuart Sierra post just now. I didn't realize that :once fixtures were run once per namespace, but it makes perfect sense that they are. If we weren't low-key deprecating ledger right now, I'd be going in there to apply his pattern of only doing things once per run instead of once per ns. 😆

And I am now convinced that (use-fixtures :each ...) is always bad.

zonotope merged commit 9812633 into main on Nov 3, 2022
zonotope deleted the feature/isolate-tests branch on November 3, 2022 at 21:44