Previously the test runner setup would show a failure if any of our tests failed, despite the last result (of the test runner itself) showing 0 tests, 0 failures. Because this appears to happen inconsistently, we should revisit the test runner setup. We might want to consider CLJS tests #61 in conjunction with this, perhaps borrowing from Clara. The test runner setup came about due to inconsistencies between running tests with `macroexpand` on CircleCI vs. locally (they would pass locally but fail on CircleCI).
I wonder if this is because you're using `lein do` syntax, and lein's exit code reflects the `lein uberjar` task, not the `lein test` one? Possibly splitting this into two `- run:` lines in the CircleCI config would help.
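Something like this, maybe (a sketch; I haven't checked what our actual `.circleci/config.yml` looks like, so the step contents are assumed):

```yaml
# sketch only — steps assumed, not taken from the real config
- run: lein uberjar
- run: lein test
```

That way the build step's exit code would come from `lein test` itself rather than whatever `lein do` reports for the combined invocation.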
That may ultimately be causing a false positive, but it looks like we get one before that.
Running `lein test :only precept.test-runner` with a failing test:

```
... other tests...

Testing precept.listeners-test

FAIL in (listeners-state-transitions) (listeners_test.cljc:161)
I fail
expected: (= 0 1)
  actual: (not (= 0 1))

Ran 2 tests containing 23 assertions.
1 failures, 0 errors.

lein test precept.test-runner

Ran 0 tests containing 0 assertions.
0 failures, 0 errors.

$ echo $?
0
```
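If we keep a runner namespace, one way to make the process exit code reflect failures would be something along these lines (a sketch; the namespace list is assumed, and the real `precept.test-runner` may look quite different):

```clojure
(ns precept.test-runner
  (:require [clojure.test :as t]
            [precept.listeners-test])) ; ...plus the other test namespaces

(defn -main [& _]
  (let [{:keys [fail error]} (t/run-tests 'precept.listeners-test)]
    ;; run-tests returns a summary map; exit nonzero if anything failed
    (System/exit (if (zero? (+ fail error)) 0 1))))
```

With a `-main` like that, CI could invoke it via `lein run -m precept.test-runner` and the build would go red whenever `:fail` or `:error` is nonzero, regardless of what the last printed summary says.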
I added this test runner namespace to get the test output to agree with what I was seeing in the REPL for the macro tests, which assert that what our DSL expands to equals Clara's expansion. Running `lein test` alone gives failures for these. The main thing they have in common is that they call `macroexpand`; the expansion doesn't appear to happen the same way under `lein test`. E.g.
https://circleci.com/gh/CoNarrative/precept/295?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
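For context, the pattern these tests rely on is roughly this (a toy stand-in; `our-rule` and its expansion are hypothetical, not the real DSL):

```clojure
(ns precept.macro-test
  (:require [clojure.test :refer [deftest is]]))

;; hypothetical stand-in for the DSL macros under test
(defmacro our-rule [rule-name]
  `(def ~rule-name :rule))

(deftest expansion-agrees
  ;; fully qualifying the macro symbol avoids depending on *ns*,
  ;; which may differ between a REPL session and `lein test`
  (is (= '(def foo :rule)
         (macroexpand-1 '(precept.macro-test/our-rule foo)))))
```

One possible source of the REPL-vs-runner discrepancy: `macroexpand` resolves unqualified macro symbols against the current `*ns*`, and that binding isn't necessarily the test namespace when tests run outside the REPL.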