pytest function scoped fixtures should run once per example, not once per test #377

Closed · DRMacIver opened this issue Oct 18, 2016 · 24 comments · Fixed by #2356
Labels: bug (something is clearly wrong here), interop (how to play nicely with other packages)

Comments

DRMacIver (Member) commented Oct 18, 2016

Because Hypothesis exposes only a single test function, pytest runs its fixtures once for that function rather than once per generated example.

This is a long-running problem with Hypothesis's interaction with pytest, and there is an open bug about it on the pytest side too.

Fixing this is tricky. The fixtures-integration branch has a partial prototype which makes some progress towards making it work, but it's very far from complete and has a number of issues which are likely to need substantial work to fix.

I am currently seeking funding to work on this ticket and it is unlikely to make much progress without said funding.
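
(For illustration only: a minimal sketch of the behaviour described above, with hypothetical fixture and test names. The function-scoped fixture is set up once for the decorated test, so every example generated by @given sees the same object.)

import pytest
from hypothesis import given, strategies as st

@pytest.fixture  # function-scoped by default
def seen():
    return []  # many users expect a fresh list for every example

@given(st.integers())
def test_fixture_is_shared_across_examples(seen, n):
    # pytest sets up `seen` once for this test function, so values
    # appended by earlier examples are still visible in later ones.
    seen.append(n)
    assert len(seen) >= 1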

Zac-HD added the bug (something is clearly wrong here) and for-a-modest-fee labels on Apr 20, 2017
Zac-HD (Member) commented May 31, 2017

Related to #59

Zac-HD (Member) commented Apr 12, 2018

Note that the pytest-subtesthack package by @untitaker provides a decent workaround for this case.

Zac-HD (Member) commented Apr 26, 2018

Actually... it might be possible to incorporate a version of pytest-subtesthack into our pytest plugin that activates this logic only for Hypothesis tests. @untitaker, do you think this could work?

untitaker (Contributor) commented:

Maybe, but the version would look very different, assuming you want this to be the new default behavior of given.

DRMacIver (Member, Author) commented:

> Actually... it might be possible to incorporate a version of pytest-subtesthack into our pytest plugin that activates this logic only for Hypothesis tests

I think we need to be quite careful about this. There are a bunch of problems with pytest-subtesthack (or were, last I looked) and it's a fairly major change in behaviour. If we're going to support pytest function-scoped fixtures, I think we need to do it properly.

Zac-HD (Member) commented Apr 26, 2018

Fair enough! I didn't expect it to be that easy, but thought it was worth mentioning.

untitaker (Contributor) commented:

I think @ossronny can judge better than me how bad the hack really is. I'm not really sure about adding this to Hypothesis, but I'm also not sure it will ever be fixed in pytest either.

untitaker (Contributor) commented:

After using subtesthack for a while I am now of the opinion that it's a good thing for the user to explicitly think about which fixtures really need setup/teardown, or if they are even testing the right thing if they have fixtures like that.

If given were to implicitly set up and tear down fixtures, it would result in fewer test runs than are possible with a smaller, more unit-y test.

I am not the kind of guy who has a strong opinion on which tests are supposed to test what (unit tests vs integration tests... most of the time I am just glad to have any tests at all), but I think Hypothesis doesn't really work that well with I/O-heavy tests. I'm constantly finding old integration tests of mine that use Hypothesis and run way slower than necessary while testing less of the input space than possible.

Zac-HD (Member) commented Oct 10, 2019

> After using subtesthack for a while I am now of the opinion that it's a good thing for the user to explicitly think about which fixtures really need setup/teardown, or if they are even testing the right thing if they have fixtures like that.

My time doing triage on the pytest issue tracker has left me feeling the same way, as well as less optimistic that there's a nice way to hook into fixture internals (even if there were, it would be horribly pytest-version-dependent).

@DRMacIver, how would you feel about outright deprecating use of function-scoped fixtures with @given? I think we can detect that, and it's a much cleaner solution than the current silent subtle maybe-broken state they're left in...

untitaker (Contributor) commented Oct 10, 2019 via email

Zac-HD (Member) commented Oct 10, 2019

> Allow use of fixtures with >function scope; emit warnings for function-scoped fixtures.

IMO this is a strict improvement over the current situation where we silently reuse them and surprise users who actually did need those values scoped to a single test invocation.

DRMacIver (Member, Author) commented:

> @DRMacIver, how would you feel about outright deprecating use of function-scoped fixtures with @given? I think we can detect that, and it's a much cleaner solution than the current silent subtle maybe-broken state they're left in...

I think this is a good idea. Ideally I'd still like to transparently support it in future, but we're not likely to make any progress on this in the foreseeable future, and adding a deprecation warning now gives us a certain amount of leeway: we can either make it an error, or we can support it "properly", but until we do one or the other of these it will have significantly fewer gotchas for users if we warn them.

untitaker (Contributor) commented:

How do I disable this warning? Am I supposed to refactor my tests to nested fns?

from hypothesis import given, strategies as st

def test_foo(fixture1, fixture2):
    # fixture1/fixture2 are set up once by pytest; @given drives `inner`
    # once per generated example.
    @given(a=st.text())
    def inner(a):
        assert a or not a

    inner()

That would be less of a gotcha for sure and at the same time we can get rid of some of the argument faking code in hypothesis.

DRMacIver (Member, Author) commented:

> How do I disable this warning? Am I supposed to refactor my tests to nested fns?

It's certainly an option. I've been idly thinking it would be sensible to have a decorator to make it easier to run tests directly (basically just the equivalent of @given plus immediately calling the function), but TBH the main way to disable this warning is "don't do that then". There are very few circumstances under which the combination of given and a function-scoped fixture is the right thing to do.

Another option, which would get the correct semantics (though I'm not sure how easy it would be), is to refactor your function-scoped fixtures into functions you can call explicitly within the test body.
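
(A rough sketch of that refactoring, with hypothetical names: replace the function-scoped fixture with a context manager that the test body enters explicitly, so setup and teardown happen once per generated example.)

from contextlib import contextmanager
from hypothesis import given, strategies as st

@contextmanager
def fresh_database():  # hypothetical stand-in for the old fixture
    db = {"rows": []}  # set up a fresh resource
    try:
        yield db
    finally:
        db.clear()     # tear it down again

@given(st.integers())
def test_insert(n):
    with fresh_database() as db:  # runs for every example, not once per test
        db["rows"].append(n)
        assert db["rows"] == [n]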

DRMacIver (Member, Author) commented:

> at the same time we can get rid of some of the argument faking code in hypothesis.

This wouldn't be true FWIW. We'd still need the argument-handling code for less fine-grained fixtures and for e.g. handling self on methods. There may be other use cases too, I don't know. It's specifically function-scoped fixtures that we'd want to warn on.

untitaker (Contributor) commented:

Forwarding, sure, but not going through the effort of making inspect return the correct results. Right?

DRMacIver (Member, Author) commented:

> Forwarding, sure, but not going through the effort of making inspect return the correct results. Right?

No, we would still need that in order to get proper support for module-scoped fixtures. All pytest fixtures are passed through function arguments regardless of their scope.
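
(A small sketch of what that looks like in practice, assuming illustrative names: the module-scoped fixture still arrives as an ordinary argument, mixed in with the arguments Hypothesis fills, so the forwarding code stays necessary.)

import pytest
from hypothesis import given, strategies as st

@pytest.fixture(scope="module")
def config():
    return {"retries": 3}  # set up once per module, safe to reuse across examples

@given(n=st.integers(min_value=0))
def test_uses_module_scoped_fixture(config, n):
    # pytest resolves `config`, Hypothesis supplies `n`; Hypothesis has to
    # forward the fixture argument to every example it runs.
    assert n + config["retries"] >= n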

untitaker (Contributor) commented:

Got it. I was thinking of discouraging decorating a test function altogether (and always going for nested functions), since a lot of fixtures are function-scoped anyway.

Zac-HD (Member) commented Nov 4, 2019

master...Zac-HD:fixture-scope-warning

Well, we can emit a warning if any function decorated with @given also uses a function-scoped fixture... but I'm not convinced that we should: it's a very noisy warning due to e.g. autouse fixtures, and it's far from clear that it's a problem in practice.

IMO the status quo is the best option short of forcing function-scoped fixtures to be example-scoped instead, which might even be possible these days.

DRMacIver (Member, Author) commented:

> due to e.g. autouse fixtures

Can we not exclude autouse fixtures and only warn on ones that are explicit arguments?

> it's far from clear that it's a problem in practice.

Given how often the subject comes up I think it's pretty clear it's a problem in practice!

untitaker (Contributor) commented:

Is the conclusion here that Hypothesis will warn in certain situations where it detects a fixture leak? Will Hypothesis include utilities to facilitate fixture setup/teardown as part of its pytest plugin?

Zac-HD (Member) commented Mar 1, 2020

> Is the conclusion here that Hypothesis will warn in certain situations where it detects a fixture leak?

Yes: if a test decorated with @given requests a function-scoped fixture, our pytest plugin will emit a warning.

> Will Hypothesis include utilities to facilitate fixture setup/teardown as part of its pytest plugin?

No. While I'd like this to work, the difficulty of making it robust is what led us to use the warning instead.

untitaker (Contributor) commented Mar 1, 2020

Okay, yeah. I think the warning message will need a lot of work, because right now it reads like "this feature is not implemented yet" while we actually mean "this feature cannot be implemented, and what you're doing is probably an antipattern". I think you'll see a bunch of feature requests popping up :)

Zac-HD (Member) commented Mar 1, 2020

It's an antipattern in the sense that it doesn't work, but might seem to for long enough to corrupt your caches or whatever 🤷‍♂️

If anyone wants to do or to fund the work involved, I'd be delighted though!

hroncok added a commit to hroncok/h2 that referenced this issue Mar 8, 2021
Hypothesis 6.6 will produce:

    E  hypothesis.errors.FailedHealthCheck: test/test_flow_control_window.py::TestAutomaticFlowControl::test_mixing_update_forms uses the 'frame_factory' fixture, which is reset between function calls but not between test cases generated by `@given(...)`.  You can change it to a module- or session-scoped fixture if it is safe to reuse; if not we recommend using a context manager inside your test function.  See https://docs.pytest.org/en/latest/fixture.html#sharing-test-data for details on fixture scope.
    E  See https://hypothesis.readthedocs.io/en/latest/healthchecks.html for more information about this. If you want to disable just this health check, add HealthCheck.function_scoped_fixture to the suppress_health_check settings for this test.

Since the tests already work around the problem,
acknowledging HypothesisWorks/hypothesis#377,
we can safely disable the check.

Hypothesis 5.49 introduced the function_scoped_fixture health check value,
hence it is now the lowest required version of hypothesis.
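
(For reference, suppressing that health check looks roughly like this; the fixture and test names are illustrative, and HealthCheck.function_scoped_fixture requires Hypothesis 5.49 or later.)

import pytest
from hypothesis import HealthCheck, given, settings, strategies as st

@pytest.fixture
def frame_factory():  # stands in for the fixture named in the error above
    return object()

@settings(suppress_health_check=[HealthCheck.function_scoped_fixture])
@given(st.integers())
def test_mixing_update_forms(frame_factory, n):
    # The fixture is still set up once per test rather than once per example;
    # suppressing the check just records that this is acceptable here.
    assert frame_factory is not None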