
pytest fixture warning - more harm than good? #2370

Closed
The-Compiler opened this issue Mar 12, 2020 · 2 comments · Fixed by #2387
Labels
legibility (make errors helpful and Hypothesis grokable), opinions-sought (tell us what you think about these ones!)

Comments

@The-Compiler
Contributor

I just discovered #377/#2356 and wonder whether the warning is really appropriate. Let me share some (simplified) scenarios from my test suite where I feel like it's doing more harm than good:


monkeypatch

from hypothesis import given
from hypothesis.strategies import text

@given(text())
def test_something(inp, monkeypatch):
    monkeypatch.setattr(mod, 'some_attribute', 42)  # 'mod' is the module under test
    mod.do_something(inp)

Here, I'm using the builtin monkeypatch fixture to patch something away. In pretty much any scenario I can imagine, this should work just fine with hypothesis calling the test function multiple times. With the warning, I don't see a way to make the same thing work again...

Providing some object

import hypothesis
import pytest
from hypothesis.strategies import text

@pytest.fixture
def hist():
    # do some setup of child-objects
    return History(...)

def test_without_hypothesis(hist):
    ...  # exercise hist without Hypothesis

# (repeat this a couple of times for other tests)

@hypothesis.given(text())
def test_add(hist, s):
    hist.add(s)  # make sure there is no crash

Here things probably aren't as clear-cut. I am sure whatever thing I do inside my fixture in the hypothesis test can be repeated, yet for other tests I'd like the additional isolation between tests. There are ways to solve this one, but it'd mean code duplication between two identical fixtures.
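One way to avoid that duplication is to factor the setup into a plain function that the fixture merely wraps, and call the function directly inside Hypothesis tests. A sketch, using a minimal stand-in `History` class (the real class and its child-object setup are whatever the project defines):

```python
import pytest

class History:  # minimal stand-in for the real class
    def __init__(self):
        self.items = []
    def add(self, s):
        self.items.append(s)

def make_hist():
    # Shared setup: do the child-object wiring here, once, in one place.
    return History()

@pytest.fixture
def hist():
    # Non-Hypothesis tests keep the isolation of a function-scoped fixture.
    return make_hist()

def test_add_plain():
    # A Hypothesis test calls make_hist() in the test body instead of
    # taking the fixture argument, sidestepping the fixture warning.
    h = make_hist()
    h.add("x")
    assert h.items == ["x"]
```

This keeps one source of truth for the setup, at the cost of Hypothesis tests constructing the object explicitly.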


I can see why the warning is there: there certainly are situations where it's good that developers are alerted to things not behaving the way they'd naively expect.

Yet, I think it's too strict: Why is it a DeprecationWarning? Is this behavior really something which will turn into an error? If so, are there good reasons for turning it into an error (e.g. some possible simplification inside hypothesis I'm not aware of), or would it just deliberately break situations which worked just fine?

I think:

  • It should be a "normal" warning - ideally even with its own warning class, so that it can easily (and safely) be ignored for cases where it's not a problem
  • It certainly shouldn't be a deprecation
  • Maybe the pytest plugin should even provide a hypothesis_fixture_whitelist setting (or so), to have a simpler way to say "hey, this scenario is okay" than via filterwarnings
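To illustrate the "own warning class" point: as things stand, the warning can only be filtered by matching on its message text, because it shares the generic `hypothesis.errors.HypothesisDeprecationWarning` class with every other deprecation. A sketch of the current workaround (the message regex below is illustrative, not the exact wording Hypothesis emits; check your version's output):

```ini
# pytest.ini -- silence the fixture warning for known-safe tests.
# Matching on the message is fragile; a dedicated warning class would
# let this be a simple "ignore::<Class>" line instead.
[pytest]
filterwarnings =
    ignore:.*fixture.*:hypothesis.errors.HypothesisDeprecationWarning
```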

What do you think?

@Zac-HD
Member

Zac-HD commented Mar 12, 2020

monkeypatch

See also pytest-dev/pytest#6888

I am sure whatever thing I do inside my fixture in the hypothesis test can be repeated, yet for other tests I'd like the additional isolation between tests.

I can see where you're coming from, but in general it's either safe to reuse a fixture (in which case it doesn't need to be function scoped), or it isn't - and that doesn't vary between Hypothesis and other tests.

Of course there really are cases where this warning triggers for programs which are actually OK! However, I'm also concerned by the inverse problem, where users incorrectly believe that cross-invocation persistence isn't a problem but module-scope isn't an option for some reason.

Is this behavior really something which will turn into an error? If so, are there good reasons for turning it into an error, or would it just deliberately break situations which worked just fine?

Our judgement when we merged it was that deliberately breaking some working setups was an unfortunate but acceptable cost to convert a (we hope) larger set of subtle problems into explicit warnings and eventually errors.

I don't think there's a 'clearly correct' response, but personally I favor designing for beginner and intermediate users - while inelegant, there are workarounds for expert users, but if beginners get stuck or frustrated or generally have a bad experience, we might as well not exist at all for them 😕

@Zac-HD added the labels "legibility (make errors helpful and Hypothesis grokable)" and "opinions-sought (tell us what you think about these ones!)" on Mar 12, 2020
@The-Compiler
Contributor Author

See also pytest-dev/pytest#6888

Which is a non-solution in my eyes: after my Hypothesis test is done, I want whatever I changed via monkeypatch to be undone properly. I don't want that to happen only at the end of the module (or test run).

I can see where you're coming from, but in general it's either safe to reuse a fixture (in which case it doesn't need to be function scoped), or it isn't - and that doesn't vary between Hypothesis and other tests.

I disagree, see my example about monkeypatch above. If I have some fixture which should do some kind of cleanup after some action, I'd prefer it to do this cleanup as soon as possible, so that it affects as few unrelated tests as possible.

Of course there really are cases where this warning triggers for programs which are actually OK!

Which is why I think it should just be a normal warning (to make the user aware of the consequences of their actions), not a deprecation.

However, I'm also concerned by the inverse problem, where users incorrectly believe that cross-invocation persistence isn't a problem but module-scope isn't an option for some reason.

You mean scenarios where users deliberately turn the warning off, run into issues, and you get to debug it? I don't think this will be a big issue to be honest - and it's likely not something you could prevent. If this turned into an error, I'd probably just disable the plugin entirely via -p no:hypothesis and continue using it that way...

Our judgement when we merged it was that deliberately breaking some working setups was an unfortunate but acceptable cost to convert a (we hope) larger set of subtle problems into explicit warnings and eventually errors.

I don't think there's a 'clearly correct' response, but personally I favor designing for beginner and intermediate users - while inelegant, there are workarounds for expert users, but if beginners get stuck or frustrated or generally have a bad experience, we might as well not exist at all for them

I fully agree, modulo the "eventually errors" part - the warning in itself is probably a good idea (and will help people realize what's going on). I just don't think it makes sense to make things more difficult for expert users in scenarios where things would be fine, for almost no gain (other than preventing novice users from deliberately ignoring the warning).
