
-rx and -rX should turn on output for xfail/xpass #11233

Closed
4 tasks done
okken opened this issue Jul 19, 2023 · 5 comments · Fixed by #11574
Labels: type: bug (problem that needs to be addressed), type: proposal (proposal for a new feature, often to gather opinions or design the API around the new feature)

Comments


okken commented Jul 19, 2023

  • a detailed description of the bug or problem you are having
  • -rx and -rX (as well as -ra and -rA) should allow output for xfail and xpass, but they don't.
  • There seems to be no way at all to get the traceback from xfailed tests.
  • There seems to be no way to get the output from xfail/xpass (other than turning all xpasses into failures with strict xfail, but that still doesn't solve the xfail case).
  • The summary report for xfail doesn't include the assert line, as it does for FAILED.

This also comes up in discussion #9834 and issue #10618.

---- More detail about problem and expectations ----

Observations, based on the simple test script below (which includes test_pass, test_fail, test_xfail, test_xpass):

  1. Output and the exception traceback are reported for test_fail but not for test_xfail.

    • I would have expected test_xfail to look mostly just like test_fail.
    • If there's a reason to NOT report exception traceback and output for xfails, we should have an option to turn it on.
  2. assert 1 == 2 is displayed in summary info for FAILED but not for XFAIL.

    • I can't come up with a reason why this would be correct behavior.
    • assert 1 == 2 should show up for XFAIL also.
  3. -rP (which is included in -rA) is "pass with output", and it applies to PASSED, but not XPASS.

Opinion on how to fix this:

  • XFAIL should act like FAILED if it's turned on with -rx or -ra or -rA
    • output should be reported
    • traceback should be reported
    • assert message should be listed in summary
  • XPASS should act like PASSED if it's turned on with -rX or -ra or -rA
    • output should be reported
  • It seems reasonable that someone might not want to see all of this extra output.
    • That's why I've suggested that this extra output NOT be on by default.
    • Controlling the extra output with -r flags seems like the right way to do this.

These are my opinions, of course, but this follows the principle of least surprise, and it doesn't require any new flags.
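To make the proposal concrete, here is a minimal sketch of the intended -r semantics. This is illustrative only; the names and structure below are hypothetical and are not pytest's internals.

```python
# Illustrative sketch of the proposed -r semantics. NOT pytest's actual
# implementation; all names here are hypothetical.

# Map -r characters to report categories (only the subset relevant here).
REPORTCHAR_TO_CATEGORY = {
    "f": "failed",
    "P": "passed",   # "pass with output"
    "x": "xfailed",
    "X": "xpassed",
}


def expand_reportchars(chars: str) -> set:
    """Expand a -r argument string into a set of report categories."""
    categories = set()
    for ch in chars:
        if ch == "A":
            # 'A' means "all": every category, including passes.
            categories.update(REPORTCHAR_TO_CATEGORY.values())
        elif ch == "a":
            # 'a' means "all except passes".
            categories.update(
                v for k, v in REPORTCHAR_TO_CATEGORY.items() if k != "P"
            )
        elif ch in REPORTCHAR_TO_CATEGORY:
            categories.add(REPORTCHAR_TO_CATEGORY[ch])
    return categories


def should_show_output(category: str, reportchars: str) -> bool:
    # Under the proposal, selecting a category with -r also turns on its
    # captured output / traceback sections.
    return category in expand_reportchars(reportchars)
```

With this mapping, `-rx` turns on output for xfailed tests, `-rX` for xpassed tests, and `-ra`/`-rA` cover both.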

  • output of pip list from the virtual environment you are using
$ pip list
Package    Version
---------- -------
colorama   0.4.6
iniconfig  2.0.0
packaging  23.1
pip        23.2
pluggy     1.2.0
pytest     7.4.0
setuptools 65.5.0
  • pytest and operating system versions

    • pytest 7.4.0
    • Windows something, but also tested on Mac, so I think OS is irrelevant.
  • minimal example if possible

Example: test_foo.py

import pytest

def test_pass():
    print('in test_pass()')
    a, b = 1, 1
    assert a == b

def test_fail():
    print('in test_fail()')
    a, b = 1, 2
    assert a == b

@pytest.mark.xfail
def test_xfail():
    print('in test_xfail()')
    a, b = 1, 2
    assert a == b

@pytest.mark.xfail
def test_xpass():
    print('in test_xpass()')
    a, b = 1, 1
    assert a == b

Current output:

$ pytest -rA test_foo.py
============================= test session starts =============================
platform win32 -- Python 3.11.0, pytest-7.4.0, pluggy-1.0.0
rootdir: C:\Users\okken\projects\instrument_updater
configfile: tox.ini
collected 4 items

test_foo.py .FxX                                                         [100%]

================================== FAILURES ===================================
__________________________________ test_fail __________________________________

    def test_fail():
        print('in test_fail()')
        a, b = 1, 2
>       assert a == b
E       assert 1 == 2

test_foo.py:11: AssertionError
---------------------------- Captured stdout call -----------------------------
in test_fail()
=================================== PASSES ====================================
__________________________________ test_pass __________________________________
---------------------------- Captured stdout call -----------------------------
in test_pass()
=========================== short test summary info ===========================
PASSED test_foo.py::test_pass
XFAIL test_foo.py::test_xfail
XPASS test_foo.py::test_xpass
FAILED test_foo.py::test_fail - assert 1 == 2
============== 1 failed, 1 passed, 1 xfailed, 1 xpassed in 0.28s ==============

okken commented Jul 19, 2023

I'd be happy to work on this, but first I was hoping for

  • Agreement that -rx and -rX are the right way to control output for xfail/xpass.
  • Some hint at where in the code the changes should go. reports.py?
  • A hint at which tests could be used as examples to test this change in behavior

Of course, I'd be "happier" if someone else worked on this. :)


okken commented Jul 19, 2023

Note that this is a significant problem for me currently, and we don't have a workable solution.


okken commented Jul 19, 2023

I'm looking in terminal.py now.
It looks like there are sections for ERRORS, FAILURES, PASSES.
Seems like there also need to be XFAILURES and XPASSES sections, perhaps after FAILURES and before PASSES.
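A self-contained sketch of that grouping idea. The Report class below is a simplified stand-in for pytest's TestReport, and the rendering is illustrative only, not pytest's actual terminal code.

```python
# Simplified sketch of adding XFAILURES/XPASSES sections to the terminal
# summary. Report is a stand-in for pytest's TestReport; only the
# grouping/ordering idea mirrors the proposal in this issue.
from dataclasses import dataclass


@dataclass
class Report:
    nodeid: str
    category: str        # "failed", "xfailed", "xpassed", "passed"
    capstdout: str = ""  # captured stdout from the test
    longreprtext: str = ""  # traceback / assertion text, if any


# Proposed section order: FAILURES, then XFAILURES, then XPASSES, then PASSES.
SECTION_ORDER = ["failed", "xfailed", "xpassed", "passed"]
SECTION_TITLES = {
    "failed": "FAILURES",
    "xfailed": "XFAILURES",
    "xpassed": "XPASSES",
    "passed": "PASSES",
}


def render_sections(reports, wanted):
    """Render only the sections selected by -r, in the proposed order."""
    lines = []
    for category in SECTION_ORDER:
        if category not in wanted:
            continue
        group = [r for r in reports if r.category == category]
        if not group:
            continue
        lines.append(f"== {SECTION_TITLES[category]} ==")
        for r in group:
            lines.append(f"-- {r.nodeid} --")
            if r.longreprtext:
                lines.append(r.longreprtext)
            if r.capstdout:
                lines.append("Captured stdout: " + r.capstdout)
    return "\n".join(lines)
```

For the example test file in this issue, passing `wanted={"xfailed", "xpassed"}` would print an XFAILURES section containing both the `assert 1 == 2` traceback text and the captured `in test_xfail()` output.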

okken added the "type: bug" and "type: proposal" labels on Jul 20, 2023

okken commented Jul 20, 2023

I'm not sure if I'm using labels right. I think this missing functionality is a bug, and I have a proposed way it should work.

nicoddemus (Member) commented:

Hi @okken,

Sorry for the silence on this, seems like this one fell through the cracks.

> Agreement that -rx and -rX are the right way to control output for xfail/xpass.

I think so.

> Some hint at where in the code the changes should go. reports.py?

That would go into terminal.py.

> A hint at which tests could be used as examples to test this change in behavior

There are several tests in test_terminal.py that handle the -r options.

There's #11574 if you want to comment/help review. 👍

bwoodsend added a commit to bwoodsend/pyinstaller that referenced this issue May 28, 2024
pytest-dev/pytest#11233 overloaded pytest's summary parameter to make it
also add tracebacks for xfailed tests. This makes finding the real
failures amongst the new xfail error messages painful. Disable this,
which unfortunately means that we also lose the xfails and xpasses from
the short test summary.
bwoodsend added a commit to pyinstaller/pyinstaller that referenced this issue May 30, 2024 (same commit message as above)
rokm added a commit to rokm/pyinstaller-hooks-contrib that referenced this issue Jun 29, 2024
With pytest-dev/pytest#11233, enabling summaries for xfailed tests
also enables their full error tracebacks, which complicates finding
the actual test failures. There is no way of disabling just the
tracebacks, so disable the summaries of xpassed/xfailed tests
altogether.
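The mitigation those commits describe amounts to listing the -r report characters explicitly and leaving out x and X. A sketch of what that could look like; the exact addopts are hypothetical and depend on the project's existing configuration:

```ini
# Hypothetical pytest.ini fragment: request summary lines only for
# failures (f), errors (E), and skips (s). Omitting x/X keeps the
# xfail/xpass tracebacks and summary lines out of the report.
[pytest]
addopts = -rfEs
```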
rokm added a commit to pyinstaller/pyinstaller-hooks-contrib that referenced this issue Jun 29, 2024 (same commit message as above)