
Option to suppress exit code 5 (no tests run) #2393

Closed · jakirkham opened this issue May 5, 2017 · 23 comments

@jakirkham

For the most part, I think the addition of PR #817, which introduced a special exit code to denote that no tests were run, was a good idea. However, when I start a project, I may not have any tests yet because there is no real code. Still, I would like to get everything up and running (e.g. CI). In cases like this, it would be nice to have an option to suppress this behavior and return exit code 0 instead.

@The-Compiler
Member

FWIW you can do something like

def test_placeholder():
    pass

@RonnyPfannschmidt
Member

I'm closing this, as the very reason for having this error code, and not suppressing it, is to learn of broken CI setups. It's just as easy to put in a first green test that you know you will fill out later as it is to opt out of something you may forget about.

@nicoddemus
Member

Thanks for the suggestion @jakirkham, but I agree with @RonnyPfannschmidt that we shouldn't add yet another option to pytest to handle this case specially: the alternative solution is trivial and easy to do, while the possible drawbacks can be a pain (e.g. forgetting later that the option is set, as @RonnyPfannschmidt mentioned).

@mkleina

mkleina commented Apr 19, 2018

But what about the case where the -m (marker) option is present and the provided marker is not found? I am working on a case where multiple test suites are run in a particular order using tox:

[testenv:all]
commands =
    pytest testsuite1/ {posargs}
    pytest testsuite2/ {posargs}
    pytest testsuite3/ {posargs}

where {posargs} are the arguments passed to tox. When I want to preserve the order but filter by a selected marker, for example:

tox -e all -- -m smoketest

the pytest command fails with exit code 5 when no tests match the marker, and no further test suites are started.

Any idea how to handle this use case?

@blueyed
Contributor

blueyed commented Apr 19, 2018

@mkleina
You can wrap the command:
sh -c 'pytest testsuite1/ {posargs}; ret=$?; [ $ret = 5 ] && exit 0 || exit $ret'

I am using the following p wrapper myself:

#!/bin/sh
pytest "$@"
ret=$?
if [ "$ret" = 5 ]; then
  echo "No tests collected.  Exiting with 0 (instead of 5)."
  exit 0
fi
exit "$ret"

@RonnyPfannschmidt
Member

We ought to have a pytest plugin to do that for people who truly need it (since a shell wrapper is pretty anti-portable).
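A cross-platform alternative to the shell wrapper above could be a small Python launcher. This is only a sketch: the script name and the `map_exit_code` helper are my own, not an existing tool.

```python
#!/usr/bin/env python3
"""Hypothetical cross-platform wrapper: run pytest, but treat exit code 5
("no tests collected") as success. Behaves the same on Windows and POSIX."""
import subprocess
import sys

NO_TESTS_COLLECTED = 5


def map_exit_code(code):
    # Translate "no tests collected" into success; pass everything else through.
    return 0 if code == NO_TESTS_COLLECTED else code


if __name__ == "__main__":
    # Forward all command-line arguments straight to pytest.
    result = subprocess.run(["pytest", *sys.argv[1:]])
    sys.exit(map_exit_code(result.returncode))
```

Invoked as `python wrapper.py -m smoketest`, it forwards everything to pytest and only rewrites the one status code.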

@nicoddemus
Member

Agree with the plugin idea, should be simple to do.

@ChiKenNeg

Any news?

@RonnyPfannschmidt
Member

@ChiKenNeg nobody has worked on a plugin for this as far as I know; if you'd like to have one, I strongly suggest making one yourself.

@yashtodi94
Contributor

A little hack, but something along these lines should work fine (pytest v4.0.2).

Add this to your conftest.py:

def pytest_sessionfinish(session, exitstatus):
    if exitstatus == 5:
        session.exitstatus = 10  # or any arbitrary custom status you want to return

@Tadaboody
Contributor

@yashtodi94 would you be willing to create a plugin with that solution?

@yashtodi94
Contributor

Sure, I'll try submitting a PR by next week.

@Tadaboody
Contributor

@yashtodi94 I think implementing the plugin in a different repository and linking it here is the norm, because it isn't changing pytest directly.
There's a cookiecutter template that makes it easy to build one and submit it to pytest-dev:
https://github.com/pytest-dev/cookiecutter-pytest-plugin
If you need any help let me know!

@yashtodi94
Contributor

> @yashtodi94 I think implementing the plugin in a different repository and linking it here is the norm, because it isn't changing pytest directly.
> There's a cookiecutter template that makes it easy to build one and submit it to pytest-dev:
> https://github.com/pytest-dev/cookiecutter-pytest-plugin
> If you need any help let me know!

Yes, I discovered that after posting my previous comment. Thanks for the help :)

@yashtodi94
Contributor

@Tadaboody @RonnyPfannschmidt

First attempt at building a plugin: https://pypi.org/project/pytest-custom-exit-code/

@nicoddemus
Member

Awesome @yashtodi94, thanks for sharing! 🙃

szilard-nemeth added a commit to szilard-nemeth/google-api-wrapper that referenced this issue May 7, 2021
CirqBot pushed a commit to quantumlib/Cirq that referenced this issue Jun 14, 2021
Fix broken notebook tests in master.

It seems that when `pattern` does not match any tests in `pytest -k pattern`, pytest returns an exit code 5. This breaks the newly sharded notebook tests, which are leveraging the `-k partition-n` pattern for changed notebooks. 

This fix adds the fix as per the recommendations of pytest-dev/pytest#2393.
benfred added a commit to NVIDIA-Merlin/Merlin that referenced this issue Aug 5, 2022
Our nightly container builds are failing, because all the integration tests are skipped
(since we don't have faiss/feast on the containers). pytest returns error code '5'
in this case, causing us to fail the container.

Use the workaround as suggested pytest-dev/pytest#2393 (comment)
benfred added a commit to NVIDIA-Merlin/Merlin that referenced this issue Aug 5, 2022
@dmcnulla

dmcnulla commented Aug 30, 2022

https://pypi.org/project/pytest-custom-exit-code/

I made a variation of that because sometimes the re-run had zero tests and exited with code 5. Thanks for posting it.

export PYTHONPATH=.:./framework:./integration_tests
export CONFIG=test

python -m integration_tests.purge_queues
pytest -c int_tests.ini --disable-pytest-warnings
ret=$?
if [ "$ret" = 1 ]; then
  python -m integration_tests.purge_queues
  pytest --last-failed --last-failed-no-failures none -c int_tests.ini --disable-pytest-warnings
else
  exit $ret
fi

@joelazar

Also, here is an example of how to have a workaround for this in a Makefile:

.PHONY: test
test:
	sh -c 'poetry run pytest **.py; ret=$$?; [ $$ret = 5 ] && exit 0 || exit $$ret'

@paigeadelethompson

I guess this works for me; it would have been nice if pytest just had an option for it, though:

python -c 'import subprocess, sys; (lambda p: p.returncode == 5 and sys.exit(0) or sys.exit(p.returncode))(subprocess.run("pytest"))' && echo "ok"

@ardunster

I just ran into this issue today while attempting to amend a commit in which all I did was remove a few print() calls I'd added for debugging. The commit hook refused to actually rerun the tests because nothing materially changed (I assume), even though when run manually with the same command (pipenv run pytest) I get the output listing pass/skip counts rather than an error. I didn't want to change anything else in that commit, so I ended up using --no-verify, but this doesn't seem like desired behavior.

rht pushed a commit to rht/Cirq that referenced this issue May 1, 2023
@SonOfLilit

SonOfLilit commented Jun 18, 2023

I modified @yashtodi94's solution a bit to only swallow the failure if I passed the -k argument to filter the tests, so I still get a failure if something bad happened during test collection (you may want to check different argument(s) instead).

NO_TESTS_COLLECTED = 5
SUCCESS = 0

def pytest_sessionfinish(session, exitstatus):
    if exitstatus == NO_TESTS_COLLECTED and session.config.getoption("-k"):
        session.exitstatus = SUCCESS

zenflip pushed a commit to webventurer/python-template that referenced this issue Jan 12, 2024
- Add a 'test' target to run pytest against the 'tests' directory. If an
  exit code of 5 is found then exit with a 0 instead. Otherwise exit
  with the appropriate error code.

- Intercept 'pytest_sessionfinish' to do the same thing in conftest.py
  [1].

- Switch off 'disallowed_untyped_defs' in mypy.ini as hints should help
  readability where it makes sense (hence 'hints'). It should not be enforced
  all the time IMHO.

[1] pytest-dev/pytest#2393
zenflip pushed a commit to webventurer/python-template that referenced this issue Jan 13, 2024
zenflip pushed a commit to webventurer/python-template that referenced this issue Jan 13, 2024
dongjoon-hyun pushed a commit to apache/spark that referenced this issue Jan 23, 2024
…re in Python testing script

### What changes were proposed in this pull request?

This PR proposes to avoid treating the exit code 5 as a test failure in Python testing script.

### Why are the changes needed?

```
...
========================================================================
Running PySpark tests
========================================================================
Running PySpark tests. Output is in /__w/spark/spark/python/unit-tests.log
Will test against the following Python executables: ['python3.12']
Will test the following Python modules: ['pyspark-core', 'pyspark-streaming', 'pyspark-errors']
python3.12 python_implementation is CPython
python3.12 version is: Python 3.12.1
Starting test(python3.12): pyspark.streaming.tests.test_context (temp output: /__w/spark/spark/python/target/8674ed86-36bd-47d1-863b-abb0405557f6/python3.12__pyspark.streaming.tests.test_context__umu69c3v.log)
Finished test(python3.12): pyspark.streaming.tests.test_context (12s)
Starting test(python3.12): pyspark.streaming.tests.test_dstream (temp output: /__w/spark/spark/python/target/847eb56b-3c5f-49ab-8a83-3326bb96bc5d/python3.12__pyspark.streaming.tests.test_dstream__rorhk0lc.log)
Finished test(python3.12): pyspark.streaming.tests.test_dstream (102s)
Starting test(python3.12): pyspark.streaming.tests.test_kinesis (temp output: /__w/spark/spark/python/target/78f23c83-c24d-4fa1-abbd-edb90f48dff1/python3.12__pyspark.streaming.tests.test_kinesis__q5l1pv0h.log)
test_kinesis_stream (pyspark.streaming.tests.test_kinesis.KinesisStreamTests.test_kinesis_stream) ... skipped "Skipping all Kinesis Python tests as environmental variable 'ENABLE_KINESIS_TESTS' was not set."
test_kinesis_stream_api (pyspark.streaming.tests.test_kinesis.KinesisStreamTests.test_kinesis_stream_api) ... skipped "Skipping all Kinesis Python tests as environmental variable 'ENABLE_KINESIS_TESTS' was not set."

----------------------------------------------------------------------
Ran 0 tests in 0.000s

NO TESTS RAN (skipped=2)

Had test failures in pyspark.streaming.tests.test_kinesis with python3.12; see logs.
Error:  running /__w/spark/spark/python/run-tests --modules=pyspark-core,pyspark-streaming,pyspark-errors --parallelism=1 --python-executables=python3.12 ; received return code 255
Error: Process completed with exit code 19.
```

Scheduled job fails because of exit 5, see pytest-dev/pytest#2393. This isn't a test failure.

### Does this PR introduce _any_ user-facing change?

No, test-only.

### How was this patch tested?

Manually tested.

### Was this patch authored or co-authored using generative AI tooling?

No,

Closes #44841 from HyukjinKwon/SPARK-46801.

Authored-by: Hyukjin Kwon <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
dongjoon-hyun pushed a commit to apache/spark that referenced this issue Jan 23, 2024
dongjoon-hyun pushed a commit to apache/spark that referenced this issue Jan 23, 2024
szehon-ho pushed a commit to szehon-ho/spark that referenced this issue Feb 7, 2024
@Marc--Olivier

For information, I ran into this issue today while running pytest --cov=. on a directory that had no tests; I still wanted to collect code coverage. I personally did not find it satisfactory to have to handle exit code 5 or to install pytest-custom-exit-code in multiple places.
Why does pytest return an error code when there are no tests, whereas unittest does not?

@nicoddemus
Member

@Marc--Olivier

> Why does pytest return an error code when there are no tests, whereas unittest does not?

See #812 and #500 for the discussion that led to the inclusion of this feature.
