
pytest capture logging error still happening #5502

Open
spearsem opened this issue Jun 27, 2019 · 64 comments
Labels
plugin: capture, plugin: logging, type: bug

Comments

@spearsem

spearsem commented Jun 27, 2019

Good morning,

I am seeing the same issue as described in #14 but using a much more modern combination of both (anaconda) Python and pytest.

Test session starts (platform: linux, Python 3.6.8, pytest 4.6.3, pytest-sugar 0.9.2)
rootdir: /ml/tests/ml/services, inifile: all-tests.ini
plugins: forked-1.0.2, xdist-1.29.0, sugar-0.9.2, cov-2.7.1, mock-1.10.4

In the relevant .ini file I have this:

[pytest]
testpaths = tests
addopts =
    -n 4
    --durations=20
    --disable-warnings

where -n 4 uses 4 parallel test runners with pytest-xdist. Edit: I was able to isolate the behavior to runs using parallel workers with xdist, so it is likely an issue with an xdist worker prematurely closing a logger stream.

Basically, when I run one particular test file, I see a large number of repeated error messages in the pytest output:

--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.6/logging/__init__.py", line 996, in emit
    stream.write(msg)
  File "/usr/local/lib/python3.6/dist-packages/_pytest/capture.py", line 441, in write
    self.buffer.write(obj)
ValueError: I/O operation on closed file
Call stack:
  File "/usr/lib/python3.6/threading.py", line 884, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/ml/ml/models/neighbors.py", line 289, in re_index
    success = self._re_index()
  File "/ml/ml/models/neighbors.py", line 349, in _re_index
    logger.info('== Fit new model with {} ids and a dimensionality of {} =='.format(n, dim))
Message: '== Fit new model with 48 ids and a dimensionality of 2 =='
Arguments: ()

The issue appears to be related to pytest prematurely closing the stream associated with a given logger for the underlying code module in question. I can't really post all the code since it is an extended example from my work, but I can confirm there is nothing exotic or unusual happening with this logger. The module being tested just uses the normal convention to define

logger = logging.getLogger(__name__)

and there are no duplicate loggers or conflicts with this logger's name. The logger itself is not defined in any multiprocessing context or anything like that. Just a boring import of a top-level module from our code base.

But during the test execution, something weird is happening with pytest such that the eventual calls into that logger produce these errors.

If I turn off pytest capturing with --capture=no, then the messages go away, but unfortunately so does a bunch of other necessary output that I want pytest to capture and display.

How can I debug this further? I'm sorry that I cannot provide a minimal working example, but I can definitely confirm that there is nothing weird going on with these tests. It's a very straightforward use of logging and a very straightforward test file with simple imports and function calls.

  • [x] a detailed description of the bug or suggestion
  • [x] output of pip list from the virtual environment you are using
  • [x] pytest and operating system versions
  • [x] minimal example if possible
@Zac-HD added the plugin: capture, plugin: logging, plugin: xdist, and type: bug labels on Jul 3, 2019
@Zac-HD
Member

Zac-HD commented Jul 21, 2019

A reproducing case, which doesn't need xdist:

import atexit
import logging

LOGGER = logging.getLogger(__name__)

def test_1():
    pass

# basicConfig() runs at import time, so its StreamHandler binds to pytest's
# capture buffer; the atexit callback fires only after that buffer is closed.
atexit.register(lambda: LOGGER.error("test in atexit"))
logging.basicConfig()

@Zac-HD removed the plugin: xdist label on Jul 21, 2019
@bannsec

bannsec commented Jul 22, 2019

I'm having the same issue. Mine are debug logs during atexit.

@sshishov

sshishov commented Aug 8, 2019

Same thing happening here with Python 3.7 and the latest pytest.

@spearsem
Author

spearsem commented Aug 29, 2019

@Zac-HD Thanks for migrating that example from the other issue. In my case with my original bug, there was simply no way to produce a reproducible example, since it involved so many moving parts related to pytest, pytest-xdist, and a large test suite in the project. In general it is quite complex to reduce pytest errors to small, reproducible cases, though I agree that's the ideal to strive for. Unfortunately, most of the time the best someone can really do is paste in the error message, give general context around the use case where the error appears, and hope pytest devs who know much more about the internals can take it from there and scope it down to the reproducible essence of the bug. Just wanted to say it is very appreciated!

@nicoddemus
Member

Hi everyone,

What I believe is happening is:

  1. pytest changes sys.stdout and sys.stderr to a buffer while importing test modules.
  2. If there's user code setting up logging and/or creating a logging.StreamHandler at the import level, it will attach itself to pytest's buffer.
  3. When pytest is about to finish the test session, it will restore sys.stdout and sys.stderr to the original values, and close the "capture" buffer.
  4. Here the problem happens: if any message is emitted at this point, the StreamHandler will try to write to the now-closed buffer, hence the error.

Unfortunately I don't see how pytest can work around this, because we can't know who kept a reference to the capture buffer and somehow tell the owner of the reference that we are now "dead".

@spearsem, are you calling basicConfig at import time in your application, or setting up logging yourself at import time?

If I'm getting this right, my suggestion is to avoid setting up your logging configuration at import time, moving it to a function which is called only when running your actual production code.
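
A minimal sketch of that suggestion (module layout and names are hypothetical, not from @spearsem's code):

import logging

logger = logging.getLogger(__name__)  # safe at import time: no handler yet

def configure_logging():
    # Deferred setup: the StreamHandler binds to whatever sys.stderr is
    # at call time, not to pytest's capture buffer at import time.
    logging.basicConfig(level=logging.INFO)

def main():
    configure_logging()
    logger.info("application started")

if __name__ == "__main__":
    main()

Under pytest, main() is never called, so no handler ends up bound to the capture buffer.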

@spearsem
Author

spearsem commented Aug 30, 2019

@nicoddemus the application we are testing unfortunately requires setting up logging at import time. It is part of a Flask application that has an in-house framework for configuring logging so that it is standardized across all microservices (for compliance). I think this is actually a very commonly needed use case for automating logging in microservices. Some of our pytest code is using Flask testing clients to test these services, and the construction of the services will always produce this problem (unless we change the whole framework).

I can also say this was not always happening with pytest; it appeared in some old bug reports, then went away for a while, and came back with recent versions. How did pytest handle this differently in between (or am I mistaken and it was always present)? Particularly regarding item 3 from your comment: what prevents pytest from waiting definitively for the testing code to complete and be torn down entirely before switching stdout and stderr back?

You mention,

Unfortunately I don't see how pytest can work around this, because we can't know who kept a reference to the capture buffer and somehow tell the owner of the reference that we are now "dead".

but I don't understand. Shouldn't pytest exactly be able to keep track of this, or at least wait until all test operations are completed so that any pytest unit of execution possibly containing an object that requested access to the buffers has fully completed before pytest makes the "undo" switch back?

@blueyed
Contributor

blueyed commented Oct 22, 2019

Maybe related to #4943 and/or #4988.

Shouldn't pytest exactly be able to keep track of this, or at least wait until all test operations are completed so that any pytest unit of execution possibly containing an object that requested access to the buffers has fully completed before pytest makes the "undo" switch back?

Yes, that would be good. But likely we would need to wrap around atexit then - are you using that also?

@blueyed
Contributor

blueyed commented Oct 22, 2019

I guess what should be done here is duplicate/redirect the handlers, instead of closing them.

@blueyed
Contributor

blueyed commented Oct 23, 2019

#6034 should fix this, if you want to give it a try.

blueyed added a commit to blueyed/pytest that referenced this issue Oct 23, 2019
blueyed added a commit to blueyed/pytest that referenced this issue Oct 23, 2019
wanam added a commit to wanam/pytest that referenced this issue Oct 24, 2019
@wanam

wanam commented Oct 24, 2019

@blueyed I'm not a Python expert, but your fix seems to me like a workaround rather than a fix. I'm wondering why we cannot do it the atexit way? It seems to work fine for me (wanam@d0d1274).

@RonnyPfannschmidt
Member

I'd rather switch this to an exception that clearly tells users to use a better logging setup, plus a documented best practice for setting up logging.

From my POV it's important to just say NO to tactically broken but convenient setups that would make the internals a more painful mess.

If you set up logging at import, you deal with the fallout! pytest shouldn't suffer excess complexity to compensate for what basically amounts to broken/discouraged setups that don't play well with the rest of the world.

@bluenote10

bluenote10 commented Oct 24, 2019

I'd rather switch this to an exception that clearly tells users to use a better logging setup, plus a documented best practice for setting up logging.

From my POV it's important to just say NO to tactically broken but convenient setups that would make the internals a more painful mess.

I understand the motivation for this, but it would be a very unfortunate decision, e.g. for people using ROS (Robot Operating System). I ran into #5577 specifically in this context. The problem is that the "tactically broken" logging setup can come out of ROS itself, and -- what makes this an impossible problem to work around as a user -- the ROS Python side is tightly coupled to its native distribution (yes, it is quite ugly...). As a result, "fixing" these issues is not a very realistic option, because one would have to maintain their own ROS distribution fork :(.

That is just one example. It would be nice if we were able to use py.test even if third party libraries are guilty of logging setup at import -- in particular if these libraries are not pip-installable and thus hard to fix. Even worse: Third party library authors may even prefer to stick to the anti-pattern for personal reasons (I recently had such a discussion about a related on-import anti-pattern).

I'll have to give #6034 a try.

@RonnyPfannschmidt
Member

What's preventing filing an actual issue against ROS?

In any case, it's safe to say that hiding this issue will just make things hideous and strange. I would strongly suggest disabling capture on broken platforms and reporting issues against the platform instead of putting resource leaks into pytest to "work" on broken platforms.

Your message can very easily be read as "our vendor won't fix things anyway, so let's make pytest worse instead"

and I reject that notion. Please get at your vendor and have them fix stuff.

@bluenote10

bluenote10 commented Oct 24, 2019

What's preventing filing an actual issue against ROS?

Nothing, but much like an operating system, ROS comes in distributions with slow release cycles. Usually robotics developers stay with one ROS distribution for years, because it is neither straightforward to migrate from one release to another nor to maintain your own distribution containing backports. In other words: even if the upstream issue gets fixed, we wouldn't be able to use it any time soon, because we have to target existing releases.

Keep in mind that in C/C++ libraries like ROS, the Python bindings play a secondary role. The maintainers are not Python developers, and often Python best practices don't play a big role or are even deliberately ignored to optimize for special use cases. It might not be entirely straightforward to fix this issue in ROS in general.

Your message can very easily be read as "our vendor won't fix things anyway, so let's make pytest worse instead"

I've indeed already experienced such unwillingness to adhere to Python best practices. In any case, to me pytest would be "better" if it were less restrictive about the code it supports. Even for such "broken" libraries, py.test is a great tool to write unit tests ;).

@RonnyPfannschmidt
Member

@bluenote10 In that case I advise deactivating capture as a starting point, and I suspect there could be a pytest plugin that hijacks the logging system on ROS, for example, and then enables a pytest-compatible setup/configuration.

That way core pytest doesn't have to care about a broken system, and a pytest plugin can fix/patch the vendor's mess-ups.

In any case this should and must be an issue against ROS - retiring a shoddy workaround after a decade is less horrible than leaving it be because nobody fixed the origin.

Given that others mess up logging as well, this could perhaps be pytest-gumby-logging, which would deal with logging in the fashion of the Gumbys (see the Monty Python sketches).

@blueyed
Contributor

blueyed commented Oct 24, 2019

@wanam #6034 uses atexit - similar to your patch. But with yours it might run out of open file descriptors, e.g. with pytest's own test suite.

@spearsem
Author

From my POV it's important to just say NO to tactically broken but convenient setups that would make the internals a more painful mess.

I think it suggests the internals are in a messy state if a certain aspect of basic end-to-end usage isn't well supported by existing internal primitives. If this use case is somehow weird or special from the point of view of the assumptions pytest is making, that strikes me as a criticism of those assumptions, not a criticism of the use case.

If you set up logging at import, you deal with the fallout! pytest shouldn't suffer excess complexity to compensate for what basically amounts to broken/discouraged setups that don't play well with the rest of the world.

Setting up logging at import is such a ubiquitous thing that I just can't agree with this. It's like saying, "if you define your own functions, then you deal with the following mess". These are not at all broken/discouraged setups; they are straightforward, basic workflows for tons of use cases that need pytest to support them.

I understand it's a difference of opinion, but it feels like this is straying very far into philosophical contortion to justify not doing something that clearly is an end-to-end use case users need supported. I really don't think "don't do it that way" is a reasonable answer to this.

@RonnyPfannschmidt
Member

@excitoon in your example the test isolation is right to trigger an error

We should probably extend the error message to mention that a reference to a replaced stream was taken

@RonnyPfannschmidt
Member

@excitoon Again, the instructions are clear: safe IO capture has to error on invalid resources. If unsafe resource usage is necessary for you, turn it off.

For me this discussion feels like trying to explain the use and limits of seat belts to people who want to dance on top of a moving car.

@nicoddemus
Member

nicoddemus commented Jan 26, 2023

@excitoon

I would also like to highlight that, given the popularity of pytest,
there are lots of programmers who have to use pytest and cannot
change it to whatever they like.

Indeed, but to be clear, it is easy to disable capture in that case: just pass --capture=no. Or is that not possible in your case?

Also, I noticed you use log = functools.partial(write, sys.stdout). I think a way to make it work with pytest capture is to not bind sys.stdout, but to use a function:

def log(s):
    write(sys.stdout, s)

This way, you will always write to whatever sys.stdout is pointing to, instead of binding it to sys.stdout at the time partial is created.
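
A small sketch contrasting the two approaches (the write helper's signature is assumed here, since @excitoon's original code isn't shown in this thread):

import functools
import sys

def write(stream, s):  # assumed signature, for illustration only
    stream.write(s + "\n")

# Binds the sys.stdout *object* that exists right now; if pytest later
# replaces and closes that object, this keeps writing to a closed file:
log_bound = functools.partial(write, sys.stdout)

# Looks up sys.stdout at call time, so it follows pytest's redirection:
def log(s):
    write(sys.stdout, s)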


Please understand that it is not that we do not want to fix it; this is a complex problem without an obvious solution. Hijacking sys.stdout (as pytest does for capture) has its drawbacks (as is being demonstrated here).

"Just never close sys stdout" is not a proper solution, because if pytest hijacks sys.stdout, it should restore it at the end of the session. Not doing so we risk breaking other use cases, like pytest being called inside another script, or embedded into an IDE.

@excitoon

excitoon commented Jan 26, 2023 via email

@RonnyPfannschmidt
Member

@excitoon as per the documentation and examples for https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout this is demonstrably false

@nicoddemus
Member

It actually has an obvious and clear solution — wrap sys.stdout so it would redirect output to the original stream after the end of the session and do its magic before that.

That's actually a good idea, I hadn't thought of it. Has this been mentioned in this thread before? I missed it.

Perhaps we could indeed install a "wrapper", which would hold a pointer to the original sys.stdout and redirect all calls to a separate capture stream... after we are done, the wrapper can then redirect the calls to the original sys.stdout. 🤔
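
A rough sketch of what such a wrapper might look like (hypothetical names; this is not pytest's actual implementation):

import sys

class CaptureWrapper:
    """Installed as sys.stdout once; only its target ever changes."""

    def __init__(self, original):
        self._original = original  # the real stream
        self._target = original    # where writes currently go

    def start_capture(self, capture_stream):
        self._target = capture_stream

    def stop_capture(self):
        # After the session, fall back to the original stream so that any
        # stale references to sys.stdout keep working instead of raising.
        self._target = self._original

    def write(self, s):
        return self._target.write(s)

    def flush(self):
        return self._target.flush()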

@RonnyPfannschmidt
Member

@nicoddemus As part of fd capture, at least, we have to destroy the TextIO, the buffer, and the file descriptor of stdout/err,

so the file object being held would be broken to begin with.

As far as execnet/xdist is concerned, it also goes deep into the file descriptors.

I propose that pytest adds its own logging setup that installs a handler well before importing user code,
thus preventing the problem with basic logging setup as a starting point.

IMHO it's much better to have the test-specific capture streams turn dysfunctional at the end of a test, as that commonly indicates a resource-control issue in the context of IO capture with stdlib-compliant stdio redirection.

Additionally, we cannot safely keep fd-capture file descriptors around, as that would cause file descriptor exhaustion in larger test suites.

@RonnyPfannschmidt
Member

A compromise may be to have a 2-level object - an outer one we install in sys.std* and an inner one.

Then both get to keep track of the test state, and we can trigger warnings on leaks of sys.stdout/err between tests.

@quickhouse

quickhouse commented Jan 26, 2023 via email

@andyDoucette

andyDoucette commented Mar 9, 2023

+1 This issue breaks flask_matomo as well.

As Linus Torvalds would say, "never break userspace", and as users, we expect that our tools will play nicely together. If some logging setup is easier to do, people will do it. It's inevitable. Reading this thread, there seems to be a lot of "not my problem" and other such judgment here. The reality of the situation is that there IS a problem, and we can fix it in one place (pytest) or we can fix it in a thousand other places. It seems to me that it would be beneficial to at least acknowledge the problem, acknowledge that pytest has the power to fix it, and recognize that doing so might be the most fruitful way of moving forward together. Otherwise, if we try to fix it elsewhere, there will always be someone who makes a package and "hasn't gotten the logging memo", and the problem will persist forever.

It seems like our rational options are either:

  1. Ignore the problem and have many users get a sub-optimal pytest experience. 😞
  2. Stick with the "the problem is the rest of the world" frame, in which case we'll need to devote the next year of our lives to a campaign educating the rest of the world about the "right" way to log or write to stdout. This would have to be our only focus until we've achieved 100% saturation and no one ever makes a library or a bit of code that tries to write to stdout on exit. Only then will we have achieved a good experience for our pytest users.
  3. Acknowledge that we have the power to change pytest and fix this issue once and for all, and that doing so may be the most resource-conscious way to get a good pytest user experience without taking on the monumental task of changing the rest of the world.

To quote Gary Goldstein: “All you can change is yourself, but sometimes that changes everything!”

@nicoddemus
Member

Thanks @andyDoucette for the clear remark. I agree we should strive to arrive at a solution.

@nicoddemus As part of fd capture, at least, we have to destroy the TextIO, the buffer, and the file descriptor of stdout/err,
so the file object being held would be broken to begin with.

I see what you are saying, but this issue is in the context of sys capture (due to logging), so I don't think we need to mess with fd capture?

I propose that pytest adds its own logging setup that installs a handler well before importing user code,
thus preventing the problem with basic logging setup as a starting point.

That would be ideal; however, it is hard to convince/educate all users to do that.

I still like the "wrapper" idea, seems like it would solve the original issue, no?

To be clear, I mean something like this:

  1. When capture starts, we install a wrapper over sys.stdout and sys.stderr, which the capture machinery can control to start/stop capturing.
  2. That wrapper is never removed: when capture ends (at the end of the session), we leave the wrapper there, but from that point on it will always redirect all write calls to the original stream.

This lets users set up logging as they are used to; the only downside I see is that "wrapper" object living forever, but it should be a small class, and I do not see much of a problem with it.

Again this would only affect sys capture.

Do other people envision other problems with this approach?

@Renatochaz

def clear_loggers():
    """Remove handlers from all loggers"""
    import logging

    # loggerDict can contain logging.PlaceHolder objects, which have no
    # 'handlers' attribute, hence the getattr() with a default.
    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        # Iterate over a copy: removeHandler() mutates logger.handlers.
        for handler in list(getattr(logger, 'handlers', [])):
            logger.removeHandler(handler)

This worked for me. I never even set up logging on import; I use it inside daemon threads that might close unexpectedly. The logging errors were preventing me from seeing the test report with nox. It shouldn't be captured.

This function with a teardown setup saved me, thanks a ton!

@wtfzambo

This function with a teardown setup saved me, thanks a ton!

Just to understand, where am I supposed to place and call this function? I'm encountering the exact same problem as everyone else.

@belwalkar

This function with a teardown setup saved me, thanks a ton!

Just to understand, where am I supposed to place and call this function? I'm encountering the exact same problem as everyone else.
I am seeing the same issue. I am guessing I should call this function in an atexit callback?

@belwalkar

belwalkar commented Apr 14, 2023

This function with a teardown setup saved me, thanks a ton!

Just to understand, where am I supposed to place and call this function? I'm encountering the exact same problem as everyone else.
I am seeing the same issue. I am guessing I should call this function in an atexit callback?

I tried adding it to my cleanup file and calling it at the end of my atexit callback, but it didn't work for me. My log file has the correct messages, but at the end I get the ValueError. I have a simple setup where I create objects in one file, like test_object.py, and another file has an atexit handler where I delete the objects.

@Renatochaz

This function with a teardown setup saved me, thanks a ton!

Just to understand, where am I supposed to place and call this function? I'm encountering the exact same problem as everyone else.

I don't think my solution is good - it was rushed to deliver a PoC - but what I did was create a pytest fixture with that code; let's call this fixture clear_log.

After that, I just used the decorator usefixtures in the problematic functions, like:

@pytest.mark.usefixtures("clear_log")
def test_something():
    ...

Not very neat, but solved the problem for me.
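
A sketch of what that might look like, assuming the clear_loggers function posted above (the fixture name clear_log follows the comment):

import logging

import pytest

def clear_loggers():
    """Remove handlers from all loggers (the snippet from above)."""
    loggers = [logging.getLogger()] + list(logging.Logger.manager.loggerDict.values())
    for logger in loggers:
        for handler in list(getattr(logger, "handlers", [])):
            logger.removeHandler(handler)

@pytest.fixture
def clear_log():
    # Run the test first, then strip handlers during teardown so nothing
    # writes to pytest's capture buffer after it has been closed.
    yield
    clear_loggers()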

@wtfzambo

@Renatochaz thanks man

@belwalkar

When I add a second atexit handler, I see the same problem arise. If I use the clear_loggers method in the second atexit handler, nothing gets logged, but I don't see errors. If I don't add it, I see the logger errors.
Any help would be appreciated.

@wjmelements

I'm still seeing this in Python 3.10.11.

I think this should be fixed in lib/python3.10/logging/__init__.py by catching the ValueError.

@elementary-charge

elementary-charge commented Nov 9, 2023

The following fixture, based on the previously mentioned idea of handler cleanup, works for my async app to avoid closed-resource errors after the tests have actually finished.

Be careful about fixture ordering if you have other session teardowns that log.

import logging

import pytest

@pytest.fixture(scope='session', autouse=True)
def cleanup_logging_handlers():
    try:
        yield
    finally:
        # Drop StreamHandlers during session teardown so nothing logs to
        # pytest's closed capture buffer at interpreter exit.
        for handler in logging.root.handlers[:]:
            if isinstance(handler, logging.StreamHandler):
                logging.root.removeHandler(handler)

@cas--

cas-- commented Aug 1, 2024

I spent a while chasing this very issue down while testing a module that uses multiprocessing.log_to_stderr, so I'm putting the details here in case it helps someone else.

import logging
import multiprocessing

multiprocessing.log_to_stderr(logging.INFO)

def test():
    assert True

The error is logged in Python 3.11 but is still cryptic, unwanted console noise:

pytest mp_log_err.py

--- Logging error ---
Traceback (most recent call last):
  File "/home/calum/.pyenv/versions/3.11.5/lib/python3.11/logging/__init__.py", line 1114, in emit
    stream.write(msg + self.terminator)
ValueError: I/O operation on closed file.
Call stack:
  File "/home/calum/.pyenv/versions/3.11.5/lib/python3.11/multiprocessing/util.py", line 332, in _exit_function
    info('process shutting down')
  File "/home/calum/.pyenv/versions/3.11.5/lib/python3.11/multiprocessing/util.py", line 54, in info
    _logger.log(INFO, msg, *args)
Message: 'process shutting down'
Arguments: ()

I implemented a modified version of the above fixture, but specifically for multiprocessing:

@pytest.fixture(scope="session", autouse=True)
def cleanup_multiprocessing_logging_handlers():
    """Cleanup multiprocessing logging handlers."""
    try:
        yield
    finally:
        mp_logger = logging.root.manager.loggerDict.get("multiprocessing")
        if mp_logger:
            for handler in mp_logger.handlers:
                if isinstance(handler, logging.StreamHandler):
                    mp_logger.addHandler(logging.NullHandler())
                    mp_logger.removeHandler(handler)
