iter_content() memory leak + raw read problems #1401
Also, if the remote side is using gzip and you specify an iter_content size of 1, then the result is blank because a single compressed byte usually can't be decoded into anything. This is a bit misleading given that the default size is 1. If you use .raw.read() it returns the raw gzip-compressed bytes rather than the decoded body. By default I think it should use decode_content=True, as this is again a bit confusing. But as it is delegating to a urllib3 object, this is difficult. Perhaps we should expose a read() method instead? I had to go trawling through the code to understand this, so I think these warrant a fix, or at the very least a docs patch. Personally my suggestion would be;
Thoughts?
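For reference, a minimal sketch of the behaviour described above (not the suggestion itself; the gzip endpoint and chunk sizes are assumed for illustration):

```python
import requests

# Assumed endpoint: any URL that returns a gzip-encoded body will do.
url = "https://httpbin.org/gzip"

# With chunk_size=1 the stream is decoded one compressed byte at a time,
# so most yielded chunks come back empty, as reported above.
r = requests.get(url, stream=True)
for chunk in r.iter_content(chunk_size=1):
    print(repr(chunk))

# .raw is the underlying urllib3 response; a plain read() here returns the
# still-compressed bytes unless decode_content=True is passed explicitly.
r = requests.get(url, stream=True)
compressed = r.raw.read()

r = requests.get(url, stream=True)
decoded = r.raw.read(decode_content=True)
```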
Hi @foxx, thanks for opening this issue! Let me address these in no particular order.
Thanks for the quick reply!
In ref to #4, that's an excellent question. I wrote a small test outside of my project code base and the problem disappeared. I even tried wrapping it with eventlet and still couldn't reproduce the problem, so it would seem that the memory leak is specific to something else I'm doing. Until I can figure this out, I guess the ticket can be marked as closed for now. For future reference, the code used to disprove the memory leak was;
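A minimal sketch of such a streaming test (the URL and chunk size here are assumed, not taken from the original snippet):

```python
import requests

# Assumed large download; any sizeable file works for watching memory use.
url = "http://example.com/large-file.bin"

r = requests.get(url, stream=True)
total = 0
for chunk in r.iter_content(chunk_size=1024):
    # Discard each chunk immediately; resident memory should stay flat
    # if iter_content() isn't retaining previously yielded chunks.
    total += len(chunk)
print(total)
```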
Apologies for this, I should really have done this beforehand. Cal
No need to apologise, that's why I followed up. =) Thanks for raising this!
The following code will cause a memory leak;
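A sketch of the kind of loop being described, reconstructed from the discussion rather than the original report (the URL is illustrative):

```python
import requests

url = "http://example.com/large-file.bin"  # assumed large download

# Streaming the body through iter_content() with its default chunk size of 1
# is the pattern reported above as growing memory (and it is also very slow).
r = requests.get(url, stream=True)
with open("out.bin", "wb") as fh:
    for chunk in r.iter_content():
        fh.write(chunk)
```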
To fix this, I had to read from .raw instead;
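And a sketch of that workaround, reading directly from the underlying urllib3 response with decode_content=True so gzip is still handled (the chunk size is assumed):

```python
import requests

url = "http://example.com/large-file.bin"  # assumed large download

r = requests.get(url, stream=True)
with open("out.bin", "wb") as fh:
    while True:
        # Read fixed-size chunks straight from the urllib3 response;
        # decode_content=True asks urllib3 to decompress gzip/deflate bodies.
        chunk = r.raw.read(8192, decode_content=True)
        if not chunk:
            break
        fh.write(chunk)
```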
Cal