
Removing _gcloud_vendor. #808

Closed
wants to merge 2 commits

Conversation

dhermes
Contributor

@dhermes dhermes commented Apr 8, 2015

@craigcitro Removing our vendored-in code and switching to the PyPI version of apitools gives 4 errors I can't readily grok. (This was similarly reported by @tseaver.)

Care to take a look and check out my branch?

@dhermes dhermes added api: storage Issues related to the Cloud Storage API. do not merge Indicates a pull request not ready for merge, due to either quality or timing. labels Apr 8, 2015
@googlebot googlebot added the cla: yes This human has signed the Contributor License Agreement. label Apr 8, 2015
@craigcitro
Contributor

so there were two separate issues here:

  • for the three failing download tests, there was the same root cause: in the case of a download with auto_transfer=False, and with total_size not set, we never parse the size from headers, and thus only download the first chunk. i believe this was lost in merging the gsutil branch, but it's definitely a bug in apitools -- fix will land shortly.
  • for the failing upload test, the issue is in the testing code: the mocked out _Connection object doesn't consume the bytes in the POST/PUT body in the case that it's a stream. this doesn't match the behavior of httplib, so a consistency check fails after "uploading" the first chunk.
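The first bullet's root cause can be sketched with a hypothetical helper (the name and shape are illustrative, not apitools' actual API): when `auto_transfer=False` and `total_size` is never set, the total size has to come from the server's `Content-Range` response header, and skipping that parse leaves the download stopping after one chunk.

```python
import re


def total_size_from_content_range(header_value):
    # A Content-Range such as 'bytes 0-1048575/5242880' carries the
    # object's total size after the final '/'. If a chunked download
    # never records this, it believes the first chunk is the whole file.
    match = re.match(r'bytes (\d+)-(\d+)/(\d+)$', header_value)
    if match is None:
        return None
    return int(match.group(3))
```
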

@dhermes
Contributor Author

dhermes commented Apr 8, 2015

  • I see, so my fix of auto_transfer=False -> True was not a fix. Is there a corresponding bug filed?
  • Where specifically should it be consumed? Anything passed to _Connection should be already consumed.

@craigcitro
Contributor

  • i didn't file a bug, but fix is in master on apitools. (reviewed internally -- just about at the point where i don't have to do that anymore. 😀 )

  • so httplib.HTTPConnection.request allows body to be a file-like object, and (in that case) will read all contents from the stream. i think the right approach here is to exhaust body in the request method -- but i don't know if that's the only hiccup you'll hit. in particular, though, doing something like

    def request(self, uri, method, headers, body, **kw):
        # Exhaust file-like bodies, matching what
        # httplib.HTTPConnection.request does with a stream.
        if hasattr(body, 'read'):
            body = body.read()
        return self._respond(uri=uri, method=method, headers=headers,
                             body=body, **kw)
    

    leads to this test passing (since we're now actually reading the bytes from the upload stream).
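A minimal, self-contained check of that behavior, using a stand-in for the mocked `_Connection` (the real fixture lives in the gcloud-python test suite; this sketch only demonstrates the consumption fix):

```python
import io


class _Connection(object):
    """Stand-in mock; records what was 'sent' so assertions can inspect it."""

    def __init__(self):
        self.last_body = None

    def request(self, uri, method, headers, body, **kw):
        # httplib.HTTPConnection.request reads a file-like body to the end;
        # the mock must do the same, or the upload stream is never consumed
        # and the consistency check fails after the first chunk.
        if hasattr(body, 'read'):
            body = body.read()
        self.last_body = body
        return {'status': '200'}, b''


conn = _Connection()
conn.request('http://example.invalid/upload', 'PUT', {},
             io.BytesIO(b'chunk-bytes'))
```

After the call, `conn.last_body` holds the raw bytes rather than an unread stream, which is the property the failing upload test depends on.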

@dhermes
Contributor Author

dhermes commented Apr 9, 2015

DERP. Thanks @craigcitro! I'm going to close this out and send a new PR. Note that we can't "accept" the PR unless google/apitools@a6feb8a is in a PyPI release.

@craigcitro
Contributor

done -- v0.4.2 is on pypi.
