Identify source of poor throughput #1
nblair added a commit that referenced this issue on Jun 28, 2018:
This change set is the result of some investigation into the poor throughput issue (see #1). It includes the following:

* Upgrades google-cloud-storage to the latest release.
* Sets up the ApacheHttpTransport to use a pooling client connection manager. This addresses an issue observed with the default connection manager: after a period of inactivity, the client would fail to connect to the Google API endpoint and not retry. This implementation does successfully retry when connections are severed. The change comes with a caveat in that it requires us to use deprecated HTTP client constructors instead of their current equivalents.
* Improves the performance of hard delete by skipping an unnecessary prior GET.
* Leans down object read requests to retrieve only the MEDIA_LINK field (at this time, no other GCS object attributes are used by this implementation).
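The pooling setup described above might look roughly like the following sketch. It assumes the google-http-client Apache extension (`ApacheHttpTransport`) together with Apache HttpClient's `PoolingClientConnectionManager` and `DefaultHttpClient`; those last two are the deprecated constructors the caveat refers to, and the pool limits shown are illustrative, not values from the actual commit:

```java
import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.PoolingClientConnectionManager;
import com.google.api.client.http.apache.ApacheHttpTransport;

public class TransportFactory {

  /**
   * Builds an ApacheHttpTransport backed by a pooling connection manager,
   * so that connections severed after a period of inactivity are
   * re-established and the request is retried, rather than failing as
   * with the default connection manager.
   */
  @SuppressWarnings("deprecation") // DefaultHttpClient and
  // PoolingClientConnectionManager are deprecated; see the caveat above.
  public static ApacheHttpTransport createTransport() {
    PoolingClientConnectionManager connectionManager =
        new PoolingClientConnectionManager();
    connectionManager.setMaxTotal(200);           // illustrative limit
    connectionManager.setDefaultMaxPerRoute(20);  // illustrative limit
    HttpClient client = new DefaultHttpClient(connectionManager);
    return new ApacheHttpTransport(client);
  }
}
```

The resulting transport can then be handed to the storage client's builder in place of the default transport; the pool keeps idle connections validated and reusable across requests.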
#40 introduces some improved visibility via a new HealthCheck (more checks related to performance to follow) and offloads some of our traffic from Storage to Datastore/Firestore.
Early experimentation with this blob store has demonstrated very poor throughput compared to other blob store implementations, so much so that it is not suitable for any real-world traffic scenario at present.