Loading Model from GCS Fails With Data loss: not an sstable (bad magic number) #1441
Comments
@stephen-lazaro , […]
@rmothukuru In what sense? Here's the content of my entrypoint:
I have an env variable: […] The contents of […]
where obviously BUCKET is my bucket name. In the bucket at […]
Let me know if you need more information of any kind.
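(The entrypoint and bucket listing referenced above were not captured in this thread. As a rough illustration only, an entrypoint of this shape might look like the sketch below; the model name, bucket layout, and flags are assumptions, not the reporter's actual script.)

```bash
#!/usr/bin/env bash
# Hypothetical sketch of an entrypoint serving a model straight from GCS.
# BUCKET is expected as an environment variable (e.g. `docker run -e BUCKET=...`).
set -euo pipefail

exec tensorflow_model_server \
  --port=8500 \
  --rest_api_port=8501 \
  --model_name=my_model \
  --model_base_path="gs://${BUCKET}/models/my_model"
```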
@stephen-lazaro , if those links don't resolve your problem, can you please try performing inference on a single model instead of a config file in GCS, and let us know if it is successful?
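(For context, the two setups being compared differ only in which flags the server is started with. A sketch, with hypothetical bucket and model names:)

```bash
# Serving via a model config file (text-format ModelServerConfig):
cat > /tmp/models.config <<'EOF'
model_config_list {
  config {
    name: "my_model"
    base_path: "gs://my-bucket/models/my_model"
    model_platform: "tensorflow"
  }
}
EOF
tensorflow_model_server --rest_api_port=8501 --model_config_file=/tmp/models.config

# Serving a single model directly, as suggested above:
tensorflow_model_server --rest_api_port=8501 \
  --model_name=my_model \
  --model_base_path="gs://my-bucket/models/my_model"
```

If the single-model form loads but the config-file form does not, that narrows the problem to config handling rather than GCS access.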
Update, @rmothukuru: it was none of those. The issue was that our files were compressed and TF Serving was not respecting the compression format header. Closing as resolved.
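(For anyone landing here with the same error: a "bad magic number" means the first bytes of the checkpoint files are not what TensorFlow's reader expects, which is consistent with the objects being stored gzip-compressed. A quick check, assuming a hypothetical bucket path:)

```bash
# Read the first two bytes of a variables shard in GCS; a gzip stream
# starts with the magic bytes 1f 8b. Paths here are hypothetical.
gsutil cat -r 0-1 \
  "gs://my-bucket/models/my_model/1/variables/variables.data-00000-of-00001" | xxd

# If objects were uploaded with gzip content-encoding (e.g. `gsutil cp -z`/-Z),
# re-uploading them uncompressed avoids the mismatch:
gsutil cp -r ./my_model "gs://my-bucket/models/my_model"
```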
# Bug Report
System information
Standard Docker container version of TensorFlow Serving
Describe the problem
I am able to load models locally, but when loading them from GCS the model boot fails with `Data loss: not an sstable (bad magic number)`.
The logs have been mildly anonymized.
All these models load successfully when booted from local filesystem rather than GCS.
Exact Steps to Reproduce
Point TensorFlow Serving at a Google Cloud Storage path for a model built with TensorFlow 1.12 and attempt to load it.
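A minimal reproduction along those lines, using the stock image's standard environment variables; the bucket name, model name, and key path are hypothetical:

```bash
# Run the official TensorFlow Serving image against a GCS model path.
# MODEL_NAME/MODEL_BASE_PATH are read by the image's default entrypoint;
# GOOGLE_APPLICATION_CREDENTIALS points GCS access at a service-account key.
docker run --rm -p 8501:8501 \
  -v /path/to/key.json:/secrets/key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json \
  -e MODEL_NAME=my_model \
  -e MODEL_BASE_PATH=gs://my-bucket/models \
  tensorflow/serving
```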