Docker image with prebuilt tensorflow_model_server, ready to run. #513
Comments
Thanks for the suggestion. We "get" that this would be useful. We are a small team and thus far have not had the resources to maintain an offering along those lines. We are considering whether we can support something like that going forward, but it's a complicated question on our side.
Alright! Thanks for taking it into consideration :)
@stianlp It could be very useful, but we need to start thinking of serving and the ecosystem as community-contribution-driven projects. As @chirsolston confirmed, the resources allocated here are very small, and I strongly believe in #311 (comment)
Thanks for the feedback, folks! We built a Docker image at work, and I decided to open source it, so if anyone needs an image to quickly host a model, feel free to pull it. It doesn't work with GPUs or anything, but it works as a simple way to host your models.
Having faced issues compiling TensorFlow Serving in a Docker container, I built a Docker image for the CPU version of TF Serving. You can pull the image from Docker Hub (available at: https://hub.docker.com/r/gauravkaila/tf_serving_cpu/).
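For example, assuming the default latest tag:
docker pull gauravkaila/tf_serving_cpu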
To start the container,
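use something like the following (a sketch; the host model path, mount target, and port here are assumptions, not the image's documented defaults):
docker run -it --name tf_serving_cpu -p 9000:9000 -v /path/to/model:/models/my_model gauravkaila/tf_serving_cpu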
To start the server (within the container),
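run the model server binary, for example (a sketch assuming the prebuilt binary is on the container's PATH; the model name and base path are placeholders matching the mount above):
tensorflow_model_server --port=9000 --model_name=my_model --model_base_path=/models/my_model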
UPDATE: For GPU support, a corresponding GPU image is also available.
Kubeflow is planning on publishing and maintaining TFServing docker images as part of its prediction story.
The images published by Kubeflow cover TF versions 1.4 to 1.8, in both CPU and GPU variants. We aim to support new versions of TF Serving as they become available, but we provide no guarantees about how soon after a TF Serving release they will be published.
We now have these both as Dockerfiles and as published images at https://hub.docker.com/r/tensorflow/serving
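With the published image, pulling and serving a model looks roughly like this (per the image's documented MODEL_NAME convention; the paths and model name are placeholders):
docker pull tensorflow/serving
docker run -p 8500:8500 -p 8501:8501 -v /path/to/my_model:/models/my_model -e MODEL_NAME=my_model -t tensorflow/serving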
It looks like the most common way to build and deploy models is to build the tensorflow_model_server locally. If using docker, one would follow these steps:
bazel build -c opt //tensorflow_serving/model_servers:tensorflow_model_server
Wouldn't it be better if there were an image available on Docker Hub that is ready to just run, where you provide the model path as a parameter?
Something like:
docker run -d -p <port>:<port> -v /path/to/model:/path/to/model/should/be/in/container image-where-tf-model-server-is-built
The CMD instruction for this image is something like:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=<your-port> --model_name=<your-model-name> --model_base_path=<your-base-path>
The <your-*> values are passed as parameters to the docker run command.
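A minimal Dockerfile sketch of what such an image might look like (the base image, binary location, port, and default flags are illustrative assumptions, not an actual published image):
FROM ubuntu:16.04
# Copy the server binary built with the bazel command above into the image.
COPY bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/local/bin/tensorflow_model_server
EXPOSE 8500
ENTRYPOINT ["/usr/local/bin/tensorflow_model_server"]
# Default flags; arguments passed to docker run replace this CMD.
CMD ["--port=8500", "--model_name=my_model", "--model_base_path=/models/my_model"]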
Is there a good reason why such an image does not exist already? I mean, the only thing needed to serve a model is the built model_server, or am I missing something?