Radanalytics Tensorflow Serving GPU

This S2I (Source-to-Image) builder image runs TensorFlow Serving applications on OpenShift with GPU support, using CUDA 9 and cuDNN 7. It is meant to be used in an OpenShift project with TensorFlow models.

The final image will have the TensorFlow model installed along with the tensorflow_model_server binary from submod/tf-server:cuda9-cudnn7-centos7, plus a startup script and associated utilities to start a TensorFlow prediction endpoint on port 6006.
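Recent tensorflow_model_server builds expose a REST prediction API alongside gRPC; whether this particular image's server version does is not stated here, so treat the following as a sketch of what a client request could look like. The host, model name (`mnist`), and 28x28 input shape are illustrative assumptions, not values taken from this repository:

```python
import json

# Hypothetical input: one 28x28 grayscale image of zeros. The real tensor
# shape depends on your model's serving signature.
instance = [[0.0] * 28 for _ in range(28)]

# TF Serving's REST predict API expects a JSON body with an "instances" list.
payload = json.dumps({"instances": [instance]})

# If the endpoint on port 6006 speaks REST, a prediction request would be
# POSTed to a URL of this form (route hostname and model name are assumed):
url = "http://tf-cnn-gpu.example.com:6006/v1/models/mnist:predict"
print(url)
print(payload[:40])
```

If the server only speaks gRPC on that port, the equivalent request would be built with the tensorflow-serving-api client library instead.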

Integration With OpenShift

To make it easier to deploy a TensorFlow Serving endpoint, a template for OpenShift is also included. This can be loaded into your project using:

oc create -f https://raw.githubusercontent.com/sub-mod/tensorflow-serving-gpu-s2i/master/template.json

Once loaded, select the tensorflow-server-gpu template from the web console. The APPLICATION_NAME, SOURCE_REPOSITORY, and SOURCE_DIRECTORY parameters must be specified.
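For reference, the parameters section of a template like this would declare those three values. The snippet below is an illustrative reconstruction, not the repository's actual template.json:

```json
{
  "parameters": [
    {
      "name": "APPLICATION_NAME",
      "description": "Name for the serving application and its objects",
      "required": true
    },
    {
      "name": "SOURCE_REPOSITORY",
      "description": "Git repository containing the TensorFlow model files",
      "required": true
    },
    {
      "name": "SOURCE_DIRECTORY",
      "description": "Subdirectory within the repository that holds the model",
      "required": true
    }
  ]
}
```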

Alternatively, you can create it from the command line. Just create a new application within OpenShift, pointing the S2I builder at the Git repository containing your TensorFlow model files:

oc new-app --template=tensorflow-server-gpu \
	--param=APPLICATION_NAME=tf-cnn-gpu \
	--param=SOURCE_REPOSITORY=https://github.com/sub-mod/mnist-models \
	--param=SOURCE_DIRECTORY=gpu/cnn

To have your model automatically redeployed whenever changes are pushed to your Git repository, use OpenShift's webhook integration to create a link from your Git hosting service back to OpenShift.

Producing a builder image

To produce a builder image:

$ make build

To print usage information for the builder image:

$ sudo docker run -t <id from the make>

To poke around inside the builder image:

$ sudo docker run -i -t <id from the make>
bash-4.2$ cd /opt/app-root # take a look around

To tag and push a builder image:

$ sudo make push

By default this will tag the image as submod/tensorflow-serving-s2i-gpu; edit the Makefile and change PUSH_IMAGE to control this.

s2i bin files

S2I scripts are located in ./s2i/bin.
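By S2I convention, the bin directory holds an assemble script (copies the model source into the image at build time) and a run script (starts the server). A run script for an image like this could look roughly like the sketch below; the paths, model name, and flags are assumptions, not the repository's actual script:

```shell
#!/bin/bash
# Sketch of an S2I "run" script: serve the model that "assemble"
# copied into the image. Paths and model name are illustrative.
exec tensorflow_model_server \
  --port=6006 \
  --model_name=model \
  --model_base_path=/opt/app-root/src
```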
