Should Kubeflow publish and maintain TF Serving Docker Images? #50
Comments
I hope we support CentOS images as well. As a side note, wrt tf-serving in OpenShift there exists
If we want to start with a CPU image, which build flags do we want to enable? Generally AVX etc. are disabled.
What's the issue with providing a GPU image?
@flx42 The reason for trying to limit this to CPU is just to scope the work, given that staffing is very scarce and CPU is probably the more common case. If you or someone else wants to take the lead on providing a GPU image for TFServing, you'd have my full support. I think there's some basic infrastructure we need to get into place for CI/CD of images. Once we have that in place, hopefully we can add images to cover more use cases. I'll open an issue to discuss official notebook images.
We want a workflow that automatically builds and pushes the TF serving image from source, and runs some tests against it to verify it's working. Argo is suitable for this kind of workflow, and it supports Docker-in-Docker. In this PR, the workflow contains the building+pushing part (testing will be added in a follow-up PR). It uses the Dockerfile in components/k8s-model-server/docker, which is a TF 1.4 CPU image. Related #50
Update the workflow for building the TF serving image to four steps:
1. check out the repo
2. build and push the TF serving image
3. use the image to deploy the tf-serving component
4. send a prediction to it

For step 4, it needs tensorflow and the tf-serving API. Added a new Dockerfile for a tf-test-worker image.
* The test isn't integrated into our CI system yet.
* Related to #50
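The prediction step could be sketched roughly as follows. This is a minimal illustration, assuming a TF Serving instance with the REST API enabled (the REST API arrived later than the TF 1.4 image discussed here, which used gRPC); the host `model-server` and the model name `my-model` are hypothetical placeholders:

```python
import json
import urllib.request


def build_predict_request(host, model, instances):
    """Build a REST predict request for a TF Serving endpoint.

    The /v1/models/<name>:predict path is the standard TF Serving
    REST API route; host, port, and model name are placeholders.
    """
    url = "http://%s:8501/v1/models/%s:predict" % (host, model)
    payload = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )


# Example: one instance with three features.
req = build_predict_request("model-server", "my-model", [[1.0, 2.0, 3.0]])
print(req.full_url)  # http://model-server:8501/v1/models/my-model:predict
```

A test worker would send this request after the deploy step and assert on the predictions in the JSON response.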
pre/post-submit will also build, push, and test the TF serving image using the workflow in components/k8s-model-server/images/releaser. The image name is gcr.io/mlkube-testing/model-server, and the tag is the workflow name. Related to #64 (TFServing Uber Tracking Bug) and #50 (Publish and maintain TFServing Docker images).
So we've decided that we need to publish and maintain Docker images to provide a good experience, at least until TFServing takes this on. Let's use the uber tracking bug #64 to track the actual work.
The AuthService logout is implemented as follows: 1. Send POST request to logout endpoint with credentials in `Authorization: Bearer <token>` header. The token is found in the cookie set by the authservice, called `authservice_session`. 2. A successful response will have a status code of 201 and will return a JSON response. The JSON response includes the URL to redirect to after logout. This commit is a rebase of this original GH PR: kubeflow#50 Signed-off-by: Stefano Fioravanzo <[email protected]>
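The logout flow above could be sketched like this. The cookie name `authservice_session`, the Bearer-token header, and the 201-with-JSON contract come from the description; the logout URL and the `afterLogoutURL` response field name are assumptions for illustration:

```python
import json
import urllib.request

COOKIE_NAME = "authservice_session"


def build_logout_request(logout_url, cookies):
    """Build the logout POST: the session token taken from the
    authservice_session cookie is sent as a Bearer token."""
    token = cookies[COOKIE_NAME]
    return urllib.request.Request(
        logout_url,
        data=b"",  # empty POST body; credentials travel in the header
        headers={"Authorization": "Bearer %s" % token},
        method="POST",
    )


def redirect_url_from(status, body):
    """A successful logout returns 201 with a JSON body containing the
    post-logout redirect URL (field name 'afterLogoutURL' is an assumption)."""
    if status != 201:
        raise RuntimeError("logout failed with status %d" % status)
    return json.loads(body)["afterLogoutURL"]
```

A client would send the request, check for 201, and then navigate to the returned redirect URL.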
Opening this issue to consider whether Kubeflow should publish and curate TF Serving Docker images.
I think there is a reasonable expectation in the community that users shouldn't have to build Docker images just to serve models: see
A number of folks have already publicly shared Docker images (see tensorflow/serving#513), and Kubeflow has also made a Docker image publicly available based on this Dockerfile
Kubeflow's ksonnet component for serving depends on having a publicly available Docker image to prevent customers from having to build their own image.
I think what's missing is a commitment to maintaining and supporting these images. I think Kubeflow should try to figure out what that would mean and whether we want to take that on.
Here are some questions to consider
Initial proposal
/cc @bhack @chrisolston @rhaertel80 @lluunn @nfiedel @bhupchan @ddysher