Commit 648fe52

Merge pull request #43 from gmfrasca/dashboard-tiles

feat(manifests): ODH Dashboard tiles

anishasthana authored Oct 5, 2022
2 parents 8889082 + 285d139 commit 648fe52

Showing 4 changed files with 169 additions and 1 deletion.
@@ -53,7 +53,7 @@ spec:
  requests:
    cpu: 30m
    memory: 500Mi
- limits:
+ limits:
    cpu: 250m
    memory: 1Gi
  serviceAccountName: ml-pipeline-visualizationserver
@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
app: odh-dashboard
app.kubernetes.io/part-of: odh-dashboard
resources:
- ./odhapplications/data-science-pipelines-odhapplication.yaml
- ./odhquickstarts/data-science-pipelines-odhquickstart.yaml
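For reference, a kustomization like the one above is typically rendered or applied with kustomize-aware tooling (a usage sketch, not part of this commit):
```
# Preview the manifests this kustomization produces
kubectl kustomize .

# Or apply them directly using oc's built-in kustomize support
oc apply -k .
```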
@@ -0,0 +1,71 @@
apiVersion: dashboard.opendatahub.io/v1
kind: OdhApplication
metadata:
name: data-science-pipelines
annotations:
opendatahub.io/categories: 'Model development,Model training,Model optimization,Data analysis,Data preprocessing'
spec:
beta: true
betaTitle: Data Science Pipelines
betaText: This application is available for early access prior to official release.
displayName: Data Science Pipelines
description: Data Science Pipelines is a workflow platform focused on enabling machine learning operations such as model development, experimentation, orchestration, and automation.
provider: Red Hat
category: ODH Core
support: Open Data Hub
docsLink: https://www.kubeflow.org/docs/components/pipelines/
quickStart: create-data-science-pipeline
img: '<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 192 145"><defs><style>.cls-1{fill:#e00;}</style></defs><title>RedHat-Logo-Hat-Color</title><path d="M157.77,62.61a14,14,0,0,1,.31,3.42c0,14.88-18.1,17.46-30.61,17.46C78.83,83.49,42.53,53.26,42.53,44a6.43,6.43,0,0,1,.22-1.94l-3.66,9.06a18.45,18.45,0,0,0-1.51,7.33c0,18.11,41,45.48,87.74,45.48,20.69,0,36.43-7.76,36.43-21.77,0-1.08,0-1.94-1.73-10.13Z"/><path class="cls-1" d="M127.47,83.49c12.51,0,30.61-2.58,30.61-17.46a14,14,0,0,0-.31-3.42l-7.45-32.36c-1.72-7.12-3.23-10.35-15.73-16.6C124.89,8.69,103.76.5,97.51.5,91.69.5,90,8,83.06,8c-6.68,0-11.64-5.6-17.89-5.6-6,0-9.91,4.09-12.93,12.5,0,0-8.41,23.72-9.49,27.16A6.43,6.43,0,0,0,42.53,44c0,9.22,36.3,39.45,84.94,39.45M160,72.07c1.73,8.19,1.73,9.05,1.73,10.13,0,14-15.74,21.77-36.43,21.77C78.54,104,37.58,76.6,37.58,58.49a18.45,18.45,0,0,1,1.51-7.33C22.27,52,.5,55,.5,74.22c0,31.48,74.59,70.28,133.65,70.28,45.28,0,56.7-20.48,56.7-36.65,0-12.72-11-27.16-30.83-35.78"/></svg>'
getStartedLink: https://www.kubeflow.org/docs/started/
enable:
title: Enable Data Science Pipelines
actionLabel: Enable
description: |-
Clicking enable will add a card to the Enabled page to access the Data Science Pipelines interface.
Before enabling, be sure you have installed OpenShift Pipelines and have an S3 object store configured.
validationConfigMap: ds-pipelines-dashboardtile-validation-result
kfdefApplications: []
#kfdefApplications: ['data-science-pipelines'] # https://github.com/opendatahub-io/odh-dashboard/issues/625
route: ml-pipeline-ui
internalRoute: ml-pipeline-ui
getStartedMarkDown: |-
# Getting Started With Data Science Pipelines
Below is a list of samples that currently run end to end, taking the compiled Tekton YAML and deploying it on a Tekton cluster directly. If you are interested in the larger list of pipeline samples we are testing for whether they can be compiled to Tekton format, please [look at the corresponding status page](https://github.com/opendatahub-io/ml-pipelines/tree/master/sdk/python/tests/README.md).
The [DSP Tekton User Guide](https://github.com/opendatahub-io/ml-pipelines/tree/master/guides/kfp-user-guide) describes the possible ways to develop and consume Data Science Pipelines. It's recommended to work through at least one of the methods in the user guide before heading into the samples below.
## Prerequisites
- Install the [OpenShift Pipelines Operator](https://docs.openshift.com/container-platform/4.7/cicd/pipelines/installing-pipelines.html), then connect to the cluster from your current shell with `oc`.
- Install the [kfp-tekton](https://github.com/opendatahub-io/ml-pipelines/tree/master/sdk/README.md) SDK:
```
# Set up the python virtual environment
python3 -m venv .venv
source .venv/bin/activate
# Install the kfp-tekton SDK
pip install kfp-tekton
```
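With the SDK installed, a pipeline is defined in Python and compiled to a Tekton YAML that the Pipelines UI can ingest. Here is a minimal sketch; the names `minimal_pipeline.py`, `hello_pipeline`, and `hello-dsp.yaml` are illustrative, not taken from the samples below:
```
# minimal_pipeline.py -- an illustrative one-step pipeline, not one of the samples below
import kfp.dsl as dsl
from kfp_tekton.compiler import TektonCompiler

def echo_op():
    # A single component that prints a message from a stock busybox image
    return dsl.ContainerOp(
        name='echo',
        image='busybox',
        command=['sh', '-c'],
        arguments=['echo "Hello from Data Science Pipelines"'])

@dsl.pipeline(name='hello-dsp', description='A minimal one-step pipeline')
def hello_pipeline():
    echo_op()

if __name__ == '__main__':
    # Writes hello-dsp.yaml, which can then be uploaded through the Pipelines UI
    TektonCompiler().compile(hello_pipeline, 'hello-dsp.yaml')
```
The samples below follow this same pattern: run the script with Python, then upload the generated Tekton YAML.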
## Samples
- [MNIST End to End example with DSP components](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/e2e-mnist)
- [Hyperparameter tuning using Katib](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/katib)
- [Trusted AI Pipeline with AI Fairness 360 and Adversarial Robustness 360 components](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/trusted-ai)
- [Training and Serving Models with Watson Machine Learning](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/watson-train-serve#training-and-serving-models-with-watson-machine-learning)
- [Lightweight python components example](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/lightweight-component)
- [The flip-coin pipeline](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/flip-coin)
- [Nested pipeline example](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/nested-pipeline)
- [Pipeline with Nested loops](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/nested-loops)
- [Using Tekton Custom Task on DSP](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/tekton-custom-task)
- [The flip-coin pipeline using custom task](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/flip-coin-custom-task)
- [Retrieve DSP run metadata using Kubernetes downstream API](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/k8s-downstream-api)
@@ -0,0 +1,89 @@
apiVersion: console.openshift.io/v1
kind: OdhQuickStart
metadata:
name: create-data-science-pipeline
annotations:
opendatahub.io/categories: 'Getting started,Model development,Model training,Model optimization,Data analysis,Data preprocessing'
spec:
displayName: Creating a Data Science Pipeline
appName: data-science-pipelines
durationMinutes: 5
icon: TODO
description: Create a simple pipeline that automatically runs tasks in a machine learning deployment workflow
introduction: |-
### This quick start shows you how to create a Data Science Pipeline.
Open Data Hub lets you run Data Science Pipelines in a scalable OpenShift hybrid cloud environment.
This quick start shows you how to compile, create, and run a simple example pipeline using the Kubeflow Pipelines Python SDK and the Data Science Pipelines UI.
tasks:
- title: Launch Data Science Pipelines
description: |-
### To find the Data Science Pipelines Launch action:
1. Click **Applications** → **Enabled**.
2. Find the Data Science Pipelines card.
3. Click **Launch** on the Data Science Pipelines card to access the **Pipelines dashboard**.
A new browser tab will open displaying the **Pipelines Dashboard** page.
review:
instructions: |-
#### To verify you have launched Data Science Pipelines:
Is a new **Data Science Pipelines** browser tab visible with the **Dashboard** page open?
failedTaskHelp: This task is not verified yet. Try the task again.
summary:
success: You have launched Data Science Pipelines.
failed: Try the steps again.

- title: Install Python SDK and compile sample pipeline
description: |-
### Install the Kubeflow Pipelines Python SDK
1. Follow the [Kubeflow Pipelines Tekton Python SDK installation instructions](https://github.com/opendatahub-io/ml-pipelines/blob/master/samples/README.md#prerequisites).
2. Download, clone, or copy the [flip-coin example pipeline](https://github.com/opendatahub-io/ml-pipelines/blob/master/samples/flip-coin/condition.py).
3. Compile the Python pipeline definition into a Tekton YAML:
```
python condition.py
```
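For orientation, the compile step at the bottom of a kfp-tekton sample such as `condition.py` generally looks like the following (a sketch; the exact pipeline function name in the repo may differ):
```
from kfp_tekton.compiler import TektonCompiler
# Assumes the sample defines a pipeline function, e.g. flipcoin_pipeline
TektonCompiler().compile(flipcoin_pipeline, 'condition.yaml')
```
Running `python condition.py` therefore writes `condition.yaml` next to the script.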
review:
instructions: |-
#### To verify you compiled the flip-coin sample pipeline:
Is there now a `condition.yaml` file in the directory you downloaded `condition.py` from?
failedTaskHelp: This task is not verified yet. Try the task again.
summary:
success: You have installed the Kubeflow Pipelines Tekton SDK and compiled a sample pipeline definition into a Tekton YAML.
failed: Try the steps again.

- title: Create a Pipeline
description: |-
### Create a simple pipeline from the compiled example pipeline YAML:
1. Click the **+Upload Pipeline** button in the top right corner.
2. Leave the **Create a new pipeline** radio button selected.
3. Type a pipeline name in the **Pipeline Name** field.
4. Add a short description in the **Pipeline Description** field.
5. Select the **Upload a file** radio button and click **Choose file** in the **File** text box.
6. Find and select the `condition.yaml` file you compiled in the previous step.
7. Click **Create**.
The Data Science Pipelines **Upload Pipeline** page will redirect you to a graph of the pipeline you created.
review:
instructions: |-
#### To verify that you have created a Pipeline:
Do you see a graph in the shape of a flow diagram, titled with your sample pipeline's name?
failedTaskHelp: This task is not verified yet. Try the task again.
summary:
success: You have successfully created a Data Science Pipeline.
failed: Try the steps again.

- title: Run the Pipeline
description: |-
### Run the pipeline you created in the previous step:
1. Click the **+ Create run** button in the top right corner. You will be redirected to a **Start a run** form.
2. Click the **Choose** button in the **Experiment** text field. Select the **Default** experiment.
3. Leave all other fields the same.
4. Click the **Start** button.
You will now be redirected to the **Default** Experiment page. You should see an execution of the pipeline you created in the **Active** list of runs.
review:
instructions: |-
#### To verify that you have executed a Pipeline run:
Are you on the **Experiments** page of the Data Science Pipelines UI? Do you see an entry under **Active** runs with the name of the pipeline you created?
failedTaskHelp: This task is not verified yet. Try the task again.
summary:
success: You have successfully run a Data Science Pipeline.
failed: Try the steps again.
conclusion: You are now able to create and run a sample Data Science Pipeline!
nextQuickStart: []
