jobs/basics/hello-automl/hello-automl-job-basic.yml |
|
A Classification job using bank marketing |
jobs/single-step/dask/nyctaxi/job.yml |
|
This sample shows how to run a distributed Dask job on Azure ML. The 24 GB NYC Taxi dataset is read in CSV format by a 4-node Dask cluster, processed, and then written as job output in Parquet format. |
jobs/single-step/gpu_perf/gpu_perf_job.yml |
|
Runs NCCL-tests on gpu nodes. |
jobs/single-step/julia/iris/job.yml |
|
Train a Flux model on the Iris dataset using the Julia programming language. |
jobs/single-step/lightgbm/iris/job-sweep.yml |
|
Run a hyperparameter sweep job for LightGBM on Iris dataset. |
jobs/single-step/lightgbm/iris/job.yml |
|
Train a LightGBM model on the Iris dataset. |
jobs/single-step/pytorch/cifar-distributed/job.yml |
|
Train a basic convolutional neural network (CNN) with PyTorch on the CIFAR-10 dataset, distributed via PyTorch. |
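A distributed PyTorch command job like this one typically adds `distribution` and `resources` blocks to an otherwise ordinary command job YAML. A minimal sketch under the CLI v2 `commandJob` schema; the code path, environment, and compute names below are illustrative placeholders, not values taken from the sample:

```yaml
# Hypothetical distributed PyTorch command job sketch (CLI v2 schema);
# code folder, environment, and compute names are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
code: src
command: python train.py --epochs ${{inputs.epochs}}
inputs:
  epochs: 1
environment: azureml:AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu@latest
compute: azureml:gpu-cluster
distribution:
  type: pytorch
  process_count_per_instance: 1   # one worker process per node
resources:
  instance_count: 2               # two nodes participate in training
```

Azure ML launches one process group across `instance_count * process_count_per_instance` workers and sets the usual PyTorch distributed environment variables for the script.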
jobs/single-step/pytorch/iris/job.yml |
|
Train a neural network with PyTorch on the Iris dataset. |
jobs/single-step/pytorch/word-language-model/job.yml |
|
Train a multi-layer RNN (Elman, GRU, or LSTM) on a language modeling task with PyTorch. |
jobs/single-step/r/accidents/job.yml |
|
Train a GLM using R on the accidents dataset. |
jobs/single-step/r/iris/job.yml |
|
Train an R model on the Iris dataset. |
jobs/single-step/scikit-learn/diabetes/job.yml |
|
Train a scikit-learn LinearRegression model on the Diabetes dataset. |
jobs/single-step/scikit-learn/iris-notebook/job.yml |
|
Train a scikit-learn SVM on the Iris dataset using a custom Docker container build with a notebook via papermill. |
jobs/single-step/scikit-learn/iris/job-docker-context.yml |
|
Train a scikit-learn SVM on the Iris dataset using a custom Docker container build. |
jobs/single-step/scikit-learn/iris/job-sweep.yml |
|
Sweep hyperparameters for training a scikit-learn SVM on the Iris dataset. |
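A sweep job of this kind wraps a trial command and declares a search space, sampling algorithm, and objective. A minimal hedged sketch under the CLI v2 `sweepJob` schema; the hyperparameter name, bounds, and compute below are illustrative, not the sample's actual values:

```yaml
# Hypothetical sweep job sketch (CLI v2 sweepJob schema); the hyperparameter
# name and range are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json
type: sweep
trial:
  code: src
  command: python main.py --C ${{search_space.C}}
  environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
search_space:
  C:
    type: uniform
    min_value: 0.1
    max_value: 10.0
sampling_algorithm: random
objective:
  goal: maximize
  primary_metric: accuracy        # must match a metric the trial script logs
limits:
  max_total_trials: 20
compute: azureml:cpu-cluster
```

The `primary_metric` must be logged by the trial script (for example via MLflow), or the sweep cannot rank trials.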
jobs/single-step/scikit-learn/iris/job.yml |
|
Train a scikit-learn SVM on the Iris dataset. |
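Single-step command jobs like these need little more than a command, a code folder, an environment, and a compute target. A minimal sketch under the CLI v2 `commandJob` schema; file names, the input URL, and the compute name are illustrative assumptions:

```yaml
# Hypothetical minimal command job sketch (CLI v2 schema); paths, the data
# URL, and the compute name are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
code: src                         # local folder uploaded with the job
command: python main.py --iris-csv ${{inputs.iris_csv}}
inputs:
  iris_csv:
    type: uri_file
    path: https://azuremlexamples.blob.core.windows.net/datasets/iris.csv
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
```

Such a job runs with `az ml job create --file job.yml` against a configured workspace.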
jobs/single-step/spark/nyctaxi/job.yml |
|
This sample shows how to run a single-node Spark job on Azure ML. The 47 GB NYC Taxi dataset is read in Parquet format by a 1-node Spark cluster, processed, and then written as job output in Parquet format. |
jobs/single-step/tensorflow/mnist-distributed-horovod/job.yml |
|
Train a basic neural network with TensorFlow on the MNIST dataset, distributed via Horovod. |
jobs/single-step/tensorflow/mnist-distributed/job.yml |
|
Train a basic neural network with TensorFlow on the MNIST dataset, distributed via TensorFlow. |
jobs/single-step/tensorflow/mnist/job.yml |
|
Train a basic neural network with TensorFlow on the MNIST dataset. |
jobs/basics/hello-code.yml |
|
no description |
jobs/basics/hello-data-uri-folder.yml |
|
no description |
jobs/basics/hello-dataset.yml |
|
no description |
jobs/basics/hello-git.yml |
|
no description |
jobs/basics/hello-iris-datastore-file.yml |
|
no description |
jobs/basics/hello-iris-datastore-folder.yml |
|
no description |
jobs/basics/hello-iris-file.yml |
|
no description |
jobs/basics/hello-iris-folder.yml |
|
no description |
jobs/basics/hello-iris-literal.yml |
|
no description |
jobs/basics/hello-mlflow.yml |
|
no description |
jobs/basics/hello-notebook.yml |
|
no description |
jobs/basics/hello-pipeline-abc.yml |
|
no description |
jobs/basics/hello-pipeline-customize-output-file.yml |
|
no description |
jobs/basics/hello-pipeline-customize-output-folder.yml |
|
no description |
jobs/basics/hello-pipeline-default-artifacts.yml |
|
no description |
jobs/basics/hello-pipeline-io.yml |
|
no description |
jobs/basics/hello-pipeline-settings.yml |
|
no description |
jobs/basics/hello-pipeline.yml |
|
no description |
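The hello-pipeline examples compose command jobs under a `jobs` section of a pipeline job. A minimal hedged sketch under the CLI v2 `pipelineJob` schema; the step names, commands, and compute are illustrative:

```yaml
# Hypothetical two-step pipeline job sketch (CLI v2 pipelineJob schema);
# step names, commands, environment, and compute are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: hello_pipeline
jobs:
  hello_job:
    command: echo "hello"
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    compute: azureml:cpu-cluster
  world_job:
    command: echo "world"
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    compute: azureml:cpu-cluster
```

Steps without data dependencies between them, as here, can run in parallel; wiring one step's `outputs` into another's `inputs` makes the execution order explicit.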
jobs/basics/hello-sweep.yml |
|
Hello sweep job example. |
jobs/basics/hello-world-env-var.yml |
|
no description |
jobs/basics/hello-world-input.yml |
|
no description |
jobs/basics/hello-world-org.yml |
|
|
jobs/basics/hello-world-output-data.yml |
|
no description |
jobs/basics/hello-world-output.yml |
|
no description |
jobs/basics/hello-world.yml |
|
no description |
jobs/pipelines/cifar-10/pipeline.yml |
|
Pipeline using a distributed job to train a model on the CIFAR-10 dataset |
jobs/pipelines/nyc-taxi/pipeline.yml |
|
Train a model with NYC Taxi data |
jobs/automl-standalone-jobs/cli-automl-classification-task-bankmarketing/cli-automl-classification-task-bankmarketing.yml |
|
A Classification job using bank marketing |
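AutoML job YAMLs like this one declare a task type, training data, and limits instead of a training script. A hedged sketch under the CLI v2 `autoMLJob` schema; the target column, data path, compute, and limits below are illustrative placeholders:

```yaml
# Hypothetical AutoML classification job sketch (CLI v2 autoMLJob schema);
# target column, data path, compute, and limits are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/autoMLJob.schema.json
type: automl
task: classification
experiment_name: automl-classification-example
compute: azureml:cpu-cluster
primary_metric: accuracy
target_column_name: y
training_data:
  type: mltable
  path: ./training-data           # folder containing an MLTable definition
limits:
  timeout_minutes: 60
  max_trials: 10
```

AutoML then trains and ranks candidate models against `primary_metric` within the declared limits.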
jobs/automl-standalone-jobs/cli-automl-forecasting-task-energy-demand/cli-automl-forecasting-task-energy-demand.yml |
|
A Time-Series Forecasting job using energy demand dataset |
jobs/automl-standalone-jobs/cli-automl-image-classification-multiclass-task-fridge-items/cli-automl-image-classification-multiclass-task-fridge-items.yml |
|
A multi-class Image classification job using fridge items dataset |
jobs/automl-standalone-jobs/cli-automl-image-classification-multilablel-task-fridge-items/cli-automl-image-classification-multilabel-task-fridge-items.yml |
|
A multi-label Image classification job using fridge items dataset |
jobs/automl-standalone-jobs/cli-automl-image-instance-segmentation-task-fridge-items/cli-automl-image-instance-segmentation-task-fridge-items.yml |
|
An Image Instance segmentation job using fridge items dataset |
jobs/automl-standalone-jobs/cli-automl-image-object-detection-task-fridge-items/cli-automl-image-object-detection-task-fridge-items.yml |
|
An Image Object Detection job using fridge items dataset |
jobs/automl-standalone-jobs/cli-automl-regression-task-hardware-perf/cli-automl-regression-task-hardware-perf.yml |
|
A regression job using hardware performance dataset |
jobs/automl-standalone-jobs/cli-automl-text-classification-multilabel-paper-cat/cli-automl-text-classification-multilabel-paper-cat.yml |
|
A text classification multilabel job using paper categorization data |
jobs/automl-standalone-jobs/cli-automl-text-classification-newsgroup/cli-automl-text-classification-newsgroup.yml |
|
A text classification job using newsgroup dataset |
jobs/automl-standalone-jobs/cli-automl-text-ner-conll/cli-automl-text-ner-conll2003.yml |
|
A text named entity recognition job using CoNLL 2003 data |
jobs/pipelines-with-components/basics/1a_e2e_local_components/pipeline.yml |
|
Dummy train-score-eval pipeline with local components |
jobs/pipelines-with-components/basics/1b_e2e_registered_components/pipeline.yml |
|
E2E dummy train-score-eval pipeline with registered components |
jobs/pipelines-with-components/basics/2a_basic_component/pipeline.yml |
|
Hello World component example |
jobs/pipelines-with-components/basics/2b_component_with_input_output/pipeline.yml |
|
Component with inputs and outputs |
jobs/pipelines-with-components/basics/3a_basic_pipeline/pipeline.yml |
|
Basic Pipeline Job with 3 Hello World components |
jobs/pipelines-with-components/basics/3b_pipeline_with_data/pipeline.yml |
|
Pipeline with 3 component jobs with data dependencies |
jobs/pipelines-with-components/basics/4a_local_data_input/pipeline.yml |
|
Example of using data in a local folder as pipeline input |
jobs/pipelines-with-components/basics/4b_datastore_datapath_uri/pipeline.yml |
|
Example of using data folder from a Workspace Datastore as pipeline input |
jobs/pipelines-with-components/basics/4c_web_url_input/pipeline.yml |
|
Example of using a file hosted at a web URL as pipeline input |
jobs/pipelines-with-components/basics/4d_data_input/pipeline.yml |
|
Example of using data from a data asset as pipeline input |
jobs/pipelines-with-components/basics/5a_env_public_docker_image/pipeline.yml |
|
Pipeline job with component using public docker image as environment |
jobs/pipelines-with-components/basics/5b_env_registered/pipeline.yml |
|
Pipeline job with component using a registered AzureML environment |
jobs/pipelines-with-components/basics/5c_env_conda_file/pipeline.yml |
|
Pipeline job with component using environment defined by a conda file |
jobs/pipelines-with-components/basics/6a_tf_hello_world/pipeline.yml |
|
Prints the environment variable ($TF_CONFIG) useful for scripts running in a TensorFlow training environment |
jobs/pipelines-with-components/basics/6b_pytorch_hello_world/pipeline.yml |
|
Prints the environment variables useful for scripts running in a PyTorch training environment |
jobs/pipelines-with-components/basics/6c_r_iris/pipeline.yml |
|
Train an R model on the Iris dataset. |
jobs/pipelines-with-components/image_classification_with_densenet/pipeline.yml |
|
Train DenseNet for image classification |
jobs/pipelines-with-components/nyc_taxi_data_regression/pipeline.yml |
|
Train a regression model on the NYC Taxi dataset |
jobs/pipelines-with-components/pipeline_with_hyperparameter_sweep/pipeline.yml |
|
Tune hyperparameters using a TensorFlow component |
jobs/pipelines-with-components/rai_pipeline_adult_analyse/pipeline.yml |
|
Sample Responsible AI (RAI) pipeline |