
# Ai-Dock + OneTrainer Docker Image

Run OneTrainer in a Docker container locally or in the cloud.

This image is an extension of Ai-Dock/Linux-Desktop with OneTrainer preinstalled for user convenience.

These container images are tested extensively at Vast.ai and Runpod.io, but they are expected to be compatible with other GPU cloud services as well.

> [!NOTE]
> These images do not bundle models or third-party configurations. You should use a provisioning script to automatically configure your container. You can find examples in `config/provisioning`.
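As a rough sketch, a local GPU run that points at a provisioning script might look like the following. The image name `ghcr.io/ai-dock/onetrainer`, the `PROVISIONING_SCRIPT` variable, and the script URL are assumptions for illustration; confirm the exact names against the base wiki and the container registry, and add port mappings as described in the base documentation.

```bash
# Sketch only: run the image locally with GPU access and point it at a
# provisioning script that configures the container on first boot.
# The image name, tag, variable name, and URL are assumptions; check the
# base wiki and package registry for the real values.
docker run -d \
  --gpus all \
  --name onetrainer \
  -e PROVISIONING_SCRIPT="https://example.com/my-provisioning.sh" \
  ghcr.io/ai-dock/onetrainer:latest-cuda
```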

## Documentation

All AI-Dock containers share a common base which is designed to make running on cloud services such as vast.ai and runpod.io as straightforward and user-friendly as possible.

Common features and options are documented in the base wiki; any additional features unique to this image are detailed below.

## Version Tags

The `:latest` tag points to `:latest-cuda`.

Tags follow these patterns:

### CUDA

- `:pytorch-[pytorch-version]-py[python-version]-cuda-[x.x.x]-base-[ubuntu-version]`
- `:latest-cuda` points to `:pytorch-2.1.2-py3.10-cuda-11.8.0-base-22.04`
- `:latest-cuda-jupyter` points to `:jupyter-pytorch-2.1.2-py3.10-cuda-11.8.0-base-22.04`

Browse here for an image suitable for your target environment.

- Supported Python versions: `3.10`
- Supported PyTorch versions: `2.1.2`, `2.2.0`
- Supported platforms: NVIDIA CUDA
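Pulling a pinned build that matches the pattern above keeps your environment reproducible. The registry path below is an assumption, so browse the published packages for the exact image name.

```bash
# Sketch: pull a specific, fully-qualified tag instead of :latest.
# The registry path is an assumption; verify it in the package registry.
docker pull ghcr.io/ai-dock/onetrainer:pytorch-2.1.2-py3.10-cuda-11.8.0-base-22.04
```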

## Additional Environment Variables

| Variable            | Description                                                 |
| ------------------- | ----------------------------------------------------------- |
| `AUTO_UPDATE`       | Update OneTrainer on startup (default `true`)               |
| `ONETRAINER_BRANCH` | OneTrainer branch/commit hash (default `master`)            |
| `ONETRAINER_FLAGS`  | Startup flags, e.g. `--generic-option1 --generic-option2`   |
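These are ordinary container environment variables, so a sketch of setting them at startup could look like this; the image name is an assumption, as above.

```bash
# Sketch: disable auto-update and pin OneTrainer to a specific ref.
# Variable names are from the table above; the image name is an assumption.
docker run -d \
  --gpus all \
  -e AUTO_UPDATE=false \
  -e ONETRAINER_BRANCH=master \
  -e ONETRAINER_FLAGS="--generic-option1" \
  ghcr.io/ai-dock/onetrainer:latest-cuda
```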

See the base environment variables here for more configuration options.

## Additional Micromamba Environments

| Environment  | Packages                    |
| ------------ | --------------------------- |
| `onetrainer` | OneTrainer and dependencies |

This micromamba environment will be activated on shell login.
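For instance, from a shell inside the container you can re-activate the environment explicitly and check what it provides; `micromamba activate` is standard micromamba usage, and the environment name comes from the table above.

```bash
# Sketch: activate the onetrainer environment (already active on login
# shells) and confirm the PyTorch version installed inside it.
micromamba activate onetrainer
python -c "import torch; print(torch.__version__)"
```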

See the base micromamba environments here.

## Pre-Configured Templates

- Vast.ai
- Runpod.io

The author (@robballantyne) may be compensated if you sign up to services linked in this document. Testing multiple variants of GPU images in many different environments is both costly and time-consuming; this helps to offset the costs.