4.0 Running the Image
A 'feature-complete' `docker-compose.yaml` file is included for your convenience. All features of the image are included - Simply edit the environment variables in `.env`, save and then type `docker compose up`.
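As an illustration, a `.env` edit might look like the following. The variable names shown here are examples only - check the `.env` file shipped with your image for the variables it actually supports:

```
# Illustrative .env fragment - variable names are assumptions,
# consult the .env provided with your image for the real ones
WEB_USER=user
WEB_PASSWORD=changeme
```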
If you prefer to use the standard `docker run` syntax, the command to pass is `init.sh`.
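If you are not using Compose, the equivalent `docker run` invocation can be sketched roughly as follows. The image tag, port and variable names here are illustrative assumptions - check the README of the relevant image repository for the real values:

```shell
# Hypothetical example - substitute your own image tag, ports and variables
docker run -d \
  --gpus all \
  -p 1111:1111 \
  -e WEB_USER=user \
  -e WEB_PASSWORD=changeme \
  ghcr.io/ai-dock/example-image:latest-cuda \
  init.sh
```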
This image should be compatible with any GPU cloud platform. You simply need to pass environment variables at runtime.
Note
Please raise an issue on the relevant repository if your provider cannot run the image.
Container Cloud
Container providers don't give you access to the docker host but are quick and easy to set up. They are often inexpensive when compared to a full VM or bare metal solution.
All images built for AI-Dock are tested for compatibility with vast.ai but should be compatible with a wide range of cloud platforms.
Template links for Vast will be made available in the `README.md` of the relevant image repository.
Warning
Container cloud providers may offer both 'community' and 'secure' versions of their cloud. If your use case involves storing sensitive information (eg. API keys, auth tokens) then you should always choose the secure option.
VM Cloud
Running docker images on a virtual machine/bare metal server is much like running locally.
You'll need to:
- Configure your server
- Set up docker
- Clone this repository
- Edit `.env` and `docker-compose.yml`
- Run `docker compose up`
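The steps above can be sketched as a shell session. The repository URL is an illustrative placeholder - clone the repository for the image you actually want:

```shell
# Hypothetical walkthrough of the steps above - repository name is a placeholder
git clone https://github.com/ai-dock/example-image.git
cd example-image
nano .env                 # set your environment variables
nano docker-compose.yml   # adjust ports/volumes if needed
docker compose up
```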
Compatible VM Providers
Images that do not require a GPU will run anywhere - Use an image tagged `:*-cpu-xx.xx`.
Where a GPU is required you will need either `:*cuda*` or `:*rocm*` depending on the underlying hardware.
A curated list of VM providers currently offering GPU instances:
It can be useful to perform certain actions when starting a container, such as creating directories and downloading files.
You can use the environment variable `PROVISIONING_SCRIPT` to specify the URL of a script you'd like to run.
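For instance, in `.env` (the URL shown is a placeholder, not a real script):

```
# Placeholder URL - substitute a raw link to your own provisioning script
PROVISIONING_SCRIPT=https://raw.githubusercontent.com/example/scripts/main/provision.sh
```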
The URL must point to a plain text file - GitHub Gists/Pastebin (raw) are suitable options.
If you are running locally you may instead opt to mount a script at `/opt/ai-dock/bin/provisioning.sh`.
Each application repository will provide a default script located at `https://raw.githubusercontent.com/ai-dock/[repository-name]/config/provisioning/default.sh`. This file will be downloaded at runtime unless you have specified your own URL as detailed above. You may use the default file as inspiration for your own provisioning script.
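A provisioning script is simply a shell script run inside the container. As a minimal sketch - the directory layout, `WORKSPACE` default and `MODEL_URL` variable are illustrative assumptions, so base your real script on the default script for your image:

```shell
#!/bin/bash
# Minimal provisioning sketch - paths and variable names are assumptions.
# WORKSPACE would normally be set by the image; default to a temp dir here.
WORKSPACE="${WORKSPACE:-/tmp/ai-dock-demo}"

# Create a directory for downloaded models
mkdir -p "${WORKSPACE}/storage/models"

# Download a file only if a URL was provided (left unset in this sketch)
if [ -n "${MODEL_URL:-}" ]; then
    wget -qO "${WORKSPACE}/storage/models/model.bin" "${MODEL_URL}"
fi

printf 'Provisioning complete: %s\n' "${WORKSPACE}/storage/models"
```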
Note
If configured, `sshd`, `caddy`, `cloudflared`, `serviceportal`, `storagemonitor`, `syncthing` & `logtail` will be launched before provisioning; Any other processes will launch after.
Warning
Only use scripts that you trust and which cannot be changed without your consent.
All services listen for connections via the caddy proxy at `0.0.0.0`. This gives you some flexibility in how you interact with your instance:
Expose the Ports
This is fine if you are working locally but can be dangerous for remote connections where data is passed in plaintext between your machine and the container over http.
SSH Tunnel
You will only need to expose port `22` (SSH), which can then be used with port forwarding to allow secure connections to your services.
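As an illustration, a local port forward to the Service Portal might look like this. The hostname and username are assumptions for the example - substitute your own instance details:

```shell
# Forward local port 1111 to the container's Service Portal over SSH
# Hostname and user are illustrative placeholders
ssh -p 22 -L 1111:localhost:1111 user@instance.example.com
```

With the tunnel open, the Service Portal is reachable at `http://localhost:1111` on your local machine.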
If you are unfamiliar with port forwarding then you should read a guide on the topic before connecting remotely.
Cloudflare Tunnel
You can use the included cloudflared service to make secure connections without having to expose any ports to the public internet. See more below.
All applications are listed with easy-to-use links in the Service Portal (port `1111`), so you should begin your interaction with AI-Dock containers there.