[Feature] Add a custom docker image with GPU support #290
Comments
Hey there, thanks for creating the issue. Unfortunately, there are a few other problems I have with the GPU support:
If you want to use the GPU widget, I suggest running from source. If that does not work, and you want to get it running in Docker, it would be really cool if you could report back all the steps you took, so we can put it in the docs :) I will keep this open as a feature request. In case anyone wants to spend some time working on it, feel free to PM me on Discord!
I thought it was something ready-made that you already had in your drawer :D
I'd also like to use it for an NVIDIA GPU.
@MauriceNino Somehow this is working for me, but I got an empty graph: https://i.ibb.co/zNqXYYG/dash2.png
systeminformation output:
Hi @lukasmrtvy, thanks for trying it out and reporting back. Why are you using …? Also, the problem with your setup is that the GPU entry in your SI test output is missing the needed properties. I tried your test container, installed GPU support according to [this guide](https://www.howtogeek.com/devops/how-to-use-an-nvidia-gpu-with-docker-containers/), and got the following output:
As you can see, the needed props exist there. As to your questions:
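To check what systeminformation actually reports inside a running container, one option is to call its graphics() function directly. A sketch, assuming the container is named dashdot and that node can resolve the systeminformation package from the container's working directory (dashdot uses systeminformation internally, but the exact layout may differ):

```shell
docker exec dashdot node -e \
  "require('systeminformation').graphics().then(g => console.log(JSON.stringify(g.controllers, null, 2)))"
```

The GPU widget can only display properties that actually appear in this controllers array.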
Here's what I've tried so far to get this working on Unraid (unsuccessfully). On Unraid, the general procedure is to install the Nvidia-Driver plugin, which in turn installs the necessary drivers and the Nvidia Container Toolkit. Once the container toolkit is installed, you can make your Nvidia GPU available to Docker containers by adding the appropriate runtime argument. Additionally, you need to add the container toolkit environment variables NVIDIA_VISIBLE_DEVICES and NVIDIA_DRIVER_CAPABILITIES. Using this setup, most Docker containers can access the GPU. For example, running …
However, when using the same docker run arguments (runtime and variables) with the dashdot image, it does not work. I don't really know enough about Docker to know what to troubleshoot next. I have been wondering if there is something fundamentally different about the base image you're using which prevents the nvidia runtime from working?
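For anyone replicating this outside Unraid, the procedure described above corresponds roughly to the following docker run invocation, assuming the NVIDIA Container Toolkit is installed on the host (the image name and port mapping are illustrative; newer Docker versions can use --gpus all instead of the runtime flag):

```shell
docker run -d \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  -p 3001:3001 \
  mauricenino/dashdot
```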
I was able to get it working by creating my own image using the Dockerfile shared by @lukasmrtvy. I do not have the same issue with missing statistics; it appears to be working as expected.
@HoreaM managed to get it working too with the following config.

docker-compose:

dashdot-gpu:
  image: dashdot-gpu:latest
  restart: unless-stopped
  deploy:
    resources:
      reservations:
        devices:
          - capabilities:
              - gpu
  privileged: true
  ports:
    - 7678:3001
  environment:
    DASHDOT_ENABLE_CPU_TEMPS: 'true'
    DASHDOT_ALWAYS_SHOW_PERCENTAGES: 'true'
    DASHDOT_SPEED_TEST_INTERVAL: '1440'
    DASHDOT_ENABLE_STORAGE_SPLIT_VIEW: 'true'
    DASHDOT_WIDGET_LIST: 'os,storage,network,cpu,ram,gpu'
    DASHDOT_FS_DEVICE_FILTER: 'sdd'
    DASHDOT_OVERRIDE_NETWORK_SPEED_UP: '500000000'
    DASHDOT_OVERRIDE_NETWORK_SPEED_DOWN: '500000000'
  volumes:
    - /:/mnt/host:ro

Dockerfile (it's a modified version of the official one, just running cuda/ubuntu as a base instead of alpine):

# BASE #
FROM nvidia/cuda:12.2.0-base-ubuntu20.04 AS base
WORKDIR /app
ARG TARGETPLATFORM
ENV DASHDOT_RUNNING_IN_DOCKER=true
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES="compute,video,utility"
ENV TZ=Europe/Bucharest
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN \
/bin/echo ">> installing dependencies" &&\
apt-get update &&\
apt-get install -y \
wget \
mdadm \
dmidecode \
util-linux \
pciutils \
curl \
lm-sensors \
speedtest-cli &&\
if [ "$TARGETPLATFORM" = "linux/amd64" ] || [ "$(uname -m)" = "x86_64" ]; \
then \
/bin/echo ">> installing dependencies (amd64)" &&\
wget -qO- https://install.speedtest.net/app/cli/ookla-speedtest-1.1.1-linux-x86_64.tgz \
| tar xmoz -C /usr/bin speedtest; \
elif [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$(uname -m)" = "aarch64" ]; \
then \
/bin/echo ">> installing dependencies (arm64)" &&\
wget -qO- https://install.speedtest.net/app/cli/ookla-speedtest-1.1.1-linux-aarch64.tgz \
| tar xmoz -C /usr/bin speedtest &&\
apt-get install -y libraspberrypi-bin; \
elif [ "$TARGETPLATFORM" = "linux/arm/v7" ]; \
then \
/bin/echo ">> installing dependencies (arm/v7)" &&\
wget -qO- https://install.speedtest.net/app/cli/ookla-speedtest-1.1.1-linux-armhf.tgz \
| tar xmoz -C /usr/bin speedtest &&\
apt-get install -y libraspberrypi-bin; \
else /bin/echo "Unsupported platform"; exit 1; \
fi
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN curl -sL https://deb.nodesource.com/setup_19.x | bash -
RUN \
/bin/echo ">> installing yarn" &&\
apt-get update &&\
apt-get install -y \
yarn
# DEV #
FROM base AS dev
EXPOSE 3001
EXPOSE 3000
RUN \
/bin/echo -e ">> installing dependencies (dev)" &&\
apt-get install -y \
git &&\
git config --global --add safe.directory /app
# BUILD #
FROM base as build
ARG BUILDHASH
ARG VERSION
RUN \
/bin/echo -e ">> installing dependencies (build)" &&\
apt-get install -y \
git \
make \
clang \
build-essential &&\
git config --global --add safe.directory /app &&\
/bin/echo -e "{\"version\":\"$VERSION\",\"buildhash\":\"$BUILDHASH\"}" > /app/version.json
RUN \
/bin/echo -e ">> clean-up" &&\
apt-get clean && \
rm -rf \
/tmp/* \
/var/tmp/*
COPY . ./
RUN \
yarn --immutable --immutable-cache &&\
yarn build:prod
# PROD #
FROM base as prod
EXPOSE 3001
COPY --from=build /app/package.json .
COPY --from=build /app/version.json .
COPY --from=build /app/.yarn/releases/ .yarn/releases/
COPY --from=build /app/dist/apps/api dist/apps/api
COPY --from=build /app/dist/apps/cli dist/apps/cli
COPY --from=build /app/dist/apps/view dist/apps/view
CMD ["yarn", "start"]

This requires building your own image right now, but I would like to add it to the main repo down the line. I just didn't get time to implement it yet. Can you confirm that one is working for you as well, @caesay?
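One note on the compose file above: it references a locally built dashdot-gpu:latest image rather than one from a registry, so the image has to be built first. A minimal sketch, assuming the modified Dockerfile has already replaced the stock one in a checkout of the repo:

```shell
# build the image under the tag the compose file expects
docker build -t dashdot-gpu:latest .
# then start the service with the compose file above
docker compose up -d
```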
I dropped your Dockerfile into Portainer on my server to build an image and got the following build failure:

...
Step 22/33 : RUN /bin/echo -e ">> clean-up" && apt-get clean && rm -rf /tmp/* /var/tmp/*
---> Running in ae7986553966
>> clean-up
Removing intermediate container ae7986553966
---> c68bc9e2835f
Step 23/33 : COPY . ./
---> ca116c40c3d2
Step 24/33 : RUN yarn --immutable --immutable-cache && yarn build:prod
---> Running in aa462cd0b41c
yarn install v1.22.19
info No lockfile found.
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
Done in 0.03s.
yarn run v1.22.19
error Couldn't find a package.json file in "/app"
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
The command '/bin/sh -c yarn --immutable --immutable-cache && yarn build:prod' returned a non-zero code: 1
Sorry, I'm not really an expert in this, but the only thing that comes to mind is that you probably forgot to do a git clone of this project first. The way I did it: I first ran git clone https://github.com/MauriceNino/dashdot.git and then replaced the Dockerfile in the project with the one above.
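The steps described above can be sketched as follows (the path to the saved CUDA-based Dockerfile is a placeholder):

```shell
git clone https://github.com/MauriceNino/dashdot.git
cd dashdot
# overwrite the stock Dockerfile with the CUDA/Ubuntu one from this thread
cp /path/to/cuda.Dockerfile Dockerfile
docker build -t dashdot-gpu:latest .
```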
Hi, any ideas on how to do this for an Intel iGPU with the i915 driver? Usually for Plex it's enough to do …
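For reference, the Plex-style approach alluded to above is to map the host's DRI render device into the container. A sketch, with the caveat that this only exposes the device; whether dashdot can actually read stats from it is exactly the open question here (the image name and port are illustrative):

```shell
docker run -d \
  --device /dev/dri:/dev/dri \
  -p 3001:3001 \
  mauricenino/dashdot
```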
Subscribing to this... for whenever this becomes mainstream, maybe as a … tag. Not sure I want to build this manually all the time since I use Watchtower, so just crossing my fingers here.
+1
🎉 This issue has been resolved in version 5.8.0. Please check the changelog for more details.
So nothing for Intel GPUs? Or will it work with the nvidia image?
@PilaScat No, nothing for Intel GPUs right now. If you know how to make it work, please create a PR for it. I have no machine with an Intel GPU for testing, so I can't implement it, unfortunately. The same goes for AMD.
I ran some builds with Docker locally.
I don't get memory or load data populated, though. I did a bit of research, and NVTOP (a non-Alpine, multi-brand GPU monitoring tool) mentions that Intel is working on exposing more hardware information through HWMON, but their readme was updated to say that about two years ago (see the Intel patchwork, for what it's worth). I ran a bunch of different commands on the system files visible from Alpine, but found little of value for GPU stats. I tried stressing the iGPU while repeatedly checking those files to see if the current frequency values would change, but they wouldn't. Here's the output, for what it's worth:
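For anyone who wants to repeat that sysfs probing, here is a minimal Python sketch. It assumes an Intel iGPU whose i915 driver exposes frequency files under /sys/class/drm; the card index and the set of available files vary by kernel and hardware:

```python
from pathlib import Path
from typing import Optional

def read_igpu_freq_mhz(card: str = "card0") -> Optional[int]:
    """Return the current i915 GPU frequency in MHz, or None if unavailable."""
    # gt_act_freq_mhz holds the actual current frequency on most i915 kernels;
    # the file is absent on machines without an Intel iGPU (and in many containers).
    path = Path(f"/sys/class/drm/{card}/gt_act_freq_mhz")
    try:
        return int(path.read_text().strip())
    except (OSError, ValueError):
        return None

if __name__ == "__main__":
    freq = read_igpu_freq_mhz()
    if freq is None:
        print("no i915 frequency data visible")
    else:
        print(f"current iGPU frequency: {freq} MHz")
```

Stressing the GPU while polling this value should show it change if the driver exposes live data, which is what the experiment above found it did not.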
Description of the feature
Hello
Is there any chance that you could add the GPU widget to the Docker image on hub.docker.com under a custom tag?
Since you said: "Docker images do not include the necessary tools (mainly, because I don't want to bloat the image for everyone)."
You could add it under a custom tag on hub.docker.com, so anyone who wants the GPU widget in the Docker image can just pull the image with the GPU tag (for example), and everyone who doesn't need it can keep pulling the image with the latest tag.
This way everyone will be happy :)
Additional context
No response