[FEAT] Why not build frontend in the Dockerfile #461
I can't find the issue/PR at the moment, but this is mostly because multi-arch ARM builds are incredibly slow on GH Actions. GH Actions uses QEMU emulation to support ARM, and I was seeing build times on the order of 1h+ during ARM builds. It was related to file access, if I remember correctly. Something similar happened in my other project, Fasten Health, but there we were able to solve it via a dedicated build service that natively supports ARM. If GH Actions starts natively supporting ARM build nodes, we should definitely move the frontend compilation back into the Dockerfile. Until then we may be stuck with this Frankenstein build.
Ahhah! That makes perfect sense. I (mistakenly, due to ignorance) thought the frontend was arch-agnostic! Cheers!
Actually, it is, which is why we can currently build the frontend once on Linux (which doesn't have any performance issues) and then re-use it in the multi-arch builds. The issue is purely the performance of ARM builds: something about the tens of thousands of tiny JavaScript files kills ARM emulation under QEMU.
Yuuuuup, working on the borgmatic project, we've certainly run into this issue. There might actually be a way to limit which arch each build step runs on, using the approach described in https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/. I'll reopen this so we can have it on the radar to investigate.
Intriguing. So basically force GH to build the frontend stage using the host's native architecture, and then only use emulation for the Go/backend stage? That's kinda genius. I never knew that was an option.
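For anyone following along, a minimal sketch of the pattern being described might look like the following. It is illustrative only (stage names, base images and paths are assumptions, not Scrutiny's actual Dockerfile): the frontend stage is pinned to the builder's native architecture via the predefined `BUILDPLATFORM` build argument, so npm never runs under QEMU, while the final stage is still produced once per requested target platform.

```Dockerfile
# Hypothetical sketch, not the project's real Dockerfile.
# This stage always runs on the build host's own architecture.
FROM --platform=${BUILDPLATFORM} node:16 AS frontend
WORKDIR /app
COPY webapp/frontend/ ./
RUN npm ci && npm run build:prod -- --output-path=/app/dist

# No --platform override here: built once per requested target platform
# (under emulation when needed), but it only copies files, which is cheap.
FROM debian:bullseye-slim
COPY --from=frontend /app/dist /opt/scrutiny/web
```

With that split, the tens of thousands of frontend files are processed natively, and emulation is only paid for the cheap copy/runtime layers.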
@AnalogJ - I found a potential resolution to the ARM problem. Currently, you're building everything through a single `make binary-all` invocation. Unfortunately, I'm not great at golang, and I don't really want to muck around with your makefile, but my thinking is this: break the makefile out into separate steps, e.g.:

Step 1: Break out the makefile

```Dockerfile
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
<<EOF
set -x
# Binary Clean
go clean
# Binary-collector
go build -ldflags "$(LD_FLAGS)" -o $(COLLECTOR_BINARY_NAME) $(STATIC_TAGS) ./collector/cmd/collector-metrics/
ifneq ($(OS),Windows_NT)
chmod +x $(COLLECTOR_BINARY_NAME)
file $(COLLECTOR_BINARY_NAME) || true
ldd $(COLLECTOR_BINARY_NAME) || true
./$(COLLECTOR_BINARY_NAME) || true
endif
go build -ldflags "$(LD_FLAGS)" -o $(WEB_BINARY_NAME) $(STATIC_TAGS) ./webapp/backend/cmd/scrutiny/
ifneq ($(OS),Windows_NT)
chmod +x $(WEB_BINARY_NAME)
file $(WEB_BINARY_NAME) || true
ldd $(WEB_BINARY_NAME) || true
./$(WEB_BINARY_NAME) || true
endif
EOF
```

Step 2: Utilise the relevant xx tools (e.g. `xx-go`) for the Go build, as in the cross-compilation guide (a fuller sketch follows after this comment), e.g.:

```Dockerfile
RUN --mount=type=cache,target=/root/.cache/go-build \
    go build -o /out/example .
```

I can help with steps 2 and 3, but I would need assistance in breaking the makefile into a bash-esque script for each section of `binary-all`. This is what I have currently; it still needs work, though:

```Dockerfile
FROM --platform=${BUILDPLATFORM} tonistiigi/xx AS xx
FROM --platform=${BUILDPLATFORM} node AS front
ENV SCRUTINYVER='0.6.0'
ENV NPM_CONFIG_LOGLEVEL="warn"
ENV NG_CLI_ANALYTICS="false"
ENV NODE_OPTIONS="--openssl-legacy-provider"
ADD --link --keep-git-dir=true https://github.com/AnalogJ/scrutiny.git#v$SCRUTINYVER /scrutiny
WORKDIR /scrutiny
RUN --mount=type=cache,target=/root/.npm \
<<EOF
set -x
cd webapp/frontend && ./git.version.sh
cd /scrutiny/webapp/frontend
npm install -g @angular/[email protected]
mkdir -p /scrutiny/webapp/frontend/dist
npm ci
npm run build:prod -- --output-path=/scrutiny/webapp/frontend/dist
EOF
FROM golang:1.18-bullseye AS backendbuild
COPY --from=xx / /
ARG TARGETPLATFORM
WORKDIR /go/src/github.com/analogj/scrutiny
COPY . /go/src/github.com/analogj/scrutiny
# Cache the Go build and module caches between builds.
RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg/mod \
<<EOF
set -x
xx-info env
make binary-clean binary-all WEB_BINARY_NAME=scrutiny
EOF
```
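As an aside, here is a rough illustration of what the Step 2 (`xx-go`) approach from the linked cross-compilation guide could look like for the backend stage. It is a sketch under assumptions, not Scrutiny's actual build: it bypasses the Makefile and assumes both binaries build with plain `go build` and no cgo.

```Dockerfile
# Sketch only; relies on the xx stage declared above (FROM tonistiigi/xx AS xx).
FROM --platform=${BUILDPLATFORM} golang:1.18-bullseye AS backendbuild
COPY --from=xx / /
ARG TARGETPLATFORM
WORKDIR /go/src/github.com/analogj/scrutiny
COPY . .
# xx-go wraps "go" and derives GOOS/GOARCH/GOARM from TARGETPLATFORM,
# so the compiler itself runs natively on the build host (no QEMU).
RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg/mod \
    CGO_ENABLED=0 xx-go build -o /out/scrutiny ./webapp/backend/cmd/scrutiny/ && \
    CGO_ENABLED=0 xx-go build -o /out/scrutiny-collector-metrics ./collector/cmd/collector-metrics/ && \
    xx-verify /out/scrutiny && \
    xx-verify /out/scrutiny-collector-metrics
```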
I wouldn't worry much about building the Go code; Go has built-in support for cross-compilation and it's stupidly simple, as long as you're not building native code, and I don't think I have any native-code dependencies in Scrutiny. Forcing the frontend build step to use the native node arch would be the big win here.
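To illustrate the built-in route (and contrast with the xx sketch above), the same result can be had with nothing but environment variables; `TARGETOS` and `TARGETARCH` are predefined BuildKit build arguments. The binary paths come from the Makefile snippet earlier; everything else is an assumption:

```Dockerfile
# Minimal sketch of plain Go cross-compilation, no xx tools needed.
FROM --platform=${BUILDPLATFORM} golang:1.18-bullseye AS backendbuild
ARG TARGETOS
ARG TARGETARCH
WORKDIR /go/src/github.com/analogj/scrutiny
COPY . .
# The compiler runs natively and simply emits a binary for the requested
# target; CGO_ENABLED=0 is fine here since there are no native-code deps.
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
    go build -o /out/scrutiny ./webapp/backend/cmd/scrutiny/
```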
Oh, building the frontend on the native arch is relatively easy! Here's my mostly working version so far, with a few issues/notes still to work through:

```Dockerfile
# syntax = docker/dockerfile:labs
########################################################################################################################
# Omnibus Image
# NOTE: this image requires the `make binary-frontend` target to have been run before `docker build`. The `dist` directory must exist.
########################################################################################################################
########
FROM --platform=${BUILDPLATFORM} node AS frontendbuild
ENV NPM_CONFIG_LOGLEVEL='warn'
ENV NG_CLI_ANALYTICS='false'
ENV NODE_OPTIONS='--openssl-legacy-provider'
ENV SCRUTINYVER='0.6.0'
ADD --link --keep-git-dir=true https://github.com/AnalogJ/scrutiny.git#v$SCRUTINYVER /scrutiny
RUN <<EOF
set -xe
cd /scrutiny/webapp/frontend
./git.version.sh
cd /scrutiny/webapp/frontend
npm install -g @angular/[email protected]
mkdir -p /scrutiny/webapp/frontend/dist
npm ci
npm run build:prod -- --output-path=/scrutiny/webapp/frontend/dist
EOF
########
FROM golang:1.18-bullseye AS backendbuild
ARG TARGETPLATFORM
WORKDIR /go/src/github.com/analogj/scrutiny
COPY --link . /go/src/github.com/analogj/scrutiny
RUN <<EOF
set -xe
make binary-clean binary-all WEB_BINARY_NAME=scrutiny
EOF
########
FROM debian:bullseye-slim AS runtime
ARG TARGETARCH
ENV PATH="/opt/scrutiny/bin:${PATH}"
ENV INFLUXD_CONFIG_PATH=/opt/scrutiny/influxdb
ENV S6VER="1.21.8.0"
ENV INFLUXVER="2.2.0"
WORKDIR /opt/scrutiny
RUN <<EOF
set -xe
apt-get update
DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
ca-certificates \
cron \
curl \
smartmontools \
tzdata
case ${TARGETARCH} in
"amd64") S6_ARCH=amd64 ;;
"arm64") S6_ARCH=aarch64 ;;
esac
curl https://github.com/just-containers/s6-overlay/releases/download/v${S6VER}/s6-overlay-${S6_ARCH}.tar.gz -L -s --output /tmp/s6-overlay-${S6_ARCH}.tar.gz
tar xzf /tmp/s6-overlay-${S6_ARCH}.tar.gz -C /
rm -rf /tmp/s6-overlay-${S6_ARCH}.tar.gz
curl -L https://dl.influxdata.com/influxdb/releases/influxdb2-${INFLUXVER}-${TARGETARCH}.deb --output /tmp/influxdb2-${INFLUXVER}-${TARGETARCH}.deb
dpkg -i --force-all /tmp/influxdb2-${INFLUXVER}-${TARGETARCH}.deb
# Update smartctl database
savedAptMark="$(apt-mark showmanual)"
DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
gnupg
/usr/sbin/update-smart-drivedb # Update smartctl
apt-mark auto '.*' > /dev/null
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null
# Cleanup
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=true
rm -rf /var/lib/apt/lists/*
rm -f /etc/cron.daily/*
EOF
COPY /rootfs /
COPY --link --from=backendbuild --chmod=755 /go/src/github.com/analogj/scrutiny/scrutiny /opt/scrutiny/bin/scrutiny
COPY --link --from=backendbuild --chmod=755 /go/src/github.com/analogj/scrutiny/scrutiny-collector-metrics /opt/scrutiny/bin/scrutiny-collector-metrics
COPY --link --from=frontendbuild /scrutiny/webapp/frontend/dist /opt/scrutiny/web
RUN <<EOF
set -xe
mkdir -p /opt/scrutiny/web
mkdir -p /opt/scrutiny/config
chmod 0644 /etc/cron.d/scrutiny
chmod -R ugo+rwx /opt/scrutiny/config
EOF
EXPOSE 8080
CMD ["/init"]
```

It takes a couple of seconds over 6 minutes to build on my system using BuildKit/buildx. Once we get at least the git version script issue resolved, I'll raise a proper PR for this.
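For reference, a multi-platform build of a Dockerfile like this is typically driven with buildx roughly as follows; the builder name, tag and platform list are illustrative:

```sh
# One-off: create and select a builder that can emit multi-platform images.
docker buildx create --name scrutiny-builder --use

# Build for both architectures discussed here; add --push to publish the result.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t scrutiny:omnibus-test \
  .
```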
This all looks great, but can you fork Scrutiny and get it working with GH Actions? Thanks again for doing all this 🥳
https://github.com/modem7/scrutiny-docker

Both the CI and Docker build actions complete (albeit I only tested Omnibus, as that's all I was working on). The actions have been changed slightly (as I don't really want to push the images), but otherwise remain largely the same. Once we're happy with things, a new branch/fork would be made so that only the relevant parts are changed, obviously. Although, as I said, a few issues/suggestions should still be implemented where possible: they would allow for proper Docker caching without constantly busting the cache, and for quicker cross-compilation, since it wouldn't need to recompile everything every time, just when things change, significantly improving build speeds. A few things still need optimising (like adding cache mounts; see the sketch below), but otherwise I think we're quite a ways there, though by no means complete.
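On the caching point, two standard BuildKit techniques cover most of it: copy only the dependency manifests before `npm ci`, so source edits don't invalidate the dependency layer, and use `RUN --mount=type=cache` so package caches persist across builds without ending up in the image. A generic sketch follows (base image, stage name and paths are illustrative, and it copies from the build context rather than `ADD`ing a git tag as the drafts above do):

```Dockerfile
FROM --platform=${BUILDPLATFORM} node:16 AS frontendbuild
WORKDIR /scrutiny/webapp/frontend
# Only the manifests first: this layer is rebuilt only when dependencies change.
COPY webapp/frontend/package.json webapp/frontend/package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
# Now the rest of the sources; editing them no longer busts the npm ci layer.
COPY webapp/frontend/ ./
RUN npm run build:prod -- --output-path=/scrutiny/webapp/frontend/dist
```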
Catching up on some Scrutiny-related PRs and issues this weekend; hopefully I'll have time to merge these changes (and the badly needed Angular 14 upgrade).
Hey @modem7, I did rip out the smartctl database update, but we can make that change in another commit. Fantastic work, I definitely learned something new here :)
Glad it helped!!
Out of curiosity, why not build the frontend during the Dockerfile build step?
I noticed that you've got it in GH Actions, but not in the Dockerfile, so I was just curious!
Thanks!