Add Docker support #245

Merged (8 commits, Jun 12, 2023)
54 changes: 54 additions & 0 deletions docker/001-config.sh
@@ -0,0 +1,54 @@
#!/bin/sh
set -ex

echo "IPFS PATH: ${IPFS_PATH}"

# We back up the old config file
cp ${IPFS_PATH}/config ${IPFS_PATH}/config_bak

# We inject the ResourceMgr JSON object into the IPFS config
# See: https://github.com/ipfs/kubo/blob/master/docs/config.md#swarmresourcemgr
# We also inject the S3 plugin datastore
# Important: make sure you fill out the optional parameters $CLUSTER_S3_BUCKET, $CLUSTER_AWS_KEY, $CLUSTER_AWS_SECRET in the CloudFormation parameters
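
# Added sketch (not part of the original script): fail fast if any of the
# variables used below is missing, instead of writing a broken config.
# Adjust or drop this block if some of them are truly optional in your setup.
: "${AWS_REGION:?AWS_REGION must be set}"
: "${CLUSTER_S3_BUCKET:?CLUSTER_S3_BUCKET must be set}"
: "${CLUSTER_PEERNAME:?CLUSTER_PEERNAME must be set}"
: "${CLUSTER_AWS_KEY:?CLUSTER_AWS_KEY must be set}"
: "${CLUSTER_AWS_SECRET:?CLUSTER_AWS_SECRET must be set}"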
cat ${IPFS_PATH}/config_bak | \
jq ".Swarm.ResourceMgr.Limits.System = {
Memory: 1073741824,
FD: 1024,
Conns: 1024,
ConnsInbound: 256,
ConnsOutbound: 1024,
Streams: 16384,
StreamsInbound: 4096,
StreamsOutbound: 16384
}" | \
jq ".Datastore.Spec = {
mounts: [
{
child: {
type: \"s3ds\",
region: \"${AWS_REGION}\",
bucket: \"${CLUSTER_S3_BUCKET}\",
rootDirectory: \"${CLUSTER_PEERNAME}\",
accessKey: \"${CLUSTER_AWS_KEY}\",
secretKey: \"${CLUSTER_AWS_SECRET}\"
},
mountpoint: \"/blocks\",
prefix: \"s3.datastore\",
type: \"measure\"
},
{
child: {
compression: \"none\",
path: \"datastore\",
type: \"levelds\"
},
mountpoint: \"/\",
prefix: \"leveldb.datastore\",
type: \"measure\"
}
],
type: \"mount\"
}" > ${IPFS_PATH}/config

# We override the ${IPFS_PATH}/datastore_spec file
echo "{\"mounts\":[{\"bucket\":\"${CLUSTER_S3_BUCKET}\",\"mountpoint\":\"/blocks\",\"region\":\"${AWS_REGION}\",\"rootDirectory\":\"${CLUSTER_PEERNAME}\"},{\"mountpoint\":\"/\",\"path\":\"datastore\",\"type\":\"levelds\"}],\"type\":\"mount\"}" > ${IPFS_PATH}/datastore_spec
44 changes: 44 additions & 0 deletions docker/Dockerfile
@@ -0,0 +1,44 @@
FROM golang:1.19.1-buster AS builder

WORKDIR /

# Install jq for JSON manipulation in the config file
RUN apt update && apt install -y jq

# Kubo build process
# See details: https://github.com/ipfs/go-ds-s3
ENV GO111MODULE on
ENV GOPROXY direct

# We clone Kubo source code.
RUN git clone https://github.com/ipfs/kubo

# Move to kubo folder
WORKDIR /kubo

# Install the plugin, register it in the preload list, and build ipfs.
# The build / tidy / build sequence follows the go-ds-s3 docs referenced above:
# `go mod tidy` brings go.mod and go.sum back in sync after the plugin is preloaded.
RUN go get github.com/ipfs/go-ds-s3/plugin@latest
RUN echo "\ns3ds github.com/ipfs/go-ds-s3/plugin 0" >> plugin/loader/preload_list
RUN make build
RUN go mod tidy
RUN make build
RUN make install

# The actual IPFS image we will use
FROM ipfs/kubo:latest

# We copy the new binaries we built in the 'builder' stage (--from=builder)
COPY --from=builder /kubo/cmd/ipfs/ipfs /usr/local/bin/ipfs
COPY --from=builder /kubo/bin/container_daemon /usr/local/bin/start_ipfs
COPY --from=builder /kubo/bin/container_init_run /usr/local/bin/container_init_run

# Fix permissions on start_ipfs
RUN chmod 0755 /usr/local/bin/start_ipfs

# We copy jq so we can manipulate the JSON config file easily in the init.d scripts
COPY --from=builder /usr/bin/jq /usr/local/bin/jq
COPY --from=builder /usr/lib/*-linux-*/libjq.so.1 /usr/lib/
COPY --from=builder /usr/lib/*-linux-*/libonig.so.5 /usr/lib/

# init.d script IPFS runs before starting the daemon. Used to manipulate the IPFS config file.
COPY 001-config.sh /container-init.d/001-config.sh
51 changes: 51 additions & 0 deletions docker/README.md
@@ -0,0 +1,51 @@
# Dockerfile and script to integrate the go-ds-s3 plugin

The Dockerfile builds Kubo (IPFS) and the go-ds-s3 plugin together, using the same Go version for both,
and copies the resulting binaries into the final Docker image.

We also copy the `001-config.sh` shell script to manipulate the IPFS config file before startup.

## Config changes

The init script injects the following config in the `Swarm.ResourceMgr.Limits.System` object:

```
{
Memory: 1073741824,
FD: 1024,
Conns: 1024,
ConnsInbound: 256,
ConnsOutbound: 1024,
Streams: 16384,
StreamsInbound: 4096,
StreamsOutbound: 16384
}
```
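
For reference, `Memory` is in bytes, so `1073741824` is 1 GiB.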

The script also injects the correct config into the `Datastore.Spec` object to set up the plugin, and
updates the `datastore_spec` file to reflect the new datastore configuration.
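
With the placeholder values from the `.env` example below filled in, the `datastore_spec` file the script writes looks like this (the script emits it on a single line; pretty-printed here for readability):

```
{
  "mounts": [
    {
      "bucket": "<my_bucket>",
      "mountpoint": "/blocks",
      "region": "<my_region>",
      "rootDirectory": "<node_name>"
    },
    {
      "mountpoint": "/",
      "path": "datastore",
      "type": "levelds"
    }
  ],
  "type": "mount"
}
```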

Edit the `001-config.sh` script to fit your use case.

## Building the image

`docker build -t my-ipfs-image .`
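
Run the build from the `docker/` directory: the Dockerfile copies `001-config.sh` from the build context, so the script must be present there.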

## Running a container

```
export ipfs_staging=/local/data/ipfs_staging
export ipfs_data=/local/data/ipfs_data
docker run -d -v $ipfs_staging:/export -v $ipfs_data:/data/ipfs -p 4001:4001 -p 4001:4001/udp -p 127.0.0.1:8080:8080 -p 127.0.0.1:5001:5001 --env-file .env my-ipfs-image
```
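
Once the container is up, a quick smoke test (assuming the port mappings above; `<container_id>` is whatever `docker run` printed) is:

```
# Follow the daemon logs; the output of the init.d config script appears here
docker logs -f <container_id>

# Query the node identity over the local API (Kubo's HTTP API expects POST)
curl -X POST http://127.0.0.1:5001/api/v0/id
```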

Note that we pass a `.env` file that contains the following environment variables:

```
AWS_REGION=<my_region>
CLUSTER_S3_BUCKET=<my_bucket>
CLUSTER_PEERNAME=<node_name>
CLUSTER_AWS_KEY=<aws_key>
CLUSTER_AWS_SECRET=<aws_secret>
```