Bittensor Auto-validator initiative.
See the issues for more information on what is being worked on.
WIP; click details for development setup info.
- docker with compose plugin
- python 3.11
- pdm
- nox
./setup-dev.sh
docker compose up -d
cd app/src
pdm run manage.py wait_for_database --timeout 10
pdm run manage.py migrate
pdm run manage.py runserver
pdm run manage.py run_bot
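The `wait_for_database` step blocks until the database accepts connections (up to `--timeout` seconds). As a rough sketch of that retry pattern in plain shell (generic illustration, not the project's actual code):

```shell
# Poll a readiness check until it succeeds or the timeout elapses.
wait_for() {
    local timeout=$1; shift
    local deadline=$(( $(date +%s) + timeout ))
    until "$@"; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            return 1    # gave up: dependency never became ready
        fi
        sleep 1
    done
}

# In practice the check would be e.g. a database ping; `true` is a stand-in.
wait_for 5 true && echo "dependency is ready"
```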
This sets up "deployment by pushing to git storage on remote", so that:

- `git push origin ...` just pushes code to GitHub (or other storage) without any consequences;
- `git push production master` pushes code to a remote server running the app and triggers a git hook to redeploy the application.
Local .git ------------> Origin .git
\
------> Production .git (redeploy on push)
Use `ssh-keygen` to generate a key pair for the server, then grant it read-only access to the repository in the "Deploy keys" section (`ssh -A` agent forwarding is easier to use, but not safe).
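For example, a dedicated key pair could be generated like this (the temp-dir path is only for illustration; in practice keep the key under `~/.ssh` on the server):

```shell
# Generate a dedicated ed25519 deploy key with no passphrase.
keydir=$(mktemp -d)    # stand-in for ~/.ssh on the server
ssh-keygen -t ed25519 -N "" -C "auto-validator deploy" -f "$keydir/deploy_key"
# Register the contents of the .pub file as a read-only deploy key in the repo settings.
cat "$keydir/deploy_key.pub"
```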
# remote server
mkdir -p ~/repos
cd ~/repos
git init --bare --initial-branch=master auto-validator.git
mkdir -p ~/domains/auto-validator
# locally
git remote add production root@<server>:~/repos/auto-validator.git
git push production master
# remote server
cd ~/repos/auto-validator.git
cat <<'EOT' > hooks/post-receive
#!/bin/bash
unset GIT_INDEX_FILE
export ROOT=/root
export REPO=auto-validator
while read oldrev newrev ref
do
    if [[ $ref =~ .*/master$ ]]; then
        export GIT_DIR="$ROOT/repos/$REPO.git/"
        export GIT_WORK_TREE="$ROOT/domains/$REPO/"
        git checkout -f master
        cd "$GIT_WORK_TREE"
        ./deploy.sh
    else
        echo "Doing nothing: only the master branch may be deployed on this server."
    fi
done
EOT
chmod +x hooks/post-receive
./hooks/post-receive
cd ~/domains/auto-validator
sudo bin/prepare-os.sh
./setup-prod.sh
# adjust the `.env` file
mkdir letsencrypt
./letsencrypt_setup.sh
./deploy.sh
Only the master branch is used to redeploy the application.
To deploy another branch, force-push it to the remote's master:

git push --force production local-branch-to-deploy:master
To push a new version of the application to AWS, push to a branch named `deploy-$(ENVIRONMENT_NAME)`.
Typical values for `$(ENVIRONMENT_NAME)` are `prod` and `staging`.
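Concretely, deploying to the prod environment means pushing to a branch named `deploy-prod`. A side-effect-free sketch (the actual `git push` is commented out so nothing is pushed):

```shell
# Derive the deploy branch name from the target environment.
ENVIRONMENT_NAME=prod
deploy_branch="deploy-${ENVIRONMENT_NAME}"
# git push origin "HEAD:${deploy_branch}"    # uncomment to actually trigger the deploy
echo "$deploy_branch"    # prints "deploy-prod"
```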
For this to work, GitHub Actions needs to be provided with credentials for an account that has the following policies enabled:
- AutoScalingFullAccess
- AmazonEC2ContainerRegistryFullAccess
- AmazonS3FullAccess
See `.github/workflows/cd.yml` to find out the secret names.
For more details see README_AWS.md
- see Terraform template in `<project>/devops/vultr_tf/core/`
- see scripts for interacting with Vultr API in `<project>/devops/vultr_scripts/`
  - note these scripts need `vultr-cli` installed

For more details see README_vultr.md.
Click details for backup setup & recovery information.
Add to crontab:
# crontab -e
30 0 * * * cd ~/domains/auto-validator && ./bin/backup-db.sh > ~/backup.log 2>&1
Set `BACKUP_LOCAL_ROTATE_KEEP_LAST` to keep only a specific number of the most recent backups in the local `.backups` directory.
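As an illustration of the rotation behaviour (a sketch against a throwaway directory, not the project's actual backup script):

```shell
# Simulate a .backups directory with five timestamp-ordered dump files.
backups_dir=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$backups_dir/backup-$i.sql.gz"; done

# Keep only the 2 newest backups (what BACKUP_LOCAL_ROTATE_KEEP_LAST=2 would mean).
keep_last=2
cd "$backups_dir"
ls -1 | sort -r | tail -n +"$((keep_last + 1))" | xargs -r rm --
ls -1    # backup-4.sql.gz and backup-5.sql.gz remain
```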
Backups are put in the `.backups` directory locally; additionally, they can be stored offsite in the following ways:
Backblaze
Set in `.env` file:
BACKUP_B2_BUCKET_NAME
BACKUP_B2_KEY_ID
BACKUP_B2_KEY_SECRET
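A hypothetical `.env` fragment with placeholder values (substitute your own bucket name and application key):

```shell
BACKUP_B2_BUCKET_NAME=my-backups-bucket
BACKUP_B2_KEY_ID=0012345abcdef
BACKUP_B2_KEY_SECRET=change-me
```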
Email

Set in `.env` file:
EMAIL_HOST
EMAIL_PORT
EMAIL_HOST_USER
EMAIL_HOST_PASSWORD
EMAIL_TARGET
Restoring from a backup:

- Follow the instructions above to set up a new production environment
- Restore the database using bin/restore-db.sh
- See if everything works
- Set up backups on the new machine
- Make sure everything in `.env` is filled in: error reporting integration, email accounts, etc.
The skeleton of this project was generated using cookiecutter-rt-django.
Use `cruft update` to update the project to the latest version of the template, with all current bugfixes and features.