This repository contains a reference configuration for deploying Teamscale using docker-compose. This setup uses multiple Teamscale instances for staging and a reverse proxy for SSL termination. Focus is put on switching the production instance with zero downtime.
Although this repository will give you a good starting point in setting up Teamscale, we strongly recommend reading the following documentation first:
As a general note, it can be useful to start with a single instance setup first. Addressing infrastructure and configuration problems is typically much easier in a single instance setup. Please refer to the respective section in this guide.
The whole deployment setup can be executed locally as follows, given `docker` and `docker-compose` are installed:
- Download the contents of this repository as a zip file and extract it.
- Start the containers using `sudo ./start.sh`. This starts all containers in detached mode (`docker-compose up -d`), reloads the nginx config (`./nginx-reload.sh`) and then follows the logs (`docker-compose logs --follow`).
  Note: You can detach from the console at any time using `Ctrl+C`. Teamscale will keep running.
- The Teamscale servers should be available via NGINX at the following URLs:
  - https://teamscale.localhost (the production server)
  - https://teamscale-staging.localhost (the staging server)
Remark: The web services are SSL-encrypted using a self-signed certificate (located in the folder `nginx`) for demo purposes. It is strongly recommended to exchange this certificate for an appropriate one.
You may ask why you need to deploy two Teamscale instances for updates at all. The reason is that Teamscale does not perform database upgrades when updating from one feature version to another (e.g. v7.7 to v7.8). Instead, it performs a fresh analysis of source code and other artifacts to account for newly added or modified analyses. Ideally, this reanalysis is performed on a second instance to avoid unavailability of analysis results. Patch releases (e.g. v7.7.1 to v7.7.2), however, are drop-in and thus do not require setting up a staging instance. For more information, please consult the corresponding how-to on updating Teamscale.
Instead of hard-coding two instances, e.g. named `production` and `staging`, this guide follows a Teamscale release-based deployment setup.
It configures Teamscale services according to releases and has the advantage of documenting the version in paths and service names.
Thus, it may prevent you from "mixing up" instances and allows you to switch the production server without downtime.
It comes, however, with the increased effort of creating folders and copying files for each release.
This example describes two Teamscale instances:
- `v7.7` is the production instance: in a real-world scenario, this instance would already be filled with analysis data
- `v7.8` is the staging instance: in a real-world scenario, this instance would reanalyze the data of `v7.7` and go live after the analysis has finished
The servers should be available at:
- https://teamscale.localhost (7.7, the production server)
- https://teamscale-next.localhost (7.8, the staging server)
The data and configuration files for these instances are stored in folders named according to the deployed releases, e.g. `v7.7` and `v7.8`.
The services described in `docker-compose.yml` also use the same naming scheme, e.g. `v7.7` and `v7.8`.
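For illustration, the two services might look like this in `docker-compose.yml` (a minimal sketch; image tags and volume paths are placeholders to adapt to your releases):

```yaml
v7.7:
  image: 'cqse/teamscale:7.7.latest'
  restart: unless-stopped
  working_dir: /var/teamscale
  volumes:
    - ./v7.7/:/var/teamscale/

v7.8:
  image: 'cqse/teamscale:7.8.latest'
  restart: unless-stopped
  working_dir: /var/teamscale
  volumes:
    - ./v7.8/:/var/teamscale/
```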
Note: `v7.7` and `v7.8` are only used as examples; please replace them according to your needs.
Especially when migrating from a previous setup, such as a first independent instance, copy all config files and the `storage` folder to e.g. `./v7.7`.
- Ensure a valid Teamscale license file is placed in the config directories, e.g. `./v7.7/config/teamscale.license`.
- Adjust deploy-time configuration such as the number of workers (`engine.workers` in `teamscale.properties`) and memory (`TEAMSCALE_MEMORY` in `docker-compose.yml`), as sketched below.
- Change the `server_name`s in `./nginx/teamscale.conf` to the production domains and replace the self-signed certificates in `./nginx` with valid ones matching these domains.
- Enable automatic backups (starting with Teamscale v7.8, backups are enabled by default).
- Make sure Docker and the services are automatically restarted when you restart your server.
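A minimal sketch of the memory setting (the value shown is illustrative; pick it to match your hardware and check the Teamscale documentation for recommended sizes and the expected format):

```yaml
v7.7:
  environment:
    # Illustrative value; see the Teamscale Docker image documentation for the expected format
    TEAMSCALE_MEMORY: 8G
```

The number of workers would then be set via a line like `engine.workers=4` in `./v7.7/config/teamscale.properties` (again an illustrative value).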
Note: For initial setup you may want to just run a single instance.
Our documentation describes how to perform Patch Version Updates and Feature Version Updates using this setup.
The staging instance (in this example `v7.8`) can be disabled in `docker-compose.yml` for the initial setup, e.g. by renaming the service from `v7.8` to `x-v7.8`, as shown below.
Prefixing a service with `x-` hides it from docker-compose.
If you want to use this setup to run exactly one instance, this service can be removed completely.
In addition, you should delete the staging server from the `teamscale.conf` nginx configuration.
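A sketch of what the renamed entry could look like in `docker-compose.yml` (the service body stays as it was; only the name changes):

```yaml
# Renamed from "v7.8": the "x-" prefix makes docker-compose ignore this entry,
# so the staging instance is not started during the initial setup.
x-v7.8:
  image: 'cqse/teamscale:7.8.latest'
  restart: unless-stopped
  working_dir: /var/teamscale
  volumes:
    - ./v7.8/:/var/teamscale/
```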
A different deployment pattern is the so-called blue-green deployment strategy: It contains two Teamscale instances, called "blue" and "green", that alternate between being the production server and the staging environment.
The setup is similar to the release-based naming but relieves you from creating/copying new services and configuration files for each release.
It contains two `docker-compose` services named `blue` and `green` with data and configuration directories named accordingly:
```yaml
blue:
  image: 'cqse/teamscale:7.7.latest'
  restart: unless-stopped
  working_dir: /var/teamscale
  volumes:
    - ./blue/:/var/teamscale/

green:
  image: 'cqse/teamscale:7.8.latest'
  restart: unless-stopped
  working_dir: /var/teamscale
  volumes:
    - ./green/:/var/teamscale/
```
The nginx configuration `teamscale.conf` will look as follows if `blue` is the production server:
```nginx
# The production server (live)
server {
    # Binds the production server to the "blue" docker container
    set $teamscale_prod blue;

    server_name teamscale.localhost;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    location / {
        proxy_pass http://$teamscale_prod:8080;
    }
}

# The staging server (next)
server {
    # Binds the staging server to the "green" docker container
    set $teamscale_next green;

    server_name teamscale-next.localhost;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    location / {
        proxy_pass http://$teamscale_next:8080;
    }
}
```
Preparation of a new deployment follows the same principle as described above in the release-based deployment.
Once you are satisfied with the changes in the staging instance, just edit `./nginx/teamscale.conf` and switch the `blue` and `green` values of the variables `$teamscale_prod` and `$teamscale_next`.
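After the switch, only the two `set` directives in the configuration above differ, for example:

```nginx
# Production now points at the former staging container ...
set $teamscale_prod green;
# ... and staging at the former production container.
set $teamscale_next blue;
```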
After saving the file, simply execute `sudo ./nginx-reload.sh` (or `sudo ./start.sh`, as it reloads the configuration and ensures all containers are started).
You should now be able to access the previous staging instance using https://teamscale.localhost.
The downside of this deployment strategy is that you need to be careful not to "mix up" the colors blue and green when making configuration changes or planning a new deployment. This is especially the case when you need to purge the storage directory while setting up a fresh instance.
Set the `instance.name` property to the release number (e.g. `v7.7`) or the color (`blue` or `green`), respectively, in each instance's `teamscale.properties` config file or in `docker-compose.yml`.
This allows you to easily differentiate the environments in the Web UI.
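A minimal sketch for the release-based naming, assuming the property is set directly in the instance's config file:

```properties
# ./v7.7/config/teamscale.properties (excerpt)
instance.name=v7.7
```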
You can make use of YAML anchors to extract common configuration that is shared between services:
```yaml
x-teamscale-common: &teamscale-common
  restart: unless-stopped
  working_dir: /var/teamscale
  environment:
    JAVA_OPTS: "-Dlog4j2.formatMsgNoLookups=true"

v7.7:
  <<: *teamscale-common
```
You can also serve Teamscale using subpaths instead of subdomains. Please follow the guide outlined in the documentation.
Important: Currently there is no way to switch instances without downtime, as you need to change the `server.urlprefix` configuration property in `teamscale.properties` for both instances.
We are working on a way to resolve this issue for the next releases.
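For illustration, such a setting might look like the following; the property name is taken from the note above, but the exact value format should be checked against the linked guide:

```properties
# teamscale.properties (sketch, assuming the instance is served under a /teamscale subpath)
server.urlprefix=teamscale
```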
In order to reduce complexity and provide meaningful defaults, the default nginx configuration shipped within the container is used.
The directory `./nginx` is mounted as `/etc/nginx/conf.d/` in the container, and all config files matching `*.conf` are included within the `http` configuration block.
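In `docker-compose.yml` this corresponds to a volume mount on the nginx service, roughly like the following sketch (the image tag is illustrative; check the actual service definition in this repository):

```yaml
nginx:
  image: 'nginx:stable'  # illustrative tag
  volumes:
    # All *.conf files in ./nginx are picked up by the default configuration
    - ./nginx/:/etc/nginx/conf.d/
```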
If you want to have full control over the nginx configuration, please follow the official guide.
In particular, change the mount from `/etc/nginx/conf.d/` to `/etc/nginx/` and provide an `nginx.conf` in the `nginx` directory.
If you need to access the HTTP interface of the container directly, e.g. for debugging reasons, you need to explicitly map the port:
```yaml
v7.7:
  # ...
  ports:
    - '8080:8080'
```
For Teamscale problems, the Teamscale logs will be stored in the folder `./logs` of the respective instance.
In addition, the console output is available via:
```sh
sudo docker-compose logs <service-name>
```
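For example, for the production instance from this guide:

```sh
sudo docker-compose logs v7.7
```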
For nginx problems, consult the nginx logs:
```sh
sudo docker-compose logs nginx
```
Please restart nginx by running `sudo docker-compose restart nginx`.
Nginx noticed that the Teamscale instance was down (e.g. due to a restart) and now refuses to reconnect to it.
After restarting, it should be reachable again.
The provided nginx configuration forbids pages from being displayed in a frame to prevent clickjacking.
You can learn more about this here.
If you still want to embed Teamscale in Jira, Azure DevOps or another Teamscale instance, the line `add_header X-Frame-Options "DENY";` in `nginx/common.conf` has to be commented out, as shown below.
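That is, the corresponding line in `nginx/common.conf` would end up looking like this:

```nginx
# Commented out to allow embedding Teamscale in a frame (trades clickjacking protection for embeddability)
# add_header X-Frame-Options "DENY";
```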