Substra is an open source federated learning (FL) software. It provides a flexible Python interface and a web application to run federated learning training at scale. This specific repository contains the web application, also called the frontend.
If you need support, please either raise an issue on GitHub or ask on Slack.
Substra warmly welcomes any contribution. Feel free to fork the repo and create a pull request.
- Make sure `substra-frontend.org-1.com` and `substra-frontend.org-2.com` are pointing to the cluster's IP in your `/etc/hosts` (see the example after this list):
  a. If you use minikube, `minikube ip` will give you the cluster's IP
  b. If you use k3s, the cluster IP is `127.0.0.1`
- Execute `skaffold [run|dev]`. You can use the `dev` skaffold profile (via `-p dev`) to use the `dev` docker target, which serves files using `vite` instead of `nginx` and benefits from hot reloading inside the cluster.
- Access the frontend at `http://substra-frontend.org-1.com`
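For reference, a sketch of the matching `/etc/hosts` setup (the IP below is an example minikube cluster IP; use the output of `minikube ip`, or `127.0.0.1` for k3s):

```sh
# Append both hostnames to /etc/hosts, pointing at the cluster's IP
echo "192.168.49.2 substra-frontend.org-1.com substra-frontend.org-2.com" | sudo tee -a /etc/hosts
```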
- Make sure `substra-frontend.org-1.com` and `substra-frontend.org-2.com` are pointing to `127.0.0.1` in your `/etc/hosts`
- Make sure you're using node 18.16.0 (`nvm install 18.16.0` and `nvm use 18.16.0`)
- Install dependencies: `npm install --dev`
- Run `npm run dev`
- Access the frontend at `http://substra-frontend.org-1.com:3000`
Note: the backend is expected to be served at `http://substra-backend.org-1.com` on the HTTP port (80). In case you are using a development backend served on another URL or port, you can set it using the `API_URL` env var, e.g. `API_URL=http://127.0.0.1:8000 npm run dev`.
Alternatively, you can run it inside a container by using the `dev` docker target:
docker build -f docker/substra-frontend/Dockerfile --target dev -t substra-frontend .
docker run -it --rm -p 3000:3000 -v ${PWD}/src:/workspace/src substra-frontend
Note: use `-e API_URL=http://127.0.0.1:8000` to specify another backend URL.
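For instance, combining the run command above with a custom backend URL:

```sh
# Dev container with hot-reloaded sources, pointing at a backend on 127.0.0.1:8000
docker run -it --rm -p 3000:3000 -e API_URL=http://127.0.0.1:8000 \
  -v ${PWD}/src:/workspace/src substra-frontend
```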
Many developments done in the frontend go hand in hand with API changes coming from either the backend or the orchestrator. These changes aren't always available in the latest release and sometimes aren't even merged into the `main` branches. In order to use them, you'll need to:

- Check out the branches / commits in your local clones of the repos
- In each repo, run `skaffold run` (with your usual options, for example `-p single-org` for `substra-backend`)
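As a sketch of this workflow (repository locations, branch names and skaffold options below are placeholders, to be adapted to your setup):

```sh
# Check out the unreleased changes and redeploy each component
cd ../substra-backend && git checkout <branch-or-commit> && skaffold run -p single-org
cd ../orchestrator && git checkout <branch-or-commit> && skaffold run
```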
To dump the orchestrator DB into `orchestrator.sql`:
# Example with an 'org-1' organization and an orchestrator's postgres pod with name 'orchestrator-org-1-postgresql-0'
kubectl exec -n org-1 -i orchestrator-org-1-postgresql-0 -- pg_dump --clean --no-owner postgresql://postgres:postgres@localhost/orchestrator > orchestrator.sql
To dump the backend DB of org-1 into `backend.sql`:
# Example with an `org-1` organization and a backend's postgres pod with name `backend-org-1-postgresql-0`
kubectl exec -n org-1 -i backend-org-1-postgresql-0 -- pg_dump --clean --no-owner postgresql://postgres:postgres@localhost/substra > backend.sql
To restore the orchestrator DB from `orchestrator.sql`:
# Example with an `org-1` organization and an orchestrator's postgres pod with name `orchestrator-org-1-postgresql-0`
cat orchestrator.sql | kubectl exec -n org-1 -i orchestrator-org-1-postgresql-0 -- psql postgresql://postgres:postgres@localhost/orchestrator
To restore the backend DB of org-1 from `backend.sql`:
# Example with an `org-1` organization and a backend's postgres pod with name `backend-org-1-postgresql-0`
cat backend.sql | kubectl exec -n org-1 -i backend-org-1-postgresql-0 -- psql postgresql://postgres:postgres@localhost/substra
Run `npm run prepare` to install git hooks that will format code using Prettier.
Since we're using Emotion for CSS-in-JS, you should install the vscode-styled-components plugin. It provides IntelliSense and syntax highlighting for styled components.
When logging in, the backend will set 3 cookies:

- `refresh` (httpOnly)
- `signature` (httpOnly)
- `header.payload`

In order to fetch data, you then have to send back both `refresh` and `signature` as cookies (automatically handled by the browser) and `header.payload` in an `Authorization` header with the `JWT` prefix.

E.g. `Authorization: JWT eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNjIxNjY5MDQ3LCJqdGkiOiJiYzAzNjU1NTJkNzc0ZmJjYTBmYmUwOTQ5Y2QwMGVhZiIsInVzZXJfaWQiOjF9`
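For illustration, an equivalent request issued by hand could look like this (the endpoint path, cookie values and token are placeholders; in the app, the cookies are attached automatically by the browser):

```sh
# Hypothetical authenticated API call using the cookies set at login
curl 'http://substra-backend.org-1.com/me/' \
  --cookie 'refresh=<refresh>; signature=<signature>' \
  --header 'Authorization: JWT <header.payload>'
```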
The CI will build all commits on the `main` branch as "unstable" builds, with the version from `package.json`.
Builds are a Docker image + a Helm chart.
Tagged commits will be made into full releases (not marked as "unstable" and with a GitHub release). Typically, `git tag 1.2.3 && git push origin 1.2.3` should be enough.
The changelog is managed with towncrier, a Python tool.
To add a new entry in the changelog, add a file in the `changes` folder. The file name should have the following structure: `<unique_id>.<change_type>`.
The `unique_id` is a unique identifier; we currently use the PR number.

The `change_type` can be one of the following types: `added`, `changed`, `removed`, `fixed`.
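For example, a fragment for a hypothetical PR #123 that fixes a bug could be created like this (the description is a placeholder):

```sh
# Creates the changelog fragment changes/123.fixed
echo "Fix the pagination of the tasks table." > changes/123.fixed
```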
To generate the changelog (for example during a release), you need to have `towncrier` installed. You can either install it in a virtual env, or use `pipx` (please refer to the pipx documentation for installation instructions).
$ pipx install towncrier
Then use the following command:
towncrier build --version=<x.y.z>
You can use the `--draft` option to see what would be generated without actually writing to the changelog (and without removing the fragments).
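For example, to preview the entries for a hypothetical 1.2.3 release:

```sh
# Print what would be generated, without writing to the changelog or removing fragments
towncrier build --version=1.2.3 --draft
```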
They are written using Jest in files ending in `.test.ts`. These files live next to the module / component they test.

To run these tests, use `npx vite-dev` or our alias `npm run test:unit`.
They are written using Cypress. All E2E tests are under `e2e-tests/cypress/integration/` and end in `.spec.js`.
To install Cypress, move to the `e2e-tests` folder and run `npm install`.
Then, still in this folder:

- run these tests using `npx cypress run` or `npm run test:e2e`
- run these tests in dev mode using `npx cypress open`
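Put together, a headless run from the repository root could look like this:

```sh
# Install the E2E dependencies and run the whole suite headlessly
cd e2e-tests && npm install && npm run test:e2e
```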
In order to use Microsoft Clarity, you need to have a Clarity ID that you can then use as follows.

Locally:

`MICROSOFT_CLARITY_ID=xxxxxxxxxx npm run dev`

In production:

You'll have to define the `microsoftClarity.id` value.
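Assuming this value belongs to the frontend Helm chart mentioned in the releases section, it could for example be set at deployment time (release and chart references below are placeholders):

```sh
# Hypothetical Helm install overriding the Clarity ID
helm upgrade --install <release-name> <chart> --set microsoftClarity.id=xxxxxxxxxx
```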