cli: demo experience #1096
Comments
Adds a simple healthcheck on the `divviup-api` service in `compose.yaml` that tickles the health endpoint. Also adds a check to the Docker CI job that runs Docker compose and waits 120 seconds for it to become healthy. Part of #1096
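For reference, a healthcheck of that shape could look like the sketch below. The container port, the `/health` path, and the use of `wget` are assumptions for illustration, not necessarily what `compose.yaml` actually uses.

```yaml
services:
  divviup-api:
    # image, ports, environment, etc. elided
    healthcheck:
      # Assumed: the API listens on 8080 inside the container, exposes /health,
      # and the image ships wget (or busybox). Adjust to match the real service.
      test: ["CMD", "wget", "--quiet", "--tries=1", "--output-document=/dev/null", "http://localhost:8080/health"]
      interval: 5s
      timeout: 3s
      retries: 10
```

On the CI side, something like `docker compose up --detach --wait --wait-timeout 120` would return success only once every service with a healthcheck reports healthy, or fail after 120 seconds.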
We may want to clarify the stability policy of divviup-api, now that we would effectively be shipping it to end-users. I think we still want to be empowered to take incompatible CLI and configuration changes to divviup-api (the server) and the docker-compose file without having to care about compatibility. Maybe the policy ought to be "always download the docker-compose.yml file corresponding to the Janus release"?
It makes me somewhat uncomfortable how we rely on the
Its purpose is to remove the authentication requirement and allow
Not sure if there's any action to take here, really. If anything we can break
Long term I was hoping to replace the "no authentication required" part of
To enable the demo workflow, users will need to pair two aggregators with the control plane. Further, one of them must be first-party, which can only be done using a CLI built with feature `admin`. But the one we distribute via GitHub releases isn't, and I argue it shouldn't be, which means that demo users can't appropriately pair aggregators. We could provide a `curl` or `wget` command that'd do it, but as it turns out, we already bundle the `divviup` CLI in the `divviup_api` container, so let's just use that, since the API URL and token for the aggregators are static!

I'm thinking we should have the demo user pair the other aggregator themselves with `divviup` to simulate what they might have to do in a realistic use case, if they bring their own helper.

This commit:

- adds feature `admin` to builds of `divviup_api_integration_test` in `docker-release.yaml` so that the bundled CLI will have `--first-party` and `--shared`
- adds a service `pair_aggregator` to `compose.yaml` that uses the `/divviup` entrypoint to pair `janus_1_aggregator` as a first-party, shared aggregator
- adds a stanza for `pair_aggregator` to `compose.dev.override.yaml`
- makes `compose.dev.override.yaml` build all images with features `integration_test,admin`

Part of #1096
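A rough sketch of what such a service could look like. Aside from `--first-party` and `--shared`, everything here — the image reference, URLs, token values, and the exact `divviup` subcommand and flag names — is a placeholder for illustration, not the real contents of `compose.yaml`.

```yaml
services:
  pair_aggregator:
    # Reuses the image that already bundles the CLI at /divviup.
    image: divviup_api_integration_test   # placeholder image reference
    depends_on:
      divviup_api:
        condition: service_healthy
    entrypoint:
      # Hypothetical invocation: subcommand, URLs, and tokens are illustrative only.
      - /divviup
      - --url=http://divviup_api:8080
      - --token=placeholder-admin-token
      - aggregator
      - create
      - --name=janus_1_aggregator
      - --api-url=http://janus_1_aggregator:8080/aggregator-api
      - --bearer-token=placeholder-bearer-token
      - --first-party
      - --shared
```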
We want the `divviup` CLI packaged into the `divviup_api_integration_test` container to be built with feature `admin` for reasons discussed in #1099. In this commit:

- adds feature `admin` to builds of `divviup_api_integration_test` in `docker-release.yaml` so that the bundled CLI will have `--first-party` and `--shared`
- adds a service `pair_aggregator` to `compose.dev.override.yaml` that uses the `/divviup` entrypoint to pair `janus_1_aggregator` as a first-party, shared aggregator
- factors the `build` stanza out of services in `compose.dev.override.yaml` to apply features to them uniformly
- tunes the health checks on `divviup-api` and the aggregators so they will succeed sooner and speed up startup

Part of #1096
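The "succeed sooner" tuning mostly comes down to the healthcheck timing knobs. A minimal sketch with made-up numbers (the real values may differ):

```yaml
services:
  divviup-api:
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--output-document=/dev/null", "http://localhost:8080/health"]
      # A short interval plus generous retries means the service is reported
      # healthy almost as soon as it is actually up, instead of waiting out
      # Docker's default 30s check interval.
      interval: 1s
      timeout: 1s
      retries: 60
```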
In #1100, we started building `divviup_api_integration_test` with feature `admin`. This breaks `docker-release.yml`, because the Rust features are used to construct GH Actions cache keys, and commas are illegal there (at the very least, `docker buildx build --cache-from` doesn't like them). Construct the cache key by joining with `-` to work around this. Additionally, this _should_ have broken `docker.yml`, but I forgot to add the `admin` feature there, which this PR corrects. Part of #1096
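A hypothetical workflow fragment showing the idea; the job layout, image name, and `RUST_FEATURES` handling are illustrative rather than copied from `docker-release.yml`.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      RUST_FEATURES: integration_test,admin
    steps:
      # checkout and buildx setup steps elided
      - name: Compute cache key
        # Join the comma-separated feature list with '-': a comma inside a buildx
        # cache spec (type=gha,scope=...) would be parsed as a field separator.
        run: echo "FEATURES_KEY=${RUST_FEATURES//,/-}" >> "$GITHUB_ENV"
      - name: Build image
        run: |
          docker buildx build \
            --cache-from "type=gha,scope=divviup_api_integration_test-${FEATURES_KEY}" \
            --cache-to "type=gha,scope=divviup_api_integration_test-${FEATURES_KEY},mode=max" \
            --build-arg "RUST_FEATURES=${RUST_FEATURES}" \
            .
```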
To enable the demo workflow, users will need to pair two aggregators with the control plane. Further, one of them must be first-party, which can only be done using a CLI built with feature `admin`. But the one we distribute via GitHub releases isn't, and I argue it shouldn't be, which means that demo users can't appropriately pair aggregators. We could provide a `curl` or `wget` command that'd do it, but as it turns out, we already bundle the `divviup` CLI in the `divviup_api` container, so let's just use that, since the API URL and token for the aggregators are static!

I'm thinking we should have the demo user pair the other aggregator themselves with `divviup` to simulate what they might have to do in a realistic use case, if they bring their own helper.

Building on previous changes, this commit moves the `pair_aggregator` service into the main `compose.yaml`, leaving `build` and `develop` stanzas for it in `compose.dev.override.yaml`. Additionally, the `divviup_api_integration_test` image version is bumped to one built with feature `admin`, which should get the tests working.

Part of #1096
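Roughly, the override-file side of that split could look like the sketch below; the build context, build args, and watch rule are placeholders, since the real stanzas live in `compose.dev.override.yaml`.

```yaml
# compose.dev.override.yaml (sketch)
services:
  pair_aggregator:
    build:
      context: .              # placeholder context/Dockerfile
      args:
        RUST_FEATURES: integration_test,admin
    develop:
      watch:
        - path: cli/src       # placeholder path
          action: rebuild
```

The plain `compose.yaml` then only needs an `image:` reference, so demo users never need a source checkout, while developers who pass the override file get local builds and `compose watch` rebuilds.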
- Autopair the second aggregator in `compose.yaml`
- Rewrite the demo script to guide users through bringing up the `docker compose` environment and then doing aggregations against it

Part of #1096
There's a race between the various DB migrate jobs (well, services,
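One way to serialize those migration services is with `depends_on` conditions; this is a sketch only, with placeholder service names, and may not be what was ultimately done.

```yaml
services:
  divviup_api_migrate:
    depends_on:
      postgres:
        condition: service_healthy                 # wait for the DB to accept connections
  divviup_api:
    depends_on:
      divviup_api_migrate:
        condition: service_completed_successfully  # only start once migrations finish
```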
We want demo users to be able to get started with nothing but a `divviup` binary, a working Docker Compose install and `compose.yaml`. `divviup_api_vite` relies on having a local checkout of the static assets to serve. We now use the `divviup_api_integration_test` image to serve static assets from service `static_assets`.

The assets in question are already present in the image being run in service `divviup_api`, but I did it this way for the following reasons:

- `divviup-api` routes requests to the static asset handler based on hostname, making it difficult to serve alongside the API. I tried creating some aliases ([1]) in Docker Compose, but those names are only visible inside the compose network, meaning you have to set the `Host` header from outside the netns, which is a hassle I don't want to inflict on demo users.
- Having a distinct service for assets is convenient because we can make it depend on `pair_aggregators`. If that ran last, it could cause `docker compose up --wait` to fail (see comment in `compose.yaml`).

[1]: https://docs.docker.com/compose/compose-file/05-services/#aliases

Part of #1096
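In compose terms the new service could look roughly like this; the image reference, port mapping, and whatever command or environment makes the image serve only assets are placeholders.

```yaml
services:
  static_assets:
    image: divviup_api_integration_test   # placeholder; same image divviup_api runs
    ports:
      - "8081:8080"                        # placeholder host/container ports
    depends_on:
      # Parking this dependency here keeps pair_aggregator from being the last
      # thing to finish, which could otherwise make `docker compose up --wait` fail.
      pair_aggregator:
        condition: service_completed_successfully
```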
Addresses some rough edges in compose.yaml:

- Adds a healthcheck on the postgres service to make startup of dependent schema migrator services more reliable.
- `pair_aggregator` would re-run every time `docker compose up` is run, and fail after the first time. It now drops a file `/tmp/done` to indicate it has already run to completion.

Part of #1096
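A sketch of the postgres healthcheck, assuming the stock postgres image and default superuser; the intervals are made up.

```yaml
services:
  postgres:
    healthcheck:
      # pg_isready ships with the official postgres image; -U postgres is assumed.
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 2s
      retries: 30
```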
We need parentheses around the right-hand side of the `||` so that everything gets skipped if `/tmp/done` exists. Also tests in CI that bringing the deployment up, down and up again works. Part of #1096
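The grouping in question, sketched with the pairing invocation abbreviated to `pair-command`; the real command and marker path live in `compose.yaml`.

```yaml
services:
  pair_aggregator:
    entrypoint: ["/bin/sh", "-c"]
    command:
      # Without the parentheses the shell parses this as
      # `(test ... || pair-command) && touch /tmp/done`, so the commands after
      # the guard still run on every `docker compose up`, even when the marker
      # file already exists.
      - test -f /tmp/done || (pair-command && touch /tmp/done)
```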
You may know about this but I am getting
Also, I get an empty account list when I try to go through the README instructions.
I installed podman version 4.9.3 on Linux, and it's using version 2.27.1 of the docker-compose plugin as an external compose provider. With this setup,
@divergentdave Yeah, I installed the latest, 5.1.1, and I'm still encountering the same thing. I installed Docker Desktop and it is fine. I will just proceed from there.
We concluded that managing collector credentials in a config file isn't a good idea right now:
We want it to be easy for prospective users of Divvi Up to give it a whirl and do some simple aggregations. To enable that, we want to make it easy to use the included `compose.yaml` to spin up a control plane, web app and a pair of aggregators, and then use `divviup dap-client` to upload and collect.

Missing pieces are:

- ship `compose.yaml` in divviup-api releases so that users don't have to check out the project (Update Docker Compose yaml to releases #1098)
- rewrite the demo script around `docker compose` (Rewrite demo script #1123)
- make sure `docker compose up` works (Add healthcheck for divviup-api to compose.yaml #1097)
- run the demo script in CI (`docker.yml`: run demo script in CI #1126)
- remove the `compose.yaml` dependence on `divviup_api_vite` so you can run `compose.yaml` without checking out sources (Only use Vite in compose.dev.yaml #1124)
- address `compose.yaml` rough edges (compose.yaml rough edges #1125)
- make collector credential handling less awkward (cli: collector credential sensitive information to stdout #1033, cli: put collector credential in a dotfile #1073)
- host the documentation (currently in `cli/README.md`) somewhere like docs.divviup.org