pgwatch v3-beta. Please test it as much as possible!

This is the next generation of pgwatch2.

Quick Start

To fetch and run the latest demo Docker image, exposing

  • Grafana on port 3000,
  • the administrative web UI on port 8080,
  • the internal configuration and metrics database on port 5432:

docker run -d --name pw3 -p 5432:5432 -p 3000:3000 -p 8080:8080 -e PW_TESTDB=true cybertecpostgresql/pgwatch-demo

After a few minutes you can open the "Database Overview" dashboard in Grafana (http://localhost:3000) and start looking at metrics. To define your own dashboards, log in to Grafana as admin (admin/pgwatchadmin).
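
While you wait, you can watch the container come up by following its logs; this is the plain Docker CLI with the container name used above, nothing pgwatch-specific:

docker logs --follow pw3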

If you don't want the test database to be monitored, remove the PW_TESTDB environment variable when launching the container.
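
For example, a launch without the test database is simply the same command with identical image and port mappings and only the PW_TESTDB variable dropped:

docker run -d --name pw3 -p 5432:5432 -p 3000:3000 -p 8080:8080 cybertecpostgresql/pgwatch-demo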

Development and production use

For production and long-term installations, the cybertecpostgresql/pgwatch Docker image should be used. For the fastest development and deployment experience, Docker Compose files are provided:

git clone https://github.com/cybertec-postgresql/pgwatch.git && cd pgwatch

docker compose -f ./docker/docker-compose.yml up --detach
 ✔ Network pgwatch_default       Created
 ✔ Container pgwatch-postgres-1  Healthy
 ✔ Container pgwatch-pgwatch-1   Started
 ✔ Container pgwatch-grafana-1   Started

These commands will build and start the services listed in the compose file:

  • configuration and metric database;
  • pgwatch monitoring agent with WebUI;
  • Grafana with dashboards.
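
To verify that all of these services are up and healthy, a standard Compose status check is enough (plain Docker Compose, no pgwatch-specific options):

docker compose -f ./docker/docker-compose.yml ps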

Monitor Database

After startup, you can open the monitoring dashboard and start looking at metrics.

To add a test database under monitoring, you can use the built-in WebUI, or simply execute from the command line:

docker compose -f ./docker/docker-compose.yml up add-test-db --force-recreate
[+] Running 2/0
 ✔ Container pgwatch-postgres-1     Running                                                                       0.0s
 ✔ Container pgwatch-add-test-db-1  Created                                                                       0.0s
Attaching to pgwatch-add-test-db-1
pgwatch-add-test-db-1  | BEGIN
...
pgwatch-add-test-db-1  | GRANT
pgwatch-add-test-db-1  | COMMENT
pgwatch-add-test-db-1  | INSERT 0 1
pgwatch-add-test-db-1 exited with code 0
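
One way to confirm that the agent has picked up the new database is to glance at its logs; this uses only standard Compose commands, and the exact log wording may differ:

docker compose -f ./docker/docker-compose.yml logs pgwatch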

Produce Workload

To emulate a workload on the added test database, execute:

docker compose -f ./docker/docker-compose.yml up pgbench
[+] Running 2/2
 ✔ Container pgwatch-postgres-1  Running                                                                          0.0s
 ✔ Container pgwatch-pgbench-1   Created                                                                          0.1s
Attaching to pgwatch-pgbench-1
pgwatch-pgbench-1  | dropping old tables...
pgwatch-pgbench-1  | NOTICE:  table "pgbench_accounts" does not exist, skipping
pgwatch-pgbench-1  | NOTICE:  table "pgbench_branches" does not exist, skipping
pgwatch-pgbench-1  | NOTICE:  table "pgbench_history" does not exist, skipping
pgwatch-pgbench-1  | NOTICE:  table "pgbench_tellers" does not exist, skipping
pgwatch-pgbench-1  | creating tables...
pgwatch-pgbench-1  | generating data (client-side)...
pgwatch-pgbench-1  | 100000 of 5000000 tuples (2%) done (elapsed 0.11 s, remaining 5.17 s)
pgwatch-pgbench-1  | 200000 of 5000000 tuples (4%) done (elapsed 0.25 s, remaining 6.06 s)
...
pgwatch-pgbench-1  | 5000000 of 5000000 tuples (100%) done (elapsed 16.28 s, remaining 0.00 s)
pgwatch-pgbench-1  | vacuuming...
pgwatch-pgbench-1  | creating primary keys...
pgwatch-pgbench-1  | done in 42.29 s (drop tables 0.03 s, create tables 0.04 s, client-side generate 18.23 s, vacuum 1.29 s, primary keys 22.70 s).
pgwatch-pgbench-1  | pgbench (15.4)
pgwatch-pgbench-1  | starting vacuum...
pgwatch-pgbench-1  | end.
pgwatch-pgbench-1  | progress: 5.0 s, 642.2 tps, lat 15.407 ms stddev 11.794, 0 failed
pgwatch-pgbench-1  | progress: 10.0 s, 509.6 tps, lat 19.541 ms stddev 9.493, 0 failed
...
pgwatch-pgbench-1  | progress: 185.0 s, 325.3 tps, lat 16.825 ms stddev 8.330, 0 failed
pgwatch-pgbench-1  |
pgwatch-pgbench-1  |
pgwatch-pgbench-1  | transaction type: builtin: TPC-B (sort of)
pgwatch-pgbench-1  | scaling factor: 50
pgwatch-pgbench-1  | query mode: simple
pgwatch-pgbench-1  | number of clients: 10
pgwatch-pgbench-1  | number of threads: 2
pgwatch-pgbench-1  | maximum number of tries: 1
pgwatch-pgbench-1  | number of transactions per client: 10000
pgwatch-pgbench-1  | number of transactions actually processed: 100000/100000
pgwatch-pgbench-1  | number of failed transactions: 0 (0.000%)
pgwatch-pgbench-1  | latency average = 18.152 ms
pgwatch-pgbench-1  | latency stddev = 13.732 ms
pgwatch-pgbench-1  | initial connection time = 25.085 ms
pgwatch-pgbench-1  | tps = 534.261013 (without initial connection time)
pgwatch-pgbench-1  | dropping old tables...
pgwatch-pgbench-1  | done in 0.45 s (drop tables 0.45 s).
pgwatch-pgbench-1 exited with code 0
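
The workload can be repeated whenever you want fresh activity on the dashboards; --force-recreate makes Compose start a new pgbench run even if the previous container is still around (standard Compose flags only):

docker compose -f ./docker/docker-compose.yml up pgbench --force-recreate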

Inspect Database

Important

pgAdmin uses port 80. If you want it to use another port, change it in the docker-compose.yml file.

To look at what is inside the pgwatch database, you can spin up pgAdmin 4:

docker compose -f ./docker/docker-compose.yml up --detach pgadmin

Go to localhost in your favorite browser and log in as [email protected] with password admin. The pgwatch server should already be added to the Servers group.

Development

If you change the source code and want to restart the agent, it's usually enough to run:

docker compose -f ./docker/docker-compose.yml up pgwatch --build --force-recreate --detach

The command above will rebuild the pgwatch agent from source and relaunch the container.
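
Equivalently, you can split this into a build step and a restart step, which is handy when you only want to check that the image still builds (standard Compose commands, nothing pgwatch-specific):

docker compose -f ./docker/docker-compose.yml build pgwatch
docker compose -f ./docker/docker-compose.yml up pgwatch --force-recreate --detach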

Logs

If you are running the containers in detached mode, you can still follow the logs:

docker compose -f ./docker/docker-compose.yml logs --follow

Or you can follow the logs of a particular service:

docker compose -f ./docker/docker-compose.yml logs pgwatch --follow
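
If the full history is too noisy, the standard --tail option of Compose limits the output to the most recent lines (plain Docker Compose, nothing pgwatch-specific):

docker compose -f ./docker/docker-compose.yml logs pgwatch --tail 100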

Contributing

Feedback, suggestions, problem reports, and pull requests are very much appreciated.