
Uptime Kuma unresponsive with High CPU after clearing events on a monitor #3248

Closed
bignay2000 opened this issue Jun 11, 2023 · 11 comments
Labels
bug Something isn't working

Comments

@bignay2000

bignay2000 commented Jun 11, 2023

⚠️ Please verify that this bug has NOT been raised before.

  • I checked and didn't find a similar issue

🛡️ Security Policy

Description

Clearing the events on an HTTP(s) monitor hangs Uptime Kuma for over a minute. I get a blank page showing only the Uptime Kuma banner and logo.

👟 Reproduction steps

1. Run the louislam/uptime-kuma:1.21.3-alpine container on a Raspberry Pi 4
2. Create an HTTP(s) monitor
3. Run the monitor for more than 180 days
4. Log into Uptime Kuma
5. Click on the monitor
6. Click Clear Data
7. Click Events
8. Refresh the webpage

👀 Expected behavior

Clearing Data and then viewing Events should not hang the application.

I think there is a long-running SQL query without a limit that takes up all the CPU.
Maybe delete 100 records at a time with a 1-second sleep between calls? Or limit the SQL transaction to a single CPU core.

sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?'
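
A batched delete along those lines might look roughly like the sketch below. This is only an illustration with knex, not Uptime Kuma's actual code; the `heartbeat` table and `monitor_id` column names are assumptions.

```js
// Hypothetical sketch only -- not Uptime Kuma's implementation.
// Deletes a monitor's history in small batches, pausing between
// batches so other queries can acquire a pool connection.
const BATCH_SIZE = 100;

async function clearEventsInBatches(knex, monitorID) {
    let deleted;
    do {
        // SQLite has no DELETE ... LIMIT by default, so select a batch
        // of ids first and delete by id. Table and column names
        // ("heartbeat", "monitor_id") are assumptions, not the real schema.
        deleted = await knex("heartbeat")
            .whereIn(
                "id",
                knex("heartbeat")
                    .select("id")
                    .where("monitor_id", monitorID)
                    .limit(BATCH_SIZE)
            )
            .del();

        // Yield for a second so monitor checks and settings reads
        // are not starved while the purge runs.
        await new Promise((resolve) => setTimeout(resolve, 1000));
    } while (deleted > 0);
}
```

Chunking keeps each transaction short, so settings reads like the one above can still grab a pool connection between batches.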

😓 Actual Behavior

The Raspberry Pi's one-minute load average jumps to 4.5, and errors are logged.

Output of `uptime`: load average: 4.07, 4.57, 2.80

[Screenshot from 2023-06-11, 3:26 PM]

🐻 Uptime-Kuma Version

louislam/uptime-kuma:1.21.3-alpine

💻 Operating System and Arch

Docker on Raspbian

🌐 Browser

Google Chrome 114

🐋 Docker Version

Docker version 24.0.2, build cb74dfc

🟩 NodeJS Version

v16.13.1

📝 Relevant log output

2023-06-11T15:11:21-04:00 [AUTH] INFO: Login by token. IP=172.16.16.7
2023-06-11T15:11:21-04:00 [AUTH] INFO: Username from JWT: hiveadmin
2023-06-11T15:13:21-04:00 [AUTH] ERROR: Invalid token. IP=172.16.16.7
2023-06-11T15:13:52-04:00 [MONITOR] WARN: Monitor #1 'esxi HTTPS': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:52-04:00 [MONITOR] WARN: Monitor #2 'esxi SSH': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: port
2023-06-11T15:13:52-04:00 [MONITOR] WARN: Monitor #7 'influxdb': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:52-04:00 [MONITOR] WARN: Monitor #16 'netdata.pi': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:52-04:00 [MONITOR] WARN: Monitor #14 'home': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:53-04:00 [MONITOR] WARN: Monitor #10 'hive VPN': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: dns
2023-06-11T15:13:54-04:00 [MONITOR] WARN: Monitor #3 'jenkins': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:54-04:00 [MONITOR] WARN: Monitor #13 'checkmk': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:55-04:00 [MONITOR] WARN: Monitor #6 'oldgitlab': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:55-04:00 [MONITOR] WARN: Monitor #5 'gitlab': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:55-04:00 [MONITOR] WARN: Monitor #15 'netdata.hivevm': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:55-04:00 [MONITOR] WARN: Monitor #4 'nexus': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:56-04:00 [MONITOR] WARN: Monitor #12 'glance.hivevm': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:57-04:00 [MONITOR] WARN: Monitor #11 'medpay.test': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2023-06-11T15:13:57-04:00 [MONITOR] WARN: Monitor #9 'hivetechnologies.net': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 60 seconds | Type: http
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:312:26)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:569:22)
    at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:555:22)
    at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:590:19)
    at async Function.get (/app/server/settings.js:54:21)
    at async exports.setting (/app/server/util-server.js:438:12)
    at async /app/server/server.js:188:13 {
  sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?',
  bindings: [ 'trustProxy', 1 ]
}
    at process.<anonymous> (/app/server/server.js:1804:13)
    at process.emit (node:events:390:28)
    at emit (node:internal/process/promises:136:22)
    at processPromiseRejections (node:internal/process/promises:242:25)
    at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:312:26)
    at runNextTicks (node:internal/process/task_queues:61:5)
    at listOnTimeout (node:internal/timers:526:9)
    at processTimers (node:internal/timers:500:7)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:569:22)
    at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:555:22)
    at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:590:19)
    at async Function.get (/app/server/settings.js:54:21) {
  sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?',
  bindings: [ 'primaryBaseURL', 1 ]
}
    at process.<anonymous> (/app/server/server.js:1804:13)
    at process.emit (node:events:390:28)
    at emit (node:internal/process/promises:136:22)
    at processPromiseRejections (node:internal/process/promises:242:25)
    at processTicksAndRejections (node:internal/process/task_queues:97:32)
    at runNextTicks (node:internal/process/task_queues:65:3)
    at listOnTimeout (node:internal/timers:526:9)
    at processTimers (node:internal/timers:500:7)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:312:26)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:569:22)
    at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:555:22)
    at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:590:19)
    at async Function.get (/app/server/settings.js:54:21)
    at async exports.setting (/app/server/util-server.js:438:12)
    at async Namespace.<anonymous> (/app/server/server.js:1525:13) {
  sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?',
  bindings: [ 'disableAuth', 1 ]
}
    at process.<anonymous> (/app/server/server.js:1804:13)
    at process.emit (node:events:390:28)
    at emit (node:internal/process/promises:136:22)
    at processPromiseRejections (node:internal/process/promises:242:25)
    at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:312:26)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:569:22)
    at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:555:22)
    at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:590:19)
    at async Function.get (/app/server/settings.js:54:21)
    at async UptimeKumaServer.getClientIP (/app/server/uptime-kuma-server.js:242:13)
    at async Socket.<anonymous> (/app/server/server.js:279:30) {
  sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?',
  bindings: [ 'trustProxy', 1 ]
}
    at process.<anonymous> (/app/server/server.js:1804:13)
    at process.emit (node:events:390:28)
    at emit (node:internal/process/promises:136:22)
    at processPromiseRejections (node:internal/process/promises:242:25)
    at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
[The remaining KnexTimeoutError traces repeat the same three queries (primaryBaseURL, disableAuth, trustProxy) verbatim and are omitted here.]
2023-06-11T15:15:18-04:00 [AUTH] INFO: Login by token. IP=172.16.16.7
2023-06-11T15:15:18-04:00 [AUTH] INFO: Username from JWT: hiveadmin
2023-06-11T15:15:19-04:00 [AUTH] INFO: Successfully logged in user hiveadmin. IP=172.16.16.7
bignay2000 added the bug label on Jun 11, 2023
@CommanderStorm
Collaborator

What kind of storage do you use on your Pi?

@CommanderStorm
Collaborator

This may be a duplicate of #2346

@bignay2000
Author

bignay2000 commented Jun 11, 2023

> What kind of storage do you use on your Pi?

It runs off a micro SD card. The Pi is a great device to pair with Uptime Kuma for an easy monitoring solution...

@bignay2000
Author

bignay2000 commented Jun 11, 2023

> This may be a duplicate of #2346

Are there any workarounds to throttle the SQL queries?

@bignay2000
Author

bignay2000 commented Jun 11, 2023

I only need 4 days' worth of metrics (just enough to cover a long weekend).

Going to change the history from 180 days to 30 days.

[Screenshot from 2023-06-11, 7:19 PM]

@bignay2000
Author

bignay2000 commented Jun 11, 2023

Updated the history from 180 days to 30 days, then ran Shrink Database and Clear all Statistics. The database is now under 1 MB (down from 600 MB). Deleting events is responsive now (but I am only deleting a few minutes' worth instead of 180 days' worth).

[Screenshot from 2023-06-11, 7:42 PM]
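
(For reference, "Shrink Database" presumably maps to SQLite's VACUUM — an assumption on my part — which rewrites the database file and reclaims the space freed by deleted rows. Roughly:)

```js
// Hedged guess -- assuming "Shrink Database" issues SQLite's VACUUM,
// which rewrites the database file and reclaims space from deleted rows.
async function shrinkDatabase(knex) {
    await knex.raw("VACUUM");
}
```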

@bignay2000
Author

bignay2000 commented Jun 12, 2023

I wonder why the latest container ships with an older Alpine and Node.js.

louislam/uptime-kuma:1.21.3-alpine is Alpine Linux v3.12 (released on 2020-05-29) with Node v16.13.1 installed.

Node 16.20.0 is the latest v16, released on 2023-03-29.

Uptime Kuma supports Node v18 (per the README on GitHub).

@bignay2000
Author

Moved from louislam/uptime-kuma:1.21.3-alpine to louislam/uptime-kuma:1.21.3-debian

Node is now 16.20.0 and the OS is more current.

@CommanderStorm
Collaborator

> Runs off a micro SD card

Running IO-heavy workloads off of slow micro SD cards is not recommended. The delete operation could remove hundreds of MB of data, so I would expect performance penalties.

> Are there any workarounds to throttle the SQL queries?

Over in #2346, Nelson noted that PRAGMA synchronous = NORMAL or PRAGMA synchronous = OFF might be worth a try, but no user has reported back results yet.
Given that I don't have access to a Pi, could you try this on your Uptime Kuma installation and see if it solves the performance problems?
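
If someone wants to experiment, the PRAGMA can be applied whenever a new SQLite connection is opened. A minimal sketch with knex follows; the database path and the rest of the config are assumptions, not Uptime Kuma's actual setup:

```js
// Hypothetical sketch -- not Uptime Kuma's actual configuration.
const knex = require("knex")({
    client: "sqlite3",
    connection: { filename: "./data/kuma.db" }, // path is an assumption
    useNullAsDefault: true,
    pool: {
        // Runs once for each new connection in the pool.
        afterCreate: (conn, done) => {
            // NORMAL fsyncs less often than the default FULL;
            // OFF is faster still but risks corruption on power loss.
            conn.run("PRAGMA synchronous = NORMAL", done);
        },
    },
});
```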

@CommanderStorm
Collaborator

CommanderStorm commented Jun 13, 2023

@bignay2000
I think this is a duplicate of #2346. Could we close this issue and continue the investigation over there?

@bignay2000
Author

Running on a Raspberry Pi 4. Reduced the history from 180 days to 30 days. Performance is reasonable considering the Pi runs from an SD card.
