Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? #2346
Please use a normal VPS with the official Docker image.
@louislam
Logs:
These are connections to two separate Docker servers.
I'm running on a Raspberry Pi 2 Model B, which I think may be a bit underpowered for Uptime-Kuma?
I have the same issue and it's not due to an underpowered machine.
$ docker version
Client:
Version: 20.10.17-ce
API version: 1.41
Go version: go1.17.13
Git commit: a89b84221c85
Built: Wed Jun 29 12:00:00 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.17-ce
API version: 1.41 (minimum version 1.12)
Go version: go1.17.13
Git commit: a89b84221c85
Built: Wed Jun 29 12:00:00 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.6.6
GitCommit: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-ga916309fff0f
docker-init:
Version: 0.1.5_catatonit
GitCommit:
I get the same - a restart of the container fixes it. Is there any way of monitoring for this - some kind of API that can be polled and cause a restart if no monitors are found?
I am getting the same running on Kubernetes. Found this while looking in the knex repo: knex/knex#2820
This is now getting more serious for me - Uptime Kuma is non-responsive more often than it isn't now. I have to restart the Uptime Kuma Docker container daily, and even then it doesn't always become responsive again. I've just had to restart the container about 10 times before it kicked back into life, and I've set up a monitor to watch for when it stops responding (having it call a Home Assistant webhook as a form of reverse heartbeat). Anyone got any ideas - is there a way to resolve this?
From a very brief skim of Google, some people are pointing towards SQLite being a limitation. @louislam is support for MySQL still not likely to be considered? I know it has been mentioned before that Kuma isn't a production-ready monitoring tool, but in reality it's not far off. Bar the above issues, we have found it very useful.
I can't remember which issue it was, but there was a suggestion about splitting the config and results into two separate databases, which would make sense. For the results database, a time-series one would be an appropriate choice; then we could just stick to SQLite for config.
It's in my 2.0 roadmap.
@louislam is there an estimated timescale for 2.0?
For reference, I currently seem to have ameliorated this issue by changing the connection pool settings to:
uptime-kuma/server/database.js Lines 165 to 169 in ce82ad1
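The commenter's actual pool values are not reproduced in this thread, so the figures below are purely hypothetical. They do, however, use knex's documented pool options (backed by tarn.js), which is the knob being discussed:

```javascript
// Hypothetical example only: the commenter's real values are not shown above.
// The keys are knex's documented pool options (tarn.js).
const poolConfig = {
  min: 1,                           // keep at least one connection open
  max: 10,                          // allow more concurrent queries before queuing
  acquireTimeoutMillis: 120 * 1000, // wait longer before raising KnexTimeoutError
  idleTimeoutMillis: 120 * 1000,    // recycle idle connections less aggressively
  propagateCreateError: false,      // retry connection creation instead of failing fast
};

// Would be passed to knex roughly like:
// knex({ client: "sqlite3", connection: { filename: "..." }, pool: poolConfig })
```

Raising `acquireTimeoutMillis` only hides slow queries behind a longer wait; it buys time but does not fix the underlying I/O contention discussed later in the thread.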
Currently we are using […]. I was also thinking about a split database, where the configs are stored with […]
I had the same issue. As soon as I try to delete a specific monitor (which has a lot of events associated with it), I get:
Then the DB is corrupted and I have to (force) stop the container and restore an old DB to get Uptime Kuma working. Deleting other monitors worked. I think it is a timeout somewhere related to a big SQL query. I have no performance issues (it is a big VM). docker version:
Uptime Kuma is on the latest version. Thanks. EDIT: after stopping the container, waiting for it to stop (a very long time), removing it, and starting it back up (a long wait before it becomes healthy and available again), it worked.
Happening to me as well; editing a monitor kills Uptime Kuma.
Is it perhaps possible to set the desired setting by means of an environment variable? Maybe for some deployments NORMAL or OFF is good enough.
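The NORMAL/OFF values suggest the setting being discussed is SQLite's `PRAGMA synchronous`. A sketch of how an environment-variable override could look; the variable name and default are invented for illustration, and Uptime Kuma does not necessarily support this:

```javascript
// Sketch: choose an SQLite PRAGMA synchronous value from an environment
// variable. UPTIME_KUMA_SQLITE_SYNCHRONOUS is an invented name, not a
// variable Uptime Kuma actually reads.
const ALLOWED = new Set(["OFF", "NORMAL", "FULL", "EXTRA"]); // valid PRAGMA synchronous modes

function sqliteSynchronous(env = process.env) {
  const raw = (env.UPTIME_KUMA_SQLITE_SYNCHRONOUS || "FULL").toUpperCase();
  if (!ALLOWED.has(raw)) {
    throw new Error(`Invalid synchronous mode: ${raw}`);
  }
  return `PRAGMA synchronous = ${raw}`; // statement to run at startup
}
```

Note that `OFF` trades durability for speed: a crash mid-write can corrupt recent data, which matters for a monitoring history database.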
I have this error from time to time, causing a burst of downtime notifications that are quickly resolved. It would be nice to get rid of those false positives.
I am having this error quite often lately; is there any resolution on the horizon?
I have discovered the problem, for me at least. I am running Uptime Kuma inside Docker on a NAS. When the disk activity was high I would get this error message. Once I addressed the continuous high disk read/write activity, the messages stayed away. Hopefully someone else can benefit from this response.
Same error, and it happens every night at a specific time (3:17 AM).
Same. Not sure what Uptime Kuma does at that time, but multiple monitors go offline with this error at 3:14 for me, then come back online about 4 minutes later. Maybe it's some DB cleanup process that hammers the DB and causes it, I suppose.
Same here, also at night, around 2 AM - 4 AM.
Users are strongly encouraged to update to […]. The server runs the task clearing monitor history data beyond the defined period at 03:14 am each day (server time). If you are still having issues, pressing the "Settings" -> "Monitor History" -> "Shrink Database" button should also help in the short term (the description previously written is not entirely accurate). Finally, disk performance is important: if your server has poor I/O performance and/or you are running a large number of monitors, the chance of this error occurring will increase.
Awesome! Will try it out.
@toineenzo Did upgrading to 1.23 work? I'm still facing this error after the upgrade. |
I got it one or two times. At least not daily.
I constantly get this. Just happened again on version 1.23.2 today. I'm running bare metal on a quad-core Xeon with the latest Docker, 16 GB RAM, and 12x RAID6. I don't think this system is underpowered. Any new ideas?
What fixed this for me was Settings -> Monitor History -> Clear all Statistics, then changing "Keep monitor history" to 7 days. This is likely not a CPU power issue but an issue of having too much data in SQLite, which makes queries take longer (and ultimately time out). I believe the old default for "Keep monitor history" was 0 (forever); that default should be changed to something like 7 or 14. I probably had a year's worth of data, which is also pretty useless, but since I cleared everything I haven't had any issues.
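A rough back-of-envelope calculation shows why retention matters: heartbeat rows grow with the number of monitors, the check frequency, and the retention window. The numbers below are illustrative only, not measurements from Uptime Kuma's actual schema:

```javascript
// Back-of-envelope estimate of heartbeat rows accumulated in the database.
// Purely illustrative; actual schema and row counts differ.
function estimateRows(monitors, intervalSeconds, retentionDays) {
  const checksPerDay = (24 * 60 * 60) / intervalSeconds; // checks per monitor per day
  return Math.round(monitors * checksPerDay * retentionDays);
}

// 50 monitors checked every 60 seconds:
const yearRows = estimateRows(50, 60, 365); // ~26.3 million rows for a year
const weekRows = estimateRows(50, 60, 7);   // ~504 thousand rows for a week
```

A 50x reduction in table size makes the nightly cleanup and per-check queries correspondingly cheaper, which is consistent with the reports above that the error clusters around the 03:14 history-pruning job.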
Forgot to answer, but yes! The latest updates fixed it. Now I rarely get this error, and only when my NAS CPU/RAM usage is really high. So it seems like it's fixed. At least I don't get spammed on my Telegram webhook with this error.
Having the same with a powerful enough server (8 cores, ARM-based) and 16 GB RAM, but it seems Kuma needs a proper database. It would be great to have Postgres and/or Redis to fix these limitations. I see ping shows ~10 seconds; I can't say that's accurate, it looks like there is a queue.
@mmospanenko the current architecture will not use more than one core, but CPU is not the limit of the current architecture in any sense; I/O throughput and latency are. See #4500 for ways to mitigate this until the v2 release.
🛡️ Security Policy
Description
Logged in this evening to find no monitors and the following error displayed:
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
Full startup log below. Is this a known issue?
Matt
👟 Reproduction steps
👀 Expected behavior
Login is normal and view monitors/status pages etc.
😓 Actual Behavior
🐻 Uptime-Kuma Version
1.18.5
💻 Operating System and Arch
louislam/uptime-kuma Container Image
🌐 Browser
107.0.5304.110
🐋 Docker Version
Amazon Fargate LATEST(1.4.0)
🟩 NodeJS Version
No response
📝 Relevant log output