
uptime kuma 100% usage of cpu #4094

Open
syamsullivan opened this issue Nov 24, 2023 · 16 comments
Labels
area:core issues describing changes to the core of uptime kuma feature-request Request for new features to be added question Further information is requested

Comments

@syamsullivan

⚠️ Please verify that this bug has NOT been raised before.

  • I checked and didn't find a similar issue

🛡️ Security Policy

📝 Describe your problem

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
70b8a69b1ae8 uptime-kuma-saas 103.76% 133.9MiB / 7.637GiB 1.71% 991kB / 8.08MB 41kB / 129MB 12

I have an issue where Uptime Kuma uses only a single core at 100%, which affects dashboard performance.

Any suggestions?

I'm using Docker:
Docker version 24.0.5, build ced0996
CentOS 7
with 8 cores and 8 GB RAM

📝 Error Message(s) or Log

No response

🐻 Uptime-Kuma Version

1.22.1

💻 Operating System and Arch

CentOS Linux release 7.9.2009

🌐 Browser

Version 117.0.5938.88

🐋 Docker Version

Docker version 24.0.5

🟩 NodeJS Version

No response

@chakflying
Collaborator

You can post the container logs, the output of htop when run inside the container, and the number and types of monitors you are running to help with troubleshooting.
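For reference, the requested information can be gathered with a few Docker commands. This is a sketch; the container name `uptime-kuma-saas` is taken from the `docker stats` output above, and the image may not ship `htop`, so plain `top` in batch mode is used as a fallback:

```shell
# Recent container logs (adjust the container name to your own):
docker logs --tail 200 uptime-kuma-saas

# One-shot snapshot of container resource usage:
docker stats --no-stream uptime-kuma-saas

# Process list inside the container; "top -b -n 1" prints once and exits,
# which works even when htop is not installed in the image:
docker exec uptime-kuma-saas top -b -n 1
```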

@CommanderStorm
Collaborator

*also include the retention time you configured

@syamsullivan
Author

Does retention affect CPU usage?

Also, I use Docker as the main platform to deploy Kuma,
and it always uses 100% CPU. Should I increase the CPU limit of the container?

@CommanderStorm
Collaborator

CommanderStorm commented Nov 27, 2023

@syamsullivan please give us the information we asked for.
See https://github.com/louislam/uptime-kuma/wiki/Troubleshooting if you need help getting this information.

Does retention affect CPU usage?

Retention is not a likely culprit. Please report it anyway.

Should I increase the CPU limit of the container?

That depends on what you set your limits to. One CPU is the maximum Node.js should use.
Note that CPU limits were originally designed to curb power consumption in large datacenters. Use this feature of your runtime with caution.
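As an illustration of setting such a limit (the container name, port, and volume are assumptions; adjust them to your deployment), Docker can cap a container at one CPU's worth of time:

```shell
# Start the container with a hard cap of one CPU (values are illustrative):
docker run -d --name uptime-kuma --cpus="1.0" \
  -p 3001:3001 -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1

# Or adjust the limit on an already-running container:
docker update --cpus="1.0" uptime-kuma
```

Note that `--cpus` throttles total CPU time rather than pinning the process to a core, so a single-threaded Node.js process at 100% of one core will still show as ~100% in `docker stats`.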

@CommanderStorm CommanderStorm added the area:core issues describing changes to the core of uptime kuma label Dec 14, 2023
@bmdbz

bmdbz commented Jan 11, 2024

I have the same problem.
The problem is exacerbated when I log into the web UI.
Generally, after I restart the Docker container, I can log in to the web UI and see the monitoring items normally. After a period of time (maybe 15 minutes or less), when I open the web UI again, the interface does not display any monitoring items (but the monitoring tasks are actually still running).
I have 500+ monitoring items.
I mainly chose Uptime Kuma because it is easier than tools like Zabbix, but the 100% CPU utilization keeps me from adopting it.

@CommanderStorm
Collaborator

@bmdbz
Could you report the values for:

  • htop-output
  • retention time (not likely a problem, still worth reporting)
  • estimated average heartbeat check-time per monitor
  • monitor-type distribution
  • "Do you expect a lot of traffic on your status pages"?

Note that the first beta of v2.0 is still a few weeks out, but that release will come with a lot of performance improvements.
In v1, 500+ monitors (depending on what "+" means) is likely pushing it.

@bmdbz

bmdbz commented Jan 11, 2024

Thank you for your reply.

  • htop output: I will provide this tomorrow, once I can query the environment
  • 7 days retention time
  • Heartbeat check interval for each monitoring item: 10–60 seconds
  • Monitor type is ping
  • "Do you expect a lot of traffic on your status pages?" I don't quite understand this question.

In v1, 500+ means more than 500.

@bmdbz

bmdbz commented Jan 12, 2024

[screenshot: htop output]
The above is the htop output screenshot, thank you!

@CommanderStorm
Collaborator

I missed this response.
The htop output you reported is sorted by memory; could you sort by CPU utilisation instead?
In the screenshot, CPU usage is not 100%, but rather about 30%.

@cayenne17

cayenne17 commented Mar 16, 2024

I just noticed the same problem. When I don't have the Uptime Kuma web interface open, I'm in the ~5% CPU range:
[screenshot: CPU usage with no tab open]

When I have a tab open in the background with no actions on it, CPU varies between 30% and 70%:
[screenshot: CPU usage with a background tab open]

Uptime Kuma is installed with Docker version 25.0.4 (build 1a576c5) on a Debian 12.5 VM.

root@UptimeKuma:~# docker -v
Docker version 25.0.4, build 1a576c5

root@UptimeKuma:~# cat /etc/debian_version 
12.5

Uptime Kuma
Version: 1.23.11
Version frontend: 1.23.11

AVG VM CPU graph from Proxmox VE:
[screenshot: average VM CPU graph]

@sunlewuyou

Non-Docker
[screenshot: CPU usage on a non-Docker install]

github-actions bot

We are clearing up our old help issues, and your issue has been open for 60 days with no activity.
If no comment is made and the stale label is not removed, this issue will be closed in 7 days.

@github-actions github-actions bot added the Stale label Jun 29, 2024
@cayenne17

The problem still exists.

@github-actions github-actions bot removed the Stale label Jul 1, 2024
@CommanderStorm CommanderStorm added feature-request Request for new features to be added question Further information is requested and removed help labels Jul 1, 2024
@CommanderStorm
Collaborator

This is likely resolved by the performance improvements in #4500, more specifically #3515.

Testing PRs can be done via https://github.com/louislam/uptime-kuma/wiki/Test-Pull-Requests, but I don't expect you to do that, since it would require creating 500 monitors without good import/export functionality.

I have changed this to a feature request to keep stalebot from closing it.

What I need from the others in this issue (@sunlewuyou @cayenne17) is the metadata about

  • how many monitors do you have configured
  • what is their type
  • what is your retention

@cayenne17

What I need from the others in this issue (@sunlewuyou @cayenne17) is the metadata about

  • how many monitors do you have configured
  • what is their type
  • what is your retention

@CommanderStorm

How many monitors do you have configured?
74 online, 2 offline, and 5 paused

What is their type?
Mostly ICMP probes and a few HTTPS probes

What is your retention?
30 days

@rezzorix
Contributor

Since Proxmox is used, just a question on terminology: you are using a VM, not an LXC, correct?

In any case, CPU usage in Proxmox is not exclusive by default.
Let's say CPU 1 is assigned to your VM/LXC, and the host decides to use it for some reason; the usage % of the process in the VM/LXC would then look very high.

You can assign CPU resources exclusively (reserved) to a VM/LXC, and then you will not have this issue.

To mitigate this and ensure more predictable CPU usage, you can:

Set CPU Affinity (Exclusive CPU Allocation):

  • For LXC containers, set lxc.cgroup.cpuset.cpus in the container configuration file.
  • For Docker containers, use --cpuset-cpus when running the container.

Limit CPU Usage:

  • Use CPU limits to control how much CPU time the VM/LXC can use.
  • For LXC: Set lxc.cgroup.cpu.shares.
  • For Docker: Use --cpus or --cpu-shares.
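For the Docker side of the options above, a sketch of the two approaches (container name, ports, and core numbers are illustrative, not taken from this thread):

```shell
# Pin the container to specific host cores (here cores 0 and 1):
docker run -d --name uptime-kuma --cpuset-cpus="0,1" \
  -p 3001:3001 louislam/uptime-kuma:1

# Or use a relative weight instead of pinning; the default
# share is 1024, so 512 gets half the weight under contention:
docker run -d --name uptime-kuma --cpu-shares=512 \
  -p 3001:3001 louislam/uptime-kuma:1
```

`--cpuset-cpus` restricts which cores the container may run on, while `--cpu-shares` only takes effect when the host CPU is contended.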
