HTTP checks on non-standard ports reporting as down #1821
Will take a look.
Are there any special settings, like a proxy or basic auth? I saw someone else has also reported a similar issue.
Can't reproduce on d5da5af / 1.17.0. Looking forward to debugging more if I get more info; like @louislam said, there must be some special configuration. There is some trickery involved with a proxy or NTLM, though:
3 HTTP(S) services have been reported down since my container was updated to 1.17 today. There's definitely an issue: one uses a custom port, the other two use the standard port 443. All hosts are up and reachable from the container; I just checked with curl.
I suspect it is related to the axios DNS cache (#1598) or NTLM (#1639). monitor.js diff:
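Setting the omitted monitor.js diff aside, here is a hypothetical sketch (not Uptime Kuma's or axios-cached-dns-resolve's actual code) of how a DNS-cache layer can lose a non-standard port: if the resolved IP is substituted by rebuilding the URL string by hand instead of mutating the parsed URL object, the port component silently disappears.

```javascript
// Hypothetical illustration only; hostnames/IPs are placeholders.
// Buggy variant: rebuilding the URL from protocol + host + path by hand
// silently drops the ":8096" port component.
function rewriteHostNaively(rawUrl, resolvedIp) {
  const u = new URL(rawUrl);
  return `${u.protocol}//${resolvedIp}${u.pathname}`;
}

// Safe variant: mutate only the hostname; the WHATWG URL object keeps
// the port (and query, hash, credentials) intact.
function rewriteHostCorrectly(rawUrl, resolvedIp) {
  const u = new URL(rawUrl);
  u.hostname = resolvedIp;
  return u.toString();
}

console.log(rewriteHostNaively('http://jellyfin.lan:8096/health', '192.168.1.10'));
// → http://192.168.1.10/health   (port lost)
console.log(rewriteHostCorrectly('http://jellyfin.lan:8096/health', '192.168.1.10'));
// → http://192.168.1.10:8096/health
```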
Oh no, I found an issue: if I use the Tailscale domain name instead of the IP, it goes down, while in 1.16.1 it is up. @osc86 @webworxshop Are you using a similar domain name too?
It should be related to #1598. I am reverting that pull request; 1.17.1 should be released soon.
I have a similar-looking issue, but I think the cause is different. I have a monitor for I believe the issue is that there is a forward, and when Uptime Kuma follows that it seems to lose the port and checks the standard port instead. If I set the monitor to
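The redirect case is worth sanity-checking on its own: under WHATWG URL resolution (which Node follows), a relative Location header resolves against the original URL, port included, so the port can only be lost when the server sends an absolute Location that omits it. A quick illustration, using a placeholder hostname:

```javascript
// Resolving a redirect target against the original URL, the way Node's
// WHATWG URL implementation does. jellyfin.lan:8096 is a placeholder.
const base = 'http://jellyfin.lan:8096/web/';

// A relative Location header keeps the original port.
console.log(new URL('/login', base).href);
// → http://jellyfin.lan:8096/login

// An absolute Location header without a port implies the default (80).
console.log(new URL('http://jellyfin.lan/login', base).href);
// → http://jellyfin.lan/login
```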
1.17.1 has been released; it should fix the issue.
It does.
Thanks for your testing. I will transfer this issue to axios-cached-dns-resolve. cc @paul-michael: since you reported this in another thread, just letting you know the issue has been fixed.
@JacksonChen666 It has been fixed; please read my previous comments in this thread.
@louislam I'm on 1.17.0, which is supposedly the version with the bug. Either way, I do not have issues and will remain on 1.17.0, because there's not much to gain from 1.17.1 for me.
@JacksonChen666 It only hit users on networks such as WireGuard/Tailscale with a custom local domain. It is hard to spot this kind of problem before a final release; that's why I am afraid to merge large new features now.
@louislam ah, makes sense. |
Wow, looks like this got some traction while I was AFK 😃 @louislam Yes, I am using a domain name, but from my local DNS server, not Tailscale. I can confirm the issue is fixed in 1.17.1. Thank you for the awesomely quick resolution and the great software!
Regarding the DNS cache: why are we using this at all? We then have at least three caches: the DNS provider's cache, the local server's DNS cache, Uptime Kuma's cache, … I think this is not necessary, or am I missing something?
Depending on the configuration, there may not be a local cache. GNU/Linux does not cache DNS by default; projects like systemd-resolved (not often configured to run by default on many setups, even those using systemd), dnsmasq, etc. are needed. If, in your setup, you are constantly querying DNS, you will see worse, sporadic performance out of Uptime Kuma. See the screenshot on #1598.
Can this be a configurable option per monitor? It would benefit some monitors and not others. New features that would be considered beta should be off by default; turn them on if you want to try them out.
To bypass DNS resolution altogether, set the URL to https:// and the header "Host: domain.com". If you want to know whether the DNS record changes, set up a DNS monitor. Having the option of enabling DNS resolution or specifying the IP/host during monitor creation would save time versus setting headers each time. If you choose DNS resolution, you could pick the local resolver, a remote resolver, or the built-in Kuma DNS cache.
Description
As of version 1.17.0, HTTP checks on non-standard ports are reported as down. The resulting notification reports that the check is unable to reach port 80, which is wrong. See the screenshot below:
In this case the Jellyfin service is configured on port 8096.
👟 Reproduction steps
Create a check for an HTTP service on a non-standard port.
👀 Expected behavior
The service should be reported as up when it is up and down when it is down!
😓 Actual Behavior
The service is always reported as down.
🐻 Uptime-Kuma Version
1.17.0
💻 Operating System and Arch
Ubuntu 20.04
🌐 Browser
Doesn't matter
🐋 Docker Version
Docker 20.10.14
🟩 NodeJS Version
No response
📝 Relevant log output
No response