
[Enhancement] Added HealthCheck #33

Draft · wants to merge 5 commits into main

Conversation

gb-123-git

A healthcheck has become a standard feature for deployments (especially in Kubernetes).

Updated to add Healthcheck to the container
Added Sponsor Message
Removed Extra Blank Line
@zyclonite
Owner

i do not think that's a good way of doing it... if the status goes to offline, the zerotier client itself will try to recover, so no need to fail a healthcheck and restart the container

i also do not think it's always a best practice, e.g. look at some official docker containers, they do not have a healthcheck defined by default

@gb-123-git
Author

gb-123-git commented Aug 20, 2024

The reason for this is that currently, if other containers depend on this one, they have a problem starting because the IP & ports are not yet allotted. Checking this ensures that this container has started up, so the other containers depending on it can be started.

How do you want me to implement this? I can change the implementation.

The reason I checked for being "online" is that the docker image is not of much use when it is "offline", since most people use it for remote connection (VPN) and internet access is required for that.
Further, sometimes there is a network 'hang' (due to a router restart, ISP, network connection, etc.) in which case a restart is required.

But I am open to suggestions for other implementations.

I do feel building a healthcheck is necessary.

@Paraphraser
Contributor

I hope nobody minds if I chime in here with a few observations. I apologise in advance for the length.

As a general principle, I support the idea of health-check scripts. I think they add to the overall user experience of Docker containers. The typical scenario where a health-check status is very handy is:

  1. Something seems to be wrong.
  2. You run docker ps.
  3. Seeing "unhealthy" gives you somewhere to start.
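
As an aside, docker ps can filter on that state directly, which is handy on a busy host; just an illustration, not specific to this PR:

$ docker ps --filter health=unhealthy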

Over on IOTstack where I spend a fair bit of time, I've added health checking to a bunch of containers. In some cases (InfluxDB 1.8 and Grafana) I've done it via a healthcheck clause in the container's service definition. In others (MariaDB and Mosquitto), I've done it via a local Dockerfile.

So, I definitely "get" the idea of doing this. I have also experienced push-back when either I or some third party have pointed to the IOTstack implementation in the hope it, or something like it, would be adopted by the upstream maintainer.

I also agree with Lukas about many official containers not having health-checking by default. All the containers where I've added the feature in IOTstack qualify as "official" and "mainstream".

Bottom line: I think this seems like a good idea from a usability perspective so please see this as a "+1" for your proposal.


To be clear, I'm supporting the idea of a health-check script. I haven't done any investigation into what zerotier-cli status returns under various failure conditions. I'm assuming you've done that work and have concluded that "not equal to ONLINE" is sufficient and appropriate.

As to the specifics of healthcheck.sh, might I suggest one small improvement? Your proposed line:

if [[ $status = "ONLINE" ]]; then

implicitly assumes that the status variable will always be non-null and be a single word with no special characters that might be open to misinterpretation by the shell. I think it would be better to quote the variable:

if [[ "$status" = "ONLINE" ]] ; then

That way, null or anything unexpected (save for embedded quotes) will still evaluate as a string for comparison purposes. I've also added a space between ]] and ; because, sometimes, that matters.


Now I'd like to focus on what it is that you are actually trying to achieve. That's something you haven't really explained. It's possible that, if I understood your actual use-case, I'd immediately see the practical application.

In the absence of that knowledge, I can only talk in generalities.

In general, health-check scripts are slow. You've specified --interval=60s and you'll inherit the default --retries=3 so it'll take around 3 minutes for the container to react.

Sure, you can tweak things to run the script more frequently but the faster you go, the more overhead and the greater the chance of propagating small transients into bigger downstream problems. This is always the conundrum of data-communications timeouts and why they tend to be long.

So, while I'm convinced that health status is a valuable user diagnostic aid, I'm not at all persuaded that it's going to prove valuable as an automation aid. I hope that makes sense. I also suspect that's a point Lukas was making.

More generally, I think you are arguing with one of the key design assumptions that underpins TCP/IP:

  • Any device between a client and a server is free to drop packets. It is up to the client and server to recover.

That might sound simple and self-evident but it has far-reaching consequences.

ZeroTier, whether it's running in a container or otherwise, is a (logical) "device" that sits between a client and a server. Its core job is to forward packets. Irrespective of the reason, any inability to forward a packet means the packet gets dropped. That's something the client and server need to handle. It's an end-station responsibility, not something that should be delegated. Sure, it might be convenient to make the assumption that the nearby ZeroTier container going unhealthy covers a multitude of sins (no DNS resolution, no viable local route to the Internet, ZeroTier Cloud being offline) but it sure doesn't get all possible reasons why a client and server can't talk to each other, such as whatever physical device is running the ZeroTier service at the other end not being available, or device present but container not running, or device present and container running but simply not having a viable path to the Internet because the router at the other end is down.

It's a non-delegable client and server responsibility.


Before I make the next point, I'd like to thank you. Until I read your PR and went to the Docker documentation to double-check my understanding, I was not aware that the depends_on clause had acquired a long-form syntax.

Until today I thought that a depends_on: did little more than govern the start order. But now that there's support for things like condition: service_healthy I think I might be getting a glimpse of the gleam in your eye.

Even so, I don't think it's going to be all that robust. I've done some experimentation. I didn't want to either interfere with my existing ZeroTier network or set up a separate ZeroTier network just for experimentation so I used Mosquitto and Node-RED as proxies for ZeroTier and "some other container", respectively. I augmented Node-RED's service definition with:

depends_on:
  mosquitto:
    condition: service_healthy
    restart: true

For its part, Mosquitto has an IOTstack health-check script which uses mosquitto_pub and mosquitto_sub to first publish a retained message, then subscribe to see if that same message is returned.

Tests:

  1. Both containers down. Start the stack.

    Mosquitto takes 5 seconds to go healthy and then Node-RED starts. All according to Hoyle.

  2. Both containers up. Explicitly terminate Mosquitto:

    $ docker-compose down mosquitto
    

    Node-RED doesn't get taken down. It just continues.

  3. Node-RED still running, Mosquitto down. Start Mosquitto:

    $ docker-compose up -d mosquitto
    

    No reaction from Node-RED. It makes no difference if I omit the container name and leave it implied that docker-compose should start all non-running containers.

  4. Both containers running. Explicitly restart Mosquitto:

    $ docker-compose restart mosquitto
    

    docker-compose also restarts Node-RED.

    At test 1, the "up time" difference between the containers is the 5 seconds Mosquitto takes to go healthy. In this test, the time-difference seems to be 3 seconds. I'm not sure either whether it's significant or how to explain it.

  5. Because of the way the Mosquitto health-check script works, it's possible to interfere with its operation to simulate various failure modes. In particular, I can cause the container to go "unhealthy" without needing to issue any Docker commands. I think this would be the rough equivalent of ZeroTier up and running but suddenly finding it had no route to the destination network.

    The starting point is both containers running:

    $ DPS
    NAMES       CREATED          STATUS                    SIZE
    nodered     37 seconds ago   Up 30 seconds (healthy)   0B (virtual 636MB)
    mosquitto   37 seconds ago   Up 36 seconds (healthy)   0B (virtual 17.7MB)
    

    DPS is an alias for docker ps which focuses on just those columns.

    Now I cause Mosquitto to go unhealthy:

    $ DPS
    NAMES       CREATED         STATUS                     SIZE
    nodered     4 minutes ago   Up 4 minutes (healthy)     0B (virtual 636MB)
    mosquitto   4 minutes ago   Up 4 minutes (unhealthy)   20B (virtual 17.7MB)
    

    Technically, all I did was to change its configuration to disallow anonymous users. Because I didn't provide credentials for the mosquitto_pub and _sub commands in the health-check script, they fail, the script exits with return code 1, and Docker treats that as a fail. The IOTstack Dockerfile for Mosquitto specifies the Docker defaults of 3 successive fails 30 seconds apart before declaring "unhealthy" but practical experience suggests the actual time between the condition being triggered and Mosquitto going "unhealthy" is about 72 seconds.

    The material point is that Mosquitto going "unhealthy" has had zero effect on the Node-RED container. It has not been either taken down or restarted by Docker.

    When I undo the damage, Mosquitto returns to "healthy" within about 22 seconds but, again, there is zero impact on the Node-RED container.

    Bottom line: while long-form syntax for the depends_on clause is likely to help when inter-dependent containers are starting, you're not going to get much help for conditions that occur once your dependent containers are running.

    The question for you is, "is that enough?"

Changes as proposed by @Paraphraser
@gb-123-git
Author

@Paraphraser

Actually, the shell behaves differently on different distributions (e.g. some have ash instead of bash); nevertheless, I have updated the code as per your recommendation. Thanks. 🙏
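
For reference, a strictly POSIX form of the same test (a sketch only, keeping the awk extraction from this PR) avoids [[ ]] entirely and should behave the same under ash, dash and bash:

#!/bin/sh
# POSIX-portable variant of the check; single brackets work in any /bin/sh
status="$(zerotier-cli status | awk '{print $5}')"
[ "$status" = "ONLINE" ] || exit 1
exit 0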

Many thanks for your detailed test. 👍
I cannot comment on your test much since it may well be a bug on the Docker Engine side. Ideally, I believe Node-RED should have been restarted once you put restart: true. However, from my experience, Docker usually acts on "schedules/polling" rather than "immediate event action", so sometimes it is slow to react and you don't get to see an immediate reaction. However, this may not be true; I am only assuming so because of the slow reaction instead of a real-time reaction.

Now for some of your specific statements/questions :

It's a non-delegable client and server responsibility.

Agreed. But no harm in having a health-check in the middleware also.

The question for you is, "is that enough?"

For my use case: YES. To better describe this, I use a piece of software which takes its IP from ZeroTier. If ZeroTier is down, the IP is not allotted, the software container fails, and I have to manually restart the software container while keeping the ZT container as is. This happens every time the docker compose stack is run without health checks. Now, once ZT has an IP, then even if the network is dropped I do not need to restart the software container (since your rule of the client and server handling dropped packets applies), hence Docker is free to restart the ZT container only when it is unhealthy. So even if the software container is not restarted (like Node-RED in your case), it does not matter for my use-case scenario. (Though I still think restart: true should have restarted it in your case.) I don't have much time these days, so I won't go into the details of why your Node-RED was not restarted.

Secondly, at times there are network hangs (like a router restart) on the backend which sometimes 'hang' the ZeroTier client (or the ZT client takes a long time to recover). Though this happens mostly on consumer-grade hardware (like my home lab, since server-grade h/w is beyond what I can afford), having a health-check is again useful.

Lastly, once you move to container orchestration (e.g. Kubernetes) you would appreciate how important it is to build healthchecks into docker containers. Although, as a counter-point, for containers without built-in health checks, one can also be added directly through docker compose.

In general, health-check scripts are slow. You've specified --interval=60s and you'll inherit the default --retries=3 so it'll take around 3 minutes for the container to react.

Well, there will always be a debate on 'slowness' vs 'performance'. If you have faster healthchecks, they will also take CPU cycles, possibly degrading performance. Besides, what is specified in the Dockerfile is only the default values; they can be changed at runtime by the user at any time, either by using docker run or by specifying overriding values in docker compose. So there is no need for debate here; just use what is best for your use case.
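
For illustration, a sketch of overriding those defaults at run time with the standard docker run health flags (the values are placeholders; the usual --device/--cap-add flags are omitted for brevity):

# sketch: override the image's healthcheck timing at run time
docker run -d --name zerotier \
  --health-interval=15s --health-retries=1 --health-timeout=5s \
  zyclonite/zerotier:latest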

So, while I'm convinced that health status is a valuable user diagnostic aid, I'm not at all persuaded that it's going to prove valuable as an automation aid. I hope that makes sense. I also suspect that's a point Lukas was making.

I already explained my use-case, where it is also helping me as an automation aid.

Regarding the implementation of the healthcheck, please read my earlier post wherein I have clearly stated that I am open to a better implementation of the healthcheck if someone can propose one. 🙏

@hoppke

hoppke commented Aug 21, 2024

if I'm reading it right, content of healthcheck.sh could be reduced to sth along the lines of

exec zerotier-cli status -j | jq .online | grep -q true

(with 'apk add jq' prerequisite)

Not saying it's better, just different.

@zyclonite
Owner

this one is from the official container, maybe worth looking into?
https://github.com/zerotier/ZeroTierOne/blob/f176e2539e10e8c0f61eb1d2e1f0e690a267a646/entrypoint.sh.release#L109-L113

i did often run into a situation where zerotier-cli status returns online but no network interface was created... this might be worth checking as well

@gb-123-git
Author

if I'm reading it right, content of healthcheck.sh could be reduced to sth along the lines of

exec zerotier-cli status -j | jq .online | grep -q true

(with 'apk add jq' prerequisite)

Not saying it's better, just different.

This essentially does the same thing (checking that the status is "online"), albeit using a different program. I personally feel there is no point in adding another package (jq, which is essentially a JSON parser) if we can do without it. The fewer programs installed in a container, the better (as a general rule), so I think we should only install the necessary packages.

@gb-123-git
Author

gb-123-git commented Aug 21, 2024

i did often run into a situation where zerotier-cli status returns online but no network interface was created... this might be worth checking as well.

That is true. The way zerotier-cli status works is that it checks whether the client itself is online and ready, not the networks.

this one is from the official container, maybe worth looking into? https://github.com/zerotier/ZeroTierOne/blob/f176e2539e10e8c0f61eb1d2e1f0e690a267a646/entrypoint.sh.release#L109-L113

Definitely!

$ZEROTIER_JOIN_NETWORKS is not populated when we run this container. But I think this can be implemented by first loading all the networks into an array and then checking them.

The other approach is that we change entrypoint.sh so that it takes $ZEROTIER_JOIN_NETWORKS as an array from docker compose/docker run, checks whether the network config exists (either using files or using zerotier-cli listnetworks), and joins the network if it doesn't. If a network is not present in $ZEROTIER_JOIN_NETWORKS but still comes up in the above check, then leave would be called for that network. This would keep the container in sync with the networks specified by the user. Then we run the healthcheck as mentioned by you.
The advantage of the above approach would be that the user would not need to call the join network command after starting the container.

UPDATE: I just went through entrypoint.sh and saw that you have implemented this in a different manner (i.e. check whether an existing configuration exists and use that to join the network, else join the network on the first run), which is better because join network is technically called only once. We can further add to this so that join network is only called if the user adds some value to the docker file.
The flow would then be:


##### (First remove additional networks (if any) for proper syncing)
Check the user config for `$ZEROTIER_JOIN_NETWORKS`.
Match the user-provided networks with the config files (if they exist).
If we find a config file for a network not included in `$ZEROTIER_JOIN_NETWORKS`, call leave network for that network.
##### (Join networks as per `$ZEROTIER_JOIN_NETWORKS`)
Check the config files to see whether a config exists.
If a config exists, skip joining that particular network;
else call join network.
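
A rough sketch of that flow in shell (assuming a space-separated $ZEROTIER_JOIN_NETWORKS and the usual networks.d config location; both are assumptions, not the current entrypoint.sh):

#!/bin/sh
# sketch only: keep joined networks in sync with $ZEROTIER_JOIN_NETWORKS
wanted="${ZEROTIER_JOIN_NETWORKS:-}"

# leave any joined network that the user no longer lists
for nwid in $(zerotier-cli listnetworks | awk 'NR>1 {print $3}'); do
    case " $wanted " in
        *" $nwid "*) ;;                      # still listed, keep it
        *) zerotier-cli leave "$nwid" ;;     # not listed, leave it
    esac
done

# join any listed network that has no saved config yet
for nwid in $wanted; do
    [ -f "/var/lib/zerotier-one/networks.d/${nwid}.conf" ] || zerotier-cli join "$nwid"
done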

Though one question still bugs me... In your approach, what if the user has specified multiple networks and some of them are down, but not all of them? In that case, what do we do? Should the container be 'healthy' or 'unhealthy'?
Another question would be: what if the container is started without joining any network, but the client is online? Would it then be considered healthy or unhealthy?
Or should we give specific networks (as an argument) for the health-check to check?

Which approach would you like to go with?

@Paraphraser
Contributor

@gb-123-git

I started to write this reply last night (I'm at UTC+10), intending to finish it this morning, but I see things have moved on a bit.

Actually, the shell behaves differently on different distributions (e.g. some have ash instead of bash); nevertheless, I have updated the code as per your recommendation.

For some reason, ash seems to be a standard inclusion when the container is based on Alpine Linux but not when the container is based on Debian or Ubuntu. However, it's actually up to the container's designer. For example, the :latest tag for Node-RED gets you an Alpine-based container which contains bash as well as ash.


I did some more thinking about your test for ONLINE and it led me here. Unless I'm mis-reading that, the info (aka status) command can return "ONLINE", "OFFLINE" or "TUNNELED".

In reading the documentation, it seems that TUNNELED means that the client accepts the default route override. I set up another container to run as a client, joined it to my existing network, then enabled allowDefault=1. The host's routing table changed showing the override was in effect but the info status remained "ONLINE".

I'm not sure what to make of all that. However, if it were me, I'd assume a status of TUNNELED could be taken at face value and that it would imply the client was still ONLINE. In that case, I might express the test like this:

status=$(zerotier-cli status | awk '{print $5}')
[[ "$status" = "OFFLINE" ]] && exit 1
exit 0

BUT please put a pin in all that and study this. First, the normal situation:

$ docker exec zerotier zerotier-cli status
200 info 0123456789 1.14.0 ONLINE

$ ip r | grep "dev zt"
10.244.0.0/16 dev ztr2qsmswx proto kernel scope link src 10.244.233.146 
192.168.0.0/23 via 10.244.124.118 dev ztr2qsmswx proto static metric 5000 

$ docker exec zerotier zerotier-cli listnetworks
200 listnetworks <nwid> <name> <mac> <status> <type> <dev> <ZT assigned ips>
200 listnetworks 9999888877776666 My_ZeroTier 22:9b:3d:a2:ba:0b OK PRIVATE ztr2qsmswx 10.244.233.146/16

In words:

  1. The client is reporting ONLINE.
  2. The host's routing table shows routes associated with the ZeroTier network interface.
  3. The client knows it is connected to the ZeroTier network (network ID changed to protect the innocent).

Now let's make a mess by leaving the ZeroTier network:

$ docker exec zerotier zerotier-cli leave 9999888877776666
200 leave OK

What's the story?

$ docker exec zerotier zerotier-cli listnetworks
200 listnetworks <nwid> <name> <mac> <status> <type> <dev> <ZT assigned ips>

$ ip r | grep "dev zt"

$ docker exec zerotier zerotier-cli status
200 info 0123456789 1.14.0 ONLINE

In words:

  1. The client no longer lists the network.
  2. The host has no routes associated with the ZeroTier network interface.
  3. But the client still reports ONLINE.

I'm now wondering if ONLINE means what we think it does. I think this was the point Lukas was making.


I cannot comment on your test much since it may well be a bug on the Docker Engine side. Ideally, I believe Node-RED should have been restarted once you put restart: true. However, from my experience, Docker usually acts on "schedules/polling" rather than "immediate event action", so sometimes it is slow to react and you don't get to see an immediate reaction. However, this may not be true; I am only assuming so because of the slow reaction instead of a real-time reaction.

I didn't actually find the behaviour all that surprising. It seemed to me that it was behaving exactly as documented.

Another test I've done in the meantime is to force a change to the Mosquitto service definition (adding a nonsense environment variable). The subsequent "up" recreated both containers, starting them in the order Mosquitto, then Node-RED when Mosquitto went healthy.

And when I say "recreated" I mean that. The Node-RED container was not simply restarted.

Conversely, if I merely restart Mosquitto, the Node-RED container merely restarts too (ie it's not a container re-creation).


Secondly, at times there are network hangs (like a router restart) on the backend which sometimes 'hang' the ZeroTier client (or the ZT client takes a long time to recover). Though this happens mostly on consumer-grade hardware (like my home lab, since server-grade h/w is beyond what I can afford), having a health-check is again useful.

Intriguing. I have found ZeroTier to be absolutely rock solid. I never have to touch it, no matter what else happens. I'm running the ZeroTier-router container in close to Topology 4. My local and remote ZeroTier routers (the A and F in the diagram) are on Raspberry Pis. Other devices (Macs and iOS; the B, E and G in the diagram) are running the standard ZeroTier clients (ie not in Docker containers) just as they come from ZeroTier.

The biggest problem I have ever had with ZeroTier is documented here and it was peculiar to the GUI widget controls in macOS where the condition was triggered by upgrading to a new Mac.

On the other hand, having visited ZeroTier Central a few times in the last 48 hours and been peppered with "upgrade to paid" popups, I'm starting to wonder whether ZeroTier has launched itself onto the enshittification curve?


Lastly, once you move to container orchestration (e.g. Kubernetes) you would appreciate how important it is to build healthchecks into docker containers. Although, as a counter-point, for containers without built-in health checks, one can also be added directly through docker compose.

I have no experience with this so I'll take your word for it.


Regarding the implementation of the healthcheck, please read my earlier post wherein I have clearly stated that I am open to a better implementation of the healthcheck if someone can propose one.

If you are able to replicate the example I gave you above (where leaving the ZeroTier network still saw the service reporting ONLINE), you might perhaps try something like this:

routes=$(ip r | grep -c "dev zt")
[ $routes -eq 0 ] && exit 1
exit 0

This is more-or-less where I got to last night. Please keep reading...


$ZEROTIER_JOIN_NETWORKS is not populated when we run this container. But I think this can be implemented by first loading all the networks into an array and then checking the same.

The zerotier-router container supports ZEROTIER_ONE_NETWORK_IDS and it operates in the manner you suggest.

But how about something like this?

  1. Define an optional variable which sets the expectation of the number of zerotier-related routes that should exist in order for the container to be considered healthy.

    environment:
      - ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH=2
  2. In the health-check script:

    • apply a default where that variable is not passed into the container which implements "at least one route", as in:

       ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH="${ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH:-1}"
      
    • Count the zerotier-related routes:

       routes=$(ip r | grep -c "dev zt")
      
    • Test the actual routes against the expectation:

       [ $routes -lt $ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH ] && exit 1
      

If all you care about is a single network created by the client when it has connectivity with the ZeroTier Cloud, the default of 1 will suffice, the health-check will work, and nobody will need to add that variable to their service definition. In other words, you'll have full backwards compatibility.

On the other hand, if you typically propagate managed routes, enable full tunnelling, and so on, what constitutes "normal" will be a larger number of entries in the routing table. Then you can add the variable to your service definition and tune it to the correct value.

How does that sound?

@Paraphraser
Contributor

@hoppke

if I'm reading it right, content of healthcheck.sh could be reduced to sth along the lines of

exec zerotier-cli status -j | jq .online | grep -q true

(with 'apk add jq' prerequisite)

Not saying it's better, just different.

If it were me, I'd probably use jq too because it's more robust against upstream formatting changes. On the other hand, given the choice between clever and obvious, I generally opt for the latter, even if it is more wordy. Something like:

status=$(zerotier-cli status -j | jq -r '.online')
[ "$status" = "false" ] && exit 1

Two lines is good for debugging because you can stuff an echo in the middle. The code will succeed if the "online" key goes away (jq will return null) so users won't freak out seeing "unhealthy" just because the API changed (albeit at the price of a "healthy" which is no better than fake news). The -r guards against something other than Boolean tokens coming back and jq wrapping the result in quotes.
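
That null behaviour is easy to demonstrate (just an illustration, not part of the script):

$ echo '{}' | jq -r '.online'
null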

Like you, I'm not saying better, just different. 😎

@Paraphraser
Contributor

Here's a proof-of-concept for you to consider if you think counting routes is a reasonable approach...

setup

  1. Health-check script:

    #!/bin/sh
    
    # set expected number of routes to "at least one"
    ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH="${ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH:-1}"
    
    # count the number of zerotier-associated routes in the routing table
    routes=$(ip r | grep -c "dev zt")
    
    # return "unhealthy" if not at least the expected number of routes
    [ $routes -lt $ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH ] && exit 1
    
    # formal catch-all indicating "healthy"
    exit 0
  2. Dockerfile:

    FROM zyclonite/zerotier:latest
    
    COPY --chmod=755 healthcheck.sh /usr/bin/
    
    HEALTHCHECK \
       --start-period=30s \
       --interval=20s \
       --timeout=10s \
       --retries=1 \
       CMD /usr/bin/healthcheck.sh || exit 1
    
    # EOF

    Ignore the fact that I'm using different paths.

  3. Service definitions:

    ---
    
    services:
    
      nodered:
        container_name: nodered
        build:
          context: ./services/nodered/.
          args:
            - DOCKERHUB_TAG=latest
            - EXTRA_PACKAGES=mosquitto-clients bind-tools tcpdump tree
        restart: unless-stopped
        environment:
          - TZ=${TZ:-Etc/UTC}
        ports:
          - "1880:1880"
        user: "0"
        volumes:
          - ./volumes/nodered/data:/data
          - ./volumes/nodered/ssh:/root/.ssh
        depends_on:
          zerotier-client:
            condition: service_healthy
            restart: true
    
      zerotier-client:
        container_name: zerotier
        build:
          context: ./.templates/zerotier-client/.
        x-image: "zyclonite/zerotier"
        restart: unless-stopped
        x-profiles:
          - all-hosts-off
        environment:
          - ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH=2
        network_mode: host
        volumes:
          - ./volumes/zerotier-one:/var/lib/zerotier-one
        devices:
          - "/dev/net/tun:/dev/net/tun"
        cap_add:
          - NET_ADMIN
          - SYS_ADMIN

    Again, I'm just using Node-RED in the role of "some other container which has a dependency on ZeroTier".

tests

  1. Nothing running. Up the stack.

    $ docker-compose up -d
    [+] Running 3/3
     ✔ Network iotstack_default  Created                                                                                                                       0.2s 
     ✔ Container zerotier        Healthy                                                                                                                       5.8s 
     ✔ Container nodered         Started                                                                                                                       6.1s 
    

    Some time later...

    $ DPS
    NAMES      CREATED          STATUS                    SIZE
    nodered    52 seconds ago   Up 46 seconds (healthy)   0B (virtual 636MB)
    zerotier   52 seconds ago   Up 51 seconds (healthy)   0B (virtual 14.2MB)
    

    The routing table situation is:

    $ ip r | grep "dev zt"
    10.244.0.0/16 dev ztr2qsmswx proto kernel scope link src 10.244.233.146 
    192.168.0.0/23 via 10.244.124.118 dev ztr2qsmswx proto static metric 5000 
    
  2. Disconnect from the network

    $ docker exec zerotier zerotier-cli leave 9999888877776666 ; date
    200 leave OK
    Thu Aug 22 11:51:09 AM AEST 2024
    

    Observe the reaction from a separate window calling docker ps at one-second intervals:

    Thu Aug 22 11:51:16 AM AEST 2024
    NAMES      CREATED         STATUS                     SIZE
    nodered    4 minutes ago   Up 4 minutes (healthy)     0B (virtual 636MB)
    zerotier   4 minutes ago   Up 4 minutes (unhealthy)   0B (virtual 14.2MB)
    

    It took the container 7 seconds to go unhealthy. Fairly obviously, this time depends on when the healthcheck script actually fires. The parameters I set were 20-second intervals with a single failure enough to declare "unhealthy".

  3. Reconnect to the network

    $ docker exec zerotier zerotier-cli join 9999888877776666 ; date
    200 join OK
    Thu Aug 22 11:54:49 AM AEST 2024
    

    Reaction (8 seconds):

    Thu Aug 22 11:54:57 AM AEST 2024
    NAMES      CREATED         STATUS                   SIZE
    nodered    8 minutes ago   Up 8 minutes (healthy)   0B (virtual 636MB)
    zerotier   8 minutes ago   Up 8 minutes (healthy)   0B (virtual 14.2MB)
    

However, as before, none of that affected the Node-RED container but if you reckon going "unhealthy" will trigger events in a Kubernetes environment then, great!

Two situations where Node-RED is affected are:

  1. Restart the ZeroTier container:

    $ date ; docker-compose restart zerotier-client
    Thu Aug 22 11:57:28 AM AEST 2024
    [+] Restarting 2/2
     ✔ Container zerotier  Started                                                                                                                             2.8s 
     ✔ Container nodered   Started                                                                                                                             2.6s 
    

    Reaction:

    Thu Aug 22 11:57:34 AM AEST 2024
    NAMES      CREATED          STATUS                                     SIZE
    nodered    11 minutes ago   Up Less than a second (health: starting)   0B (virtual 636MB)
    zerotier   11 minutes ago   Up 3 seconds (health: starting)            0B (virtual 14.2MB)
    

    A bit later:

    Thu Aug 22 11:57:36 AM AEST 2024
    NAMES      CREATED          STATUS                            SIZE
    nodered    11 minutes ago   Up 2 seconds (health: starting)   0B (virtual 636MB)
    zerotier   11 minutes ago   Up 5 seconds (healthy)            0B (virtual 14.2MB)
    

    And then the 3-second difference continues (the Node-RED container goes healthy at 30 seconds).

  2. Change the ZeroTier service definition (add a bogus environment variable):

    $ date ; docker-compose up -d
    Thu Aug 22 12:06:02 PM AEST 2024
    [+] Running 2/2
     ✔ Container zerotier  Healthy                                                                                                                            10.5s 
     ✔ Container nodered   Started                                                                                                                             8.5s 
    

    Reaction:

    Thu Aug 22 12:06:08 PM AEST 2024
    NAMES      CREATED         STATUS                                     SIZE
    nodered    3 seconds ago   Created                                    0B (virtual 636MB)
    zerotier   5 seconds ago   Up Less than a second (health: starting)   0B (virtual 14.2MB)
    

    Then, later:

    Thu Aug 22 12:06:13 PM AEST 2024
    NAMES      CREATED          STATUS                   SIZE
    nodered    8 seconds ago    Created                  0B (virtual 636MB)
    zerotier   10 seconds ago   Up 5 seconds (healthy)   0B (virtual 14.2MB)
    
    

    Then, later again:

    Thu Aug 22 12:06:14 PM AEST 2024
    NAMES      CREATED          STATUS                           SIZE
    nodered    9 seconds ago    Up 1 second (health: starting)   0B (virtual 636MB)
    zerotier   11 seconds ago   Up 6 seconds (healthy)           0B (virtual 14.2MB)
    
    

    And, finally:

    Thu Aug 22 12:06:44 PM AEST 2024
    NAMES      CREATED          STATUS                    SIZE
    nodered    39 seconds ago   Up 30 seconds (healthy)   0B (virtual 636MB)
    zerotier   41 seconds ago   Up 36 seconds (healthy)   0B (virtual 14.2MB)
    

But then there are situations where a docker-compose command affects the ZeroTier container without affecting the Node-RED container:

  1. Force a recreation of the ZeroTier container:

    $ date ; docker-compose up -d --force-recreate zerotier-client
    Thu Aug 22 12:01:54 PM AEST 2024
    [+] Running 1/1
     ✔ Container zerotier  Started                                                                                                                             3.0s 
    

    Reaction:

    Thu Aug 22 12:01:57 PM AEST 2024
    NAMES      CREATED          STATUS                                     SIZE
    zerotier   3 seconds ago    Up Less than a second (health: starting)   0B (virtual 14.2MB)
    nodered    15 minutes ago   Up 4 minutes (healthy)                     0B (virtual 636MB)
    

    Followed later by:

    Thu Aug 22 12:02:03 PM AEST 2024
    NAMES      CREATED          STATUS                   SIZE
    zerotier   9 seconds ago    Up 6 seconds (healthy)   0B (virtual 14.2MB)
    nodered    15 minutes ago   Up 4 minutes (healthy)   0B (virtual 636MB)
    

    No reaction from Node-RED.

  2. Stop ZeroTier.

    $ docker-compose down zerotier-client
    [+] Running 1/1
     ✔ Container zerotier  Removed                                                                                                                             2.4s 
    
    $ DPS
    NAMES     CREATED          STATUS                    SIZE
    nodered   27 minutes ago   Up 27 minutes (healthy)   0B (virtual 636MB)
    

    No reaction from Node-RED.

  3. Start ZeroTier again:

    $ docker-compose up -d
    [+] Running 2/2
     ✔ Container zerotier  Healthy                                                                                                                             5.8s 
     ✔ Container nodered   Running                                                                                                                             0.0s 
    
    $ DPS
    NAMES      CREATED          STATUS                    SIZE
    zerotier   8 seconds ago    Up 7 seconds (healthy)    0B (virtual 14.2MB)
    nodered    28 minutes ago   Up 28 minutes (healthy)   0B (virtual 636MB)
    

    No reaction from Node-RED.

for the record...

$ docker version -f "{{.Server.Version}}"
27.1.2

$ docker compose version
Docker Compose version v2.29.1

$ docker-compose version
Docker Compose version v2.29.1

I know docker-compose v2.29.2 has been released on GitHub. It hasn't made it into the apt repositories yet. I doubt that it would make a difference.

@hoppke

hoppke commented Aug 22, 2024

@gb-123-git

if I'm reading it right, content of healthcheck.sh could be reduced to sth along the lines of
exec zerotier-cli status -j | jq .online | grep -q true
(with 'apk add jq' prerequisite)
Not saying it's better, just different.

This essentially does the same thing (checking that the status is "online"), albeit using a different program. I personally feel there is no point in adding another package (jq, which is essentially a JSON parser) if we can do without it. The fewer programs installed in a container, the better (as a general rule), so I think we should only install the necessary packages.

Yes. What matters IMO is that it sources structured data instead of scraping stdout, and tries to avoid making assumptions about what flavour/version of /bin/sh you happen to get. IMO that pays off the price of adding jq.

Seeing a 'jq .status' line in the script provides more hints than 'awk '{print $5}'' (should zerotier-cli change the output format and someone needs to go in and make the script work again), and the json interface exposes more details, so you might be able to build smarter/more accurate checks based on it (if ever needed). Not major features, but still good practices.

Of course both versions achieve the same result, today.
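
For what it's worth, here's a sketch of the kind of "smarter" check the JSON output would allow; the field names (.online, .status) and the -j switch placement are assumptions based on the commands already quoted here, not something I've tested:

#!/bin/sh
# sketch: healthy only if the node is online AND at least one joined
# network reports status OK (field names are assumptions)
zerotier-cli -j status | jq -e '.online' >/dev/null || exit 1
zerotier-cli -j listnetworks | jq -e 'any(.status == "OK")' >/dev/null || exit 1
exit 0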

@gb-123-git
Author

gb-123-git commented Aug 22, 2024

@hoppke
You are right, using jq does provide better hints at what is being done. It's just a matter of choice, I guess. IMHO, we can write code comments explaining what awk '{print $5}' does rather than installing a package just for better visibility.

should zerotier-cli change the output format and someone needs to go in and make the script work again, the json interface exposes more details, so you might be able to build smarter/more accurate checks based on it (if ever needed). Not major features, but still good practices.

I absolutely agree with this. However, we might still have to change the script in case ZeroTier decides to change the name/label & value pairs itself.

I think we should first decide on the method (i.e. under which conditions the node should be considered 'healthy' and in which cases we need to mark it 'unhealthy').
Once that is finalized, we can then decide whether we want to install jq or use stdout scraping.

@gb-123-git
Author

@Paraphraser
Thank you so much for the detailed tests and suggestions. Just for your ready reference:

i did often run into a situation where zerotier-cli status returns online but no network interface was created... this might be worth checking as well.

That is true. The way zerotier-cli status works is that it checks whether the client itself is online and ready, not the networks.

I think it's up to @Paraphraser, @hoppke & @zyclonite to decide what we finally want to do, and I'll try to build the script accordingly.
Another thing that comes to mind is the following:

We can also give the user the option to check the client (client status) or the network (network status),
and further whether the user wants to check a specific network.
If network is chosen without specifying a network, the script will return healthy on the first network it finds OK,
i.e. if any one of the networks is OK, the container is considered OK.

How does the above sound?

Personally, I see the container itself as the client, not the network, so I had earlier done a healthcheck based on the client rather than the network. But I am glad that you guys went into the details and we now have so many good suggestions from @hoppke, @Paraphraser & @zyclonite.

Waiting to hear from you guys on what you think should be done.

@Paraphraser
Contributor

I have been doing some more research. Please study this:

$ docker exec zerotier zerotier-cli status ; DPS
200 info 0982546292 1.14.0 OFFLINE

$ ip r | grep "dev zt"
10.244.0.0/16 dev ztr2qsmswx proto kernel scope link src 10.244.233.146 
192.168.0.0/23 via 10.244.124.118 dev ztr2qsmswx proto static metric 5000 

$ DPS
NAMES      CREATED              STATUS                        SIZE
nodered    About a minute ago   Up About a minute (healthy)   0B (virtual 636MB)
zerotier   About a minute ago   Up About a minute (healthy)   0B (virtual 15.4MB)

So, what we have is the client reporting itself OFFLINE, yet the expected routes are present, and the container is reporting itself to be "healthy".

How I contrived this was by adding some net-filter rules to block port 9993.

The situation is actually a bit weird. I've blocked UDP port 9993 as both a source port and destination port, at both the pre-routing and post-routing hooks, and for both IP and the bridge. I know the filters are working because the associated counters are incrementing. Yet a tcpdump shows the host (a Proxmox-ve Debian guest) is still both receiving and transmitting some packets on UDP port 9993.

I don't understand how this is happening and, for now, I'm assuming it is some artifact of the filter rules that Docker (and Zerotier) add to the tables, and those are getting in first.

Nevertheless, this has all turned out to be beneficial, precisely because it demonstrates that it is possible for the container to be both OFFLINE yet still capable of both adding routes to the routing table and forwarding packets across the ZeroTier Cloud.

With that in mind:

  1. Revised Dockerfile:

    FROM zyclonite/zerotier:latest
    
    RUN apk --no-cache add jq
    
    COPY --chmod=755 healthcheck.sh /usr/bin/
    
    HEALTHCHECK \
       --start-period=30s \
       --interval=20s \
       --timeout=10s \
       --retries=1 \
       CMD /usr/bin/healthcheck.sh || exit 1
    
    # EOF
  2. Revised health-check script:

    #!/bin/sh
    
    # determine whether the container is online (returns false or true)
    status=$(zerotier-cli status -j | jq -r '.online')
    
    # set expected number of routes to "at least one"
    ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH="${ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH:-1}"
    
    # count the number of zerotier-associated routes in the routing table
    routes=$(ip r | grep -c "dev zt")
    
    # return "unhealthy" if not online, or if there are fewer routes than expected
    [ "$status" = "false" -o $routes -lt $ZEROTIER_MINUMUM_ROUTES_FOR_HEALTH ] && exit 1
    
    # formal catch-all indicating "healthy"
    exit 0

Tests:

  1. No containers running, put net-filter blocks in place, start the stack:

    $ sudo ../set-nft-rules ; date ; docker-compose up -d ; date
    Fri Aug 23 01:49:17 PM AEST 2024
    [+] Running 3/3
     ✔ Network iotstack_default  Created                                                                                                                       0.1s 
     ✘ Container zerotier        Error                                                                                                                        31.3s 
     ✔ Container nodered         Created                                                                                                                       0.1s 
    dependency failed to start: container zerotier is unhealthy
    Fri Aug 23 01:49:48 PM AEST 2024
    
    $ DPS
    NAMES      CREATED          STATUS                      SIZE
    nodered    35 seconds ago   Created                     0B (virtual 636MB)
    zerotier   35 seconds ago   Up 35 seconds (unhealthy)   0B (virtual 15.4MB)
    
  2. Clear the net-filter blocks:

    $ sudo ../clear-nft-rules ; date
    Fri Aug 23 01:50:34 PM AEST 2024
    

    Some time later:

    Fri Aug 23 01:50:48 PM AEST 2024
    NAMES      CREATED              STATUS                        SIZE
    nodered    About a minute ago   Created                       0B (virtual 636MB)
    zerotier   About a minute ago   Up About a minute (healthy)   0B (virtual 15.4MB)
    

    So, ZeroTier goes healthy but nothing happens to Node-RED. Much later:

    Fri Aug 23 01:53:23 PM AEST 2024
    NAMES      CREATED         STATUS                   SIZE
    nodered    4 minutes ago   Created                  0B (virtual 636MB)
    zerotier   4 minutes ago   Up 4 minutes (healthy)   0B (virtual 15.4MB)
    

    The same situation. I didn't wait any longer than that but I suspect this means a container which has a dependency on ZeroTier will never come up. In the case of Node-RED (non-host mode) its port (1880) never gets mapped and any attempt to open a shell into the container gets:

    Error response from daemon: container ... is not running
    

    Thus I think this is just as it appears. The container is ready to be launched but never gets the necessary signal. It might be worth raising this with the good folks at docker-compose.

  3. What happens if I just "up" the stack? The answer is Node-RED gets the needed kick in the pants:

    NAMES      CREATED         STATUS                                     SIZE
    nodered    9 minutes ago   Up Less than a second (health: starting)   0B (virtual 636MB)
    zerotier   9 minutes ago   Up 9 minutes (healthy)                     0B (virtual 15.4MB)
    
  4. Assuming the containers are running, what happens if I cause ZeroTier to go unhealthy by putting the net-filter blocks in place?

    $ sudo ../set-nft-rules ; date
    Fri Aug 23 02:01:24 PM AEST 2024
    

    It takes a fair while to react but, eventually:

    Fri Aug 23 02:06:14 PM AEST 2024
    NAMES      CREATED          STATUS                      SIZE
    nodered    16 minutes ago   Up 7 minutes (healthy)      0B (virtual 636MB)
    zerotier   16 minutes ago   Up 16 minutes (unhealthy)   0B (virtual 15.4MB)
    

    It's much faster to go healthy once the block is removed (see test 2 above).

Which brings me to:

i did often run into a situation where zerotier-cli status returns online but no network interface was created... this might be worth checking as well.

Well, you can't get routes without the associated network interface so I think checking the expected number of routes probably covers the "no interface" condition.

But I do agree with the inverse of your proposal because I've just demonstrated "offline" but still forwarding packets, hence the revisions above.

We can also give the user the option to check the client (client status) or the network (network status),
and further whether the user wants to check a specific network. If network is chosen without specifying a network, the script will return healthy on the first network it finds OK, i.e. if any one of the networks is OK, the container is considered OK.

Well, save for being able to nominate a specific network, that's what this revised proposal is doing.

Ultimately, you can only build so much functionality into a "health check". That's not because you can't write complicated code into the script - you can. It's because the script only returns a binary result (healthy or unhealthy).

Just thinking "out loud", if I had a problem such as you describe where a ZeroTier client joined multiple networks, and I wanted different "reactions" depending on which network went down, I'd move the goal-posts a bit. As well as having the script return healthy or unhealthy, I'd add the Mosquitto clients to the container and publish detailed status. Then, the dependent container (the "Node-RED" in my example) could subscribe to the topic(s) and take appropriate action.

@hoppke

hoppke commented Aug 23, 2024

@Paraphraser

Just thinking "out loud", if I had a problem such as you describe where a ZeroTier client joined multiple networks, and I wanted different "reactions" depending on which network went down, I'd move the goal-posts a bit. As well as having the script return healthy or unhealthy, I'd add the Mosquitto clients to the container and publish detailed status. Then, the dependent container (the "Node-RED" in my example) could subscribe to the topic(s) and take appropriate action.

Out of curiosity, would it be possible to configure node-red to check connectivity to certain "landmarks" (hosts/services) on the meshed network and automate directly around that?

E.g. I've a box somewhere that monitors the local ISP's reliability not by fetching WAN/LAN statuses from the router/modem (even though it offers snmp), but by periodically pinging the gateway and a known external "evergreen" (like 8.8.8.8).

It should be possible for a node-red appliance to pick up "host/network unreachable" events in some generic way without getting vendor locked-in by ZT/wireguard/citrix/...

@Paraphraser
Contributor

@hoppke

I think we might be heading away from the subject-matter (a health-check script for ZeroTier) but I'll try to answer.

I'm not immediately sure what you mean. If you mean:

Can you run pings from a Node-RED flow and "do things" depending on whether the ping is successful or not?

Yes. Install one of these:

  • node-red-configurable-ping
  • node-red-node-ping

The former needs an external trigger (eg "inject" node) while the latter can self-time (eg every 60 seconds). In both cases, they return "false" in the payload if the ping fails. You feed the output to a "switch" or "filter" node and then handle the situation as you see fit.

If you meant something like:

Can Node-RED somehow manipulate the routing table to sense and fail around network problems by taking alternate routes?

No. Or, more precisely, even if it was possible, you shouldn't. That's what routing protocols are for. There's no need to reinvent that particular wheel.

If you meant something like:

Aside from something like pings, is it possible to have a flow receive external events like routing-table changes?

Yes. Indirectly. Try running this on the host:

$ ip monitor route

Then, from another terminal window on the same host, do something that causes a routing table change, such as downing and upping the ZeroTier container. You'll get a bunch of status messages.

You can forward those status changes as MQTT payloads like this:

ip monitor route | while read STATUS ; do
   mosquitto_pub -h mqtt.your.domain.com -t "route/$HOSTNAME/status" -m "$STATUS"
done

In theory the ip command will return JSON but I haven't figured out how to make it work so, for now, it's just a textual payload that will have to be post-processed. But that basic approach will get the status messages as far as a Node-RED flow and then you can do what you want.

You just write a script around that which you fire up at boot time with an @reboot directive, and then you get almost real-time updates when your routing table changes.
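
i.e. something as small as this in the crontab (the script name is hypothetical):

@reboot /usr/local/bin/route-monitor.sh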

Does that help?


If you want to go further, please join the IOTstack Discord and ask me there (@paraphraser on Discord too).

@zyclonite
Owner

i like the discussion a lot, it highlights all the different aspects

just some thoughts on the main topic of having a healthcheck in the first place...
what would be the resulting action for the hosting system if the container gets unhealthy?
ex1: a simple restart might not solve the problem if it's a configuration issue or the node might not even be allowed in the network
ex2: zerotier might recover on its own if it was simply a network hiccup or similar, so knowing about that unhealthy state is nice but might not add a lot of value - restarting would be the wrong way of handling it (only creates more stress on the system)
ex3: a restart might help if the host had temporary issues with permissions or resources that the zerotier process could not recover from
ex4: just monitoring my zt network, i would aim for using the apis or hosting my own controller to make sure all critical nodes are online and reachable - or have some artificial tests like pinging the endpoints

what are we really trying to solve?
do we risk creating more issues or complexity with that solution?
an individual healthcheck could as well be done with mounting the health script and using the cli parameters...
adding jq will add 1.5 MB to the container image just as a side note...

@hoppke

hoppke commented Aug 23, 2024

@Paraphraser

I think we might be heading away from the subject-matter (a health-check script for ZeroTier) but I'll try to answer.
I'm not immediately sure what you mean. If you mean:

I meant to take the proverbial step back. ZT can tell you that it thinks it's "online". Is that sufficient to say things are "good"? If there's more involved (e.g. firewall rules, DNS resolution, auth etc.), then maybe ZT can never deliver all the info needed to support automated decisions, and a different source would be better.

I can check if ZT thinks it's up, but I have ZT in place for a reason - to set up connections across the tunnel. So maybe a simple ping test across the tunnel can tell more than ZT "status" ever could?
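
E.g. something along these lines, where the landmark address is just a placeholder for a host on the ZT network you actually care about:

#!/bin/sh
# sketch: reachability of a known peer across the tunnel as the health signal
ping -c 1 -W 2 10.244.233.1 >/dev/null 2>&1 || exit 1
exit 0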

@Paraphraser
Contributor

@zyclonite Well, my view remains unchanged. A health-check is a nice-to-have and, ideally, all containers should report their health. But I've never seen it as more than a human-readable value which turns up in docker ps, where its sole purpose is to give a human user somewhere to start.

The expanded depends_on syntax actually has me slightly bemused. The use-case for short syntax is fairly obvious because it solves start-order problems such as Zigbee2MQTT going into a restart loop if Mosquitto isn't running. That said, I've always thought it would be better if every dependent container had sufficient smarts to recognise that its dependencies not being there was a possibility and that, rather than crash-and-burn, it'd be more user-friendly to just bung out a "waiting on X" and keep retrying.

Even the short-syntax form isn't all that useful. It has always seemed to me that, if you say "Zigbee2MQTT depends on Mosquitto" then taking Mosquitto down should have the side-effect of also taking Zigbee2MQTT down first. That takedown order does happen if you down the whole stack but not if you just down a dependency.

The notion of a dependent container taking some programmatic action in response to whatever health state its dependency happens to be in seems to me to be just plain weird. I also reckon my testing in the earlier parts of this PR shows the (docker-compose) implementation is a bit half-baked, at least in its current form. Basically, in order to be useful, a dependent container needs to know a lot more about the internal state of its dependency than a binary health indicator will ever be able to communicate.

I also agree with "zerotier might recover on its own". Until I started folding, spindling and otherwise mutilating ZeroTier's environment while testing for the purposes of this PR, I've never yet seen it go wrong. Plus, as soon as I unfolded, de-spindled or healed whatever mutilation I had put in place, ZeroTier always seemed to recover all on its own. No restarts needed.

So +1 to the idea of reporting the health status but -1 to relying on that to automate anything.

ex1: concur but "unhealthy" will probably send you to the log where (hopefully) you'll find a clue.

ex2: definitely concur.

ex3: I see this the same as ex1.

ex4: abso-fraggin-lutely!

do we risk creating more issues or complexity with that solution?

You could make that argument about any container with a health-check script. That someone might misuse a feature is no argument against providing the feature, so long as it has at least one beneficial use.

an individual healthcheck could as well be done with mounting the health script and using the cli parameters...

Not sure what you're getting at. If you mean an external script could do all this (eg fire off an MQTT message if status goes OFFLINE, or if expected routes go too low), I agree. An external script also has the advantage of sensing "container not running". That last test is something I already do on the remote Pi in my network, including keeping track of re-tries and replacing docker-compose.yml with a "known good" implementation on the off-chance an "unfortunate edit" leads to the remote Zt service going down.

adding jq will add 1.5 MB to the container image just as a side note...

I'd see that as the price of greater robustness but if you'd prefer the awk approach then "your repo, your rules".


And speaking of extra packages, adding tzdata means that setting TZ as an environment variable propagates into the container (ie the same as is already done in Dockerfile.router). In IOTstack, we follow the convention:

- TZ=${TZ:-Etc/UTC}

and initialise .env as:

$ echo "TZ=$(cat /etc/timezone)" >>.env

so all containers derive from that. Heckofalot better than hacks like mapping /etc/localtime. And it does mean that log entries are all in local time, which I see as beneficial.


@hoppke to be honest, I rarely bother with middlemen. I always want to talk to the dude in charge.

In terms of this PR and your question, I see a health-check status as a middleman. Or merely hearsay evidence if you want to put it in lawyer-speak.

Suppose you have a ZT network of some arbitrarily-complex design involving multiple sites, multiple ZT network IDs, multiple ZT clients, and so on. Ultimately, the game is "can A talk to B?" but you can usually find proxies where A being able to talk to C implies it must also be able to talk to B, so you can reduce your monitoring complexity a fair bit.

In general I find that something as simple as a cron job, firing every 5 minutes, using mosquitto_pub to post a "heartbeat" kind of message, with a Node-RED flow subscribing to those and generating a notification of some kind (usually just an email) when an expected heartbeat goes missing is more than sufficient.
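
In concrete terms, the heartbeat end can be as small as a crontab entry like this (the broker host and topic are placeholders; the % signs are escaped because cron treats an unescaped % as a line break):

*/5 * * * * mosquitto_pub -h mqtt.example.com -t "heartbeat/$(hostname)" -m "$(date +\%s)"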

You'll note the pattern here: sure, automate problem discovery but leave resolution to the human. There's nothing worse than chasing your tail because some automated process has a bee in its bonnet and is fighting your every move.

Although pings have their place, they are pretty low-level in terms of the IP stack. I have often found that devices will respond to pings even though the higher-levels are frozen. A cronjob triggering an MQTT message and that message arriving tells you a fair bit more, with a much lower false-positive rate.

But that's just my 2¢. Your network, your rules.

@gb-123-git
Author

@zyclonite

just some thoughts on the main topic of having a healthcheck in the first place... what would be the resulting action for the hosting system if the container gets unhealthy?
ex1: a simple restart might not solve the problem if it's a configuration issue or the node might not even be allowed in the network

Agreed that restart cannot solve the problem.

ex2: zerotier might recover on its own if it was simply a network hiccup or similar, so knowing about that unhealthy state is nice but might not add a lot of value - restarting would be the wrong way of handling it (only creates more stress on the system)

Agreed

ex3: a restart might help if the host had temporary issues with permissions or resources that the zerotier process could not recover from

Agreed

ex4: just monitoring my zt network, i would aim for using the apis or hosting my own controller to make sure all critical nodes are online and reachable - or have some artificial tests like pinging the endpoints.
what are we really trying to solve? do we risk creating more issues or complexity with that solution? an individual healthcheck could as well be done with mounting the health script and using the cli parameters... adding jq will add 1.5 MB to the container image just as a side note...

Agreed once more.

My 2 cents on this issue :

Principally, the intention of a health-check is never to solve the problem automatically; the basic intent is to detect the problem.

(Maybe that's why it's health-check instead of health-resolve. ;) I am just kidding; please don't be offended. I just had to write that line because I found it very funny, without any ill intention towards you.)

It is not necessary to restart the container from the health-check itself; you may want to just mark the status for other scripts to take over and solve the problem, or to notify the user. It actually depends on the use-case scenario, so there is no one answer to it, e.g. you may want to call a web-hook when the container goes unhealthy, etc.
Restarting is one way to solve the problem for some software; it may not work for others. Everyone has different use-case scenarios.
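
As an illustration of the "mark the status for other scripts" idea, an external watcher might look roughly like this (the container name and webhook URL are made up):

#!/bin/sh
# sketch: act on the published health state instead of restarting the container
state=$(docker inspect --format '{{.State.Health.Status}}' zerotier 2>/dev/null)
if [ "$state" = "unhealthy" ]; then
    curl -fsS -H "Content-Type: application/json" \
         -d '{"container":"zerotier","state":"unhealthy"}' \
         https://hooks.example.com/zerotier >/dev/null
fi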

Tell you what:

I'll re-do this PR next week and include advanced options for the health-check. The health-check will be disabled by default in the Dockerfile, so everything works normally as before. I'll just include my health-check script in the image (it should not be more than a few KB). Those who want to run health-checks can enable them using runtime variables or docker compose. For the rest, it will be the same as before.
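
For example, a user could switch it on at run time with plain docker run flags, along these lines (just a sketch; the script path assumes the check ships at /usr/bin/healthcheck.sh):

docker run -d --name zerotier --network host \
  --health-cmd="/usr/bin/healthcheck.sh" \
  --health-interval=60s --health-retries=3 --health-timeout=10s \
  --device /dev/net/tun --cap-add NET_ADMIN \
  zyclonite/zerotier:latest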

How does that sound?

gb-123-git marked this pull request as draft August 25, 2024 18:43
@gb-123-git
Author

I have marked it as draft for now.

@gb-123-git
Author

@hoppke Pinging to check is a brilliant idea! But I feel that is what the get <network> status command does (though I am not sure).
