Playbook exits with node unreachable error

Currently, recover-control-plane.yml only works if the broken control plane node is still reachable. When the node is offline, the "Remove etcd data dir" task fails and aborts the playbook run. ignore_errors: true bypasses only task failures, not unreachable hosts, so ignore_unreachable: true would also need to be added.
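A minimal sketch of what the adjusted task could look like (the data dir path and module arguments here are assumptions for illustration; the actual Kubespray task may differ):

```yaml
# Hypothetical sketch of the "Remove etcd data dir" task with both flags set.
# ignore_errors only swallows task failures; ignore_unreachable is also
# needed so an offline host does not abort the whole play.
- name: Remove etcd data dir
  ansible.builtin.file:
    path: /var/lib/etcd   # assumed default; Kubespray parameterizes this
    state: absent
  ignore_errors: true
  ignore_unreachable: true
```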
Clarification in runbook
Also, it is probably obvious to etcd experts, but the runbook, Recovering the control plane, does not mention that the newly added replacement node needs to set the same etcd_member_name variable as the broken one; otherwise, when joining the cluster, the new etcd node gets an "ignored streaming request; ID mismatch" error.
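For example, if the broken member was named etcd2, the replacement host's inventory entry would reuse that name (hostnames and addresses below are placeholders):

```yaml
# Inventory sketch -- host name and IP are placeholders.
all:
  hosts:
    node2-replacement:
      ansible_host: 10.0.0.12
      etcd_member_name: etcd2   # must match the broken member's name
```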
In the runbook, the last section has:

* If your new control plane nodes have new ip addresses you may have to change settings in various places.
It's very vague, and I find it a bit confusing. In fact, I am not sure it is still needed at all. The main control plane role already has a task to update etcd node IPs in the api-server configs, and all the certificates were updated automatically by the existing tasks. After recover-control-plane.yml finished, I didn't have to do anything. Granted, the replacement node I used had the same IP address as the broken one, and I didn't have the opportunity to test the scenario where the replacement node has a new IP.
Environment:
Version of Ansible (ansible --version): 2.12.5
Version of Python (python --version): 3.9.18
Kubespray version (commit) (git rev-parse --short HEAD): 2.22.0