Feature flags need quality of life improvements #9677
Comments
... that considers the local node as if it was reset.

[Why]

When a node joins a cluster, we check its compatibility with the cluster, reset the node, copy the feature flags states from the remote cluster and add that node to the cluster.

However, the compatibility check is performed with the current feature flags states, even though they are about to be reset. Therefore, a node with an enabled feature flag that is unsupported by the cluster will refuse to join. This is incorrect because, after the reset and the states copy, it could have joined the cluster just fine.

[How]

We introduce a new variant of `check_node_compatibility/2` that takes an argument indicating whether the local node should be considered a virgin node (i.e. as if it had just been reset). This way, the joining node will always be able to join, regardless of its initial feature flags states, as long as it doesn't require a feature flag that is unsupported by the cluster.

This also removes the need to use the `$RABBITMQ_FEATURE_FLAGS` environment variable to force a new node to leave stable feature flags disabled so that it can join a cluster running an older version.

References #9677.
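To make the behavioral change concrete, here is a minimal sketch of the compatibility check described above. This is an illustrative Python model, not RabbitMQ's actual Erlang code; the function name mirrors `check_node_compatibility/2` from the commit message, and the `local_node_as_virgin` flag and set-based signature are hypothetical stand-ins:

```python
# Illustrative model of the join-time compatibility check. All names and the
# set-based representation of feature flag states are assumptions for the
# sake of the sketch, not RabbitMQ's real API.

def check_node_compatibility(local_enabled, cluster_supported,
                             local_required, local_node_as_virgin=False):
    """Return True if the local node may join the cluster.

    local_enabled:     feature flags currently enabled on the local node
    cluster_supported: feature flags the remote cluster supports
    local_required:    feature flags the local node cannot run without
    """
    if local_node_as_virgin:
        # After a reset, the local node's states are copied from the
        # cluster, so its currently-enabled flags are irrelevant; only
        # hard requirements matter.
        return local_required <= cluster_supported
    # Old behavior: every locally enabled flag must also be supported
    # by the cluster, which wrongly rejects nodes about to be reset.
    return (local_enabled <= cluster_supported
            and local_required <= cluster_supported)

# A node with a hypothetical "stream_queue" flag enabled, joining an
# older cluster that does not support it:
old_check = check_node_compatibility({"stream_queue"}, {"quorum_queue"}, set())
new_check = check_node_compatibility({"stream_queue"}, {"quorum_queue"}, set(),
                                     local_node_as_virgin=True)
# old_check is False (join refused); new_check is True (join allowed)
```

The point of the new variant is visible in the last two calls: the same node is rejected under the old check but accepted once it is treated as a virgin node, since its enabled flags will be overwritten by the cluster's states anyway.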
I'd like to add my 2 cents. There's an error when using 1 replica with the RabbitMQ Cluster Kubernetes Operator:
All RabbitMQ nodes in a cluster need to run before a feature flag can be enabled. Could you please expand on your use case?
"All nodes" in my scenario is a single node (as defined in the YAML: replicas: 1), so why is it expecting more?
@wast please start a separate GitHub Discussion; we will not let well-defined issues be turned into open-ended discussions and troubleshooting sessions.
Most likely because there were more nodes in the cluster at some point and the existing node still has knowledge of its prior peers. The Cluster Operator does not support shrinking the cluster, at least not in all cases, IIRC. There is a certain workaround, but in general, shrinking the member count should not be considered a supported operation. This is a topic for a separate discussion; this issue has a well-defined and specific scope.
This issue was "converted" to a GitHub project:
Why
Since the introduction of the first required feature flags, upgrading has become more painful for users who did not pay attention to the feature flags states. Things like:
There is room for improvement in the current subsystem and I would like to follow several routes:
How
Here is a list of improvements that I plan to make:
- Improve `join_cluster` to take into account the fact that the node will be reset. There should be no need to mess with `$RABBITMQ_FEATURE_FLAGS`, because the joining node's feature flags states will be aligned with the remote cluster anyway. See:
- When a clustered node is upgraded to a version that requires some feature flags, it should be possible to enable them remotely in the cluster and then proceed with the start of the local node.
- When a node is upgraded, users could configure RabbitMQ to automatically enable all stable feature flags as soon as possible. This could be an opt-in or opt-out behavior. The `preinst` script can compare the list of feature flags from the installed version and the new one.
- Extend `$RABBITMQ_FEATURE_FLAGS` to allow enabling feature flags in addition to the default ones using `+my_feature_flag`.
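The last item, the additive `+my_feature_flag` syntax, can be sketched with a small model. This is hypothetical Python, not the real Erlang implementation; the comma-separated format and the exact precedence rules are assumptions for illustration:

```python
# Hypothetical model of the proposed "+flag" syntax for
# $RABBITMQ_FEATURE_FLAGS. Today the variable replaces the default set of
# flags to enable; a leading "+" would instead add a flag on top of the
# defaults. The comma-separated format is an assumption.

def effective_feature_flags(env_value, default_flags):
    """Compute the set of feature flags to enable at first boot."""
    if env_value is None or env_value.strip() == "":
        return set(default_flags)
    entries = [e.strip() for e in env_value.split(",") if e.strip()]
    additions = {e[1:] for e in entries if e.startswith("+")}
    explicit = {e for e in entries if not e.startswith("+")}
    if explicit:
        # Plain names keep the current "exact list" semantics.
        return explicit | additions
    # Only "+" entries: extend the defaults instead of replacing them.
    return set(default_flags) | additions

defaults = {"quorum_queue", "stream_queue"}
# No variable set: defaults apply unchanged.
assert effective_feature_flags(None, defaults) == defaults
# Plain name: current behavior, the list replaces the defaults.
assert effective_feature_flags("quorum_queue", defaults) == {"quorum_queue"}
# "+" prefix: proposed behavior, the flag is added to the defaults.
assert effective_feature_flags("+my_feature_flag", defaults) == \
    defaults | {"my_feature_flag"}
```

The design point is backward compatibility: entries without a `+` keep today's replace-the-defaults semantics, so existing deployments that pin an exact flag list are unaffected.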