NiFi upgrade doesn't work #238
…Fi version. fixes #238 Signed-off-by: Sönke Liebau <[email protected]>
None of the boxes have been checked.

I'm afraid the boxes are not really checkable at the moment, as we did not implement a generic solution for this. I'm not sure if I wrote those checkboxes back in the day, but I'd say we don't need anything abstract for now, since this only affects NiFi.

No, I'm fine with that, and I'm fine with not checking the boxes.

We can create that as and when needed, I think.
Affected version
0.5.0
Current and expected behavior
Scenario
A NifiCluster with three nodes was deployed with version 1.13.2 and is up and running.
The NifiCluster resource is then changed to version 1.15.0.
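For illustration, the version change amounts to editing the version field in the NifiCluster spec. The exact field names and API group below are assumptions for the sketch; check the CRD shipped with the operator for the authoritative schema:

```yaml
# Hypothetical NifiCluster resource; apiVersion and field names are
# illustrative, not taken from the issue.
apiVersion: nifi.stackable.tech/v1alpha1
kind: NifiCluster
metadata:
  name: simple-nifi
spec:
  version: 1.15.0   # previously 1.13.2; changing this triggers reconciliation
  nodes:
    replicas: 3
```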
Current Behavior
The StatefulSet is updated with the new image and triggers a rolling restart of the NiFi Pods with the new container image set to NiFi 1.15.0.
However, NiFi does not support running a cluster with mixed versions; instead, a full stop and restart with the new version is required.
Reference ticket:
https://issues.apache.org/jira/browse/NIFI-4068?jql=project%20%3D%20NIFI%20AND%20text%20~%20%22rolling%20upgrade%22
Because of this, the new pod never starts successfully and the rolling restart hangs indefinitely, or until the user deletes all pods so that the StatefulSet recreates them all with the same version at once.
Expected Behavior
The operator should notice that a version change is happening and trigger a full restart of NiFi.
This is done when
Possible solution
The operator needs to be able to recognize a version change during reconciliation and then act accordingly to perform a full cluster restart.
Something along the lines of the code suggested by @teozkr might work:
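The referenced snippet is not reproduced in this issue, so the following is only a minimal stand-in sketch of the decision logic: during reconciliation, compare the NiFi version currently recorded on the StatefulSet (e.g. derived from the container image tag) with the version requested in the NifiCluster spec, and fall back to a full cluster restart on any mismatch. All names (`RestartStrategy`, `decide_strategy`) are hypothetical, not the operator's actual API:

```rust
// Hypothetical sketch of version-change detection during reconciliation.
// In the real operator this would feed into how the StatefulSet is
// applied (e.g. scale to zero, wait, then redeploy with the new image).

#[derive(Debug, PartialEq)]
enum RestartStrategy {
    /// Versions match (or nothing is deployed yet): normal rolling update.
    Rolling,
    /// Version change detected: stop the whole cluster, then redeploy.
    FullClusterRestart,
}

/// `deployed_version` is the NiFi version currently running (None if the
/// cluster has not been created yet); `target_version` comes from the
/// NifiCluster spec.
fn decide_strategy(deployed_version: Option<&str>, target_version: &str) -> RestartStrategy {
    match deployed_version {
        Some(v) if v != target_version => RestartStrategy::FullClusterRestart,
        _ => RestartStrategy::Rolling,
    }
}

fn main() {
    // Upgrading 1.13.2 -> 1.15.0 must not roll pod by pod.
    assert_eq!(
        decide_strategy(Some("1.13.2"), "1.15.0"),
        RestartStrategy::FullClusterRestart
    );
    // Same version: keep the normal rolling behaviour.
    assert_eq!(
        decide_strategy(Some("1.15.0"), "1.15.0"),
        RestartStrategy::Rolling
    );
    // Fresh deployment: nothing running yet, rolling is fine.
    assert_eq!(decide_strategy(None, "1.15.0"), RestartStrategy::Rolling);
}
```

The full-restart branch would then scale the StatefulSet to zero replicas, wait for all pods to terminate, and only afterwards apply the new image and scale back up, which sidesteps the mixed-version window entirely.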
Environment
This should be reproducible independently of the K8s environment.