Restart on every update #71

Closed
bcroq opened this issue May 28, 2021 · 7 comments

@bcroq
Contributor

bcroq commented May 28, 2021

Is there a reason why the pods are re-created on every helm upgrade?

I know that the date in the date/deploy-date annotation is what makes Kubernetes re-create the pods, but why did you choose to force new pods on each update?
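For context, this is roughly the mechanism in question: a timestamp annotation rendered into the pod template on every `helm upgrade`, which changes the pod template and makes Kubernetes roll the pods. A minimal sketch, not the chart's actual template; the annotation key comes from this thread, the template function is an assumption.

```yaml
# deployment.yaml -- illustrative excerpt only, not the chart's actual template
spec:
  template:
    metadata:
      annotations:
        # Re-rendered on every `helm upgrade`, so the pod template changes
        # and Kubernetes re-creates the pods even when no config changed.
        date/deploy-date: {{ now | date "2006-01-02T15:04:05Z" | quote }}
```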

@bokysan
Owner

bokysan commented May 29, 2021

Because otherwise postfix won't get reloaded if any configs are changed. I expect many more issues from people changing the config and not seeing it applied than from people knowing that they need to send a SIGHUP to all postfix and opendkim instances.

One option would be to send a SIGHUP to all running instances from a post-update job, but that would require quite a bit more coding. If you're willing to submit a patch, pull requests are welcome.
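A very rough sketch of that post-update idea, assuming a hypothetical `app=postfix` pod label and a pre-existing ServiceAccount with `pods/exec` permissions (none of this exists in the chart):

```yaml
# post-upgrade-reload.yaml -- hypothetical Helm hook Job, not part of the chart
apiVersion: batch/v1
kind: Job
metadata:
  name: postfix-reload
  annotations:
    "helm.sh/hook": post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: postfix-reloader   # assumed to allow pods/exec
      restartPolicy: Never
      containers:
        - name: reload
          image: bitnami/kubectl
          command:
            - /bin/sh
            - -c
            - |
              # Ask every running relay pod to reload instead of recreating it.
              for pod in $(kubectl get pods -l app=postfix -o name); do
                kubectl exec "$pod" -- postfix reload
              done
```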

@bcroq
Contributor Author

bcroq commented May 31, 2021

Could there be a value that would disable this restart once you know your configuration is stable?

@bokysan
Owner

bokysan commented May 31, 2021

Sure, there could be an option in values.yaml to disable injecting the date/deploy-date annotation, but that would be at your own risk and I wouldn't be able to support it.

May I ask what the use case is, as people usually don't need to redeploy the chart very often?

@bcroq
Contributor Author

bcroq commented May 31, 2021

This chart is used as a dependency in an umbrella chart, so each time our application is upgraded its mail relay is restarted and is unavailable for a few seconds.

Now that the mail relay configuration is done, it would be great to keep it running when our application is upgraded.

@bokysan
Owner

bokysan commented May 31, 2021

I will look into it, but running at least two instances should resolve your issue -- Kubernetes will do a rolling deployment and your mail relay will have 100% uptime.
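For anyone landing here later: bumping the replica count is just a values override. `replicaCount` is the conventional key name, but it is an assumption here -- check the chart's own values.yaml before relying on it.

```yaml
# values.yaml override -- key name is an assumption
replicaCount: 2
```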

@bcroq
Contributor Author

bcroq commented Jun 7, 2021

As proposed, I currently deploy with 2 replicas; rolling deployments make sure at least one pod stays running.

I see you added recreateOnRedeploy in 3.3.0. I will test it as soon as possible.

Thanks!
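Presumably the new flag is a plain values toggle, something along these lines (the option name comes from the comment above; its placement in values.yaml and its default are assumptions):

```yaml
# values.yaml -- assumed usage of the option added in 3.3.0
recreateOnRedeploy: false  # omit the date/deploy-date annotation so pods survive `helm upgrade`
```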

@bcroq
Contributor Author

bcroq commented Jun 11, 2021

Tested and works as expected.

@bcroq bcroq closed this as completed Jun 11, 2021