Add command for use in "PostStart" hook that delays until proxy has started #1117
I'm not sure what value a script in the proxy image would have, as you'd probably want something in your application image. Have you considered using the HTTP health-checks? I think you could use something like this as a wrapper in your application container:

```sh
until curl --output /dev/null --silent --head --fail http://127.0.0.1:8090/readiness; do
  printf '.'
  sleep 0.1 # 100 ms
done
# start your app here
```
For the proposed solution, this has to run from the proxy's container. The idea is to use a lifecycle hook to delay startup of the application container until the proxy is ready (see istio/istio#11130 (comment) for more details; this is exactly what Istio does so that the application container doesn't start until the service mesh sidecar is ready). One could sidestep all of this and run a wait from the application container like you suggest, but I'd rather not couple my application to an implementation detail of one of its execution environments (in general, but also in my particular case this runs cross-cloud, so half the time it isn't even communicating with the Cloud SQL Proxy). So basically what I'm asking for is a way to wait until the proxy is healthy from within the proxy container, ideally without …
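For illustration, here is a minimal sketch of that lifecycle-hook mechanism using the Kubernetes Python client (the same style as a snippet later in this thread). Kubernetes starts containers in the order listed and blocks until each container's postStart hook returns, so a hook on the proxy container delays the application container. The instance name, image tag, flag, and port are assumptions based on the v2 health-check setup discussed below, and the alpine image variant is assumed because the default distroless image has no shell:

```python
from kubernetes import client as k8s

# Proxy container whose postStart hook blocks until the readiness endpoint
# answers. The application container below is not started until this hook
# returns. All names, tags, and ports here are illustrative.
proxy = k8s.V1Container(
    name="cloud-sql-proxy",
    image="gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.7.2-alpine",
    args=["--health-check", "my-project:my-region:my-instance"],
    lifecycle=k8s.V1Lifecycle(
        post_start=k8s.V1LifecycleHandler(
            _exec=k8s.V1ExecAction(command=[
                "sh", "-c",
                "until wget -q -O /dev/null http://localhost:9090/readiness; do sleep 1; done",
            ])
        )
    ),
)

app = k8s.V1Container(name="my-app", image="my-app:latest")

# Order matters: the proxy (and its blocking hook) must come first.
pod_spec = k8s.V1PodSpec(containers=[proxy, app])
```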
Ok, I think I understand now. I'm not really sure how good an idea it is to depend on the synchronous order of containers, though; it feels a bit like an implementation detail. It looks like this could probably be an HTTP endpoint, so maybe something like …
I agree that it's an implementation detail. But it has to be dealt with somewhere. Since the thing introducing the implementation detail is the CloudSQL Proxy and the sidecar pattern (we wouldn't have to wait for something to start up if we were connecting to the DB directly), to me it seems most natural to handle that implementation detail via the proxy as well, instead of the application. But that's just my opinion.
Re-reading this I realized that by … you were talking about a different implementation detail than in my response. I agree with you: that's an implementation detail of Kubernetes. But seeing that Istio uses it for exactly the same purpose, and that there is currently no better solution (at least that I know of), it seems safe to rely on it until a more official pattern comes about.
We're currently working to get v2 shipped. Once that's done, we'll revisit this and adjust the priority as needed.
Another option would be to use https://github.com/karlkfi/kubexit to support dependency ordering.
For the record, …
We've recently taken a cue from Istio and added a … If we wanted to avoid a solution that required …
This would be really helpful for running tools that are CloudSQL-proxy-unaware (for example, …). For the case of …: is there even an existing HTTP endpoint exposed by the CloudSQL proxy today that would be usable for this purpose? The README section on the Localhost Admin Server doesn't mention one.
Do you think our HTTP healthchecks would fit the bill? We have startup, liveness, and readiness. In particular the startup probe might be the right one to try.
Ah, I had missed the docs for them! Yes, it does seem like these would probably work (at least if …).
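For reference, those endpoints could be wired up as a Kubernetes startup probe with the Python client; the port here is an assumption (it is configurable via `--http-port`):

```python
from kubernetes import client as k8s

# Hypothetical startup probe against the proxy's /startup endpoint.
startup_probe = k8s.V1Probe(
    http_get=k8s.V1HTTPGetAction(path="/startup", port=9090),
    period_seconds=1,      # poll every second...
    failure_threshold=30,  # ...for up to ~30 seconds
)
```

Note that a startup probe only gates the proxy container's own readiness; before native sidecars it does not block other containers from starting, which is why the lifecycle-hook approach discussed above is still needed.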
Istio images contain a `pilot-agent` binary which has specific commands, including a `wait` command: internally this just polls the readiness endpoint of the proxy (source) up until a (configurable) timeout, and on timeout it calls … This seems to work quite nicely…
That's a nice idea and pretty cheap to implement as well.
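For a sense of how cheap: here is a minimal sketch of such an Istio-style wait, assuming the proxy serves its startup endpoint on localhost:9090 (address, port, and timeout are all assumptions):

```python
import sys
import time
import urllib.error
import urllib.request

def wait_for_proxy(url: str = "http://localhost:9090/startup",
                   timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll the health-check endpoint until it returns 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # endpoint not accepting connections yet; keep polling
        time.sleep(interval)
    return False

if __name__ == "__main__":
    # Mirror pilot-agent's behavior: exit non-zero on timeout so callers can react.
    sys.exit(0 if wait_for_proxy() else 1)
```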
👋 As a heavy cloudsql-proxy user (hundreds of proxied workloads in a high-churn environment) I thought I'd share my piece. We also use Istio, which as mentioned has the … It's worth remembering that with Kubernetes 1.28, sidecar containers are being introduced, which "solves" this problem officially.
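For context, the Kubernetes 1.28+ native-sidecar pattern looks roughly like this with the Python client: the proxy is declared as an init container with restart policy "Always", so the kubelet starts it and waits for its startup probe before launching the regular containers, then keeps it running alongside them. Image tag, instance name, and port are placeholders, and a recent `kubernetes` client with sidecar support is assumed:

```python
from kubernetes import client as k8s

# Proxy as a native sidecar: an init container with restart_policy="Always".
# Regular containers start only after its startup probe succeeds.
proxy = k8s.V1Container(
    name="cloud-sql-proxy",
    image="gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0",  # placeholder tag
    args=["--health-check", "my-project:my-region:my-instance"],
    restart_policy="Always",  # this is what marks the init container as a sidecar
    startup_probe=k8s.V1Probe(
        http_get=k8s.V1HTTPGetAction(path="/startup", port=9090),
        period_seconds=1,
        failure_threshold=30,
    ),
)

pod_spec = k8s.V1PodSpec(
    init_containers=[proxy],
    containers=[k8s.V1Container(name="my-app", image="my-app:latest")],
)
```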
Thanks @Stono! Even with the support for sidecars coming, I'd guess that a wait command will still be useful in other situations. We have a bunch of stuff higher on the priority list, but we'll try to squeeze this in soon.
For reference, this is a good overview of the post start hook approach: https://medium.com/@marko.luksa/delaying-application-start-until-sidecar-is-ready-2ec2d21a7b74
I have a related problem: I'd like to use Docker Compose's health checks to bring up the proxy before starting dependent services, but I can't seem to ping the endpoints from within the container image. It would be awesome if there was a health check defined in the cloud-sql-proxy Dockerfile itself (see https://docs.docker.com/engine/reference/builder/#healthcheck).
@GergelyKalmar We'd be happy to investigate and support that (assuming no major technical issues). Would you mind opening a separate feature request for that work?
I was able to implement the … as follows:

```python
sidecar = k8s.V1Container(
    name="cloud-sql-proxy",
    image="gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.7.2-alpine",
    # ....
    lifecycle=k8s.V1Lifecycle(
        post_start=k8s.V1LifecycleHandler(
            _exec=k8s.V1ExecAction(command=[
                "sh",
                "-c",
                # Poll the /startup endpoint until it returns 200 OK.
                f"""until [ "$(wget --server-response 'http://0.0.0.0:{CLOUD_SQL_HTTP_PORT}/startup' -O - 2>&1 | grep -c 'HTTP/1.1 200 OK')" -eq 1 ]; do
echo "Waiting for Cloud SQL Proxy to be ready";
sleep 10;
done""",
            ])
        )
    ),
)
```
To help ensure the Proxy is up and ready, this commit adds a wait command with an optional `--max` flag to set the maximum time to wait. By default when invoking this command:

```
./cloud-sql-proxy wait
```

the Proxy will wait up to the maximum time for the /startup endpoint to respond. This command requires that the Proxy be started in another process with the HTTP health check enabled. If an alternate health check port or address is used, as in:

```
./cloud-sql-proxy <INSTANCE_CONNECTION_NAME> --http-address 0.0.0.0 \
    --http-port 9191
```

then the wait command must also be told to use the same custom values:

```
./cloud-sql-proxy wait --http-address 0.0.0.0 \
    --http-port 9191
```

By default the wait command will wait 30 seconds. To alter this value, use:

```
./cloud-sql-proxy wait --max 10s
```

Fixes #1117
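With that command available, the postStart hook sketched earlier can drop its shell dependency entirely. A sketch, assuming the image's binary lives at `/cloud-sql-proxy` (tag and instance name are placeholders) and the proxy was started with the HTTP health check enabled:

```python
from kubernetes import client as k8s

# postStart hook that runs the proxy's own `wait` subcommand, so no shell,
# wget, or curl is needed in the image. Paths and tags are assumptions.
proxy = k8s.V1Container(
    name="cloud-sql-proxy",
    image="gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0",  # placeholder tag
    args=["--health-check", "my-project:my-region:my-instance"],
    lifecycle=k8s.V1Lifecycle(
        post_start=k8s.V1LifecycleHandler(
            _exec=k8s.V1ExecAction(
                command=["/cloud-sql-proxy", "wait", "--max", "30s"]
            )
        )
    ),
)
```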
It is a well-known issue that there is no way to sequence container startups in Kubernetes (e.g. kubernetes/kubernetes#65502). As far as I know, this is the best solution we currently have: istio/istio#11130 (comment)

This would be made a lot better if the images provided some `wait-until-proxy-is-up-and-running.sh` script (especially since the default image lacks `sleep`, `curl`, etc.) so that users can easily force Kubernetes to wait until the proxy is up before it starts the next container. It could be as simple as the proxy creating a file when it finishes starting up.
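As a minimal sketch of that sentinel-file idea (purely hypothetical: the proxy does not create such a file today, and the path and timeout here are made up):

```python
import os
import sys
import time

# Wait for a file the proxy would hypothetically touch once it is ready.
def wait_for_sentinel(path="/cloudsql/.proxy-ready", timeout=30.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

if __name__ == "__main__":
    # Non-zero exit on timeout so a postStart hook (or wrapper) can fail loudly.
    sys.exit(0 if wait_for_sentinel() else 1)
```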