Add port spec in probes #8288
/area API
`livenessProbe`: As the above is the default liveness-probe setting, I am looking to have exactly the same thing in Knative. Is that possible? @JRBANCEL
How is this supposed to work with multi-container if only one container can specify a port? Without being able to specify a port in the probe configuration, it seems the only current option is to perform no readiness/liveness probes on containers that are not directly serving traffic. Am I missing some other way to define this?
The container serving traffic could probe the other containers.
We rewrite the port for network probes of the user container regardless, to test the full network path (and ensure the queue-proxy is up). cc @tcnghia Let's open something else for multi-container.
This issue is stale because it has been open for 90 days with no activity.
/reopen I'm going to necro this based on some feedback from the Convention Service team at Tanzu. It's not clear why the probe `port` field is disallowed. Today you have to write:
And the following produces an error:
Furthermore, it's not clear why a user-container might not want to open two separate HTTP ports -- one for healthcheck probing, and one for the data path. That might look like:
In this way, only the queue-proxy can probe the healthcheck port. Feel free to tell me if my reasoning is good or bad. Strangely, the Knative API spec doesn't mention the probe `port` restriction at all.
@evankanderson: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
CC @scothis, because he'd complained about this before as well.
/remove-lifecycle stale
Allow probes to explicitly set the port to the containerPort (#8288) (#12270)
* Address julz feedback
Co-authored-by: Evan Anderson <[email protected]>
I have a special use case related to this: I need to run health checks on a different port than the serving port, but per the validation added by #12225, probes are only allowed on the same port as serving.
/reopen
The notion here seems to be that you don't want to expose (for example) the health or liveness probes to external application consumers, but you do want to be able to have that communication channel between the infrastructure and the application. The way you would express that would look something like (showing only the container spec):

```yaml
spec:
  containers:
    - image: foo
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /health
          port: 5000
```

With the intent that port 5000 is reachable for probes, while only port 8080 receives ingress traffic.
If the user specifies a probe on a separate port, then I think a tcp probe on the main port (via the QP) should still be defaulted. For the portless option, we rewrite it to test the QP path.
What if it is a pod where the user has already specified a probe? I think the single-probe-per-container limitation means we couldn't also default one; do you mean we should relax our validation here?
Yes, that's exactly what I mean. I think the single probe thing is from our validation, not K8s. Worth confirming, and probably relaxing. (on a completely unrelated note, I'm very excited for the GRPC probing stuff 🤩 )
What would happen if the probes are in an application management context which is different from the main app (serving) context, and each context is exposed on a different port? Besides the probes, the management context has other app settings, so the port should be available to operators to change those management settings. I would need at least two ports.
Right, that is why I think we should relax things (assuming K8s allows it, which seems likely given it is a list).
This means that we can't add a second, defaulted probe alongside the user's. However, we could have the queue-proxy perform its own TCP check of the main port without it appearing in the pod spec.
I think I was (incorrectly) remembering it as a list field (like ports!), so my bad.
FYI - there have been some queue-proxy probe-related changes since this issue was originally filed: #11956. I added this to the Icebox for me to revisit.
Well, my intent, going basic, is to have multiple ports in the serving container; with that I can address my probes/management scenarios.
Do you actually want to serve end-user traffic on multiple ports, or simply expose additional ports for probes and management? Knative doesn't prevent the second, but feels pretty strongly that a pod has exactly one port which receives outside traffic for the purposes of scaling and concurrency (the first case).
No, only the primary port should be exposed to ingress traffic and autoscaled for requests. The second port is for management actions that will connect to the container via other means (like a metrics endpoint). The particular use case is for a Spring Boot workload, which hosts the liveness and readiness endpoints with the other management endpoints that we would rather not expose to ingress. Adding an additional entry in the ports list is useful for avoiding conflicts, but not strictly required.
Wanted to clarify the outcomes we need to realize here. The final pod will have:
1. Liveness/readiness probes executed against the management port rather than the serving port.
2. The management port open for management traffic, without being exposed to ingress.
For point 2, I think that you're asking that traffic to the second port is only exposed within the Pod, but if you're asking something different (for example, to directly address individual Pods from another node on the cluster), then loosening the probe restrictions won't be sufficient. The reason that supporting probes on a different TCP port is more difficult than you might expect is that Knative actually rewrites the TCP and HTTP probes to be executed by the queue-proxy, rather than directly by the kubelet. #8471 has one use case for exposing multiple ports on the container (Prometheus metrics scraping), but "a container that provides service on multiple ports" is a larger API change to Knative and will probably need a bit more thought. For Prometheus, it turned out that an annotation worked in place of specific `ports` entries.
Yes, the traffic on the 2nd port would stay within the pod/deployment; it does not need to be opened to the outside, because the second port is for management matters. What we wanted to say in 2 is that if, after the deployment, an operator wants to do some management of the app, then they can connect to the pod on that port and the request would go to the app container (it can be through the sidecar).
I'm assuming that you don't mean a human operator ("he" or "she") would be running in some sort of sidecar (I'm not sure if you mean the queue-proxy or some other sidecar). Again, a bit more specifics will help. I'm guessing that you may be referring to Actuator endpoints for Spring Boot as the application functionality you want exposed on a different port?
Yes @evankanderson, you are right, and yes, in general all this is about the Actuator endpoints that you linked; by the sidecar I mean the queue-proxy.
Is it expected that interacting with the management port will change the state of the running application, or is it for observability (including debugging) only? I'm concerned about the pattern of "start a Pod, need some sort of poke on a management port before it can be Ready / serving correctly" having bad interactions with Knative's lifecycle management. (As one example, if the management service doesn't get to a Pod in time during Revision first-start, the entire Revision might get abandoned as "failed".)
It is expected to be for observability only; interacting with the management port is not expected to change the app status in a way that affects the lifecycle.
Whoops, just realized that we should continue this discussion in #8471 in terms of supporting containers providing off-Pod endpoints on multiple ports. The end-resolution here is that we should loosen the probing restrictions to allow probing a different port than the serving port for healthchecks, with the acknowledged risk that the main serving port is actually malfunctioning, but the health-check port is working correctly.
Describe the feature
Currently we can add `host`, `httpHeaders`, `path`, and `scheme` under `HTTPGetAction`. If possible, can we add `port` also?
Something like this: