Support watching logs in multi-container Deployments #2712
Hi @ccidral, I think we currently don't support this feature for Deployments (@rohanKanojia?). I created a gist with a workaround until we implement a proper solution (if not already there). The idea would basically be to retrieve the logs from the …

You can check the provided solution by:

# Create an example Deployment with a Pod with 2 containers
$ jbang https://gist.github.com/manusa/70c51eeaee0fabc222186310255e71b3#file-multideployments-java example
# Retrieve the log for the first Pod that matches and the first container
$ jbang https://gist.github.com/manusa/70c51eeaee0fabc222186310255e71b3#file-multideployments-java log

Relates to: line 376 in 1e1fdb8
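For reference, a minimal Java sketch of that workaround idea, using the Deployment's label selector to find its Pods and reading the log of the first container of the first matching Pod. The Deployment name "example" and namespace "default" are placeholders and this is not the actual gist content:

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

import java.util.Map;

public class DeploymentLogWorkaround {
  public static void main(String[] args) {
    try (KubernetesClient client = new DefaultKubernetesClient()) {
      // Placeholder names: adjust to your Deployment and namespace
      Deployment deployment = client.apps().deployments()
          .inNamespace("default").withName("example").get();
      // Use the Deployment's selector to find the Pods it manages
      Map<String, String> selector = deployment.getSpec().getSelector().getMatchLabels();
      Pod firstPod = client.pods().inNamespace("default")
          .withLabels(selector).list().getItems().get(0);
      // Pick the first container and print its log
      String container = firstPod.getSpec().getContainers().get(0).getName();
      String log = client.pods().inNamespace("default")
          .withName(firstPod.getMetadata().getName())
          .inContainer(container)
          .getLog();
      System.out.println(log);
    }
  }
}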
@ccidral: were you using a Deployment with more than one replica along with multiple containers? It looks like our code was showing the wrong error message: it fails because multiple Pods are found for the provided labels, but complains about multiple containers. In kubectl, the behavior is to pick the first Pod and show its logs. I'm going to add the same behavior to KubernetesClient as well. For a Deployment with multiple containers, I'm making …
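A rough sketch of that kubectl-like behavior done by hand on the client side, assuming the Deployment's Pods carry an app=example label (placeholder values, not the actual KubernetesClient implementation): take the first matching Pod, choose a container explicitly because the Pod has several, and stream its log.

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.LogWatch;

import java.util.List;

public class FirstPodLog {
  public static void main(String[] args) throws Exception {
    try (KubernetesClient client = new DefaultKubernetesClient()) {
      // Multiple replicas mean multiple Pods match the selector; mimic kubectl and take the first
      List<Pod> pods = client.pods().inNamespace("default")
          .withLabel("app", "example").list().getItems();
      Pod first = pods.get(0);
      // With multiple containers, one has to be chosen explicitly
      String container = first.getSpec().getContainers().get(0).getName();
      try (LogWatch watch = client.pods().inNamespace("default")
          .withName(first.getMetadata().getName())
          .inContainer(container)
          .watchLog(System.out)) {
        Thread.sleep(60_000); // keep streaming for a minute
      }
    }
  }
}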
@rohanKanojia I was using a Deployment with more than one replica. Later I realized that the …
Of course this command is agnostic with regard to Deployments, so here I assume that the Pods belonging to a particular Deployment have a label named …
This command using label selectors is similar to what Marc proposed. I think you can watch logs for all Pod replicas with something like this:

import io.fabric8.kubernetes.api.model.PodList;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.LogWatch;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

try (KubernetesClient client = new DefaultKubernetesClient()) {
  List<LogWatch> logWatchList = new ArrayList<>();
  // Find every Pod replica through the label selector
  PodList podList = client.pods().inNamespace("default").withLabel("app", "nginx").list();
  // Open one log watch per Pod, on the "hello" container
  podList.getItems().forEach(p -> logWatchList.add(client.pods()
      .inNamespace("default")
      .withName(p.getMetadata().getName())
      .inContainer("hello")
      .tailingLines(10)
      .watchLog(System.out)));
  // Keep streaming for two minutes, then close every watch
  TimeUnit.MINUTES.sleep(2);
  logWatchList.forEach(LogWatch::close);
} catch (InterruptedException interruptedException) {
  Thread.currentThread().interrupt();
  interruptedException.printStackTrace();
}
Thanks, this should work for me.
Apparently watching Deployment logs isn't supported. I was hoping that the equivalent of …
Was this …
But it throws …
Am I missing something? If not, is it possible to add support for that?
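For illustration, a minimal sketch of the kind of call the report seems to describe, roughly the equivalent of kubectl logs -f deployment/my-deployment. The Deployment name and namespace are placeholders, and the exact exception is not shown in the truncated snippets above:

import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.LogWatch;

public class WatchDeploymentLog {
  public static void main(String[] args) throws InterruptedException {
    try (KubernetesClient client = new DefaultKubernetesClient();
         // With a multi-container Pod template, this call currently throws instead of streaming
         LogWatch watch = client.apps().deployments()
             .inNamespace("default")
             .withName("my-deployment")
             .watchLog(System.out)) {
      Thread.sleep(60_000); // keep following the log for a minute
    }
  }
}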