
Support watching logs in multi-container Deployments #2712

Closed
ccidral opened this issue Jan 7, 2021 · 5 comments · Fixed by #3280


ccidral commented Jan 7, 2021

Apparently watching Deployment logs isn't supported. I was hoping that the equivalent of

kubectl logs -n blah -f deployment/foobar

was this:

client.apps().deployments().inNamespace("blah").withName("foobar").watchLog(System.out);

But it throws:

Watching logs is not supported for multicontainer jobs

Am I missing something? If not, is it possible to add support for that?

Member

manusa commented Jan 7, 2021

Hi @ccidral
In a multi-container Pod you need to specify the container to be able to retrieve the logs.

I think we currently don't support this feature for deployments (@rohanKanojia ?).

I created a gist with the workaround until we implement a proper solution (if not already there):
https://gist.github.com/manusa/70c51eeaee0fabc222186310255e71b3#file-multideployments-java-L115-L119

The idea is basically to retrieve the logs via the pods() DSL method instead.
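
Roughly, the workaround boils down to something like this (a sketch; the namespace, label, and names are illustrative, not taken from the gist):

// Pick the first Pod matching the Deployment's label selector and the
// first container in its spec, then watch that container's log.
// Namespace, label, and names below are illustrative.
Pod pod = client.pods().inNamespace("default")
        .withLabel("app", "multi-container-deploy")
        .list().getItems().get(0);
String container = pod.getSpec().getContainers().get(0).getName();
LogWatch watch = client.pods().inNamespace("default")
        .withName(pod.getMetadata().getName())
        .inContainer(container)
        .watchLog(System.out);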

You can check the provided solution by:

# Create an example Deployment with a Pod with 2 containers
$ jbang https://gist.github.com/manusa/70c51eeaee0fabc222186310255e71b3#file-multideployments-java example
# Retrieve the log for the first Pod that matches and the first container
$ jbang https://gist.github.com/manusa/70c51eeaee0fabc222186310255e71b3#file-multideployments-java log

Relates to:

throw new KubernetesClientException("Watching logs is not supported for multicontainer jobs");

@manusa manusa added this to the 5.1.0 milestone Jan 7, 2021
@manusa manusa changed the title Support watching deployment logs Support watching logs in multi-container Deployments Feb 8, 2021
@manusa manusa modified the milestones: 5.1.0, 5.2.0 Feb 9, 2021
@manusa manusa modified the milestones: 5.2.0, 5.3.0 Mar 3, 2021
@manusa manusa modified the milestones: 5.4.0, 5.5.0 May 13, 2021
@rohanKanojia rohanKanojia self-assigned this Jun 25, 2021
rohanKanojia added a commit to rohanKanojia/kubernetes-client that referenced this issue Jun 25, 2021
rohanKanojia added a commit to rohanKanojia/kubernetes-client that referenced this issue Jun 28, 2021
rohanKanojia added a commit to rohanKanojia/kubernetes-client that referenced this issue Jun 28, 2021
rohanKanojia added a commit to rohanKanojia/kubernetes-client that referenced this issue Jun 28, 2021
rohanKanojia added a commit to rohanKanojia/kubernetes-client that referenced this issue Jun 28, 2021
@rohanKanojia
Member

@ccidral: were you using a Deployment with more than one replica along with multiple containers? It looks like our code was showing the wrong error message; it actually fails because multiple Pods are found for the provided labels, but it complains about multiple containers.

In kubectl, the behavior is to pick the first Pod and show its logs. I'm going to add the same behavior here in KubernetesClient as well.

For a Deployment with multiple containers, I'm making the inContainer(String containerId) DSL method available to the Deployment DSL so that users can specify the container whose logs they want to watch:

client.apps().deployments().inNamespace("default")
    .withName("multi-container-deploy")
    .inContainer("hello")
    .watchLog(System.out);

Author

ccidral commented Jun 28, 2021

@rohanKanojia I was using a Deployment with more than one replica. Later I realized that the kubectl command that shows me the logs for all Pods tied to a Deployment is this:

kubectl -n $NAMESPACE logs -f --tail=$LINES -l run=$DEPLOYMENT_NAME

Of course this command is agnostic with regard to Deployments, so here I assume that the Pods belonging to a particular Deployment have a label named run whose value is the Deployment name.
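
A more robust variant (a sketch, not from the gist) is to read the selector from the Deployment itself rather than assuming a run label; the namespace and name below are illustrative:

// Derive the Pod selector from the Deployment instead of assuming a
// "run" label set by kubectl run. Namespace and name are illustrative.
Deployment deployment = client.apps().deployments()
        .inNamespace("blah").withName("foobar").get();
Map<String, String> selector = deployment.getSpec().getSelector().getMatchLabels();
PodList pods = client.pods().inNamespace("blah").withLabels(selector).list();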

Member

rohanKanojia commented Jun 29, 2021

This command using label selectors is similar to what Marc proposed. I think you can watch the logs for all Pod replicas with something like this:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import io.fabric8.kubernetes.api.model.PodList;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.LogWatch;

try (KubernetesClient client = new DefaultKubernetesClient()) {
    List<LogWatch> logWatchList = new ArrayList<>();

    // List every Pod replica matching the Deployment's label selector
    PodList podList = client.pods().inNamespace("default").withLabel("app", "nginx").list();
    // Open a log watch on the "hello" container of each matching Pod
    podList.getItems().forEach(p -> logWatchList.add(client.pods()
            .inNamespace("default")
            .withName(p.getMetadata().getName())
            .inContainer("hello")
            .tailingLines(10)
            .watchLog(System.out)));

    // Stream the logs for two minutes, then close every watch
    TimeUnit.MINUTES.sleep(2);

    logWatchList.forEach(LogWatch::close);
} catch (InterruptedException interruptedException) {
    Thread.currentThread().interrupt();
    interruptedException.printStackTrace();
}

Or perhaps you're suggesting that you would like Deployment logs based on this label selector feature to be available in KubernetesClient?

rohanKanojia added a commit to rohanKanojia/kubernetes-client that referenced this issue Jun 29, 2021
@manusa manusa modified the milestones: 5.5.0, 5.6.0 Jun 29, 2021
rohanKanojia added a commit to rohanKanojia/kubernetes-client that referenced this issue Jul 2, 2021
manusa pushed a commit that referenced this issue Jul 5, 2021
Author

ccidral commented Jul 5, 2021

This command using label selectors is similar to what Marc proposed. I think you can watch the logs for all Pod replicas with something like this:

Thanks, this should work for me.
