
Propagate labels from PipelineRun to TaskRun to Pods #488

Merged

Conversation

dwnusbaum
Contributor

Previously, labels were propagated from TaskRun to Build, but not from Build to Pod. This PR fixes that, and also propagates labels from PipelineRun to TaskRun, so that users are more easily able to filter, identify, and query Pods created by Build Pipeline.

In all cases, if a user specifies a label whose key overlaps with one of the labels that we set automatically, the user's label is ignored.
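In code terms, the merge is copy-then-overwrite; here is a minimal sketch (the function name and the two automatic key names are illustrative assumptions, not necessarily the exact ones in this PR):

func makePipelineRunLabels(pr *v1alpha1.PipelineRun, pipelineName string) map[string]string {
    labels := make(map[string]string, len(pr.ObjectMeta.Labels)+2)
    for key, val := range pr.ObjectMeta.Labels {
        labels[key] = val // user-supplied labels are copied first
    }
    // Automatic labels are written last, so on a key collision they
    // overwrite the user's label (key names here are hypothetical).
    labels["pipeline.knative.dev/pipeline"] = pipelineName
    labels["pipeline.knative.dev/pipelineRun"] = pr.Name
    return labels
}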

As mentioned in taskrun_test.go, the current tests do not fully demonstrate that the TaskRun to Pod propagation works correctly since they use Build as an intermediate form for comparison. I am happy to try to address that in this PR by introducing a builder for Pod and using that in the tests, but the changes were already getting a bit heavy in taskrun_test.go due to wrapping all of the BuildSpecs in a Build, so I wanted to submit this first to solicit feedback.

Fixes #458

CC @abayer @jstrachan

@googlebot

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed (or fixed any issues), please reply here (e.g. I signed it!) and we'll verify it.



@knative-prow-robot knative-prow-robot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Feb 11, 2019
@knative-prow-robot knative-prow-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Feb 11, 2019
@dwnusbaum
Contributor Author

Working on the CLA 😄

wantBuildSpec buildv1alpha1.BuildSpec
name string
taskRun *v1alpha1.TaskRun
wantBuild *buildv1alpha1.Build
Contributor Author

This is the reason for most of the changes in this file. I could isolate the specific test I added to avoid most of the changes if desired, or go even further and move to using a Pod builder as discussed in the TODO I added:

// TODO: Using MakePod means that a diff will not catch issues
// specific to the Build to Pod translation (e.g. if labels are
// not propagated in MakePod). To avoid this issue we should create
// a builder for Pods and use that instead.

Any thoughts/feedback would be appreciated.
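For reference, a sketch of what such a Pod builder could look like, following the functional-option style of the existing builders in test/builder (all names here are hypothetical):

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodOp is a functional option that mutates a Pod under construction.
type PodOp func(*corev1.Pod)

// Pod returns a corev1.Pod with the given name and all options applied in order.
func Pod(name, namespace string, ops ...PodOp) *corev1.Pod {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
    }
    for _, op := range ops {
        op(pod)
    }
    return pod
}

// PodLabel adds a single label to the Pod's metadata.
func PodLabel(key, value string) PodOp {
    return func(pod *corev1.Pod) {
        if pod.ObjectMeta.Labels == nil {
            pod.ObjectMeta.Labels = map[string]string{}
        }
        pod.ObjectMeta.Labels[key] = value
    }
}

A test could then build an expected Pod with Pod("pod-name", "namespace", PodLabel("pipeline.knative.dev/taskRun", "taskrun-name")) and diff it against the Pod produced by MakePod.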

Member

Oh, I made a similar change in PR #491 to not use MakePod and instead write out the expected Container specs, which is a bigger change, but I had to validate the containers' names.

Contributor Author

Yeah, that's exactly what I had in mind; these two PRs will definitely have a ton of conflicts with each other in that file. I think it would be good to go with your approach, but also to add a builder for creating Pods/Containers, similar to the other builders we have.

@dwnusbaum
Contributor Author

CLA should be signed now.

@googlebot

CLAs look good, thanks!

@googlebot added the cla: yes label and removed the cla: no label Feb 11, 2019
// non-existent build.
// TODO(jasonhall): Just set this directly when creating a Pod from a
// TaskRun.
pod.OwnerReferences = []metav1.OwnerReference{
@dwnusbaum (Contributor Author) commented Feb 11, 2019

I moved this into MakePod, but I'm not sure if that's ok or if we are trying to keep the MakePod code in sync with upstream for now.

Contributor

So, the MakePod function is not used at all right now. We still rely on the knative/build API to create a Build instead of using the MakePod above.

@dwnusbaum (Contributor Author) commented Feb 11, 2019

Hmm, maybe I am misunderstanding something. As far as I can tell, we import the resources package containing pod.go from build-pipeline and use that version here, so as of #326 we aren't actually using knative/build, and in my local testing the changes I made to MakePod did affect the output (maybe only the test code is using the local version of MakePod?). This commit in #326 about forking pod.go instead of vendoring it seems to support my understanding. Is there some kind of flag enabling different behavior in testing vs. production usage, or am I missing some key branch in a code path somewhere?

Contributor

@tejal29: @imjasonh submitted a PR to remove the dependency on the Build API. There is currently a code dependency, but TaskRun is not creating a Build CRD object anymore.

@dwnusbaum: There is no need to maintain code similarity with the Build code base. If it makes sense to pull the logic into a separate function, please feel free to do so.

@@ -360,15 +360,19 @@ func (c *Reconciler) createTaskRun(logger *zap.SugaredLogger, rprt *resources.Re
taskRunTimeout = nil
}

labels := make(map[string]string, len(pr.ObjectMeta.Labels)+2)
for key, val := range pr.ObjectMeta.Labels {
labels[key] = val
@dwnusbaum (Contributor Author) commented Feb 11, 2019

Looks like the keys need to be validated based on my reading of https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/. Will that be handled by Kubernetes itself when the PipelineRun/TaskRun are created, in which case we don't need to do anything since these labels only come from those resources, or do we need to do some validation here?

Contributor

Yes, I'm pretty sure it's handled by the Kubernetes API, so we don't need to do anything here.
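For reference, if explicit validation were ever needed, apimachinery exposes the same label validators the API server uses; a minimal sketch (the helper function itself is hypothetical):

import "k8s.io/apimachinery/pkg/util/validation"

// validateLabels collects the API server's complaints, if any, about each
// label key and value; an empty result means the labels are valid.
func validateLabels(labels map[string]string) []string {
    var problems []string
    for key, val := range labels {
        problems = append(problems, validation.IsQualifiedName(key)...)
        problems = append(problems, validation.IsValidLabelValue(val)...)
    }
    return problems
}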

@tejal29
Contributor

tejal29 commented Feb 11, 2019

/ok-to-test

@knative-prow-robot knative-prow-robot removed the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Feb 11, 2019
@tejal29 tejal29 self-assigned this Feb 11, 2019

@@ -0,0 +1,28 @@
/*
Copyright 2018 The Knative Authors
Contributor

2019 (this is a new file, so the copyright year should be 2019)


@shashwathi (Contributor) left a comment

Great PR @dwnusbaum

Thank you for submitting this. From glancing at the PR, it looks good. I will review in detail tomorrow. 👍

@knative-metrics-robot

The following is the coverage report on pkg/.
Say /test pull-knative-build-pipeline-go-coverage to re-run this coverage report

File | Old Coverage | New Coverage | Delta
pkg/reconciler/v1alpha1/pipelinerun/pipelinerun.go | 84.3% | 85.0% | 0.7
pkg/reconciler/v1alpha1/taskrun/resources/pod.go | 83.6% | 83.3% | -0.3
pkg/reconciler/v1alpha1/taskrun/taskrun.go | 74.2% | 74.6% | 0.4
test/builder/build.go | 92.9% | 95.5% | 2.6
test/builder/owner_reference.go | Do not exist | 100.0% |
test/builder/pipeline.go | 91.6% | 91.9% | 0.3
test/builder/task.go | 95.9% | 95.9% | -0.0

@nader-ziada
Member

Looks good to me, but will let @shashwathi and @tejal29 finish their review

@@ -220,6 +219,14 @@ func MakePod(build *v1alpha1.Build, kubeclient kubernetes.Interface) (*corev1.Po
}
annotations["sidecar.istio.io/inject"] = "false"

labels := map[string]string{}
Contributor

nit: Similar to the pipelinerun pattern of adding labels, this could be make(map[string]string, len(build.ObjectMeta.Labels)+1)

for key, val := range build.ObjectMeta.Labels {
labels[key] = val
}
// TODO: Redundant with TaskRun label set in `taskrun.makeLabels`. Should
Contributor

Previously, the makeLabels function added an additional label to the existing TaskRun labels, but an additional label is also added on this line. So can we delete the makeLabels function and pass the TaskRun labels directly to the Build? Does that make sense, @dwnusbaum?

@dwnusbaum (Contributor Author) commented Feb 12, 2019

makeLabels and MakePod each add a label with a different key but the same value. The key added in makeLabels is pipeline.knative.dev/taskRun, and the one added in MakePod is build.knative.dev/buildName. I wasn't sure if we needed to keep the buildName label for compatibility, but if not, then yeah, I think we should just delete it.

I think we need to keep makeLabels, since it passes labels from TaskRun to Build so that MakePod can pass them from Build to Pod. But if we delete the extra label added in MakePod, then we could just set Pod.ObjectMeta.Labels = Build.ObjectMeta.Labels and avoid creating a new map in MakePod. Otherwise, we could make MakePod take a TaskRun directly instead of a Build so that we could delete makeLabels, but that seems like more work (or we could add it as an additional parameter, but maybe that is confusing). What do you think?
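For concreteness, a sketch of the simplification being floated here, assuming the buildName label is dropped:

// Reuse the Build's label map directly instead of copying it key by key.
// Note this aliases the map rather than copying it, which is fine as long
// as neither the Build's nor the Pod's labels are mutated afterwards.
pod.ObjectMeta.Labels = build.ObjectMeta.Labels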

Contributor Author

The follow-up to this is #494.

Contributor

> I wasn't sure if we needed to keep the buildName label for compatibility, but if not, then yeah, I think we should just delete it.

I think there is no value in adding the buildName label anymore.

> I think we need to keep makeLabels,

I like this idea.

> then we could just set Pod.ObjectMeta.Labels = Build.ObjectMeta.Labels

👍

@shashwathi (Contributor) left a comment

Awesome work, and great test coverage. Thank you for submitting such a detailed PR 👍 🎆

I have left some minor code cleanup suggestions. I also think this would be a good feature to have e2e test coverage for.
Can you please consider updating the Go e2e test for PipelineRun to use labels, and adding an assertion that the TaskRuns/Pods were created with the expected labels? Let me know if you need any help with this. 👍
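A sketch of the kind of assertion such a test could add (the clients helper, label key, and values are all assumptions):

// List the Pods created for the PipelineRun and check that a custom label
// set on the PipelineRun was propagated all the way through.
pods, err := c.KubeClient.Kube.CoreV1().Pods(namespace).List(metav1.ListOptions{
    LabelSelector: "pipeline.knative.dev/pipelineRun=" + pipelineRunName,
})
if err != nil {
    t.Fatalf("Failed to list Pods for PipelineRun %s: %v", pipelineRunName, err)
}
for _, pod := range pods.Items {
    if pod.Labels["mycustomlabel"] != "myvalue" {
        t.Errorf("Pod %s is missing the propagated custom label", pod.Name)
    }
}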

@knative-prow-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dwnusbaum, shashwathi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow-robot knative-prow-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 12, 2019
@shashwathi
Contributor

/assign @shashwathi

@abayer
Contributor

abayer commented Feb 12, 2019

/lgtm
/meow boxes

@knative-prow-robot

@abayer: cat image

In response to this:

/lgtm
/meow boxes

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@knative-prow-robot knative-prow-robot added the lgtm Indicates that a PR is ready to be merged. label Feb 12, 2019
@knative-prow-robot knative-prow-robot merged commit 6ac1ceb into tektoncd:master Feb 12, 2019
@dwnusbaum
Contributor Author

@shashwathi Thanks for taking a look! I will try to update the e2e tests to use a label and let you know if I need any help.

@dwnusbaum dwnusbaum deleted the 458-propagate-labels-to-pods branch February 12, 2019 19:01
dwnusbaum added a commit to dwnusbaum/pipeline that referenced this pull request Feb 12, 2019
Previously, we added a label with key "build.knative.dev/buildName"
to Pods. That label was redundant with a different label whose key
is "pipeline.knative.dev/taskRun" as of beda1f8. Since we are no
longer depending on Knative Build at runtime, it seems best to
remove the redundant label.

Follow-up to tektoncd#488.
knative-prow-robot pushed a commit that referenced this pull request Feb 12, 2019
Previously, we added a label with key "build.knative.dev/buildName"
to Pods. That label was redundant with a different label whose key
is "pipeline.knative.dev/taskRun" as of beda1f8. Since we are no
longer depending on Knative Build at runtime, it seems best to
remove the redundant label.

Follow-up to #488.
dwnusbaum added a commit to dwnusbaum/pipeline that referenced this pull request Feb 13, 2019
The e2e tests in test/pipelinerun.go now assert that custom labels set
on the PipelineRun are propagated through to the TaskRun and Pod, and
that the static labels that are added to all TaskRuns and Pods are
propagated correctly as well.

In addition, documentation has been added to explain that labels are
propagated and to mention the specific labels that are added
automatically to generated TaskRuns and Pods.

Follow-up to tektoncd#488.
knative-prow-robot pushed a commit that referenced this pull request Feb 13, 2019
The e2e tests in test/pipelinerun.go now assert that custom labels set
on the PipelineRun are propagated through to the TaskRun and Pod, and
that the static labels that are added to all TaskRuns and Pods are
propagated correctly as well.

In addition, documentation has been added to explain that labels are
propagated and to mention the specific labels that are added
automatically to generated TaskRuns and Pods.

Follow-up to #488.