
Assume task not skipped if the run is associated #4583

Merged
merged 2 commits into tektoncd:main on Jun 18, 2022

Conversation

devholic
Contributor

@devholic devholic commented Feb 16, 2022

Changes

Resolves #4582

Currently, the status of a PipelineRun with Finally tasks is flaky. For example, the PipelineRun's completionTime can be earlier than the last TaskRun's completionTime, and Finally tasks that are still running can be marked as Skipped.

This happens when a TaskRun has no conditions yet: the task state context is not applied to the running Finally tasks, and IsFinallySkipped treats a TaskRun with no conditions as not started.

This commit resolves the issue by assuming a Finally task is not skipped if a run is associated with it.
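
At a high level, the check introduced here looks roughly like the sketch below; the exact diff is quoted later in this thread, and the method name and receiver are only assumed for illustration. A Finally task counts as scheduled as soon as a Run or TaskRun object is associated with it, even before its Succeeded condition has been initialized:

// Sketch only; the method name and receiver are assumptions, while the body
// mirrors the diff quoted in the review below.
func (t ResolvedPipelineRunTask) isScheduled() bool {
	if t.IsCustomTask() {
		// custom tasks are backed by a Run object
		return t.Run != nil
	}
	return t.TaskRun != nil
}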

/kind bug

Submitter Checklist

As the author of this PR, please check off the items in this checklist:

  • Docs included if any changes are user facing
  • Tests included if any functionality added or changed
  • Follows the commit message standard
  • Meets the Tekton contributor standards (including functionality, content, code)
  • Release notes block below has been filled in or deleted (only if no user facing changes)

Release Notes

Fixes the controller so that, even with a high `ThreadsPerController` value, it reports the correct status for a PipelineRun that contains Finally tasks.

@tekton-robot tekton-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/bug Categorizes issue or PR as related to a bug. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 16, 2022
@tekton-robot
Collaborator

Hi @devholic. Thanks for your PR.

I'm waiting for a tektoncd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tekton-robot tekton-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Feb 16, 2022
@tekton-robot tekton-robot added release-note-none Denotes a PR that doesn't merit a release note. and removed release-note Denotes a PR that will be considered when it comes time to generate release notes. labels Feb 16, 2022
@afrittoli
Member

/ok-to-test

@tekton-robot tekton-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 16, 2022
@afrittoli
Member

Thank you @devholic !

@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go 93.8% 93.4% -0.3

@devholic
Contributor Author

/test pull-tekton-pipeline-alpha-integration-tests

if t.IsCustomTask() {
	return t.Run != nil
}
return t.TaskRun != nil
Member

@afrittoli @jerop @vdemeester my understanding was that this is not possible 🙃 A taskRun or run that is associated must have a Succeeded condition with Unknown (i.e. running) status.

We rely on the same check for the non-final tasks:

case facts.isFinalTask(t.PipelineTask.Name) || t.IsStarted():

Contributor Author

@devholic devholic Feb 28, 2022

I'm still not sure about the root cause (my current assumption is a timing issue), but I think it makes sense to check only whether the Run exists, and to change GetFinalTasks() to use that instead of IsSuccessful():

// GetFinalTasks returns a list of final tasks without any taskRun associated with it
// GetFinalTasks returns final tasks only when all DAG tasks have finished executing successfully or skipped or
// any one DAG task resulted in failure
func (facts *PipelineRunFacts) GetFinalTasks() PipelineRunState {
	tasks := PipelineRunState{}
	finalCandidates := sets.NewString()
	// check either pipeline has finished executing all DAG pipelineTasks
	// or any one of the DAG pipelineTask has failed
	if facts.checkDAGTasksDone() {
		// return list of tasks with all final tasks
		for _, t := range facts.State {
			// if facts.isFinalTask(t.PipelineTask.Name) && !t.IsSuccessful() {
			if facts.isFinalTask(t.PipelineTask.Name) && !t.IsScheduled() {
				finalCandidates.Insert(t.PipelineTask.Name)
			}
		}
		tasks = facts.State.getNextTasks(finalCandidates)
	}
	return tasks
}

Whether it has a Succeeded-type condition or not, I think we can assume the task is scheduled, since the reconciler found that it has an associated Run.

What do you think?

Member

@pritidesai also thought that a TaskRun or Run must have a ConditionSucceeded - but it looks like we don't initialize it when we create a TaskRun in the PipelineRun reconciler:

func (c *Reconciler) createTaskRun(ctx context.Context, rprt *resources.ResolvedPipelineRunTask, pr *v1beta1.PipelineRun, storageBasePath string, getTimeoutFunc getTimeoutFunc) (*v1beta1.TaskRun, error) {
	logger := logging.FromContext(ctx)
	tr, _ := c.taskRunLister.TaskRuns(pr.Namespace).Get(rprt.TaskRunName)
	if tr != nil {
		// Don't modify the lister cache's copy.
		tr = tr.DeepCopy()
		// is a retry
		addRetryHistory(tr)
		clearStatus(tr)
		tr.Status.MarkResourceOngoing("", "")
		logger.Infof("Updating taskrun %s with cleared status and retry history (length: %d).", tr.GetName(), len(tr.Status.RetriesStatus))
		return c.PipelineClientSet.TektonV1beta1().TaskRuns(pr.Namespace).UpdateStatus(ctx, tr, metav1.UpdateOptions{})
	}
	rprt.PipelineTask = resources.ApplyPipelineTaskContexts(rprt.PipelineTask)
	taskRunSpec := pr.GetTaskRunSpec(rprt.PipelineTask.Name)
	tr = &v1beta1.TaskRun{
		ObjectMeta: metav1.ObjectMeta{
			Name: rprt.TaskRunName,
			Namespace: pr.Namespace,
			OwnerReferences: []metav1.OwnerReference{*kmeta.NewControllerRef(pr)},
			Labels: combineTaskRunAndTaskSpecLabels(pr, rprt.PipelineTask),
			Annotations: combineTaskRunAndTaskSpecAnnotations(pr, rprt.PipelineTask),
		},
		Spec: v1beta1.TaskRunSpec{
			Params: rprt.PipelineTask.Params,
			ServiceAccountName: taskRunSpec.TaskServiceAccountName,
			Timeout: getTimeoutFunc(ctx, pr, rprt, c.Clock),
			PodTemplate: taskRunSpec.TaskPodTemplate,
			StepOverrides: taskRunSpec.StepOverrides,
			SidecarOverrides: taskRunSpec.SidecarOverrides,
		}}
	if rprt.ResolvedTaskResources.TaskName != "" {
		// We pass the entire, original task ref because it may contain additional references like a Bundle url.
		tr.Spec.TaskRef = rprt.PipelineTask.TaskRef
	} else if rprt.ResolvedTaskResources.TaskSpec != nil {
		tr.Spec.TaskSpec = rprt.ResolvedTaskResources.TaskSpec
	}
	var pipelinePVCWorkspaceName string
	var err error
	tr.Spec.Workspaces, pipelinePVCWorkspaceName, err = getTaskrunWorkspaces(pr, rprt)
	if err != nil {
		return nil, err
	}
	if !c.isAffinityAssistantDisabled(ctx) && pipelinePVCWorkspaceName != "" {
		tr.Annotations[workspace.AnnotationAffinityAssistantName] = getAffinityAssistantName(pipelinePVCWorkspaceName, pr.Name)
	}
	resources.WrapSteps(&tr.Spec, rprt.PipelineTask, rprt.ResolvedTaskResources.Inputs, rprt.ResolvedTaskResources.Outputs, storageBasePath)
	logger.Infof("Creating a new TaskRun object %s for pipeline task %s", rprt.TaskRunName, rprt.PipelineTask.Name)
	return c.PipelineClientSet.TektonV1beta1().TaskRuns(pr.Namespace).Create(ctx, tr, metav1.CreateOptions{})
}

Then we initialize ConditionSucceeded in the TaskRun reconciler:

if !tr.HasStarted() {
	tr.Status.InitializeConditions()

If the above is correct and a TaskRun or Run can exist without ConditionSucceeded, then it makes sense to either initialize ConditionSucceeded to Unknown in the createTaskRun/createRun functions, or to check for the existence of the TaskRun or Run instead of a ConditionSucceeded that could be missing.
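
For concreteness, the first option (initializing the condition when the TaskRun is created) would amount to something like the line below inside createTaskRun. This is a hypothetical sketch, not what the PR ultimately does, and as noted in the next comment a status set at creation time would likely be dropped anyway, since status is populated by the reconcilers:

// Hypothetical: mark the new TaskRun as ongoing (Succeeded=Unknown) right away,
// mirroring the retry path earlier in createTaskRun. Not the approach this PR takes.
tr.Status.MarkResourceOngoing("", "")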

Contributor Author

@jerop thanks for the review! 🙂

I prefer checking for the existence of the TaskRun or Run instead of initializing Conditions, since it is not possible to create resources with their Status already set (as far as I know - please let me know if this is wrong 🙏), which could cause this issue again because the reconciliation time is not guaranteed for those resources.

Member

@devholic makes sense to go with that option 👍🏾

cc @tektoncd/core-maintainers

Member

Yeah, the TaskRun condition is set by the TaskRun reconciler on the first reconciliation loop.
In the TaskRun controller, at least within the ReconcileKind function, the condition will always be set, since initializing the condition is the first thing we do in ReconcileKind.
In the PipelineRun controller, on the other hand, there is no guarantee. It may not be very likely to catch a TaskRun with no condition set, but it is certainly possible, and I guess it may happen more often under high load.

@devholic
Contributor Author

Could someone comment on this PR or the issue (#4582)? 🙏
@pritidesai @afrittoli @jerop @vdemeester

@devholic devholic marked this pull request as ready for review March 13, 2022 12:52
@tekton-robot tekton-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 13, 2022
@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go 93.4% 93.1% -0.3

@devholic
Contributor Author

/test pull-tekton-pipeline-alpha-integration-tests

Member

@jerop jerop left a comment

thank you for the contribution @devholic!


@pritidesai
Member

There are two separate entities involved: the taskRun Go object vs. the actual taskRun CRD.

The PipelineRun controller creates a new taskRun object if it does not exist, followed by creating the actual taskRun CRD.

The taskRun controller relies on the fact that the startTime of a taskRun is either nil or set to the zero time to determine whether the taskRun has started. If the taskRun has not started, the controller initializes the status condition (setting it to running) and sets the start time.

If you are running into this kind of issue:

Currently, the status of PipelineRun with Finally is flaky. For example, PipelineRun's completionTime is earlier than the last
TaskRun's completionTime, and also Running Finally tasks are marked as Skipped.

Checking whether the taskRun Go object exists might not fix this issue with a multi-threaded controller. Most likely what's happening here is that while one instance of the pipelineRun controller is creating the taskRun object followed by the CRD, and the taskRun controller is setting the condition, another instance of the pipelineRun controller is checking for the condition.

We have a test to confirm this does not happen in our CI/CD, but this could be cluster/configuration specific and might not be feasible to replicate and/or reproduce. I would highly recommend adding such a multi-threaded e2e test to our CI/CD if possible.

I would recommend adding an additional check on whether the startTime is set, very similar to the (tr *TaskRun) HasStarted() check that the taskRun controller relies on. Also, please verify any changes you introduce in your cluster to make sure they solve this issue.
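
For reference, the existing helper being referred to looks roughly like this (paraphrased from the TaskRun API types; the exact upstream implementation may differ slightly):

// HasStarted reports whether the TaskRun reconciler has already picked this
// TaskRun up: the start time is only set on the first reconcile.
func (tr *TaskRun) HasStarted() bool {
	return tr.Status.StartTime != nil && !tr.Status.StartTime.IsZero()
}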

@pritidesai
Member

We highly appreciate any additional best practices or troubleshooting documentation from your experience here.

@devholic
Contributor Author

@pritidesai Thanks for the review and comment 🙇‍♂️

PipelineRun controller creates a new taskRun object if it does not exist followed by creating an actual taskRun CRD.

I'm not sure if I understand this correctly, but I think the TaskRun (object / CRD) is created in createTaskRun only - which seems to create only the actual TaskRun 🤔. Could you give some more details?

Checking if the taskRun golang object exists might not fix this issue with multi-threaded controller. Most likely what's happening here is while one instance of pipelineRun controller is creating the taskRun object followed by CRD and the taskRun controller is setting the condition, the another instance of pipelineRun controller is checking for the condition.

As far as I know, a race condition should not happen even with the multi-threaded controller. The workqueue, which is used internally, will not hand the same request to another worker while it is being processed, and Get() (dequeue) is also a thread-safe call.

Please let me know if my understanding is wrong.
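
As a small, self-contained illustration of that client-go workqueue property (an example added here for clarity, not part of the PR): a key that is re-added while an earlier copy is still being processed is only handed out again after Done() is called for the in-flight item.

package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	q := workqueue.New()

	q.Add("default/my-pipelinerun")
	item, _ := q.Get()              // a worker is now "processing" this key
	q.Add("default/my-pipelinerun") // re-added while in flight: marked dirty, not handed out again

	fmt.Println(q.Len()) // 0 - no other worker can pick the key up yet
	q.Done(item)         // processing finished: the dirty key is re-queued
	fmt.Println(q.Len()) // 1
}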

We have a test to confirm this does not happen in our CI/CD but this could be cluster/configuration specific and might not be feasible to replicate and/or reproduce it. I would highly recommend adding such multi-threaded e2e test in our CI/CD if possible.

I'll try to figure out a way to reproduce this behavior as much as I can 🙏

Also, any changes you introduce, please verify them in your cluster to make sure they are solving this issue.

I'll add test results to this thread 🙂

@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go 91.9% 91.7% -0.3

@abayer
Contributor

abayer commented Jun 6, 2022

For what it's worth, I was able to reproduce this with -threads-per-controller "32", using the PipelineRun in #4583 (comment), the test.sh in #4583 (comment), and a slightly modified Task:

# task.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: dummy-task
spec:
  params:
    - name: sleep-second
      type: string
      default: "3"
    - name: message
      type: string
      default: "hello, world"
  steps:
    - name: echo
      image: alpine:3.14.3
      script: |
        #!/usr/bin/env sh
        sleep "$(params.sleep-second)"
        echo "$(params.message)"

The original Task had sleep "${sleep-second}", but that never resolved to anything, since that's shell script substitution syntax, not Pipeline param substitution syntax. Hence the change I made.

I did tweak the test script slightly to just run 100 PipelineRuns, since I was doing this on my laptop with kind and going the full 250 just took too long for me to be sure any results were legitimate.

With Pipeline 0.36.0, I generally saw the first 80 or so PipelineRuns having the right skippedTasks, but then the last 20 or so all were incorrect. When I used a build of this PR, I always saw all 100 being correct.

@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go 93.7% 93.3% -0.3

@devholic
Contributor Author

devholic commented Jun 9, 2022

/test pull-tekton-pipeline-alpha-integration-tests

@devholic
Contributor Author

devholic commented Jun 9, 2022

I rebased the changes :)

@tekton-robot tekton-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 11, 2022
tektoncd#4582

Currently, the status of PipelineRun with Finally is flaky. For
example, PipelineRun's completionTime is earlier than the last
TaskRun's completionTime, and also Running Finally tasks are marked as
Skipped.

This issue occurs when a TaskRun has no conditions, because the task
state context is not applied to the running Finally tasks and
IsFinallySkipped treats a TaskRun with no conditions as not started.

This commit resolves this issue by assuming the Finally task is not
skipped if a run is associated, instead of checking the `Succeeded`
condition.

Signed-off-by: Sunghoon Kang <[email protected]>
@tekton-robot tekton-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 12, 2022
@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go 94.2% 94.0% -0.3

// if facts.isFinalTask(t.PipelineTask.Name) && !t.isSuccessful() {
if facts.isFinalTask(t.PipelineTask.Name) && !t.isScheduled() {
Member

I think this would break retries in finally tasks - the current code purposely includes failed tasks because getNextTasks will do the filtering again, and it needs failed tasks to be included so that getRetryableTasks can pick up tasks which failed but may be retried. The comment on L405 should be updated to reflect that.

I find the code a bit confusing here because the filter logic seems to be shared between GetFinalTasks and getNextTasks. I think that L415 could be:

   if facts.isFinalTask(t.PipelineTask.Name) {

so that we get all final tasks and the filtering is done only by getNextTasks.
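
Put together with the GetFinalTasks snippet quoted earlier in the thread, the suggested simplification would look roughly like this (a sketch of the suggestion, not the merged code):

// GetFinalTasks returns final tasks only when all DAG tasks have finished
// executing successfully, been skipped, or any one DAG task has failed.
func (facts *PipelineRunFacts) GetFinalTasks() PipelineRunState {
	tasks := PipelineRunState{}
	finalCandidates := sets.NewString()
	// check whether the pipeline has finished executing all DAG pipelineTasks
	// or any one of the DAG pipelineTasks has failed
	if facts.checkDAGTasksDone() {
		// collect every final task; getNextTasks decides which of them
		// actually need to be started (or retried) next
		for _, t := range facts.State {
			if facts.isFinalTask(t.PipelineTask.Name) {
				finalCandidates.Insert(t.PipelineTask.Name)
			}
		}
		tasks = facts.State.getNextTasks(finalCandidates)
	}
	return tasks
}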

Member

@afrittoli afrittoli left a comment

Thanks for this PR, I've only reviewed the code, not the tests yet.

I think there is one issue with the change in pipelinerunstate.go - if I'm correct it also means we don't have a test case for that. wdyt?

The rest seems good. I'll release tomorrow, so perhaps we can still get this fixed in time?

devholic pushed a commit to devholic/pipeline that referenced this pull request Jun 17, 2022
tektoncd#4583 (comment)

We don't have to filter final task candidates, since `getNextTasks`
returns only the tasks which need to be executed next.

Signed-off-by: Sunghoon Kang <[email protected]>
@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/reconciler/pipelinerun/resources/pipelinerunresolution.go 94.3% 94.0% -0.3


@devholic
Contributor Author

Thanks for the review @afrittoli 🙂

As you pointed out, delegating the task filtering logic to getNextTasks seems better in terms of clarity, and it still works with retrying failed tasks.

I updated the code with a test case. Could you take a look?

Member

@afrittoli afrittoli left a comment

Thank you for discovering and fixing this issue!
The new test coverage looks good to me - it catches the issues I highlighted on the previous revision.

/approve

@tekton-robot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: afrittoli

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tekton-robot tekton-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 17, 2022
@afrittoli
Member

@pritidesai @abayer if you manage to review this today it can still make the release :)

@abayer
Contributor

abayer commented Jun 17, 2022

/lgtm

@tekton-robot tekton-robot added the lgtm Indicates that a PR is ready to be merged. label Jun 17, 2022
@abayer
Contributor

abayer commented Jun 17, 2022

/retest

@abayer
Contributor

abayer commented Jun 17, 2022

/retest

@abayer
Contributor

abayer commented Jun 17, 2022

/retest

@tekton-robot tekton-robot merged commit b6e87ff into tektoncd:main Jun 18, 2022
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. kind/bug Categorizes issue or PR as related to a bug. lgtm Indicates that a PR is ready to be merged. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. release-note Denotes a PR that will be considered when it comes time to generate release notes. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Status of PipelineRun with Finally is flakey
7 participants