
use proper context instead of context.TODO() #157

Closed
wants to merge 3 commits into from

Conversation

bilalcaliskan

What this PR does / why we need it:
This PR replaces context.TODO() with a proper context on kube-apiserver calls.

Which issue(s) this PR fixes:
Fixes #96

Special note(s) for reviewers:
I've set a 60-second timeout for each request in a function, because the kube-apiserver default request timeout is 60s.
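
In client-go terms, the change looks roughly like this (a sketch; cr and parentCtx are stand-ins, not the PR's exact identifiers):

// Before: placeholder context on every call.
_, err := client.RbacV1().ClusterRoles().Create(context.TODO(), cr, metav1.CreateOptions{})

// After (this PR): a bounded request context; 60s mirrors the apiserver's
// default request timeout.
reqCtx, cancel := context.WithTimeout(parentCtx, 60*time.Second)
defer cancel()
_, err = client.RbacV1().ClusterRoles().Create(reqCtx, cr, metav1.CreateOptions{})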

Collaborator

@ibihim ibihim left a comment


Thank you for your contribution.

@@ -38,6 +38,12 @@ import (
"k8s.io/client-go/kubernetes"
)

var parentCtx context.Context
Collaborator


I know it is more effort, but let's hand down the parent context like in the Go libraries, instead of keeping package-wide state.
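
A sketch of the suggestion (signatures are illustrative, and mustParseClusterRole is a hypothetical decode helper, not the repo's API): the context travels as the first parameter, Go-library style, instead of living in a package-level variable.

func createClusterRole(ctx context.Context, client kubernetes.Interface, sc *ScenarioContext, content []byte) error {
	// the parent context is handed down explicitly; no package-level parentCtx
	reqCtx, cancel := context.WithTimeout(ctx, 120*time.Second)
	defer cancel()

	cr := mustParseClusterRole(content) // hypothetical decode helper
	_, err := client.RbacV1().ClusterRoles().Create(reqCtx, cr, metav1.CreateOptions{})
	return err
}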

@@ -96,40 +102,46 @@ func CreatedManifests(client kubernetes.Interface, paths ...string) Setup {
}

func createClusterRole(client kubernetes.Interface, ctx *ScenarioContext, content []byte) error {
reqCtx, cancel := context.WithTimeout(parentCtx, 120*time.Second)
Collaborator


If it is too much effort to hand down the parentCtx through the function parameters, it would be enough to start here with a context.Background.

Here and elsewhere in the PR.
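
The fallback, sketched: root each request locally instead of threading a parent through every signature.

reqCtx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()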

Collaborator


nit: You are using multiples of 60s with a good reason. It would make a nice constant. WDYT?
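
The nit as code (the constant name is a suggestion, not something from the repo):

// apiServerTimeout mirrors the kube-apiserver default request timeout of 60s.
const apiServerTimeout = 60 * time.Second

// usage: context.WithTimeout(parentCtx, 2*apiServerTimeout) for the 120s case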


ctx.AddFinalizer(func() error {
-	return client.RbacV1().ClusterRoles().Delete(context.TODO(), cr.Name, metav1.DeleteOptions{})
+	return client.RbacV1().ClusterRoles().Delete(reqCtx, cr.Name, metav1.DeleteOptions{})
Collaborator

@ibihim ibihim Jan 31, 2022


Ending the context two minutes after the creation of a cluster role sounds a little optimistic. Could we have an individual timeout for the finalizer?

I don't know how quickly the tests run in the CI pipeline, but finalizers are run at the end of the scenario, and the Makefile has a 55m timeout.

Here and elsewhere in the PR
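
One way to give the finalizer its own budget (a sketch reusing the diff's AddFinalizer, client, and cr): create the deletion context when the finalizer runs, so the scenario's duration does not eat into the cleanup budget.

ctx.AddFinalizer(func() error {
	// the clock starts at cleanup time, not at resource creation time
	delCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	return client.RbacV1().ClusterRoles().Delete(delCtx, cr.Name, metav1.DeleteOptions{})
})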

@bilalcaliskan
Author

@ibihim thanks for your valuable code review! I will try to implement these!

@ibihim
Collaborator

ibihim commented Feb 10, 2022

Looking forward to it! 😄

@bilalcaliskan
Author

@ibihim hello again, I have tried to cover your recommendations. Is that what you meant? :)

@ibihim
Collaborator

ibihim commented Feb 16, 2022

/lgtm, ping @s-urbaniak

@ibihim
Collaborator

ibihim commented Feb 17, 2022

@bilalcaliskan,

I must admit that I am pretty new to the project and I hope you have patience with me: I was pondering why we have so much code for context handling and realized that we already have something like ScenarioContext. Maybe it would be nice to add the context.Context to the ScenarioContext and hand it down that way. WDYT?

In general your code is still lgtm, as it improves things and resolves the issue, but I already see a follow-up task and I am curious whether you would like to continue down that path. If that is too much scope creep for you, I could add a follow-up issue instead. We could also ask @s-urbaniak for a third opinion.

You can always ping me on the kubernetes slack, I am there more or less daily.

@s-urbaniak
Collaborator

agreed with @ibihim and thank you @bilalcaliskan for the contribution 🎉

@bilalcaliskan
Author

hello again @ibihim, do you mean something like the snippet below when defining the ScenarioContext?

baseContext, baseCancel := context.WithTimeout(context.Background(), 60*time.Second)
ctx := &ScenarioContext{
	Namespace:  "default",
	Context:    baseContext,
	CancelFunc: baseCancel,
}

My concern with that approach is that the 60-second countdown starts as soon as we define the context, so I think we would have to increase the timeout to something like 3600 seconds. The purpose of defining multiple contexts was actually to handle each API request in a separate lifecycle.
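
The concern, illustrated with a self-contained snippet: WithTimeout fixes the deadline at creation time, so every call sharing a scenario-wide context races against the same clock.

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	// The deadline is already fixed here, before any API call is made.
	deadline, _ := ctx.Deadline()
	fmt.Println("expires in:", time.Until(deadline).Round(time.Second))
}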

@bilalcaliskan
Author

btw, are the E2E tests failing because of my change? I could not understand why they are failing

@ibihim
Collaborator

ibihim commented Mar 24, 2022

Hey, @bilalcaliskan. Sorry. I was busy with other stuff. I will follow up on your issue this / next week.

E2E tests are a beast 😄

@ibihim
Collaborator

ibihim commented Mar 24, 2022

Thanks, @bilalcaliskan, for continuing development. I was worried that you would be discouraged by the ongoing change requests.

The point is to leverage the existing ScenarioContext and add a context.Context to it. Then you could fork the context.Background in one central place, where the ScenarioContext is initialized. I am not sure about the Finalizers, though: whether we would go with a very big timeout or create a fresh context for them.

edit: Exactly as you assumed. Yes, I understand your concern. Let me sync with @s-urbaniak first.

@s-urbaniak
Collaborator

@ibihim your suggestion SGTM

@ibihim
Collaborator

ibihim commented Mar 30, 2022

@bilalcaliskan, you can run the tests locally by executing:

rm -rf _output \
    && VERSION=local make container \
    && kind delete cluster  \
    && kind create cluster --config test/e2e/kind-config/kind-config.yaml \
    && kind load docker-image quay.io/brancz/kube-rbac-proxy:local \
    && make test

Beware that it uses the default kind cluster.

@ibihim
Collaborator

ibihim commented Mar 30, 2022

IMHO there are two pathways:

  1. Continue using ScenarioContext, leverage the existing setTimeout, and come up with another helper that creates a forked context, looking identical to WithTimeout... (see the sketch below)
  2. OR (my preferred solution) drop all the custom context logic, which might be older than the native context package, and use the native context package.

But this is something that @s-urbaniak needs to decide; I am just a mini-maintainer.

  1. would look like so:

func NameSpaceFrom(ctx context.Context) string
func NameSpaceTo(ctx context.Context, ns string) context.Context
func FinalizerFrom(ctx context.Context) []Finalizer
func FinalizerTo(ctx context.Context, fs []Finalizer) context.Context

func createClusterRoleScenarioContext(ctx context.Context, client kubernetes.Interface, content []byte) error {
	// ctx arrives with a huge timeout; cr is decoded from content (omitted)
	_, err := client.RbacV1().ClusterRoles().Create(ctx, cr, metav1.CreateOptions{})
	return err
}

func createClusterRoleIndividualContext(ctx context.Context, client kubernetes.Interface, content []byte) error {
	// derive a small per-request timeout from the handed-down context
	ctx, cancel := context.WithTimeout(ctx, defaultTimeout)
	defer cancel()
	_, err := client.RbacV1().ClusterRoles().Create(ctx, cr, metav1.CreateOptions{})
	return err
}

@bilalcaliskan
Author

@ibihim I am working on the first scenario that you mentioned in your last message, but as you know it is a little tough to implement. Thanks for your patience.

@ibihim
Collaborator

ibihim commented Apr 29, 2022

@bilalcaliskan, it is cool that you are still working on it! 😄 Thanks for your patience with me!

@ibihim
Collaborator

ibihim commented Jun 3, 2022

Hey, how is it going? Do you need any help?

@ibihim
Collaborator

ibihim commented Jun 28, 2022

@bilalcaliskan, I looked into the topic and... oh my... context.Context is immutable (which is what makes it thread-safe), so we can't add Finalizers to it, unless we do some very weird things like here, where we use a pointer to a slice of Finalizers 😄
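
The pointer trick, sketched (the key and helper names are hypothetical): appending to a slice stored by value in a context would not be visible to other holders of that context, whereas storing a *[]Finalizer shares the mutation. Note it would still need its own locking for concurrent use.

type Finalizer func() error

type finalizerKey struct{}

// FinalizerTo stores a shared, mutable finalizer list in the context.
func FinalizerTo(ctx context.Context, fs *[]Finalizer) context.Context {
	return context.WithValue(ctx, finalizerKey{}, fs)
}

// AddFinalizer appends through the pointer, visible to every holder of ctx.
func AddFinalizer(ctx context.Context, f Finalizer) {
	if fs, ok := ctx.Value(finalizerKey{}).(*[]Finalizer); ok {
		*fs = append(*fs, f)
	}
}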

Maybe it's not worth the effort to use context.Context for this at all. Maybe it is just easier to do something like this:

func (s Scenario) Run(t *testing.T) bool {
	pctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
	defer cancel()

	ctx := &ScenarioContext{
		Ctx:       pctx, // is used for kube client calls
		Namespace: "default",
	}

	defer func(sctx *ScenarioContext) {
		// We can't reuse the main ctx as root, as it has a different timeout.
		cleanupCtx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
		defer cancel()
		_ = cleanupCtx // would be handed to finalizers once they accept a context

		// Clean up in reverse order of creation.
		for i := len(sctx.Finalizer) - 1; i >= 0; i-- {
			if err := sctx.Finalizer[i](); err != nil {
				panic(err)
			}
		}
	}(ctx)

	return t.Run(s.Name, func(t *testing.T) { /* use ctx.Ctx for client calls and ctx.Finalizer to add cleanup functions */ })
}

@ibihim
Collaborator

ibihim commented Nov 28, 2022

I will close this PR as it is stale. Feel free to reopen it once you work on it again.

@ibihim
Collaborator

ibihim commented Dec 14, 2022

The PR is pretty stale; please reopen and rebase it if you want to continue working on it.

@ibihim ibihim closed this Dec 14, 2022