use proper context instead of context.TODO() #157
Conversation
Thank you for your contribution.
test/kubetest/kubernetes.go
Outdated
@@ -38,6 +38,12 @@ import (
 	"k8s.io/client-go/kubernetes"
 )

+var parentCtx context.Context
I know it is more effort, but let's hand down the parent context like in the Go libs, instead of a package-wide state.
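A minimal sketch of what handing the context down could look like (the parseClusterRole helper and the exact signature are assumptions for illustration, not the PR's final shape):

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// The parent context is created once per scenario and handed down explicitly,
// like in the standard library, instead of living in a package-wide variable.
func createClusterRole(ctx context.Context, client kubernetes.Interface, content []byte) error {
	cr, err := parseClusterRole(content) // hypothetical decode helper
	if err != nil {
		return err
	}

	reqCtx, cancel := context.WithTimeout(ctx, 60*time.Second)
	defer cancel()

	_, err = client.RbacV1().ClusterRoles().Create(reqCtx, cr, metav1.CreateOptions{})
	return err
}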
test/kubetest/kubernetes.go
Outdated
@@ -96,40 +102,46 @@ func CreatedManifests(client kubernetes.Interface, paths ...string) Setup {
 }

 func createClusterRole(client kubernetes.Interface, ctx *ScenarioContext, content []byte) error {
+	reqCtx, cancel := context.WithTimeout(parentCtx, 120*time.Second)
If it is too much effort to hand down the parentCtx through the function parameters, it would be enough to start here with a context.Background.
Here and elsewhere in the PR.
nit: You are using multiples of 60s with a good reason. It would make a nice constant. WDYT?
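A sketch of how the two suggestions above could be combined; defaultTimeout and requestContext are illustrative names, not from the PR:

import (
	"context"
	"time"
)

// kube-apiserver's default request timeout is 60s, so the test helpers use
// multiples of it.
const defaultTimeout = 60 * time.Second

// requestContext is a hypothetical helper: rooted at context.Background()
// instead of a package-wide parentCtx, with 2*defaultTimeout matching the
// 120s used for the create calls above.
func requestContext() (context.Context, context.CancelFunc) {
	return context.WithTimeout(context.Background(), 2*defaultTimeout)
}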
test/kubetest/kubernetes.go
Outdated
 	ctx.AddFinalizer(func() error {
-		return client.RbacV1().ClusterRoles().Delete(context.TODO(), cr.Name, metav1.DeleteOptions{})
+		return client.RbacV1().ClusterRoles().Delete(reqCtx, cr.Name, metav1.DeleteOptions{})
To end the context two minutes after the creation of a cluster role sounds a little bit optimistic. Could we have an individual timeout for the finalizer?
I don't know how quickly the tests run on the CI pipeline, but finalizers are run at the end of the scenario and the Makefile has a 55m timeout.
Here and elsewhere in the PR
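One possible shape for an individual finalizer timeout, as suggested above (a sketch only; defaultTimeout and the surrounding variables are taken from the snippets under review, not from a final implementation):

ctx.AddFinalizer(func() error {
	// Derive a fresh context only when the finalizer actually runs, so the
	// clean-up is not bound to a timeout that started at creation time.
	delCtx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
	defer cancel()

	return client.RbacV1().ClusterRoles().Delete(delCtx, cr.Name, metav1.DeleteOptions{})
})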
@ibihim thanks for your valuable code review! I will try to implement these!
Looking forward to it! 😄
@ibihim hello again, I have tried to cover your recommendations, is that what you meant? :)
/lgtm, ping @s-urbaniak
I must admit that I am pretty new to the project and I hope you have patience with me: I was pondering why we have so much code for context and realized that we already have something like ScenarioContext.
In general your code is still lgtm as it improves and resolves the issue, but I already see a follow-up task and I am curious if you would like to continue down that path. If there is too much scope creep for you, I could think of adding a follow-up issue. We could also ask @s-urbaniak for a third opinion.
You can always ping me on the Kubernetes Slack, I am there more or less daily.
Agreed with @ibihim, and thank you @bilalcaliskan for the contribution 🎉
Hello again @ibihim, you mean something like below while defining ScenarioContext?
My concern with that approach is that if we define the context like that, we start counting the 60 seconds as soon as the context is defined, and I think we would then have to increase the timeout to 3600 seconds, for example. The purpose of defining multiple contexts was to handle each API request in a separate lifecycle.
Btw, are the E2E tests failing because of my change? I could not understand why they are failing.
Hey, @bilalcaliskan. Sorry, I was busy with other stuff. I will follow up on your issue this/next week. E2E tests are a beast 😄
Thanks @bilalcaliskan for continuing development. I was worried that you would be discouraged by the ongoing change requests. The point is to leverage the existing ScenarioContext.
edit: Exactly as you assumed it. Yes, I understand your concern. Let me sync with @s-urbaniak first.
@ibihim your suggestion SGTM
@bilalcaliskan, you can run the tests locally by executing:

rm -rf _output \
  && VERSION=local make container \
  && kind delete cluster \
  && kind create cluster --config test/e2e/kind-config/kind-config.yaml \
  && kind load docker-image quay.io/brancz/kube-rbac-proxy:local \
  && make test

Beware that it uses the default kind cluster.
IMHO I think there are two pathways.

One scenario-wide context that also carries the state:

func NameSpaceFrom(context.Context) string
func NameSpaceTo(context.Context, string) context.Context
func FinalizerFrom(context.Context) []Finalizer
func FinalizerTo(context.Context, []Finalizer) context.Context

func createClusterRoleScenarioContext(ctx context.Context, client kubernetes.Interface, content []byte) error {
	// with huge timeout; cr is decoded from content (omitted here)
	_, err := client.RbacV1().ClusterRoles().Create(ctx, cr, metav1.CreateOptions{})
	return err
}

Or an individual context with a small timeout per request:

func createClusterRoleIndividualContext(ctx context.Context, client kubernetes.Interface, content []byte) error {
	// with small timeout; cr is decoded from content (omitted here)
	ctx, cancel := context.WithTimeout(ctx, defaultTimeout)
	defer cancel()

	_, err := client.RbacV1().ClusterRoles().Create(ctx, cr, metav1.CreateOptions{})
	return err
}

But this is something that @s-urbaniak needs to decide, I am just a mini-maintainer.
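For the first pathway, the NameSpaceFrom/To and FinalizerFrom/To helpers could be backed by unexported context keys; this is only an illustrative sketch, not code from the PR:

import "context"

type Finalizer func() error

type namespaceKey struct{}
type finalizerKey struct{}

func NameSpaceTo(ctx context.Context, ns string) context.Context {
	return context.WithValue(ctx, namespaceKey{}, ns)
}

func NameSpaceFrom(ctx context.Context) string {
	ns, _ := ctx.Value(namespaceKey{}).(string)
	return ns
}

func FinalizerTo(ctx context.Context, fs []Finalizer) context.Context {
	return context.WithValue(ctx, finalizerKey{}, fs)
}

func FinalizerFrom(ctx context.Context) []Finalizer {
	fs, _ := ctx.Value(finalizerKey{}).([]Finalizer)
	return fs
}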
@ibihim I am working on the first scenario that you mentioned in your last message, but as you know it is a little tough to implement. Thanks for your patience.
@bilalcaliskan, it is cool that you are still working on it! 😄 Thanks for your patience with me!
Hey, how is it going? Do you need any help?
@bilalcaliskan, I looked into the topic and... oh my... Maybe it is not worth the effort at all to use it. Something along these lines might be enough:

func (s Scenario) Run(t *testing.T) bool {
	pctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
	defer cancel()

	ctx := &ScenarioContext{
		Ctx:       pctx, // is used for kube client calls
		Namespace: "default",
	}

	defer func(pctx *ScenarioContext) {
		// We can't use the main ctx as root as it has a different timeout.
		cleanupCtx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
		defer cancel()
		_ = cleanupCtx // would be handed to the finalizers if they accepted a context

		// Clean up in reverse order of creating it.
		for i := len(pctx.Finalizer) - 1; i >= 0; i-- {
			if err := pctx.Finalizer[i](); err != nil {
				panic(err)
			}
		}
	}(ctx)

	return t.Run(s.Name, func(t *testing.T) {
		// use ctx.Ctx for client calls and ctx.Finalizer to add clean-up functions
	})
}
I would close this issue as it is stale. Feel free to reopen it once you work on it again.
The PR is pretty stale; please reopen and rebase it if you want to continue working on it.
What this PR does / why we need it:
This PR uses a proper context instead of context.TODO() for kube-apiserver calls.
Which issue(s) this PR fixes:
Fixes #96
Special note(s) for reviewers:
I've set a 60-second timeout for each request in a function because the kube-apiserver default request timeout is 60s.
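A minimal sketch of that per-request pattern (the function name deleteClusterRole is illustrative, not from the PR):

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Each kube-apiserver call gets its own context, bounded by the 60s that the
// apiserver would allow for the request anyway.
func deleteClusterRole(client kubernetes.Interface, name string) error {
	reqCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	return client.RbacV1().ClusterRoles().Delete(reqCtx, name, metav1.DeleteOptions{})
}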