
feat: populate Programmed condition for KongConsumer #4409

Merged: 2 commits merged into main from feat/programmed-condition on Jul 26, 2023

Conversation

@czeslavo (Contributor) commented Jul 25, 2023

What this PR does / why we need it:

Populates Programmed condition for KongConsumer objects.

Which issue this PR fixes:

Part of #3344.

  • the CHANGELOG.md release notes have been updated to reflect any significant (and particularly user-facing) changes introduced by this PR
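
For context, the Programmed condition is a standard metav1.Condition entry on the object's status, set via the usual apimachinery helpers. A minimal sketch of populating it (the exact type, reason, and message strings here are illustrative assumptions, not copied from this PR):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Stand-in for a KongConsumer's status.conditions slice.
	var conditions []metav1.Condition

	// Report that the object was successfully configured in the data plane.
	// SetStatusCondition fills LastTransitionTime automatically when unset.
	meta.SetStatusCondition(&conditions, metav1.Condition{
		Type:               "Programmed",
		Status:             metav1.ConditionTrue,
		Reason:             "Programmed", // illustrative reason string
		Message:            "Object was successfully configured in the data plane.",
		ObservedGeneration: 1,
	})

	fmt.Printf("%+v\n", conditions[0])
}
```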

@czeslavo force-pushed the feat/programmed-condition branch 4 times, most recently from e0b4c92 to d6c2c27 on July 25, 2023 19:09
@codecov bot commented Jul 25, 2023

Codecov Report

Patch coverage: 95.7% and project coverage change: +0.2% 🎉

Comparison is base (86295c0) 66.5% compared to head (cd6a62e) 66.8%.
Report is 2 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##            main   #4409     +/-   ##
=======================================
+ Coverage   66.5%   66.8%   +0.2%     
=======================================
  Files        158     159      +1     
  Lines      18556   18624     +68     
=======================================
+ Hits       12352   12442     +90     
+ Misses      5454    5428     -26     
- Partials     750     754      +4     
Files Changed Coverage Δ
...trollers/configuration/zz_generated_controllers.go 30.0% <85.0%> (+2.3%) ⬆️
internal/controllers/utils/conditions.go 100.0% <100.0%> (ø)
internal/dataplane/parser/parser.go 78.3% <100.0%> (-1.7%) ⬇️
internal/manager/controllerdef.go 98.8% <100.0%> (+<0.1%) ⬆️

... and 5 files with indirect coverage changes

☔ View full report in Codecov by Sentry.

@czeslavo self-assigned this on Jul 25, 2023
@czeslavo added the area/feature (New feature or request) label on Jul 25, 2023
@czeslavo added this to the KIC v2.11.0 milestone on Jul 25, 2023
@czeslavo marked this pull request as ready for review on July 25, 2023 19:29
@czeslavo requested a review from a team as a code owner on July 25, 2023 19:29
@rainest (Contributor) commented Jul 25, 2023

What was the need for moving the referred objects update in the controllers? After that change, integration tests that involve Secrets are failing because the Secrets never get added to the store. Referred object updates (which tell the Secret controller to track the Secrets referenced by the Ingress or whatever) used to happen at line 404. The controller now exits and doesn't hit the referred update's new location at 429 because it's requeuing after seeing that the Ingress isn't configured in the proxy:

> github.com/kong/kubernetes-ingress-controller/v2/internal/controllers/configuration.(*NetV1IngressReconciler).Reconcile() ./internal/controllers/configuration/zz_generated_controllers.go:409 (PC: 0x3e6db97)
   404:		// if status updates are enabled report the status for the object
   405:		if r.DataplaneClient.AreKubernetesObjectReportsEnabled() {
   406:			log.V(util.DebugLevel).Info("determining whether data-plane configuration has succeeded", "namespace", req.Namespace, "name", req.Name)
   407:			if !r.DataplaneClient.KubernetesObjectIsConfigured(obj) {
   408:				log.V(util.DebugLevel).Info("resource not yet configured in the data-plane", "namespace", req.Namespace, "name", req.Name)
=> 409:				return ctrl.Result{Requeue: true}, nil // requeue until the object has been properly configured
   410:			}
   411:	
   412:			log.V(util.DebugLevel).Info("determining gateway addresses for object status updates", "namespace", req.Namespace, "name", req.Name)
   413:			addrs, err := r.DataplaneAddressFinder.GetLoadBalancerAddresses(ctx)
   414:			if err != nil {
(dlv) n

They never get configured because the parser can't find the Secrets they require:

ERRO[0631] resource processing failed: failed to fetch the secret (433a62f5-9bfe-4a93-8f88-2a9850a5bcd2/secret1)  GVK="networking.k8s.io/v1, Kind=Ingress" name=ingress1 namespace=433a62f5-9bfe-4a93-8f88-2a9850a5bcd2

@czeslavo force-pushed the feat/programmed-condition branch 3 times, most recently from fac6062 to 9b15fe4 on July 26, 2023 08:17
@czeslavo (Contributor, Author) commented Jul 26, 2023

> What was the need for moving the referred objects update in the controllers? After that change, integration tests that involve Secrets are failing because the Secrets never get added to the store. [...]

Yeah... I see. The motivation was that once a Consumer's referred Secret doesn't exist, we would never get Programmed: False with reason Invalid, which I believe would be desirable in such a case (we'd return from the references update and reconcile again until a Secret is found, never reaching the status update). I didn't foresee it would break other controllers.

I fixed this by making the order depend on ProgrammedConditionEnabled in the template. PTAL
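
A toy sketch of that idea (all names are illustrative and this only models the step ordering, not the real generated controller): when the Programmed condition is enabled, the status update runs before the referred-object update, so a missing Secret can still surface as Programmed: False with reason Invalid; otherwise the original order is kept and referred Secrets keep getting tracked before any early requeue.

```go
package main

import "fmt"

// reconcileSteps models only the ordering decision described above; the real
// logic lives in the controller template that generates
// zz_generated_controllers.go.
func reconcileSteps(programmedConditionEnabled bool) []string {
	var steps []string
	updateStatus := func() { steps = append(steps, "update status (Programmed condition)") }
	updateReferred := func() { steps = append(steps, "update referred objects") }

	if programmedConditionEnabled {
		updateStatus()
		updateReferred()
	} else {
		updateReferred()
		updateStatus()
	}
	return steps
}

func main() {
	for _, enabled := range []bool{true, false} {
		fmt.Println("ProgrammedConditionEnabled =", enabled, "->", reconcileSteps(enabled))
	}
}
```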

@czeslavo force-pushed the feat/programmed-condition branch 2 times, most recently from c604f6c to ca1749d on July 26, 2023 10:53
@czeslavo requested a review from randmonkey on July 26, 2023 10:55
@czeslavo added the area/CRD (Changes in existing CRDs or introduction of new ones) label on Jul 26, 2023
@rainest (Contributor) commented Jul 26, 2023

An envtest run encountered a data race: https://github.com/Kong/kubernetes-ingress-controller/actions/runs/5671617909/attempts/1?pr=4409 and https://gist.github.com/rainest/c65f15ad9fe052fb8981a224a9c7ab16

The failing code was added in https://github.com/Kong/kubernetes-ingress-controller/pull/4398/files, though this PR also includes a similar AddToScheme() call in a test that's likely also affected.

The race occurs in the depths of controller-runtime library code, and it's unclear if we actually did something wrong or if there's a bug in the underlying library. Reached out to CR chat to try and see: https://kubernetes.slack.com/archives/C02MRBMN00Z/p1690394875569169

We can presumably avoid this by manually managing our own locks to prevent invoking AddToScheme() and envtest.InstallCRDs() simultaneously, though that's a bit of a hack.

Not blocking this on that since the offending code is already in main, and because it's only a race in test harness code that wouldn't matter for release artifacts.
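
One way that lock-based workaround could look (a sketch only, assuming the race is between concurrent scheme registration and CRD installation within a single test binary; the helper names and structure are not from this repository):

```go
package testenv

import (
	"sync"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

// schemeMu serializes scheme mutation and CRD installation so that parallel
// tests in the same binary never run AddToScheme() and InstallCRDs() at once.
var schemeMu sync.Mutex

// buildScheme registers the built-in client-go types under the lock; real
// code would also register the project's CRD types here.
func buildScheme() (*runtime.Scheme, error) {
	schemeMu.Lock()
	defer schemeMu.Unlock()

	s := runtime.NewScheme()
	if err := clientgoscheme.AddToScheme(s); err != nil {
		return nil, err
	}
	return s, nil
}

// installCRDs installs CRDs from the given paths into the envtest cluster,
// holding the same lock as buildScheme.
func installCRDs(cfg *rest.Config, paths []string) ([]*apiextensionsv1.CustomResourceDefinition, error) {
	schemeMu.Lock()
	defer schemeMu.Unlock()

	return envtest.InstallCRDs(cfg, envtest.CRDInstallOptions{Paths: paths})
}
```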

@rainest enabled auto-merge (squash) on July 26, 2023 18:18
@rainest merged commit 8007500 into main on Jul 26, 2023
@rainest deleted the feat/programmed-condition branch on July 26, 2023 18:18
@czeslavo (Contributor, Author) commented

> An envtest run encountered a data race: [...]

It's being addressed in #4413.

Labels: area/CRD, area/feature, size/XL