
UDPRoute orphan checking #47

Open
1 of 2 tasks
shaneutt opened this issue Dec 14, 2022 · 11 comments
Labels
area/controlplane area/dataplane area/maintenance blocked lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@shaneutt
Member

shaneutt commented Dec 14, 2022

Problem Statement

During #41, orphan checking was not added; we opted to follow up separately as that PR was already getting quite large. This issue is that follow-up: orphaned UDPRoutes (e.g. when their GatewayClass or Gateway becomes unmanaged, is deleted, etc.) need to be de-configured from the dataplane.

Prerequisites

Acceptance Criteria

  • with a valid and configured UDPRoute in place, if I delete its Gateway or GatewayClass its configuration gets removed from the dataplane
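The acceptance criterion above boils down to a simple decision: a UDPRoute should be torn down from the dataplane when none of its parent Gateways are both live and backed by a managed GatewayClass. A minimal sketch of that check, using hypothetical simplified types (not the actual Gateway API or controller code), might look like:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real Gateway API types.
type Gateway struct {
	Name      string
	ClassName string
}

type UDPRoute struct {
	Name       string
	ParentRefs []string // names of the Gateways this route attaches to
}

// isOrphaned reports whether a UDPRoute no longer has any live, managed
// parent Gateway, in which case its dataplane config should be removed.
func isOrphaned(route UDPRoute, gateways map[string]Gateway, managedClasses map[string]bool) bool {
	for _, ref := range route.ParentRefs {
		gw, ok := gateways[ref]
		if !ok {
			continue // Gateway was deleted
		}
		if managedClasses[gw.ClassName] {
			return false // at least one live, managed parent remains
		}
	}
	return true
}

func main() {
	gateways := map[string]Gateway{"gw-1": {Name: "gw-1", ClassName: "blixt"}}
	classes := map[string]bool{"blixt": true}
	route := UDPRoute{Name: "udp-1", ParentRefs: []string{"gw-1"}}

	fmt.Println(isOrphaned(route, gateways, classes)) // false: parent is live and managed

	delete(gateways, "gw-1") // simulate deletion of the Gateway
	fmt.Println(isOrphaned(route, gateways, classes)) // true: route is now orphaned
}
```

In a real reconciler this check would run whenever a Gateway or GatewayClass event is observed, triggering removal of the route's dataplane configuration.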
@shaneutt shaneutt added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. area/dataplane blocked area/controlplane area/maintenance labels Dec 14, 2022
@shaneutt shaneutt removed the blocked label Dec 14, 2022
@shaneutt shaneutt removed the good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. label Aug 31, 2023
@shaneutt shaneutt added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Oct 12, 2023
@shaneutt shaneutt modified the milestones: v0.3.0 - Gateway API Conformance, v0.4.0 - Primary Features Oct 12, 2023
@levikobi
Member

Hi @shaneutt! Can I work on this one?

@shaneutt
Member Author

Sure thing, sounds great! Thanks @levikobi, let us know if you need any assistance!

/assign @levikobi
/remove-label help-wanted

@kubernetes-sigs kubernetes-sigs deleted a comment from k8s-ci-robot Nov 27, 2023
@shaneutt shaneutt removed the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Nov 27, 2023
@levikobi
Member

levikobi commented Dec 3, 2023

I've recently switched to a new computer, and I can't seem to pull ghcr.io/kubernetes-sigs/blixt-controlplane:latest anymore. I've logged in to ghcr.io with my GitHub user, but I'm getting Error response from daemon: denied when trying to pull. Any idea what I'm missing here? 🙏🏻

@shaneutt
Member Author

shaneutt commented Dec 4, 2023

This relates to #106 and the fact that we need to move our images to the standard k8s image registry.

The ghcr.io images are private at the moment while we work on that transition as the kubernetes-sigs org doesn't want ghcr.io used for projects. I don't seem to be able to change any permissions on those.

What I've been doing, and something you can do in the meantime, is build the images locally:

$ make build.all.images

Does that work for you in the interim? Or is this still in your way?

@levikobi
Member

levikobi commented Dec 4, 2023

When trying to build the data plane image, the build fails with the following error (I'm on a mac with an ARM chip):

 => CANCELED [internal] load metadata for docker.io/library/alpine:latest                                                                                                                                                                0.6s
 => ERROR [internal] load metadata for docker.io/library/archlinux:latest                                                                                                                                                                0.6s
------
 > [internal] load metadata for docker.io/library/archlinux:latest:
------
Dockerfile:1
--------------------
   1 | >>> FROM archlinux as builder
   2 |
   3 |     RUN pacman -Syu --noconfirm
--------------------
ERROR: failed to solve: archlinux: no match for platform in manifest sha256:1f83ba0580a15cd6ad1d02d62ad432ddc940f53f07d0e39c8982d6c9c74e53e0: not found

So I've tried building using docker buildx build --platform linux/amd64, but then it fails with:

 => ERROR [builder  2/13] RUN pacman -Syu --noconfirm                                                                                                                                                                                    1.5s
------
 > [builder  2/13] RUN pacman -Syu --noconfirm:
0.152 :: Synchronizing package databases...
1.438  core downloading...
1.438  extra downloading...
1.438 error: failed retrieving file 'core.db' from geo.mirror.pkgbuild.com : SSL certificate problem: unable to get local issuer certificate
1.438 error: failed retrieving file 'extra.db' from geo.mirror.pkgbuild.com : SSL certificate problem: unable to get local issuer certificate
1.438 error: failed retrieving file 'core.db' from mirror.rackspace.com : SSL certificate problem: unable to get local issuer certificate
1.438 error: failed retrieving file 'extra.db' from mirror.rackspace.com : SSL certificate problem: unable to get local issuer certificate
1.438 error: failed retrieving file 'extra.db' from mirror.leaseweb.net : SSL certificate problem: unable to get local issuer certificate
1.438 error: failed retrieving file 'core.db' from mirror.leaseweb.net : SSL certificate problem: unable to get local issuer certificate
1.439 error: failed to synchronize all databases (download library error)
------
Dockerfile:3
--------------------
   1 |     FROM archlinux as builder
   2 |
   3 | >>> RUN pacman -Syu --noconfirm
   4 |     RUN pacman -S base-devel protobuf rustup --noconfirm
   5 |
--------------------
ERROR: failed to solve: process "/bin/sh -c pacman -Syu --noconfirm" did not complete successfully: exit code: 1

I don't want to take up too much of your time, just wanted to keep you updated.
I'll keep debugging this (though if you've encountered this problem before, I'd love to hear about it 😅).

@shaneutt
Member Author

shaneutt commented Dec 4, 2023

I only use Linux, and have not run into this. @astoycos have you run into stuff like this?

@tzssangglass
Contributor

tzssangglass commented Dec 27, 2023

I've recently switched to a new computer, and I can't seem to pull ghcr.io/kubernetes-sigs/blixt-controlplane:latest anymore. I've logged in to ghcr.io with my GitHub user, but I'm getting Error response from daemon: denied when trying to pull. Any idea what I'm missing here? 🙏🏻

I followed the README.md and hit the same problem. I tried to work around it, though I'm not sure if my approach is correct.

Step 1: build the images locally

make build.all.images

After the build succeeds, the images look like this:

docker images
REPOSITORY                                      TAG                                              IMAGE ID       CREATED          SIZE
ghcr.io/kubernetes-sigs/blixt-udp-test-server   integration-tests                                c81edd464d8c   7 minutes ago    13.1MB
ghcr.io/kubernetes-sigs/blixt-dataplane         integration-tests                                555fec176377   7 minutes ago    19.9MB
ghcr.io/kubernetes-sigs/blixt-controlplane      integration-tests                                662e7aa906d7   11 minutes ago   62.7MB

Step 2: replace the default images

Replace the latest images with the integration-tests ones built locally:

diff --git a/config/dataplane/dataplane.yaml b/config/dataplane/dataplane.yaml
index 042db12..fd2de06 100644
--- a/config/dataplane/dataplane.yaml
+++ b/config/dataplane/dataplane.yaml
@@ -20,7 +20,7 @@ spec:
       hostNetwork: true
       containers:
       - name: dataplane
-        image: ghcr.io/kubernetes-sigs/blixt-dataplane:latest
+        image: ghcr.io/kubernetes-sigs/blixt-dataplane:integration-tests
         securityContext:
           privileged: true
         args: ["-i", "eth0"]
diff --git a/config/manager/manager.yaml b/config/manager/manager.yaml
index b4ce802..f028503 100644
--- a/config/manager/manager.yaml
+++ b/config/manager/manager.yaml
@@ -38,7 +38,7 @@ spec:
         - /manager
         args:
         - --leader-elect
-        image: ghcr.io/kubernetes-sigs/blixt-controlplane:latest
+        image: ghcr.io/kubernetes-sigs/blixt-controlplane:integration-tests
         imagePullPolicy: IfNotPresent
         name: manager
         securityContext:

Step 3: load the images into kind

kind load docker-image -n blixt-dev ghcr.io/kubernetes-sigs/blixt-udp-test-server:integration-tests
kind load docker-image -n blixt-dev ghcr.io/kubernetes-sigs/blixt-dataplane:integration-tests
kind load docker-image -n blixt-dev ghcr.io/kubernetes-sigs/blixt-controlplane:integration-tests

Step 4: deploy

kubectl apply -k config/default

and check that the pods are running:

kubectl -n blixt-system get pods
NAME                                 READY   STATUS    RESTARTS   AGE
blixt-controlplane-b846c79d5-t64wk   2/2     Running   0          7m35s
blixt-dataplane-88ssq                1/1     Running   0          7m35s
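For what it's worth, the manual diff in step 2 can also be expressed without hand-editing the YAML, e.g. via kustomize's image override (a sketch, assuming kustomize is installed and that `config/manager` and `config/dataplane` each contain a kustomization.yaml as in a standard kubebuilder layout):

```shell
# Build the images locally (tagged integration-tests by the Makefile)
make build.all.images

# Point the manifests at the local tags instead of :latest
(cd config/manager && kustomize edit set image \
  ghcr.io/kubernetes-sigs/blixt-controlplane=ghcr.io/kubernetes-sigs/blixt-controlplane:integration-tests)
(cd config/dataplane && kustomize edit set image \
  ghcr.io/kubernetes-sigs/blixt-dataplane=ghcr.io/kubernetes-sigs/blixt-dataplane:integration-tests)

# Load into the kind cluster and deploy
kind load docker-image -n blixt-dev ghcr.io/kubernetes-sigs/blixt-dataplane:integration-tests
kind load docker-image -n blixt-dev ghcr.io/kubernetes-sigs/blixt-controlplane:integration-tests
kubectl apply -k config/default
```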

@levikobi
Member

Hi @tzssangglass, can you please share which OS/arch you're using? I'm guessing Linux/AMD64?

@tzssangglass
Contributor

Hi @levikobi, Linux (Ubuntu).

@shaneutt
Member Author

Note that this is now blocked on https://github.com/kubernetes-sigs/blixt/milestone/8, and the work should be done in the upcoming Rust control plane afterwards.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2024
@shaneutt shaneutt added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 24, 2024
@shaneutt shaneutt removed this from the v0.8.0 - Hardening FIXME milestone Oct 29, 2024
@shaneutt shaneutt added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Oct 29, 2024
@shaneutt shaneutt added this to the v0.8.0 - Hardening milestone Oct 29, 2024