
IPv6 Cluster not resolving kind-control-plane #3114

Open
breuerfelix opened this issue Mar 2, 2023 · 24 comments
Labels
area/provider/docker (Issues or PRs related to docker) · kind/bug (Categorizes issue or PR as related to a bug)

breuerfelix commented Mar 2, 2023

What happened:
I created an IPv6 single-stack cluster and tried resolving kind-control-plane from inside the cluster, but it does not resolve to anything. In an IPv4 cluster, this works as expected.

What you expected to happen:
The DNS record gets resolved, as it does in an IPv4 cluster.

How to reproduce it (as minimally and precisely as possible):
kind create cluster --config ipv6.yaml

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  image: kindest/node:v1.26.0
networking:
  ipFamily: ipv6
  podSubnet: fd00:10:1::/56
  serviceSubnet: fd00:10:2::/112

kubectl create deploy multitool --image=ghcr.io/dergeberl/multitool-net

Open a shell in the container:
dig +short kind-control-plane
dig +short kind-control-plane aaaa
Neither resolves anything.
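For completeness, the shell can be opened with something along these lines (the exact command is assumed, not given in the report):

kubectl exec -it deploy/multitool -- bash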

Doing the same with the following config:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  image: kindest/node:v1.26.0
networking:
  ipFamily: ipv4
  podSubnet: 10.1.0.0/16
  serviceSubnet: 10.2.0.0/16

yields this output:

# dig +short kind-control-plane
172.18.0.2
# dig +short kind-control-plane aaaa
fc00:f853:ccd:e793::2

Anything else we need to know?:
IPv6 in general is working; resolving external domains and Kubernetes-internal DNS records also works (illustrative checks below).
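For illustration only (commands assumed, not part of the original report), checks like these from a pod in the IPv6 cluster show that only the docker container name fails to resolve:

# inside the multitool pod: cluster-internal and external names resolve fine
dig +short kubernetes.default.svc.cluster.local aaaa
dig +short google.com aaaa
# only the node container name fails
dig +short kind-control-plane aaaa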

Environment:

  • kind version: (use kind version): 0.14.0
  • Runtime info: (use docker info or podman info):
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 2
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.15.0-1029-gcp
 Operating System: Ubuntu 22.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.35GiB
 Name: felix-gardener-dev
 ID: 4C5N:TWVA:HRYV:WKBA:WAZS:SHAG:OEZD:SECJ:CKDR:SAE6:C3KU:FLMC
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
  • OS (e.g. from /etc/os-release):
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
  • Kubernetes version: (use kubectl version):
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.26.0

/cc @einfachnuralex

breuerfelix added the kind/bug label on Mar 2, 2023
@BenTheElder (Member)

dockerd is responsible for resolving this, though we do some magic to make the docker embedded DNS work inside pods.

https://docs.docker.com/config/containers/container-networking/#dns-services
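For illustration (commands assumed, not part of the original comment), that plumbing is visible on a node: its resolv.conf points at a non-loopback address, and the nat table carries DNAT rules toward docker's embedded resolver on 127.0.0.11. Chain names, addresses, and ports vary by setup:

# inspect the DNS plumbing on a kind node
docker exec kind-control-plane cat /etc/resolv.conf
docker exec kind-control-plane iptables-save -t nat | grep 127.0.0.11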

@BenTheElder (Member)

Can you test if it's working at the node level using docker exec kind-control-plane ...?

BenTheElder added the triage/needs-information label on Mar 10, 2023
@timebertt

TL;DR: in IPv6-only clusters, CoreDNS tries to talk to the dockerd upstream DNS server via IPv4.

On the node level, resolving the node's own hostname works as expected:

$ docker inspect kind-control-plane | jq '.[0].NetworkSettings.Networks["kind"]'
{
  "IPAMConfig": null,
  "Links": null,
  "Aliases": [
    "b62b5e1b2447",
    "kind-control-plane"
  ],
  "NetworkID": "e0d583719717c05f5eea301abf46ed9fc8c327f13c62d2acafb863146a5be117",
  "EndpointID": "147b7b078936194a13f357f1fe5ef92cf23aceec0005adaee8198667d7bc3981",
  "Gateway": "172.18.0.1",
  "IPAddress": "172.18.0.2",
  "IPPrefixLen": 16,
  "IPv6Gateway": "fc00:f853:ccd:e793::1",
  "GlobalIPv6Address": "fc00:f853:ccd:e793::2",
  "GlobalIPv6PrefixLen": 64,
  "MacAddress": "02:42:ac:12:00:02",
  "DriverOpts": null
}

$ docker exec -it kind-control-plane bash
root@kind-control-plane:/# dig kind-control-plane

; <<>> DiG 9.18.1-1ubuntu1.3-Ubuntu <<>> kind-control-plane
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10355
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;kind-control-plane.		IN	A

;; ANSWER SECTION:
kind-control-plane.	600	IN	A	172.18.0.2

;; Query time: 0 msec
;; SERVER: 172.18.0.1#53(172.18.0.1) (UDP)
;; WHEN: Mon Mar 13 15:41:05 UTC 2023
;; MSG SIZE  rcvd: 70

root@kind-control-plane:/# dig kind-control-plane aaaa

; <<>> DiG 9.18.1-1ubuntu1.3-Ubuntu <<>> kind-control-plane aaaa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48588
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;kind-control-plane.		IN	AAAA

;; ANSWER SECTION:
kind-control-plane.	600	IN	AAAA	fc00:f853:ccd:e793::2

;; Query time: 0 msec
;; SERVER: 172.18.0.1#53(172.18.0.1) (UDP)
;; WHEN: Mon Mar 13 15:41:09 UTC 2023
;; MSG SIZE  rcvd: 82

In the pod network, resolution (via CoreDNS) fails with SERVFAIL:

root@sh:/# dig kind-control-plane

; <<>> DiG 9.18.12-1ubuntu1-Ubuntu <<>> kind-control-plane
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 41728
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 0393c2e9cc2181a7 (echoed)
;; QUESTION SECTION:
;kind-control-plane.		IN	A

;; Query time: 1999 msec
;; SERVER: fd00:10:2::a#53(fd00:10:2::a) (UDP)
;; WHEN: Mon Mar 13 15:45:17 UTC 2023
;; MSG SIZE  rcvd: 59

root@sh:/# dig kind-control-plane aaaa

; <<>> DiG 9.18.12-1ubuntu1-Ubuntu <<>> kind-control-plane aaaa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 60818
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: a7961cf388de2ae7 (echoed)
;; QUESTION SECTION:
;kind-control-plane.		IN	AAAA

;; Query time: 0 msec
;; SERVER: fd00:10:2::a#53(fd00:10:2::a) (UDP)
;; WHEN: Mon Mar 13 15:45:23 UTC 2023
;; MSG SIZE  rcvd: 59

In the coredns logs, I found the following error messages:

$ kubectl -n kube-system logs deployment/coredns
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[ERROR] plugin/errors: 2 430338506190911966.9170001181764506698. HINFO: read udp [fd00:10:1::a]:45734->[2001:4860:4860::8844]:53: i/o timeout
[ERROR] plugin/errors: 2 430338506190911966.9170001181764506698. HINFO: read udp [fd00:10:1::a]:48607->[2001:4860:4860::8888]:53: i/o timeout
[ERROR] plugin/errors: 2 430338506190911966.9170001181764506698. HINFO: read udp [fd00:10:1::a]:55218->[2001:4860:4860::8844]:53: i/o timeout
[ERROR] plugin/errors: 2 430338506190911966.9170001181764506698. HINFO: read udp [fd00:10:1::a]:50309->[2001:4860:4860::8844]:53: i/o timeout
[ERROR] plugin/errors: 2 430338506190911966.9170001181764506698. HINFO: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 kind-control-plane. A: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 kind-control-plane. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable

CoreDNS (like the other pods) only has IPv6 connectivity. However, it only has the dockerd upstream DNS server configured over IPv4:

$ kubectl -n kube-system get po coredns-7899dd785b-tbmnf -oyaml | yq '.status.podIPs'
- ip: fd00:10:1::a
$ kubectl -n kube-system debug --image=alpine -it coredns-7899dd785b-tbmnf sh
Defaulting debug container name to debugger-g54v7.
If you don't see a command prompt, try pressing enter.
/ # ip r
/ # ip -f inet6 r
fd00:10:1::1 dev eth0  src fd00:10:1::a  metric 1024
fd00:10:1::/64 via fd00:10:1::1 dev eth0  src fd00:10:1::a  metric 1024
fe80::/64 dev eth0  metric 256
default via fd00:10:1::1 dev eth0  metric 1024
multicast ff00::/8 dev eth0  metric 256
/ # cat /etc/resolv.conf
search ... google.internal
nameserver 172.18.0.1
nameserver 2001:4860:4860::8888
nameserver 2001:4860:4860::8844
options ndots:0 edns0 trust-ad

As CoreDNS uses dnsPolicy: Default, it inherits the node's resolv.conf (a quick way to check this is sketched after the routing output below).
On the node this is not a problem, and talking to dockerd over IPv4 works because the node has dual-stack network connectivity:

$ docker exec -it kind-control-plane bash
root@kind-control-plane:/# cat /etc/resolv.conf
search ... google.internal
nameserver 172.18.0.1
nameserver 2001:4860:4860::8888
nameserver 2001:4860:4860::8844
options edns0 trust-ad ndots:0

root@kind-control-plane:/# ip r
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2

root@kind-control-plane:/# ip -f inet6 r
fc00:f853:ccd:e793::/64 dev eth0 proto kernel metric 256 pref medium
fd00:10:1::1 dev veth4e43b021 proto kernel metric 256 pref medium
fd00:10:1::1 dev veth0e62cc9b proto kernel metric 256 pref medium
fd00:10:1::1 dev veth5ecfbe78 proto kernel metric 256 pref medium
fd00:10:1::4 dev veth4e43b021 metric 1024 pref medium
fd00:10:1::7 dev veth0e62cc9b metric 1024 pref medium
fd00:10:1::a dev veth5ecfbe78 metric 1024 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev veth4e43b021 proto kernel metric 256 pref medium
fe80::/64 dev veth0e62cc9b proto kernel metric 256 pref medium
fe80::/64 dev veth5ecfbe78 proto kernel metric 256 pref medium
default via fc00:f853:ccd:e793::1 dev eth0 metric 1024 pref medium
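For reference, the dnsPolicy: Default point above can be checked directly (command assumed, not part of the original comment):

# CoreDNS runs with dnsPolicy: Default, so it copies the node's /etc/resolv.conf
kubectl -n kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.dnsPolicy}{"\n"}'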


BenTheElder commented Mar 17, 2023

fun 😬 , cc @aojea

@BenTheElder (Member)

thanks for the detailed information!


aojea commented Mar 17, 2023

kind/hack/ci/e2e-k8s.sh

Lines 192 to 217 in 0ebfe01

# IPv6 clusters need some CoreDNS changes in order to work in k8s CI:
# 1. k8s CI doesn't offer IPv6 connectivity, so CoreDNS should be configured
#    to work in an offline environment:
#    https://github.com/coredns/coredns/issues/2494#issuecomment-457215452
# 2. k8s CI adds the following domains to the resolv.conf search field:
#    c.k8s-prow-builds.internal google.internal.
#    CoreDNS should handle those domains and answer with NXDOMAIN instead of SERVFAIL,
#    otherwise pods stop trying to resolve the domain.
if [ "${IP_FAMILY:-ipv4}" = "ipv6" ]; then
  # Get the current config
  original_coredns=$(kubectl get -oyaml -n=kube-system configmap/coredns)
  echo "Original CoreDNS config:"
  echo "${original_coredns}"
  # Patch it
  fixed_coredns=$(
    printf '%s' "${original_coredns}" | sed \
      -e 's/^.*kubernetes cluster\.local/& internal/' \
      -e '/^.*upstream$/d' \
      -e '/^.*fallthrough.*$/d' \
      -e '/^.*forward . \/etc\/resolv.conf$/d' \
      -e '/^.*loop$/d' \
  )
  echo "Patched CoreDNS config:"
  echo "${fixed_coredns}"
  printf '%s' "${fixed_coredns}" | kubectl apply -f -
fi

long time friendly known issue 😄

BenTheElder removed the triage/needs-information label on Mar 17, 2023
@timebertt

We patched CoreDNS like the e2e-k8s.sh script does. However, we are still unable to resolve the kind docker container name.
We also tried adding the docker network gateway's IPv6 address to /etc/resolv.conf instead of the IPv4 address (see the comment above). With this, we always get connection refused. It seems the docker daemon only serves DNS on IPv4.

Conclusion: resolving a kind node's docker container name from within the pod network only works in IPv4 or dual-stack kind clusters, not in IPv6-only clusters.
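For illustration (commands assumed; the addresses come from the output earlier in this thread), the same behavior can be reproduced directly against the bridge gateway from a node:

# query docker's DNS on the kind bridge gateway from a node
docker exec kind-control-plane dig +short kind-control-plane @172.18.0.1             # answers over IPv4
docker exec kind-control-plane dig +short kind-control-plane @fc00:f853:ccd:e793::1  # connection refused over IPv6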


aojea commented Mar 20, 2023

we always get a connection refused. It seems like the docker daemon only serves DNS on IPv4.

Yeah, thanks for the detailed report. I admit I was too lazy to go over all the details, but indeed: if CoreDNS only has IPv6 and the docker embedded DNS only has IPv4, they cannot communicate unless (off the top of my head) we do some NAT64, run the embedded DNS server on IPv6, or give the CoreDNS pod dual-stack IPs.

I found moby/moby#41651, related to the embedded DNS missing IPv6.

@jon-nfc

jon-nfc commented Mar 23, 2023

I have the same issue as above: CoreDNS is not working for internal DNS resolution. The difference in this case is that I'm using IPv4; even removing the IPv4 config makes no difference. The deployed service is MariaDB.

Environment: Windows 10, WSL Docker Desktop

kind

C:\Users\user>kind --version
kind version 0.17.0

C:\Users\user>

Kubernetes

k8s-7c448ff89b-nk6ls:/apps# kubectl version --short --output=yaml
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
clientVersion:
  buildDate: "2023-02-22T13:39:03Z"
  compiler: gc
  gitCommit: fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b
  gitTreeState: clean
  gitVersion: v1.26.2
  goVersion: go1.19.6
  major: "1"
  minor: "26"
  platform: linux/amd64
kustomizeVersion: v4.5.7
serverVersion:
  buildDate: "2023-03-10T21:12:33Z"
  compiler: gc
  gitCommit: fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b
  gitTreeState: clean
  gitVersion: v1.26.2
  goVersion: go1.19.6
  major: "1"
  minor: "26"
  platform: linux/amd64

k8s-7c448ff89b-nk6ls:/apps#

docker

C:\Users\user>docker --version
Docker version 20.10.23, build 7155243

C:\Users\user>

All of these commands were run from a pod within the same cluster, using the following manifest.

kind-config.yaml
# this config file contains all config fields with comments
# NOTE: this is not a particularly useful config file
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# my custom settings
name: development-cluster
# networking:
#   ipFamily: ipv4
#   apiServerAddress: 127.0.0.1
  #podSubnet: "10.1.0.0/16" # possible issue however unknown. test me see: https://github.com/kubernetes-sigs/kind/issues/1216#issuecomment-621543446
  #serviceSubnet: "10.2.0.0/16" # see: https://github.com/kubernetes-sigs/kind/issues/1216#issuecomment-621543446
#######################################################################################3
# patch the generated kubeadm config with some extra settings
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    nodefs.available: "0%"

- |
  kind: ClusterConfiguration
  apiServer:
    extraArgs:
      "service-node-port-range": "1-65535"
# patch it further using a JSON 6902 patch
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
  version: v1beta3
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: my-hostname

#        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
#          endpoint = ["https://registry.k8s.io", "https://k8s.gcr.io"]
# check node with: docker exec -ti development-cluster-worker cat /etc/containerd/config.toml
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["http://host.docker.internal:5000"] 
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry-1.docker.io"]
          endpoint = ["http://host.docker.internal:5000"] 
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["http://host.docker.internal:5002"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
          endpoint = ["http://host.docker.internal:5002"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.io"]
          endpoint = ["http://host.docker.internal:5002"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["http://host.docker.internal:5001"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["http://host.docker.internal:5003"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
          endpoint = ["http://host.docker.internal:5004"]
        
  

# 1 control plane node and 3 workers
nodes:
# the control plane node config
- role: control-plane
  image: kindest/node:v1.26.2@sha256:c39462fc9f460e13627cbd835b7d1268e4fd1a82d23833864e33ac1aaa79ee7a
  #image: alpine/k8s:1.26.2@sha256:b397cb3a0886ad07098f785948bdc4d3eda786942b60d7579e529faaa4dd4a48
  labels:
    type: master
    node.kubernetes.io/name: "control-plane1"
    node.kubernetes.io/instance: "test-cluster"
    #node.kubernetes.io/version: "5.7.21"
    node.kubernetes.io/managed-by: kind
    node.kubernetes.io/instance-type: "kind" # what is the machine and/or hardware. i.e. GPU
    topology.kubernetes.io/zone: "kind.dev.testing.nww" # for targeting zone
  #  ingress: true
  # extraPortMappings:
  # - containerPort: 80
  #   hostPort: 80
  # - containerPort: 443
  #   hostPort: 443


# the three workers
- role: worker
  image: kindest/node:v1.26.2@sha256:c39462fc9f460e13627cbd835b7d1268e4fd1a82d23833864e33ac1aaa79ee7a
  #image: alpine/k8s:1.26.2@sha256:b397cb3a0886ad07098f785948bdc4d3eda786942b60d7579e529faaa4dd4a48
  labels:
  #   #app.kubernetes.io/name: worker
    type: worker
    node.kubernetes.io/name: "worker1"
    node.kubernetes.io/instance: "test-cluster"
    #node.kubernetes.io/version: "5.7.21"
    node.kubernetes.io/managed-by: kind
    node.kubernetes.io/instance-type: "kind" # what is the machine and/or hardware. i.e. GPU
    topology.kubernetes.io/zone: "kind.dev.testing.nww" # for targeting zone
  extraMounts:
      - hostPath: C:\\Users\\user\\Documents\\git\\kubernetes
        containerPath: /my-data
    # ingress: http
    # app: nginx
  #   #node-role.kubernetes.io/worker: worker

- role: worker
  image: kindest/node:v1.26.2@sha256:c39462fc9f460e13627cbd835b7d1268e4fd1a82d23833864e33ac1aaa79ee7a
  #image: alpine/k8s:1.26.2@sha256:b397cb3a0886ad07098f785948bdc4d3eda786942b60d7579e529faaa4dd4a48
  labels:
    type: worker
    node.kubernetes.io/name: "worker2"
    node.kubernetes.io/instance: "test-cluster"
    #node.kubernetes.io/version: "5.7.21"
    node.kubernetes.io/managed-by: kind
    node.kubernetes.io/instance-type: "kind" # what is the machine and/or hardware. i.e. GPU
    topology.kubernetes.io/zone: "kind.dev.testing.nww" # for targeting zone
  extraMounts:
      - hostPath: C:\\Users\\user\\Documents\\git\\kubernetes
        containerPath: /my-data
k8s pod manifest

k8s-pod.yaml

---

apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  labels:
    # app.kubernetes.io/component: exporter
    app.kubernetes.io/name: kubectl
    app.kubernetes.io/version: 2.3.0
  name: kubectl
  namespace: default

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: kubectl
  name: kubectl
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubectl
subjects:
- kind: ServiceAccount
  name: kubectl
  namespace: default
  # namespace: monitoring

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2023-03-20T05:15:06Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: kubectl
  resourceVersion: "72"
  uid: 95224e87-d870-4869-9bb0-11b07ce57836
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'

---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: k8s
    app.kubernetes.io/managed-by: manual
  name: k8s
  #namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      type: worker
  template:
    metadata:
      labels:
        type: worker
        app.kubernetes.io/name: k8s
    spec:
      automountServiceAccountToken: true
      containers:
        - name: k8s
          image: alpine/k8s:1.26.2
          #imagePullPolicy: Never
          command: ["/bin/sh"]
          args: ["-c", "bash"]
          stdin: true
          tty: true
          volumeMounts:
          - name: my-data
            mountPath: /apps  # in the container filesystem
          resources:
            limits:
              memory: "100M"
              cpu: "1000m"
            requests: 
              memory: 10M
              cpu: "100m"
      serviceAccountName: kubectl
      nodeSelector:
        type: worker
      volumes:
        - name: my-data
          hostPath:
            path: /my-data  # matches kind containerPath:
---
kubectl get -A svc
k8s-7c448ff89b-nk6ls:/apps# kubectl get -A svc
NAMESPACE     NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes        ClusterIP   10.96.0.1    <none>        443/TCP                  6m16s
default       mariadb-service   ClusterIP   None         <none>        3306/TCP                 4m57s
kube-system   kube-dns          ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   6m15s
k8s-7c448ff89b-nk6ls:/apps#
kubectl get -A pods -o wide
k8s-7c448ff89b-nk6ls:/apps# kubectl get -A pods -o wide
NAMESPACE            NAME                                                        READY   STATUS    RESTARTS       AGE     IP           NODE                                NOMINATED NODE   READINESS GATES
default              k8s-7c448ff89b-nk6ls                                        1/1     Running   0              4m54s   10.244.2.2   development-cluster-worker          <none>           <none>
default              mariadb-sts-0                                               1/1     Running   0              4m12s   10.244.1.3   development-cluster-worker2         <none>           <none>
default              mariadb-sts-1                                               1/1     Running   0              3m50s   10.244.2.4   development-cluster-worker          <none>           <none>
default              mariadb-sts-2                                               1/1     Running   0              3m29s   10.244.1.5   development-cluster-worker2         <none>           <none>
kube-system          coredns-787d4945fb-bmjkf                                    1/1     Running   0              5m16s   10.244.0.3   development-cluster-control-plane   <none>           <none>
kube-system          coredns-787d4945fb-vf6xw                                    1/1     Running   0              5m16s   10.244.0.4   development-cluster-control-plane   <none>           <none>
kube-system          etcd-development-cluster-control-plane                      1/1     Running   0              5m29s   172.18.0.4   development-cluster-control-plane   <none>           <none>
kube-system          kindnet-4hn7d                                               1/1     Running   0              4m59s   172.18.0.2   development-cluster-worker2         <none>           <none>
kube-system          kindnet-ggz5r                                               1/1     Running   0              5m16s   172.18.0.4   development-cluster-control-plane   <none>           <none>
kube-system          kindnet-n9wd6                                               1/1     Running   0              4m59s   172.18.0.3   development-cluster-worker          <none>           <none>
kube-system          kube-apiserver-development-cluster-control-plane            1/1     Running   0              5m29s   172.18.0.4   development-cluster-control-plane   <none>           <none>
kube-system          kube-controller-manager-development-cluster-control-plane   1/1     Running   0              5m29s   172.18.0.4   development-cluster-control-plane   <none>           <none>
kube-system          kube-proxy-2bb82                                            1/1     Running   0              4m59s   172.18.0.3   development-cluster-worker          <none>           <none>
kube-system          kube-proxy-gncn9                                            1/1     Running   0              5m16s   172.18.0.4   development-cluster-control-plane   <none>           <none>
kube-system          kube-proxy-x849w                                            1/1     Running   0              4m59s   172.18.0.2   development-cluster-worker2         <none>           <none>
kube-system          kube-scheduler-development-cluster-control-plane            1/1     Running   0              5m29s   172.18.0.4   development-cluster-control-plane   <none>           <none>
local-path-storage   local-path-provisioner-84f55fc489-f5vfh                     1/1     Running   1 (5m3s ago)   5m16s   10.244.0.2   development-cluster-control-plane   <none>           <none>
k8s-7c448ff89b-nk6ls:/apps#
nslookup mariadb-sts-0.mariadb-service.default.svc.cluster.local
k8s-7c448ff89b-nk6ls:/apps# nslookup mariadb-sts-0.mariadb-service.default.svc.cluster.local
Server:         10.96.0.10
Address:        10.96.0.10:53

** server can't find mariadb-sts-0.mariadb-service.default.svc.cluster.local: NXDOMAIN

** server can't find mariadb-sts-0.mariadb-service.default.svc.cluster.local: NXDOMAIN

k8s-7c448ff89b-nk6ls:/apps# 
(coredns connectivity) ping 10.244.0.3
k8s-7c448ff89b-nk6ls:/apps# ping 10.244.0.3
PING 10.244.0.3 (10.244.0.3): 56 data bytes
64 bytes from 10.244.0.3: seq=0 ttl=62 time=0.077 ms
64 bytes from 10.244.0.3: seq=1 ttl=62 time=0.174 ms
^C
--- 10.244.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.077/0.125/0.174 ms
k8s-7c448ff89b-nk6ls:/apps# 
(coredns connectivity) ping 10.244.0.4
k8s-7c448ff89b-nk6ls:/apps# ping 10.244.0.4
PING 10.244.0.4 (10.244.0.4): 56 data bytes
64 bytes from 10.244.0.4: seq=0 ttl=62 time=0.083 ms
64 bytes from 10.244.0.4: seq=1 ttl=62 time=0.130 ms
^C
--- 10.244.0.4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.083/0.106/0.130 ms
k8s-7c448ff89b-nk6ls:/apps# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
k8s-7c448ff89b-nk6ls:/apps#
external connectivity: ping google.com
k8s-7c448ff89b-nk6ls:/apps# ping google.com
PING google.com (142.251.221.78): 56 data bytes
64 bytes from 142.251.221.78: seq=0 ttl=35 time=510.016 ms
64 bytes from 142.251.221.78: seq=1 ttl=35 time=101.935 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 101.935/305.975/510.016 ms
k8s-7c448ff89b-nk6ls:/apps#
nslookup google.com
k8s-7c448ff89b-nk6ls:/apps# nslookup google.com
Server:         10.96.0.10
Address:        10.96.0.10:53

Non-authoritative answer:
Name:   google.com
Address: ::ffff:142.251.221.78
Name:   google.com
Address: 2404:6800:4006:811::200e

Non-authoritative answer:
Name:   google.com
Address: 142.251.221.78

k8s-7c448ff89b-nk6ls:/apps#
k8s-7c448ff89b-nk6ls:/apps#

cat /etc/resolv.conf
k8s-7c448ff89b-nk6ls:/apps# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
k8s-7c448ff89b-nk6ls:/apps#

Edit: added CoreDNS logs

Both CoreDNS pods' logs
k8s-7c448ff89b-nk6ls:/apps# kubectl logs -n kube-system coredns-787d4945fb-bmjkf
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/ready: Still waiting on: "kubernetes"
k8s-7c448ff89b-nk6ls:/apps#


k8s-7c448ff89b-nk6ls:/apps# kubectl logs -n kube-system coredns-787d4945fb-vf6xw
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 kind-control-plane. AAAA: read udp 10.244.0.4:39859->192.168.65.2:53: i/o timeout
[ERROR] plugin/errors: 2 kind-control-plane. A: read udp 10.244.0.4:35365->192.168.65.2:53: i/o timeout
[ERROR] plugin/errors: 2 kind-control-plane. A: read udp 10.244.0.4:48138->192.168.65.2:53: i/o timeout
[ERROR] plugin/errors: 2 kind-control-plane. AAAA: read udp 10.244.0.4:35414->192.168.65.2:53: i/o timeout
k8s-7c448ff89b-nk6ls:/apps#


aojea commented Mar 23, 2023

  "service-node-port-range": "1-65535"

You should not do that; you can have weird connectivity problems if one of the ports is taken by another process.

[ERROR] plugin/errors: 2 kind-control-plane. AAAA: read udp 10.244.0.4:39859->192.168.65.2:53: i/o timeout

Please open a new issue, and check first with a fresh cluster without any modifications.

@jon-nfc

jon-nfc commented Mar 24, 2023

  "service-node-port-range": "1-65535"

you should not do that, you can have weird connectivity problems if some of the ports is taking by another process

"IF"

[ERROR] plugin/errors: 2 kind-control-plane. AAAA: read udp 10.244.0.4:39859->192.168.65.2:53: i/o timeout

Yes, both IPv6 (AAAA) and IPv4 (A) have the issue. As for the 192.168.65.2 address, it is not on or part of any network I have.

[ERROR] plugin/errors: 2 kind-control-plane. AAAA: read udp 10.244.0.4:39859->192.168.65.2:53: i/o timeout
[ERROR] plugin/errors: 2 kind-control-plane. A: read udp 10.244.0.4:35365->192.168.65.2:53: i/o timeout

please open a new issue, and check first with a fresh cluster without any modifications

A new issue?????? I've demonstrated that this issue is not isolated to IPv6. It is also the same on a clean stack without mods. Of note, IPv6 errors still show when the stack is configured for IPv4.

k8s-7c448ff89b-m88vv:/apps# ping kind-control-plane
ping: bad address 'kind-control-plane'


k8s-7c448ff89b-m88vv:/apps# kubectl logs -n kube-system coredns-787d4945fb-nrh6v
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
k8s-7c448ff89b-m88vv:/apps# 


k8s-7c448ff89b-m88vv:/apps# kubectl logs -n kube-system coredns-787d4945fb-zrvmp
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[ERROR] plugin/errors: 2 kind-control-plane. A: read udp 10.244.0.3:44434->192.168.65.2:53: i/o timeout
[ERROR] plugin/errors: 2 kind-control-plane. AAAA: read udp 10.244.0.3:41449->192.168.65.2:53: i/o timeout
[ERROR] plugin/errors: 2 kind-control-plane. A: read udp 10.244.0.3:57983->192.168.65.2:53: i/o timeout
[ERROR] plugin/errors: 2 kind-control-plane. AAAA: read udp 10.244.0.3:58957->192.168.65.2:53: i/o timeout
k8s-7c448ff89b-m88vv:/apps#


k8s-7c448ff89b-m88vv:/apps# arp -a
? (10.244.2.1) at be:25:fd:d6:59:08 [ether]  on eth0
k8s-7c448ff89b-m88vv:/apps#


k8s-7c448ff89b-m88vv:/apps# kubectl get -A pods
NAMESPACE            NAME                                                        READY   STATUS    RESTARTS   AGE
default              k8s-7c448ff89b-m88vv                                        1/1     Running   0          40m
kube-system          coredns-787d4945fb-nrh6v                                    1/1     Running   0          41m
kube-system          coredns-787d4945fb-zrvmp                                    1/1     Running   0          41m
kube-system          etcd-development-cluster-control-plane                      1/1     Running   0          41m
kube-system          kindnet-gb6bb                                               1/1     Running   0          41m
kube-system          kindnet-rxkr7                                               1/1     Running   0          41m
kube-system          kindnet-sf6w4                                               1/1     Running   0          41m
kube-system          kube-apiserver-development-cluster-control-plane            1/1     Running   0          41m
kube-system          kube-controller-manager-development-cluster-control-plane   1/1     Running   0          41m
kube-system          kube-proxy-qbfpk                                            1/1     Running   0          41m
kube-system          kube-proxy-tvb8r                                            1/1     Running   0          41m
kube-system          kube-proxy-z9f25                                            1/1     Running   0          41m
kube-system          kube-scheduler-development-cluster-control-plane            1/1     Running   0          41m
local-path-storage   local-path-provisioner-84f55fc489-6l7sc                     1/1     Running   0          41m
k8s-7c448ff89b-m88vv:/apps#


BenTheElder commented Mar 24, 2023

new issue?????? I've demonstrated this issue is not isolated to IPv6. also the same on a clean stack without mods. of note, IPv6 errors still show when the stack is configured for IPv4

You've demonstrated that it's possible to have broken DNS without it being IPv6-related, but that is not the same root cause as IPv6-only clusters always having DNS broken because there is no IPv6 listener from docker. (#3114 (comment))

Your issue is probably #3054 if I had to guess (fix is at HEAD but not released yet), but we should find out on a distinct issue so as not to distract from the clear root cause for this issue.

BenTheElder added the area/provider/docker label on Mar 24, 2023
@jon-nfc

jon-nfc commented Mar 24, 2023

@BenTheElder

You've demonstrated that it's possible to have broken DNS without it being IPv6 related, but that is not the same root cause as IPv6-only always having DNS broken because there is no IPV6 listener from docker. (#3114 (comment))

yeah, thanks for the detailed report, I recognize I was too lazy to go over all the details, but indeed if CoreDNS only has IPv6 and docker embedded DNS only has IPv4 it is impossible to communicate unless (from the top of my head) we do some NAT64, embedded DNS server on IPv6 or CoreDNS pod has dual-stack IPs.

If CoreDNS only has IPv6 and not IPv4, I fail to see how having a different source stack (IPv4/IPv6) is relevant. The above comment alludes to CoreDNS only having IPv6 and the underlying docker not having the correct capabilities. However, I think it may be prudent to have debug/command output to confirm, as there may still be routing/NAT issues regardless of the source. Please advise, as I'm unaware of the underlying networking stack of kind.

What happened: I created an IPv6 Single stack cluster and tried resolving kind-control-plane from inside the cluster which does not resolve to anything. In an IPv4 Cluster, this works as expected.

What you expected to happen: The DNS record gets resolved as in IPv4 cluster.

This description is the same issue I have, except it's not isolated to IPv6.

Your issue is probably #3054 if I had to guess (fix is at HEAD but not released yet), but we should find out on a distinct issue so as not to distract from the clear root cause for this issue.

It appears to be unrelated, as my NAT tables do have the --dport match:

root@development-cluster-control-plane:/# iptables -t nat -S|grep -e '--to-destination 127.0.0.11'
-A DOCKER_OUTPUT -d 192.168.65.2/32 -p tcp -m tcp --dport 53 -j DNAT --to-destination 127.0.0.11:46097
-A DOCKER_OUTPUT -d 192.168.65.2/32 -p udp -m udp --dport 53 -j DNAT --to-destination 127.0.0.11:60021
root@development-cluster-control-plane:/#

Happy to run any further debugging commands that I haven't, so as to narrow down if in fact my issue is related or not.

@BenTheElder (Member)

If coreDNS only has IPv6 and not IPv4, I fail to see how having a different source stack IPv4/IPv6 is relevant. the above comment alludes to coreDNS only having IPv6 and the underlying docker not having the correct capabilities. However, I think it may be prudent to have debug/command output to confirm as there may still be routing/nat issues regardless of the source. Please advise, as I'm unaware of the underlying networking stack of kind.

No, we're talking about dockerd's embedded listener in #3114 (comment), which is the upstream of CoreDNS, not CoreDNS itself. #3114 (comment) mentions this known issue with docker upstream WRT not having an IPv6 listener, and docker's DNS is the upstream resolver as far as Kubernetes + CoreDNS is concerned here.

I'm sorry, I can't dig in further at the moment myself, I'm keeping a close eye on the first round of rollouts for https://kubernetes.io/blog/2023/03/10/image-registry-redirect/

@BenTheElder (Member)

The originally filed root issue here is going to be a very challenging problem to solve, and we're going to need to track this for some time. I can't think of a reasonable workaround off the top of my head until docker supports moby/moby#41651.

@timebertt

In our case, we stopped relying on kind-control-plane and instead inject another hostname into /etc/hosts on all kind nodes and into a CoreDNS hosts configuration. After creating the kind cluster, we look up the IP of the control plane node in the docker network with docker container inspect and configure our new hostname to resolve to this IP.
This workaround is good enough for us and is also portable: it can be used in a second kind cluster in the same network, it can be used for other cluster setups, and it can additionally be configured on the host machine.

With such an approach, kind could also "manually" inject hostname/IP pairs for all kind nodes into the docker containers and the CoreDNS configuration.
This would allow resolving at least the hostnames of nodes in the same cluster, though not of other docker containers in the same network.
That being said, we can live with our workaround. Given that switching to dual-stack kind also resolves the issue, I'm skeptical that a workaround in kind for moby/moby#41651 would be worth the effort :)
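For illustration, a minimal sketch of that kind of workaround (the alias name and the exact commands are assumptions, not from the comment above):

#!/usr/bin/env bash
# Sketch: make an extra hostname resolve to the control plane's IPv6 address
# on every kind node. The alias "kind-api.local" is made up for this example.
set -euo pipefail

alias_name="kind-api.local"
cp_ipv6=$(docker container inspect kind-control-plane \
  --format '{{ (index .NetworkSettings.Networks "kind").GlobalIPv6Address }}')

for node in $(kind get nodes); do
  docker exec "${node}" sh -c "echo '${cp_ipv6} ${alias_name}' >> /etc/hosts"
done

# The same hostname/IP pair can then be added to a "hosts" block in the coredns
# ConfigMap (kubectl -n kube-system edit configmap coredns) so pods resolve it too.
echo "control-plane IPv6: ${cp_ipv6}"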


aojea commented Mar 24, 2023

If coreDNS only has IPv6 and not IPv4, I fail to see how having a different source stack IPv4/IPv6 is relevant.

Well, it is the root cause 😄: since the embedded DNS doesn't have IPv6, something that is IPv6-only cannot talk to something that is IPv4-only unless you add a translation layer.

Environment: Windows 10, WSL Docker Desktop
This description is the same issue I have, except it's not isolated to IPv6.

@jon-nfc you are running in a totally different environment with a different underlay network. The problem here is root-caused; aggregating issues by the same symptom will only be more confusing to other users ...

@pmalek

pmalek commented Nov 10, 2023

Also got hit by this issue. Is our only hope waiting for moby/moby#41651 to be fixed?

@BenTheElder (Member)

Is our only hope waiting for moby/moby#41651 to be fixed?

No, what I meant in #3114 (comment) is that it's the only good solution I have off the top of my head; if someone proposes another good solution, we can consider it.

So far the other options are .... not good; for example, #3114 (comment) does not work on host reboot (which is part of why we use the embedded DNS).

I think it's possible to resolve this some other way but you basically wind up needing to recreate the embedded DNS anyhow 😅

@pmalek

pmalek commented Nov 10, 2023

#3114 (comment) is not a solution for me because I'm not only after kind-control-plane but after generally working DNS for IPv6.

@corhere

corhere commented Jul 9, 2024

Hi, Moby maintainer here. I recommend replacing the iptables "network magic" with a proxy DNS service on each kind node: some process that listens on IPv6 and forwards queries to Docker's embedded DNS resolver, such as a CoreDNS instance configured as a forwarder:

. {
  forward . /etc/resolv.conf.original
}

@BenTheElder (Member)

Hi, Moby maintainer here. I recommend replacing the iptables "network magic" with a proxy DNS service on each kind node; some process which listens on IPv6 and forwards queries to Docker's embedded DNS resolver, such as a CoreDNS instance configured as a forwarder:

Hi, thanks :-)

If we do that, then we're chaining pods' upstream requests through 3+ DNS resolvers every time, which seems a bit excessive ....

Aside from IPv6, using docker's existing resolver has been fine; we just had to work around the nested-container routing issue (i.e. use a non-loopback IP), and that little trick has worked pretty well.

Even if moby changed the implementation details, we could install a similar iptables rule to intercept $gateway:$dns_port => docker resolver.

Is there a reason moby can't provide an IPv6 enabled resolver?


BenTheElder commented Jul 9, 2024

Is there a reason moby can't provide an IPv6 enabled resolver?

moby/moby#47442 (comment) is a good point


aojea commented Jul 9, 2024

You can use NAT64; it solves this problem: https://github.com/aojea/nat64

    - name: Use DNS64 upstream DNS server
      run: |
        # Use Google Public DNS64 https://developers.google.com/speed/public-dns/docs/dns64 
        original_coredns=$(kubectl get -oyaml -n=kube-system configmap/coredns)
        echo "Original CoreDNS config:"
        echo "${original_coredns}"
        # Patch it
        fixed_coredns=$( printf '%s' "${original_coredns}" | sed 's/\/etc\/resolv.conf/[64:ff9b::8.8.8.8]:53/' )
        echo "Patched CoreDNS config:"
        echo "${fixed_coredns}"
        printf '%s' "${fixed_coredns}" | kubectl apply -f -
        kubectl -n kube-system rollout restart deployment coredns
        /usr/local/bin/kubectl wait --timeout=1m --for
