
kubectl-view_allocations plugin does not work through kube access #15106

Open
programmerq opened this issue Aug 1, 2022 · 2 comments
Assignees
AntonAM

Labels
bug, c-dv (Internal Customer Reference), good-starter-issue (Good starter issue to start contributing to Teleport), kubernetes-access

Comments

programmerq (Contributor) commented Aug 1, 2022

When accessing a cluster via Teleport, the kubectl-view-allocations plugin reports an error.

The plugin does seem to support exec credentials, since my aws-iam-authenticator exec credential for my EKS cluster works fine with it. It's only when I use it through kube access that it outputs an error:

{"v":0,"name":"kubectl_view_allocations","msg":"failed \ncli: CliOpts { context: None, namespace: None, utilization: false, show_zero: false, resource_name: [], group_by: [resource, node, pod], output: table }\nerror: KubeError { context: \"list nodes\", source: Api(ErrorResponse { status: \"404 Not Found\", message: \"\\\"404 page not found\\\\n\\\"\", reason: \"Failed to parse error data\", code: 404 }) }","level":50,"hostname":"tam.local","pid":52043,"time":"2022-08-01T22:14:33.925757Z","target":"kubectl_view_allocations","line":43,"file":"src/main.rs"}

Bug details:

  • Teleport version
    Teleport v10.0.2 git:v10.0.2-0-g47e0914fb go1.18.3
    Proxy version: 10.0.2
    
  • Recreation steps
    Install the kubectl-view_allocations plugin. Run tsh kube login for a cluster you have full access to. Run kubectl-view_allocations with no arguments.
  • Debug logs

Here's the debug output from the kubernetes_service instance when the failing request is made:

2022-08-01T22:17:40Z DEBU [PROXY:AGE] Transport request: teleport-transport. leaseID:1 target:z6085.telepath.cf:443 cluster:z6085.telepath.cf reversetunnel/agent.go:562
2022-08-01T22:17:40Z DEBU [PROXY:AGE] Received out-of-band proxy transport request for remote.kube.proxy.teleport.cluster.local [e8f55ed8-e1e9-4aa2-9108-759b7a88fdbe.z6085.telepath.cf]. cluster:z6085.telepath.cf reversetunnel/transport.go:206
2022-08-01T22:17:40Z DEBU [PROXY:AGE] Handing off connection to a local kubernetes service cluster:z6085.telepath.cf reversetunnel/transport.go:246
2022-08-01T22:17:40Z DEBU [AUTH]      ClientCertPool -> cert(z6085.telepath.cf issued by z6085.telepath.cf:261208543321147499061890457269898569198) auth/middleware.go:680
2022-08-01T22:17:40Z DEBU [AUTH]      ClientCertPool -> cert(z6085.telepath.cf issued by z6085.telepath.cf:182036188414456424001935538492062924613) auth/middleware.go:680
2022-08-01T22:17:40Z DEBU [RBAC]      Access to kube_cluster "kind-kind" granted, allow rule in role "admin" matched. services/role.go:2040
2022-08-01T22:17:40Z DEBU [RBAC]      Access to kube_cluster "kind-kind" granted, allow rule in role "contractor" matched. services/role.go:2040
2022-08-01T22:17:40Z DEBU [KUBERNETE] Handling kubernetes session for user: admin, users: map[admin:{}], groups: map[system:authenticated:{} system:masters:{}], teleport cluster: z6085.telepath.cf, kube cluster: kind-kind using local credentials. proxy/forwarder.go:1749
2022-08-01T22:17:40Z INFO [KUBERNETE] Round trip: GET https://172.19.0.2:6085/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue, code: 200, duration: 42.362086ms tls:version: 304, tls:resume:false, tls:csuite:1301, tls:server:kube.teleport.cluster.local forward/fwd.go:187

(no mention of a 404, just a successful listing of services)
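
Since no 404 shows up in the kubernetes_service logs, the failing request apparently never reaches it. For comparison, the same list-nodes call can be issued with client-go, which reads the kubeconfig written by tsh kube login, including its tls-server-name field. A minimal sketch (hypothetical, not part of the original report; it assumes the kubeconfig is at the default path):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig written by `tsh kube login`. clientcmd copies the
	// cluster's `tls-server-name` field into the rest config's TLS settings,
	// which is what ends up as the SNI value on the connection to the proxy.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println("SNI server name from kubeconfig:", config.TLSClientConfig.ServerName)

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The same request the plugin fails on: listing nodes.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d nodes\n", len(nodes.Items))
}
```

If this program lists nodes successfully while the plugin keeps returning 404, the problem is on the plugin's client side rather than in the kubernetes_service.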

Here's the raw http response for the successful service listing (intercepted and extracted from the communication between the kubernetes_service and the actual kubernetes api):

HTTP/1.1 200 OK
Audit-Id: 1563da05-d37e-409a-8a06-fcb5b7aef675
Cache-Control: no-cache, private
Content-Type: application/json
X-Kubernetes-Pf-Flowschema-Uid: 9ce21521-7da8-4e02-8757-0a7725be0e60
X-Kubernetes-Pf-Prioritylevel-Uid: b87a94d0-11bf-485a-9f4b-f33442050a20
Date: Mon, 01 Aug 2022 22:06:58 GMT
Content-Length: 1673

{"kind":"ServiceList","apiVersion":"v1","metadata":{"resourceVersion":"6151"},"items":[{"metadata":{"name":"kube-dns","namespace":"kube-system","uid":"eb44eebc-cfcd-4476-bc78-b7e231b600c3","resourceVersion":"233","creationTimestamp":"2022-08-01T20:27:25Z","labels":{"k8s-app":"kube-dns","kubernetes.io/cluster-service":"true","kubernetes.io/name":"CoreDNS"},"annotations":{"prometheus.io/port":"9153","prometheus.io/scrape":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-08-01T20:27:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:prometheus.io/port":{},"f:prometheus.io/scrape":{}},"f:labels":{".":{},"f:k8s-app":{},"f:kubernetes.io/cluster-service":{},"f:kubernetes.io/name":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":53,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":53,\"protocol\":\"UDP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":9153,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}}}]},"spec":{"ports":[{"name":"dns","protocol":"UDP","port":53,"targetPort":53},{"name":"dns-tcp","protocol":"TCP","port":53,"targetPort":53},{"name":"metrics","protocol":"TCP","port":9153,"targetPort":9153}],"selector":{"k8s-app":"kube-dns"},"clusterIP":"10.96.0.10","clusterIPs":["10.96.0.10"],"type":"ClusterIP","sessionAffinity":"None","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","internalTrafficPolicy":"Cluster"},"status":{"loadBalancer":{}}}]}

Teleport proxy node debug logs that appear during the kubectl-view_allocations invocation:

2022-08-01T22:21:28Z DEBU [AUTH]      ClientCertPool -> cert(z6085.telepath.cf issued by z6085.telepath.cf:261208543321147499061890457269898569198) auth/middleware.go:680
2022-08-01T22:21:28Z DEBU [AUTH]      ClientCertPool -> cert(z6085.telepath.cf issued by z6085.telepath.cf:182036188414456424001935538492062924613) auth/middleware.go:680
2022-08-01T22:21:28Z DEBU [RBAC]      Access to kube_cluster "kind-kind" granted, allow rule in role "admin" matched. services/role.go:2040
2022-08-01T22:21:28Z DEBU [RBAC]      Access to kube_cluster "kind-kind" granted, allow rule in role "contractor" matched. services/role.go:2040
2022-08-01T22:21:28Z DEBU [PROXY:PRO] Kubernetes session for user: admin, users: map[admin:{}], groups: map[system:authenticated:{} system:masters:{}], teleport cluster: z6085.telepath.cf, kube cluster: kind-kind forwarded to remote kubernetes_service instance. kube_service.endpoints:[{remote.kube.proxy.teleport.cluster.local e8f55ed8-e1e9-4aa2-9108-759b7a88fdbe.z6085.telepath.cf []}] proxy/forwarder.go:1772
2022-08-01T22:21:28Z DEBU [PROXY:SER] Dialing from: "192.168.74.170:38772" to: "remote.kube.proxy.teleport.cluster.local". trace.fields:map[cluster:z6085.telepath.cf] reversetunnel/localsite.go:202
2022-08-01T22:21:28Z DEBU [PROXY:SER] Tunnel dialing to e8f55ed8-e1e9-4aa2-9108-759b7a88fdbe.z6085.telepath.cf. trace.fields:map[cluster:z6085.telepath.cf] reversetunnel/localsite.go:319
2022-08-01T22:21:28Z DEBU [PROXY:SER] Connecting to 192.168.74.15:56826 through tunnel. trace.fields:map[cluster:z6085.telepath.cf] reversetunnel/localsite.go:598
2022-08-01T22:21:28Z DEBU [PROXY:SER] Succeeded dialing from: "192.168.74.170:38772" to: "remote.kube.proxy.teleport.cluster.local". trace.fields:map[cluster:z6085.telepath.cf] reversetunnel/localsite.go:208
2022-08-01T22:21:28Z INFO [AUDIT]     kube.request addr.local:172.19.0.2:6085 addr.remote:18.116.159.225:443 cluster_name:z6085.telepath.cf code:T3009I ei:0 event:kube.request kubernetes_cluster:kind-kind kubernetes_groups:[system:authenticated system:masters] kubernetes_users:[admin] login:admin namespace:default proto:kube request_path:/api/v1/namespaces/kube-system/services resource_api_group:core/v1 resource_kind:services resource_namespace:kube-system response_code:200 server_id:e8f55ed8-e1e9-4aa2-9108-759b7a88fdbe time:2022-08-01T22:21:28.963Z uid:8c484470-03ff-45b2-80a7-808812497179 user:admin verb:GET events/emitter.go:263
2022-08-01T22:21:28Z INFO [PROXY:PRO] Round trip: GET https://kube.teleport.cluster.local/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue, code: 200, duration: 189.058733ms tls:version: 304, tls:resume:false, tls:csuite:1301, tls:server:kube.z6085.telepath.cf forward/fwd.go:187
2022-08-01T22:21:29Z DEBU [AUTH]      ClientCertPool -> cert(z6085.telepath.cf issued by z6085.telepath.cf:261208543321147499061890457269898569198) auth/middleware.go:680
2022-08-01T22:21:29Z DEBU [AUTH]      ClientCertPool -> cert(z6085.telepath.cf issued by z6085.telepath.cf:182036188414456424001935538492062924613) auth/middleware.go:680

gz#6085

programmerq added the bug and c-dv (Internal Customer Reference) labels Aug 1, 2022
r0mant added the good-starter-issue (Good starter issue to start contributing to Teleport) label Aug 3, 2022
r0mant assigned r0mant and AntonAM and unassigned r0mant Aug 3, 2022
AntonAM (Contributor) commented Aug 24, 2022

After some investigation, it looks like the root cause is that kube-rs, the Kubernetes client library used by the kubectl-view-allocations plugin, doesn't support tls-server-name in the kubeconfig, so it doesn't send the correct server name in SNI for TLS routing. I've opened an issue on kube-rs: kube-rs/kube#991
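
For context, Teleport's multiplexed proxy port uses the SNI value from the TLS ClientHello to decide where to route a connection; the tls:server:kube.teleport.cluster.local field in the round-trip logs above shows the name a client that honors tls-server-name sends. A rough sketch of the difference at the TLS layer (placeholder proxy address, plain crypto/tls rather than kube-rs code):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// dialProxy opens a TLS connection to the Teleport proxy with an explicit SNI
// value, mirroring what the kubeconfig's `tls-server-name` field is for.
func dialProxy(proxyAddr, serverName string) {
	conn, err := tls.Dial("tcp", proxyAddr, &tls.Config{ServerName: serverName})
	if err != nil {
		fmt.Printf("dial with SNI %q failed: %v\n", serverName, err)
		return
	}
	defer conn.Close()
	fmt.Printf("connected to %s with SNI %q\n", proxyAddr, serverName)
}

func main() {
	// Placeholder proxy address; the real one comes from the kubeconfig.
	const proxyAddr = "teleport.example.com:443"

	// A client that honors `tls-server-name` sends the kube-specific name
	// (tls:server:kube.teleport.cluster.local in the logs above), so the
	// proxy routes the connection to its Kubernetes handler.
	dialProxy(proxyAddr, "kube.teleport.cluster.local")

	// A client that ignores the field falls back to the hostname it dialed,
	// so the proxy treats the connection as ordinary web traffic.
	dialProxy(proxyAddr, "teleport.example.com")
}
```

A connection dialed without the kube-specific server name never reaches the Kubernetes handler, which would explain the plain "404 page not found" body in the plugin's error output.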

webvictim (Contributor) commented

Very similar sort of issue to these two:
