This repository has been archived by the owner on Apr 25, 2023. It is now read-only.

DNS Endpoint Controller can't create service DNSEndpoint #872

Closed
dc520 opened this issue May 10, 2019 · 7 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@dc520

dc520 commented May 10, 2019

/triage support

Federation v2 controller-manager version: v0.0.9
kubernetes version: v1.14.0

I am running two Kubernetes clusters on AWS and managing them with Federation v2; both clusters use the cloud-provider=aws feature. I am using the Multicluster Service DNS via external-dns and Multicluster Ingress DNS via external-dns features. When I create an IngressDNSRecord, the DNS Endpoint Controller creates the DNSEndpoint as expected:
YAML file:
apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: IngressDNSRecord
metadata:
  name: test-ingress
  namespace: test-namespace
spec:
  hosts:
  - ingress.example.com
  recordTTL: 300

dnsendpoint:
[root@d-awsbj-paas-k8s-master-001 dnsRecord]# kubectl -n test-namespace get dnsendpoint ingress-test-ingress -o yaml
apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  creationTimestamp: "2019-05-10T02:18:06Z"
  generation: 2
  name: ingress-test-ingress
  namespace: test-namespace
  resourceVersion: "2184194"
  selfLink: /apis/multiclusterdns.federation.k8s.io/v1alpha1/namespaces/test-namespace/dnsendpoints/ingress-test-ingress
  uid: d8e63e4c-72c9-11e9-bc79-0694d6735cae
spec:
  endpoints:
  - dnsName: ingress.example.com
    recordTTL: 300
    recordType: A
    targets:
    - 10.125.233.135
    - 10.125.236.102
    - 10.125.236.153
    - 10.125.239.201
status:
  observedGeneration: 2

However, when I create a ServiceDNSRecord, the resulting DNSEndpoint is always empty, with no endpoint information at all:
YAML files:
apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: Domain
metadata:
  name: test-domain
  namespace: federation-system
domain: example.com

apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: ServiceDNSRecord
metadata:
  name: test-service
  namespace: test-namespace
spec:
  domainRef: test-domain
  recordTTL: 300

dnsendpoint:
[root@d-awsbj-paas-k8s-master-001 dnsRecord]# kubectl -n test-namespace get dnsendpoint service-test-service -o yaml
apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  creationTimestamp: "2019-05-10T07:51:11Z"
  generation: 1
  name: service-test-service
  namespace: test-namespace
  resourceVersion: "2270414"
  selfLink: /apis/multiclusterdns.federation.k8s.io/v1alpha1/namespaces/test-namespace/dnsendpoints/service-test-service
  uid: 60e44f04-72f8-11e9-bc79-0694d6735cae
spec: {}
status:
  observedGeneration: 1

The test-service Service gets an ELB normally:

[root@d-awsbj-paas-k8s-master-001 dnsRecord]# kubectl -n test-namespace get svc test-service -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  creationTimestamp: "2019-05-10T07:48:40Z"
  name: test-service
  namespace: test-namespace
  resourceVersion: "2269622"
  selfLink: /api/v1/namespaces/test-namespace/services/test-service
  uid: 06b452ec-72f8-11e9-bcaf-024014929984
spec:
  clusterIP: 172.31.64.236
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 41991
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: internal-a06b452ec72f811e9bcaf02401492998-1662341750.cn-north-1.elb.amazonaws.com.cn

I raised the log level of the controller-manager to 4, but didn't find any useful information. Can you help me or suggest ideas for troubleshooting this?
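One way to narrow this down is to look one step earlier in the chain. This is a sketch, assuming the service DNSEndpoint is built from the ServiceDNSRecord status, which the controller fills in from the target Service's load-balancer status in each member cluster; the pod name below is a placeholder, and --v=4 assumes the standard klog verbosity flag:

# Inspect the intermediate ServiceDNSRecord status; if no per-cluster
# load-balancer info ever appears here, the DNSEndpoint spec stays empty.
kubectl -n test-namespace get servicednsrecord test-service -o yaml

# Find the controller-manager pod and follow its logs after raising
# verbosity to --v=4 (pod name is a placeholder for your install).
kubectl -n federation-system get pods
kubectl -n federation-system logs -f <controller-manager-pod-name>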

@k8s-ci-robot added the kind/support label on May 10, 2019
@shashidharatd
Contributor

related issue: #834

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Aug 8, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 7, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@zhx828

zhx828 commented May 4, 2020

@dc520 did you solve the issue? I see the same issue for IngressDNSRecord.
Using v0.2.0-alpha.1

@zhx828

zhx828 commented May 6, 2020

Oh, never mind, I found the answer in #963: the name of the Ingress and the IngressDNSRecord need to be the same.
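
For later readers, a minimal sketch of that layout, reusing the names from this thread; the Ingress spec itself is illustrative (extensions/v1beta1, as used by Kubernetes 1.14) and not taken from the original report:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress          # must match the IngressDNSRecord name below
  namespace: test-namespace
spec:
  rules:
  - host: ingress.example.com
    http:
      paths:
      - backend:
          serviceName: test-service
          servicePort: 80

apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: IngressDNSRecord
metadata:
  name: test-ingress          # same name as the Ingress above
  namespace: test-namespace
spec:
  hosts:
  - ingress.example.com
  recordTTL: 300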
