DNS Endpoint Controller can't create service DNSEndpoint #872
Comments
related issue: #834
@dc520 did you solve the issue? I see the same issue for IngressDNSRecord.
Oh never mind, I found the answer in #963.
Issue description:
/triage support
Federation v2 controller-manager version: v0.0.9
kubernetes version: v1.14.0
I am running two Kubernetes clusters on AWS and managing them with Federation v2; both clusters run with the cloud-provider=aws feature enabled. I set up both Multicluster Service DNS via external-dns and Multicluster Ingress DNS via external-dns. When I create an IngressDNSRecord, the DNS Endpoint Controller creates the DNSEndpoint as expected:
yaml file:
apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: IngressDNSRecord
metadata:
  name: test-ingress
  namespace: test-namespace
spec:
  hosts:
  - ingress.example.com
  recordTTL: 300
Resulting DNSEndpoint:
[root@d-awsbj-paas-k8s-master-001 dnsRecord]# kubectl -n test-namespace get dnsendpoint ingress-test-ingress -o yaml
apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  creationTimestamp: "2019-05-10T02:18:06Z"
  generation: 2
  name: ingress-test-ingress
  namespace: test-namespace
  resourceVersion: "2184194"
  selfLink: /apis/multiclusterdns.federation.k8s.io/v1alpha1/namespaces/test-namespace/dnsendpoints/ingress-test-ingress
  uid: d8e63e4c-72c9-11e9-bc79-0694d6735cae
spec:
  endpoints:
  - dnsName: ingress.example.com
    recordTTL: 300
    recordType: A
    targets:
    - 10.125.233.135
    - 10.125.236.102
    - 10.125.236.153
    - 10.125.239.201
status:
  observedGeneration: 2
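For completeness, external-dns consumes these DNSEndpoint objects through its CRD source. A sketch of the flags I use is below; the provider and domain filter values are from my setup and the exact flags may differ between external-dns versions:

# Sketch: external-dns flags for watching the DNSEndpoint CRDs created by
# Federation v2 (provider/domain values are illustrative, adjust to your setup).
external-dns \
  --provider=aws \
  --source=crd \
  --crd-source-apiversion=multiclusterdns.federation.k8s.io/v1alpha1 \
  --crd-source-kind=DNSEndpoint \
  --domain-filter=example.com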
However, when I create a ServiceDNSRecord, the resulting DNSEndpoint always has an empty spec and shows no endpoint information:
yaml file:
apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: Domain
metadata:
  name: test-domain
  namespace: federation-system
domain: example.com
---
apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: ServiceDNSRecord
metadata:
  name: test-service
  namespace: test-namespace
spec:
  domainRef: test-domain
  recordTTL: 300
Resulting DNSEndpoint:
[root@d-awsbj-paas-k8s-master-001 dnsRecord]# kubectl -n test-namespace get dnsendpoint service-test-service -o yaml
apiVersion: multiclusterdns.federation.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  creationTimestamp: "2019-05-10T07:51:11Z"
  generation: 1
  name: service-test-service
  namespace: test-namespace
  resourceVersion: "2270414"
  selfLink: /apis/multiclusterdns.federation.k8s.io/v1alpha1/namespaces/test-namespace/dnsendpoints/service-test-service
  uid: 60e44f04-72f8-11e9-bc79-0694d6735cae
spec: {}
status:
  observedGeneration: 1
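In case it helps, this is roughly how I checked the underlying Service in each member cluster; my understanding from the docs is that a Service with the same name as the ServiceDNSRecord must exist and have ready endpoints in the member clusters (the context names below are placeholders from my environment):

# Sketch: verify that a Service named like the ServiceDNSRecord exists and has
# ready endpoints in every member cluster (cluster1/cluster2 are my contexts).
for ctx in cluster1 cluster2; do
  kubectl --context "$ctx" -n test-namespace get svc test-service
  kubectl --context "$ctx" -n test-namespace get endpoints test-service
done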
The test-service Service gets its ELB normally:
[root@d-awsbj-paas-k8s-master-001 dnsRecord]# kubectl -n test-namespace get svc test-service -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  creationTimestamp: "2019-05-10T07:48:40Z"
  name: test-service
  namespace: test-namespace
  resourceVersion: "2269622"
  selfLink: /api/v1/namespaces/test-namespace/services/test-service
  uid: 06b452ec-72f8-11e9-bcaf-024014929984
spec:
  clusterIP: 172.31.64.236
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 41991
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: internal-a06b452ec72f811e9bcaf02401492998-1662341750.cn-north-1.elb.amazonaws.com.cn
I raised the controller-manager log level to 4, but found nothing useful in the logs. Can you help me, or suggest ideas for debugging this problem?
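For reference, this is roughly how I looked at the logs; the namespace is from my Federation v2 install and the pod name is a placeholder, so they may differ elsewhere:

# Sketch: inspect the federation controller-manager logs for the service-dns
# and dns-endpoint controllers (pod name below is a placeholder).
kubectl -n federation-system get pods
kubectl -n federation-system logs <controller-manager-pod> | grep -iE "servicedns|dnsendpoint"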