This repository has been archived by the owner on Oct 5, 2022. It is now read-only.

gofabric8 deploy -d not overriding the default xip domain #6448

Closed
rawlingsj opened this issue Oct 14, 2016 · 8 comments · Fixed by fabric8io/gofabric8#238

Comments

@rawlingsj
Contributor

from issue #6445 (comment)

./gofabric8 deploy -d cluster.local
Deploying fabric8 to your Kubernetes installation at http://localhost:8080 for domain cluster.local in namespace fabric8
...
Deploying ingress controller on node 192.168.206.169 use its external ip when configuring your wildcard DNS.
...
Opening URL http://fabric8.fabric8.192.168.206.169.xip.io
@it-svit

it-svit commented Oct 14, 2016

The gofabric8 script created the following ConfigMap for exposecontroller:

[root@localhost ~]# kubectl get configmap exposecontroller -o yaml
apiVersion: v1
data:
  config.yml: |
    domain: cluster.local
    exposer: Ingress
kind: ConfigMap
metadata:
  creationTimestamp: 2016-10-14T08:46:18Z
  labels:
    provider: fabric8
  name: exposecontroller
  namespace: fabric8
  resourceVersion: "27266"
  selfLink: /api/v1/namespaces/fabric8/configmaps/exposecontroller
  uid: ac7a8a1b-91ea-11e6-bc15-663737633761
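A quick way to compare the domain the ConfigMap asks for against the hosts the ingress rules actually got (a hypothetical check, not from the thread; it assumes kubectl access to the fabric8 namespace and is written as a no-op when no cluster is reachable):

```shell
NS=fabric8

check_domains() {
  # Domain the ConfigMap asks for (the key "config.yml" needs its dot escaped):
  kubectl get configmap exposecontroller -n "$NS" -o jsonpath='{.data.config\.yml}'
  # Hosts the ingress rules were actually created with:
  kubectl get ingress -n "$NS" \
    -o jsonpath='{range .items[*]}{.spec.rules[0].host}{"\n"}{end}'
}

# Only run against a reachable cluster:
if kubectl version >/dev/null 2>&1; then check_domains; fi
```

If the first command prints `domain: cluster.local` while the second prints `*.xip.io` hosts, you are seeing this mismatch.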

But the ingress rules were created with a different domain name:

[root@localhost ~]# kubectl get ingress
NAME                      RULE                                                     BACKEND   ADDRESS   AGE
fabric8                   -                                                                            30m
                          fabric8.fabric8.192.168.206.169.xip.io                   
                                                                                   fabric8:9090
fabric8-docker-registry   -                                                                            30m
                          fabric8-docker-registry.fabric8.192.168.206.169.xip.io   
                                                                                   fabric8-docker-registry:5000
fabric8-forge             -                                                                            30m
                          fabric8-forge.fabric8.192.168.206.169.xip.io             
                                                                                   fabric8-forge:8080
gogs                      -                                                                            30m
                          gogs.fabric8.192.168.206.169.xip.io                      
                                                                                   gogs:3000
gogs-ssh                  -                                                                            30m
                          gogs-ssh.fabric8.192.168.206.169.xip.io                  
                                                                                   gogs-ssh:2222
jenkins                   -                                                                            30m
                          jenkins.fabric8.192.168.206.169.xip.io                   
                                                                                   jenkins:8080
jenkins-jnlp              -                                                                            30m
                          jenkins-jnlp.fabric8.192.168.206.169.xip.io              
                                                                                   jenkins-jnlp:50000
jenkinshift               -                                                                            30m
                          jenkinshift.fabric8.192.168.206.169.xip.io               
                                                                                   jenkinshift:9191
nexus                     -                                                                            30m
                          nexus.fabric8.192.168.206.169.xip.io                     
                                                                                   nexus:8081

@rawlingsj
Contributor Author

I've not been able to recreate this, unfortunately. I wonder if the ingress rules were already created?

If you still have the problem, could you delete the exposecontroller pod to make sure it's using the latest ConfigMap with the correct domain, and also delete the existing ingress rules in case they're not being replaced?
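The suggestion above might look like this in practice (a hypothetical sketch: the `project=exposecontroller` label selector and the `fabric8` namespace are assumptions that may not match your install, and the guard makes it a no-op without a reachable cluster):

```shell
NS=fabric8

cleanup() {
  # Restart exposecontroller so it re-reads the current ConfigMap:
  kubectl delete pod -l project=exposecontroller -n "$NS"
  # Delete the stale ingress rules so they are recreated with the right domain:
  kubectl delete ingress --all -n "$NS"
}

# Only run against a reachable cluster:
if kubectl version >/dev/null 2>&1; then cleanup; fi
```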

@it-svit

it-svit commented Oct 18, 2016

@rawlingsj There was nothing there before I installed fabric8.
I used a dedicated namespace in a fresh Kubernetes installation.

@novakg

novakg commented Oct 18, 2016

I can confirm that I can reproduce it with an AWS Kubernetes installation.

@timactive

Same issue with a custom cluster.

@rawlingsj
Contributor Author

OK, thanks for confirming; I'll work on a fix.

@rawlingsj
Contributor Author

I've recreated the issue and have just released gofabric8 v0.4.84. It was an ordering issue: we created the ConfigMap after deploying the pod!

Please note, though: until we also release the fabric8-devops change fabric8io/fabric8-devops#629, you'll see an error when running gofabric8 deploy, which you can ignore. That change will probably be released later this evening.

Installing: exposecontroller
Processing resource kind: ServiceAccount in namespace test7
Processing resource kind: ConfigMap in namespace test7
Failed to create ConfigMap: configmaps "exposecontroller" already existsexposecontroller..............................................................✘ configmaps "exposecontroller" already exists

@magick93

I think I'm facing a similar issue. I have described it at https://groups.google.com/forum/#!topic/fabric8/xL-iYALhOCg

Is there any solution to this?
