[Conformance][networking][router] The HAProxy router should support reencrypt to services backed by a serving certificate automatically #14523

Status: Closed
Opened by @stevekuznetsov (Contributor) on Jun 8, 2017 · 2 comments
Labels: component/networking, kind/test-flake, priority/P1

------------------------------
[Conformance][networking][router] The HAProxy router 
  should support reencrypt to services backed by a serving certificate automatically
  /go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:64

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][networking][router]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun  7 19:12:18.542: INFO: >>> kubeConfig: /etc/origin/master/admin.kubeconfig

STEP: Building a namespace api object
Jun  7 19:12:18.570: INFO: configPath is now "/tmp/extended-test-router-reencrypt-48d58-wcpdv-user.kubeconfig"
Jun  7 19:12:18.570: INFO: The user is now "extended-test-router-reencrypt-48d58-wcpdv-user"
Jun  7 19:12:18.570: INFO: Creating project "extended-test-router-reencrypt-48d58-wcpdv"
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Conformance][networking][router]
  /go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:35
[It] should support reencrypt to services backed by a serving certificate automatically
  /go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:64
Jun  7 19:12:18.662: INFO: Creating new exec pod
STEP: deploying a service using a reencrypt route without a destinationCACertificate
Jun  7 19:12:24.679: INFO: Running 'oc create --config=/tmp/extended-test-router-reencrypt-48d58-wcpdv-user.kubeconfig --namespace=extended-test-router-reencrypt-48d58-wcpdv -f /tmp/fixture-testdata-dir154131782/test/extended/testdata/reencrypt-serving-cert.yaml'
pod "serving-cert" created
configmap "serving-cert" created
service "serving-cert" created
route "serving-cert" created
Jun  7 19:12:26.030: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://ip-172-18-4-42.ec2.internal:8443 --kubeconfig=/etc/origin/master/admin.kubeconfig exec --namespace=extended-test-router-reencrypt-48d58-wcpdv execpod -- /bin/sh -c 
		set -e
		for i in $(seq 1 180); do
			code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: serving-cert-extended-test-router-reencrypt-48d58-wcpdv.router.default.svc.cluster.local' "https://172.30.95.6" ) || rc=$?
			if [[ "${rc:-0}" -eq 0 ]]; then
				echo $code
				if [[ $code -eq 200 ]]; then
					exit 0
				fi
				if [[ $code -ne 503 ]]; then
					exit 1
				fi
			else
				echo "error ${rc}" 1>&2
			fi
			sleep 1
		done
		'
Jun  7 19:15:45.738: INFO: stderr: "error 7\n" (repeated for all 180 attempts; elided here)
[AfterEach] [Conformance][networking][router]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
STEP: Collecting events from namespace "extended-test-router-reencrypt-48d58-wcpdv".
STEP: Found 9 events.
Jun  7 19:15:45.760: INFO: At 2017-06-07 19:12:18 -0400 EDT - event for execpod: {default-scheduler } Scheduled: Successfully assigned execpod to ip-172-18-4-42.ec2.internal
Jun  7 19:15:45.760: INFO: At 2017-06-07 19:12:20 -0400 EDT - event for execpod: {kubelet ip-172-18-4-42.ec2.internal} Pulled: Container image "gcr.io/google_containers/hostexec:1.2" already present on machine
Jun  7 19:15:45.760: INFO: At 2017-06-07 19:12:20 -0400 EDT - event for execpod: {kubelet ip-172-18-4-42.ec2.internal} Created: Created container with id 6dafde2902f13c43671833b967af16050e25186492ee821eab22431cb73ce853
Jun  7 19:15:45.760: INFO: At 2017-06-07 19:12:21 -0400 EDT - event for execpod: {kubelet ip-172-18-4-42.ec2.internal} Started: Started container with id 6dafde2902f13c43671833b967af16050e25186492ee821eab22431cb73ce853
Jun  7 19:15:45.760: INFO: At 2017-06-07 19:12:24 -0400 EDT - event for serving-cert: {default-scheduler } Scheduled: Successfully assigned serving-cert to ip-172-18-4-42.ec2.internal
Jun  7 19:15:45.760: INFO: At 2017-06-07 19:12:31 -0400 EDT - event for serving-cert: {kubelet ip-172-18-4-42.ec2.internal} Pulling: pulling image "nginx:latest"
Jun  7 19:15:45.760: INFO: At 2017-06-07 19:12:40 -0400 EDT - event for serving-cert: {kubelet ip-172-18-4-42.ec2.internal} Pulled: Successfully pulled image "nginx:latest"
Jun  7 19:15:45.760: INFO: At 2017-06-07 19:12:44 -0400 EDT - event for serving-cert: {kubelet ip-172-18-4-42.ec2.internal} Created: Created container with id 68aa8ed5fa890964b3ef50d1235589311687e6d16d55f923adb9052ef3105943
Jun  7 19:15:45.760: INFO: At 2017-06-07 19:12:44 -0400 EDT - event for serving-cert: {kubelet ip-172-18-4-42.ec2.internal} Started: Started container with id 68aa8ed5fa890964b3ef50d1235589311687e6d16d55f923adb9052ef3105943
Jun  7 19:15:45.772: INFO: POD                                                      NODE                         PHASE    GRACE  CONDITIONS
Jun  7 19:15:45.772: INFO: docker-registry-2-bx862                                  ip-172-18-4-42.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:05:37 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:05:47 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:05:37 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: registry-console-1-8nmjb                                 ip-172-18-4-42.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:06:07 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:06:16 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:06:07 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: router-2-jqhvj                                           ip-172-18-4-42.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:05:36 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:12:56 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:05:36 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: liveness-http                                            ip-172-18-4-42.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:13:59 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:14:02 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:13:59 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: netserver-0                                              ip-172-18-4-42.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:24 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:44 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:24 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: pod-exec-websocket-2cb71e84-4bd7-11e7-a95d-0ed58382e414  ip-172-18-4-42.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:19 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:23 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:19 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: history-limit-2-zv89b                                    ip-172-18-4-42.ec2.internal  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:01 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:04 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:01 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: history-limit-4-04764                                    ip-172-18-4-42.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:23 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:27 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:23 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: history-limit-6-deploy                                   ip-172-18-4-42.ec2.internal  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:44 -0400 EDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:44 -0400 EDT ContainersNotReady containers with unready status: [deployment]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:15:44 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: execpod                                                  ip-172-18-4-42.ec2.internal  Running  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:12:18 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:12:22 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:12:18 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: serving-cert                                             ip-172-18-4-42.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:12:25 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:12:56 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-06-07 19:12:24 -0400 EDT  }]
Jun  7 19:15:45.772: INFO: 
Jun  7 19:15:45.774: INFO: 
Logging node info for node ip-172-18-4-42.ec2.internal
Jun  7 19:15:45.777: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-172-18-4-42.ec2.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-172-18-4-42.ec2.internal,UID:1ae62e17-4bd0-11e7-9287-0ed58382e414,ResourceVersion:10014,Generation:0,CreationTimestamp:2017-06-07 18:24:43 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: ip-172-18-4-42.ec2.internal,region: infra,zone: default,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:NodeSpec{PodCIDR:,ExternalID:ip-172-18-4-42.ec2.internal,ProviderID:aws:////i-0b63d6f595f9d8ca3,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{alpha.kubernetes.io/nvidia-gpu: {{0 0} {<nil>} 0 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},memory: {{16388919296 0} {<nil>} 16004804Ki BinarySI},pods: {{40 0} {<nil>} 40 DecimalSI},},Allocatable:ResourceList{alpha.kubernetes.io/nvidia-gpu: {{0 0} {<nil>} 0 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},memory: {{16284061696 0} {<nil>} 15902404Ki BinarySI},pods: {{40 0} {<nil>} 40 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2017-06-07 19:15:36 -0400 EDT 2017-06-07 18:24:43 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-06-07 19:15:36 -0400 EDT 2017-06-07 18:24:43 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-06-07 19:15:36 -0400 EDT 2017-06-07 18:24:43 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-06-07 19:15:36 -0400 EDT 2017-06-07 19:05:25 -0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{LegacyHostIP 172.18.4.42} {InternalIP 172.18.4.42} {Hostname ip-172-18-4-42.ec2.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f9370ed252a14f73b014c1301a9b6d1b,SystemUUID:EC2D633F-1D3F-0ACD-83B3-19CD947B270C,BootID:f2cba44d-319d-431a-923c-61a02a143488,KernelVersion:3.10.0-514.21.1.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.3 (Maipo),ContainerRuntimeVersion:docker://1.12.6,KubeletVersion:v1.6.1+5115d708d7,KubeProxyVersion:v1.6.1+5115d708d7,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-federation:e87b37a openshift/origin-federation:latest] 1202817887} {[openshift/origin-docker-registry:e87b37a openshift/origin-docker-registry:latest] 1097477526} {[openshift/origin-gitserver:e87b37a openshift/origin-gitserver:latest] 1083901468} {[openshift/openvswitch:e87b37a openshift/openvswitch:latest] 1051362593} {[openshift/node:e87b37a openshift/node:latest] 1049680811} {[openshift/origin-keepalived-ipfailover:e87b37a openshift/origin-keepalived-ipfailover:latest] 1026488548} {[openshift/origin-haproxy-router:e87b37a openshift/origin-haproxy-router:latest] 1020717555} {[openshift/origin-f5-router:e87b37a openshift/origin-f5-router:latest] 999688711} {[openshift/origin-recycler:e87b37a openshift/origin-recycler:latest] 999688711} {[openshift/origin-deployer:e87b37a openshift/origin-deployer:latest] 999688711} {[openshift/origin-docker-builder:e87b37a openshift/origin-docker-builder:latest] 999688711} {[openshift/origin-sti-builder:e87b37a openshift/origin-sti-builder:latest] 999688711} {[openshift/origin:e87b37a openshift/origin:latest] 999688711} 
{[openshift/origin-cluster-capacity:e87b37a openshift/origin-cluster-capacity:latest] 960430861} {[docker.io/openshift/origin-release@sha256:68022b599d63c26f4b7f859dfba0657e973f6b2e525ce310be5148aa9500be47 docker.io/openshift/origin-release:golang-1.7] 852450907} {[docker.io/openshift/origin-haproxy-router@sha256:c9374f410e32907be1fa1d14d77e58206ef0be949a63a635e6f3bafa77b35726 docker.io/openshift/origin-haproxy-router:v1.5.1] 738600544} {[docker.io/openshift/origin-deployer@sha256:77ac551235d8edf43ccb2fbd8fa5384ad9d8b94ba726f778fced18710c5f74f0 docker.io/openshift/origin-deployer:v1.5.1] 617474229} {[docker.io/openshift/origin-docker-registry@sha256:cfe82b08f94c015d31664573f3caa4307ffc7941c930cc7ae9419d68cec32ed5 docker.io/openshift/origin-docker-registry:v1.5.1] 428360819} {[gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25 gcr.io/google_containers/redis:e2e] 418929769} {[openshift/origin-egress-router:e87b37a openshift/origin-egress-router:latest] 364746506} {[openshift/origin-base:latest] 363070922} {[docker.io/cockpit/kubernetes@sha256:49be8737ec76cf51cab6e8ca39e0eccd7c3b494771fea91cc32fcfdd2968549f docker.io/cockpit/kubernetes:latest] 351468457} {[openshift/origin-pod:e87b37a openshift/origin-pod:latest] 213198371} {[openshift/origin-source:latest] 192548894} {[docker.io/centos@sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9 docker.io/centos:7 docker.io/centos:centos7] 192548537} {[gcr.io/google_containers/nginx-slim@sha256:8b4501fe0fe221df663c22e16539f399e89594552f400408303c42f3dd8d0e52 gcr.io/google_containers/nginx-slim:0.8] 110461313} {[docker.io/nginx@sha256:41ad9967ea448d7c2b203c699b429abe1ed5af331cd92533900c6d77490e0268 docker.io/nginx:latest] 109368581} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 gcr.io/google_containers/nginx-slim:0.7] 86838142} {[gcr.io/google_containers/nettest@sha256:8af3a0e8b8ab906b0648dd575e8785e04c19113531f8ffbaab9e149aa1a60763 gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/hostexec@sha256:cab8d4e2526f8f767c64febe4ce9e0f0e58cd35fdff81b3aadba4dd041ba9f00 gcr.io/google_containers/hostexec:1.2] 13185747} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8893907} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035} {[gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a gcr.io/google_containers/serve_hostname:v1.4] 6222101} {[openshift/hello-openshift:e87b37a openshift/hello-openshift:latest] 5635113} {[gcr.io/google_containers/test-webserver@sha256:f804e8837490d1dfdb5002e073f715fd0a08115de74e5a4847ca952315739372 gcr.io/google_containers/test-webserver:e2e] 4534272} {[gcr.io/google_containers/eptest@sha256:bb088b26ed78613cce171420168db9a6c62a8dbea17d7be13077e7010bae162f gcr.io/google_containers/eptest:0.1] 2970692} {[gcr.io/google_containers/mounttest@sha256:c4dcedb26013ab4231a2b2aaa4eebd5c2a44d5c597fa0613c9ff8bde4fb9fe02 gcr.io/google_containers/mounttest:0.7] 2052704} {[gcr.io/google_containers/portforwardtester@sha256:306879729d3eff635a11b89f3e62e440c9f2fe4dabdfb9ef02bc67f2275f67ab gcr.io/google_containers/portforwardtester:1.2] 1892642} 
{[gcr.io/google_containers/mounttest@sha256:bec3122ddcf8bd999e44e46e096659f31241d09f5236bc3dc212ea584ca06856 gcr.io/google_containers/mounttest:0.8] 1450761} {[gcr.io/google_containers/mounttest-user@sha256:5487c126b03abf4119a8f7950cd5f591f72dbe4ab15623f3387d3917e1268b4e gcr.io/google_containers/mounttest-user:0.5] 1450761} {[docker.io/openshift/origin-pod@sha256:8b61dcade7a9a972795a5bd92ebc73d7edbec5f36d4eb4abd4ae3127f2f4bb2e docker.io/openshift/origin-pod:v1.5.1] 1143145} {[gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/busybox:1.24] 1113554} {[docker.io/busybox@sha256:c79345819a6882c31b41bc771d9a94fc52872fa651b36771fbe0c8461d7ee558] 1106304} {[gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516 gcr.io/google_containers/pause-amd64:3.0] 746888}],VolumesInUse:[],VolumesAttached:[],},}
Jun  7 19:15:45.777: INFO: 
Logging kubelet events for node ip-172-18-4-42.ec2.internal
Jun  7 19:15:45.779: INFO: 
Logging pods the kubelet thinks is on node ip-172-18-4-42.ec2.internal
Jun  7 19:15:45.791: INFO: router-2-jqhvj started at 2017-06-07 19:05:36 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.791: INFO: 	Container router ready: true, restart count 1
Jun  7 19:15:45.792: INFO: history-limit-4-04764 started at 2017-06-07 19:15:23 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container myapp ready: true, restart count 0
Jun  7 19:15:45.792: INFO: netserver-0 started at 2017-06-07 19:15:24 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container webserver ready: true, restart count 0
Jun  7 19:15:45.792: INFO: registry-console-1-8nmjb started at 2017-06-07 19:06:07 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container registry-console ready: true, restart count 0
Jun  7 19:15:45.792: INFO: liveness-http started at 2017-06-07 19:13:59 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container liveness ready: true, restart count 0
Jun  7 19:15:45.792: INFO: history-limit-2-zv89b started at 2017-06-07 19:15:01 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container myapp ready: true, restart count 0
Jun  7 19:15:45.792: INFO: serving-cert started at 2017-06-07 19:12:25 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container serve ready: true, restart count 0
Jun  7 19:15:45.792: INFO: pod-exec-websocket-2cb71e84-4bd7-11e7-a95d-0ed58382e414 started at 2017-06-07 19:15:19 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container main ready: true, restart count 0
Jun  7 19:15:45.792: INFO: execpod started at 2017-06-07 19:12:18 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container hostexec ready: true, restart count 0
Jun  7 19:15:45.792: INFO: history-limit-6-deploy started at 2017-06-07 19:15:44 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container deployment ready: false, restart count 0
Jun  7 19:15:45.792: INFO: docker-registry-2-bx862 started at 2017-06-07 19:05:37 -0400 EDT (0+1 container statuses recorded)
Jun  7 19:15:45.792: INFO: 	Container registry ready: true, restart count 0
W0607 19:15:45.794030   29471 metrics_grabber.go:74] Master node is not registered. Grabbing metrics from Scheduler and ControllerManager is disabled.
Jun  7 19:15:46.002: INFO: 
Latency metrics for node ip-172-18-4-42.ec2.internal
Jun  7 19:15:46.002: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m3.822287s}
Jun  7 19:15:46.002: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m2.277411s}
Jun  7 19:15:46.002: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m0.99155s}
Jun  7 19:15:46.002: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:31.800596s}
Jun  7 19:15:46.002: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.9 Latency:30.321405s}
Jun  7 19:15:46.002: INFO: {Operation:update Method:pod_worker_latency_microseconds Quantile:0.99 Latency:30.321405s}
Jun  7 19:15:46.002: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:28.764905s}
Jun  7 19:15:46.002: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:28.764905s}
Jun  7 19:15:46.002: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:24.93889s}
Jun  7 19:15:46.002: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:11.009362s}
Jun  7 19:15:46.002: INFO: {Operation:create_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:10.021453s}
STEP: Dumping a list of prepulled images on each node
Jun  7 19:15:46.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-router-reencrypt-48d58-wcpdv" for this suite.
Jun  7 19:16:11.058: INFO: namespace: extended-test-router-reencrypt-48d58-wcpdv, resource: bindings, ignored listing per whitelist


• Failure [232.594 seconds]
[Conformance][networking][router]
/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:66
  The HAProxy router
  /go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:65
    should support reencrypt to services backed by a serving certificate automatically [It]
    /go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:64

    Expected error:
        <*errors.errorString | 0xc421a8a040>: {
            s: "last response from server was not 200:\n",
        }
        last response from server was not 200:
        
    not to have occurred

    /go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:63
------------------------------
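
An aside on the stderr above: curl exit code 7 means curl could not connect to the host at all, so the loop never got an HTTP status back from 172.30.95.6. The probe can be reproduced by hand from the exec pod with essentially the same command the test runs (the host and IP below are the ones from this run):

```sh
# Same probe the test loop runs: -k skips verification of the router's
# front-end certificate, and the Host header selects the route.
code=$(curl -k -s -o /dev/null -w '%{http_code}' \
  --header 'Host: serving-cert-extended-test-router-reencrypt-48d58-wcpdv.router.default.svc.cluster.local' \
  "https://172.30.95.6") || echo "curl exited $? (7 = failed to connect)"
echo "last HTTP status: ${code:-none}"
```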

/cc @smarterclayton
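
For context on what the fixture exercises: the service carries the serving-cert annotation so OpenShift mints a certificate for it, and the route uses reencrypt termination with no destinationCACertificate, which the router is expected to satisfy from the service-signer CA automatically. A minimal sketch of that shape, with illustrative names and ports rather than the actual contents of test/extended/testdata/reencrypt-serving-cert.yaml:

```sh
# Sketch only; names and ports are illustrative, not the fixture's.
oc create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: serving-cert
  annotations:
    # Asks the service CA controller to generate a cert/key pair
    # for this service into the named secret.
    service.alpha.openshift.io/serving-cert-secret-name: serving-cert
spec:
  selector:
    app: serving-cert
  ports:
  - port: 443
    targetPort: 8443
---
apiVersion: v1
kind: Route
metadata:
  name: serving-cert
spec:
  to:
    kind: Service
    name: serving-cert
  tls:
    termination: reencrypt
    # destinationCACertificate is deliberately omitted; the router
    # should fall back to the service-serving-cert signer CA.
EOF
```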

@smarterclayton (Contributor) commented Jun 8, 2017 via email

@smarterclayton (Contributor):

Was a network-related issue. Closing.
