This repository has been archived by the owner on Sep 4, 2021. It is now read-only.

kube-aws up fails using multiple AZ/Subnets #510

Closed
harsha-y opened this issue May 25, 2016 · 8 comments

Comments

@harsha-y
Contributor

kube-aws validate succeeds without any errors. kube-aws up fails.

kube-aws version -

kube-aws version f74499b08fb2a10e7085602cc6ebf4f854a3a7c9

Error -

SHELL$ kube-aws up
Creating AWS resources. This should take around 5 minutes.
Error: Error creating cluster: error validating existing VPC: error parsing instances cidr  : invalid CIDR address:

Configuration -

# ID of existing VPC to create subnet in. Leave blank to create a new VPC
vpcId: vpc-xxxxxxxx

# ID of existing route table in existing VPC to attach subnet to. Leave blank to use the VPC's main route table.
# routeTableId: rtb-xxxxxxxx

# CIDR for Kubernetes VPC. If vpcId is specified, must match the CIDR of existing vpc.
vpcCIDR: "172.32.0.0/16"

# CIDR for Kubernetes subnet when placing nodes in a single availability zone (not highly-available). Leave commented out for a multi availability zone setting and use the `subnets` section below instead.
# instanceCIDR: "172.32.1.0/24"

# Kubernetes subnets with their CIDRs and availability zones. Differentiating the availability zone for 2 or more subnets results in high availability (failures of a single availability zone won't result in immediate downtime)
subnets:
  - availabilityZone: us-east-1a
    instanceCIDR: "172.32.1.0/24"
  - availabilityZone: us-east-1b
    instanceCIDR: "172.32.3.0/24"
  - availabilityZone: us-east-1c
    instanceCIDR: "172.32.5.0/24"

# IP Address for the controller in Kubernetes subnet. When we have 2 or more subnets, the controller is placed in the first subnet and controllerIP must be included in the instanceCIDR of the first subnet. This convention will change once we have H/A controllers
controllerIP: 172.32.1.100
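
(As a quick sanity check of the constraints described in the comments above, i.e. subnet CIDRs falling inside the VPC CIDR and controllerIP inside the first subnet, a standalone Go snippet like the one below can be used. It is only an illustration using the values from this cluster.yaml, not part of kube-aws.)

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Values copied from the cluster.yaml above.
	_, vpcNet, _ := net.ParseCIDR("172.32.0.0/16")
	subnetCIDRs := []string{"172.32.1.0/24", "172.32.3.0/24", "172.32.5.0/24"}
	controllerIP := net.ParseIP("172.32.1.100")

	for i, cidr := range subnetCIDRs {
		subnetIP, subnetNet, err := net.ParseCIDR(cidr)
		if err != nil {
			fmt.Printf("subnet %d: invalid CIDR %q: %v\n", i, cidr, err)
			continue
		}
		// Quick check: the subnet's base address should fall inside the VPC CIDR.
		fmt.Printf("subnet %d (%s) base address inside VPC CIDR: %v\n", i, cidr, vpcNet.Contains(subnetIP))
		// Convention from the comment above: controllerIP must be in the first subnet.
		if i == 0 {
			fmt.Printf("controllerIP inside first subnet: %v\n", subnetNet.Contains(controllerIP))
		}
	}
}
```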
@harsha-y harsha-y changed the title kube-aws up fails with error using multiple AZ/Subnets kube-aws up fails using multiple AZ/Subnets May 25, 2016
@harsha-y
Contributor Author

harsha-y commented May 25, 2016

CC @mumoshu @colhom

@mumoshu
Contributor

mumoshu commented May 26, 2016

@colhom I believe that validation fails when subnets are provided but the top-level instanceCIDR in cluster.yml is left blank. It seems we missed updating func ValidateExistingVPC in config.go, which validates the config before the cluster is created.

Should we switch which instanceCIDR to validate (top-level vs. under subnets) according to len(c.Subnets), like we did before?
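
Roughly what I have in mind (just a sketch, not the actual config.go; the type and field names are approximated from cluster.yaml):

```go
package config

import (
	"fmt"
	"net"
)

// Sketch only: type and field names approximate cluster.yaml, not the real config.go.
type Subnet struct {
	AvailabilityZone string
	InstanceCIDR     string
}

type Cluster struct {
	InstanceCIDR string
	Subnets      []Subnet
}

// validateInstanceCIDRs checks the top-level instanceCIDR only when no subnets
// are declared; otherwise it checks each subnet's instanceCIDR and ignores the
// (empty) top-level one.
func (c *Cluster) validateInstanceCIDRs() error {
	if len(c.Subnets) == 0 {
		if _, _, err := net.ParseCIDR(c.InstanceCIDR); err != nil {
			return fmt.Errorf("error parsing instances cidr %s: %v", c.InstanceCIDR, err)
		}
		return nil
	}
	for i, s := range c.Subnets {
		if _, _, err := net.ParseCIDR(s.InstanceCIDR); err != nil {
			return fmt.Errorf("error parsing instances cidr for subnet %d (%s): %v", i, s.AvailabilityZone, err)
		}
	}
	return nil
}
```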

@colhom
Contributor

colhom commented May 31, 2016

@harsha-y sorry for the delay on this. I believe this malfunction was reported in #500, and a fix was merged in #507.

@harsha-y
Contributor Author

harsha-y commented Jun 3, 2016

@mumoshu Thanks for the follow-up!
@colhom No worries and thank you! Looks like I was a few commits too early on the same day - f74499b

I'll rebuild from latest master, test and report.

@cgag
Contributor

cgag commented Jun 7, 2016

@harsha-y Any luck? Can we close this?

@harsha-y
Contributor Author

harsha-y commented Jun 9, 2016

We ran into #538 as well. Giving it another shot today with both release 0.7.1 and 855a0f9 binaries.

With 0.7.1 we get the same error as #538 - is our assumption that kube-aws can launch a cluster into an existing VPC with existing subnets in multiple availability zones incorrect? We are using 3 private /24 subnets.
Error: Error creating cluster: error validating existing VPC: instance cidr (172.32.1.0/24) conflicts with existing subnet cidr=172.32.1.0/24

With the latest master 855a0f9 we get the following error. Changing the release channel to alpha results in the same error.
Error: failed getting AMI for config: error getting ami data for channel stable: failed to get AMI data: stable: invalid status code: 522
Error: failed getting AMI for config: error getting ami data for channel alpha: failed to get AMI data: stable: invalid status code: 522

@cgag @colhom

@cgag
Contributor

cgag commented Jun 10, 2016

I just replied on #538 about the first error; I believe we currently don't support existing subnets, just existing VPCs.

For the second error, I suspect coreos.com may have been down briefly, as I've heard talk of EC2 outages. Currently kube-aws pulls the AMI information from https://coreos.com/dist/aws/aws-<channel>.json, so https://coreos.com/dist/aws/aws-alpha.json for alpha. It looks good now.
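
If you want to probe that endpoint yourself, a quick standalone check like the one below works (just a sketch; it only fetches the JSON and reports the HTTP status, which is where the 522 above came from):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Channel defaults to "stable"; pass "alpha" or "beta" as the first argument.
	channel := "stable"
	if len(os.Args) > 1 {
		channel = os.Args[1]
	}
	url := fmt.Sprintf("https://coreos.com/dist/aws/aws-%s.json", channel)

	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to get AMI data: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		// A 522 here is Cloudflare's "connection timed out" status.
		fmt.Fprintf(os.Stderr, "%s: invalid status code: %d\n", channel, resp.StatusCode)
		os.Exit(1)
	}

	// Print the raw JSON; this is the per-channel AMI data kube-aws consumes.
	io.Copy(os.Stdout, resp.Body)
}
```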

@harsha-y
Contributor Author

Believe we can close this issue. Will follow progress on #340 and #538 - Thank you!
