This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

Support the use-case to manage multiple kube-aws clusters' configurations optionally inheriting organization-specific customizations with a version control system like Git #238

Closed
redbaron opened this issue Jan 12, 2017 · 46 comments

Comments

@redbaron
Contributor

We use git branching to track customizations on top of kube-aws generated CF/userdata. Having nodepools as a separate level with arbitrary paths complicates things for us. It also leads to not insignificant code duplication in nodepool/.

What was the justification for such a split? Would you consider a PR which manages nodepools as part of stack-template.json? Either embedded or as a nested CF stack.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

@redbaron We've split templates:

  • To avoid the size limitation of cfn stack templates, and
  • To quit hard-coding vars like vpcID, routeTableID, workerSecurityGroupIds into the resulting/exported cfn template ([WIP] Direct & Indirect CloudFormation Stack Parameters #195 is WIP) in the future. A single big cfn template would make it far harder to achieve things like that.

Also, would you mind sharing example(s) of the "not insignificant code duplication" you've mentioned? Anyway, I guess we could fix that separately without an embedded/nested CF stack.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

Could you also share your thoughts on how the whole set of files and directories would be structured after the change you want, or does fixing just stack-template.json solve your issue?

@redbaron
Contributor Author

OK, let's discuss that:

  • AFAIK there is still a stack size limitation when using an S3 bucket, and the current main stack already reaches that size.
  • They are already "hardcoded" in the main template, right? If a nodepool is just another ASG in the main CF template, or a nested CF stack which uses values from its parent, how is that worse than what we have now?
  • From a brief look, nodepool/config/config.go is a subset of config/config.go: validation, ClusterFromFile, and other funcs pretty much mirror what is already coded in config/config.go.

@redbaron
Contributor Author

Could you also share your thoughts on how the whole set of files and directories would be structured after the change you want, or does fixing just stack-template.json solve your issue?

Well, having one cluster.yaml and one stack-template.json seems capable of supporting nodepools just fine. Luckily, the worker userdata is already reused.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

Thanks for the additional info!

They are already "hardcoded" in the main template, right? If a nodepool is just another ASG in the main CF template, or a nested CF stack which uses values from its parent, how is that worse than what we have now?

Well, having one cluster.yaml and one stack-template.json seems capable of supporting nodepools just fine. Luckily, the worker userdata is already reused.

Sorry if I'm missing the context here, but I'd like to point out that user-data/cloud-config-worker is managed separately for the main stack and each node pool to allow customization(s) per main/node pool.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

However, if, for example, duplicated blocks in the cloud-config-worker files are a problem, we could probably introduce e.g. a "cloud-config template helper" to reduce them.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

Just to be clear, I'm rather eager to make a step change like you've suggested if it does improve kube-aws, but IMHO that should only be done as long as the end result still meets our requirements; or rather, the discussion to narrow down our reqs would be the first step.
// Though, unfortunately, I believe our reqs (e.g. #238 (comment)) aren't documented MECE for now.

@redbaron
Contributor Author

I'd like to point out that user-data/cloud-config-worker is managed separately for the main stack and each node pool to allow customization per main/node pool.

Ah, true. Even more headache.

See, you cater for customization, but it is not customization-friendly. Once you modify the render output you are on your own; all kube-aws can do is wipe out your changes on the next render. So we are developing a tool which uses git branching to keep track of customizations: kube-aws renders into "vanilla" branches, which then get merged into a "tailored" branch where all organization-wide customizations go so they can be shared across all the clusters, which then gets merged into the individual "kluster" branches where per-cluster adjustments are possible.

It is all fine and dandy when paths are stable; once there are random files popping up per cluster, they can't benefit from this workflow, as git can't keep track of changes "remounted" to different paths without clunky tricks like subtree merging or even more complicated branching.

Hence my request to keep paths stable :)

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

AFAIK there is still a stack size limitation when using an S3 bucket, and the current main stack already reaches that size.
They are already "hardcoded" in the main template, right? If a nodepool is just another ASG in the main CF template, or a nested CF stack which uses values from its parent, how is that worse than what we have now?

Yes 👍 Therefore the limit is now 460,800 bytes for a cfn stack template fed via S3, hence kube-aws leaves approximately 400KB for user customization per cfn stack today. Also beware of my guess that each resulting stack template rendered from stack-template.json is at least 30KB.
So, if we merged all the stacks into one, probably more than 14 node pools would hit the limit.
It might be OK for someone, but I think it is not the "Planet Scale" that Kubernetes itself tries to support.

From a brief look, nodepool/config/config.go is a subset of config/config.go: validation, ClusterFromFile, and other funcs pretty much mirror what is already coded in config/config.go.

Yes, we've tried hard to reduce duplication between them; the result includes cfnstack/, model/, filegen/, filereader/, gzipcompressor, etc.

I'm now observing how the code for the main stack and the node pools changes over time, to plan further refactorings to make them less duplicated.
However, I'd appreciate it if we can actually fix that in another way like you've suggested.
It's just that I personally believe this is the way to go, for now.

@redbaron
Contributor Author

redbaron commented Jan 12, 2017

So, if we merged all the stacks into one, probably more than 14 node pools would hit the limit.

Hm, then having a single nodepool-template.json which gets uploaded as a separate CF template into the S3 bucket by the up command, plus a single stack-template.json which glues them all together, should scale 400*1024/len('AWS::CloudFormation::Stack') times :)

I'm now observing how the code for the main stack and the node pools changes over time, to plan further refactorings to make them less duplicated.

I am really not concerned about .go code duplication, as it has no effect on the problem I am solving, but it might be an argument for you or somebody else.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

@redbaron I'm interested in your use-case 👍
To help me further understand it, may I ask for examples of "stable paths"?

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

Hm, then having a single nodepool-template.json which gets uploaded as a separate CF template into the S3 bucket by the up command, plus a single stack-template.json which glues them all together, should scale 400*1024/len('AWS::CloudFormation::Stack') times :)

Sounds good 👍 but could you clarify a bit? I guess the single root template that nests all the stacks, including the ones for the main cluster and node pools, can itself be achieved outside of kube-aws. The issue is that the way to "propagate" outputs from a stack (probably a "main" stack, or inline resources defined in the single root template) to a node-pool stack inside a "root" template is missing. It can be resolved via #195, right?

@redbaron
Contributor Author

redbaron commented Jan 12, 2017

To help me further understand it, may I ask for examples of "stable paths"?

If all of the following branches:

  • vanilla: the result of kube-aws init --kms-key CHANGE_ME .. && kube-aws render, committed as-is
  • tailored: the set of common changes maintained internally for the benefit of all clusters in the organization
  • kluster-<NAME>: individual clusters, from which kube-aws up is executed

have the same paths, then we can use the power of git to track updates in kube-aws without losing customizations at any level.

If anywhere on the way from vanilla to kluster we have differences in file names, then things become unnecessarily complicated. The kube-aws node-pool commands introduce new file names, which may or may not differ between klusters; more importantly, there is a variable number of them, so it is trickier to provide tailored versions of them.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

Thanks for your replies!

I am really not concerned about .go code duplication as it has no effect on the problem I am solving, but it might be an argument for you or somebody else.

ok then we can discuss that separately 👍
I'm still interested in your use-case.

redbaron changed the title from "why worker pools are in separate CF templates?" to "why worker pools are in separate files?" on Jan 12, 2017
@mumoshu
Contributor

mumoshu commented Jan 12, 2017

@redbaron I assume that your use-case is: "how to efficiently manage multiple kube-aws clusters ("klusters") which may or may not read/inherit/apply organization-wide common configuration (including settings fed to cluster.yaml, changes applied to stack-template.json and cloud-config-*, and possibly credentials/*)". Is my assumption correct?

If so, what kind of information is contained in the "set of common changes" in the tailored branch?

I'm considering whether it could be possible for kube-aws to provide an out-of-the-box way to feed "common configuration" to e.g. kube-aws render and/or kube-aws up.

@redbaron
Contributor Author

I guess the single root template that nests all the stacks, including the ones for the main cluster and node pools, can itself be achieved outside of kube-aws.

Yes, then kube-aws nodepool init <NAME> will only be adding entries to cluster.yaml.

The issue is that the way to "propagate" outputs from a stack (probably a "main" stack) to a node-pool stack inside a "root" template is missing.

Yes, it is not the way kube-aws currently works, but it is doable. Looks like #195 has a much wider scope than just template nesting.

Fundamentally, for that to work we'd need:

  • a root-stack.json which does nothing more than list all the other stacks and pass outputs from stack-template.json to the inputs of nodepool-stack.json (see the sketch after this list)
  • to change kube-aws up/update to work on root-stack.json and remove kube-aws node-pool up/render
  • to do something about the individual userdata files. Symlinks to a single nodepool-template.json, maybe? Then, if the nodepool name were passed to the template context, there would be a way to have per-nodepool alterations. A cluster administrator still has the option to edit nodepool-template.json and lose the benefits of the change tracking our tool provides, but that is a choice, not a forced decision.
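For illustration, such a root stack could look roughly like the following. This is only a sketch (written as CloudFormation YAML for brevity, although the kube-aws templates are JSON), and the stack, parameter, and output names are placeholders rather than actual kube-aws names:

```yaml
# Rough sketch only: TemplateURL values, parameter names, and output names are placeholders.
Resources:
  Main:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/<bucket>/stack-template.json
  NodePool1:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/<bucket>/nodepool-template.json
      Parameters:
        # Propagate outputs of the main stack into the node pool stack's inputs
        VPCId: !GetAtt Main.Outputs.VPCId
        RouteTableId: !GetAtt Main.Outputs.RouteTableId
        WorkerSecurityGroupId: !GetAtt Main.Outputs.WorkerSecurityGroupId
```

The main stack would expose those values in its Outputs section and the node pool template would declare matching Parameters; that output-to-parameter wiring is the "propagation" discussed above.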

@redbaron
Contributor Author

"how to manage multiple kube-aws cluster(kcluster)s which may or may not read organization-wide common configuration(including settings fed to cluster.yaml, changes applied to stack-template.json and cloud-config-*) ", right?

yes and still be able to benefit from improvements which new version of kube-aws provides without labor intensive cherry-picking individual changes.

If so, what kind of information is contained in the "set of common changes" in the tailored branch?
I'm considering if it could be possible for kube-aws to provide an out-of-box way to provide "common configuration" to e.g. kube-aws render and/or kube-aws up.

It would be nice, but I am not sure it is possible without becoming another Git :)

The kinds of customizations we are going to have are mostly ones which are not supported by kube-aws, and some of them never will be: existing subnets, multiple EFS, Vault secrets and PKI, non-standard company DNS, monitoring & audit daemons, etc.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

The kinds of customizations we are going to have are mostly ones which are not supported by kube-aws, and some of them never will be: existing subnets, multiple EFS, Vault secrets and PKI, non-standard company DNS, monitoring & audit daemons, etc.

How exactly are you going to apply those customizations?

  • Probably with diff patches, kept under version control, applied on the fly to the generated stack-template.json and cloud-config-worker?
  • Probably with a script (1) to obtain Vault secrets with a Vault API key and (2) then feed those secrets to cloud-config-worker?

or anything else?

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

It would be nice, but I am not sure it is possible without becoming another Git :)

We won't version-control kube-aws-generated assets, but we could provide a framework to accept common customizations to those assets 😃

Then you could define your customizations entirely in dedicated files (a script to modify kube-aws assets? diff patches?) and put those customizations, shared among all the klusters, under the version control of Git repo(s)/branch(es).

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

The framework mentioned above could be something like what @c-knowles suggested before in #96 (comment): hooks.

@redbaron
Contributor Author

redbaron commented Jan 12, 2017

How exactly are you going to apply those customizations?
They are done by committing either to the tailored branch (which is a set of common changes that then get merged into all kluster branches) or to the individual kluster branches.

The script simplifies all the dancing around git, but it does nothing more than a sequence of checkouts, kube-aws calls, and merges. Vault integration is done by adding/modifying systemd unit files in the userdata.

It doesn't matter what the changes are; there are always going to be features which a general-purpose tool doesn't support. Things like the recent customSettings help, but things like multiple node-pool directories don't :)

Then you could define your customizations entirely in dedicated files (a script to modify kube-aws assets? diff patches?)

It is possible, but managing them will be an unimaginable pain. Right now the central team can do git checkout tailored; vim ..; git commit and then know that this change will be propagated to all kluster-<NAME> branches. It's hard to beat this workflow in ease of maintenance; it builds on top of the power of Git.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

It is possible, but managing them will be an unimaginable pain. Right now the central team can do git checkout tailored; vim ..; git commit and then know that this change will be propagated to all kluster-<NAME> branches. It's hard to beat this workflow in ease of maintenance; it builds on top of the power of Git.

I guess you've been managing the customizations in your own form inside the tailored branch anyway, right?

I'd imagined that, with the framework, you could generate each kluster branch from the tailored branch containing customizations fed to kube-aws via the framework, in a more straightforward way.

@redbaron
Contributor Author

redbaron commented Jan 12, 2017

I guess you've been managing them in your own form inside the tailored branch anyway, right?

No, why? The first commit on the tailored branch is a commit on the vanilla branch, which is the exact output of kube-aws init && kube-aws render. From there, there are modifications and merges of more recent vanilla branches.

you could generate each kluster branch from the tailored branch containing customizations fed to kube-aws via the framework, in a more straightforward way.

When a new kluster branch is created, it is essentially git fetch; git checkout -b kluster-main1 origin/tailored; vim ...; git push -u origin kluster-main1, but wrapped in a user-friendly script. No need for frameworks.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

Yes, but I guess you apply your further customizations on top of the contents of the tailored branch, right?
I'd rather suggest managing only the following in the tailored branch:

  1. the general configuration per kluster (a.k.a. cluster.yaml),
  2. common configurations optionally inherited by kluster(s) (NEW!),
  3. patches to customize your stack-template.json, cloud-config-worker, credentials, etc. per kluster, and
  4. the list of klusters.

That way, I guess your workflow could be implemented a bit more easily, plus kube-aws could support it via the framework by applying (1) (2) (3) while running kube-aws render && kube-aws up.
Implementation details like the paths of the node pools' configuration directories wouldn't affect your workflow then?

@redbaron
Contributor Author

redbaron commented Jan 12, 2017

This makes it hard to maintain, mainly because of (3). Preparing/testing/applying these patches is going to be a bad experience for everybody. Imagine if a new version of kube-aws render conflicts with one of the patches prepared with the previous one. Git merge conflicts are intuitive to resolve and only need to be resolved once.

In our workflow the tailored branch is not per cluster; it is a modified output of kube-aws init with CHANGE_ME placeholders all over cluster.yaml. And cluster.yaml is in the root of it:

cluster.yaml
stack-template.json
userdata/

Each kluster-<NAME> branch was once forked off the tailored branch, and then the administrator filled in the params in cluster.yaml.

Then it is super easy:

  • new versions of kube-aws produce output into the root of the vanilla branch, which is then merged into tailored, where conflicts are resolved if needed
  • new changes are added to the tailored branch over time if it is clear that most clusters will benefit from them
  • cluster administrators pull the tailored branch into their own branch, merging/resolving conflicts with the per-cluster customizations they made.

Managing a bunch of .patch files or transform hooks can't beat it.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

Aha, so now I guess what you'd need is:

In the tailored branch, something like:

  • cluster.yaml
  • stack-template.json
  • credentials/
  • user-data/
  • node-pools/
    • ondemand/
      • all the node pool specific files
    • spot/
      • all the node pool specific files
    • connectslegacynetwork/
      • all the node pool specific files

Then, in a kluster branch, you can choose which node pools are actually created for that kluster by selectively removing unnecessary node pools under the node-pools/ directory.

A node pool's stack name is namespaced under the main stack's name, so you can choose stable names for node pools today and it won't prevent you from creating node pools for each kluster, as long as you differentiate clusterName for each kluster.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

...and if you care about duplication between the main and the node pools' stack-template.json files, would the helpers suggested in #238 (comment) help?

@redbaron
Contributor Author

redbaron commented Jan 12, 2017

That is one way to do it, one which I wanted to avoid, because these nodepools are not only separated by function, they'd also better be one per AZ, right? Now imagine we need to add a new daemon to all workers; how many files need to be edited with the same copy-paste? Doing #217 I had to make the same changes to 2 files, and that repetitive work already killed me :)

That is not to mention that if a kluster owner decides to create a new nodepool, he will lose all current common customizations and future updates to it.

There are workarounds I am exploring, such as having a tailored-nodepool branch where the nodepool directory is at its root, then doing git subtree merging (http://www.kernel.org/pub/software/scm/git/docs/howto/using-merge-subtree.html) into a node-pools-common dir on a kluster-<NAME> branch and then symlinking files into node-pool/pool1 directories there, but it is clunky and error-prone, and I am not yet sure it is going to work or be easy to live with.

Hence my request: is it possible to have a set of fixed paths even for a "dynamic" number of nodepools?

@redbaron
Contributor Author

...and if you care about duplication between the main and the node pools' stack-template.json files, would the helpers suggested in #238 (comment) help?

A bit of duplication between 2-4 files is fine; no need to normalize everything to the absolute extreme :)

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

I can't stop being impressed by your deep understanding of the problem 👍 Thanks for your patience here!

OK, then introducing helpers would be a solution not for this but for other problems.

How about the following in the tailored branch?

  • cluster.yaml
  • stack-template.json
  • credentials/
  • user-data/
  • node-pools/
    • <fixed but arbitrary-to-your-org name 1 e.g. ondemand-1a, spot-1b, legacy-1a, staticip-1a, or vice versa>/
      • all the node pool specific files
      • (In cluster.yaml in a kluster branch, clusterName and nodePoolName could be modified regardless of what the directory name is? might be a feature request)
    • <fixed but arbitrary-to-your-org name 2>/
      • all the node pool specific files
    • <fixed but arbitrary-to-your-org name N>/, where N is the max number of node pools for a kluster in your org

plus somehow adding to kube-aws the functionality to fall back to a default cloud-config-worker shared among the main stack and node pools when a specific one doesn't exist in the node pool's user-data directory.

This way, I guess you won't need to use git to deal with directories with dynamic names under the node-pools/ directory?

Or, I'd appreciate it if you could share with me an example structure of the contents of the tailored branch once the paths are stable 🙇

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

Or, I'd suggest adding options in cluster.yaml to specify arbitrary paths for the files which eventually get embedded into a cfn stack template. Such paths would include stack-template.json, cloud-config-worker, and credentials/*.
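Purely as an illustration, such options might look something like the sketch below; these keys are hypothetical and are not actual kube-aws configuration:

```yaml
# Hypothetical keys, sketched only to illustrate configurable asset paths; they do not exist in kube-aws.
stackTemplatePath: stack-template.json
userDataPaths:
  worker: userdata/cloud-config-worker
  controller: userdata/cloud-config-controller
credentialsDir: credentials/
```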

@redbaron
Contributor Author

The ideal layout for me would be:

cluster.yaml
stack-template.json
nodepool-template.json
root-template.json # glue CF
credentials/
user-data/

where cluster.yaml controls both the "main" setup and the nodepools. Then we scrap all the kube-aws nodepool commands and make the regular kube-aws up/update control the whole cluster, like it did originally.

Individual nodepool configs in cluster.yaml may choose to override the path to the userdata files, but by default they point to the same set. If there were a customSettings per nodepool, that would be enough to provide all the needed customizations while keeping fixed paths on the filesystem.
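As a rough illustration of that idea (the worker.nodePools shape matches what is proposed later in this thread; the per-pool userData and customSettings keys are purely hypothetical, not existing kube-aws configuration):

```yaml
# Sketch only: `userData` and the per-pool `customSettings` are hypothetical keys.
worker:
  nodePools:
  - name: pool1            # uses the shared user-data/ files by default
    count: 3
  - name: pool2
    count: 2
    userData: user-data-gpu/cloud-config-worker   # hypothetical per-pool override of the userdata path
    customSettings:                                # hypothetical per-pool free-form values for customized templates
      vaultRole: worker-gpu
```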

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

Sounds good!

Several things I'd like to add:

  • nodepool-template.json should be named node-pool-stack-template.json or stack-templates/node-pool.json for consistency. The latter would be better.
  • root-template.json should be named root-stack-template.json or stack-templates/root.json for consistency. The latter would be better.
  • stack-template.json should be renamed to main-stack-template.json or stack-templates/main.json, and we should also keep supporting stack-template.json for backward compatibility. The latter would be better.
  • cluster.yaml should introduce the new key worker.nodePools (rather than node-pools). Node pools configuration should be read from both the values residing in the new worker.nodePools array in cluster.yaml and the former node-pools/ directory for the time being.
    • Also note that kube-aws node-pool init won't work on cluster.yaml. It is hard to modify cluster.yaml in place programmatically in Go, isn't it? 😃

I'd also like to hear more feedback from the community before a PR is actually merged, but I'd be happy to review WIP PRs.

@mumoshu
Contributor

mumoshu commented Jan 12, 2017

@redbaron Would you mind changing the subject of this issue to something like "Support the use-case to manage multiple kube-aws clusters' configurations optionally inheriting organization-specific customizations with a version control system like Git"?

@redbaron
Contributor Author

@mumoshu that sounds perfect for my use-case! I'll prepare something over the weekend to show.

redbaron changed the title from "why worker pools are in separate files?" to "Support the use-case to manage multiple kube-aws clusters' configurations optionally inheriting organization-specific customizations with a version control system like Git" on Jan 12, 2017
@mumoshu
Contributor

mumoshu commented Jan 12, 2017

@redbaron Thanks for the discussion! I'm really looking forward to it 👍

@mumoshu
Contributor

mumoshu commented Jan 13, 2017

Hi @redbaron, I've begun to believe that making kube-aws up, kube-aws update, and kube-aws destroy work on root-stack-template.json, i.e. the whole cluster, instead of stack-template.json, i.e. the main cluster, will also make it possible to achieve what I mentioned in another issue, #176 (comment).
Could you confirm whether that might be true?

@redbaron
Contributor Author

Yes, it solves that, and it also eliminates the need to manually keep params like "sshAuthorizedKeys" in sync across all the cluster.yaml files.

@cknowles
Contributor

@mumoshu I think the above means the top two remaining points from #44 (comment) will be dealt with here. Do you agree?

@mumoshu
Contributor

mumoshu commented Jan 16, 2017

@redbaron

Then we scrap all the kube-aws nodepool commands and make the regular kube-aws up/update control the whole cluster, like it did originally.

I still want to keep the nodepool subcommand, not for creating/updating the underlying cfn stacks, but at least for adding new node pool definitions (which will then be "reflected" via cfn by running kube-aws up as you've suggested).
Would that be possible?

@mumoshu
Contributor

mumoshu commented Jan 16, 2017

@c-knowles Those are not requirements which must be addressed before a future PR from @redbaron is merged anyway.

Those are definitely things I'd appreciate if he could include in a PR, but I'd recommend not doing so, at least in the initial PR, to keep each PR reviewable and hence able to be merged quickly!

Those can even be addressed elsewhere today, because kube-aws can anyway read cluster.yaml for both the main cluster and node pools to "inherit" values from main as defaults for a node pool.

@redbaron
Contributor Author

I still want to keep the nodepool subcommand, not for creating/updating the underlying cfn stacks, but at least for adding new node pool definitions (which will then be "reflected" via cfn by running kube-aws up as you've suggested).

All that command would do is modify the nodePools map in the YAML file in place. Editing YAML files while keeping all the formatting and existing comments is not the most pleasant experience, for very little added benefit. With a detailed enough commented example in cluster.yaml, it would be trivial to create/update/delete a nodepool.

@mumoshu
Contributor

mumoshu commented Jan 27, 2017

@redbaron Agreed. I'm going to drop kube-aws node-pools init then.

@mumoshu
Contributor

mumoshu commented Jan 27, 2017

@redbaron Btw, do you have any plan to publish the tool you're developing as OSS? 😃
If not, could I request it, so that we can collaborate more closely?

mumoshu added a commit to mumoshu/kube-aws that referenced this issue Feb 15, 2017
This is an implementation of kubernetes-retired#238 from @redbaron, especially what I've described in my comment there (kubernetes-retired#238 (comment)), and an answer to the request "**3. Node pools should be more tightly integrated**" of kubernetes-retired#271 from @Sasso.
I believe this also achieves what was requested by @andrejvanderzee in kubernetes-retired#176 (comment).

After applying this change:

1. All the `kube-aws node-pools` sub-commands are dropped
2. You can now bring up a main cluster and one or more node pools at once with `kube-aws up`
3. You can now update all the sub-clusters including a main cluster and node pool(s) by running  `kube-aws update`
4. You can now destroy all the AWS resources spanning main and node pools at once with `kube-aws destroy`
5. You can configure node pools by defining a `worker.nodePools` array in `cluster.yaml`
6. `workerCount` is dropped. Please migrate to `worker.nodePools[].count`
7. `node-pools/` and hence `node-pools/<node pool name>` directories, `cluster.yaml`, `stack-template.json`, `user-data/cloud-config-worker` for each node pool are dropped.
8. A typical local file tree would now look like:
  - `cluster.yaml`
  - `stack-templates/` (generated on `kube-aws render`)
     - `root.json.tmpl`
     - `control-plane.json.tmpl`
     - `node-pool.json.tmpl`
  - `userdata/`
     - `cloud-config-worker`
     - `cloud-config-controller`
     - `cloud-config-etcd`
  - `credentials/`
     - *.pem(generated on `kube-aws render`)
     - *.pem.enc(generated on `kube-aws validate` or `kube-aws up`)
  - `exported/` (generated on `kube-aws up --export --s3-uri <s3uri>`)
     - `stacks/`
       - `control-plane/`
         - `stack.json`
         - `user-data-controller`
       - `<node pool name = stack name>/`
         - `stack.json`
         - `user-data-worker`
9. A typical object tree in S3 would now look like:
  - `<bucket and directory from s3URI>`/
    - kube-aws/
      - clusters/
        - `<cluster name>`/
          - `exported`/
            - `stacks`/
              - `control-plane/`
                - `stack.json`
                - `cloud-config-controller`
              - `<node pool name = stack name>`/
                - `stack.json`

Implementation details:

Under the hood, kube-aws utilizes CloudFormation nested stacks to delegate management of multiple stacks as a whole.
kube-aws now creates 1 root stack and nested stacks including 1 main(or currently named "control plane") stack and 0 or more node pool stacks.
kube-aws operates on S3 to upload all the assets required by all the stacks(root, main, node pools) and then on CloudFormation to create/update/destroy a root stack.

An example `cluster.yaml` I've used to test this looks like:

```yaml
clusterName: <your cluster name>
externalDNSName: <your external dns name>
hostedZoneId: <your hosted zone id>
keyName: <your key name>
kmsKeyArn: <your kms key arn>
region: ap-northeast-1
createRecordSet: true
experimental:
  waitSignal:
    enabled: true
subnets:
- name: private1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.1.0/24"
  private: true
- name: private2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.2.0/24"
  private: true
- name: public1
  availabilityZone: ap-northeast-1a
  instanceCIDR: "10.0.3.0/24"
- name: public2
  availabilityZone: ap-northeast-1c
  instanceCIDR: "10.0.4.0/24"
controller:
  subnets:
  - name: public1
  - name: public2
  loadBalancer:
    private: false
etcd:
  subnets:
  - name: public1
  - name: public2
worker:
  nodePools:
  - name: pool1
    subnets:
    - name: asgPublic1a
  - name: pool2
    subnets: # former `worker.subnets` introduced in v0.9.4-rc.1 via kubernetes-retired#284
    - name: asgPublic1c
    instanceType: "c4.large" # former `workerInstanceType` in the top-level
    count: 2 # former `workerCount` in the top-level
    rootVolumeSize: ...
    rootVolumeType: ...
    rootVolumeIOPs: ...
    autoScalingGroup:
      minSize: 0
      maxSize: 10
    waitSignal:
      enabled: true
      maxBatchSize: 2
  - name: spotFleetPublic1a
    subnets:
    - name: public1
    spotFleet:
      targetCapacity: 1
      unitRootVolumeSize: 50
      unitRootvolumeIOPs: 100
      rootVolumeType: gp2
      spotPrice: 0.06
      launchSpecifications:
      - spotPrice: 0.12
        weightedCapacity: 2
        instanceType: m4.xlarge
        rootVolumeType: io1
        rootVolumeIOPs: 200
        rootVolumeSize: 100
```
mumoshu added a commit to mumoshu/kube-aws that referenced this issue Feb 16, 2017
@mumoshu
Contributor

mumoshu commented Feb 16, 2017

@redbaron Can we close this as #315 is already merged?

@redbaron
Contributor Author

This is great! I'll port our config to the new structure soon and test it.

kylehodgetts pushed a commit to HotelsDotCom/kube-aws that referenced this issue Mar 27, 2018