Releases: ksandermann/formkube
v3.0.0
v2.0.0
Changes
- changed node_resource_group for the AKS cluster from cluster_fqdn to platform_rg_name + "_ClusterResources" (see the sketch after this list)
- marked secrets and AKS kubeconfig outputs as sensitive
- updated runtime to ksandermann/cloud-toolbox:2019-09-17_01
- bumped terraform to 0.12.8
- bumped azure provider versions to latest supported versions
- added terraform output for kube configs to apply script
- bumped example cluster AKS version to 1.13.10
- implemented output var with the Azure DNS zone nameservers
- implemented probe_id for lb rules, fixing #1
- modularized destroy and plan scripts
- added set -e to all scripts again
- implemented multiple output vars for the Azure provider: private IPs for cluster VMs, public IPs for bastions
- added tenant_id to the AKS AD integration
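A minimal Terraform sketch of the changes above, assuming illustrative resource, variable, and output names (azurerm_kubernetes_cluster.aks, azurerm_dns_zone.main, azurerm_lb_probe.api, var.platform_rg_name, etc.) that are not necessarily the module's actual ones:

```hcl
resource "azurerm_kubernetes_cluster" "aks" {
  # ...
  # node resource group is now derived from the platform resource group name
  node_resource_group = "${var.platform_rg_name}_ClusterResources"

  role_based_access_control {
    enabled = true

    azure_active_directory {
      client_app_id     = var.aad_client_app_id
      server_app_id     = var.aad_server_app_id
      server_app_secret = var.aad_server_app_secret
      tenant_id         = var.tenant_id # now passed explicitly
    }
  }
}

# kubeconfig is marked sensitive so it is not echoed in plan/apply output
output "aks_kube_config_raw" {
  value     = azurerm_kubernetes_cluster.aks.kube_config_raw
  sensitive = true
}

# nameservers of the Azure DNS zone, e.g. for delegating the domain at the registrar
output "dns_zone_name_servers" {
  value = azurerm_dns_zone.main.name_servers
}

# LB rules now reference their health probe explicitly (fixes #1)
resource "azurerm_lb_rule" "api" {
  # ...
  probe_id = azurerm_lb_probe.api.id
}
```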
Bootstrap environment
ksandermann/cloud-toolbox:2019-09-17_01
Terraform: 0.12.8
recommended Docker: 19.03.2
v1.1.1
Changes
- AKS Provider: merged the DNS and cluster modules
- AKS Provider: migrated the load balancer public IP to the ClusterResources resource group
Bootstrap environment
ksandermann/cloud-toolbox:2019-08-08_01
Terraform: 0.12.6
recommended Docker: 18.09.2
v1.1.0
Changes
- implemented AKS support, so an AKS Kubernetes cluster can now be fully bootstrapped (see the sketch below)
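A minimal sketch of the core resource behind this, assuming illustrative names and the azurerm 1.x syntax of that era (agent_pool_profile and service_principal blocks; newer provider versions use default_node_pool and identity instead):

```hcl
resource "azurerm_kubernetes_cluster" "aks" {
  name                = var.cluster_name
  location            = var.location
  resource_group_name = var.platform_rg_name
  dns_prefix          = var.cluster_name
  kubernetes_version  = var.aks_kubernetes_version

  agent_pool_profile {
    name            = "default"
    count           = 3
    vm_size         = "Standard_DS2_v2"
    os_type         = "Linux"
    os_disk_size_gb = 50
  }

  service_principal {
    client_id     = var.aks_client_id
    client_secret = var.aks_client_secret
  }
}
```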
Bootstrap environment
ksandermann/cloud-toolbox:2019-08-08_01
Terraform: 0.12.6
recommended Docker: 18.09.2
v1.0.0
Changes
- enabled IP forwarding for all NICs
- added an empty Azure route for the cluster subnet (see the sketch after this list)
- added a message that the DNS zone will not be removed when running destroy
- added documentation for kubespray pitfalls
- adjusted kubespray sample inventories
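A minimal sketch of the NIC and routing changes, assuming "empty Azure route" refers to an empty route table associated with the cluster subnet, and using illustrative names:

```hcl
resource "azurerm_network_interface" "master" {
  name                 = "master-0-nic"
  location             = var.location
  resource_group_name  = var.platform_rg_name
  enable_ip_forwarding = true # pod traffic must be forwarded by the node

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.cluster.id
    private_ip_address_allocation = "Dynamic"
  }
}

# route table created empty; kubespray / the cloud provider can add routes later
resource "azurerm_route_table" "cluster" {
  name                = "cluster-routes"
  location            = var.location
  resource_group_name = var.platform_rg_name
}

resource "azurerm_subnet_route_table_association" "cluster" {
  subnet_id      = azurerm_subnet.cluster.id
  route_table_id = azurerm_route_table.cluster.id
}
```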
Bootstrap environment
ksandermann/cloud-toolbox:2019-07-18_01
Terraform: 0.12.4
recommended Docker: 18.09.2
v0.2.1
Changes
- changed default VM sizes to sizes that support Premium Storage (see the sketch after this list)
- destroy now keeps the DNS zone
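Premium Storage requires an "s"-capable VM size (DS/ES series and similar), so the change amounts to swapping the size defaults; a minimal sketch with an illustrative variable name:

```hcl
variable "master_vm_size" {
  description = "VM size for master nodes; must be an s-series size so Premium (SSD) managed disks can be attached"
  default     = "Standard_DS2_v2"
}
```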
Bootstrap environment
- ksandermann/cloud-toolbox:2019-07-18_01
- Terraform: 0.12.4
- recommended Docker: 18.09.2
v0.2.0
Changes
- moved from multi-zone high availability to availability sets (multiple racks in one region); see the sketch after this list.
  As it turns out, Azure does not support any of the following patterns:
  - attaching blob-based disks from a storage account to a VM that resides in a specific zone
  - attaching both blob-based disks and managed disks to the same VM at the same time
  - creating disks that reside in multiple zones at the same time (zone redundancy)
  - automatically migrating a disk from one zone to another

  As a consequence, the Kubernetes Azure cloud provider cannot properly create and manage disks:
  - The cloud provider has supported Azure availability zones since Kubernetes 1.12, as stated here. However, this feature only uses node affinities/constraints to attach disks to nodes that reside in the same zone as the disk. If a zone fails, the disk is lost with it, so the whole point of using multiple zones to survive a zone failure is not met.
  - OpenShift 3.11 still uses Kubernetes 1.11 and does not have this feature at all.
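A minimal sketch of the availability-set layout this release switches to, assuming illustrative names and the azurerm arguments of that era:

```hcl
resource "azurerm_availability_set" "masters" {
  name                         = "masters-avset"
  location                     = var.location
  resource_group_name          = var.platform_rg_name
  managed                      = true # aligned with managed disks
  platform_fault_domain_count  = 2
  platform_update_domain_count = 5
}

resource "azurerm_virtual_machine" "master" {
  # ...
  availability_set_id = azurerm_availability_set.masters.id
  # note: no "zones" argument – VMs are spread across fault domains within one region instead
}
```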
Bootstrap environment
- ksandermann/cloud-toolbox:2019-07-18_01
- Terraform: 0.12.4
- recommended Docker: 18.09.2
v0.1.1
Hotfixes
- fixed documentation: replaced references to deploy.sh with apply.sh
- fixed bug: the bastion A record now points to the public instead of the private IP (see the sketch below)
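A minimal sketch of the corrected A record, with illustrative names:

```hcl
resource "azurerm_dns_a_record" "bastion" {
  name                = "bastion"
  zone_name           = azurerm_dns_zone.main.name
  resource_group_name = var.platform_rg_name
  ttl                 = 300
  records             = [azurerm_public_ip.bastion.ip_address] # public, not private, IP
}
```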
Bootstrap environment:
- ksandermann/cloud-toolbox:2019-07-18_01
- Terraform: 0.12.4
- recommended Docker: 18.09.2
v0.1.0 - finally opensourced this
- Bootstrap environment: ksandermann/cloud-toolbox:2019-07-18_01
- Terraform: 0.12.4
- recommended Docker: 18.09.2