Releases: Azure/AKS
Release 2019-05-20
- Behavioral Changes
- The 192.0.2.0/24 IP block is now reserved for AKS use. Clusters created in
a VNet that overlaps with this block will fail pre-flight validation.
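As a quick check before creating a cluster in an existing VNet, the VNet's address space can be inspected for overlap with the reserved block. This is a minimal sketch; the resource group, VNet name and subnet ID are placeholders, not values from this release.

    # Inspect the VNet address space; no prefix may overlap 192.0.2.0/24
    az network vnet show \
      --resource-group myResourceGroup \
      --name myVnet \
      --query "addressSpace.addressPrefixes"

    # If there is no overlap, cluster creation against a subnet in this VNet
    # should pass the new pre-flight validation
    az aks create \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --network-plugin azure \
      --vnet-subnet-id <subnet-resource-id>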
- Bug Fixes
- An issue where users running old AKS clusters attempting to upgrade would
  get a failed upgrade with an Internal Server Error has been fixed.
- An issue where Kubernetes 1.14.0 would not show in the Azure Portal or AKS
  Preview CLI with the 'Preview' or 'isPreview' tag has been resolved.
- An issue where customers would get excessive log entries due to missing
  Heapster RBAC permissions has been fixed.
- An issue where AKS clusters could end up with missing DNS entries, resulting
  in DNS resolution errors or crashes within CoreDNS, has been resolved.
- Preview Features
- A bug where the AKS node count could be out of sync with the VMSS node count
  has been resolved.
- There is a known issue with the cluster autoscaler preview and multiple
  agent pools. The current autoscaler in preview is not compatible with
  multiple agent pools, and previously could not be disabled. We have fixed
  the issue that blocked disabling the autoscaler. A fix for multiple agent
  pools and the cluster autoscaler is in development.
Release 2019-05-17 (Announcement)
- Windows node support for AKS is now in Public Preview
- Blog post: https://aka.ms/aks/windows
- Support and documentation:
- Documentation: https://aka.ms/aks/windowsdocs
- Issues may be filed on this GitHub repository (https://github.com/Azure/AKS)
  or raised as a Sev C support request. Support requests and issues for
  preview features do not have an SLA / SLO and are best-effort only.
- Do not enable preview features on production subscriptions or clusters.
- For all previews, please see the previews document for opt-in
instructions and documentation links.
- Bug fixes
- An issue impacting Java workloads, where pods running Java workloads would
  consume all available node resources instead of respecting the pod resource
  limits defined by the user, has been resolved.
  - https://bugs.openjdk.java.net/browse/JDK-8217766
  - AKS-Engine PR for fix: Azure/aks-engine#1095
- Component Updates
- AKS-Engine has been updated to v0.35.1
Release 2019-05-13
- New Features
- Shared Subnets are now supported with Azure CNI.
  - Users may bring / provide their own subnets to AKS clusters (see the
    sketch after this list).
  - Subnets are no longer restricted to a single subnet per AKS cluster; users
    may now have multiple AKS clusters on a subnet.
  - If the subnet provided to AKS has NSGs, those NSGs will be preserved and
    used.
    - Warning: NSGs must respect: https://aka.ms/aksegress or the cluster
      might not come up or work properly.
  - Note: Shared subnet support is not supported with VMSS (in preview)
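A minimal sketch of attaching a cluster to an existing (shared) subnet with Azure CNI, per the item above; the resource names, subnet ID and CIDR values are illustrative placeholders, not values from this release.

    # Create an AKS cluster on an existing subnet (Azure CNI / advanced networking)
    az aks create \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --network-plugin azure \
      --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>" \
      --service-cidr 10.0.0.0/16 \
      --dns-service-ip 10.0.0.10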
- Bug Fixes
- A bug that blocked Azure CNI users from setting maxPods above 110 (maximum
  of 250) and that blocked existing clusters from scaling up when the value
  was over 110 for CNI has been fixed (see the sketch after this list).
- A validation bug blocking long DNS names used by customers has been fixed.
  For restrictions on DNS/Cluster names, please see
  https://aka.ms/aks-naming-rules
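A hedged sketch of setting maxPods above the old 110 limit at creation time; the 250 ceiling comes from the note above, the other values are placeholders.

    # Create an Azure CNI cluster with a maxPods value above 110 (up to 250)
    az aks create \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --network-plugin azure \
      --max-pods 250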
Release 2019-05-06
This release is currently rolling out to all regions
- New Features
- Kubernetes Network Policies are GA
- See https://docs.microsoft.com/en-us/azure/aks/use-network-policies
for documentation.
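As a quick illustration of the now-GA capability, a minimal sketch of enabling network policy at cluster creation; the resource names are placeholders, and the linked documentation covers the supported policy engines and policy manifests.

    # Network policy is chosen at creation time and requires Azure CNI
    az aks create \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --network-plugin azure \
      --network-policy azure   # or: calico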
- Bug Fixes
- An issue customers reported with CoreDNS entering CrashLoopBackoff has
  been fixed. This was related to the upstream move to klog.
- An issue where AKS managed pods (within kube-system) did not have the correct
  tolerations, preventing them from being scheduled when customers use
  taints/tolerations, has been fixed.
- An issue with kube-dns crashing on specific config map override scenarios,
  as seen in Azure/acs-engine#3534, has been resolved by updating to the
  latest upstream kube-dns release.
- An issue where customers could experience longer than normal create times
  for clusters, tied to a blocking wait on heapster pods, has been resolved.
- Preview Features
- New features in public preview:
  - Secure access to the API server using authorized IP address ranges (see
    the sketch after this list)
  - Locked down egress traffic
    - This feature allows users to limit / whitelist the hosts used by AKS
      clusters.
  - Multiple Node Pools
- For all previews, please see the previews document for opt-in
  instructions and documentation links.
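A minimal sketch of the authorized IP ranges preview as it is typically driven from the CLI; the flag shown below and the 203.0.113.0/24 range are assumptions for illustration, and the opt-in steps in the previews document are authoritative.

    # Restrict API server access to an allowed public IP range
    az aks update \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --api-server-authorized-ip-ranges 203.0.113.0/24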
Release 2019-04-01
This release is rolling out to all regions
- Bug Fixes
- Resolved an issue preventing some users from leveraging the Live Container Logs feature (due to a 401 unauthorized).
- Resolved an issue where users could get "Failed to get list of supported orchestrators" during upgrade calls.
- Resolved an issue where users using custom subnets/routes/networking with AKS, where IP ranges match the cluster/service or node IPs, could end up unable to exec into pods, get cluster logs (kubectl logs), or otherwise pass required health checks.
- An issue where a user running az aks get-credentials while a cluster is in creation would get an unclear error ('Could not find role name') has been resolved.
Release 2019-04-22
This release is rolling out to all regions
- Kubernetes 1.14 is now in Preview
- Do not use this for production clusters. This version is for early adopters
  and advanced users to test and validate.
- Accessing the Kubernetes 1.14 release requires the aks-preview CLI
  extension to be installed.
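A minimal sketch of installing the aks-preview CLI extension mentioned above and listing the versions offered in a region; the region name is a placeholder.

    # Install (or refresh) the aks-preview Azure CLI extension
    az extension add --name aks-preview

    # List available Kubernetes versions, including previews such as 1.14
    az aks get-versions --location eastus --output table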
- New Features
- Users are no longer forced to create / pre-provision subnets when using
Advanced networking. Instead, if you choose advanced networking and do not
supply a subnet, AKS will create one on your behalf.
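A hedged sketch of the behavior described above: requesting advanced (Azure CNI) networking without supplying a subnet, so AKS creates one on your behalf; the resource names are placeholders.

    # No --vnet-subnet-id supplied; AKS provisions the VNet/subnet itself
    az aks create \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --network-plugin azure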
- Bug fixes
- An issue where AKS / the Azure CLI would silently ignore the
  --network-plugin=azure option and create clusters with Kubenet has been
  resolved.
  - Specifically, there was a bug in the cluster creation workflow where users
    would specify --network-plugin=azure with Azure CNI / Advanced Networking
    but miss passing in the additional options (eg --pod-cidr, --service-cidr,
    etc). If this occurred, the service would fall back and create the cluster
    with Kubenet instead.
- Preview Features
- Kubernetes 1.14 is now in Preview
- An issue with Network Policy and Calico where cluster creation could
  fail/time out and pods would enter a crashloop has been fixed.
  - #905
  - Note, in order to get the fix properly applied, you should create a new
    cluster based on this release, or upgrade your existing cluster and then
    run the following clean up command after the upgrade is complete:
    kubectl delete -f https://github.com/Azure/aks-engine/raw/master/docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml
Release 2019-04-15
- Kubernetes 1.13 is GA
- The Kubernetes 1.9.x releases are now deprecated. All clusters
  on version 1.9 must be upgraded to a later release (1.10, 1.11, 1.12, 1.13)
  within 30 days. Clusters still on 1.9.x after 30 days (2019-05-25)
  will no longer be supported.
  - During the deprecation period, 1.9.x will continue to appear in the
    available versions list. Once deprecation is completed, 1.9 will be
    removed.
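A minimal sketch of checking the available upgrade targets for a 1.9.x cluster and moving to a supported release before the deadline; resource names and the chosen version are placeholders.

    # List the versions this cluster can upgrade to
    az aks get-upgrades \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --output table

    # Upgrade to one of the supported releases returned above
    az aks upgrade \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --kubernetes-version <target-version>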
- (Region) North Central US is now available
- (Region) Japan West is now available
- New Features
- Customers may now provide custom Resource Group names (see the sketch after
  this list).
  - This means that users are no longer locked into the MC_* resource group
    name. On cluster creation you may pass in a custom RG and AKS will
    inherit that RG and its permissions, and attach AKS resources to the
    customer-provided resource group.
    - Currently, the RG (resource group) you pass in must be new and cannot
      be a pre-existing RG. We are working on support for pre-existing RGs.
    - This change requires newly provisioned clusters; existing clusters
      cannot be migrated to support this new capability. Cluster migration
      across subscriptions and RGs is not currently supported.
- AKS now properly associates existing route tables created by AKS when
  passing in custom VNET for Kubenet/Basic Networking. This does not
  support User Defined / Custom routes (UDRs).
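A hedged sketch of how a custom node resource group is typically passed in from the CLI; the --node-resource-group parameter name and the values are assumptions for illustration, since the release notes themselves do not name the CLI option.

    # Node resources are created in the custom (new) resource group instead of MC_*
    az aks create \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --node-resource-group myNodeResourceGroup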
- Bug fixes
- An issue where two delete operations could be issued against a cluster
  simultaneously, resulting in an unknown and unrecoverable state, has been
  resolved.
- An issue where users could create a new AKS cluster and set the maxPods
  value too low has been resolved.
  - Users have reported cluster crashes, unavailability and other issues
    when changing this setting. As AKS is a managed service, we provide
    sidecars and pods that we deploy and manage as part of the cluster.
    However, users could define a maxPods value lower than the value required
    for the managed pods to run (eg 30). AKS now validates the requested
    value via: maxPods (or maxPods * vm_count) > managed add-on pods. For
    example, a 3-node cluster with maxPods set to 30 provides capacity for
    90 pods, which must exceed the number of managed add-on pods.
- Behavioral Changes
- AKS cluster creation now properly pre-checks the assigned service CIDR
  range to block against possible conflicts with the dns-service CIDR (see
  the sketch after this list).
  - As an example, a user could use 10.2.0.1/24 instead of 10.2.0.0/24, which
    would lead to IP conflicts. This is now validated/checked and, if there
    is a conflict, a clear error is returned.
- AKS now correctly blocks/validates users who accidentally attempt an
  upgrade to a previous release (eg a downgrade).
- AKS now validates all CRUD operations to confirm the requested action will
  not fail due to IP address/subnet exhaustion. If a call is made that would
  exceed available addresses, the service correctly returns an error.
- The amount of memory allocated to the Kubernetes Dashboard has been
  increased to 500Mi for customers with large numbers of nodes/jobs/objects.
- Small VM SKUs (such as Standard F1 and A2) that do not have enough RAM to
  support the Kubernetes control plane components have been removed from the
  list of available VMs users can use when creating AKS clusters.
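A minimal sketch of supplying an explicit, well-formed service CIDR and a DNS service IP inside that range, matching the example values above; everything else is a placeholder.

    # Use the network address (10.2.0.0/24), not a host address such as 10.2.0.1/24,
    # and keep the DNS service IP inside the service CIDR
    az aks create \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --network-plugin azure \
      --service-cidr 10.2.0.0/24 \
      --dns-service-ip 10.2.0.10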
- Preview Features
- A bug where Calico pods would not start after a 1.11 to 1.12 upgrade has
  been resolved.
- When using network policies and Calico, AKS now properly uses Azure CNI for
  all routing instead of defaulting to Calico as the routing plugin.
- Calico has been updated to v3.5.0
- Component Updates
- AKS-Engine has been updated to v0.33.4
  - See: https://github.com/Azure/aks-engine/releases/tag/v0.33.4 for details
Release 2019-04-08 (Hotfix)
This release fixes one AKS product regression and an issue identified with the Azure Jenkins plugin.
- A regression when using ARM templates to issue AKS cluster update(s) (such as configuration changes) that also impacted the Azure Portal has been fixed.
- Users do not need to perform any actions / upgrades for this fix.
- An issue when using the Azure Container Jenkins plugin with AKS has been mitigated.
- This issue caused errors and failures when using the Jenkins plugin - the bug was triggered by a new AKS API version but was related to a latent issue in the plugin's API detection behavior.
- An updated Jenkins plugin has been published: jenkinsci/azure-acs-plugin#16
- https://github.com/jenkinsci/azure-acs-plugin/releases/tag/azure-acs-0.2.4
Release 2019-04-04 - Hotfix (CVE mitigation)
- Bug fixes
- New kubernetes versions released with multiple CVE mitigations
- Kubernetes 1.12.7
- Kubernetes 1.11.9
- Customers should upgrade to the latest 1.11 and 1.12 releases (a brief
  upgrade sketch follows this list).
- Kubernetes versions prior to 1.11 must upgrade to 1.11/1.12 for the fix.
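A brief sketch of moving a cluster to one of the patched releases listed above; the resource names are placeholders.

    # Upgrade to a CVE-patched release (1.12.7, or 1.11.9 for clusters staying on 1.11)
    az aks upgrade \
      --resource-group myResourceGroup \
      --name myAKSCluster \
      --kubernetes-version 1.12.7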
- Component updates
- Updated included AKS-Engine version to 0.33.2
Release 2019-03-29 (Hotfix)
- The following regions are now GA: South Central US, Korea Central and Korea South
- Bug fixes
- Fixed an issue which prevented Kubernetes addons from being disabled.
- Behavioral Changes
- AKS will now block subsequent PUT requests (with a status code 409 - Conflict) while an ongoing operation is being performed.