Releases: Azure/AKS

Release 2019-05-20

22 May 18:37
179c984
  • Behavioral Changes

    • The 192.0.2.0/24 IP block is now reserved for AKS use. Clusters created in
      a VNet that overlaps with this block will fail pre-flight validation.
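The overlap check that pre-flight validation performs can be sketched with Python's `ipaddress` module; this is an illustration, not the service's actual validation code, and the sample VNet ranges are hypothetical:

```python
import ipaddress

# Block reserved for AKS internal use as of this release
AKS_RESERVED = ipaddress.ip_network("192.0.2.0/24")

def vnet_conflicts(vnet_cidr: str) -> bool:
    """Return True if the VNet address space overlaps the reserved block."""
    return ipaddress.ip_network(vnet_cidr).overlaps(AKS_RESERVED)

print(vnet_conflicts("192.0.0.0/16"))  # True  -> pre-flight validation would fail
print(vnet_conflicts("10.0.0.0/8"))    # False -> no conflict
```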
  • Bug Fixes

    • An issue where upgrades of older AKS clusters would fail with an
      Internal Server Error has been fixed.
    • An issue where Kubernetes 1.14.0 would not show in the Azure Portal or AKS
      Preview CLI with the 'Preview' or 'isPreview' tag has been resolved.
    • An issue where customers would get excessive log entries due to missing
      Heapster rbac permissions has been fixed.
    • An issue where AKS clusters could end up with missing DNS entries resulting
      in DNS resolution errors or crashes within CoreDNS has been resolved.
  • Preview Features

    • A bug where the AKS node count could be out of sync with the VMSS node count
      has been resolved.
    • There is a known issue with the cluster autoscaler preview and multiple
      agent pools. The current autoscaler in preview is not compatible with
      multiple agent pools, and previously could not be disabled. We have
      fixed the issue that blocked disabling the autoscaler. A fix for
      multiple agent pools with the cluster autoscaler is in development.

2019-05-17 (Announcement)

17 May 18:24
584811e
  • Windows node support for AKS is now in Public Preview

    • Blog post: https://aka.ms/aks/windows
    • Support and documentation:
    • Do not enable preview features on production subscriptions or clusters.
    • For all previews, please see the previews document for opt-in
      instructions and documentation links.
  • Bug fixes

  • Component Updates

    • AKS-Engine has been updated to v0.35.1

Release 2019-05-13

16 May 16:27
a1d5007
  • New Features
    • Shared Subnets are now supported with Azure CNI.
      • Users may bring / provide their own subnets to AKS clusters
      • AKS is no longer restricted to a single cluster per subnet; users may
        now have multiple AKS clusters on one subnet.
      • If the subnet provided to AKS has NSGs, those NSGs will be preserved and
        used.
      • Note: Shared subnet support is not supported with VMSS (in preview)
  • Bug Fixes
    • A bug that blocked Azure CNI users from setting maxPods above 110 (maximum
      of 250) and that blocked existing clusters from scaling up when the value
      was over 110 for CNI has been fixed.
    • A validation bug blocking long DNS names used by customers has been fixed.
      For restrictions on DNS/Cluster names, please see
      https://aka.ms/aks-naming-rules
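Both shared subnets and the raised maxPods ceiling come down to address budgeting: with Azure CNI every pod consumes a subnet IP. A rough capacity check, as a sketch (function names are illustrative; the figure of 5 reserved addresses per Azure subnet is the platform default):

```python
import ipaddress

def cni_ips_required(node_count: int, max_pods: int) -> int:
    # Azure CNI pre-allocates max_pods IPs per node, plus the node's own IP
    return node_count * (max_pods + 1)

def subnet_fits(subnet_cidr: str, node_count: int, max_pods: int) -> bool:
    # Azure reserves 5 addresses in every subnet
    usable = ipaddress.ip_network(subnet_cidr).num_addresses - 5
    return cni_ips_required(node_count, max_pods) <= usable

print(subnet_fits("10.240.0.0/16", 10, 250))  # True:  2510 IPs needed, 65531 usable
print(subnet_fits("10.240.0.0/24", 10, 250))  # False: 2510 IPs needed, 251 usable
```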

2019-05-06 Release

10 May 15:41
07a5b2a

This release is currently rolling out to all regions

  • New Features

  • Bug Fixes

    • An issue customers reported with CoreDNS entering CrashLoopBackOff has
      been fixed. This was related to the upstream move to klog.
    • An issue where AKS managed pods (within kube-system) did not have the correct
      tolerations preventing them from being scheduled when customers use
      taints/tolerations has been fixed.
    • An issue with kube-dns crashing on specific config map override scenarios
      as seen in Azure/acs-engine#3534 has been
      resolved by updating to the latest upstream kube-dns release.
    • An issue where customers could experience longer than normal create times
      for clusters tied to a blocking wait on heapster pods has been resolved.
  • Preview Features

    • New features in public preview:
      • Secure access to the API server using authorized IP address ranges
      • Locked down egress traffic
        • This feature allows users to limit / whitelist the hosts used by AKS
          clusters.
      • Multiple Node Pools
      • For all previews, please see the previews document for opt-in
        instructions and documentation links.
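The authorized IP ranges preview gates API server access by caller address. Its effect can be sketched as a simple membership check (the ranges below are hypothetical examples, not AKS defaults):

```python
import ipaddress

# Hypothetical example ranges a cluster operator might authorize
AUTHORIZED_RANGES = [ipaddress.ip_network(c)
                     for c in ("203.0.113.0/24", "198.51.100.4/32")]

def api_server_allows(caller_ip: str) -> bool:
    """Return True if the caller falls inside any authorized range."""
    ip = ipaddress.ip_address(caller_ip)
    return any(ip in net for net in AUTHORIZED_RANGES)

print(api_server_allows("203.0.113.10"))  # True  -> request reaches the API server
print(api_server_allows("192.0.2.7"))     # False -> request is blocked
```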

Release 2019-04-01

11 Apr 17:55
07a5b2a

This release is rolling out to all regions

  • Bug Fixes
    • Resolved an issue preventing some users from leveraging the Live Container Logs feature (due to a 401 unauthorized).
    • Resolved an issue where users could get "Failed to get list of supported orchestrators" during upgrade calls.
    • Resolved an issue where custom subnets/routes/networking with AKS whose IP ranges match the cluster/service or node IPs could result in an inability to exec, fetch cluster logs (kubectl logs), or otherwise pass required health checks.
    • An issue where running az aks get-credentials while a cluster is in creation resulted in an unclear error ('Could not find role name') has been resolved.

Release 2019-04-22

01 May 20:03
9df8801

This release is rolling out to all regions

  • Kubernetes 1.14 is now in Preview

    • Do not use this for production clusters. This version is for early adopters
      and advanced users to test and validate.
    • Accessing the Kubernetes 1.14 release requires the aks-preview CLI
      extension to be installed.
  • New Features

    • Users are no longer forced to create / pre-provision subnets when using
      Advanced networking. Instead, if you choose advanced networking and do not
      supply a subnet, AKS will create one on your behalf.
  • Bug fixes

    • An issue where AKS / the Azure CLI would ignore the --network-plugin=azure
      option silently and create clusters with Kubenet has been resolved.
      • Specifically, there was a bug in the cluster creation workflow where users
        would specify --network-plugin=azure with Azure CNI / Advanced Networking
        but miss passing in the additional options (e.g. --pod-cidr, --service-cidr,
        etc.). If this occurred, the service would fall back and create the cluster
        with Kubenet instead.
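The fixed behavior amounts to validating the option set up front instead of silently downgrading. A minimal sketch of that validation (parameter names are illustrative, not the actual service API):

```python
def validate_network_options(network_plugin, pod_cidr=None, service_cidr=None):
    """Return the plugin to use, refusing to silently downgrade to kubenet."""
    if network_plugin == "azure":
        # Azure CNI was requested explicitly: honor it even if optional
        # CIDR settings were omitted (service defaults apply); never fall back.
        return "azure"
    if network_plugin is None and (pod_cidr or service_cidr):
        # CIDR options without a plugin choice are ambiguous -> fail loudly
        raise ValueError("CIDR options supplied without --network-plugin")
    return network_plugin or "kubenet"

print(validate_network_options("azure"))  # "azure", never silently "kubenet"
```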
  • Preview Features

    • Kubernetes 1.14 is now in Preview
    • An issue with Network Policy and Calico where cluster creation could
      fail/time out and pods would enter a crashloop has been fixed.
      • #905
      • Note, in order to get the fix properly applied, you should create a new
        cluster based on this release, or upgrade your existing cluster and then
        run the following clean up command after the upgrade is complete:
kubectl delete -f https://github.com/Azure/aks-engine/raw/master/docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml

Release 2019-04-15

24 Apr 19:18
525c196
  • Kubernetes 1.13 is GA

  • The Kubernetes 1.9.x releases are now deprecated. All clusters
    on version 1.9 must be upgraded to a later release (1.10, 1.11, 1.12, 1.13)
    within 30 days. Clusters still on 1.9.x after 30 days (2019-05-25)
    will no longer be supported.

    • During the deprecation period, 1.9.x will continue to appear in the available
      versions list. Once deprecation is complete, 1.9 will be removed.
  • (Region) North Central US is now available

  • (Region) Japan West is now available

  • New Features

    • Customers may now provide custom Resource Group names.
      • This means that users are no longer locked into the MC_* resource
        group name. On cluster creation you may pass in a custom RG and AKS
        will use that RG, inherit its permissions, and attach AKS resources
        to the customer-provided resource group.
        • Currently, the resource group you pass in must be new; it cannot be
          a pre-existing RG. We are working on support for pre-existing RGs.
        • This change requires newly provisioned clusters; existing clusters
          cannot be migrated to this new capability. Cluster migration across
          subscriptions and RGs is not currently supported.
    • AKS now properly associates existing route tables created by AKS when
      passing in a custom VNET for Kubenet/Basic Networking. This does not
      support User Defined / Custom Routes (UDRs).
  • Bug fixes

    • An issue where two delete operations could be issued against a cluster
      simultaneously resulting in an unknown and unrecoverable state has been
      resolved.
    • An issue where users could create a new AKS cluster and set the maxPods
      value too low has been resolved.
      • Users have reported cluster crashes, unavailability and other issues
        when changing this setting. As AKS is a managed service, we provide
        sidecars and pods that we deploy and manage as part of the cluster.
        Previously, users could define a maxPods value lower than the value
        required for the managed pods to run (e.g. 30). AKS now enforces a
        minimum by validating that maxPods * vm_count > managed add-on pods.
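That minimum check can be written out as: the cluster's total pod capacity must exceed the number of AKS-managed add-on pods. A minimal illustration (the default of 30 managed pods is the example figure from the note above, not a documented constant):

```python
def max_pods_is_valid(max_pods, vm_count, managed_addon_pods=30):
    # Total schedulable pod capacity must leave room for AKS-managed pods
    return max_pods * vm_count > managed_addon_pods

print(max_pods_is_valid(max_pods=10, vm_count=1))  # False: capacity 10 <= 30 managed pods
print(max_pods_is_valid(max_pods=30, vm_count=3))  # True:  capacity 90 > 30
```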
  • Behavioral Changes

    • AKS cluster creation now properly pre-checks the assigned service CIDR
      range to block possible conflicts with the dns-service CIDR.
      • As an example, a user could use 10.2.0.1/24 instead of 10.2.0.0/24,
        which would lead to IP conflicts. This is now validated/checked and,
        if there is a conflict, a clear error is returned.
    • AKS now correctly blocks/validates users who accidentally attempt an
      upgrade to a previous release (i.e. a downgrade).

    • AKS now validates all CRUD operations to confirm the requested action will
      not fail due to IP Address/subnet exhaustion. If a call is made that would
      exceed available addresses, the service correctly returns an error.
    • The amount of memory allocated to the Kubernetes Dashboard has been
      increased to 500Mi for customers with large numbers of nodes/jobs/objects.
    • Small VM SKUs (such as Standard F1, and A2) that do not have enough RAM to
      support the Kubernetes control plane components have been removed from the
      list of available VMs users can use when creating AKS clusters.
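The service-CIDR pre-check described above boils down to rejecting ranges that are not valid network addresses. Python's `ipaddress` makes the 10.2.0.1/24 example concrete, since strict parsing refuses a host address used as a network (a sketch of the idea, not the service's implementation):

```python
import ipaddress

def validate_service_cidr(cidr: str) -> str:
    try:
        # strict parsing: host bits set below the mask raise ValueError
        net = ipaddress.ip_network(cidr)
    except ValueError:
        return "rejected: %s is not a valid network address" % cidr
    return "accepted: %s" % net

print(validate_service_cidr("10.2.0.0/24"))  # accepted: 10.2.0.0/24
print(validate_service_cidr("10.2.0.1/24"))  # rejected: host bits are set
```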
  • Preview Features

    • A bug where Calico pods would not start after a 1.11 to 1.12 upgrade has
      been resolved.
    • When using network policies and Calico, AKS now properly uses Azure CNI for
      all routing instead of defaulting to Calico as the routing plugin.
    • Calico has been updated to v3.5.0
  • Component Updates

Release 2019-04-08 (Hotfix)

10 Apr 17:54
52b9c75

This release fixes one AKS product regression and an issue identified with the Azure Jenkins plugin.

  • A regression when using ARM templates to issue AKS cluster update(s) (such as configuration changes) that also impacted the Azure Portal has been fixed.
    • Users do not need to perform any actions / upgrades for this fix.
  • An issue when using the Azure Container Jenkins plugin with AKS has been mitigated.

Release 2019-04-04 - Hotfix (CVE mitigation)

04 Apr 22:45
4e905c6

Release 2019-03-29 (Hotfix)

31 Mar 13:45
7a995eb
  • The following regions are now GA: South Central US, Korea Central and Korea South

  • Bug fixes

    • Fixed an issue which prevented Kubernetes addons from being disabled.
  • Behavioral Changes

    • AKS will now block subsequent PUT requests (with a status code 409 - Conflict) while an ongoing operation is being performed.