From 8ef426d434bf78989133e9a4661a78aa657259de Mon Sep 17 00:00:00 2001
From: TAKAHASHI Shuuji
Date: Thu, 17 Sep 2020 09:35:49 +0900
Subject: [PATCH 01/50] Copy concepts/storage/storage-capacity.md from en/
directory.
---
.../docs/concepts/storage/storage-capacity.md | 140 ++++++++++++++++++
1 file changed, 140 insertions(+)
create mode 100644 content/ja/docs/concepts/storage/storage-capacity.md
diff --git a/content/ja/docs/concepts/storage/storage-capacity.md b/content/ja/docs/concepts/storage/storage-capacity.md
new file mode 100644
index 0000000000000..836d5d2c36c1f
--- /dev/null
+++ b/content/ja/docs/concepts/storage/storage-capacity.md
@@ -0,0 +1,140 @@
+---
+reviewers:
+- jsafrane
+- saad-ali
+- msau42
+- xing-yang
+- pohly
+title: Storage Capacity
+content_type: concept
+weight: 45
+---
+
+
+
+Storage capacity is limited and may vary depending on the node on
+which a pod runs: network-attached storage might not be accessible
+from all nodes, and some storage is local to a node to begin with.
+
+{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
+
+This page describes how Kubernetes keeps track of storage capacity and
+how the scheduler uses that information to schedule Pods onto nodes
+that have access to enough storage capacity for the volumes that still
+need to be provisioned. Without storage capacity tracking, the
+scheduler may choose a node that doesn't have enough capacity to
+provision a volume, and multiple scheduling retries would be needed.
+
+Tracking storage capacity is supported for {{< glossary_tooltip
+text="Container Storage Interface" term_id="csi" >}} (CSI) drivers and
+[needs to be enabled](#enabling-storage-capacity-tracking) when installing a CSI driver.
+
+
+
+## API
+
+There are two API extensions for this feature:
+- [CSIStorageCapacity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csistoragecapacity-v1alpha1-storage-k8s-io) objects:
+ these get produced by a CSI driver in the namespace
+ where the driver is installed. Each object contains capacity
+ information for one storage class and defines which nodes have
+ access to that storage.
+- [The `CSIDriverSpec.StorageCapacity` field](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csidriverspec-v1-storage-k8s-io):
+ when set to `true`, the Kubernetes scheduler will consider storage
+ capacity for volumes that use the CSI driver.
+
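+The following is a minimal sketch of what a `CSIStorageCapacity` object
+could look like (the object name, namespace, storage class name, and
+topology label are illustrative, not taken from a real driver):
+
+```yaml
+apiVersion: storage.k8s.io/v1alpha1
+kind: CSIStorageCapacity
+metadata:
+  name: example-capacity
+  namespace: example-csi-driver
+storageClassName: example-storage-class
+capacity: 10Gi
+nodeTopology:
+  matchLabels:
+    topology.example.csi/rack: rack-1
+```
+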
+## Scheduling
+
+Storage capacity information is used by the Kubernetes scheduler if:
+- the `CSIStorageCapacity` feature gate is true,
+- a Pod uses a volume that has not been created yet,
+- that volume uses a {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}} which references a CSI driver and
+ uses `WaitForFirstConsumer` [volume binding
+ mode](/docs/concepts/storage/storage-classes/#volume-binding-mode),
+ and
+- the `CSIDriver` object for the driver has `StorageCapacity` set to
+ true.
+
+In that case, the scheduler considers only those nodes for the Pod
+that have enough storage available to them. This check is very
+simplistic and only compares the size of the volume against the
+capacity listed in `CSIStorageCapacity` objects with a topology that
+includes the node.
+
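+For illustration, a StorageClass that references a CSI driver and uses
+`WaitForFirstConsumer` might look like the following sketch (the
+provisioner name is a placeholder for a real CSI driver):
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: example-csi-storage
+provisioner: example.csi.k8s.io
+volumeBindingMode: WaitForFirstConsumer
+```
+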
+For volumes with `Immediate` volume binding mode, the storage driver
+decides where to create the volume, independently of Pods that will
+use the volume. The scheduler then schedules Pods onto nodes where the
+volume is available after the volume has been created.
+
+For [CSI ephemeral volumes](/docs/concepts/storage/volumes/#csi),
+scheduling always happens without considering storage capacity. This
+is based on the assumption that this volume type is only used by
+special CSI drivers which are local to a node and do not need
+significant resources there.
+
+## Rescheduling
+
+When a node has been selected for a Pod with `WaitForFirstConsumer`
+volumes, that decision is still tentative. The next step is to ask the
+CSI storage driver to create the volume, with a hint that the
+volume is supposed to be available on the selected node.
+
+Because Kubernetes might have chosen a node based on outdated
+capacity information, it is possible that the volume cannot really be
+created. The node selection is then reset and the Kubernetes scheduler
+tries again to find a node for the Pod.
+
+## Limitations
+
+Storage capacity tracking increases the chance that scheduling works
+on the first try, but cannot guarantee this because the scheduler has
+to decide based on potentially outdated information. Usually, the
+same retry mechanism as for scheduling without any storage capacity
+information handles scheduling failures.
+
+One situation where scheduling can fail permanently is when a Pod uses
+multiple volumes: one volume might have been created already in a
+topology segment which then does not have enough capacity left for
+another volume. Manual intervention is necessary to recover from this,
+for example by increasing capacity or deleting the volume that was
+already created. [Further
+work](https://github.com/kubernetes/enhancements/pull/1703) is needed
+to handle this automatically.
+
+## Enabling storage capacity tracking
+
+Storage capacity tracking is an *alpha feature* and only enabled when
+the `CSIStorageCapacity` [feature
+gate](/docs/reference/command-line-tools-reference/feature-gates/) and
+the `storage.k8s.io/v1alpha1` {{< glossary_tooltip text="API group" term_id="api-group" >}} are enabled. For details on
+that, see the `--feature-gates` and `--runtime-config` [kube-apiserver
+parameters](/docs/reference/command-line-tools-reference/kube-apiserver/).
+
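+As a sketch (assuming you manage the kube-apiserver command line
+directly; all other flags are omitted here), enabling both might look
+like:
+
+```shell
+kube-apiserver --feature-gates=CSIStorageCapacity=true \
+  --runtime-config=storage.k8s.io/v1alpha1=true
+```
+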
+A quick check
+whether a Kubernetes cluster supports the feature is to list
+CSIStorageCapacity objects with:
+```shell
+kubectl get csistoragecapacities --all-namespaces
+```
+
+If your cluster supports CSIStorageCapacity, the response is either a list of CSIStorageCapacity objects or:
+```
+No resources found
+```
+
+If not supported, this error is printed instead:
+```
+error: the server doesn't have a resource type "csistoragecapacities"
+```
+
+In addition to enabling the feature in the cluster, a CSI driver also
+has to support it. Please refer to the driver's documentation for
+details.
+
+## {{% heading "whatsnext" %}}
+
+- For more information on the design, see the
+  [Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1472-storage-capacity-tracking/README.md).
+- For more information on further development of this feature, see the [enhancement tracking issue #1472](https://github.com/kubernetes/enhancements/issues/1472).
+- Learn about [Kubernetes Scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)
From 181faf6baea4814c2c5b455709ef47bbfb79f3cf Mon Sep 17 00:00:00 2001
From: TAKAHASHI Shuuji
Date: Wed, 16 Sep 2020 01:22:48 +0900
Subject: [PATCH 02/50] Translate concepts/storage/storage-capacity into
Japanese.
---
.../docs/concepts/storage/storage-capacity.md | 158 ++++++------------
1 file changed, 49 insertions(+), 109 deletions(-)
diff --git a/content/ja/docs/concepts/storage/storage-capacity.md b/content/ja/docs/concepts/storage/storage-capacity.md
index 836d5d2c36c1f..9611c59eacead 100644
--- a/content/ja/docs/concepts/storage/storage-capacity.md
+++ b/content/ja/docs/concepts/storage/storage-capacity.md
@@ -1,140 +1,80 @@
---
-reviewers:
-- jsafrane
-- saad-ali
-- msau42
-- xing-yang
-- pohly
-title: Storage Capacity
+title: ストレージ容量
content_type: concept
weight: 45
---
-Storage capacity is limited and may vary depending on the node on
-which a pod runs: network-attached storage might not be accessible by
-all nodes, or storage is local to a node to begin with.
+ストレージ容量は、Podが実行されるノードごとに制限があったり、大きさが異なる可能性があります。たとえば、NASがすべてのノードからはアクセスできなかったり、初めはストレージがノードローカルでしか利用できない可能性があります。
{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
-This page describes how Kubernetes keeps track of storage capacity and
-how the scheduler uses that information to schedule Pods onto nodes
-that have access to enough storage capacity for the remaining missing
-volumes. Without storage capacity tracking, the scheduler may choose a
-node that doesn't have enough capacity to provision a volume and
-multiple scheduling retries will be needed.
+このページでは、Kubernetesがストレージ容量を追跡し続ける方法と、スケジューラーがその情報を利用して、残りの未作成のボリュームのために十分なストレージ容量へアクセスできるノード上にどのようにPodをスケジューリングするかについて説明します。もしストレージ容量の追跡がなければ、スケジューラーは、ボリュームをプロビジョニングするために十分な容量のないノードを選択してしまい、スケジューリングの再試行が複数回行われてしまう恐れがあります。
-Tracking storage capacity is supported for {{< glossary_tooltip
-text="Container Storage Interface" term_id="csi" >}} (CSI) drivers and
-[needs to be enabled](#enabling-storage-capacity-tracking) when installing a CSI driver.
+ストレージ容量の追跡は、{{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}}(CSI)向けにサポートされており、CSIドライバーのインストール時に[有効にする必要があります](#enabling-storage-capacity-tracking)。
## API
-There are two API extensions for this feature:
-- [CSIStorageCapacity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csistoragecapacity-v1alpha1-storage-k8s-io) objects:
- these get produced by a CSI driver in the namespace
- where the driver is installed. Each object contains capacity
- information for one storage class and defines which nodes have
- access to that storage.
-- [The `CSIDriverSpec.StorageCapacity` field](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csidriverspec-v1-storage-k8s-io):
- when set to `true`, the Kubernetes scheduler will consider storage
- capacity for volumes that use the CSI driver.
-
-## Scheduling
-
-Storage capacity information is used by the Kubernetes scheduler if:
-- the `CSIStorageCapacity` feature gate is true,
-- a Pod uses a volume that has not been created yet,
-- that volume uses a {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}} which references a CSI driver and
- uses `WaitForFirstConsumer` [volume binding
- mode](/docs/concepts/storage/storage-classes/#volume-binding-mode),
- and
-- the `CSIDriver` object for the driver has `StorageCapacity` set to
- true.
-
-In that case, the scheduler only considers nodes for the Pod which
-have enough storage available to them. This check is very
-simplistic and only compares the size of the volume against the
-capacity listed in `CSIStorageCapacity` objects with a topology that
-includes the node.
-
-For volumes with `Immediate` volume binding mode, the storage driver
-decides where to create the volume, independently of Pods that will
-use the volume. The scheduler then schedules Pods onto nodes where the
-volume is available after the volume has been created.
-
-For [CSI ephemeral volumes](/docs/concepts/storage/volumes/#csi),
-scheduling always happens without considering storage capacity. This
-is based on the assumption that this volume type is only used by
-special CSI drivers which are local to a node and do not need
-significant resources there.
-
-## Rescheduling
-
-When a node has been selected for a Pod with `WaitForFirstConsumer`
-volumes, that decision is still tentative. The next step is that the
-CSI storage driver gets asked to create the volume with a hint that the
-volume is supposed to be available on the selected node.
-
-Because Kubernetes might have chosen a node based on out-dated
-capacity information, it is possible that the volume cannot really be
-created. The node selection is then reset and the Kubernetes scheduler
-tries again to find a node for the Pod.
-
-## Limitations
-
-Storage capacity tracking increases the chance that scheduling works
-on the first try, but cannot guarantee this because the scheduler has
-to decide based on potentially out-dated information. Usually, the
-same retry mechanism as for scheduling without any storage capacity
-information handles scheduling failures.
-
-One situation where scheduling can fail permanently is when a Pod uses
-multiple volumes: one volume might have been created already in a
-topology segment which then does not have enough capacity left for
-another volume. Manual intervention is necessary to recover from this,
-for example by increasing capacity or deleting the volume that was
-already created. [Further
-work](https://github.com/kubernetes/enhancements/pull/1703) is needed
-to handle this automatically.
-
-## Enabling storage capacity tracking
-
-Storage capacity tracking is an *alpha feature* and only enabled when
-the `CSIStorageCapacity` [feature
-gate](/docs/reference/command-line-tools-reference/feature-gates/) and
-the `storage.k8s.io/v1alpha1` {{< glossary_tooltip text="API group" term_id="api-group" >}} are enabled. For details on
-that, see the `--feature-gates` and `--runtime-config` [kube-apiserver
-parameters](/docs/reference/command-line-tools-reference/kube-apiserver/).
-
-A quick check
-whether a Kubernetes cluster supports the feature is to list
-CSIStorageCapacity objects with:
+この機能には、以下の2つのAPI拡張があります。
+
+- [CSIStorageCapacity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csistoragecapacity-v1alpha1-storage-k8s-io)オブジェクト: このオブジェクトは、CSIドライバーがインストールされた名前空間に生成されます。各オブジェクトには1つのストレージクラスに対する容量の情報が含まれ、そのストレージに対してどのノードがアクセス権を持つかが定められています。
+
+- [`CSIDriverSpec.StorageCapacity`フィールド](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csidriverspec-v1-storage-k8s-io): `true`に設定すると、Kubernetesのスケジューラーが、CSIドライバーを使用するボリュームに対してストレージ容量を考慮するようになります。
+
+## スケジューリング
+
+ストレージ容量の情報がKubernetesのスケジューラーで利用されるのは、以下のすべての条件を満たす場合です。
+
+- `CSIStorageCapacity`フィーチャーゲートがtrueである
+- Podがまだ作成されていないボリュームを使用している
+- そのボリュームが、CSIドライバーを参照し、[volume binding mode](/docs/concepts/storage/storage-classes/#volume-binding-mode)に`WaitForFirstConsumer`を使う{{< glossary_tooltip text="StorageClass" term_id="storage-class" >}}を使用している
+- ドライバーの`CSIDriver`オブジェクトで、`StorageCapacity`がtrueに設定されている
+
+その場合、スケジューラーはPodに対して、十分なストレージが利用できるノードだけを考慮するようになります。このチェックは非常に単純で、ボリュームのサイズと、`CSIStorageCapacity`オブジェクトに一覧された容量を、ノードを含むトポロジで比較するだけです。
+
+volume binding modeが`Immediate`のボリュームの場合、ボリュームを使用するPodとは独立に、ストレージドライバーがボリュームの作成場所を決定します。次に、スケジューラーはボリュームが作成された後、Podをボリュームが利用できるノードにスケジューリングします。
+
+[CSI ephemeral volumes](/docs/concepts/storage/volumes/#csi)の場合、スケジューリングは常にストレージ容量を考慮せずに行われます。このような動作になっているのは、このボリュームタイプはノードローカルな特別なCSIドライバーでのみ使用され、そこでは特に大きなリソースが必要になることはない、という想定に基づいています。
+
+## 再スケジューリング
+
+`WaitForFirstConsumer`ボリュームがあるPodに対してノードが選択された場合は、その決定はまだ一時的なものです。次のステップで、CSIストレージドライバーに対して、選択されたノード上でボリュームが利用可能になる予定であるというヒント付きでボリュームの作成を要求します。
+
+Kubernetesは古い容量の情報をもとにノードを選択する場合があるため、実際にはボリュームが作成できないという可能性が存在します。その場合、ノードの選択がリセットされ、KubernetesスケジューラーはPodに割り当てるノードを再び探します。
+
+## 制限
+
+ストレージ容量を追跡することで、1回目の試行でスケジューリングが成功する可能性が高くなります。しかし、スケジューラーは潜在的に古い情報に基づいて決定を行う可能性があるため、成功を保証することはできません。通常、ストレージ容量の情報が存在しないスケジューリングと同様のリトライの仕組みによって、スケジューリングの失敗に対処します。
+
+スケジューリングが永続的に失敗する状況の1つは、Podが複数のボリュームを使用する場合で、あるトポロジーのセグメントで1つのボリュームがすでに作成された後、もう1つのボリュームのために十分な容量が残っていないような場合です。この状況から回復するには、たとえば、容量を増加させたり、すでに作成されたボリュームを削除するなどの手動での介入が必要です。この問題に自動的に対処するためには、まだ[追加の作業](https://github.com/kubernetes/enhancements/pull/1703)が必要となっています。
+
+## ストレージ容量の追跡を有効にする {#enabling-storage-capacity-tracking}
+
+ストレージ容量の追跡は*アルファ機能*であり、`CSIStorageCapacity`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)と`storage.k8s.io/v1alpha1` {{< glossary_tooltip text="API group" term_id="api-group" >}}を有効にした場合にのみ、有効化されます。詳細については、`--feature-gates`および`--runtime-config` [kube-apiserverパラメータ](/docs/reference/command-line-tools-reference/kube-apiserver/)を参照してください。
+
+Kubernetesクラスターがこの機能をサポートしているか簡単に確認するには、以下のコマンドを実行して、CSIStorageCapacityオブジェクトを一覧表示します。
+
```shell
kubectl get csistoragecapacities --all-namespaces
```
-If your cluster supports CSIStorageCapacity, the response is either a list of CSIStorageCapacity objects or:
+クラスターがCSIStorageCapacityをサポートしていれば、CSIStorageCapacityのリストが表示されるか、次のメッセージが表示されます。
```
No resources found
```
-If not supported, this error is printed instead:
+もしサポートされていなければ、代わりに次のエラーが表示されます。
+
```
error: the server doesn't have a resource type "csistoragecapacities"
```
-In addition to enabling the feature in the cluster, a CSI
-driver also has to
-support it. Please refer to the driver's documentation for
-details.
+クラスター内で機能を有効化することに加えて、CSIドライバーもこの機能をサポートしている必要があります。詳細については、各ドライバーのドキュメントを参照してください。
## {{% heading "whatsnext" %}}
- - For more information on the design, see the
-[Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1472-storage-capacity-tracking/README.md).
-- For more information on further development of this feature, see the [enhancement tracking issue #1472](https://github.com/kubernetes/enhancements/issues/1472).
-- Learn about [Kubernetes Scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)
+- 設計に関するさらなる情報について知るために、[Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1472-storage-capacity-tracking/README.md)を読む。
+- この機能の今後の開発に関する情報について知るために、[enhancement tracking issue #1472](https://github.com/kubernetes/enhancements/issues/1472)を参照する。
+- [Kubernetesのスケジューラー](/ja/docs/concepts/scheduling-eviction/kube-scheduler/)についてもっと学ぶ。
From 58a640ffd4fb0ef344a28da998e301cd57819d19 Mon Sep 17 00:00:00 2001
From: Kenichi Omichi
Date: Fri, 9 Oct 2020 00:31:42 +0000
Subject: [PATCH 03/50] ja: Removing references of `kubectl rolling-update`
This applies commit be6c0c3a2180f2a6da3bc615b1dcd78dea7c3ba1 to the
ja content. The main motivation is that the [rolling update] link
currently returns NotFound, so it is better to steer readers away
from that page.
---
.../ja/docs/concepts/workloads/controllers/deployment.md | 7 +------
content/ja/docs/reference/kubectl/cheatsheet.md | 7 -------
content/ja/docs/reference/kubectl/overview.md | 1 -
3 files changed, 1 insertion(+), 14 deletions(-)
diff --git a/content/ja/docs/concepts/workloads/controllers/deployment.md b/content/ja/docs/concepts/workloads/controllers/deployment.md
index f3b210ff4a541..6b4fe792649d4 100644
--- a/content/ja/docs/concepts/workloads/controllers/deployment.md
+++ b/content/ja/docs/concepts/workloads/controllers/deployment.md
@@ -969,7 +969,7 @@ Deploymentのセレクターに一致するラベルを持つPodを直接作成
#### Deploymentのローリングアップデート
-`.spec.strategy.type==RollingUpdate`と指定されているとき、Deploymentは[ローリングアップデート](/docs/tasks/run-application/rolling-update-replication-controller/)によりPodを更新します。ローリングアップデートの処理をコントロールするために`maxUnavailable`と`maxSurge`を指定できます。
+`.spec.strategy.type==RollingUpdate`と指定されているとき、DeploymentはローリングアップデートによりPodを更新します。ローリングアップデートの処理をコントロールするために`maxUnavailable`と`maxSurge`を指定できます。
##### maxUnavailable
@@ -1008,8 +1008,3 @@ Deploymentのリビジョン履歴は、Deploymentが管理するReplicaSetに
### paused
`.spec.paused`はオプションのboolean値で、Deploymentの一時停止と再開のための値です。一時停止されているものと、そうでないものとの違いは、一時停止されているDeploymentはPodTemplateSpecのいかなる変更があってもロールアウトがトリガーされないことです。デフォルトではDeploymentは一時停止していない状態で作成されます。
-
-## Deploymentの代替案
-### kubectl rolling-update
-
-[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update)によって、同様の形式でPodとReplicationControllerを更新できます。しかしDeploymentの使用が推奨されます。なぜならDeploymentの作成は宣言的であり、ローリングアップデートが更新された後に過去のリビジョンにロールバックできるなど、いくつかの追加機能があるためです。
diff --git a/content/ja/docs/reference/kubectl/cheatsheet.md b/content/ja/docs/reference/kubectl/cheatsheet.md
index d63654bd38299..1ebe1deea6c15 100644
--- a/content/ja/docs/reference/kubectl/cheatsheet.md
+++ b/content/ja/docs/reference/kubectl/cheatsheet.md
@@ -208,8 +208,6 @@ kubectl diff -f ./my-manifest.yaml
## リソースのアップデート
-version 1.11で`rolling-update`は廃止されました、代わりに`rollout`コマンドをお使いください(詳しくはこちらをご覧ください [CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md))。
-
```bash
kubectl set image deployment/frontend www=image:v2 # frontend Deploymentのwwwコンテナイメージをv2にローリングアップデートします
kubectl rollout history deployment/frontend # frontend Deploymentの改訂履歴を確認します
@@ -219,11 +217,6 @@ kubectl rollout status -w deployment/frontend # frontend Depl
kubectl rollout restart deployment/frontend # frontend Deployment を再起動します
-# これらのコマンドは1.11から廃止されました
-kubectl rolling-update frontend-v1 -f frontend-v2.json # (廃止) frontend-v1 Podをローリングアップデートします
-kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 # (廃止) リソース名とイメージを変更します
-kubectl rolling-update frontend --image=image:v2 # (廃止) frontendのイメージを変更します
-kubectl rolling-update frontend-v1 frontend-v2 --rollback # (廃止) 現在実行中のローリングアップデートを中止します
cat pod.json | kubectl replace -f - # 標準入力から渡されたJSONに基づいてPodを置き換えます
# リソースを強制的に削除してから再生成し、置き換えます。サービスの停止が発生します
diff --git a/content/ja/docs/reference/kubectl/overview.md b/content/ja/docs/reference/kubectl/overview.md
index dc018cd40c40f..c45e055b41e62 100644
--- a/content/ja/docs/reference/kubectl/overview.md
+++ b/content/ja/docs/reference/kubectl/overview.md
@@ -88,7 +88,6 @@ kubectl [command] [TYPE] [NAME] [flags]
`port-forward` | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | 1つ以上のローカルポートを、Podに転送します。
`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Kubernetes APIサーバーへのプロキシーを実行します。
`replace` | `kubectl replace -f FILENAME` | ファイルや標準出力から、リソースを置き換えます。
-`rolling-update` | kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags] | 指定されたReplicationControllerとそのPodを徐々に置き換えることで、ローリングアップデートを実行します。
`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=server|client|none] [--overrides=inline-json] [flags]` | 指定したイメージを、クラスタ上で実行します。
`scale` | kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags] | 指定したReplicationControllerのサイズを更新します。
`version` | `kubectl version [--client] [flags]` | クライアントとサーバーで実行中のKubernetesのバージョンを表示します。
From 791d7ad969dda686477ed7ce7bacb7bd6a734740 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Mon, 12 Oct 2020 19:01:32 +0100
Subject: [PATCH 04/50] Add object to glossary
---
content/en/docs/reference/glossary/object.md | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
create mode 100755 content/en/docs/reference/glossary/object.md
diff --git a/content/en/docs/reference/glossary/object.md b/content/en/docs/reference/glossary/object.md
new file mode 100755
index 0000000000000..1e3a6165fd0b4
--- /dev/null
+++ b/content/en/docs/reference/glossary/object.md
@@ -0,0 +1,19 @@
+---
+title: Object
+id: object
+date: 2020-10-12
+full_link: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects
+short_description: >
+ An entity in the Kubernetes system, representing part of the state of your cluster.
+aka:
+tags:
+- fundamental
+---
+An entity in the Kubernetes system. The Kubernetes API uses these entities to represent the state
+of your cluster.
+
+A Kubernetes object is typically a “record of intent”—once you create the object, the Kubernetes
+{{< glossary_tooltip text="control plane" term_id="control-plane" >}} works constantly to ensure
+that the item it represents actually exists.
+By creating an object, you're effectively telling the Kubernetes system what you want that part of
+your cluster's workload to look like; this is your cluster's desired state.
From 36a42bf4a1ce4b8fed6c293dd3d45cc8431d5362 Mon Sep 17 00:00:00 2001
From: Laurence Man
Date: Mon, 26 Oct 2020 13:14:01 -0700
Subject: [PATCH 05/50] Update Calico description on 'Cluster Networking' page
---
.../concepts/cluster-administration/networking.md | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md
index 3b692fb448ab3..872a8167ae6ef 100644
--- a/content/en/docs/concepts/cluster-administration/networking.md
+++ b/content/en/docs/concepts/cluster-administration/networking.md
@@ -124,6 +124,12 @@ With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, c
BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](https://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
+### Calico
+
+[Calico](https://docs.projectcalico.org/) is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple data planes including: a state-of-the-art pure Linux eBPF dataplane, a standard Linux networking dataplane, and a Windows HNS dataplane. Whether you prefer cutting edge eBPF, or the familiarity of the standard primitives that existing system administrators already know, you’ll get the same, easy to use, base networking, network policy and IP address management capabilities, that have made Calico the most trusted networking and network policy solution for mission-critical cloud-native applications.
+
+Calico supports a broad range of platforms including Kubernetes, OpenShift, Docker EE, OpenStack, and bare metal services. The largest public cloud providers have selected Calico to provide network security for their hosted Kubernetes services (Amazon EKS, Azure AKS, Google GKE, and IBM IKS) running across tens of thousands of clusters.
+
### Cilium
[Cilium](https://github.com/cilium/cilium) is open source software for
@@ -291,14 +297,6 @@ stateful ACLs, load-balancers etc to build different virtual networking
topologies. The project has a specific Kubernetes plugin and documentation
at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).
-### Project Calico
-
-[Project Calico](https://docs.projectcalico.org/) is an open source container networking provider and network policy engine.
-
-Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet, for both Linux (open source) and Windows (proprietary - available from [Tigera](https://www.tigera.io/essentials/)). Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall.
-
-Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka [canal](https://github.com/tigera/canal), or native GCE, AWS or Azure networking.
-
### Romana
[Romana](https://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces.
From 9a6731fc8b65790f0a2533d4321c46766ccc3ebd Mon Sep 17 00:00:00 2001
From: Nate W
Date: Mon, 26 Oct 2020 12:03:04 -0700
Subject: [PATCH 06/50] Copy updates
1. Copy updates to correct "container image manifests" reference to "container image index." (Fixes https://github.com/kubernetes/website/issues/23055)
2. Setting titles to sentence case throughout.
Signed-off-by: Nate W
---
content/en/docs/concepts/containers/images.md | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md
index e681f1a351c40..be25fd4e147e8 100644
--- a/content/en/docs/concepts/containers/images.md
+++ b/content/en/docs/concepts/containers/images.md
@@ -47,7 +47,7 @@ to roll back to a working version.
Instead, specify a meaningful tag such as `v1.42.0`.
{{< /caution >}}
-## Updating Images
+## Updating images
The default pull policy is `IfNotPresent` which causes the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip
@@ -61,13 +61,13 @@ you can do one of the following:
When `imagePullPolicy` is defined without a specific value, it is also set to `Always`.
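
For instance, a Pod that always pulls its image on container start
might look like this sketch (the image name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: always-pull-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:v1.42.0
    imagePullPolicy: Always
```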
-## Multi-architecture Images with Manifests
+## Multi-architecture images with image indexes
-As well as providing binary images, a container registry can also serve a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
+As well as providing binary images, a container registry can also serve a [container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md). An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md) for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate say `pause` image which has the manifest for all the arch(es) and say `pause-amd64` which is backwards compatible for older configurations or YAML files which may have hard coded the images with suffixes.
-## Using a Private Registry
+## Using a private registry
Private registries may require keys to read images from them.
Credentials can be provided in several ways:
@@ -86,7 +86,7 @@ Credentials can be provided in several ways:
These options are explained in more detail below.
-### Configuring Nodes to authenticate to a Private Registry
+### Configuring nodes to authenticate to a private registry
If you run Docker on your nodes, you can configure the Docker container
runtime to authenticate to a private container registry.
@@ -178,7 +178,7 @@ template needs to include the `.docker/config.json` or mount a drive that contai
All pods will have read access to images in any private registry once private
registry keys are added to the `.docker/config.json`.
-### Pre-pulled Images
+### Pre-pulled images
{{< note >}}
This approach is suitable if you can control node configuration. It
@@ -197,7 +197,7 @@ This can be used to preload certain images for speed or as an alternative to aut
All pods will have read access to any pre-pulled images.
-### Specifying ImagePullSecrets on a Pod
+### Specifying imagePullSecrets on a Pod
{{< note >}}
This is the recommended approach to run containers based on images
@@ -206,7 +206,7 @@ in private registries.
Kubernetes supports specifying container image registry keys on a Pod.
-#### Creating a Secret with a Docker Config
+#### Creating a Secret with a Docker config
Run the following command, substituting the appropriate uppercase values:
@@ -266,7 +266,7 @@ Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-cont
You can use this in conjunction with a per-node `.docker/config.json`. The credentials
will be merged.
-## Use Cases
+## Use cases
There are a number of solutions for configuring private registries. Here are some
common use cases and suggested solutions.
From dcf5fd415dbebf7fc82ee02a4f261f23793f44d2 Mon Sep 17 00:00:00 2001
From: GoodGameZoo
Date: Tue, 27 Oct 2020 00:17:32 -0700
Subject: [PATCH 07/50] Update links in page docker-cli-to-kubectl.md
---
.../kubectl/docker-cli-to-kubectl.md | 147 +++++++++---------
1 file changed, 73 insertions(+), 74 deletions(-)
diff --git a/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md
index c6b4d1a38d4c8..a89f43c32c347 100644
--- a/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md
+++ b/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md
@@ -5,19 +5,19 @@ reviewers:
- brendandburns
- thockin
---
-
-
您可以使用 Kubernetes 命令行工具 kubectl 与 API 服务器进行交互。如果您熟悉 Docker 命令行工具,则使用 kubectl 非常简单。但是,docker 命令和 kubectl 命令之间有一些区别。以下显示了 docker 子命令,并描述了等效的 kubectl 命令。
@@ -48,11 +48,11 @@ CONTAINER ID IMAGE COMMAND CREATED
kubectl:
-
```shell
# 启动运行 nginx 的 Pod
@@ -77,11 +77,11 @@ deployment.apps/nginx-app env updated
`kubectl` 命令打印创建或突变资源的类型和名称,然后可以在后续命令中使用。部署后,您可以公开新服务。
{{< /note >}}
-
```shell
# 通过服务公开端口
@@ -91,10 +91,10 @@ kubectl expose deployment nginx-app --port=80 --name=nginx-http
service "nginx-http" exposed
```
-
-在 kubectl 命令中,我们创建了一个 [Deployment](/docs/concepts/workloads/controllers/deployment/),这将保证有 N 个运行 nginx 的 pod(N 代表 spec 中声明的 replica 数,默认为 1)。我们还创建了一个 [service](/docs/concepts/services-networking/service/),其选择器与容器标签匹配。查看[使用服务访问群集中的应用程序](/docs/tasks/access-application-cluster/service-access-application-cluster) 获取更多信息。
+在 kubectl 命令中,我们创建了一个 [Deployment](/zh/docs/concepts/workloads/controllers/deployment/),这将保证有 N 个运行 nginx 的 pod(N 代表 spec 中声明的 replica 数,默认为 1)。我们还创建了一个 [service](/zh/docs/concepts/services-networking/service/),其选择器与容器标签匹配。查看[使用服务访问群集中的应用程序](/zh/docs/tasks/access-application-cluster/service-access-application-cluster) 获取更多信息。
与 `docker run ...` 不同的是,如果指定了 `--attach` ,我们将连接到 `stdin`,`stdout` 和 `stderr`,而不能控制具体连接到哪个输出流(`docker -a ...`)。要从容器中退出,可以输入 Ctrl + P,然后按 Ctrl + Q。
-
因为我们使用 Deployment 启动了容器,如果您终止连接到的进程(例如 `ctrl-c`),容器将会重启,这跟 `docker run -it` 不同。
如果想销毁该 Deployment(和它的 pod),您需要运行 `kubectl delete deployment `。
## docker ps
-
如何列出哪些正在运行?查看 [kubectl get](/docs/reference/generated/kubectl/kubectl-commands/#get)。
-
使用 docker 命令:
@@ -139,8 +139,8 @@ CONTAINER ID IMAGE COMMAND CREATED
55c103fa1296 nginx "nginx -g 'daemon of…" About a minute ago Up About a minute 0.0.0.0:80->80/tcp nginx-app
```
-
使用 kubectl 命令:
@@ -155,13 +155,13 @@ ubuntu 0/1 Completed 0 20s
## docker attach
-
如何连接到已经运行在容器中的进程?查看 [kubectl attach](/docs/reference/generated/kubectl/kubectl-commands/#attach)。
-
使用 docker 命令:
@@ -193,20 +193,20 @@ kubectl attach -it nginx-app-5jyvm
...
```
-
要从容器中分离,可以输入 Ctrl + P,然后按 Ctrl + Q。
## docker exec
-
如何在容器中执行命令?查看 [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec)。
-
使用 docker 命令:
@@ -224,8 +224,8 @@ docker exec 55c103fa1296 cat /etc/hostname
55c103fa1296
```
-
使用 kubectl 命令:
@@ -244,13 +244,13 @@ kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm
```
-
执行交互式命令怎么办?
-
使用 docker 命令:
@@ -266,21 +266,21 @@ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
# exit
```
-
-更多信息请查看[获取运行中容器的 Shell 环境](/docs/tasks/kubectl/get-shell-running-container/)。
+更多信息请查看[获取运行中容器的 Shell 环境](/zh/docs/tasks/debug-application-cluster/get-shell-running-container/)。
## docker logs
-
如何查看运行中进程的 stdout/stderr?查看 [kubectl logs](/docs/reference/generated/kubectl/kubectl-commands/#logs)。
-
使用 docker 命令:
@@ -292,8 +292,8 @@ docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
```
-
使用 kubectl 命令:
@@ -305,8 +305,8 @@ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
-
现在是时候提一下 pod 和容器之间的细微差别了;默认情况下如果 pod 中的进程退出 pod 也不会终止,相反它将会重启该进程。这类似于 docker run 时的 `--restart=always` 选项, 这是主要差别。在 docker 中,进程的每个调用的输出都是被连接起来的,但是对于 kubernetes,每个调用都是分开的。要查看以前在 kubernetes 中执行的输出,请执行以下操作:
@@ -318,20 +318,20 @@ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
-
-查看[日志架构](/docs/concepts/cluster-administration/logging/)获取更多信息。
+查看[日志架构](/zh/docs/concepts/cluster-administration/logging/)获取更多信息。
## docker stop and docker rm
-
如何停止和删除运行中的进程?查看 [kubectl delete](/docs/reference/generated/kubectl/kubectl-commands/#delete)。
-
使用 docker 命令:
@@ -357,8 +357,8 @@ docker rm a9ec34d98787
a9ec34d98787
```
-
使用 kubectl 命令:
@@ -390,28 +390,28 @@ kubectl get po -l run=nginx-app
```
{{< note >}}
-
请注意,我们不直接删除 pod。使用 kubectl 命令,我们要删除拥有该 pod 的 Deployment。如果我们直接删除 pod,Deployment 将会重新创建该 pod。
{{< /note >}}
## docker login
-
-在 kubectl 中没有对 `docker login` 的直接模拟。如果您有兴趣在私有镜像仓库中使用 Kubernetes,请参阅[使用私有镜像仓库](/docs/concepts/containers/images/#using-a-private-registry)。
+在 kubectl 中没有对 `docker login` 的直接模拟。如果您有兴趣在私有镜像仓库中使用 Kubernetes,请参阅[使用私有镜像仓库](/zh/docs/concepts/containers/images/#using-a-private-registry)。
## docker version
-
如何查看客户端和服务端的版本?查看 [kubectl version](/docs/reference/generated/kubectl/kubectl-commands/#version)。
-
使用 docker 命令:
@@ -431,8 +431,8 @@ Git commit (server): 0baf609
OS/Arch (server): linux/amd64
```
-
使用 kubectl 命令:
@@ -446,13 +446,13 @@ Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4
## docker info
-
如何获取有关环境和配置的各种信息?查看 [kubectl cluster-info](/docs/reference/generated/kubectl/kubectl-commands/#cluster-info)。
-
使用 docker 命令:
@@ -478,8 +478,8 @@ ID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO
WARNING: No swap limit support
```
-
使用 kubectl 命令:
@@ -494,4 +494,3 @@ Grafana is running at https://108.59.85.141/api/v1/namespaces/kube-system/servic
Heapster is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy
InfluxDB is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
```
-
From 5be352ab1b433d6b4c01d29acacce07412d3694e Mon Sep 17 00:00:00 2001
From: Zhang Yong
Date: Tue, 27 Oct 2020 16:24:24 +0800
Subject: [PATCH 08/50] fix Dockerfile link
---
.../windows/intro-windows-in-kubernetes.md | 2 +-
.../production-environment/windows/user-guide-windows-nodes.md | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index c821fca4254d3..676f7f8a48f55 100644
--- a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -539,7 +539,7 @@ Kubernetesクラスターのトラブルシューティングの主なヘルプ
Kubernetes Podでは、インフラストラクチャまたは「pause」コンテナが最初に作成され、コンテナエンドポイントをホストします。インフラストラクチャやワーカーコンテナなど、同じPodに属するコンテナは、共通のネットワークネームスペースとエンドポイント(同じIPとポートスペース)を共有します。Pauseコンテナは、ネットワーク構成を失うことなくクラッシュまたは再起動するワーカーコンテナに対応するために必要です。
- 「pause」(インフラストラクチャ)イメージは、Microsoft Container Registry(MCR)でホストされています。`docker pull mcr.microsoft.com/k8s/core/pause:1.2.0`を使用してアクセスできます。詳細については、[DOCKERFILE](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/cmd/wincat)をご覧ください。
+ 「pause」(インフラストラクチャ)イメージは、Microsoft Container Registry(MCR)でホストされています。`docker pull mcr.microsoft.com/k8s/core/pause:1.2.0`を使用してアクセスできます。詳細については、[DOCKERFILE](https://github.com/kubernetes-sigs/windows-testing/blob/master/images/pause/Dockerfile)をご覧ください。
### さらなる調査
diff --git a/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md b/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md
index 6edce770c12cb..9f54861a945fa 100644
--- a/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md
+++ b/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md
@@ -217,7 +217,7 @@ All code snippets in Windows sections are to be run in a PowerShell environment
```
{{< note >}}
- The "pause" (infrastructure) image is hosted on Microsoft Container Registry (MCR). You can access it using "docker pull mcr.microsoft.com/k8s/core/pause:1.2.0". The DOCKERFILE is available at https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/cmd/wincat.
+ The "pause" (infrastructure) image is hosted on Microsoft Container Registry (MCR). You can access it using "docker pull mcr.microsoft.com/k8s/core/pause:1.2.0". The DOCKERFILE is available at https://github.com/kubernetes-sigs/windows-testing/blob/master/images/pause/Dockerfile.
{{< /note >}}
1. Prepare a Windows directory for Kubernetes
From 8f352ff829bd64af4984087f6988cce36fa8240c Mon Sep 17 00:00:00 2001
From: clearbjli
Date: Tue, 27 Oct 2020 13:11:00 +0100
Subject: [PATCH 09/50] Update container-runtimes.md
The `--keyring` option to apt-key must come before the `add` command; otherwise this step fails (verified on Ubuntu 20.04.1 LTS).
---
.../en/docs/setup/production-environment/container-runtimes.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index ae67726b8ef88..ffc5d122931c1 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -102,7 +102,7 @@ sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificat
```shell
## Add Docker's official GPG key
-curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add --keyring /etc/apt/trusted.gpg.d/docker.gpg -
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
```
```shell
From 811eded1c099b5fff6937b9b87c6cf6617e5783f Mon Sep 17 00:00:00 2001
From: yaowenqiang
Date: Wed, 28 Oct 2020 13:37:12 +0800
Subject: [PATCH 10/50] fix wrong anchor link
---
.../docs/tasks/access-application-cluster/access-cluster.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/zh/docs/tasks/access-application-cluster/access-cluster.md b/content/zh/docs/tasks/access-application-cluster/access-cluster.md
index 26b6dca83408c..7708022f31bf6 100644
--- a/content/zh/docs/tasks/access-application-cluster/access-cluster.md
+++ b/content/zh/docs/tasks/access-application-cluster/access-cluster.md
@@ -371,7 +371,7 @@ You have several options for connecting to nodes, pods and services from outside
如果服务不能够安全地暴露到互联网,或者服务不能获得节点 IP 端口的访问权限,或者是为了 debug,那么请使用此选项。
- 代理可能会给一些 web 应用带来问题。
- 只适用于 HTTP/HTTPS。
- - 更多详细信息在 [这里]。
+ - 更多详细信息在 [这里](#manually-constructing-apiserver-proxy-urls)。
- 从集群中的 node 或者 pod 中访问。
- 运行一个 pod,然后使用 [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec) 来连接 pod 里的 shell。
然后从 shell 中连接其它的节点、pod 和服务。
@@ -432,7 +432,7 @@ The supported formats for the name segment of the URL are:
* `https::` - proxies to the default or unnamed port using https (note the trailing colon)
* `https::` - proxies to the specified port using https
-->
-#### 手动构建 apiserver 代理 URL
+#### 手动构建 apiserver 代理 URL {#manually-constructing-apiserver-proxy-urls}
如上所述,您可以使用 `kubectl cluster-info` 命令来获得服务的代理 URL。要创建包含服务端点、后缀和参数的代理 URL,只需添加到服务的代理 URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
From b1be5bde9e6fceb8a2e844d738b1103ad920b53a Mon Sep 17 00:00:00 2001
From: paikwiki
Date: Wed, 28 Oct 2020 18:00:33 +0900
Subject: [PATCH 11/50] Translate a word, "and"
The word "and" appears untranslated among the operators, which is confusing. So this change translates it into Korean.
---
.../ko/docs/concepts/overview/working-with-objects/labels.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ko/docs/concepts/overview/working-with-objects/labels.md b/content/ko/docs/concepts/overview/working-with-objects/labels.md
index a007375673802..a6f03a907fcc4 100644
--- a/content/ko/docs/concepts/overview/working-with-objects/labels.md
+++ b/content/ko/docs/concepts/overview/working-with-objects/labels.md
@@ -129,7 +129,7 @@ spec:
### _집합성 기준_ 요건
-_집합성 기준_ 레이블 요건에 따라 값 집합을 키로 필터링할 수 있다. `in`,`notin` and `exists`(키 식별자만 해당)의 3개의 연산자를 지원한다. 예를 들면,
+_집합성 기준_ 레이블 요건에 따라 값 집합을 키로 필터링할 수 있다. `in`,`notin`과 `exists`(키 식별자만 해당)의 3개의 연산자를 지원한다. 예를 들면,
```
environment in (production, qa)
From 7f94c0e4a48effe32e6c422d00c8bc4c17fe0c07 Mon Sep 17 00:00:00 2001
From: Laurence Man
Date: Wed, 28 Oct 2020 12:00:23 -0700
Subject: [PATCH 12/50] Update Calico description based on feedback
---
content/en/docs/concepts/cluster-administration/networking.md | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md
index 872a8167ae6ef..a124d788fd36c 100644
--- a/content/en/docs/concepts/cluster-administration/networking.md
+++ b/content/en/docs/concepts/cluster-administration/networking.md
@@ -126,9 +126,7 @@ BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http
### Calico
-[Calico](https://docs.projectcalico.org/) is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple data planes including: a state-of-the-art pure Linux eBPF dataplane, a standard Linux networking dataplane, and a Windows HNS dataplane. Whether you prefer cutting edge eBPF, or the familiarity of the standard primitives that existing system administrators already know, you’ll get the same, easy to use, base networking, network policy and IP address management capabilities, that have made Calico the most trusted networking and network policy solution for mission-critical cloud-native applications.
-
-Calico supports a broad range of platforms including Kubernetes, OpenShift, Docker EE, OpenStack, and bare metal services. The largest public cloud providers have selected Calico to provide network security for their hosted Kubernetes services (Amazon EKS, Azure AKS, Google GKE, and IBM IKS) running across tens of thousands of clusters.
+[Calico](https://docs.projectcalico.org/) is an open source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports multiple data planes including: a pure Linux eBPF dataplane, a standard Linux networking dataplane, and a Windows HNS dataplane. Calico provides a full networking stack but can also be used in conjunction with [cloud provider CNIs](https://docs.projectcalico.org/networking/determine-best-networking#calico-compatible-cni-plugins-and-cloud-provider-integrations) to provide network policy enforcement.
### Cilium
From ab5877570252b3dffd86c810ec47bd04aa901107 Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Fri, 11 Sep 2020 19:48:37 +0800
Subject: [PATCH 13/50] Add secret type documentation
---
.../en/docs/concepts/configuration/secret.md | 373 ++++++++++++++++--
1 file changed, 351 insertions(+), 22 deletions(-)
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index 8db4cfa43a562..572072dd3e2d1 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -15,50 +15,379 @@ weight: 30
Kubernetes Secrets let you store and manage sensitive information, such
as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret
is safer and more flexible than putting it verbatim in a
-{{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
-
+{{< glossary_tooltip term_id="pod" >}} definition or in a
+{{< glossary_tooltip text="container image" term_id="image" >}}.
+See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
+A Secret is an object that contains a small amount of sensitive data such as
+a password, a token, or a key. Such information might otherwise be put in a
+Pod specification or in an image. Users can create Secrets and the system
+also creates some Secrets.
## Overview of Secrets
-A Secret is an object that contains a small amount of sensitive data such as
-a password, a token, or a key. Such information might otherwise be put in a
-Pod specification or in an image. Users can create secrets and the system
-also creates some secrets.
-
-To use a secret, a Pod needs to reference the secret.
-A secret can be used with a Pod in three ways:
+To use a Secret, a Pod needs to reference the Secret.
+A Secret can be used with a Pod in three ways:
- As [files](#using-secrets-as-files-from-a-pod) in a
-{{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of
-its containers.
+ {{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of
+ its containers.
- As [container environment variable](#using-secrets-as-environment-variables).
- By the [kubelet when pulling images](#using-imagepullsecrets) for the Pod.
The name of a Secret object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+You can specify the `data` and/or the `stringData` field when creating a
+configuration file for a Secret. The `data` and the `stringData` fields are optional.
+The values for all keys in the `data` field have to be base64-encoded strings.
+If the conversion to a base64 string is not desirable, you can choose to specify
+the `stringData` field instead, which accepts arbitrary strings as values.
The keys of `data` and `stringData` must consist of alphanumeric characters,
-`-`, `_` or `.`.
+`-`, `_` or `.`. All key-value pairs in the `stringData` field are internally
+merged into the `data` field. If a key appears in both the `data` and the
+`stringData` field, the value specified in the `stringData` field takes
+precedence.
+
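+As a minimal sketch (the name and values are illustrative), a Secret
+that uses both fields could look like the following; because `username`
+appears in both, the `stringData` value takes precedence:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: example-secret
+data:
+  username: YWRtaW4=        # base64 for "admin"
+stringData:
+  username: administrator   # overrides the value from the data field
+```
+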
+## Types of Secret {#secret-types}
+
+When creating a Secret, you can specify its type using the `type` field of
+the [`Secret`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
+resource, or certain equivalent `kubectl` command line flags (if available).
+The Secret type is used to facilitate programmatic handling of the Secret data.
+
+Kubernetes provides several builtin types for some common usage scenarios.
+These types vary in terms of the validations performed and the constraints
+Kubernetes imposes on them.
+
+| Builtin Type | Usage |
+|--------------|-------|
+| `Opaque` | arbitrary user-defined data |
+| `kubernetes.io/service-account-token` | service account token |
+| `kubernetes.io/dockercfg` | serialized `~/.dockercfg` file |
+| `kubernetes.io/dockerconfigjson` | serialized `~/.docker/config.json` file |
+| `kubernetes.io/basic-auth` | credentials for basic authentication |
+| `kubernetes.io/ssh-auth` | credentials for SSH authentication |
+| `kubernetes.io/tls` | data for a TLS client or server |
+| `bootstrap.kubernetes.io/token` | bootstrap token data |
+
+You can define and use your own Secret type by assigning a non-empty string as the
+`type` value for a Secret object. An empty string is treated as an `Opaque` type.
+Kubernetes doesn't impose any constraints on the type name. However, if you
+are using one of the builtin types, you must meet all the requirements defined
+for that type.
+
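+For example, a hypothetical custom type could be declared like this
+(the type string and key are made up for illustration):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: custom-type-secret
+type: example.com/my-custom-type   # any non-empty string is accepted
+stringData:
+  setting: some-value
+```
+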
+### Opaque secrets
+
+`Opaque` is the default Secret type if omitted from a Secret configuration file.
+When you create a Secret using `kubectl`, you will use the `generic`
+subcommand to indicate an `Opaque` Secret type. For example, the following
+command creates an empty Secret of type `Opaque`.
+
+```shell
+kubectl create secret generic empty-secret
+kubectl get secret empty-secret
+```
-### Built-in Secrets
+The output looks like:
-#### Service accounts automatically create and attach Secrets with API credentials
+```
+NAME TYPE DATA AGE
+empty-secret Opaque 0 2m6s
+```
-Kubernetes automatically creates secrets which contain credentials for
-accessing the API and automatically modifies your Pods to use this type of
-secret.
+The `DATA` column shows the number of data items stored in the Secret.
+In this case, `0` means we have just created an empty Secret.
-The automatic creation and use of API credentials can be disabled or overridden
-if desired. However, if all you need to do is securely access the API server,
-this is the recommended workflow.
+### Service account token Secrets
+
+A `kubernetes.io/service-account-token` type of Secret is used to store a
+token that identifies a service account. When using this Secret type, you need
+to ensure that the `kubernetes.io/service-account.name` annotation is set to an
+existing service account name. A Kubernetes controller fills in some other
+fields, such as the `kubernetes.io/service-account.uid` annotation, and sets the
+`token` key in the `data` field to the actual token content.
+
+The following example configuration declares a service account token Secret:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-sa-sample
+ annotations:
+ kubernetes.io/service-account.name: "sa-name"
+type: kubernetes.io/service-account-token
+data:
+ # You can include additional key value pairs as you do with Opaque Secrets
+ extra: YmFyCg==
+```
+
+When creating a `Pod`, Kubernetes automatically creates a service account Secret
+and modifies your Pod to use this Secret. The service account token
+Secret contains credentials for accessing the API.
+
+The automatic creation and use of API credentials can be disabled or
+overridden if desired. However, if all you need to do is securely access the
+API server, this is the recommended workflow.
See the [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/)
documentation for more information on how service accounts work.
+You can also check the `automountServiceAccountToken` field and the
+`serviceAccountName` field of the
+[`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
+for information on referencing a service account from Pods.
+
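+As an illustrative sketch (the service account name and image are
+placeholders), a Pod can reference a service account and opt out of
+automatic token mounting like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sa-example
+spec:
+  serviceAccountName: sa-name          # assumed to already exist
+  automountServiceAccountToken: false  # skip automatic token mounting
+  containers:
+  - name: app
+    image: registry.example.com/app:v1 # placeholder image
+```
+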
+### Docker config Secrets
+
+You can use one of the following `type` values to create a Secret to
+store the credentials for accessing a Docker registry for images.
+
+- `kubernetes.io/dockercfg`
+- `kubernetes.io/dockerconfigjson`
+
+The `kubernetes.io/dockercfg` type is reserved to store a serialized
+`~/.dockercfg` which is the legacy format for configuring the Docker
+command line. When using this Secret type, you have to ensure the Secret
+`data` field contains a `.dockercfg` key whose value is the content of a
+`~/.dockercfg` file encoded in the base64 format.
+
+The `kubernetes.io/dockerconfigjson` type is designed for storing serialized
+JSON that follows the same format rules as the `~/.docker/config.json` file,
+which is a newer format for `~/.dockercfg`.
+When using this Secret type, the `data` field of the Secret object must
+contain a `.dockerconfigjson` key, in which the content for the
+`~/.docker/config.json` file is provided as a base64 encoded string.
+
+Below is an example for a `kubernetes.io/dockercfg` type of Secret:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-dockercfg
+type: kubernetes.io/dockercfg
+data:
+ .dockercfg: |
+   "<base64 encoded ~/.dockercfg file>"
+```
+
+{{< note >}}
+If you do not want to perform the base64 encoding, you can choose to use the
+`stringData` field instead.
+{{< /note >}}
+
+When you create these types of Secrets using a manifest, the API
+server checks whether the expected key exists in the `data` field, and
+it verifies that the value provided can be parsed as valid JSON. The API
+server doesn't validate whether the JSON actually is a Docker config file.
+
+When you do not have a Docker config file, or you want to use `kubectl`
+to create a Docker registry Secret, you can do:
+
+```shell
+kubectl create secret docker-registry secret-tiger-docker \
+ --docker-username=tiger \
+ --docker-password=pass113 \
+ --docker-email=tiger@acme.com
+```
+
+This command creates a Secret of type `kubernetes.io/dockerconfigjson`.
+If you dump the `.dockerconfigjson` content from the `data` field, you will
+get the following JSON content which is a valid Docker configuration created
+on the fly:
+
+```json
+{
+ "auths": {
+ "https://index.docker.io/v1/": {
+ "username": "tiger",
+ "password": "pass113",
+ "email": "tiger@acme.com",
+ "auth": "dGlnZXI6cGFzczExMw=="
+ }
+ }
+}
+```
+
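+One way to inspect that content, sketched here against the Secret
+created above (the `\.` escapes the literal dot in the key name), is:
+
+```shell
+kubectl get secret secret-tiger-docker \
+  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
+```
+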
+### Basic authentication Secret
+
+The `kubernetes.io/basic-auth` type is provided for storing credentials needed
+for basic authentication. When using this Secret type, the `data` field of the
+Secret must contain the following two keys:
+
+- `username`: the user name for authentication;
+- `password`: the password or token for authentication.
+
+Both values for the above two keys are base64 encoded strings. You can, of
+course, provide the clear text content using the `stringData` field when
+creating the Secret.
+
+The following YAML is an example config for a basic authentication Secret:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-basic-auth
+type: kubernetes.io/basic-auth
+stringData:
+ username: admin
+ password: t0p-Secret
+```
+
+The basic authentication Secret type is provided only for convenience.
+You can create an `Opaque` Secret for credentials used for basic authentication.
+However, using the builtin Secret type helps unify the formats of your credentials,
+and the API server does verify that the required keys are provided in the Secret
+configuration.
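+
+As an illustration (not part of the upstream page; the Pod and environment
+variable names are hypothetical), a Pod can consume the `secret-basic-auth`
+Secret above through environment variables:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: basic-auth-client    # hypothetical name
+spec:
+  containers:
+  - name: app
+    image: nginx
+    env:
+    - name: BASIC_AUTH_USERNAME
+      valueFrom:
+        secretKeyRef:
+          name: secret-basic-auth
+          key: username
+    - name: BASIC_AUTH_PASSWORD
+      valueFrom:
+        secretKeyRef:
+          name: secret-basic-auth
+          key: password
+```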
+
+### SSH authentication secrets
+
+The builtin type `kubernetes.io/ssh-auth` is provided for storing data used in
+SSH authentication. When using this Secret type, you have to specify an
+`ssh-privatekey` key-value pair in the `data` (or `stringData`) field
+as the SSH credential to use.
+
+The following YAML is an example config for an SSH authentication Secret:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-ssh-auth
+type: kubernetes.io/ssh-auth
+data:
+ # the data is abbreviated in this example
+ ssh-privatekey: |
+ MIIEpQIBAAKCAQEAulqb/Y ...
+```
+
+The SSH authentication Secret type is provided only for convenience.
+You can create an `Opaque` Secret for credentials used for SSH authentication.
+However, using the builtin Secret type helps unify the formats of your credentials,
+and the API server does verify that the required keys are provided in the Secret
+configuration.
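+
+As a sketch (not from the upstream page; the key path is a placeholder), you
+can also create such a Secret directly from a private key file by passing the
+type explicitly:
+
+```shell
+kubectl create secret generic secret-ssh-auth \
+  --type=kubernetes.io/ssh-auth \
+  --from-file=ssh-privatekey=path/to/id_rsa
+```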
+
+### TLS secrets
+
+Kubernetes provides the builtin Secret type `kubernetes.io/tls` for storing
+a certificate and its associated key that are typically used for TLS. This
+data is primarily used with TLS termination of the Ingress resource, but may
+be used with other resources or directly by a workload.
+When using this type of Secret, the `tls.key` and the `tls.crt` keys must be provided
+in the `data` (or `stringData`) field of the Secret configuration, although the API
+server doesn't actually validate the values for each key.
+
+The following YAML contains an example config for a TLS Secret:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-tls
+type: kubernetes.io/tls
+data:
+ # the data is abbreviated in this example
+ tls.crt: |
+ MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
+ tls.key: |
+ MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ...
+```
+
+The TLS Secret type is provided only for convenience. You can create an `Opaque`
+Secret for credentials used for a TLS server and/or client. However, using the builtin
+Secret type helps ensure the consistency of Secret format in your project; the API server
+does verify that the required keys are provided in the Secret configuration.
+
+When creating a TLS Secret using `kubectl`, you can use the `tls` subcommand
+as shown in the following example:
+
+```shell
+kubectl create secret tls my-tls-secret \
+ --cert=path/to/cert/file \
+ --key=path/to/key/file
+```
+
+The public/private key pair must exist beforehand. The public key certificate
+for `--cert` must be PEM encoded (Base64-encoded DER format), and match the
+given private key for `--key`.
+The private key must be in what is commonly called PEM private key format,
+unencrypted. In both cases, the initial and the last lines from PEM (for
+example, `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----` for
+a certificate) are *not* included.
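+
+One common way to check that the certificate and private key match (a sketch
+that assumes an RSA key pair and reuses the example paths above) is to compare
+their modulus digests with `openssl`:
+
+```shell
+# Both commands should print the same digest if the pair matches
+openssl x509 -noout -modulus -in path/to/cert/file | openssl md5
+openssl rsa -noout -modulus -in path/to/key/file | openssl md5
+```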
+
+### Bootstrap token Secrets
+
+A bootstrap token Secret can be created by explicitly setting the Secret
+`type` to `bootstrap.kubernetes.io/token`. This type of Secret is designed for
+tokens used during the node bootstrap process. It stores tokens used to sign
+well-known ConfigMaps.
+
+A bootstrap token Secret is usually created in the `kube-system` namespace and
+named in the form `bootstrap-token-<token-id>` where `<token-id>` is a 6 character
+string of the token ID.
+
+As a Kubernetes manifest, a bootstrap token Secret might look like the
+following:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: bootstrap-token-5emitj
+ namespace: kube-system
+type: bootstrap.kubernetes.io/token
+data:
+ auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=
+ expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=
+ token-id: NWVtaXRq
+ token-secret: a3E0Z2lodnN6emduMXAwcg==
+ usage-bootstrap-authentication: dHJ1ZQ==
+ usage-bootstrap-signing: dHJ1ZQ==
+```
+
+A bootstrap token Secret has the following keys specified under `data`:
+
+- `token-id`: A random 6 character string as the token identifier. Required.
+- `token-secret`: A random 16 character string as the actual token secret. Required.
+- `description`: A human-readable string that describes what the token is
+  used for. Optional.
+- `expiration`: An absolute UTC time using RFC3339 specifying when the token
+  should expire. Optional.
+- `usage-bootstrap-<usage>`: A boolean flag indicating additional usage for
+  the bootstrap token.
+- `auth-extra-groups`: A comma-separated list of group names that will be
+  authenticated as in addition to the `system:bootstrappers` group.
+
+The above YAML may look confusing because the values are all base64 encoded
+strings. In fact, you can create an identical Secret object using the following
+YAML with non-encoded values:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ # Note how the Secret is named
+ name: bootstrap-token-5emitj
+ # A bootstrap token Secret usually resides in the kube-system namespace
+ namespace: kube-system
+type: bootstrap.kubernetes.io/token
+stringData:
+ auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token"
+ expiration: "2020-09-13T04:39:10Z"
+ # This token ID is used in the name
+ token-id: "5emitj"
+ token-secret: "kq4gihvszzgn1p0r"
+ # This token can be used for authentication
+ usage-bootstrap-authentication: "true"
+ # and it can be used for signing
+ usage-bootstrap-signing: "true"
+```
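+
+As an aside (an illustration that is not part of the upstream page; the API
+server address is hypothetical), the `token-id` and `token-secret` values
+combine into the single token string `<token-id>.<token-secret>` that tools
+such as `kubeadm` accept:
+
+```shell
+# The full bootstrap token is "<token-id>.<token-secret>"
+kubeadm join 192.168.0.10:6443 \
+  --token 5emitj.kq4gihvszzgn1p0r \
+  --discovery-token-unsafe-skip-ca-verification
+```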
-### Creating a Secret
+## Creating a Secret
There are several options to create a Secret:
@@ -66,7 +395,7 @@ There are several options to create a Secret:
- [create Secret from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [create Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
-### Editing a Secret
+## Editing a Secret
An existing Secret may be edited with the following command:
From f1e5b34607cc86dacd44ff5a662b873c641a6427 Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Thu, 29 Oct 2020 14:21:02 +0800
Subject: [PATCH 14/50] Resync tutorial page expose external IP address
---
.../expose-external-ip-address.md | 225 +++++++++---------
1 file changed, 112 insertions(+), 113 deletions(-)
diff --git a/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md
index 4a4d7779e2bfb..25cb862660e7c 100644
--- a/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md
+++ b/content/zh/docs/tutorials/stateless-application/expose-external-ip-address.md
@@ -3,13 +3,10 @@ title: 公开外部 IP 地址以访问集群中应用程序
content_type: tutorial
weight: 10
---
-
@@ -20,12 +17,8 @@ external IP address.
-->
此页面显示如何创建公开外部 IP 地址的 Kubernetes 服务对象。
-
-
-
## {{% heading "prerequisites" %}}
-
-
* 运行 Hello World 应用程序的五个实例。
* 创建一个公开外部 IP 地址的 Service 对象。
* 使用 Service 对象访问正在运行的应用程序。
-
-
-
1. 在集群中运行 Hello World 应用程序:
-{{< codenew file="service/load-balancer-example.yaml" >}}
-
-```shell
-kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
-```
-
-
- 前面的命令创建一个 [Deployment](/zh/docs/concepts/workloads/controllers/deployment/)
- 对象和一个关联的 [ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/)对象。
- ReplicaSet 有五个 [Pod](/zh/docs/concepts/workloads/pods/pod/),每个都运行 Hello World 应用程序。
+ {{< codenew file="service/load-balancer-example.yaml" >}}
+
+ ```shell
+ kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
+ ```
+
+
+ 前面的命令创建一个
+ {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
+ 对象和一个关联的
+ {{< glossary_tooltip term_id="replica-set" text="ReplicaSet" >}} 对象。
+ ReplicaSet 有五个 {{< glossary_tooltip text="Pods" term_id="pod" >}},
+ 每个都运行 Hello World 应用程序。
2. 显示有关 Deployment 的信息:
- kubectl get deployments hello-world
- kubectl describe deployments hello-world
+ ```shell
+ kubectl get deployments hello-world
+ kubectl describe deployments hello-world
+ ```
3. 显示有关 ReplicaSet 对象的信息:
- kubectl get replicasets
- kubectl describe replicasets
+ ```shell
+ kubectl get replicasets
+ kubectl describe replicasets
+ ```
-4. 创建公开 deployment 的 Service 对象:
+4. 创建公开 Deployment 的 Service 对象:
- kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
+ ```shell
+ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
+ ```
5. 显示有关 Service 的信息:
- kubectl get services my-service
+ ```shell
+ kubectl get services my-service
+ ```
-
+ -->
输出类似于:
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- my-service ClusterIP 10.3.245.137 104.198.205.71 8080/TCP 54s
-
-
- 注意:`type=LoadBalancer` 服务由外部云服务提供商提供支持,本例中不包含此部分,详细信息请参考[此页](/docs/concepts/services-networking/service/#loadbalancer)
-
-
- 注意:如果外部 IP 地址显示为 \,请等待一分钟再次输入相同的命令。
+ ```
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ my-service LoadBalancer 10.3.245.137 104.198.205.71 8080/TCP 54s
+ ```
+
+
+ 提示:`type=LoadBalancer` 服务由外部云服务提供商提供支持,本例中不包含此部分,
+ 详细信息请参考[此页](/zh/docs/concepts/services-networking/service/#loadbalancer)。
+
+
+ 提示:如果外部 IP 地址显示为 \<pending\>,请等待一分钟再次输入相同的命令。
6. 显示有关 Service 的详细信息:
- kubectl describe services my-service
+ ```shell
+ kubectl describe services my-service
+ ```
-
+ -->
输出类似于:
- Name: my-service
- Namespace: default
- Labels: run=load-balancer-example
- Annotations:
- Selector: run=load-balancer-example
- Type: LoadBalancer
- IP: 10.3.245.137
- LoadBalancer Ingress: 104.198.205.71
- Port: 8080/TCP
- NodePort: 32377/TCP
- Endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...
- Session Affinity: None
- Events:
-
-
+ -->
记下服务公开的外部 IP 地址(`LoadBalancer Ingress`)。
在本例中,外部 IP 地址是 104.198.205.71。还要注意 `Port` 和 `NodePort` 的值。
在本例中,`Port` 是 8080,`NodePort` 是32377。
-
+ -->
输出类似于:
- NAME ... IP NODE
- hello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc
- hello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc
- hello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a
- hello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc
- hello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc
-
+ ```
+ NAME ... IP NODE
+ hello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc
+ hello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc
+ hello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a
+ hello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc
+ hello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc
+ ```
8. 使用外部 IP 地址(`LoadBalancer Ingress`)访问 Hello World 应用程序:
- curl http://:
+ ```shell
+ curl http://<external-ip>:<port>
+ ```
-
+ -->
其中 `<external-ip>` 是您的服务的外部 IP 地址(`LoadBalancer Ingress`),
`<port>` 是您的服务描述中的 `port` 的值。
如果您正在使用 minikube,输入 `minikube service my-service` 将在浏览器中自动打开 Hello World 应用程序。
-
+ -->
成功请求的响应是一条问候消息:
- Hello Kubernetes!
-
-
-
+ ```
+ Hello Kubernetes!
+ ```
## {{% heading "cleanup" %}}
-
要删除服务,请输入以下命令:
- kubectl delete services my-service
+```shell
+kubectl delete services my-service
+```
要删除正在运行 Hello World 应用程序的 Deployment,ReplicaSet 和 Pod,请输入以下命令:
- kubectl delete deployment hello-world
-
-
-
+```shell
+kubectl delete deployment hello-world
+```
## {{% heading "whatsnext" %}}
-
-
-了解更多关于[将应用程序与服务连接](/zh/docs/concepts/services-networking/connect-applications-service/)。
-
+进一步了解[将应用程序与服务连接](/zh/docs/concepts/services-networking/connect-applications-service/)。
From 1c8429a173585d4e502d2baf07b0a744fabd0ead Mon Sep 17 00:00:00 2001
From: Karen Bradshaw
Date: Thu, 29 Oct 2020 10:42:44 -0400
Subject: [PATCH 15/50] fix csimigration feature state
---
content/en/docs/concepts/storage/volumes.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 7645b7e5fa830..39410ec2e5319 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -1367,7 +1367,7 @@ For more information on how to develop a CSI driver, refer to the
#### Migrating to CSI drivers from in-tree plugins
-{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.17" state="beta" >}}
The `CSIMigration` feature, when enabled, directs operations against existing in-tree
plugins to corresponding CSI plugins (which are expected to be installed and configured).
From b45a679d36fa69992ea90b5e42db4975aac72e33 Mon Sep 17 00:00:00 2001
From: Anwesh Budhathoki <45763486+anwesh-b@users.noreply.github.com>
Date: Thu, 29 Oct 2020 20:42:46 +0545
Subject: [PATCH 16/50] Update container-runtimes.md
Added few lines to help people understand the procedure
---
.../en/docs/setup/production-environment/container-runtimes.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index ffc5d122931c1..3f0e4bdc951aa 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -85,6 +85,7 @@ net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
+# Apply sysctl params without reboot
sudo sysctl --system
```
@@ -421,6 +422,7 @@ EOF
```
```shell
+# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d
```
@@ -476,6 +478,7 @@ EOF
```
```shell
+# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d
```
From 59cdc2aaecd306d0e4b523e2551a173418eef30f Mon Sep 17 00:00:00 2001
From: Arhell
Date: Fri, 30 Oct 2020 02:07:55 +0200
Subject: [PATCH 17/50] add cncf-landscape shortcode for training page
---
content/en/training/_index.html | 3 +--
layouts/shortcodes/cncf-landscape.html | 6 +++++-
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/content/en/training/_index.html b/content/en/training/_index.html
index 6dc509ff5bc3c..d7437a45de495 100644
--- a/content/en/training/_index.html
+++ b/content/en/training/_index.html
@@ -112,7 +112,6 @@
diff --git a/layouts/shortcodes/cncf-landscape.html b/layouts/shortcodes/cncf-landscape.html
index 5684dc0511716..455f2b5658b66 100644
--- a/layouts/shortcodes/cncf-landscape.html
+++ b/layouts/shortcodes/cncf-landscape.html
@@ -57,7 +57,11 @@
{{- end -}}
-
+ {{ if ( .Get "category" ) }}
+
+ {{ else }}
+
+ {{ end }}
{{- end -}}
From 384ba18c003c9787a31049d6ca4ced7adf2a01ee Mon Sep 17 00:00:00 2001
From: Zhang Yong
Date: Fri, 30 Oct 2020 09:25:17 +0800
Subject: [PATCH 18/50] update containerd systemd configuration
---
.../container-runtimes.md | 21 +++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/content/zh/docs/setup/production-environment/container-runtimes.md b/content/zh/docs/setup/production-environment/container-runtimes.md
index 500ea95345fcf..fc4494cf73461 100644
--- a/content/zh/docs/setup/production-environment/container-runtimes.md
+++ b/content/zh/docs/setup/production-environment/container-runtimes.md
@@ -877,9 +877,16 @@ Start-Service containerd
-### systemd
+### systemd {#containerd-systemd}
使用 `systemd` cgroup 驱动,在 `/etc/containerd/config.toml` 中设置
```
-[plugins.cri]
-systemd_cgroup = true
+ [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
+ ...
+ [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
+ SystemdCgroup = true
```
当使用 kubeadm 时,请手动配置
[kubelet 的 cgroup 驱动](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
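例如(这只是一个基于上述说明的示意,并非上游文档内容),kubelet 可以通过
`KubeletConfiguration` 中的 `cgroupDriver` 字段来匹配上面 containerd 的
systemd cgroup 配置:
```
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```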
From 0dfed51bc3e23d34fc5939dd5e8f624ba508e64d Mon Sep 17 00:00:00 2001
From: bryan
Date: Wed, 30 Sep 2020 09:23:20 +0800
Subject: [PATCH 19/50] Translate Restrict a Container's Syscalls with Seccomp
into Chinese
---
content/zh/docs/tutorials/clusters/seccomp.md | 527 ++++++++++++++++++
.../security/seccomp/alpha/audit-pod.yaml | 16 +
.../security/seccomp/alpha/default-pod.yaml | 16 +
.../pods/security/seccomp/alpha/fine-pod.yaml | 16 +
.../security/seccomp/alpha/violation-pod.yaml | 16 +
.../pods/security/seccomp/ga/audit-pod.yaml | 18 +
.../pods/security/seccomp/ga/default-pod.yaml | 17 +
.../pods/security/seccomp/ga/fine-pod.yaml | 18 +
.../security/seccomp/ga/violation-pod.yaml | 18 +
.../examples/pods/security/seccomp/kind.yaml | 7 +
.../pods/security/seccomp/profiles/audit.json | 3 +
.../seccomp/profiles/fine-grained.json | 65 +++
.../security/seccomp/profiles/violation.json | 3 +
13 files changed, 740 insertions(+)
create mode 100644 content/zh/docs/tutorials/clusters/seccomp.md
create mode 100644 content/zh/examples/pods/security/seccomp/alpha/audit-pod.yaml
create mode 100644 content/zh/examples/pods/security/seccomp/alpha/default-pod.yaml
create mode 100644 content/zh/examples/pods/security/seccomp/alpha/fine-pod.yaml
create mode 100644 content/zh/examples/pods/security/seccomp/alpha/violation-pod.yaml
create mode 100644 content/zh/examples/pods/security/seccomp/ga/audit-pod.yaml
create mode 100644 content/zh/examples/pods/security/seccomp/ga/default-pod.yaml
create mode 100644 content/zh/examples/pods/security/seccomp/ga/fine-pod.yaml
create mode 100644 content/zh/examples/pods/security/seccomp/ga/violation-pod.yaml
create mode 100644 content/zh/examples/pods/security/seccomp/kind.yaml
create mode 100644 content/zh/examples/pods/security/seccomp/profiles/audit.json
create mode 100644 content/zh/examples/pods/security/seccomp/profiles/fine-grained.json
create mode 100644 content/zh/examples/pods/security/seccomp/profiles/violation.json
diff --git a/content/zh/docs/tutorials/clusters/seccomp.md b/content/zh/docs/tutorials/clusters/seccomp.md
new file mode 100644
index 0000000000000..9dea407d2ea15
--- /dev/null
+++ b/content/zh/docs/tutorials/clusters/seccomp.md
@@ -0,0 +1,527 @@
+---
+title: 使用 Seccomp 限制容器的系统调用
+content_type: tutorial
+weight: 20
+---
+
+
+
+{{< feature-state for_k8s_version="v1.19" state="stable" >}}
+
+
+Seccomp 代表安全计算模式,自 2.6.12 版本以来一直是 Linux 内核的功能。
+它可以用来对进程的特权进行沙盒处理,从而限制了它可以从用户空间向内核进行的调用。
+Kubernetes 允许你将加载到节点上的 seccomp 配置文件自动应用于 Pod 和容器。
+
+确定工作负载所需的特权可能很困难。在本教程中,你将了解如何将 seccomp 配置文件
+加载到本地 Kubernetes 集群中,如何将它们应用到 Pod,以及如何开始制作仅向容器
+进程提供必要特权的配置文件。
+
+## {{% heading "objectives" %}}
+
+
+* 了解如何在节点上加载 seccomp 配置文件
+* 了解如何将 seccomp 配置文件应用于容器
+* 观察由容器进程进行的系统调用的审核
+* 观察当指定了一个不存在的配置文件时的行为
+* 观察违反 seccomp 配置的情况
+* 了解如何创建精确的 seccomp 配置文件
+* 了解如何应用容器运行时默认 seccomp 配置文件
+
+## {{% heading "prerequisites" %}}
+
+
+为了完成本教程中的所有步骤,你必须安装 [kind](https://kind.sigs.k8s.io/docs/user/quick-start/)
+和 [kubectl](/zh/docs/tasks/tools/install-kubectl/)。本教程将展示同时具有 alpha(v1.19 之前的版本)
+和正式发布(GA)的 seccomp 功能的示例,因此请确保为所使用的版本[正确配置](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version)了集群。
+
+
+
+## 下载示例 Seccomp 配置文件
+
+这些配置文件的内容将在稍后进行分析;现在先将它们下载到名为 `profiles/` 的目录中,以便加载到集群中。
+
+{{< tabs name="tab_with_code" >}}
+{{{< tab name="audit.json" >}}
+{{< codenew file="pods/security/seccomp/profiles/audit.json" >}}
+{{< /tab >}}
+{{< tab name="violation.json" >}}
+{{< codenew file="pods/security/seccomp/profiles/violation.json" >}}
+{{< /tab >}}
+{{< tab name="fine-grained.json" >}}
+{{< codenew file="pods/security/seccomp/profiles/fine-grained.json" >}}
+{{< /tab >}}
+{{< /tabs >}}
+
+
+## 使用 Kind 创建一个本地 Kubernetes 集群
+
+为简单起见,可以使用 [kind](https://kind.sigs.k8s.io/) 创建一个已经加载 seccomp 配置文件的单节点集群。
+Kind 在 Docker 中运行 Kubernetes,因此集群的每个节点实际上只是一个容器。这允许将文件挂载到每个容器的文件系统中,
+就像将文件挂载到节点上一样。
+
+{{< codenew file="pods/security/seccomp/kind.yaml" >}}
+
+
+下载上面的这个示例,并将其保存为 `kind.yaml`。然后使用这个配置创建集群。
+
+```
+kind create cluster --config=kind.yaml
+```
+
+
+一旦这个集群已经就绪,找到作为单节点集群运行的容器:
+
+```
+docker ps
+```
+
+
+你应该看到输出显示正在运行的容器名称为 `kind-control-plane`。
+
+```
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+6a96207fed4b kindest/node:v1.18.2 "/usr/local/bin/entr…" 27 seconds ago Up 24 seconds 127.0.0.1:42223->6443/tcp kind-control-plane
+```
+
+
+如果观察该容器的文件系统,则应该看到 `profiles/` 目录已成功加载到 kubelet 的默认 seccomp 路径中。
+使用 `docker exec` 在 Pod 中运行命令:
+
+```
+docker exec -it 6a96207fed4b ls /var/lib/kubelet/seccomp/profiles
+```
+
+```
+audit.json fine-grained.json violation.json
+```
+
+
+## 使用 Seccomp 配置文件创建 Pod 以进行系统调用审核
+
+首先,将 `audit.json` 配置文件应用到新的 Pod 中,该配置文件将记录该进程的所有系统调用。
+
+为你的 Kubernetes 版本下载正确的清单:
+
+{{< tabs name="audit_pods" >}}
+{{< tab name="v1.19 或更新版本(GA)" >}}
+{{< codenew file="pods/security/seccomp/ga/audit-pod.yaml" >}}
+{{< /tab >}}
+{{< tab name="v1.19 之前版本(alpha)" >}}
+{{< codenew file="pods/security/seccomp/alpha/audit-pod.yaml" >}}
+{{< /tab >}}
+{{< /tabs >}}
+
+
+
+在集群中创建 Pod:
+
+```
+kubectl apply -f audit-pod.yaml
+```
+
+
+这个配置文件并不限制任何系统调用,所以这个 Pod 应该会成功启动。
+
+```
+kubectl get pod/audit-pod
+```
+
+```
+NAME READY STATUS RESTARTS AGE
+audit-pod 1/1 Running 0 30s
+```
+
+
+为了能够与该容器公开的端点进行交互,请创建一个 NodePort 服务,
+该服务允许从 kind 控制平面容器内部访问该端点。
+
+```
+kubectl expose pod/audit-pod --type NodePort --port 5678
+```
+
+
+检查这个服务在这个节点上被分配了什么端口。
+
+```
+kubectl get svc/audit-pod
+```
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+audit-pod NodePort 10.111.36.142 <none> 5678:32373/TCP 72s
+```
+
+
+现在你可以使用 `curl` 命令从 kind 控制平面容器内部通过该服务暴露出来的端口来访问这个端点。
+
+```
+docker exec -it 6a96207fed4b curl localhost:32373
+```
+
+```
+just made some syscalls!
+```
+
+
+你可以看到该进程正在运行,但是实际上执行了哪些系统调用?因为该 Pod 是在本地集群中运行的,
+你应该可以在 `/var/log/syslog` 日志中看到这些。打开一个新的终端窗口,使用 `tail` 命令来
+查看来自 `http-echo` 的调用输出:
+
+```
+tail -f /var/log/syslog | grep 'http-echo'
+```
+
+你应该已经可以看到 `http-echo` 发出的一些系统调用日志,
+如果你在控制平面容器内 `curl` 了这个端点,你会看到更多的日志。
+
+```
+Jul 6 15:37:40 my-machine kernel: [369128.669452] audit: type=1326 audit(1594067860.484:14536): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=51 compat=0 ip=0x46fe1f code=0x7ffc0000
+Jul 6 15:37:40 my-machine kernel: [369128.669453] audit: type=1326 audit(1594067860.484:14537): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=54 compat=0 ip=0x46fdba code=0x7ffc0000
+Jul 6 15:37:40 my-machine kernel: [369128.669455] audit: type=1326 audit(1594067860.484:14538): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 code=0x7ffc0000
+Jul 6 15:37:40 my-machine kernel: [369128.669456] audit: type=1326 audit(1594067860.484:14539): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=288 compat=0 ip=0x46fdba code=0x7ffc0000
+Jul 6 15:37:40 my-machine kernel: [369128.669517] audit: type=1326 audit(1594067860.484:14540): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=0 compat=0 ip=0x46fd44 code=0x7ffc0000
+Jul 6 15:37:40 my-machine kernel: [369128.669519] audit: type=1326 audit(1594067860.484:14541): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1 code=0x7ffc0000
+Jul 6 15:38:40 my-machine kernel: [369188.671648] audit: type=1326 audit(1594067920.488:14559): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1 code=0x7ffc0000
+Jul 6 15:38:40 my-machine kernel: [369188.671726] audit: type=1326 audit(1594067920.488:14560): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064 comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53 code=0x7ffc0000
+```
+
+
+通过查看每一行上的 `syscall=` 条目,你可以开始了解 `http-echo` 进程所需的系统调用。
+尽管这些不太可能包含它使用的所有系统调用,但它可以作为该容器的 seccomp 配置文件的基础。
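+
+作为示意(并非上游文档的内容),你可以用类似下面的命令粗略汇总 `http-echo` 所用到的系统调用编号:
+
+```
+grep 'http-echo' /var/log/syslog | grep -o 'syscall=[0-9]*' | sort -u
+```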
+
+开始下一节之前,请清理该 Pod 和 Service:
+
+```
+kubectl delete pod/audit-pod
+kubectl delete svc/audit-pod
+```
+
+
+## 使用导致违规的 Seccomp 配置文件创建 Pod
+
+为了进行演示,请将不允许任何系统调用的配置文件应用于 Pod。
+
+为你的 Kubernetes 版本下载正确的清单:
+
+{{< tabs name="violation_pods" >}}
+{{< tab name="v1.19 或更新版本(GA)" >}}
+{{< codenew file="pods/security/seccomp/ga/violation-pod.yaml" >}}
+{{< /tab >}}
+{{< tab name="v1.19 之前版本(alpha)" >}}
+{{< codenew file="pods/security/seccomp/alpha/violation-pod.yaml" >}}
+{{< /tab >}}
+{{< /tabs >}}
+
+
+
+在集群中创建 Pod:
+
+```
+kubectl apply -f violation-pod.yaml
+```
+
+
+如果你检查 Pod 的状态,你将会看到该 Pod 启动失败。
+
+```
+kubectl get pod/violation-pod
+```
+
+```
+NAME READY STATUS RESTARTS AGE
+violation-pod 0/1 CrashLoopBackOff 1 6s
+```
+
+
+如上例所示,`http-echo` 进程需要大量的系统调用。这里通过设置 `"defaultAction": "SCMP_ACT_ERRNO"`,
+指示 seccomp 在任何系统调用上均返回错误。这虽然非常安全,但也使工作负载无法执行任何有意义的操作。
+你真正想要的是只给工作负载授予其所需的特权。
+
+开始下一节之前,请清理该 Pod 和 Service:
+
+```
+kubectl delete pod/violation-pod
+kubectl delete svc/violation-pod
+```
+
+
+## 使用设置仅允许需要的系统调用的配置文件来创建 Pod
+
+如果你看一下 `fine-grained.json` 配置文件,你会注意到其中包含第一个示例里配置文件设置为
+`"defaultAction": "SCMP_ACT_LOG"` 时观察到的一些系统调用。
+现在,该配置文件设置为 `"defaultAction": "SCMP_ACT_ERRNO"`,但在 `"action": "SCMP_ACT_ALLOW"` 块中明确允许一组系统调用。
+理想情况下,容器将成功运行,并且你将不会看到任何发送到 `syslog` 的消息。
+
+为你的 Kubernetes 版本下载正确的清单:
+
+{{< tabs name="fine_pods" >}}
+{{< tab name="v1.19 或更新版本(GA)" >}}
+{{< codenew file="pods/security/seccomp/ga/fine-pod.yaml" >}}
+{{< /tab >}}
+{{< tab name="v1.19 之前版本(alpha)" >}}
+{{< codenew file="pods/security/seccomp/alpha/fine-pod.yaml" >}}
+{{< /tab >}}
+{{< /tabs >}}
+
+
+
+在你的集群上创建 Pod:
+
+```
+kubectl apply -f fine-pod.yaml
+```
+
+
+Pod 应该被成功启动。
+
+```
+kubectl get pod/fine-pod
+```
+
+```
+NAME READY STATUS RESTARTS AGE
+fine-pod 1/1 Running 0 30s
+```
+
+
+打开一个新的终端窗口,使用 `tail` 命令查看来自 `http-echo` 的调用的输出:
+
+```
+tail -f /var/log/syslog | grep 'http-echo'
+```
+
+
+使用 NodePort 服务为该 Pod 开一个端口:
+
+```
+kubectl expose pod/fine-pod --type NodePort --port 5678
+```
+
+
+检查服务在该节点被分配了什么端口:
+
+```
+kubectl get svc/fine-pod
+```
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+fine-pod NodePort 10.111.36.142 <none> 5678:32373/TCP 72s
+```
+
+
+使用 `curl` 命令从 kind 控制平面容器内部请求这个端点:
+
+```
+docker exec -it 6a96207fed4b curl localhost:32373
+```
+
+```
+just made some syscalls!
+```
+
+
+你会看到 `syslog` 中没有任何输出,因为这个配置文件允许了所有需要的系统调用,
+并指定列表之外的系统调用一律返回错误。从安全角度来看,这是理想的情况,
+但是分析程序需要多付出一些努力。如果有一种无需太多精力就能接近这种安全性的简单方法,那就太好了。
+
+开始下一节之前,请清理该 Pod 和 Service:
+
+```
+kubectl delete pod/fine-pod
+kubectl delete svc/fine-pod
+```
+
+
+## 使用容器运行时默认的 Seccomp 配置文件创建 Pod
+
+大多数容器运行时都提供一组默认允许或不允许的系统调用。通过使用 `runtime/default` 注解,
+或将 Pod 或容器的安全上下文中的 seccomp 类型设置为 `RuntimeDefault`,
+可以轻松地在 Kubernetes 中应用这些默认值。
+
+为你的 Kubernetes 版本下载正确的清单:
+
+{{< tabs name="default_pods" >}}
+{{< tab name="v1.19 或更新版本(GA)" >}}
+{{< codenew file="pods/security/seccomp/ga/default-pod.yaml" >}}
+{{< /tab >}}
+{{< tab name="v1.19 之前版本(alpha)" >}}
+{{< codenew file="pods/security/seccomp/alpha/default-pod.yaml" >}}
+{{< /tab >}}
+{{< /tabs >}}
+
+
+
+默认的 seccomp 配置文件应该为大多数工作负载提供足够的权限。
+
+## {{% heading "whatsnext" %}}
+
+
+额外的资源:
+
+* [Seccomp 概要](https://lwn.net/Articles/656307/)
+* [Seccomp 在 Docker 中的安全配置](https://docs.docker.com/engine/security/seccomp/)
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/alpha/audit-pod.yaml b/content/zh/examples/pods/security/seccomp/alpha/audit-pod.yaml
new file mode 100644
index 0000000000000..d2d93d7fba912
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/alpha/audit-pod.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: audit-pod
+ labels:
+ app: audit-pod
+ annotations:
+ seccomp.security.alpha.kubernetes.io/pod: localhost/profiles/audit.json
+spec:
+ containers:
+ - name: test-container
+ image: hashicorp/http-echo:0.2.3
+ args:
+ - "-text=just made some syscalls!"
+ securityContext:
+ allowPrivilegeEscalation: false
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/alpha/default-pod.yaml b/content/zh/examples/pods/security/seccomp/alpha/default-pod.yaml
new file mode 100644
index 0000000000000..ebb734441891c
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/alpha/default-pod.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: default-pod
+ labels:
+ app: default-pod
+ annotations:
+ seccomp.security.alpha.kubernetes.io/pod: runtime/default
+spec:
+ containers:
+ - name: test-container
+ image: hashicorp/http-echo:0.2.3
+ args:
+ - "-text=just made some syscalls!"
+ securityContext:
+ allowPrivilegeEscalation: false
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/alpha/fine-pod.yaml b/content/zh/examples/pods/security/seccomp/alpha/fine-pod.yaml
new file mode 100644
index 0000000000000..abd53585471aa
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/alpha/fine-pod.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: fine-pod
+ labels:
+ app: fine-pod
+ annotations:
+ seccomp.security.alpha.kubernetes.io/pod: localhost/profiles/fine-grained.json
+spec:
+ containers:
+ - name: test-container
+ image: hashicorp/http-echo:0.2.3
+ args:
+ - "-text=just made some syscalls!"
+ securityContext:
+ allowPrivilegeEscalation: false
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/alpha/violation-pod.yaml b/content/zh/examples/pods/security/seccomp/alpha/violation-pod.yaml
new file mode 100644
index 0000000000000..177f56af8f653
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/alpha/violation-pod.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: violation-pod
+ labels:
+ app: violation-pod
+ annotations:
+ seccomp.security.alpha.kubernetes.io/pod: localhost/profiles/violation.json
+spec:
+ containers:
+ - name: test-container
+ image: hashicorp/http-echo:0.2.3
+ args:
+ - "-text=just made some syscalls!"
+ securityContext:
+ allowPrivilegeEscalation: false
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/ga/audit-pod.yaml b/content/zh/examples/pods/security/seccomp/ga/audit-pod.yaml
new file mode 100644
index 0000000000000..409d4b923c45a
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/ga/audit-pod.yaml
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: audit-pod
+ labels:
+ app: audit-pod
+spec:
+ securityContext:
+ seccompProfile:
+ type: Localhost
+ localhostProfile: profiles/audit.json
+ containers:
+ - name: test-container
+ image: hashicorp/http-echo:0.2.3
+ args:
+ - "-text=just made some syscalls!"
+ securityContext:
+ allowPrivilegeEscalation: false
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/ga/default-pod.yaml b/content/zh/examples/pods/security/seccomp/ga/default-pod.yaml
new file mode 100644
index 0000000000000..fbeec4c1676d3
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/ga/default-pod.yaml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: audit-pod
+ labels:
+ app: audit-pod
+spec:
+ securityContext:
+ seccompProfile:
+ type: RuntimeDefault
+ containers:
+ - name: test-container
+ image: hashicorp/http-echo:0.2.3
+ args:
+ - "-text=just made some syscalls!"
+ securityContext:
+ allowPrivilegeEscalation: false
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/ga/fine-pod.yaml b/content/zh/examples/pods/security/seccomp/ga/fine-pod.yaml
new file mode 100644
index 0000000000000..692b8281516ca
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/ga/fine-pod.yaml
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: fine-pod
+ labels:
+ app: fine-pod
+spec:
+ securityContext:
+ seccompProfile:
+ type: Localhost
+ localhostProfile: profiles/fine-grained.json
+ containers:
+ - name: test-container
+ image: hashicorp/http-echo:0.2.3
+ args:
+ - "-text=just made some syscalls!"
+ securityContext:
+ allowPrivilegeEscalation: false
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/ga/violation-pod.yaml b/content/zh/examples/pods/security/seccomp/ga/violation-pod.yaml
new file mode 100644
index 0000000000000..70deadf4b22b3
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/ga/violation-pod.yaml
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: violation-pod
+ labels:
+ app: violation-pod
+spec:
+ securityContext:
+ seccompProfile:
+ type: Localhost
+ localhostProfile: profiles/violation.json
+ containers:
+ - name: test-container
+ image: hashicorp/http-echo:0.2.3
+ args:
+ - "-text=just made some syscalls!"
+ securityContext:
+ allowPrivilegeEscalation: false
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/kind.yaml b/content/zh/examples/pods/security/seccomp/kind.yaml
new file mode 100644
index 0000000000000..4418359661c78
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/kind.yaml
@@ -0,0 +1,7 @@
+apiVersion: kind.x-k8s.io/v1alpha4
+kind: Cluster
+nodes:
+- role: control-plane
+ extraMounts:
+ - hostPath: "./profiles"
+ containerPath: "/var/lib/kubelet/seccomp/profiles"
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/profiles/audit.json b/content/zh/examples/pods/security/seccomp/profiles/audit.json
new file mode 100644
index 0000000000000..1f2d5df6a4e87
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/profiles/audit.json
@@ -0,0 +1,3 @@
+{
+ "defaultAction": "SCMP_ACT_LOG"
+}
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/profiles/fine-grained.json b/content/zh/examples/pods/security/seccomp/profiles/fine-grained.json
new file mode 100644
index 0000000000000..2eeaf9133e3c2
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/profiles/fine-grained.json
@@ -0,0 +1,65 @@
+{
+ "defaultAction": "SCMP_ACT_ERRNO",
+ "architectures": [
+ "SCMP_ARCH_X86_64",
+ "SCMP_ARCH_X86",
+ "SCMP_ARCH_X32"
+ ],
+ "syscalls": [
+ {
+ "names": [
+ "accept4",
+ "epoll_wait",
+ "pselect6",
+ "futex",
+ "madvise",
+ "epoll_ctl",
+ "getsockname",
+ "setsockopt",
+ "vfork",
+ "mmap",
+ "read",
+ "write",
+ "close",
+ "arch_prctl",
+ "sched_getaffinity",
+ "munmap",
+ "brk",
+ "rt_sigaction",
+ "rt_sigprocmask",
+ "sigaltstack",
+ "gettid",
+ "clone",
+ "bind",
+ "socket",
+ "openat",
+ "readlinkat",
+ "exit_group",
+ "epoll_create1",
+ "listen",
+ "rt_sigreturn",
+ "sched_yield",
+ "clock_gettime",
+ "connect",
+ "dup2",
+ "epoll_pwait",
+ "execve",
+ "exit",
+ "fcntl",
+ "getpid",
+ "getuid",
+ "ioctl",
+ "mprotect",
+ "nanosleep",
+ "open",
+ "poll",
+ "recvfrom",
+ "sendto",
+ "set_tid_address",
+ "setitimer",
+ "writev"
+ ],
+ "action": "SCMP_ACT_ALLOW"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/content/zh/examples/pods/security/seccomp/profiles/violation.json b/content/zh/examples/pods/security/seccomp/profiles/violation.json
new file mode 100644
index 0000000000000..7ce0faa8b06ed
--- /dev/null
+++ b/content/zh/examples/pods/security/seccomp/profiles/violation.json
@@ -0,0 +1,3 @@
+{
+ "defaultAction": "SCMP_ACT_ERRNO"
+}
\ No newline at end of file
From 91d6aa7de5329665018c966f93f73960512adb8a Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Thu, 29 Oct 2020 14:05:49 +0800
Subject: [PATCH 20/50] Fix typo in Daemonset
---
.../workloads/controllers/daemonset.md | 20 +++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/content/zh/docs/concepts/workloads/controllers/daemonset.md b/content/zh/docs/concepts/workloads/controllers/daemonset.md
index eeca63ebe8e96..ece35c15cf017 100644
--- a/content/zh/docs/concepts/workloads/controllers/daemonset.md
+++ b/content/zh/docs/concepts/workloads/controllers/daemonset.md
@@ -65,7 +65,7 @@ You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` f
-基于 YAML 文件创建 DaemonSet:
+基于 YAML 文件创建 DaemonSet:
```
kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml
@@ -199,7 +199,11 @@ If you do not specify either, then the DaemonSet controller will create Pods on
-->
### 仅在某些节点上运行 Pod
-如果指定了 `.spec.template.spec.nodeSelector`,DaemonSet Controller 将在能够与 [Node Selector](/zh/docs/concepts/scheduling-eviction/assign-pod-node/) 匹配的节点上创建 Pod。类似这种情况,可以指定 `.spec.template.spec.affinity`,然后 DaemonSet Controller 将在能够与 [node Affinity](/zh/docs/concepts/scheduling-eviction/assign-pod-node/) 匹配的节点上创建 Pod。
+如果指定了 `.spec.template.spec.nodeSelector`,DaemonSet 控制器将在能够与
+[Node 选择算符](/zh/docs/concepts/scheduling-eviction/assign-pod-node/) 匹配的节点上创建 Pod。
+类似这种情况,可以指定 `.spec.template.spec.affinity`,之后 DaemonSet 控制器
+将在能够与[节点亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/)
+匹配的节点上创建 Pod。
如果根本就没有指定,则 DaemonSet Controller 将在所有节点上创建 Pod。
-## 如何调度 Daemon Pods
+## Daemon Pods 是如何被调度的
### 通过默认调度器调度
@@ -228,7 +232,7 @@ That introduces the following issues:
-->
DaemonSet 确保所有符合条件的节点都运行该 Pod 的一个副本。
通常,运行 Pod 的节点由 Kubernetes 调度器选择。
-不过,DaemonSet pods 由 DaemonSet 控制器创建和调度。这就带来了以下问题:
+不过,DaemonSet Pods 由 DaemonSet 控制器创建和调度。这就带来了以下问题:
* Pod 行为的不一致性:正常 Pod 在被创建后等待调度时处于 `Pending` 状态,
DaemonSet Pods 创建后不会处于 `Pending` 状态下。这使用户感到困惑。
@@ -250,7 +254,7 @@ changes are made to the `spec.template` of the DaemonSet.
默认调度器接下来将 Pod 绑定到目标主机。
如果 DaemonSet Pod 的节点亲和性配置已存在,则被替换。
DaemonSet 控制器仅在创建或修改 DaemonSet Pod 时执行这些操作,
-并且不回更改 DaemonSet 的 `spec.template`。
+并且不会更改 DaemonSet 的 `spec.template`。
```yaml
nodeAffinity:
@@ -284,10 +288,10 @@ the related features.
尽管 Daemon Pods 遵循[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration)
规则,根据相关特性,控制器会自动将以下容忍度添加到 DaemonSet Pod:
-| 容忍度键名 | 效果 | 版本 | 描述 |
+| 容忍度键名 | 效果 | 版本 | 描述 |
| ---------------------------------------- | ---------- | ------- | ------------------------------------------------------------ |
| `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | 当出现类似网络断开的情况导致节点问题时,DaemonSet Pod 不会被逐出。 |
-| `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | 当出现类似于网络断开的情况导致节点问题时,DaemonSet Pod 不会被逐出。 |
+| `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | 当出现类似于网络断开的情况导致节点问题时,DaemonSet Pod 不会被逐出。 |
| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | |
| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | |
| `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | DaemonSet Pod 能够容忍默认调度器所设置的 `unschedulable` 属性. |
@@ -313,7 +317,7 @@ Some possible patterns for communicating with Pods in a DaemonSet are:
与 DaemonSet 中的 Pod 进行通信的几种可能模式如下:
-- **Push**:配置 DaemonSet 中的 Pod,将更新发送到另一个服务,例如统计数据库。
+- **推送(Push)**:配置 DaemonSet 中的 Pod,将更新发送到另一个服务,例如统计数据库。
这些服务没有客户端。
- **NodeIP 和已知端口**:DaemonSet 中的 Pod 可以使用 `hostPort`,从而可以通过节点 IP
From 418f0af120c86265ad9665f2824669574c25f69c Mon Sep 17 00:00:00 2001
From: "wangjibao.lc"
Date: Fri, 30 Oct 2020 16:29:38 +0800
Subject: [PATCH 21/50] update image.cm for zh
---
content/zh/docs/setup/production-environment/tools/kops.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/zh/docs/setup/production-environment/tools/kops.md b/content/zh/docs/setup/production-environment/tools/kops.md
index 86cb3ddf0ff02..e4a0edc12355e 100644
--- a/content/zh/docs/setup/production-environment/tools/kops.md
+++ b/content/zh/docs/setup/production-environment/tools/kops.md
@@ -34,7 +34,7 @@ kops 是一个自用的供应系统:
* 全自动安装流程
* 使用 DNS 识别集群
* 自我修复:一切都在自动扩展组中运行
-* 支持多种操作系统(如 Debian、Ubuntu 16.04、CentOS、RHEL、Amazon Linux 和 CoreOS) - 参考 [images.md](https://github.com/kubernetes/kops/blob/master/docs/images.md)
+* 支持多种操作系统(如 Debian、Ubuntu 16.04、CentOS、RHEL、Amazon Linux 和 CoreOS) - 参考 [images.md](https://github.com/kubernetes/kops/blob/master/docs/operations/images.md)
* 支持高可用 - 参考 [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/high_availability.md)
* 可以直接提供或者生成 terraform 清单 - 参考 [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)
From 930d8938d02e5e1f1ffff21fb3a2dba1f7bf89e6 Mon Sep 17 00:00:00 2001
From: wwgfhf <51694849+wwgfhf@users.noreply.github.com>
Date: Fri, 30 Oct 2020 16:46:16 +0800
Subject: [PATCH 22/50] Update horizontal-pod-autoscale.md
---
.../zh/docs/tasks/run-application/horizontal-pod-autoscale.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md
index 4c88fc2c560e1..91226eaa7cb3d 100644
--- a/content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -250,7 +250,7 @@ conservatively assume the not-yet-ready pods are consuming 0% of the
desired metric, further dampening the magnitude of a scale up.
-->
此外,如果存在任何尚未就绪的 Pod,我们可以在不考虑遗漏指标或尚未就绪的 Pod 的情况下进行扩缩,
-我们保守地假设尚未就绪的 Pod 消耗了试题指标的 0%,从而进一步降低了扩缩的幅度。
+我们保守地假设尚未就绪的 Pod 消耗了期望指标的 0%,从而进一步降低了扩缩的幅度。
From 379b6907f3a40983dcab8dd5ca007403093f3103 Mon Sep 17 00:00:00 2001
From: huccshen <1171593960@qq.com>
Date: Fri, 30 Oct 2020 17:15:42 +0800
Subject: [PATCH 24/50] Update define-environment-variable-container.md
---
.../define-environment-variable-container.md | 20 +++----------------
1 file changed, 3 insertions(+), 17 deletions(-)
diff --git a/content/zh/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/zh/docs/tasks/inject-data-application/define-environment-variable-container.md
index 05a207a23664d..de9f1e8a5357b 100644
--- a/content/zh/docs/tasks/inject-data-application/define-environment-variable-container.md
+++ b/content/zh/docs/tasks/inject-data-application/define-environment-variable-container.md
@@ -76,21 +76,12 @@ Pod:
```
-1. 进入该 Pod 下的容器并打开一个命令终端:
+1. 列出 Pod 容器的环境变量:
```shell
- kubectl exec -it envar-demo -- /bin/bash
- ```
-
-
-1. 在命令终端中通过执行 `printenv` 打印出环境变量。
-
- ```shell
- root@envar-demo:/# printenv
+ kubectl exec envar-demo -- printenv
```
-1. 通过键入 `exit` 退出命令终端。
-
由于云驱动的开发和发布的步调与 Kubernetes 项目不同,将服务提供商专用代码抽象到
-{{< glossary_tooltip text="`cloud-controller-manager`" term_id="cloud-controller-manager" >}}
+`{{< glossary_tooltip text="cloud-controller-manager" term_id="cloud-controller-manager" >}}`
二进制中有助于云服务厂商在 Kubernetes 核心代码之外独立进行开发。
名字空间是在多个用户之间划分集群资源的一种方法(通过[资源配额](/zh/docs/concepts/policy/resource-quotas/))。
-在 Kubernetes 未来版本中,相同名字空间中的对象默认将具有相同的访问控制策略。
例子:
@@ -338,7 +338,7 @@ Note that the introduction of underscores to a plugin filename does not prevent
The command from the above example, can be invoked using either a dash (`-`) or an underscore (`_`):
-->
请注意,在插件文件名中引入下划线并不会阻止我们使用 `kubectl foo_bar` 之类的命令。
-可以使用破折号(`-`)或下划线(`-`)调用上面示例中的命令:
+可以使用破折号(`-`)或下划线(`_`)调用上面示例中的命令:
```shell
# 我们的插件也可以用破折号来调用
From 9f052f96d409cc50485a69305da94756d6a8a481 Mon Sep 17 00:00:00 2001
From: doug-fish
Date: Fri, 30 Oct 2020 10:49:13 -0500
Subject: [PATCH 28/50] Add missing space
---
.../en/docs/concepts/services-networking/network-policies.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md
index 774d7b7808632..8110996397f2d 100644
--- a/content/en/docs/concepts/services-networking/network-policies.md
+++ b/content/en/docs/concepts/services-networking/network-policies.md
@@ -226,7 +226,7 @@ As of Kubernetes 1.20, the following functionality does not exist in the Network
- Forcing internal cluster traffic to go through a common gateway (this might be best served with a service mesh or other proxy).
- Anything TLS related (use a service mesh or ingress controller for this).
- Node specific policies (you can use CIDR notation for these, but you cannot target nodes by their Kubernetes identities specifically).
-- Targeting of namespaces or services by name (you can, however, target pods or namespaces by their{{< glossary_tooltip text="labels" term_id="label" >}}, which is often a viable workaround).
+- Targeting of namespaces or services by name (you can, however, target pods or namespaces by their {{< glossary_tooltip text="labels" term_id="label" >}}, which is often a viable workaround).
- Creation or management of "Policy requests" that are fulfilled by a third party.
- Default policies which are applied to all namespaces or pods (there are some third party Kubernetes distributions and projects which can do this).
- Advanced policy querying and reachability tooling.
From 891d8edb26eca29e494de3d9947051d36d61c933 Mon Sep 17 00:00:00 2001
From: bryan
Date: Fri, 30 Oct 2020 23:53:59 +0800
Subject: [PATCH 29/50] resolve title errors
---
content/zh/docs/tasks/administer-cluster/safely-drain-node.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/zh/docs/tasks/administer-cluster/safely-drain-node.md b/content/zh/docs/tasks/administer-cluster/safely-drain-node.md
index dafde95115b0e..77bdc299fac47 100644
--- a/content/zh/docs/tasks/administer-cluster/safely-drain-node.md
+++ b/content/zh/docs/tasks/administer-cluster/safely-drain-node.md
@@ -1,5 +1,5 @@
---
-title: 确保 PodDisruptionBudget 的前提下安全地清空一个{{< glossary_tooltip text="节点" term_id="node" >}}
+title: 确保 PodDisruptionBudget 的前提下安全地清空一个节点
content_type: task
---
-本页展示了如何在确保 PodDisruptionBudget 的前提下,安全地清空一个节点。
+本页展示了如何在确保 PodDisruptionBudget 的前提下,安全地清空一个{{< glossary_tooltip text="节点" term_id="node" >}}。
## {{% heading "prerequisites" %}}
From b05f84778bf49a5360458e44bee7dda411d4cea1 Mon Sep 17 00:00:00 2001
From: Pankaj Kumar
Date: Sat, 31 Oct 2020 11:46:33 +0530
Subject: [PATCH 30/50] Updated _index.md to remove duplicate lines
Removed duplicate lines in the documentation.
---
content/en/docs/reference/using-api/_index.md | 7 -------
1 file changed, 7 deletions(-)
diff --git a/content/en/docs/reference/using-api/_index.md b/content/en/docs/reference/using-api/_index.md
index 8dc81fe549673..df9e00758ef75 100644
--- a/content/en/docs/reference/using-api/_index.md
+++ b/content/en/docs/reference/using-api/_index.md
@@ -31,13 +31,6 @@ For general background information, read
describes how clients can authenticate to the Kubernetes API server, and how their
requests are authorized.
-
-
-The REST API is the fundamental fabric of Kubernetes. All operations and
-communications between components, and external user commands are REST API
-calls that the API Server handles. Consequently, everything in the Kubernetes
-platform is treated as an API object and has a corresponding entry in the
-API.
## API versioning
From 3f7f7f7b8ecdc69b83d88a729efaea18f8154c84 Mon Sep 17 00:00:00 2001
From: inductor
Date: Sun, 18 Oct 2020 09:53:34 +0900
Subject: [PATCH 31/50] rebase master
---
Makefile | 2 +-
README.md | 5 -----
2 files changed, 1 insertion(+), 6 deletions(-)
diff --git a/Makefile b/Makefile
index e8576459a4ff8..0fa3608840c42 100644
--- a/Makefile
+++ b/Makefile
@@ -65,7 +65,7 @@ container-image:
--build-arg HUGO_VERSION=$(HUGO_VERSION)
container-build: module-check
- $(CONTAINER_RUN) --read-only $(CONTAINER_IMAGE) hugo --minify
+ $(CONTAINER_RUN) --read-only $(CONTAINER_IMAGE) sh -c "npm ci && hugo --minify"
container-serve: module-check
$(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir
diff --git a/README.md b/README.md
index 5966a17b369ad..b03847c302968 100644
--- a/README.md
+++ b/README.md
@@ -12,8 +12,6 @@ You can run the website locally using Hugo (Extended version), or you can run it
To use this repository, you need the following installed locally:
-- [yarn](https://yarnpkg.com/)
-- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/).
@@ -28,9 +26,6 @@ cd website
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:
```
-# install dependencies
-yarn
-
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
From 28209e0b0032b30f2a29990e2bb8cc1ba41a72e6 Mon Sep 17 00:00:00 2001
From: inductor
Date: Fri, 7 Aug 2020 01:54:08 +0900
Subject: [PATCH 32/50] apply review
---
README.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/README.md b/README.md
index b03847c302968..c19aae489c6a9 100644
--- a/README.md
+++ b/README.md
@@ -12,6 +12,7 @@ You can run the website locally using Hugo (Extended version), or you can run it
To use this repository, you need the following installed locally:
+- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/).
@@ -48,6 +49,8 @@ Make sure to install the Hugo extended version specified by the `HUGO_VERSION` e
To build and test the site locally, run:
```bash
+# install dependencies
+npm ci
make serve
```
From 59d643b8a387dfc0ee60346fccebc6a09466b947 Mon Sep 17 00:00:00 2001
From: inductor
Date: Sun, 1 Nov 2020 11:01:10 +0900
Subject: [PATCH 33/50] fix build
---
Dockerfile | 2 +-
Makefile | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/Dockerfile b/Dockerfile
index 4ceae22959257..e168c3e6dc3ae 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -16,7 +16,7 @@ RUN apk add --no-cache \
build-base \
libc6-compat \
npm && \
- npm install -G autoprefixer postcss-cli
+ npm install -D autoprefixer postcss-cli
ARG HUGO_VERSION
diff --git a/Makefile b/Makefile
index 0fa3608840c42..58babc3627eb8 100644
--- a/Makefile
+++ b/Makefile
@@ -65,7 +65,7 @@ container-image:
--build-arg HUGO_VERSION=$(HUGO_VERSION)
container-build: module-check
- $(CONTAINER_RUN) --read-only $(CONTAINER_IMAGE) sh -c "npm ci && hugo --minify"
+ $(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 $(CONTAINER_IMAGE) sh -c "npm ci && hugo --minify"
container-serve: module-check
$(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir
From df2285edf1d4e54c9745c5d19396a67fe3c5f2e9 Mon Sep 17 00:00:00 2001
From: "M. Habib Rosyad"
Date: Sun, 1 Nov 2020 20:15:40 +0700
Subject: [PATCH 34/50] Improve maintainability of case studies styling for KO
---
content/ko/case-studies/adform/index.html | 130 +++++------
content/ko/case-studies/amadeus/index.html | 175 +++++++--------
content/ko/case-studies/ancestry/index.html | 171 +++++++--------
content/ko/case-studies/blablacar/index.html | 161 +++++++-------
content/ko/case-studies/blackrock/index.html | 171 ++++++---------
content/ko/case-studies/box/index.html | 194 ++++++++---------
content/ko/case-studies/buffer/index.html | 173 +++++++--------
.../ko/case-studies/capital-one/index.html | 129 ++++-------
content/ko/case-studies/crowdfire/index.html | 164 +++++++-------
content/ko/case-studies/golfnow/index.html | 202 +++++++-----------
content/ko/case-studies/haufegroup/index.html | 173 +++++++--------
content/ko/case-studies/huawei/index.html | 152 ++++++-------
content/ko/case-studies/ibm/index.html | 156 ++++++--------
content/ko/case-studies/ing/index.html | 155 ++++++--------
content/ko/case-studies/naic/index.html | 164 ++++++--------
.../ko/case-studies/newyorktimes/index.html | 168 ++++++---------
content/ko/case-studies/nordstrom/index.html | 173 ++++++---------
.../northwestern-mutual/index.html | 122 ++++-------
content/ko/case-studies/ocado/index.html | 148 ++++++-------
content/ko/case-studies/openAI/index.html | 128 +++++------
content/ko/case-studies/peardeck/index.html | 176 +++++++--------
content/ko/case-studies/pearson/index.html | 154 +++++++------
content/ko/case-studies/pinterest/index.html | 152 ++++++-------
content/ko/case-studies/slingtv/index.html | 123 ++++-------
.../ko/case-studies/squarespace/index.html | 146 +++++--------
content/ko/case-studies/wikimedia/index.html | 152 ++++++-------
content/ko/case-studies/wink/index.html | 184 +++++++---------
content/ko/case-studies/workiva/index.html | 168 +++++++--------
content/ko/case-studies/ygrene/index.html | 123 ++++-------
content/ko/case-studies/zalando/index.html | 162 +++++++-------
30 files changed, 1996 insertions(+), 2753 deletions(-)
diff --git a/content/ko/case-studies/adform/index.html b/content/ko/case-studies/adform/index.html
index be35a2d8375ee..1de43d0637e56 100644
--- a/content/ko/case-studies/adform/index.html
+++ b/content/ko/case-studies/adform/index.html
@@ -3,116 +3,84 @@
linkTitle: Adform
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
logo: adform_featured_logo.png
draft: false
featured: true
weight: 47
quote: >
Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier.
----
-
-
-
CASE STUDY:
Improving Performance and Morale with Cloud Native
-
-
-
-
-
-
- Company AdForm Location Copenhagen, Denmark Industry Adtech
-
-
-
-
-
-
Challenge
- Adform’s mission is to provide a secure and transparent full stack of advertising technology to enable digital ads across devices. The company has a large infrastructure: OpenStack-based private clouds running on 1,100 physical servers in 7 data centers around the world, 3 of which were opened in the past year. With the company’s growth, the infrastructure team felt that "our private cloud was not really flexible enough," says IT System Engineer Edgaras Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software takes time. We were really struggling with our releases, and we didn’t have self-healing infrastructure."
-
-
-
+new_case_study_styles: true
+heading_background: /images/case-studies/adform/banner1.jpg
+heading_title_logo: /images/adform_logo.png
+subheading: >
+ Improving Performance and Morale with Cloud Native
+case_study_details:
+ - Company: AdForm
+ - Location: Copenhagen, Denmark
+ - Industry: Adtech
+---
-
Solution
- The team, which had already been using Prometheus for monitoring, embraced Kubernetes and cloud native practices in 2017. "To start our Kubernetes journey, we had to adapt all our software, so we had to choose newer frameworks," says Apšega. "We also adopted the microservices way, so observability is much better because you can inspect the bug or the services separately."
+
Challenge
+
Adform's mission is to provide a secure and transparent full stack of advertising technology to enable digital ads across devices. The company has a large infrastructure: OpenStack-based private clouds running on 1,100 physical servers in 7 data centers around the world, 3 of which were opened in the past year. With the company's growth, the infrastructure team felt that "our private cloud was not really flexible enough," says IT System Engineer Edgaras Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software takes time. We were really struggling with our releases, and we didn't have self-healing infrastructure."
-
+
Solution
-
+
The team, which had already been using Prometheus for monitoring, embraced Kubernetes and cloud native practices in 2017. "To start our Kubernetes journey, we had to adapt all our software, so we had to choose newer frameworks," says Apšega. "We also adopted the microservices way, so observability is much better because you can inspect the bug or the services separately."
Impact
- "Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. The release process went from several hours to several minutes. Autoscaling has been at least 6 times faster than the semi-manual VM bootstrapping and application deployment required before. The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching 2-3 times more efficiency over virtual machines. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes," says Apšega. Prometheus has also had a positive impact: "It provides high availability for metrics and alerting. We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on your systems."
-
-
-
-
-
-
-
-"Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier."
-— Edgaras Apšega, IT Systems Engineer, Adform
+
"Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. The release process went from several hours to several minutes. Autoscaling has been at least 6 times faster than the semi-manual VM bootstrapping and application deployment required before. The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching 2-3 times more efficiency over virtual machines. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes," says Apšega. Prometheus has also had a positive impact: "It provides high availability for metrics and alerting. We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on your systems."
-
-
+{{< case-studies/quote author="Edgaras Apšega, IT Systems Engineer, Adform" >}}
+"Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier."
+{{< /case-studies/quote >}}
+{{< case-studies/lead >}}
+Adform made headlines last year when it detected the HyphBot ad fraud network that was costing some businesses hundreds of thousands of dollars a day.
+{{< /case-studies/lead >}}
-
-
-
-Adform made headlines last year when it detected the HyphBot ad fraud network that was costing some businesses hundreds of thousands of dollars a day.
-With its mission to provide a secure and transparent full stack of advertising technology to enable an open internet, Adform published a white paper revealing what it did—and others could too—to limit customers’ exposure to the scam.
-In that same spirit, Adform is sharing its cloud native journey. "When you see that everyone shares their best practices, it inspires you to contribute back to the project," says IT Systems Engineer Edgaras Apšega.
-The company has a large infrastructure: OpenStack-based private clouds running on 1,100 physical servers in their own seven data centers around the world, three of which were opened in the past year. With the company’s growth, the infrastructure team felt that "our private cloud was not really flexible enough," says Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software really takes time. We were really struggling with our releases, and we didn’t have self-healing infrastructure."
+
+With its mission to provide a secure and transparent full stack of advertising technology to enable an open internet, Adform published a white paper revealing what it did—and others could too—to limit customers' exposure to the scam.
+
+In that same spirit, Adform is sharing its cloud native journey. "When you see that everyone shares their best practices, it inspires you to contribute back to the project," says IT Systems Engineer Edgaras Apšega.
-
-
-
-
- "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral. And we can see that a community really gathers around it. Everyone shares their experiences, their knowledge, and the fact that it’s open source, you can contribute."
-— Edgaras Apšega, IT Systems Engineer, Adform
-
-
-
-
+
+The company has a large infrastructure: OpenStack-based private clouds running on 1,100 physical servers in their own seven data centers around the world, three of which were opened in the past year. With the company's growth, the infrastructure team felt that "our private cloud was not really flexible enough," says Apšega. "The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software really takes time. We were really struggling with our releases, and we didn't have self-healing infrastructure."
-The team, which had already been using Prometheus for monitoring, embraced Kubernetes, microservices, and cloud native practices. "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral," says Apšega. "And we can see that a community really gathers around it."
-A proof of concept project was started, with a Kubernetes cluster running on bare metal in the data center. When developers saw how quickly containers could be spun up compared to the virtual machine process, "they wanted to ship their containers in production right away, and we were still doing proof of concept," says IT Systems Engineer Andrius Cibulskis.
-Of course, a lot of work still had to be done. "First of all, we had to learn Kubernetes, see all of the moving parts, how they glue together," says Apšega. "Second of all, the whole CI/CD part had to be redone, and our DevOps team had to invest more man hours to implement it. And third is that developers had to rewrite the code, and they’re still doing it."
-
-The first production cluster was launched in the spring of 2018, and is now up to 20 physical machines dedicated for pods throughout three data centers, with plans for separate clusters in the other four data centers. The user-facing Adform application platform, data distribution platform, and back ends are now all running on Kubernetes. "Many APIs for critical applications are being developed for Kubernetes," says Apšega. "Teams are rewriting their applications to .NET core, because it supports containers, and preparing to move to Kubernetes. And new applications, by default, go in containers."
+{{< case-studies/quote
+ image="/images/case-studies/adform/banner3.jpg"
+ author="Edgaras Apšega, IT Systems Engineer, Adform"
+>}}
+"The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral. And we can see that a community really gathers around it. Everyone shares their experiences, their knowledge, and the fact that it's open source, you can contribute."
+{{< /case-studies/quote >}}
+
+The team, which had already been using Prometheus for monitoring, embraced Kubernetes, microservices, and cloud native practices. "The fact that Cloud Native Computing Foundation incubated Kubernetes was a really big point for us because it was vendor neutral," says Apšega. "And we can see that a community really gathers around it."
-
-
-
-
-"Releases are really nice for them, because they just push their code to Git and that’s it. They don’t have to worry about their virtual machines anymore."
-— Andrius Cibulskis, IT Systems Engineer, Adform
-
-
+
+A proof of concept project was started, with a Kubernetes cluster running on bare metal in the data center. When developers saw how quickly containers could be spun up compared to the virtual machine process, "they wanted to ship their containers in production right away, and we were still doing proof of concept," says IT Systems Engineer Andrius Cibulskis.
-
-
-This big push has been driven by the real impact that these new practices have had. "Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes." The release process went from several hours to several minutes. Autoscaling is at least six times faster than the semi-manual VM bootstrapping and application deployment required before.
-The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching two to three times more efficiency over virtual machines.
-Prometheus has also had a positive impact: "It provides high availability for metrics and alerting," says Apšega. "We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on our systems."
+
+Of course, a lot of work still had to be done. "First of all, we had to learn Kubernetes, see all of the moving parts, how they glue together," says Apšega. "Second of all, the whole CI/CD part had to be redone, and our DevOps team had to invest more man hours to implement it. And third is that developers had to rewrite the code, and they're still doing it."
+
+The first production cluster was launched in the spring of 2018, and is now up to 20 physical machines dedicated for pods throughout three data centers, with plans for separate clusters in the other four data centers. The user-facing Adform application platform, data distribution platform, and back ends are now all running on Kubernetes. "Many APIs for critical applications are being developed for Kubernetes," says Apšega. "Teams are rewriting their applications to .NET core, because it supports containers, and preparing to move to Kubernetes. And new applications, by default, go in containers."
+{{< case-studies/quote
+ image="/images/case-studies/adform/banner4.jpg"
+ author="Andrius Cibulskis, IT Systems Engineer, Adform"
+>}}
+"Releases are really nice for them, because they just push their code to Git and that's it. They don't have to worry about their virtual machines anymore."
+{{< /case-studies/quote >}}
-
+
+This big push has been driven by the real impact that these new practices have had. "Kubernetes helps our business a lot because our features are coming to market faster," says Apšega. "The deployments are very easy because developers just push the code and it automatically appears on Kubernetes." The release process went from several hours to several minutes. Autoscaling is at least six times faster than the semi-manual VM bootstrapping and application deployment required before.
-
-
- "I think that our company just started our cloud native journey. It seems like a huge road ahead, but we’re really happy that we joined it."
-— Edgaras Apšega, IT Systems Engineer, Adform
-
-
+
+The team estimates that the company has experienced cost savings of 4-5x due to less hardware and fewer man hours needed to set up the hardware and virtual machines, metrics, and logging. Utilization of the hardware resources has been reduced as well, with containers notching two to three times more efficiency over virtual machines.
-
-All of these benefits have trickled down to individual team members, whose working lives have been changed for the better. "They used to have to get up at night to re-start some services, and now Kubernetes handles all of that," says Apšega. Adds Cibulskis: "Releases are really nice for them, because they just push their code to Git and that’s it. They don’t have to worry about their virtual machines anymore." Even the security teams have been impacted. "Security teams are always not happy," says Apšega, "and now they’re happy because they can easily inspect the containers."
-The company plans to remain in the data centers for now, "mostly because we want to keep all the data, to not share it in any way," says Cibulskis, "and it’s cheaper at our scale." But, Apšega says, the possibility of using a hybrid cloud for computing is intriguing: "One of the projects we’re interested in is the Virtual Kubelet that lets you spin up the working nodes on different clouds to do some computing."
-
-Apšega, Cibulskis and their colleagues are keeping tabs on how the cloud native ecosystem develops, and are excited to contribute where they can. "I think that our company just started our cloud native journey," says Apšega. "It seems like a huge road ahead, but we’re really happy that we joined it."
+
+Prometheus has also had a positive impact: "It provides high availability for metrics and alerting," says Apšega. "We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on our systems."
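(Editor's aside, not part of the patch: as a concrete illustration of the hardware-to-application alerting Apšega describes, a minimal Prometheus alerting rule might look like the sketch below; the job name and threshold are hypothetical, not Adform's actual configuration.)

```yaml
groups:
- name: infrastructure
  rules:
  - alert: NodeDown                # hardware level: a machine stopped reporting
    expr: up{job="node-exporter"} == 0
    for: 5m                        # only fire after five minutes of silence
    labels:
      severity: critical
    annotations:
      summary: A node-exporter target has been unreachable for 5 minutes
```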
+{{< case-studies/quote author="Edgaras Apšega, IT Systems Engineer, Adform" >}}
+"I think that our company just started our cloud native journey. It seems like a huge road ahead, but we're really happy that we joined it."
+{{< /case-studies/quote >}}
+
+All of these benefits have trickled down to individual team members, whose working lives have been changed for the better. "They used to have to get up at night to re-start some services, and now Kubernetes handles all of that," says Apšega. Adds Cibulskis: "Releases are really nice for them, because they just push their code to Git and that's it. They don't have to worry about their virtual machines anymore." Even the security teams have been impacted. "Security teams are always not happy," says Apšega, "and now they're happy because they can easily inspect the containers."
-
+
+The company plans to remain in the data centers for now, "mostly because we want to keep all the data, to not share it in any way," says Cibulskis, "and it's cheaper at our scale." But, Apšega says, the possibility of using a hybrid cloud for computing is intriguing: "One of the projects we're interested in is the Virtual Kubelet that lets you spin up the working nodes on different clouds to do some computing."
-
+
+Apšega, Cibulskis and their colleagues are keeping tabs on how the cloud native ecosystem develops, and are excited to contribute where they can. "I think that our company just started our cloud native journey," says Apšega. "It seems like a huge road ahead, but we're really happy that we joined it."
diff --git a/content/ko/case-studies/amadeus/index.html b/content/ko/case-studies/amadeus/index.html
index 8b6294d4f9afa..bd647003ecc50 100644
--- a/content/ko/case-studies/amadeus/index.html
+++ b/content/ko/case-studies/amadeus/index.html
@@ -1,105 +1,84 @@
---
title: Amadeus Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_amadeus.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/amadeus/banner1.jpg
+heading_title_logo: /images/amadeus_logo.png
+subheading: >
+ Another Technical Evolution for a 30-Year-Old Company
+case_study_details:
+ - Company: Amadeus IT Group
+ - Location: Madrid, Spain
+ - Industry: Travel Technology
---
-
-
-CASE STUDY:
-Another Technical Evolution for a 30-Year-Old Company
-
-
-
- Company Amadeus IT Group Location Madrid, Spain Industry Travel Technology
-
-
-
-
-
-
-Challenge
- In the past few years, Amadeus, which provides IT solutions to the travel industry around the world, found itself in need of a new platform for the 5,000 services supported by its service-oriented architecture. The 30-year-old company operates its own data center in Germany, and there were growing demands internally and externally for solutions that needed to be geographically dispersed. And more generally, "we had objectives of being even more highly available," says Eric Mountain, Senior Expert, Distributed Systems at Amadeus. Among the company’s goals: to increase automation in managing its infrastructure, optimize the distribution of workloads, use data center resources more efficiently, and adopt new technologies more easily.
-
-
-
-Solution
- Mountain has been overseeing the company’s migration to Kubernetes, using OpenShift Container Platform, Red Hat’s enterprise container platform.
-
+
+Challenge
+
+
+In the past few years, Amadeus, which provides IT solutions to the travel industry around the world, found itself in need of a new platform for the 5,000 services supported by its service-oriented architecture. The 30-year-old company operates its own data center in Germany, and there were growing demands internally and externally for solutions that needed to be geographically dispersed. And more generally, "we had objectives of being even more highly available," says Eric Mountain, Senior Expert, Distributed Systems at Amadeus. Among the company's goals: to increase automation in managing its infrastructure, optimize the distribution of workloads, use data center resources more efficiently, and adopt new technologies more easily.
+
+
+Solution
+
+
+Mountain has been overseeing the company's migration to Kubernetes, using OpenShift Container Platform, Red Hat's enterprise container platform.
+
+Impact
- One of the first projects the team deployed in Kubernetes was the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume. "It’s now handling in production several thousand transactions per second, and it’s deployed in multiple data centers throughout the world," says Mountain. "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before."
-
-
-
-
-
-
- "We want multi-data center capabilities, and we want them for our mainstream system as well. We didn’t think that we could achieve them with our existing system. We need new automation, things that Kubernetes and OpenShift bring." - Eric Mountain, Senior Expert, Distributed Systems at Amadeus IT Group
-
-
-
-
-
-
-In his two decades at Amadeus, Eric Mountain has been the migrations guy.
- Back in the day, he worked on the company’s move from Unix to Linux, and now he’s overseeing the journey to cloud native. "Technology just keeps changing, and we embrace it," he says. "We are celebrating our 30 years this year, and we continue evolving and innovating to stay cost-efficient and enhance everyone’s travel experience, without interrupting workflows for the customers who depend on our technology."
- That was the challenge that Amadeus—which provides IT solutions to the travel industry around the world, from flight searches to hotel bookings to customer feedback—faced in 2014. The technology team realized it was in need of a new platform for the 5,000 services supported by its service-oriented architecture.
- The tipping point occurred when they began receiving many requests, internally and externally, for solutions that needed to be geographically outside the company’s main data center in Germany. "Some requests were for running our applications on customer premises," Mountain says. "There were also new services we were looking to offer that required response time to the order of a few hundred milliseconds, which we couldn’t achieve with transatlantic traffic. Or at least, not without eating into a considerable portion of the time available to our applications for them to process individual queries."
- More generally, the company was interested in leveling up on high availability, increasing automation in managing infrastructure, optimizing the distribution of workloads and using data center resources more efficiently. "We have thousands and thousands of servers," says Mountain. "These servers are assigned roles, so even if the setup is highly automated, the machine still has a given role. It’s wasteful on many levels. For instance, an application doesn’t necessarily use the machine very optimally. Virtualization can help a bit, but it’s not a silver bullet. If that machine breaks, you still want to repair it because it has that role and you can’t simply say, ‘Well, I’ll bring in another machine and give it that role.’ It’s not fast. It’s not efficient. So we wanted the next level of automation."
-
-
-
-
-
- "We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
-
-
-
-
-
- While mainly a C++ and Java shop, Amadeus also wanted to be able to adopt new technologies more easily. Some of its developers had started using languages like Python and databases like Couchbase, but Mountain wanted still more options, he says, "in order to better adapt our technical solutions to the products we offer, and open up entirely new possibilities to our developers." Working with recent technologies and cool new things would also make it easier to attract new talent.
-
- All of those needs led Mountain and his team on a search for a new platform. "We did a set of studies and proofs of concept over a fairly short period, and we considered many technologies," he says. "In the end, we were left with three choices: build everything on premise, build on top of Kubernetes whatever happens to be missing from our point of view, or go with OpenShift and build whatever remains there."
-
- The team decided against building everything themselves—though they’d done that sort of thing in the past—because "people were already inventing things that looked good," says Mountain.
-
- Ultimately, they went with OpenShift Container Platform, Red Hat’s Kubernetes-based enterprise offering, instead of building on top of Kubernetes because "there was a lot of synergy between what we wanted and the way Red Hat was anticipating going with OpenShift," says Mountain. "They were clearly developing Kubernetes, and developing certain things ahead of time in OpenShift, which were important to us, such as more security."
-
- The hope was that those particular features would eventually be built into Kubernetes, and, in the case of security, Mountain feels that has happened. "We realize that there’s always a certain amount of automation that we will probably have to develop ourselves to compensate for certain gaps," says Mountain. "The less we do that, the better for us. We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
-
-
-
-
-
- "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before."
-
-
-
-
-
- The first project the team tackled was one that they knew had to run outside the data center in Germany. Because of the project’s needs, "We couldn’t rely only on the built-in Kubernetes service discovery; we had to layer on top of that an extra service discovery level that allows us to load balance at the operation level within our system," says Mountain. They also built a stream dedicated to monitoring, which at the time wasn’t offered in the Kubernetes or OpenShift ecosystem. Now that Prometheus and other products are available, Mountain says the company will likely re-evaluate their monitoring system: "We obviously always like to leverage what Kubernetes and OpenShift can offer."
-
- The second project ended up going into production first: the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume and was deployed in public cloud. Launched in early 2016, it is "now handling in production several thousand transactions per second, and it’s deployed in multiple data centers throughout the world," says Mountain. "It’s not a migration of an existing workload; it’s a whole new workload that we couldn’t have done otherwise. [This platform] gives us access to market opportunities that we didn’t have before."
-
- Having been through this kind of technical evolution more than once, Mountain has advice on how to handle the cultural changes. "That’s one aspect that we can tackle progressively," he says. "We have to go on supplying our customers with new features on our pre-existing products, and we have to keep existing products working. So we can’t simply do absolutely everything from one day to the next. And we mustn’t sell it that way."
-
- The first order of business, then, is to pick one or two applications to demonstrate that the technology works. Rather than choosing a high-impact, high-risk project, Mountain’s team selected a smaller application that was representative of all the company’s other applications in its complexity: "We just made sure we picked something that’s complex enough, and we showed that it can be done."
-
-
-
-
-
- "The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don’t think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
-
-
-
-
-
- Next comes convincing people. "On the operations side and on the R&D side, there will be people who say quite rightly, ‘There is a system, and it works, so why change?’" Mountain says. "The only thing that really convinces people is showing them the value." For Amadeus, people realized that the Airline Cloud Availability product could not have been made available on the public cloud with the company’s existing system. The question then became, he says, "Do we go into a full-blown migration? Is that something that is justified?"
-
- "The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don’t think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
-
- So how do you get everyone on board? "Make sure you have good links between your R&D and your operations," he says. "Also make sure you’re going to talk early on to the investors and stakeholders. Figure out what it is that they will be expecting from you, that will convince them or not, that this is the right way for your company."
-
- His other advice is simply to make the technology available for people to try it. "Kubernetes and OpenShift Origin are open source software, so there’s no complicated license key for the evaluation period and you’re not limited to 30 days," he points out. "Just go and get it running." Along with that, he adds, "You’ve got to be prepared to rethink how you do things. Of course making your applications as cloud native as possible is how you’ll reap the most benefits: 12 factors, CI/CD, which is continuous integration, continuous delivery, but also continuous deployment."
-
- And while they explore that aspect of the technology, Mountain and his team will likely be practicing what he preaches to others taking the cloud native journey. "See what happens when you break it, because it’s important to understand the limits of the system," he says. Or rather, he notes, the advantages of it. "Breaking things on Kube is actually one of the nice things about it—it recovers. It’s the only real way that you’ll see that you might be able to do things."
-
-
+
+
+One of the first projects the team deployed in Kubernetes was the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume. "It's now handling in production several thousand transactions per second, and it's deployed in multiple data centers throughout the world," says Mountain. "It's not a migration of an existing workload; it's a whole new workload that we couldn't have done otherwise. [This platform] gives us access to market opportunities that we didn't have before."
+
+{{< case-studies/quote author="Eric Mountain, Senior Expert, Distributed Systems at Amadeus IT Group" >}}
+"We want multi-data center capabilities, and we want them for our mainstream system as well. We didn't think that we could achieve them with our existing system. We need new automation, things that Kubernetes and OpenShift bring."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+In his two decades at Amadeus, Eric Mountain has been the migrations guy.
+{{< /case-studies/lead >}}
+
+
+Back in the day, he worked on the company's move from Unix to Linux, and now he's overseeing the journey to cloud native. "Technology just keeps changing, and we embrace it," he says. "We are celebrating our 30 years this year, and we continue evolving and innovating to stay cost-efficient and enhance everyone's travel experience, without interrupting workflows for the customers who depend on our technology."
+
+
+That was the challenge that Amadeus—which provides IT solutions to the travel industry around the world, from flight searches to hotel bookings to customer feedback—faced in 2014. The technology team realized it was in need of a new platform for the 5,000 services supported by its service-oriented architecture.
+
+
+The tipping point occurred when they began receiving many requests, internally and externally, for solutions that needed to be geographically outside the company's main data center in Germany. "Some requests were for running our applications on customer premises," Mountain says. "There were also new services we were looking to offer that required response time to the order of a few hundred milliseconds, which we couldn't achieve with transatlantic traffic. Or at least, not without eating into a considerable portion of the time available to our applications for them to process individual queries."
+
+
+More generally, the company was interested in leveling up on high availability, increasing automation in managing infrastructure, optimizing the distribution of workloads and using data center resources more efficiently. "We have thousands and thousands of servers," says Mountain. "These servers are assigned roles, so even if the setup is highly automated, the machine still has a given role. It's wasteful on many levels. For instance, an application doesn't necessarily use the machine very optimally. Virtualization can help a bit, but it's not a silver bullet. If that machine breaks, you still want to repair it because it has that role and you can't simply say, 'Well, I'll bring in another machine and give it that role.' It's not fast. It's not efficient. So we wanted the next level of automation."
+
+{{< case-studies/quote image="/images/case-studies/amadeus/banner3.jpg" >}}
+"We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
+{{< /case-studies/quote >}}
+
+
+While mainly a C++ and Java shop, Amadeus also wanted to be able to adopt new technologies more easily. Some of its developers had started using languages like Python and databases like Couchbase, but Mountain wanted still more options, he says, "in order to better adapt our technical solutions to the products we offer, and open up entirely new possibilities to our developers." Working with recent technologies and cool new things would also make it easier to attract new talent.
+
+
+All of those needs led Mountain and his team on a search for a new platform. "We did a set of studies and proofs of concept over a fairly short period, and we considered many technologies," he says. "In the end, we were left with three choices: build everything on premise, build on top of Kubernetes whatever happens to be missing from our point of view, or go with OpenShift and build whatever remains there."
+
+
+The team decided against building everything themselves—though they'd done that sort of thing in the past—because "people were already inventing things that looked good," says Mountain.
+
+
+Ultimately, they went with OpenShift Container Platform, Red Hat's Kubernetes-based enterprise offering, instead of building on top of Kubernetes because "there was a lot of synergy between what we wanted and the way Red Hat was anticipating going with OpenShift," says Mountain. "They were clearly developing Kubernetes, and developing certain things ahead of time in OpenShift, which were important to us, such as more security."
+
+
+The hope was that those particular features would eventually be built into Kubernetes, and, in the case of security, Mountain feels that has happened. "We realize that there's always a certain amount of automation that we will probably have to develop ourselves to compensate for certain gaps," says Mountain. "The less we do that, the better for us. We hope that if we build on what others have built, what we do might actually be upstream-able. As Kubernetes and OpenShift progress, we see that we are indeed able to remove some of the additional layers we implemented to compensate for gaps we perceived earlier."
+
+{{< case-studies/quote image="/images/case-studies/amadeus/banner4.jpg" >}}
+"It's not a migration of an existing workload; it's a whole new workload that we couldn't have done otherwise. [This platform] gives us access to market opportunities that we didn't have before."
+{{< /case-studies/quote >}}
+
+
+The first project the team tackled was one that they knew had to run outside the data center in Germany. Because of the project's needs, "We couldn't rely only on the built-in Kubernetes service discovery; we had to layer on top of that an extra service discovery level that allows us to load balance at the operation level within our system," says Mountain. They also built a stream dedicated to monitoring, which at the time wasn't offered in the Kubernetes or OpenShift ecosystem. Now that Prometheus and other products are available, Mountain says the company will likely re-evaluate their monitoring system: "We obviously always like to leverage what Kubernetes and OpenShift can offer."
+
+
+
+The second project ended up going into production first: the Amadeus Airline Cloud Availability solution, which helps manage ever-increasing flight-search volume and was deployed in public cloud. Launched in early 2016, it is "now handling in production several thousand transactions per second, and it's deployed in multiple data centers throughout the world," says Mountain. "It's not a migration of an existing workload; it's a whole new workload that we couldn't have done otherwise. [This platform] gives us access to market opportunities that we didn't have before."
+
+
+Having been through this kind of technical evolution more than once, Mountain has advice on how to handle the cultural changes. "That's one aspect that we can tackle progressively," he says. "We have to go on supplying our customers with new features on our pre-existing products, and we have to keep existing products working. So we can't simply do absolutely everything from one day to the next. And we mustn't sell it that way."
+
+
+The first order of business, then, is to pick one or two applications to demonstrate that the technology works. Rather than choosing a high-impact, high-risk project, Mountain's team selected a smaller application that was representative of all the company's other applications in its complexity: "We just made sure we picked something that's complex enough, and we showed that it can be done."
+
+{{< case-studies/quote >}}
+"The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don't think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
+{{< /case-studies/quote >}}
+
+
+Next comes convincing people. "On the operations side and on the R&D side, there will be people who say quite rightly, 'There is a system, and it works, so why change?'" Mountain says. "The only thing that really convinces people is showing them the value." For Amadeus, people realized that the Airline Cloud Availability product could not have been made available on the public cloud with the company's existing system. The question then became, he says, "Do we go into a full-blown migration? Is that something that is justified?"
+
+
"The bottom line is we want these multi-data center capabilities, and we want them as well for our mainstream system," he says. "And we don't think that we can implement them with our previous system. We need the new automation, homogeneity, and scale that Kubernetes and OpenShift bring."
+
+
+So how do you get everyone on board? "Make sure you have good links between your R&D and your operations," he says. "Also make sure you're going to talk early on to the investors and stakeholders. Figure out what it is that they will be expecting from you, that will convince them or not, that this is the right way for your company."
+
+
+His other advice is simply to make the technology available for people to try it. "Kubernetes and OpenShift Origin are open source software, so there's no complicated license key for the evaluation period and you're not limited to 30 days," he points out. "Just go and get it running." Along with that, he adds, "You've got to be prepared to rethink how you do things. Of course making your applications as cloud native as possible is how you'll reap the most benefits: 12 factors, CI/CD, which is continuous integration, continuous delivery, but also continuous deployment."
+
+
+And while they explore that aspect of the technology, Mountain and his team will likely be practicing what he preaches to others taking the cloud native journey. "See what happens when you break it, because it's important to understand the limits of the system," he says. Or rather, he notes, the advantages of it. "Breaking things on Kube is actually one of the nice things about it—it recovers. It's the only real way that you'll see that you might be able to do things."
diff --git a/content/ko/case-studies/ancestry/index.html b/content/ko/case-studies/ancestry/index.html
index a992a284ac86b..8cab3c19c6a9e 100644
--- a/content/ko/case-studies/ancestry/index.html
+++ b/content/ko/case-studies/ancestry/index.html
@@ -1,117 +1,92 @@
---
title: Ancestry Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_ancestry.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/ancestry/banner1.jpg
+heading_title_logo: /images/ancestry_logo.png
+subheading: >
+ Digging Into the Past With New Technology
+case_study_details:
+ - Company: Ancestry
+ - Location: Lehi, Utah
+ - Industry: Internet Company, Online Services
---
-
-
-CASE STUDY:
-Digging Into the Past With New Technology
+
+Challenge
-
+
+Ancestry, the global leader in family history and consumer genomics, uses sophisticated engineering and technology to help everyone, everywhere discover the story of what led to them. The company has spent more than 30 years innovating and building products and technologies that, at their core, result in real and emotional human responses. Ancestry currently serves more than 2.6 million paying subscribers and holds 20 billion historical records and 90 million family trees; more than four million people are in its AncestryDNA network, making it the largest consumer genomics DNA network in the world. The company's popular website, ancestry.com, was working with big data long before the term was popularized. The site was built on hundreds of services, technologies and a traditional deployment methodology. "It's worked well for us in the past," says Paul MacKay, software engineer and architect at Ancestry, "but had become quite cumbersome in its processing and is time-consuming. As a primarily online service, we are constantly looking for ways to accelerate to be more agile in delivering our solutions and our products."
-
- Company Ancestry Location Lehi, Utah Industry Internet Company, Online Services
-
+
+Solution
-
+
+The company is transitioning to cloud native infrastructure, using Docker containerization, Kubernetes orchestration and Prometheus for cluster monitoring.
-
-
-
+
+Impact
-
-Challenge
-Ancestry, the global leader in family history and consumer genomics, uses sophisticated engineering and technology to help everyone, everywhere discover the story of what led to them. The company has spent more than 30 years innovating and building products and technologies that at their core, result in real and emotional human responses. Ancestry currently serves more than 2.6 million paying subscribers, holds 20 billion historical records, 90 million family trees and more than four million people are in its AncestryDNA network, making it the largest consumer genomics DNA network in the world. The company's popular website, ancestry.com, has been working with big data long before the term was popularized. The site was built on hundreds of services, technologies and a traditional deployment methodology. "It's worked well for us in the past," says Paul MacKay, software engineer and architect at Ancestry, "but had become quite cumbersome in its processing and is time-consuming. As a primarily online service, we are constantly looking for ways to accelerate to be more agile in delivering our solutions and our products."
+
"Every single product, every decision we make at Ancestry, focuses on delighting our customers with intimate, sometimes life-changing discoveries about themselves and their families," says MacKay. "As the company continues to grow, the increased productivity gains from using Kubernetes has helped Ancestry make customer discoveries faster. With the move to Dockerization for example, instead of taking between 20 to 50 minutes to deploy a new piece of code, we can now deploy in under a minute for much of our code. We've truly experienced significant time savings in addition to the various features and benefits from cloud native and Kubernetes-type technologies."
-
+{{< case-studies/quote author="PAUL MACKAY, SOFTWARE ENGINEER AND ARCHITECT AT ANCESTRY" >}}
+"At a certain point, you have to step back if you're going to push a new technology and get key thought leaders with engineers within the organization to become your champions for new technology adoption. At training sessions, the development teams were always the ones that were saying, 'Kubernetes saved our time tremendously; it's an enabler. It really is incredible.'"
+{{< /case-studies/quote >}}
-
+{{< case-studies/lead >}}
+It started with a Shaky Leaf.
+{{< /case-studies/lead >}}
-
-
-Solution
+
+Since its introduction a decade ago, the Shaky Leaf icon has become one of Ancestry's signature features, which signals to users that there's a helpful hint you can use to find out more about your family tree.
- The company is transitioning to cloud native infrastructure, using Docker containerization, Kubernetes orchestration and Prometheus for cluster monitoring.
-
-
-Impact
- "Every single product, every decision we make at Ancestry, focuses on delighting our customers with intimate, sometimes life-changing discoveries about themselves and their families," says MacKay. "As the company continues to grow, the increased productivity gains from using Kubernetes has helped Ancestry make customer discoveries faster. With the move to Dockerization for example, instead of taking between 20 to 50 minutes to deploy a new piece of code, we can now deploy in under a minute for much of our code. We’ve truly experienced significant time savings in addition to the various features and benefits from cloud native and Kubernetes-type technologies."
-
-
-
-
-
-
- "At a certain point, you have to step back if you're going to push a new technology and get key thought leaders with engineers within the organization to become your champions for new technology adoption. At training sessions, the development teams were always the ones that were saying, 'Kubernetes saved our time tremendously; it's an enabler. It really is incredible.'"
- PAUL MACKAY, SOFTWARE ENGINEER AND ARCHITECT AT ANCESTRY
-
-
-
-
-
-
-It started with a Shaky Leaf.
-
- Since its introduction a decade ago, the Shaky Leaf icon has become one of Ancestry's signature features, which signals to users that there's a helpful hint you can use to find out more about your family tree.
- So when the company decided to begin moving its infrastructure to cloud native technology, the first service that was launched on Kubernetes, the open source platform for managing application containers across clusters of hosts, was this hint system. Think of it as Amazon's recommended products, but instead of recommending products the company recommends records, stories, or familial connections. "It was a very important part of the site," says Ancestry software engineer and architect Paul MacKay, "but also small enough for a pilot project that we knew we could handle in a very appropriate, secure way."
- And when it went live smoothly in early 2016, "our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes," MacKay adds. "The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation."
- The stability of that Shaky Leaf was a signal for MacKay and his team that their decision to embrace cloud native technologies was the right one for the company. With a private data center, Ancestry built its website (which launched in 1996) on hundreds of services and technologies and a traditional deployment methodology. "It worked well for us in the past, but the sum of the legacy systems became quite cumbersome in its processing and was time-consuming," says MacKay. "We were looking for other ways to accelerate, to be more agile in delivering our solutions and our products."
-
-
-
-
-
+
+So when the company decided to begin moving its infrastructure to cloud native technology, the first service that was launched on Kubernetes, the open source platform for managing application containers across clusters of hosts, was this hint system. Think of it as Amazon's recommended products, but instead of recommending products the company recommends records, stories, or familial connections. "It was a very important part of the site," says Ancestry software engineer and architect Paul MacKay, "but also small enough for a pilot project that we knew we could handle in a very appropriate, secure way."
+
+
+And when it went live smoothly in early 2016, "our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes," MacKay adds. "The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation."
+
+
+The stability of that Shaky Leaf was a signal for MacKay and his team that their decision to embrace cloud native technologies was the right one for the company. With a private data center, Ancestry built its website (which launched in 1996) on hundreds of services and technologies and a traditional deployment methodology. "It worked well for us in the past, but the sum of the legacy systems became quite cumbersome in its processing and was time-consuming," says MacKay. "We were looking for other ways to accelerate, to be more agile in delivering our solutions and our products."
+
+{{< case-studies/quote image="/images/case-studies/ancestry/banner3.jpg" >}}
"And when it [Kubernetes] went live smoothly in early 2016, 'our deployment time for this service literally was cut down from 50 minutes to 2 or 5 minutes,' MacKay adds. 'The development team was just thrilled because we're focused on supplying a great experience for our customers. And that means features, it means stability, it means all those things that we need for a first-in-class type operation.'"
-
-
-
-
-
- That need led them in 2015 to explore containerization. Ancestry engineers had already been using technology like Java and Python on Linux, so part of the decision was about making the infrastructure more Linux-friendly. They quickly decided that they wanted to go with Docker for containerization, "but it always comes down to the orchestration part of it to make it really work," says MacKay.
- His team looked at orchestration platforms offered by Docker Compose, Mesos and OpenStack, and even started to prototype some homegrown solutions. And then they started hearing rumblings of the imminent release of Kubernetes v1.0. "At the forefront, we were looking at the secret store, so we didn't have to manage that all ourselves, the config maps, the methodology of seamless deployment strategy," he says. "We found that how Kubernetes had done their resources, their types, their labels and just their interface was so much further advanced than the other things we had seen. It was a feature fit."
-
- Plus, MacKay says, "I just believed in the confidence that comes with the history that Google has with containerization. So we started out right on the leading edge of it. And we haven't looked back since."
- Which is not to say that adopting a new technology hasn't come with some challenges. "Change is hard," says MacKay. "Not because the technology is hard or that the technology is not good. It's just that people like to do things like they had done [before]. You have the early adopters and you have those who are coming in later. It was a learning experience on both sides."
- Figuring out the best deployment operations for Ancestry was a big part of the work it took to adopt cloud native infrastructure. "We want to make sure the process is easy and also controlled in the manner that allows us the highest degree of security that we demand and our customers demand," says MacKay. "With Kubernetes and other products, there are some good solutions, but a little bit of glue is needed to bring it into corporate processes and governances. It's like having a set of gloves that are generic, but when you really do want to grab something you have to make it so it's customized to you. That's what we had to do."
- Their best practices include allowing their developers to deploy into development stage and production, but then controlling the aspects that need governance and auditing, such as secrets. They found that having one namespace per service is useful for achieving that containment of secrets and config maps. And for their needs, having one container per pod makes it easier to manage and to have a smaller unit of deployment.
-
-
-
-
-
-
+{{< /case-studies/quote >}}
+
+
+That need led them in 2015 to explore containerization. Ancestry engineers had already been using technology like Java and Python on Linux, so part of the decision was about making the infrastructure more Linux-friendly. They quickly decided that they wanted to go with Docker for containerization, "but it always comes down to the orchestration part of it to make it really work," says MacKay.
+
+
+His team looked at orchestration platforms offered by Docker Compose, Mesos and OpenStack, and even started to prototype some homegrown solutions. And then they started hearing rumblings of the imminent release of Kubernetes v1.0. "At the forefront, we were looking at the secret store, so we didn't have to manage that all ourselves, the config maps, the methodology of seamless deployment strategy," he says. "We found that how Kubernetes had done their resources, their types, their labels and just their interface was so much further advanced than the other things we had seen. It was a feature fit."
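(Editor's aside, not part of the patch: the features MacKay lists map onto standard Kubernetes objects. A minimal sketch for a hypothetical `hints` service is below; the labels tie Pods to the Deployment, the rolling-update strategy is the "seamless deployment strategy" he mentions, and `envFrom` pulls settings from a ConfigMap and the built-in Secret store. All names and values are illustrative, not Ancestry's actual configuration.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hints
  labels:
    app: hints
spec:
  replicas: 3
  strategy:
    type: RollingUpdate            # seamless deployments: replace pods gradually
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: hints                   # labels select the Pods this Deployment manages
  template:
    metadata:
      labels:
        app: hints
    spec:
      containers:
      - name: hints
        image: registry.example.com/hints:1.0.0
        envFrom:
        - configMapRef:
            name: hints-config     # config maps hold non-secret settings
        - secretRef:
            name: hints-credentials  # the built-in secret store MacKay mentions
```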
+
+{{< case-studies/lead >}}
+Plus, MacKay says, "I just believed in the confidence that comes with the history that Google has with containerization. So we started out right on the leading edge of it. And we haven't looked back since."
+{{< /case-studies/lead >}}
+
+
+Which is not to say that adopting a new technology hasn't come with some challenges. "Change is hard," says MacKay. "Not because the technology is hard or that the technology is not good. It's just that people like to do things like they had done [before]. You have the early adopters and you have those who are coming in later. It was a learning experience on both sides."
+
+Figuring out the best deployment operations for Ancestry was a big part of the work it took to adopt cloud native infrastructure. "We want to make sure the process is easy and also controlled in the manner that allows us the highest degree of security that we demand and our customers demand," says MacKay. "With Kubernetes and other products, there are some good solutions, but a little bit of glue is needed to bring it into corporate processes and governances. It's like having a set of gloves that are generic, but when you really do want to grab something you have to make it so it's customized to you. That's what we had to do."
+
+
+Their best practices include allowing their developers to deploy into development stage and production, but then controlling the aspects that need governance and auditing, such as secrets. They found that having one namespace per service is useful for achieving that containment of secrets and config maps. And for their needs, having one container per pod makes it easier to manage and to have a smaller unit of deployment.
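(Editor's aside, not part of the patch: a minimal sketch of the namespace-per-service, one-container-per-pod layout described above, again for a hypothetical `hints` service; the names and values are illustrative.)

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: hints                      # one namespace per service
---
apiVersion: v1
kind: Secret
metadata:
  name: hints-credentials
  namespace: hints                 # the Secret stays contained in the service's namespace
stringData:
  api-token: example-token         # illustrative value only
---
apiVersion: v1
kind: Pod
metadata:
  name: hints
  namespace: hints
spec:
  containers:                      # exactly one container: a small unit of deployment
  - name: hints
    image: registry.example.com/hints:1.0.0
```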
+
+
+{{< case-studies/quote image="/images/case-studies/ancestry/banner4.jpg" >}}
"The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology."
+{{< /case-studies/quote >}}
+
+
+With that process established, the time spent on deployment was cut down to under a minute for some services. "As programmers, we have what's called REPL: read, evaluate, print, and loop, but with Kubernetes, we have CDEL: compile, deploy, execute, and loop," says MacKay. "It's a very quick loop back and a great benefit to understand that when our services are deployed in production, they're the same as what we tested in the pre-production environments. The approach of cloud native for Ancestry provides us a better ability to scale and to accommodate the business needs as workloads occur."
+
+
+The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology. "Engineers like to code, they like to do features, they don't like to sit around waiting for things to be deployed and worrying about scaling up and out and down," says MacKay. "After a while the engineers became our champions. At training sessions, the development teams were always the ones saying, 'Kubernetes saved our time tremendously; it's an enabler; it really is incredible.' Over time, we were able to convince our management that this was a transition that the industry is making and that we needed to be a part of it."
+
+
+A year later, Ancestry has transitioned a good number of applications to Kubernetes. "We have many different services that make up the rich environment that [the website] has from both the DNA side and the family history side," says MacKay. "We have front-end stacks, back-end stacks and back-end processing type stacks that are in the cluster."
+
+
+The company continues to weigh which services it will move forward to Kubernetes, which ones will be kept as is, and which will be replaced in the future and thus don't have to be moved over. MacKay estimates that the company is "approaching halfway on those features that are going forward. We don't have to do a lot of convincing anymore. It's more of an issue of timing with getting product management and engineering staff the knowledge and information that they need."
+
+{{< case-studies/quote >}}
+"... 'I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll go forward.'"
+{{< /case-studies/quote >}}
+
+
+Looking ahead, MacKay sees Ancestry maximizing the benefits of Kubernetes in 2017. "We're very close to having everything that should be or could be in a Linux-friendly world in Kubernetes by the end of the year," he says, adding that he's looking forward to features such as federation and horizontal pod autoscaling that are currently in the works. "Kubernetes has been very wonderful for us and we continue to ride the wave."
+
+
+That wave, he points out, has everything to do with the vibrant Kubernetes community, which has grown by leaps and bounds since Ancestry joined it as an early adopter. "This is just a very rough way of judging it, but on Slack in June 2015, there were maybe 500 on there," MacKay says. "The last time I looked there were maybe 8,500 just on the Slack channel. There are so many major companies and different kinds of companies involved now. It's the variety of contributors, the number of contributors, the incredibly competent and friendly community."
+
+
+As much as he and his team at Ancestry have benefited from what he calls "the goodness and the technical abilities of many" in the community, they've also contributed information about best practices, logged bug issues and participated in the open source conversation. And they've been active in attending meetups to help educate and give back to the local tech community in Utah. Says MacKay: "We're trying to give back as far as our experience goes, rather than just code."
+
+
When he meets with companies considering adopting cloud native infrastructure, the best advice he has to give from Ancestry's Kubernetes journey is this: "Start small, but with hard problems," he says. And "you need a patron who understands the vision of containerization, to help you tackle the political as well as other technical roadblocks that can occur when change is needed."
+
+
With the changes that MacKay's team has led over the past year and a half, cloud native will be part of Ancestry's technological genealogy for years to come. MacKay has been such a champion of the technology that he says people have jokingly accused him of having a Kubernetes tattoo.
+
+
"I really don't," he says with a laugh. "But I'm passionate. I'm not exclusive to any technology; I use whatever I need that's out there that makes us great. If it's something else, I'll use it. But right now I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll go forward."
-
-
-
-
-
- With that process established, the time spent on deployment was cut down to under a minute for some services. "As programmers, we have what's called REPL: read, evaluate, print, and loop, but with Kubernetes, we have CDEL: compile, deploy, execute, and loop," says MacKay. "It's a very quick loop back and a great benefit to understand that when our services are deployed in production, they're the same as what we tested in the pre-production environments. The approach of cloud native for Ancestry provides us a better ability to scale and to accommodate the business needs as work loads occur."
- The success of Ancestry's first deployment of the hint system on Kubernetes helped create momentum for greater adoption of the technology. "Engineers like to code, they like to do features, they don't like to sit around waiting for things to be deployed and worrying about scaling up and out and down," says MacKay. "After a while the engineers became our champions. At training sessions, the development teams were always the ones saying, 'Kubernetes saved our time tremendously; it's an enabler; it really is incredible.' Over time, we were able to convince our management that this was a transition that the industry is making and that we needed to be a part of it."
- A year later, Ancestry has transitioned a good number of applications to Kubernetes. "We have many different services that make up the rich environment that [the website] has from both the DNA side and the family history side," says MacKay. "We have front-end stacks, back-end stacks and back-end processing type stacks that are in the cluster."
- The company continues to weigh which services it will move forward to Kubernetes, which ones will be kept as is, and which will be replaced in the future and thus don't have to be moved over. MacKay estimates that the company is "approaching halfway on those features that are going forward. We don't have to do a lot of convincing anymore. It's more of an issue of timing with getting product management and engineering staff the knowledge and information that they need."
-
-
-
-
-
- "... 'I believe in Kubernetes. I believe in containerization. I think
- if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about,
- and it'll go forward.'"
-
-
-
-
-
-
-
-Looking ahead, MacKay sees Ancestry maximizing the benefits of Kubernetes in 2017. "We're very close to having everything that should be or could be in a Linux-friendly world in Kubernetes by the end of the year," he says, adding that he's looking forward to features such as federation and horizontal pod autoscaling that are currently in the works. "Kubernetes has been very wonderful for us and we continue to ride the wave."
-That wave, he points out, has everything to do with the vibrant Kubernetes community, which has grown by leaps and bounds since Ancestry joined it as an early adopter. "This is just a very rough way of judging it, but on Slack in June 2015, there were maybe 500 on there," MacKay says. "The last time I looked there were maybe 8,500 just on the Slack channel. There are so many major companies and different kinds of companies involved now. It's the variety of contributors, the number of contributors, the incredibly competent and friendly community."
-As much as he and his team at Ancestry have benefited from what he calls "the goodness and the technical abilities of many" in the community, they've also contributed information about best practices, logged bug issues and participated in the open source conversation. And they've been active in attending meetups to help educate and give back to the local tech community in Utah. Says MacKay: "We're trying to give back as far as our experience goes, rather than just code."
-
When he meets with companies considering adopting cloud native infrastructure, the best advice he has to give from Ancestry's Kubernetes journey is this: "Start small, but with hard problems," he says. And "you need a patron who understands the vision of containerization, to help you tackle the political as well as other technical roadblocks that can occur when change is needed."
-With the changes that MacKay's team has led over the past year and a half, cloud native will be part of Ancestry's technological genealogy for years to come. MacKay has been such a champion of the technology that he says people have jokingly accused him of having a Kubernetes tattoo.
-"I really don't," he says with a laugh. "But I'm passionate. I'm not exclusive to any technology; I use whatever I need that's out there that makes us great. If it's something else, I'll use it. But right now I believe in Kubernetes. I believe in containerization. I think if we can get there and establish ourselves in that world, we will be further along and far better off being agile and all the things we talk about, and it'll go forward."
-He pauses. "So, yeah, I guess you can say I'm an evangelist for Kubernetes," he says. "But I'm not getting a tattoo!"
-
-
-
-
+
He pauses. "So, yeah, I guess you can say I'm an evangelist for Kubernetes," he says. "But I'm not getting a tattoo!"
diff --git a/content/ko/case-studies/blablacar/index.html b/content/ko/case-studies/blablacar/index.html
index 2d55ffb8d07fe..e6537a672fc40 100644
--- a/content/ko/case-studies/blablacar/index.html
+++ b/content/ko/case-studies/blablacar/index.html
@@ -1,98 +1,85 @@
---
title: BlaBlaCar Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_blablacar.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/blablacar/banner1.jpg
+heading_title_logo: /images/blablacar_logo.png
+subheading: >
+ Turning to Containerization to Support Millions of Rideshares
+case_study_details:
+ - Company: BlaBlaCar
+ - Location: Paris, France
+ - Industry: Ridesharing Company
---
-
-
CASE STUDY:
Turning to Containerization to Support Millions of Rideshares
+
Challenge
-
+
The world's largest long-distance carpooling community, BlaBlaCar, connects 40 million members across 22 countries. The company has been experiencing exponential growth since 2012 and needed its infrastructure to keep up. "When you're thinking about doubling the number of servers, you start thinking, 'What should I do to be more efficient?'" says Simon Lallemand, Infrastructure Engineer at BlaBlaCar. "The answer is not to hire more and more people just to deal with the servers and installation." The team knew they had to scale the platform, but wanted to stay on their own bare metal servers.
-
- Company BlaBlaCar Location Paris, France Industry Ridesharing Company
-
+
Solution
-
-
-
-
-
Challenge
- The world’s largest long-distance carpooling community, BlaBlaCar, connects 40 million members across 22 countries. The company has been experiencing exponential growth since 2012 and needed its infrastructure to keep up. "When you’re thinking about doubling the number of servers, you start thinking, ‘What should I do to be more efficient?’" says Simon Lallemand, Infrastructure Engineer at BlaBlaCar. "The answer is not to hire more and more people just to deal with the servers and installation." The team knew they had to scale the platform, but wanted to stay on their own bare metal servers.
-
-
-
Solution
- Opting not to shift to cloud virtualization or use a private cloud on their own servers, the BlaBlaCar team became early adopters of containerization, using the CoreOs runtime rkt, initially deployed using fleet cluster manager. Last year, the company switched to Kubernetes orchestration, and now also uses Prometheus for monitoring.
-
+
Opting not to shift to cloud virtualization or use a private cloud on their own servers, the BlaBlaCar team became early adopters of containerization, using the CoreOS runtime rkt, initially deployed using the fleet cluster manager. Last year, the company switched to Kubernetes orchestration, and now also uses Prometheus for monitoring.
-
Impact
- "Before using containers, it would take sometimes a day, sometimes two, just to create a new service," says Lallemand. "With all the tooling that we made around the containers, copying a new service now is a matter of minutes. It’s really a huge gain. We are better at capacity planning in our data center because we have fewer constraints due to this abstraction between the services and the hardware we run on. For the developers, it also means they can focus only on the features that they’re developing, and not on the infrastructure."
-
-
-
-
-
- "When you’re switching to this cloud-native model and running everything in containers, you have to make sure that at any moment you can reboot without any downtime and without losing traffic. [With Kubernetes] our infrastructure is much more resilient and we have better availability than before." - Simon Lallemand, Infrastructure Engineer at BlaBlaCar
-
-
-
-
-
-
For the 40 million users of BlaBlaCar, it’s easy to find strangers headed in the same direction to share rides and costs. You can even choose how much "bla bla" chatter you want from a long-distance ride mate.
- Behind the scenes, though, the infrastructure was falling woefully behind the rider community’s exponential growth. Founded in 2006, the company hit its current stride around 2012. "Our infrastructure was very traditional," says Infrastructure Engineer Simon Lallemand, who began working at the company in 2014. "In the beginning, it was a bit chaotic because we had to [grow] fast. But then comes the time when you have to design things to make it manageable."
- By 2015, the company had about 50 bare metal servers. The team was using a MySQL database and PHP, but, Lallemand says, "it was a very static way." They also utilized the configuration management system, Chef, but had little automation in its process. "When you’re thinking about doubling the number of servers, you start thinking, ‘What should I do to be more efficient?’" says Lallemand. "The answer is not to hire more and more people just to deal with the servers and installation."
- Instead, BlaBlaCar began its cloud-native journey but wasn’t sure which route to take. "We could either decide to go into cloud virtualization or even use a private cloud on our own servers," says Lallemand. "But going into the cloud meant we had to make a lot of changes in our application work, and we were just not ready to make the switch from on premise to the cloud." They wanted to keep the great performance they got on bare metal, so they didn’t want to go to virtualization on premise.
- The solution: containerization. This was early 2015 and containers were still relatively new. "It was a bold move at the time," says Lallemand. "We decided that the next servers that we would buy in the new data center would all be the same model, so we could outsource the maintenance of the servers. And we decided to go with containers and with CoreOS Container Linux as an abstraction for this hardware. It seemed future-proof to go with containers because we could see what companies were already doing with containers."
-
-
-
-
-
- "With all the tooling that we made around the containers, copying a new service is a matter of minutes. It’s a huge gain. For the developers, it means they can focus only on the features that they’re developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
-
-
-
-
-
- Next, they needed to choose a runtime for the containers, but "there were very few deployments in production at that time," says Lallemand. They experimented with Docker but decided to go with rkt. Lallemand explains that for BlaBlaCar, it was "much simpler to integrate things that are on rkt." At the time, the project was still pre-v1.0, so "we could speak with the developers of rkt and give them feedback. It was an advantage." Plus, he notes, rkt was very stable, even at this early stage.
- Once those decisions were made that summer, the company came up with a plan for implementation. First, they formed a task force to create a workflow that would be tested by three of the 10 members on Lallemand’s team. But they took care to run regular workshops with all 10 members to make sure everyone was on board. "When you’re focused on your product sometimes you forget if it’s really user friendly, whether other people can manage to create containers too," Lallemand says. "So we did a lot of iterations to find a good workflow."
- After establishing the workflow, Lallemand says with a smile that "we had this strange idea that we should try the most difficult thing first. Because if it works, it will work for everything." So the first project the team decided to containerize was the database. "Nobody did that at the time, and there were really no existing tools for what we wanted to do, including building container images," he says. So the team created their own tools, such as dgr, which builds container images so that the whole team has a common framework to build on the same images with the same standards. They also revamped the service-discovery tools Nerve and Synapse; their versions, Go-Nerve and Go-Synapse, were written in Go and built to be more efficient and include new features. All of these tools were open-sourced.
- At the same time, the company was working to migrate its entire platform to containers with a deadline set for Christmas 2015. With all the work being done in parallel, BlaBlaCar was able to get about 80 percent of its production into containers by its deadline with live traffic running on containers during December. (It’s now at 100 percent.) "It’s a really busy time for traffic," says Lallemand. "We knew that by using those new servers with containers, it would help us handle the traffic."
- In the middle of that peak season for carpooling, everything worked well. "The biggest impact that we had was for the deployment of new services," says Lallemand. "Before using containers, we had to first deploy a new server and create configurations with Chef. It would take sometimes a day, sometimes two, just to create a new service. And with all the tooling that we made around the containers, copying a new service is a matter of minutes. So it’s really a huge gain. For the developers, it means they can focus only on the features that they’re developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
-
-
-
-
-
- "We realized that there was a really strong community around it [Kubernetes], which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes."
-
-
-
-
-
- In order to meet their self-imposed deadline, one of the decisions they made was to not do any "orchestration magic" for containers in the first production alignment. Instead, they used the basic fleet tool from CoreOS to deploy their containers. (They did build a tool called GGN, which they’ve open-sourced, to make it more manageable for their system engineers to use.)
- Still, the team knew that they’d want more orchestration. "Our tool was doing a pretty good job, but at some point you want to give more autonomy to the developer team," Lallemand says. "We also realized that we don’t want to be the single point of contact for developers when they want to launch new services." By the summer of 2016, they found their answer in Kubernetes, which had just begun supporting rkt implementation.
- After discussing their needs with their contacts at CoreOS and Google, they were convinced that Kubernetes would work for BlaBlaCar. "We realized that there was a really strong community around it, which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes." They also started using Prometheus, as they were looking for "service-oriented monitoring that could be updated nightly." Production on Kubernetes began in December 2016. "We like to do crazy stuff around Christmas," he adds with a laugh.
- BlaBlaCar now has about 3,000 pods, with 1200 of them running on Kubernetes. Lallemand leads a "foundations team" of 25 members who take care of the networks, databases and systems for about 100 developers. There have been some challenges getting to this point. "The rkt implementation is still not 100 percent finished," Lallemand points out. "It’s really good, but there are some features still missing. We have questions about how we do things with stateful services, like databases. We know how we will be migrating some of the services; some of the others are a bit more complicated to deal with. But the Kubernetes community is making a lot of progress on that part."
- The team is particularly happy that they’re now able to plan capacity better in the company’s data center. "We have fewer constraints since we have this abstraction between the services and the hardware we run on," says Lallemand. "If we lose a server because there’s a hardware problem on it, we just move the containers onto another server. It’s much more efficient. We do that by just changing a line in the configuration file. And with Kubernetes, it should be automatic, so we would have nothing to do."
-
-
-
-
-
- "If we lose a server because there’s a hardware problem on it, we just move the containers onto another server. It’s much more efficient. We do that by just changing a line in the configuration file. With Kubernetes, it should be automatic, so we would have nothing to do."
-
-
-
-
-
- And these advances ultimately trickle down to BlaBlaCar’s users. "We have improved availability overall on our website," says Lallemand. "When you’re switching to this cloud-native model with running everything in containers, you have to make sure that you can at any moment reboot a server or a data container without any downtime, without losing traffic. So now our infrastructure is much more resilient and we have better availability than before."
- Within BlaBlaCar’s technology department, the cloud-native journey has created some profound changes. Lallemand thinks that the regular meetings during the conception stage and the training sessions during implementation helped. "After that everybody took part in the migration process," he says. "Then we split the organization into different ‘tribes’—teams that gather developers, product managers, data analysts, all the different jobs, to work on a specific part of the product. Before, they were organized by function. The idea is to give all these tribes access to the infrastructure directly in a self-service way without having to ask. These people are really autonomous. They have responsibility of that part of the product, and they can make decisions faster."
- This DevOps transformation turned out to be a positive one for the company’s staffers. "The team was very excited about the DevOps transformation because it was new, and we were working to make things more reliable, more future-proof," says Lallemand. "We like doing things that very few people are doing, other than the internet giants."
- With these changes already making an impact, BlaBlaCar is looking to split up more and more of its application into services. "I don’t say microservices because they’re not so micro," Lallemand says. "If we can split the responsibilities between the development teams, it would be easier to manage and more reliable, because we can easily add and remove services if one fails. You can handle it easily, instead of adding a big monolith that we still have."
- When Lallemand speaks to other European companies curious about what BlaBlaCar has done with its infrastructure, he tells them to come along for the ride. "I tell them that it’s such a pleasure to deal with the infrastructure that we have today compared to what we had before," he says. "They just need to keep in mind their real motive, whether it’s flexibility in development or reliability or so on, and then go step by step towards reaching those objectives. That’s what we’ve done. It’s important not to do technology for the sake of technology. Do it for a purpose. Our focus was on helping the developers."
-
-
+
+
"Before using containers, it would take sometimes a day, sometimes two, just to create a new service," says Lallemand. "With all the tooling that we made around the containers, copying a new service now is a matter of minutes. It's really a huge gain. We are better at capacity planning in our data center because we have fewer constraints due to this abstraction between the services and the hardware we run on. For the developers, it also means they can focus only on the features that they're developing, and not on the infrastructure."
+
+{{< case-studies/quote author="Simon Lallemand, Infrastructure Engineer at BlaBlaCar" >}}
+"When you're switching to this cloud-native model and running everything in containers, you have to make sure that at any moment you can reboot without any downtime and without losing traffic. [With Kubernetes] our infrastructure is much more resilient and we have better availability than before."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+For the 40 million users of BlaBlaCar, it's easy to find strangers headed in the same direction to share rides and costs. You can even choose how much "bla bla" chatter you want from a long-distance ride mate.
+{{< /case-studies/lead >}}
+
+
Behind the scenes, though, the infrastructure was falling woefully behind the rider community's exponential growth. Founded in 2006, the company hit its current stride around 2012. "Our infrastructure was very traditional," says Infrastructure Engineer Simon Lallemand, who began working at the company in 2014. "In the beginning, it was a bit chaotic because we had to [grow] fast. But then comes the time when you have to design things to make it manageable."
+
+
By 2015, the company had about 50 bare metal servers. The team was using a MySQL database and PHP, but, Lallemand says, "it was a very static way." They also utilized the configuration management system, Chef, but had little automation in its process. "When you're thinking about doubling the number of servers, you start thinking, 'What should I do to be more efficient?'" says Lallemand. "The answer is not to hire more and more people just to deal with the servers and installation."
+
+
Instead, BlaBlaCar began its cloud-native journey but wasn't sure which route to take. "We could either decide to go into cloud virtualization or even use a private cloud on our own servers," says Lallemand. "But going into the cloud meant we had to make a lot of changes in our application work, and we were just not ready to make the switch from on premise to the cloud." They wanted to keep the great performance they got on bare metal, so they didn't want to go to virtualization on premise.
+
+
The solution: containerization. This was early 2015 and containers were still relatively new. "It was a bold move at the time," says Lallemand. "We decided that the next servers that we would buy in the new data center would all be the same model, so we could outsource the maintenance of the servers. And we decided to go with containers and with CoreOS Container Linux as an abstraction for this hardware. It seemed future-proof to go with containers because we could see what companies were already doing with containers."
+
+{{< case-studies/quote image="/images/case-studies/blablacar/banner3.jpg" >}}
+"With all the tooling that we made around the containers, copying a new service is a matter of minutes. It's a huge gain. For the developers, it means they can focus only on the features that they're developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
+{{< /case-studies/quote >}}
+
+
Next, they needed to choose a runtime for the containers, but "there were very few deployments in production at that time," says Lallemand. They experimented with Docker but decided to go with rkt. Lallemand explains that for BlaBlaCar, it was "much simpler to integrate things that are on rkt." At the time, the project was still pre-v1.0, so "we could speak with the developers of rkt and give them feedback. It was an advantage." Plus, he notes, rkt was very stable, even at this early stage.
+
+
Once those decisions were made that summer, the company came up with a plan for implementation. First, they formed a task force to create a workflow that would be tested by three of the 10 members on Lallemand's team. But they took care to run regular workshops with all 10 members to make sure everyone was on board. "When you're focused on your product sometimes you forget if it's really user friendly, whether other people can manage to create containers too," Lallemand says. "So we did a lot of iterations to find a good workflow."
+
+
After establishing the workflow, Lallemand says with a smile that "we had this strange idea that we should try the most difficult thing first. Because if it works, it will work for everything." So the first project the team decided to containerize was the database. "Nobody did that at the time, and there were really no existing tools for what we wanted to do, including building container images," he says. So the team created their own tools, such as dgr, which builds container images so that the whole team has a common framework to build on the same images with the same standards. They also revamped the service-discovery tools Nerve and Synapse; their versions, Go-Nerve and Go-Synapse, were written in Go and built to be more efficient and include new features. All of these tools were open-sourced.
+
+
At the same time, the company was working to migrate its entire platform to containers with a deadline set for Christmas 2015. With all the work being done in parallel, BlaBlaCar was able to get about 80 percent of its production into containers by its deadline with live traffic running on containers during December. (It's now at 100 percent.) "It's a really busy time for traffic," says Lallemand. "We knew that by using those new servers with containers, it would help us handle the traffic."
+
+
In the middle of that peak season for carpooling, everything worked well. "The biggest impact that we had was for the deployment of new services," says Lallemand. "Before using containers, we had to first deploy a new server and create configurations with Chef. It would take sometimes a day, sometimes two, just to create a new service. And with all the tooling that we made around the containers, copying a new service is a matter of minutes. So it's really a huge gain. For the developers, it means they can focus only on the features that they're developing and not on the infrastructure or the hour they would test their code, or the hour that it would get deployed."
+
+{{< case-studies/quote image="/images/case-studies/blablacar/banner4.jpg" >}}
+"We realized that there was a really strong community around it [Kubernetes], which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes."
+{{< /case-studies/quote >}}
+
+
In order to meet their self-imposed deadline, one of the decisions they made was to not do any "orchestration magic" for containers in the first production alignment. Instead, they used the basic fleet tool from CoreOS to deploy their containers. (They did build a tool called GGN, which they've open-sourced, to make it more manageable for their system engineers to use.)
+
+
Still, the team knew that they'd want more orchestration. "Our tool was doing a pretty good job, but at some point you want to give more autonomy to the developer team," Lallemand says. "We also realized that we don't want to be the single point of contact for developers when they want to launch new services." By the summer of 2016, they found their answer in Kubernetes, which had just begun supporting rkt implementation.
+
+
After discussing their needs with their contacts at CoreOS and Google, they were convinced that Kubernetes would work for BlaBlaCar. "We realized that there was a really strong community around it, which meant we would not have to maintain a lot of tools of our own," says Lallemand. "It was better if we could contribute to some bigger project like Kubernetes." They also started using Prometheus, as they were looking for "service-oriented monitoring that could be updated nightly." Production on Kubernetes began in December 2016. "We like to do crazy stuff around Christmas," he adds with a laugh.
+
+
BlaBlaCar now has about 3,000 pods, with 1,200 of them running on Kubernetes. Lallemand leads a "foundations team" of 25 members who take care of the networks, databases and systems for about 100 developers. There have been some challenges getting to this point. "The rkt implementation is still not 100 percent finished," Lallemand points out. "It's really good, but there are some features still missing. We have questions about how we do things with stateful services, like databases. We know how we will be migrating some of the services; some of the others are a bit more complicated to deal with. But the Kubernetes community is making a lot of progress on that part."
+
+
The team is particularly happy that they're now able to plan capacity better in the company's data center. "We have fewer constraints since we have this abstraction between the services and the hardware we run on," says Lallemand. "If we lose a server because there's a hardware problem on it, we just move the containers onto another server. It's much more efficient. We do that by just changing a line in the configuration file. And with Kubernetes, it should be automatic, so we would have nothing to do."
+
+{{< case-studies/quote >}}
+"If we lose a server because there's a hardware problem on it, we just move the containers onto another server. It's much more efficient. We do that by just changing a line in the configuration file. With Kubernetes, it should be automatic, so we would have nothing to do."
+{{< /case-studies/quote >}}
+
+
And these advances ultimately trickle down to BlaBlaCar's users. "We have improved availability overall on our website," says Lallemand. "When you're switching to this cloud-native model with running everything in containers, you have to make sure that you can at any moment reboot a server or a data container without any downtime, without losing traffic. So now our infrastructure is much more resilient and we have better availability than before."
+
+
Within BlaBlaCar's technology department, the cloud-native journey has created some profound changes. Lallemand thinks that the regular meetings during the conception stage and the training sessions during implementation helped. "After that everybody took part in the migration process," he says. "Then we split the organization into different 'tribes'—teams that gather developers, product managers, data analysts, all the different jobs, to work on a specific part of the product. Before, they were organized by function. The idea is to give all these tribes access to the infrastructure directly in a self-service way without having to ask. These people are really autonomous. They have responsibility of that part of the product, and they can make decisions faster."
+
+
This DevOps transformation turned out to be a positive one for the company's staffers. "The team was very excited about the DevOps transformation because it was new, and we were working to make things more reliable, more future-proof," says Lallemand. "We like doing things that very few people are doing, other than the internet giants."
+
+
With these changes already making an impact, BlaBlaCar is looking to split up more and more of its application into services. "I don't say microservices because they're not so micro," Lallemand says. "If we can split the responsibilities between the development teams, it would be easier to manage and more reliable, because we can easily add and remove services if one fails. You can handle it easily, instead of adding a big monolith that we still have."
+
+
When Lallemand speaks to other European companies curious about what BlaBlaCar has done with its infrastructure, he tells them to come along for the ride. "I tell them that it's such a pleasure to deal with the infrastructure that we have today compared to what we had before," he says. "They just need to keep in mind their real motive, whether it's flexibility in development or reliability or so on, and then go step by step towards reaching those objectives. That's what we've done. It's important not to do technology for the sake of technology. Do it for a purpose. Our focus was on helping the developers."
diff --git a/content/ko/case-studies/blackrock/index.html b/content/ko/case-studies/blackrock/index.html
index 6bef0ec7084d3..725d6bc057428 100644
--- a/content/ko/case-studies/blackrock/index.html
+++ b/content/ko/case-studies/blackrock/index.html
@@ -1,112 +1,83 @@
---
title: BlackRock Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_blackrock.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/blackrock/banner1.jpg
+heading_title_logo: /images/blackrock_logo.png
+subheading: >
+ Rolling Out Kubernetes in Production in 100 Days
+case_study_details:
+ - Company: BlackRock
+ - Location: New York, NY
+ - Industry: Financial Services
---
-
-
CASE STUDY:
-
Rolling Out Kubernetes in Production in 100 Days
-
+
Challenge
-
+
The world's largest asset manager, BlackRock operates a very controlled static deployment scheme, which has allowed for scalability over the years. But in their data science division, there was a need for more dynamic access to resources. "We want to be able to give every investor access to data science, meaning Python notebooks, or even something much more advanced, like a MapReduce engine based on Spark," says Michael Francis, a Managing Director in BlackRock's Product Group, which runs the company's investment management platform. "Managing complex Python installations on users' desktops is really hard because everyone ends up with slightly different environments. We have existing environments that do these things, but we needed to make it real, expansive and scalable. Being able to spin that up on demand, tear it down, make that much more dynamic, became a critical thought process for us. It's not so much that we had to solve our main core production problem, it's how do we extend that? How do we evolve?"
-
- Company BlackRock Location New York, NY Industry Financial Services
-
+
Solution
-
+
Drawing from what they learned during a pilot done last year using Docker environments, Francis put together a cross-sectional team of 20 to build an investor research web app using Kubernetes with the goal of getting it into production within one quarter.
-
+
Impact
-
-
-
Challenge
- The world’s largest asset manager, BlackRock operates a very controlled static deployment scheme, which has allowed for scalability over the years. But in their data science division, there was a need for more dynamic access to resources. "We want to be able to give every investor access to data science, meaning Python notebooks, or even something much more advanced, like a MapReduce engine based on Spark," says Michael Francis, a Managing Director in BlackRock’s Product Group, which runs the company’s investment management platform. "Managing complex Python installations on users’ desktops is really hard because everyone ends up with slightly different environments. We have existing environments that do these things, but we needed to make it real, expansive and scalable. Being able to spin that up on demand, tear it down, make that much more dynamic, became a critical thought process for us. It’s not so much that we had to solve our main core production problem, it’s how do we extend that? How do we evolve?"
-
+
"Our goal was: How do you give people tools rapidly without having to install them on their desktop?" says Francis. And the team hit the goal within 100 days. Francis is pleased with the results and says, "We're going to use this infrastructure for lots of other application workloads as time goes on. It's not just data science; it's this style of application that needs the dynamism. But I think we're 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues. What's interesting is that just having this technology there is changing the way our developers are starting to think about their future development."
-
-
Solution
- Drawing from what they learned during a pilot done last year using Docker environments, Francis put together a cross-sectional team of 20 to build an investor research web app using Kubernetes with the goal of getting it into production within one quarter.
-
-
Impact
- "Our goal was: How do you give people tools rapidly without having to install them on their desktop?" says Francis. And the team hit the goal within 100 days. Francis is pleased with the results and says, "We’re going to use this infrastructure for lots of other application workloads as time goes on. It’s not just data science; it’s this style of application that needs the dynamism. But I think we’re 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues. What’s interesting is that just having this technology there is changing the way our developers are starting to think about their future development."
-
-
-
-
-
-
-
-
- "My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don’t have to throw out everything you do. And using Kubernetes made a complex problem significantly easier." - Michael Francis, Managing Director, BlackRock
-
-
-
-
-
-
- One of the management objectives for BlackRock’s Product Group employees in 2017 was to "build cool stuff." Led by Managing Director Michael Francis, a cross-sectional group of 20 did just that: They rolled out a full production Kubernetes environment and released a new investor research web app on it. In 100 days.
- For a company that’s the world’s largest asset manager, "just equipment procurement can take 100 days sometimes, let alone from inception to delivery," says Karl Wieman, a Senior System Administrator. "It was an aggressive schedule. But it moved the dial."
- In fact, the project achieved two goals: It solved a business problem (creating the needed web app) as well as provided real-world, in-production experience with Kubernetes, a cloud-native technology that the company was eager to explore. "It’s not so much that we had to solve our main core production problem, it’s how do we extend that? How do we evolve?" says Francis. The ultimate success of this project, beyond delivering the app, lies in the fact that "we’ve managed to integrate a radically new thought process into a controlled infrastructure that we didn’t want to change."
- After all, in its three decades of existence, BlackRock has "a very well-established environment for managing our compute resources," says Francis. "We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that’s very cloudish in concept. We’re able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."
- Though that works well for the core production, the company has found that some data science workloads require more dynamic access to resources. "It’s a very bursty process," says Francis, who is head of data for the company’s Aladdin investment management platform division.
- Aladdin, which connects the people, information and technology needed for money management in real time, is used internally and is also sold as a platform to other asset managers and insurance companies. "We want to be able to give every investor access to data science, meaning Python notebooks, or even something much more advanced, like a MapReduce engine based on Spark," says Francis. But "managing complex Python installations on users’ desktops is really hard because everyone ends up with slightly different environments. Docker allows us to flatten that environment."
-
-
-
-
-
- "We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that’s very cloudish in concept. We’re able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."
-
-
-
-
-
- Still, challenges remain. "If you have a shared cluster, you get this storming herd problem where everyone wants to do the same thing at the same time," says Francis. "You could put limits on it, but you’d have to build an infrastructure to define limits for our processes, and the Python notebooks weren’t really designed for that. We have existing environments that do these things, but we needed to make it real, expansive, and scalable. Being able to spin that up on demand, tear it down, and make that much more dynamic, became a critical thought process for us."
- Made up of managers from technology, infrastructure, production operations, development and information security, Francis’s team was able to look at the problem holistically and come up with a solution that made sense for BlackRock. "Our initial straw man was that we were going to build everything using Ansible and run it all using some completely different distributed environment," says Francis. "That would have been absolutely the wrong thing to do. Had we gone off on our own as the dev team and developed this solution, it would have been a very different product. And it would have been very expensive. We would not have gone down the route of running under our existing orchestration system. Because we don’t understand it. These guys [in operations and infrastructure] understand it. Having the multidisciplinary team allowed us to get to the right solutions and that actually meant we didn’t build anywhere near the amount we thought we were going to end up building."
- In search of a solution in which they could manage usage on a user-by-user level, Francis’s team gravitated to Red Hat’s OpenShift Kubernetes offering. The company had already experimented with other cloud-native environments, but the team liked that Kubernetes was open source, and "we felt the winds were blowing in the direction of Kubernetes long term," says Francis. "Typically we make technology choices that we believe are going to be here in 5-10 years’ time, in some form. And right now, in this space, Kubernetes feels like the one that’s going to be there." Adds Uri Morris, Vice President of Production Operations: "When you see that the non-Google committers to Kubernetes overtook the Google committers, that’s an indicator of the momentum."
- Once that decision was made, the major challenge was figuring out how to make Kubernetes work within BlackRock’s existing framework. "It’s about understanding how we can operate, manage and support a platform like this, in addition to tacking it onto our existing technology platform," says Project Manager Michael Maskallis. "All the controls we have in place, the change management process, the software development lifecycle, onboarding processes we go through—how can we do all these things?"
- The first (anticipated) speed bump was working around issues behind BlackRock’s corporate firewalls. "One of our challenges is there are no firewalls in most open source software," says Francis. "So almost all install scripts fail in some bizarre way, and pulling down packages doesn’t necessarily work." The team ran into these types of problems using Minikube and did a few small pushes back to the open source project.
-
-
-
-
-
-
-
- "Typically we make technology choices that we believe are going to be here in 5-10 years’ time, in some form. And right now, in this space, Kubernetes feels like the one that’s going to be there."
-
-
-
-
-
- There were also questions about service discovery. "You can think of Aladdin as a cloud of services with APIs between them that allows us to build applications rapidly," says Francis. "It’s all on a proprietary message bus, which gives us all sorts of advantages but at the same time, how does that play in a third party [platform]?"
- Another issue they had to navigate was that in BlackRock’s existing system, the messaging protocol has different instances in the different development, test and production environments. While Kubernetes enables a more DevOps-style model, it didn’t make sense for BlackRock. "I think what we are very proud of is that the ability for us to push into production is still incredibly rapid in this [new] infrastructure, but we have the control points in place, and we didn’t have to disrupt everything," says Francis. "A lot of the cost of this development was thinking how best to leverage our internal tools. So it was less costly than we actually thought it was going to be."
- The project leveraged tools associated with the messaging bus, for example. "The way that the Kubernetes cluster will talk to our internal messaging platform is through a gateway program, and this gateway program already has built-in checks and throttles," says Morris. "We can use them to control and potentially throttle the requests coming in from Kubernetes’s very elastic infrastructure to the production infrastructure. We’ll continue to go in that direction. It enables us to scale as we need to from the operational perspective."
- The solution also had to be complementary with BlackRock’s centralized operational support team structure. "The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools," Morris explains. "That means that I don’t need to hire more people."
- With those points established, the team created a procedure for the project: "We rolled this out first to a development environment, then moved on to a testing environment and then eventually to two production environments, in that sequential order," says Maskallis. "That drove a lot of our learning curve. We have all these moving parts, the software components on the infrastructure side, the software components with Kubernetes directly, the interconnectivity with the rest of the environment that we operate here at BlackRock, and how we connect all these pieces. If we came across issues, we fixed them, and then moved on to the different environments to replicate that until we eventually ended up in our production environment where this particular cluster is supposed to live."
- The team had weekly one-hour working sessions with all the members (who are located around the world) participating, and smaller breakout or deep-dive meetings focusing on specific technical details. Possible solutions would be reported back to the group and debated the following week. "I think what made it a successful experiment was people had to work to learn, and they shared their experiences with others," says Vice President and Software Developer Fouad Semaan. Then, Francis says, "We gave our engineers the space to do what they’re good at. This hasn’t been top-down."
-
-
-
-
-
-
-
- "The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools. That means that I don’t need to hire more people."
-
-
-
-
-
-
- They were led by one key axiom: To stay focused and avoid scope creep. This meant that they wouldn’t use features that weren’t in the core of Kubernetes and Docker. But if there was a real need, they’d build the features themselves. Luckily, Francis says, "Because of the rapidity of the development, a lot of things we thought we would have to build ourselves have been rolled into the core product. [The package manager Helm is one example]. People have similar problems."
- By the end of the 100 days, the app was up and running for internal BlackRock users. The initial capacity of 30 users was hit within hours, and quickly increased to 150. "People were immediately all over it," says Francis. In the next phase of this project, they are planning to scale up the cluster to have more capacity.
- Even more importantly, they now have in-production experience with Kubernetes that they can continue to build on—and a complete framework for rolling out new applications. "We’re going to use this infrastructure for lots of other application workloads as time goes on. It’s not just data science; it’s this style of application that needs the dynamism," says Francis. "Is it the right place to move our core production processes onto? It might be. We’re not at a point where we can say yes or no, but we felt that having real production experience with something like Kubernetes at some form and scale would allow us to understand that. I think we’re 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues."
- For other big companies considering a project like this, Francis says commitment and dedication are key: "We got the signoff from [senior management] from day one, with the commitment that we were able to get the right people. If I had to isolate what makes something complex like this succeed, I would say senior hands-on people who can actually drive it make a huge difference." With that in place, he adds, "My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don’t have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."
-
-
-
+{{< case-studies/quote author="Michael Francis, Managing Director, BlackRock" >}}
+"My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don't have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."
+{{< /case-studies/quote >}}
+
+
One of the management objectives for BlackRock's Product Group employees in 2017 was to "build cool stuff." Led by Managing Director Michael Francis, a cross-sectional group of 20 did just that: They rolled out a full production Kubernetes environment and released a new investor research web app on it. In 100 days.
+
+
For a company that's the world's largest asset manager, "just equipment procurement can take 100 days sometimes, let alone from inception to delivery," says Karl Wieman, a Senior System Administrator. "It was an aggressive schedule. But it moved the dial." In fact, the project achieved two goals: It solved a business problem (creating the needed web app) as well as provided real-world, in-production experience with Kubernetes, a cloud-native technology that the company was eager to explore. "It's not so much that we had to solve our main core production problem, it's how do we extend that? How do we evolve?" says Francis. The ultimate success of this project, beyond delivering the app, lies in the fact that "we've managed to integrate a radically new thought process into a controlled infrastructure that we didn't want to change."
+
+
After all, in its three decades of existence, BlackRock has "a very well-established environment for managing our compute resources," says Francis. "We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that's very cloudish in concept. We're able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."
+
+
Though that works well for the core production, the company has found that some data science workloads require more dynamic access to resources. "It's a very bursty process," says Francis, who is head of data for the company's Aladdin investment management platform division.
+
+
Aladdin, which connects the people, information and technology needed for money management in real time, is used internally and is also sold as a platform to other asset managers and insurance companies. "We want to be able to give every investor access to data science, meaning Python notebooks, or even something much more advanced, like a MapReduce engine based on Spark," says Francis. But "managing complex Python installations on users' desktops is really hard because everyone ends up with slightly different environments. Docker allows us to flatten that environment."
+
+{{< case-studies/quote image="/images/case-studies/blackrock/banner3.jpg" >}}
+"We manage large cluster processes on machines, so we do a lot of orchestration and management for our main production processes in a way that's very cloudish in concept. We're able to manage them in a very controlled, static deployment scheme, and that has given us a huge amount of scalability."
+{{< /case-studies/quote >}}
+
+
Still, challenges remain. "If you have a shared cluster, you get this storming herd problem where everyone wants to do the same thing at the same time," says Francis. "You could put limits on it, but you'd have to build an infrastructure to define limits for our processes, and the Python notebooks weren't really designed for that. We have existing environments that do these things, but we needed to make it real, expansive, and scalable. Being able to spin that up on demand, tear it down, and make that much more dynamic, became a critical thought process for us."
+
+
Made up of managers from technology, infrastructure, production operations, development and information security, Francis's team was able to look at the problem holistically and come up with a solution that made sense for BlackRock. "Our initial straw man was that we were going to build everything using Ansible and run it all using some completely different distributed environment," says Francis. "That would have been absolutely the wrong thing to do. Had we gone off on our own as the dev team and developed this solution, it would have been a very different product. And it would have been very expensive. We would not have gone down the route of running under our existing orchestration system. Because we don't understand it. These guys [in operations and infrastructure] understand it. Having the multidisciplinary team allowed us to get to the right solutions and that actually meant we didn't build anywhere near the amount we thought we were going to end up building."
+
+
In search of a solution in which they could manage usage on a user-by-user level, Francis's team gravitated to Red Hat's OpenShift Kubernetes offering. The company had already experimented with other cloud-native environments, but the team liked that Kubernetes was open source, and "we felt the winds were blowing in the direction of Kubernetes long term," says Francis. "Typically we make technology choices that we believe are going to be here in 5-10 years' time, in some form. And right now, in this space, Kubernetes feels like the one that's going to be there." Adds Uri Morris, Vice President of Production Operations: "When you see that the non-Google committers to Kubernetes overtook the Google committers, that's an indicator of the momentum."
+
+
Once that decision was made, the major challenge was figuring out how to make Kubernetes work within BlackRock's existing framework. "It's about understanding how we can operate, manage and support a platform like this, in addition to tacking it onto our existing technology platform," says Project Manager Michael Maskallis. "All the controls we have in place, the change management process, the software development lifecycle, onboarding processes we go through—how can we do all these things?"
+
+
The first (anticipated) speed bump was working around issues behind BlackRock's corporate firewalls. "One of our challenges is there are no firewalls in most open source software," says Francis. "So almost all install scripts fail in some bizarre way, and pulling down packages doesn't necessarily work." The team ran into these types of problems using Minikube and did a few small pushes back to the open source project.
+
+{{< case-studies/quote image="/images/case-studies/blackrock/banner4.jpg">}}
+"Typically we make technology choices that we believe are going to be here in 5-10 years' time, in some form. And right now, in this space, Kubernetes feels like the one that's going to be there."
+{{< /case-studies/quote >}}
+
+
There were also questions about service discovery. "You can think of Aladdin as a cloud of services with APIs between them that allows us to build applications rapidly," says Francis. "It's all on a proprietary message bus, which gives us all sorts of advantages but at the same time, how does that play in a third party [platform]?"
+
+
Another issue they had to navigate was that in BlackRock's existing system, the messaging protocol has different instances in the different development, test and production environments. While Kubernetes enables a more DevOps-style model, it didn't make sense for BlackRock. "I think what we are very proud of is that the ability for us to push into production is still incredibly rapid in this [new] infrastructure, but we have the control points in place, and we didn't have to disrupt everything," says Francis. "A lot of the cost of this development was thinking how best to leverage our internal tools. So it was less costly than we actually thought it was going to be."
+
+
The project leveraged tools associated with the messaging bus, for example. "The way that the Kubernetes cluster will talk to our internal messaging platform is through a gateway program, and this gateway program already has built-in checks and throttles," says Morris. "We can use them to control and potentially throttle the requests coming in from Kubernetes's very elastic infrastructure to the production infrastructure. We'll continue to go in that direction. It enables us to scale as we need to from the operational perspective."
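In the abstract, the gateway pattern Morris describes is a choke point that applies a fixed-rate budget to requests arriving from an elastic source. Below is a minimal token-bucket sketch of that idea; the `forward()` stub and the rate limits are assumptions, and this is only an illustration of the throttling concept, not BlackRock's gateway program.

```python
# Token-bucket throttle: elastic Kubernetes workloads upstream, a
# fixed-capacity production system downstream (illustrative only).
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # sustained requests per second
        self.capacity = burst         # short-term burst allowance
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, then try to spend one.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def forward(request):
    # Stand-in for the real call into the production messaging platform.
    return f"forwarded: {request}"

bucket = TokenBucket(rate_per_sec=100, burst=20)  # assumed limits

def handle(request):
    if not bucket.allow():
        raise RuntimeError("throttled: production request budget exhausted")
    return forward(request)
```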
+
+
The solution also had to be complementary with BlackRock's centralized operational support team structure. "The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools," Morris explains. "That means that I don't need to hire more people."
+
+
With those points established, the team created a procedure for the project: "We rolled this out first to a development environment, then moved on to a testing environment and then eventually to two production environments, in that sequential order," says Maskallis. "That drove a lot of our learning curve. We have all these moving parts, the software components on the infrastructure side, the software components with Kubernetes directly, the interconnectivity with the rest of the environment that we operate here at BlackRock, and how we connect all these pieces. If we came across issues, we fixed them, and then moved on to the different environments to replicate that until we eventually ended up in our production environment where this particular cluster is supposed to live."
+
+
The team had weekly one-hour working sessions with all the members (who are located around the world) participating, and smaller breakout or deep-dive meetings focusing on specific technical details. Possible solutions would be reported back to the group and debated the following week. "I think what made it a successful experiment was people had to work to learn, and they shared their experiences with others," says Vice President and Software Developer Fouad Semaan. Then, Francis says, "We gave our engineers the space to do what they're good at. This hasn't been top-down."
+
+{{< case-studies/quote >}}
+"The core infrastructure components of Kubernetes are hooked into our existing orchestration framework, which means that anyone in our support team has both control and visibility to the cluster using the existing operational tools. That means that I don't need to hire more people."
+{{< /case-studies/quote >}}
+
+
They were led by one key axiom: to stay focused and avoid scope creep. This meant that they wouldn't use features that weren't in the core of Kubernetes and Docker. But if there was a real need, they'd build the features themselves. Luckily, Francis says, "Because of the rapidity of the development, a lot of things we thought we would have to build ourselves have been rolled into the core product. [The package manager Helm is one example]. People have similar problems."
+
+
By the end of the 100 days, the app was up and running for internal BlackRock users. The initial capacity of 30 users was hit within hours, and quickly increased to 150. "People were immediately all over it," says Francis. In the next phase of this project, they are planning to scale up the cluster to have more capacity.
+
+
Even more importantly, they now have in-production experience with Kubernetes that they can continue to build on—and a complete framework for rolling out new applications. "We're going to use this infrastructure for lots of other application workloads as time goes on. It's not just data science; it's this style of application that needs the dynamism," says Francis. "Is it the right place to move our core production processes onto? It might be. We're not at a point where we can say yes or no, but we felt that having real production experience with something like Kubernetes at some form and scale would allow us to understand that. I think we're 6-12 months away from making a [large scale] decision. We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues."
+
+
For other big companies considering a project like this, Francis says commitment and dedication are key: "We got the signoff from [senior management] from day one, with the commitment that we were able to get the right people. If I had to isolate what makes something complex like this succeed, I would say senior hands-on people who can actually drive it make a huge difference." With that in place, he adds, "My message to other enterprises like us is you can actually integrate Kubernetes into an existing, well-orchestrated machinery. You don't have to throw out everything you do. And using Kubernetes made a complex problem significantly easier."
diff --git a/content/ko/case-studies/box/index.html b/content/ko/case-studies/box/index.html
index bead8eb01a5bf..058ff7f9a24e4 100644
--- a/content/ko/case-studies/box/index.html
+++ b/content/ko/case-studies/box/index.html
@@ -2,113 +2,99 @@
title: Box Case Study
case_study_styles: true
cid: caseStudies
-css: /css/style_box.css
video: https://www.youtube.com/embed/of45hYbkIZs?autoplay=1
quote: >
Kubernetes has the opportunity to be the new cloud platform. The amount of innovation that's going to come from being able to standardize on Kubernetes as a platform is incredibly exciting - more exciting than anything I've seen in the last 10 years of working on the cloud.
+new_case_study_styles: true
+heading_background: /images/case-studies/box/banner1.jpg
+heading_title_logo: /images/box_logo.png
+subheading: >
+ An Early Adopter Envisions a New Cloud Platform
+case_study_details:
+ - Company: Box
+ - Location: Redwood City, California
+ - Industry: Technology
---
-
-
CASE STUDY:
-
An Early Adopter Envisions
- a New Cloud Platform
-
-
-
-
-
- Company Box Location Redwood City, California Industry Technology
-
-
-
-
-
-
-
-
-
-
Challenge
- Founded in 2005, the enterprise content management company allows its more than 50 million users to manage content in the cloud. Box was built primarily with bare metal inside the company’s own data centers, with a monolithic PHP code base. As the company was expanding globally, it needed to focus on "how we run our workload across many different cloud infrastructures from bare metal to public cloud," says Sam Ghods, Cofounder and Services Architect of Box. "It’s been a huge challenge because of different clouds, especially bare metal, have very different interfaces."
-
-
-
-
-
Solution
- Over the past couple of years, Box has been decomposing its infrastructure into microservices, and became an early adopter of, as well as contributor to, Kubernetes container orchestration. Kubernetes, Ghods says, has allowed Box’s developers to "target a universal set of concepts that are portable across all clouds."
-
-
Impact
- "Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Today, a new microservice takes less than five days to deploy. And we’re working on getting it to an hour."
-
-
-
-
-
-
-
- "We looked at a lot of different options, but Kubernetes really stood out....the fact that on day one it was designed to run on bare metal just as well as Google Cloud meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as well."
- SAM GHOUDS, CO-FOUNDER AND SERVICES ARCHITECT OF BOX
-
-
-
-
-
-
-
In the summer of 2014, Box was feeling the pain of a decade’s worth of hardware and software infrastructure that wasn’t keeping up with the company’s needs.
-
- A platform that allows its more than 50 million users (including governments and big businesses like General Electric) to manage and share content in the cloud, Box was originally a PHP monolith of millions of lines of code built exclusively with bare metal inside of its own data centers. It had already begun to slowly chip away at the monolith, decomposing it into microservices. And "as we’ve been expanding into regions around the globe, and as the public cloud wars have been heating up, we’ve been focusing a lot more on figuring out how we run our workload across many different environments and many different cloud infrastructure providers," says Box Cofounder and Services Architect Sam Ghods. "It’s been a huge challenge thus far because of all these different providers, especially bare metal, have very different interfaces and ways in which you work with them."
- Box’s cloud native journey accelerated that June, when Ghods attended DockerCon. The company had come to the realization that it could no longer run its applications only off bare metal, and was researching containerizing with Docker, virtualizing with OpenStack, and supporting public cloud.
- At that conference, Google announced the release of its Kubernetes container management system, and Ghods was won over. "We looked at a lot of different options, but Kubernetes really stood out, especially because of the incredibly strong team of Borg veterans and the vision of having a completely infrastructure-agnostic way of being able to run cloud software," he says, referencing Google’s internal container orchestrator Borg. "The fact that on day one it was designed to run on bare metal just as well as Google Cloud meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as well."
- Another plus: Ghods liked that Kubernetes has a universal set of API objects like pod, service, replica set and deployment object, which created a consistent surface to build tooling against. "Even PaaS layers like OpenShift or Deis that build on top of Kubernetes still treat those objects as first-class principles," he says. "We were excited about having these abstractions shared across the entire ecosystem, which would result in a lot more momentum than we saw in other potential solutions."
- Box deployed Kubernetes in a cluster in a production data center just six months later. Kubernetes was then still pre-beta, on version 0.11. They started small: The very first thing Ghods’s team ran on Kubernetes was a Box API checker that confirms Box is up. "That was just to write and deploy some software to get the whole pipeline functioning," he says. Next came some daemons that process jobs, which was "nice and safe because if they experienced any interruptions, we wouldn’t fail synchronous incoming requests from customers."
-
-
-
-
-
-
- "As we’ve been expanding into regions around the globe, and as the public cloud wars have been heating up, we’ve been focusing a lot more on figuring out how we [can have Kubernetes help] run our workload across many different environments and many different cloud infrastructure providers."
-
-
-
-
-
- The first live service, which the team could route to and ask for information, was launched a few months later. At that point, Ghods says, "We were comfortable with the stability of the Kubernetes cluster. We started to port some services over, then we would increase the cluster size and port a few more, and that’s ended up to about 100 servers in each data center that are dedicated purely to Kubernetes. And that’s going to be expanding a lot over the next 12 months, probably too many hundreds if not thousands."
- While observing teams who began to use Kubernetes for their microservices, "we immediately saw an uptick in the number of microservices being released," Ghods notes. "There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."
-
"There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."
- Ghods reflects that as early adopters, Box had a different journey from what companies experience now. "We were definitely lock step with waiting for certain things to stabilize or features to get released," he says. "In the early days we were doing a lot of contributions [to components such as kubectl apply] and waiting for Kubernetes to release each of them, and then we’d upgrade, contribute more, and go back and forth several times. The entire project took about 18 months from our first real deployment on Kubernetes to having general availability. If we did that exact same thing today, it would probably be no more than six."
- In any case, Box didn’t have to make too many modifications to Kubernetes for it to work for the company. "The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing (and often legacy) infrastructure," says Ghods, "such as upgrading our base operating system from RHEL6 to RHEL7 or integrating it into Nagios, our monitoring infrastructure. But overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we’ve been running it very successfully on our bare metal infrastructure."
- Perhaps the bigger challenge for Box was a cultural one. "Kubernetes, and cloud native in general, represents a pretty big paradigm shift, and it’s not very incremental," Ghods says. "We’re essentially making this pitch that Kubernetes is going to solve everything because it does things the right way and everything is just suddenly better. But it’s important to keep in mind that it’s not nearly as proven as many other solutions out there. You can’t say how long this or that company took to do it because there just aren’t that many yet. Our team had to really fight for resources because our project was a bit of a moonshot."
-
-
-
-
-
- "The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing [and often legacy] infrastructure....overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we’ve been running it very successfully on our bare metal infrastructure."
-
-
-
-
-
- Having learned from experience, Ghods offers these two pieces of advice for companies going through similar challenges:
-
1. Deliver early and often.
Service discovery was a huge problem for Box, and the team had to decide whether to build an interim solution or wait for Kubernetes to natively satisfy Box’s unique requirements. After much debate, "we just started focusing on delivering something that works, and then dealing with potentially migrating to a more native solution later," Ghods says. "The above-all-else target for the team should always be to serve real production use cases on the infrastructure, no matter how trivial. This helps keep the momentum going both for the team itself and for the organizational perception of the project."
-
2. Keep an open mind about what your company has to abstract away from developers and what it doesn’t.
Early on, the team built an abstraction on top of Docker files to help ensure that images had the right security updates.
- This turned out to be superfluous work, since container images are considered immutable and you can easily scan them post-build to ensure they do not contain vulnerabilities. Because managing infrastructure through containerization is such a discontinuous leap, it’s better to start by interacting directly with the native tools and learning their unique advantages and caveats. An abstraction should be built only after a practical need for it arises.
- In the end, the impact has been powerful. "Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Now a new microservice takes less than five days to deploy. And we’re working on getting it to an hour. Granted, much of that six months was due to how broken our systems were, but bare metal is intrinsically a difficult platform to support unless you have a system like Kubernetes to help manage it."
- By Ghods’s estimate, Box is still several years away from his goal of being a 90-plus percent Kubernetes shop. "We’re very far along on having a mission-critical, stable Kubernetes deployment that provides a lot of value," he says. "Right now about five percent of all of our compute runs on Kubernetes, and I think in the next six months we’ll likely be between 20 to 50 percent. We’re working hard on enabling all stateless service use cases, and shift our focus to stateful services after that."
-
-
-
-
-
- "Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. '...because it’s a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure.'"
-
-
-
-
-
- In fact, that’s what he envisions across the industry: Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. Kubernetes provides an API consistent across different cloud platforms including bare metal, and "I don’t think people have seen the full potential of what’s possible when you can program against one single interface," he says. "The same way AWS changed infrastructure so that you don’t have to think about servers or cabinets or networking equipment anymore, Kubernetes enables you to focus exclusively on the containers that you’re running, which is pretty exciting. That’s the vision."
- Ghods points to projects that are already in development or recently released for Kubernetes as a cloud platform: cluster federation, the Dashboard UI, and CoreOS’s etcd operator. "I honestly believe it’s the most exciting thing I’ve seen in cloud infrastructure," he says, "because it’s a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure."
- Box, with its early decision to use bare metal, embarked on its Kubernetes journey out of necessity. But Ghods says that even if companies don’t have to be agnostic about cloud providers today, Kubernetes may soon become the industry standard, as more and more tooling and extensions are built around the API.
- "The same way it doesn’t make sense to deviate from Linux because it’s such a standard," Ghods says, "I think Kubernetes is going down the same path. It is still early days—the documentation still needs work and the user experience for writing and publishing specs to the Kubernetes clusters is still rough. When you’re on the cutting edge you can expect to bleed a little. But the bottom line is, this is where the industry is going. Three to five years from now it’s really going to be shocking if you run your infrastructure any other way."
-
-
+
Challenge
+
+
Founded in 2005, the enterprise content management company allows its more than 50 million users to manage content in the cloud. Box was built primarily with bare metal inside the company's own data centers, with a monolithic PHP code base. As the company was expanding globally, it needed to focus on "how we run our workload across many different cloud infrastructures from bare metal to public cloud," says Sam Ghods, Cofounder and Services Architect of Box. "It's been a huge challenge because different clouds, especially bare metal, have very different interfaces."
+
+
Solution
+
+
Over the past couple of years, Box has been decomposing its infrastructure into microservices, and became an early adopter of, as well as contributor to, Kubernetes container orchestration. Kubernetes, Ghods says, has allowed Box's developers to "target a universal set of concepts that are portable across all clouds."
+
+
Impact
+
+
"Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Today, a new microservice takes less than five days to deploy. And we're working on getting it to an hour."
+
+{{< case-studies/quote author="SAM GHOUDS, CO-FOUNDER AND SERVICES ARCHITECT OF BOX" >}}
+"We looked at a lot of different options, but Kubernetes really stood out....the fact that on day one it was designed to run on bare metal just as well as Google Cloud meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as well."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+In the summer of 2014, Box was feeling the pain of a decade's worth of hardware and software infrastructure that wasn't keeping up with the company's needs.
+{{< /case-studies/lead >}}
+
+
A platform that allows its more than 50 million users (including governments and big businesses like General Electric) to manage and share content in the cloud, Box was originally a PHP monolith of millions of lines of code built exclusively with bare metal inside of its own data centers. It had already begun to slowly chip away at the monolith, decomposing it into microservices. And "as we've been expanding into regions around the globe, and as the public cloud wars have been heating up, we've been focusing a lot more on figuring out how we run our workload across many different environments and many different cloud infrastructure providers," says Box Cofounder and Services Architect Sam Ghods. "It's been a huge challenge thus far because all these different providers, especially bare metal, have very different interfaces and ways in which you work with them."
+
+
Box's cloud native journey accelerated that June, when Ghods attended DockerCon. The company had come to the realization that it could no longer run its applications only off bare metal, and was researching containerizing with Docker, virtualizing with OpenStack, and supporting public cloud.
+
+
At that conference, Google announced the release of its Kubernetes container management system, and Ghods was won over. "We looked at a lot of different options, but Kubernetes really stood out, especially because of the incredibly strong team of Borg veterans and the vision of having a completely infrastructure-agnostic way of being able to run cloud software," he says, referencing Google's internal container orchestrator Borg. "The fact that on day one it was designed to run on bare metal just as well as Google Cloud meant that we could actually migrate to it inside of our data centers, and then use those same tools and concepts to run across public cloud providers as well."
+
+
Another plus: Ghods liked that Kubernetes has a universal set of API objects like pod, service, replica set and deployment object, which created a consistent surface to build tooling against. "Even PaaS layers like OpenShift or Deis that build on top of Kubernetes still treat those objects as first-class principles," he says. "We were excited about having these abstractions shared across the entire ecosystem, which would result in a lot more momentum than we saw in other potential solutions."
+
+
Box deployed Kubernetes in a cluster in a production data center just six months later. Kubernetes was then still pre-beta, on version 0.11. They started small: The very first thing Ghods's team ran on Kubernetes was a Box API checker that confirms Box is up. "That was just to write and deploy some software to get the whole pipeline functioning," he says. Next came some daemons that process jobs, which was "nice and safe because if they experienced any interruptions, we wouldn't fail synchronous incoming requests from customers."
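A checker like the one Ghods describes can be a few lines: poll a health endpoint on a schedule and log or export the result. The sketch below assumes a hypothetical status URL and the `requests` library; it is a stand-in for Box's internal tool, not its actual code.

```python
# Tiny "is the API up?" checker, suitable to run as a Deployment or CronJob.
# The status URL is hypothetical; this is a stand-in for Box's internal tool.
import time
import requests

STATUS_URL = "https://api.example.com/health"  # hypothetical endpoint

while True:
    try:
        ok = requests.get(STATUS_URL, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    print(f"api_up={int(ok)}", flush=True)  # scrape or alert on this signal
    time.sleep(30)
```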
+
+{{< case-studies/quote image="/images/case-studies/box/banner3.jpg">}}
+"As we've been expanding into regions around the globe, and as the public cloud wars have been heating up, we've been focusing a lot more on figuring out how we [can have Kubernetes help] run our workload across many different environments and many different cloud infrastructure providers."
+{{< /case-studies/quote >}}
+
+
The first live service, which the team could route to and ask for information, was launched a few months later. At that point, Ghods says, "We were comfortable with the stability of the Kubernetes cluster. We started to port some services over, then we would increase the cluster size and port a few more, and that's ended up at about 100 servers in each data center that are dedicated purely to Kubernetes. And that's going to be expanding a lot over the next 12 months, probably to many hundreds if not thousands."
+
+
While observing teams who began to use Kubernetes for their microservices, "we immediately saw an uptick in the number of microservices being released," Ghods notes. "There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."
+
+{{< case-studies/lead >}}
+"There was clearly a pent-up demand for a better way of building software through microservices, and the increase in agility helped our developers be more productive and make better architectural choices."
+{{< /case-studies/lead >}}
+
+
Ghods reflects that as early adopters, Box had a different journey from what companies experience now. "We were definitely lock step with waiting for certain things to stabilize or features to get released," he says. "In the early days we were doing a lot of contributions [to components such as kubectl apply] and waiting for Kubernetes to release each of them, and then we'd upgrade, contribute more, and go back and forth several times. The entire project took about 18 months from our first real deployment on Kubernetes to having general availability. If we did that exact same thing today, it would probably be no more than six."
+
+
In any case, Box didn't have to make too many modifications to Kubernetes for it to work for the company. "The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing (and often legacy) infrastructure," says Ghods, "such as upgrading our base operating system from RHEL6 to RHEL7 or integrating it into Nagios, our monitoring infrastructure. But overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we've been running it very successfully on our bare metal infrastructure."
+
+
Perhaps the bigger challenge for Box was a cultural one. "Kubernetes, and cloud native in general, represents a pretty big paradigm shift, and it's not very incremental," Ghods says. "We're essentially making this pitch that Kubernetes is going to solve everything because it does things the right way and everything is just suddenly better. But it's important to keep in mind that it's not nearly as proven as many other solutions out there. You can't say how long this or that company took to do it because there just aren't that many yet. Our team had to really fight for resources because our project was a bit of a moonshot."
+
+{{< case-studies/quote image="/images/case-studies/box/banner4.jpg">}}
+"The vast majority of the work our team has done to implement Kubernetes at Box has been making it work inside of our existing [and often legacy] infrastructure....overall Kubernetes has been remarkably flexible with fitting into many of our constraints, and we've been running it very successfully on our bare metal infrastructure."
+{{< /case-studies/quote >}}
+
+
Having learned from experience, Ghods offers these two pieces of advice for companies going through similar challenges:
+
+{{< case-studies/lead >}}
+1. Deliver early and often.
+{{< /case-studies/lead >}}
+
+
Service discovery was a huge problem for Box, and the team had to decide whether to build an interim solution or wait for Kubernetes to natively satisfy Box's unique requirements. After much debate, "we just started focusing on delivering something that works, and then dealing with potentially migrating to a more native solution later," Ghods says. "The above-all-else target for the team should always be to serve real production use cases on the infrastructure, no matter how trivial. This helps keep the momentum going both for the team itself and for the organizational perception of the project."
+
+{{< case-studies/lead >}}
+2. Keep an open mind about what your company has to abstract away from developers and what it doesn't.
+{{< /case-studies/lead >}}
+
+
Early on, the team built an abstraction on top of Dockerfiles to help ensure that images had the right security updates. This turned out to be superfluous work, since container images are considered immutable and you can easily scan them post-build to ensure they do not contain vulnerabilities. Because managing infrastructure through containerization is such a discontinuous leap, it's better to start by interacting directly with the native tools and learning their unique advantages and caveats. An abstraction should be built only after a practical need for it arises.
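As a concrete example of the post-build approach, a CI step can scan the finished image and fail the build on known vulnerabilities. The sketch below wraps the open-source scanner Trivy from Python; Trivy is just one example of such a scanner (not something Box used), and the image name is hypothetical.

```python
# Fail a CI build if the freshly built image carries HIGH/CRITICAL CVEs.
# Trivy is just one example of a post-build scanner; the image is hypothetical.
import subprocess
import sys

IMAGE = "registry.example.com/myapp:build-123"  # hypothetical image

result = subprocess.run(
    ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", IMAGE]
)
sys.exit(result.returncode)  # a non-zero exit code fails the pipeline step
```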
+
+
In the end, the impact has been powerful. "Before Kubernetes," Ghods says, "our infrastructure was so antiquated it was taking us more than six months to deploy a new microservice. Now a new microservice takes less than five days to deploy. And we're working on getting it to an hour. Granted, much of that six months was due to how broken our systems were, but bare metal is intrinsically a difficult platform to support unless you have a system like Kubernetes to help manage it."
+
+
By Ghods's estimate, Box is still several years away from his goal of being a 90-plus percent Kubernetes shop. "We're very far along on having a mission-critical, stable Kubernetes deployment that provides a lot of value," he says. "Right now about five percent of all of our compute runs on Kubernetes, and I think in the next six months we'll likely be between 20 to 50 percent. We're working hard on enabling all stateless service use cases, and will shift our focus to stateful services after that."
+
+{{< case-studies/quote >}}
+"Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. '...because it's a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure.'"
+{{< /case-studies/quote >}}
+
+
In fact, that's what he envisions across the industry: Ghods predicts that Kubernetes has the opportunity to be the new cloud platform. Kubernetes provides an API consistent across different cloud platforms including bare metal, and "I don't think people have seen the full potential of what's possible when you can program against one single interface," he says. "The same way AWS changed infrastructure so that you don't have to think about servers or cabinets or networking equipment anymore, Kubernetes enables you to focus exclusively on the containers that you're running, which is pretty exciting. That's the vision."
+
+
Ghods points to projects that are already in development or recently released for Kubernetes as a cloud platform: cluster federation, the Dashboard UI, and CoreOS's etcd operator. "I honestly believe it's the most exciting thing I've seen in cloud infrastructure," he says, "because it's a never-before-seen level of automation and intelligence surrounding infrastructure that is portable and agnostic to every way you can run your infrastructure."
+
+
Box, with its early decision to use bare metal, embarked on its Kubernetes journey out of necessity. But Ghods says that even if companies don't have to be agnostic about cloud providers today, Kubernetes may soon become the industry standard, as more and more tooling and extensions are built around the API.
+
+
"The same way it doesn't make sense to deviate from Linux because it's such a standard," Ghods says, "I think Kubernetes is going down the same path. It is still early days—the documentation still needs work and the user experience for writing and publishing specs to the Kubernetes clusters is still rough. When you're on the cutting edge you can expect to bleed a little. But the bottom line is, this is where the industry is going. Three to five years from now it's really going to be shocking if you run your infrastructure any other way."
diff --git a/content/ko/case-studies/buffer/index.html b/content/ko/case-studies/buffer/index.html
index 333db6a74eb32..bcf089644590b 100644
--- a/content/ko/case-studies/buffer/index.html
+++ b/content/ko/case-studies/buffer/index.html
@@ -1,112 +1,83 @@
---
title: Buffer Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_buffer.css
----
-
-
CASE STUDY:
-
Making Deployments Easy for a Small, Distributed Team
-
-
+new_case_study_styles: true
+heading_background: /images/case-studies/buffer/banner3.jpg
+heading_title_logo: /images/buffer.png
+subheading: >
+ Making Deployments Easy for a Small, Distributed Team
+case_study_details:
+ - Company: Buffer
+ - Location: Around the World
+ - Industry: Social Media Technology
+---
-
- Company Buffer Location Around the World Industry Social Media Technology
-
+
Challenge
-
+
With a small but fully distributed team of 80 working across almost a dozen time zones, Buffer—which offers social media management to agencies and marketers—was looking to solve its "classic monolithic code base problem," says Architect Dan Farrelly. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary."
-
-
-
+
Solution
-
Challenge
- With a small but fully distributed team of 80 working across almost a dozen time zones, Buffer—which offers social media management to agencies and marketers—was looking to solve its "classic monolithic code base problem," says Architect Dan Farrelly. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary."
-
+
Embracing containerization, Buffer moved its infrastructure from Amazon Web Services' Elastic Beanstalk to Docker on AWS, orchestrated with Kubernetes.
-
-
Solution
- Embracing containerization, Buffer moved its infrastructure from Amazon Web Services’ Elastic Beanstalk to Docker on AWS, orchestrated with Kubernetes.
-
-
Impact
- The new system "leveled up our ability with deployment and rolling out new changes," says Farrelly. "Building something on your computer and knowing that it’s going to work has shortened things up a lot. Our feedback cycles are a lot faster now too."
-
-
-
-
-
-
- "It’s amazing that we can use the Kubernetes solution off the shelf with our team. And it just keeps getting better. Before we even know that we need something, it’s there in the next release or it’s coming in the next few months."
- DAN FARRELLY, BUFFER ARCHITECT
-
-
-
-
-
-
Dan Farrelly uses a carpentry analogy to explain the problem his company, Buffer, began having as its team of developers grew over the past few years.
-
- "If you’re building a table by yourself, it’s fine," the company’s architect says. "If you bring in a second person to work on the table, maybe that person can start sanding the legs while you’re sanding the top. But when you bring a third or fourth person in, someone should probably work on a different table." Needing to work on more and more different tables led Buffer on a path toward microservices and containerization made possible by Kubernetes.
- Since around 2012, Buffer had already been using Elastic Beanstalk, the orchestration service for deploying infrastructure offered by Amazon Web Services. "We were deploying a single monolithic PHP application, and it was the same application across five or six environments," says Farrelly. "We were very much a product-driven company. It was all about shipping new features quickly and getting things out the door, and if something was not broken, we didn’t spend too much time on it. If things were getting a little bit slow, we’d maybe use a faster server or just scale up one instance, and it would be good enough. We’d move on."
- But things came to a head in 2016. With the growing number of committers on staff, Farrelly and Buffer’s then-CTO, Sunil Sadasivan, decided it was time to re-architect and rethink their infrastructure. "It was a classic monolithic code base problem," says Farrelly.
Some of the company’s team was already successfully using Docker in their development environment, but the only application running on Docker in production was a marketing website that didn’t see real user traffic. They wanted to go further with Docker, and the next step was looking at options for orchestration.
-
-
-
-
-
- And all the things Kubernetes did well suited Buffer’s needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]."
-
-
-
-
-
- First they considered Mesosphere, DC/OS and Amazon Elastic Container Service (which their data systems team was already using for some data pipeline jobs). While they were impressed by these offerings, they ultimately went with Kubernetes. "We run on AWS still, so spinning up, creating services and creating load balancers on demand for us without having to configure them manually was a great way for our team to get into this," says Farrelly. "We didn’t need to figure out how to configure this or that, especially coming from a former Elastic Beanstalk environment that gave us an automatically-configured load balancer. I really liked Kubernetes’ controls of the command line. It just took care of ports. It was a lot more flexible. Kubernetes was designed for doing what it does, so it does it very well."
- And all the things Kubernetes did well suited Buffer’s needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]."
- Above all, it provided a powerful solution for one of the company’s most distinguishing characteristics: their remote team that’s spread across a dozen different time zones. "The people with deep knowledge of our infrastructure live in time zones different from our peak traffic time zones, and most of our product engineers live in other places," says Farrelly. "So we really wanted something where anybody could get a grasp of the system early on and utilize it, and not have to worry that the deploy engineer is asleep. Otherwise people would sit around for 12 to 24 hours for something. It’s been really cool to see people moving much faster."
-
- With a relatively small engineering team—just 25 people, and only a handful working on infrastructure, with the majority front-end developers—Buffer needed "something robust for them to deploy whatever they wanted," says Farrelly. Before, "it was only a couple of people who knew how to set up everything in the old way. With this system, it was easy to review documentation and get something out extremely quickly. It lowers the bar for us to get everything in production. We don't have the big team to build all these tools or manage the infrastructure like other larger companies might."
-
-
-
-
-
- "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it’s out the door."
-
-
-
-
-
- To help with this, Buffer developers wrote a deploy bot that wraps the Kubernetes deploy process and can be used by every team. "Before, our data analysts would update, say, a Python analysis script and have to wait for the lead on that team to click the button and deploy it," Farrelly explains. "Now our data analysts can make a change, enter a Slack command, ‘/deploy,’ and it goes out instantly. They don’t need to wait on these slow turnaround times. They don’t even know where it’s running; it doesn’t matter."
-
- One of the first applications the team built from scratch using Kubernetes was a new image resizing service. As a social media management tool that allows marketing teams to collaborate on posts and send updates across multiple social media profiles and networks, Buffer has to be able to resize photographs as needed to meet the varying limitations of size and format posed by different social networks. "We always had these hacked together solutions," says Farrelly.
-
- To create this new service, one of the senior product engineers was assigned to learn Docker and Kubernetes, then build the service, test it, deploy it and monitor it—which he was able to do relatively quickly. "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it’s out the door."
-
- Plus, unlike with their old system, they could scale things horizontally with one command. "As we rolled it out," Farrelly says, "we could anticipate and just click a button. This allowed us to deal with the demand that our users were placing on the system and easily scale it to handle it."
-
- Another thing they weren’t able to do before was a canary deploy. This new capability "made us so much more confident in deploying big changes," says Farrelly. "Before, it took a lot of testing, which is still good, but it was also a lot of ‘fingers crossed.’ And this is something that gets run 800,000 times a day, the core of our business. If it doesn’t work, our business doesn’t work. In a Kubernetes world, I can do a canary deploy to test it for 1 percent and I can shut it down very quickly if it isn’t working. This has leveled up our ability to deploy and roll out new changes quickly while reducing risk."
-
-
-
-
-
-
- "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We’re a relatively small team that’s actually running Kubernetes, and we’ve never run anything like it before. So it’s more approachable than you might think. That’s the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this way."
-
-
-
-
-
- By October 2016, 54 percent of Buffer’s traffic was going through their Kubernetes cluster. "There’s a lot of our legacy functionality that still runs alright, and those parts might move to Kubernetes or stay in our old setup forever," says Farrelly. But the company made the commitment at that time that going forward, "all new development, all new features, will be running on Kubernetes."
-
- The plan for 2017 is to move all the legacy applications to a new Kubernetes cluster, and run everything they’ve pulled out of their old infrastructure, plus the new services they’re developing in Kubernetes, on another cluster. "I want to bring all the benefits that we’ve seen on our early services to everyone on the team," says Farrelly.
-
-
For Buffer’s engineers, it’s an exciting process. "Every time we’re deploying a new service, we need to figure out: OK, what’s the architecture? How do these services communicate? What’s the best way to build this service?" Farrelly says. "And then we use the different features that Kubernetes has to glue all the pieces together. It’s enabling us to experiment as we’re learning how to design a service-oriented architecture. Before, we just wouldn’t have been able to do it. This is actually giving us a blank white board so we can do whatever we want on it."
-
- Part of that blank slate is the flexibility that Kubernetes offers should the time come when Buffer may want or need to change its cloud. "It’s cloud agnostic so maybe one day we could switch to Google or somewhere else," Farrelly says. "We’re very deep in Amazon but it’s nice to know we could move away if we need to."
-
- At this point, the team at Buffer can’t imagine running their infrastructure any other way—and they’re happy to spread the word. "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We’re a relatively small team that’s actually running Kubernetes, and we’ve never run anything like it before. So it’s more approachable than you might think. That’s the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this way."
-
-
-
+
+
The new system "leveled up our ability with deployment and rolling out new changes," says Farrelly. "Building something on your computer and knowing that it's going to work has shortened things up a lot. Our feedback cycles are a lot faster now too."
+
+{{< case-studies/quote author="DAN FARRELLY, BUFFER ARCHITECT" >}}
+"It's amazing that we can use the Kubernetes solution off the shelf with our team. And it just keeps getting better. Before we even know that we need something, it's there in the next release or it's coming in the next few months."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+Dan Farrelly uses a carpentry analogy to explain the problem his company, Buffer, began having as its team of developers grew over the past few years.
+{{< /case-studies/lead >}}
+
+
"If you're building a table by yourself, it's fine," the company's architect says. "If you bring in a second person to work on the table, maybe that person can start sanding the legs while you're sanding the top. But when you bring a third or fourth person in, someone should probably work on a different table." Needing to work on more and more different tables led Buffer on a path toward microservices and containerization made possible by Kubernetes.
+
+
Since around 2012, Buffer had already been using Elastic Beanstalk, the orchestration service for deploying infrastructure offered by Amazon Web Services. "We were deploying a single monolithic PHP application, and it was the same application across five or six environments," says Farrelly. "We were very much a product-driven company. It was all about shipping new features quickly and getting things out the door, and if something was not broken, we didn't spend too much time on it. If things were getting a little bit slow, we'd maybe use a faster server or just scale up one instance, and it would be good enough. We'd move on."
+
+
But things came to a head in 2016. With the growing number of committers on staff, Farrelly and Buffer's then-CTO, Sunil Sadasivan, decided it was time to re-architect and rethink their infrastructure. "It was a classic monolithic code base problem," says Farrelly.
+
Some of the company's team was already successfully using Docker in their development environment, but the only application running on Docker in production was a marketing website that didn't see real user traffic. They wanted to go further with Docker, and the next step was looking at options for orchestration.
+
+{{< case-studies/quote image="/images/case-studies/buffer/banner1.jpg" >}}
+And all the things Kubernetes did well suited Buffer's needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]."
+{{< /case-studies/quote >}}
+
+
First they considered Mesosphere, DC/OS and Amazon Elastic Container Service (which their data systems team was already using for some data pipeline jobs). While they were impressed by these offerings, they ultimately went with Kubernetes. "We run on AWS still, so spinning up, creating services and creating load balancers on demand for us without having to configure them manually was a great way for our team to get into this," says Farrelly. "We didn't need to figure out how to configure this or that, especially coming from a former Elastic Beanstalk environment that gave us an automatically-configured load balancer. I really liked Kubernetes' controls of the command line. It just took care of ports. It was a lot more flexible. Kubernetes was designed for doing what it does, so it does it very well."
+
+
And all the things Kubernetes did well suited Buffer's needs. "We wanted to have the kind of liquid infrastructure where a developer could create an app and deploy it and scale it horizontally as necessary," says Farrelly. "We quickly used some scripts to set up a couple of test clusters, we built some small proof-of-concept applications in containers, and we deployed things within an hour. We had very little experience in running containers in production. It was amazing how quickly we could get a handle on it [Kubernetes]."
+
+
Above all, it provided a powerful solution for one of the company's most distinguishing characteristics: their remote team that's spread across a dozen different time zones. "The people with deep knowledge of our infrastructure live in time zones different from our peak traffic time zones, and most of our product engineers live in other places," says Farrelly. "So we really wanted something where anybody could get a grasp of the system early on and utilize it, and not have to worry that the deploy engineer is asleep. Otherwise people would sit around for 12 to 24 hours for something. It's been really cool to see people moving much faster."
+
+
With a relatively small engineering team—just 25 people, and only a handful working on infrastructure, most of them front-end developers—Buffer needed "something robust for them to deploy whatever they wanted," says Farrelly. Before, "it was only a couple of people who knew how to set up everything in the old way. With this system, it was easy to review documentation and get something out extremely quickly. It lowers the bar for us to get everything in production. We don't have the big team to build all these tools or manage the infrastructure like other larger companies might."
+
+{{< case-studies/quote image="/images/case-studies/buffer/banner4.jpg" >}}
+"In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it's out the door."
+{{< /case-studies/quote >}}
+
+
To help with this, Buffer developers wrote a deploy bot that wraps the Kubernetes deploy process and can be used by every team. "Before, our data analysts would update, say, a Python analysis script and have to wait for the lead on that team to click the button and deploy it," Farrelly explains. "Now our data analysts can make a change, enter a Slack command, '/deploy,' and it goes out instantly. They don't need to wait on these slow turnaround times. They don't even know where it's running; it doesn't matter."
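The bot itself can be quite small. Below is a minimal sketch of the idea, assuming a Flask endpoint registered as a Slack slash command that shells out to kubectl; the route, service allowlist, and registry name are hypothetical, not Buffer's actual implementation.

```python
# Minimal sketch of a Slack "/deploy" bot (hypothetical; not Buffer's code).
# Slack posts form data to this endpoint; the bot shells out to kubectl.
import shlex
import subprocess

from flask import Flask, request

app = Flask(__name__)

# Hypothetical allowlist of services the bot may roll out.
KNOWN_SERVICES = {"image-resizer", "analysis-scripts"}

@app.route("/deploy", methods=["POST"])
def deploy():
    # Slack sends the text typed after the command, e.g. "/deploy image-resizer v42".
    args = shlex.split(request.form.get("text", ""))
    if len(args) != 2 or args[0] not in KNOWN_SERVICES:
        return "Usage: /deploy <service> <version>"
    service, version = args
    image = f"registry.example.com/{service}:{version}"  # hypothetical registry
    result = subprocess.run(
        ["kubectl", "set", "image", f"deployment/{service}", f"{service}={image}"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return f"Deploy failed: {result.stderr}"
    return f"Rolling out {image}"
```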
+
+
One of the first applications the team built from scratch using Kubernetes was a new image resizing service. As a social media management tool that allows marketing teams to collaborate on posts and send updates across multiple social media profiles and networks, Buffer has to be able to resize photographs as needed to meet the varying limitations of size and format posed by different social networks. "We always had these hacked together solutions," says Farrelly.
+
+
To create this new service, one of the senior product engineers was assigned to learn Docker and Kubernetes, then build the service, test it, deploy it and monitor it—which he was able to do relatively quickly. "In our old way of working, the feedback loop was a lot longer, and it was delicate because if you deployed something, the risk was high to potentially break something else," Farrelly says. "With the kind of deploys that we built around Kubernetes, we were able to detect bugs and fix them, and get them deployed super fast. The second someone is fixing [a bug], it's out the door."
+
+
Plus, unlike with their old system, they could scale things horizontally with one command. "As we rolled it out," Farrelly says, "we could anticipate and just click a button. This allowed us to deal with the demand that our users were placing on the system and easily scale it to handle it."
+
+
Another thing they weren't able to do before was a canary deploy. This new capability "made us so much more confident in deploying big changes," says Farrelly. "Before, it took a lot of testing, which is still good, but it was also a lot of 'fingers crossed.' And this is something that gets run 800,000 times a day, the core of our business. If it doesn't work, our business doesn't work. In a Kubernetes world, I can do a canary deploy to test it for 1 percent and I can shut it down very quickly if it isn't working. This has leveled up our ability to deploy and roll out new changes quickly while reducing risk."
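One common way to get the "1 percent" behavior Farrelly describes is to run a canary Deployment next to the stable one behind the same Service, weighting traffic by replica count. A rough sketch using the official Kubernetes Python client follows; the deployment names and the replica math are illustrative assumptions, not Buffer's actual setup.

```python
# Illustrative canary: stable and canary Deployments share one Service
# selector, so traffic splits roughly by replica ratio (assumed names).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
NAMESPACE = "default"  # assumption

def set_replicas(deployment: str, replicas: int) -> None:
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=NAMESPACE,
        body={"spec": {"replicas": replicas}},
    )

# Roughly 1% canary: 1 canary replica alongside 99 stable replicas.
set_replicas("updates-service-canary", 1)
set_replicas("updates-service-stable", 99)

# If the canary misbehaves, shut it down quickly:
# set_replicas("updates-service-canary", 0)
```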
+
+{{< case-studies/quote >}}
+"If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We're a relatively small team that's actually running Kubernetes, and we've never run anything like it before. So it's more approachable than you might think. That's the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this way."
+{{< /case-studies/quote >}}
+
+
By October 2016, 54 percent of Buffer's traffic was going through their Kubernetes cluster. "There's a lot of our legacy functionality that still runs alright, and those parts might move to Kubernetes or stay in our old setup forever," says Farrelly. But the company made the commitment at that time that going forward, "all new development, all new features, will be running on Kubernetes."
+
+
The plan for 2017 is to move all the legacy applications to a new Kubernetes cluster, and run everything they've pulled out of their old infrastructure, plus the new services they're developing in Kubernetes, on another cluster. "I want to bring all the benefits that we've seen on our early services to everyone on the team," says Farrelly.
+
+{{< case-studies/lead >}}
+For Buffer's engineers, it's an exciting process. "Every time we're deploying a new service, we need to figure out: OK, what's the architecture? How do these services communicate? What's the best way to build this service?" Farrelly says. "And then we use the different features that Kubernetes has to glue all the pieces together. It's enabling us to experiment as we're learning how to design a service-oriented architecture. Before, we just wouldn't have been able to do it. This is actually giving us a blank white board so we can do whatever we want on it."
+{{< /case-studies/lead >}}
+
+
Part of that blank slate is the flexibility that Kubernetes offers should the time come when Buffer may want or need to change its cloud. "It's cloud agnostic so maybe one day we could switch to Google or somewhere else," Farrelly says. "We're very deep in Amazon but it's nice to know we could move away if we need to."
+
+
At this point, the team at Buffer can't imagine running their infrastructure any other way—and they're happy to spread the word. "If you want to run containers in production, with nearly the power that Google uses internally, this [Kubernetes] is a great way to do that," Farrelly says. "We're a relatively small team that's actually running Kubernetes, and we've never run anything like it before. So it's more approachable than you might think. That's the one big thing that I tell people who are experimenting with it. Pick a couple of things, roll it out, kick the tires on this for a couple of months and see how much it can handle. You start learning a lot this way."
diff --git a/content/ko/case-studies/capital-one/index.html b/content/ko/case-studies/capital-one/index.html
index f95fb2acc703b..7474b71b1020c 100644
--- a/content/ko/case-studies/capital-one/index.html
+++ b/content/ko/case-studies/capital-one/index.html
@@ -2,95 +2,60 @@
title: Capital One Case Study
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/capitalone/banner1.jpg
+heading_title_logo: /images/capitalone-logo.png
+subheading: >
+ Supporting Fast Decisioning Applications with Kubernetes
+case_study_details:
+ - Company: Capital One
+ - Location: McLean, Virginia
+ - Industry: Retail banking
---
-
-
-CASE STUDY:
-Supporting Fast Decisioning Applications with Kubernetes
+
Challenge
+
+
The team set out to build a provisioning platform for Capital One applications deployed on AWS that use streaming, big-data decisioning, and machine learning. One of these applications handles millions of transactions a day; some deal with critical functions like fraud detection and credit decisioning. The key considerations: resilience and speed—as well as full rehydration of the cluster from base AMIs.
-
+
Solution
-
+
The decision to run Kubernetes "is very strategic for us," says John Swift, Senior Director Software Engineering. "We use Kubernetes as a substrate or an operating system, if you will. There's a degree of affinity in our product development."
-
- Company Capital One Location McLean, Virginia Industry Retail banking
-
+
Impact
-
-
-
-
-
Challenge
- The team set out to build a provisioning platform for Capital One applications deployed on AWS that use streaming, big-data decisioning, and machine learning. One of these applications handles millions of transactions a day; some deal with critical functions like fraud detection and credit decisioning. The key considerations: resilience and speed—as well as full rehydration of the cluster from base AMIs.
+
"Kubernetes is a significant productivity multiplier," says Lead Software Engineer Keith Gasser, adding that to run the platform without Kubernetes would "easily see our costs triple, quadruple what they are now for the amount of pure AWS expense." Time to market has been improved as well: "Now, a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer." Deployments increased by several orders of magnitude. Plus, the rehydration/cluster-rebuild process, which took a significant part of a day to do manually, now takes a couple hours with Kubernetes automation and declarative configuration.
- The decision to run Kubernetes "is very strategic for us," says John Swift, Senior Director Software Engineering. "We use Kubernetes as a substrate or an operating system, if you will. There’s a degree of affinity in our product development."
-
+"With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before."
+{{< /case-studies/quote >}}
-
+
As a top 10 U.S. retail bank, Capital One has applications that handle millions of transactions a day. Big-data decisioning—for fraud detection, credit approvals and beyond—is core to the business. To support the teams that build applications with those functions for the bank, the cloud team led by Senior Director Software Engineering John Swift embraced Kubernetes for its provisioning platform. "Kubernetes and its entire ecosystem are very strategic for us," says Swift. "We use Kubernetes as a substrate or an operating system, if you will. There's a degree of affinity in our product development."
-
Impact
- "Kubernetes is a significant productivity multiplier," says Lead Software Engineer Keith Gasser, adding that to run the platform without Kubernetes would "easily see our costs triple, quadruple what they are now for the amount of pure AWS expense." Time to market has been improved as well: "Now, a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer." Deployments increased by several orders of magnitude. Plus, the rehydration/cluster-rebuild process, which took a significant part of a day to do manually, now takes a couple hours with Kubernetes automation and declarative configuration.
-
-
-
-
-
-
-
-
-
-"With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before." — Jamil Jadallah, Scrum Master
-
-
-
-
-
-
- As a top 10 U.S. retail bank, Capital One has applications that handle millions of transactions a day. Big-data decisioning—for fraud detection, credit approvals and beyond—is core to the business. To support the teams that build applications with those functions for the bank, the cloud team led by Senior Director Software Engineering John Swift embraced Kubernetes for its provisioning platform. "Kubernetes and its entire ecosystem are very strategic for us," says Swift. "We use Kubernetes as a substrate or an operating system, if you will. There’s a degree of affinity in our product development."
- Almost two years ago, the team embarked on this journey by first working with Docker. Then came Kubernetes. "We wanted to put streaming services into Kubernetes as one feature of the workloads for fast decisioning, and to be able to do batch alongside it," says Lead Software Engineer Keith Gasser. "Once the data is streamed and batched, there are so many tool sets in Flink that we use for decisioning. We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."
-
-
-
-
-
-
-
- "We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."
-
-
-
-
-
-
- In this first year, the impact has already been great. "Time to market is really huge for us," says Gasser. "Especially with fraud, you have to be very nimble in the way you respond to threats in the marketplace—being able to add and push new rules, detect new patterns of behavior, detect anomalies in account and transaction flows." With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."
- Teams now have the tools to be autonomous in their deployments, and as a result, deployments have increased by two orders of magnitude. "And that was with just seven dedicated resources, without needing a whole group sitting there watching everything," says Scrum Master Jamil Jadallah. "That’s a huge cost savings. With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before."
-
-
-
-
-
- With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."
-
-
-
-
-
- Kubernetes has also been a great time-saver for Capital One’s required period "rehydration" of clusters from base AMIs. To minimize the attack vulnerability profile for applications in the cloud, "Our entire clusters get rebuilt from scratch periodically, with new fresh instances and virtual server images that are patched with the latest and greatest security patches," says Gasser. This process used to take the better part of a day, and personnel, to do manually. It’s now a quick Kubernetes job.
- Savings extend to both capital and operating expenses. "It takes very little to get into Kubernetes because it’s all open source," Gasser points out. "We went the DIY route for building our cluster, and we definitely like the flexibility of being able to embrace the latest from the community immediately without waiting for a downstream company to do it. There’s capex related to those licenses that we don’t have to pay for. Moreover, there’s capex savings for us from some of the proprietary software that we get to sunset in our particular domain. So that goes onto our ledger in a positive way as well." (Some of those open source technologies include Prometheus, Fluentd, gRPC, Istio, CNI, and Envoy.)
-
-
-
-
-
- "If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn’t account for personnel to deploy and maintain all the additional infrastructure."
-
-
-
-
- And on the opex side, Gasser says, the savings are high. "We run dozens of services, we have scores of pods, many daemon sets, and since we’re data-driven, we take advantage of EBS-backed volume claims for all of our stateful services. If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn’t account for personnel to deploy and maintain all the additional infrastructure."
- The team is confident that the benefits will continue to multiply—without a steep learning curve for the engineers being exposed to the new technology. "As we onboard additional tenants in this ecosystem, I think the need for folks to understand Kubernetes may not necessarily go up. In fact, I think it goes down, and that’s good," says Gasser. "Because that really demonstrates the scalability of the technology. You start to reap the benefits, and they can concentrate on all the features they need to build for great decisioning in the business— fraud decisions, credit decisions—and not have to worry about, ‘Is my AWS server broken? Is my pod not running?’"
-
-
-
+
Almost two years ago, the team embarked on this journey by first working with Docker. Then came Kubernetes. "We wanted to put streaming services into Kubernetes as one feature of the workloads for fast decisioning, and to be able to do batch alongside it," says Lead Software Engineer Keith Gasser. "Once the data is streamed and batched, there are so many tool sets in Flink that we use for decisioning. We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."
+
+{{< case-studies/quote image="/images/case-studies/capitalone/banner3.jpg" >}}
+"We want to provide the tools in the same ecosystem, in a consistent way, rather than have a large custom snowflake ecosystem where every tool needs its own custom deployment. Kubernetes gives us the ability to bring all of these together, so the richness of the open source and even the license community dealing with big data can be corralled."
+{{< /case-studies/quote >}}
+
+
In this first year, the impact has already been great. "Time to market is really huge for us," says Gasser. "Especially with fraud, you have to be very nimble in the way you respond to threats in the marketplace—being able to add and push new rules, detect new patterns of behavior, detect anomalies in account and transaction flows." With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."
+
+
Teams now have the tools to be autonomous in their deployments, and as a result, deployments have increased by two orders of magnitude. "And that was with just seven dedicated resources, without needing a whole group sitting there watching everything," says Scrum Master Jamil Jadallah. "That's a huge cost savings. With the scalability, the management, the coordination, Kubernetes really empowers us and gives us more time back than we had before."
+
+{{< case-studies/quote image="/images/case-studies/capitalone/banner4.jpg" >}}
+With Kubernetes, "a team can come to us and we can have them up and running with a basic decisioning app in a fortnight, which before would have taken a whole quarter, if not longer. Kubernetes is a manifold productivity multiplier."
+{{< /case-studies/quote >}}
+
+
Kubernetes has also been a great time-saver for Capital One's required periodic "rehydration" of clusters from base AMIs. To minimize the attack vulnerability profile for applications in the cloud, "Our entire clusters get rebuilt from scratch periodically, with new fresh instances and virtual server images that are patched with the latest and greatest security patches," says Gasser. This process used to take the better part of a day, and personnel, to do manually. It's now a quick Kubernetes job.
+
+
Savings extend to both capital and operating expenses. "It takes very little to get into Kubernetes because it's all open source," Gasser points out. "We went the DIY route for building our cluster, and we definitely like the flexibility of being able to embrace the latest from the community immediately without waiting for a downstream company to do it. There's capex related to those licenses that we don't have to pay for. Moreover, there's capex savings for us from some of the proprietary software that we get to sunset in our particular domain. So that goes onto our ledger in a positive way as well." (Some of those open source technologies include Prometheus, Fluentd, gRPC, Istio, CNI, and Envoy.)
+
+{{< case-studies/quote >}}
+"If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn't account for personnel to deploy and maintain all the additional infrastructure."
+{{< /case-studies/quote >}}
+
+
And on the opex side, Gasser says, the savings are high. "We run dozens of services, we have scores of pods, many daemon sets, and since we're data-driven, we take advantage of EBS-backed volume claims for all of our stateful services. If we had to do all of this without Kubernetes, on underlying cloud services, I could easily see our costs triple, quadruple what they are now for the amount of pure AWS expense. That doesn't account for personnel to deploy and maintain all the additional infrastructure."
+
+
The team is confident that the benefits will continue to multiply—without a steep learning curve for the engineers being exposed to the new technology. "As we onboard additional tenants in this ecosystem, I think the need for folks to understand Kubernetes may not necessarily go up. In fact, I think it goes down, and that's good," says Gasser. "Because that really demonstrates the scalability of the technology. You start to reap the benefits, and they can concentrate on all the features they need to build for great decisioning in the business—fraud decisions, credit decisions—and not have to worry about, 'Is my AWS server broken? Is my pod not running?'"
diff --git a/content/ko/case-studies/crowdfire/index.html b/content/ko/case-studies/crowdfire/index.html
index 227a5c08394bd..98caae28303c6 100644
--- a/content/ko/case-studies/crowdfire/index.html
+++ b/content/ko/case-studies/crowdfire/index.html
@@ -1,101 +1,85 @@
---
title: Crowdfire Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_crowdfire.css
----
-
-
-CASE STUDY:
-How to Keep Iterating a Fast-Growing App With a Cloud-Native Approach
+new_case_study_styles: true
+heading_background: /images/case-studies/crowdfire/banner1.jpg
+heading_title_logo: /images/crowdfire_logo.png
+subheading: >
+ How to Keep Iterating a Fast-Growing App With a Cloud-Native Approach
+case_study_details:
+ - Company: Crowdfire
+ - Location: Mumbai, India
+ - Industry: Social Media Software
+---
-
+
Challenge
-
- Company Crowdfire Location Mumbai, India Industry Social Media Software
-
+
Crowdfire helps content creators create their content anywhere on the Internet and publish it everywhere else in the right format. Since its launch in 2010, it has grown to 16 million users. The product began as a monolith app running on Google App Engine, and in 2015, the company began a transformation to microservices running on Amazon Web Services Elastic Beanstalk. "It was okay for our use cases initially, but as the number of services, development teams and scale increased, the deploy times, self-healing capabilities and resource utilization started to become problems for us," says Software Engineer Amanpreet Singh, who leads the infrastructure team for Crowdfire.
-
-
-
-
-
Challenge
- Crowdfire helps content creators create their content anywhere on the Internet and publish it everywhere else in the right format. Since its launch in 2010, it has grown to 16 million users. The product began as a monolith app running on Google App Engine, and in 2015, the company began a transformation to microservices running on Amazon Web Services Elastic Beanstalk. "It was okay for our use cases initially, but as the number of services, development teams and scale increased, the deploy times, self-healing capabilities and resource utilization started to become problems for us," says Software Engineer Amanpreet Singh, who leads the infrastructure team for Crowdfire.
-
Solution
- "We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on Terraform and Ansible.
-
-
+
Solution
-
+
"We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on Terraform and Ansible.
Impact
- "Kubernetes has helped us reduce the deployment time from 15 minutes to less than a minute," says Singh. "Due to Kubernetes’s self-healing nature, the operations team doesn’t need to do any manual intervention in case of a node or pod failure." Plus, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it’s finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines."
-
-
-
-
-
-
- "In the 15 months that we’ve been using Kubernetes, it has been amazing for us. It enabled us to iterate quickly, increase development speed, and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control." - Amanpreet Singh, Software Engineer at Crowdfire
-
-
-
-
-
"If you build it, they will come."
- For most content creators, only half of that movie quote may ring true. Sure, platforms like Wordpress, YouTube and Shopify have made it simple for almost anyone to start publishing new content online, but attracting an audience isn’t as easy. Crowdfire "helps users publish their content to all possible places where their audience exists," says Amanpreet Singh, a Software Engineer at the company based in Mumbai, India. Crowdfire has gained more than 16 million users—from bloggers and artists to makers and small businesses—since its launch in 2010.
- With that kind of growth—and a high demand from users for new features and continuous improvements—the Crowdfire team struggled to keep up behind the scenes. In 2015, they moved their monolith Java application to Amazon Web Services Elastic Beanstalk and started breaking it down into microservices.
- It was a good first step, but the team soon realized they needed to go further down the cloud-native path, which would lead them to Kubernetes. "It was okay for our use cases initially, but as the number of services and development teams increased and we scaled further, deploy times, self-healing capabilities and resource utilization started to become problematic," says Singh, who leads the infrastructure team at Crowdfire. "We realized that we needed a more cloud-native approach to deal with these issues."
- As he looked around for solutions, Singh had a checklist of what Crowdfire needed. "We wanted to keep some things separate so they could be shipped independent of other things; this would help remove blockers and let different teams work at their own pace," he says. "We also make a lot of data-driven decisions, so shipping a feature and its iterations quickly was a must."
- Kubernetes checked all the boxes and then some. "One of the best things was the built-in service discovery," he says. "When you have a bunch of microservices that need to call each other, having internal DNS readily available and service IPs and ports automatically set as environment variables help a lot." Plus, he adds, "Kubernetes’s opinionated approach made it easier to get started."
-
-
-
-
-
- "We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on Terraform and Ansible."
-
-
-
-
- There was another compelling business reason for the cloud-native approach. "In today’s world of ever-changing business requirements, using cloud native technology provides a variety of options to choose from—even the ability to run services in a hybrid cloud environment," says Singh. "Businesses can keep services in a region closest to the users, and thus benefit from high-availability and resiliency."
- So in February 2016, Singh set up a test Kubernetes cluster using the kube-up scripts provided. "I explored the features and was able to deploy an application pretty easily," he says. "However, it seemed like a black box since I didn’t understand the components completely, and had no idea what the kube-up script did under the hood. So when it broke, it was hard to find the issue and fix it."
- To get a better understanding, Singh dove into the internals of Kubernetes, reading the docs and even some of the code. And he looked to the Kubernetes community for more insight. "I used to stay up a little late every night (a lot of users were active only when it’s night here in India) and would try to answer questions on the Kubernetes community Slack from users who were getting started," he says. "I would also follow other conversations closely. I must admit I was able to avoid a lot of issues in our setup because I knew others had faced the same issues."
- Based on the knowledge he gained, Singh decided to implement a custom setup of Kubernetes based on Terraform and Ansible. "I wrote Terraform to launch Kubernetes master and nodes (Auto Scaling Groups) and an Ansible playbook to install the required components," he says. (The company recently switched to using prebaked AMIs to make the node bringup faster, and is planning to change its networking layer.)
-
-
-
-
-
- "Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes’s self-healing nature, the operations team doesn’t need to do any manual intervention in case of a node or pod failure."
-
-
-
-
-
- First, the team migrated a few staging services from Elastic Beanstalk to the new Kubernetes staging cluster, and then set up a production cluster a month later to deploy some services. The results were convincing. "By the end of March 2016, we established that all the new services must be deployed on Kubernetes," says Singh. "Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes’s self-healing nature, the operations team doesn’t need to do any manual intervention in case of a node or pod failure." On top of that, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it’s finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines. This brings more visibility into the changes being made, and keeping an audit trail."
- Over the next six months, the team worked on migrating all the services from Elastic Beanstalk to Kubernetes, except for the few that were deprecated and would soon be terminated anyway. The services were moved one at a time, and their performance was monitored for two to three days each. Today, "We’re completely migrated and we run all new services on Kubernetes," says Singh.
- The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have been decreased by as much as 50%.
- All 30 engineers at Crowdfire were onboarded at once. "I gave an internal talk where I shared the basic components and demoed the usage of kubectl," says Singh. "Everyone was excited and happy about using Kubernetes. Developers have more control and visibility into their applications running in production now. Most of all, they’re happy with the low deploy times and self-healing services."
- And they’re much more productive, too. "Where we used to do about 5 deployments per day," says Singh, "now we’re doing 30+ production and 50+ staging deployments almost every day."
-
-
-
-
-
-
- The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have been decreased by as much as 50%.
-
-
-
-
-
-
- Singh notes that almost all of the engineers interact with the staging cluster on a daily basis, and that has created a cultural change at Crowdfire. "Developers are more aware of the cloud infrastructure now," he says. "They’ve started following cloud best practices like better health checks, structured logs to stdout [standard output], and config via files or environment variables."
- With Crowdfire’s commitment to Kubernetes, Singh is looking to expand the company’s cloud-native stack. The team already uses Prometheus for monitoring, and he says he is evaluating Linkerd and Envoy Proxy as a way to "get more metrics about request latencies and failures, and handle them better." Other CNCF projects, including OpenTracing and gRPC are also on his radar.
- Singh has found that the cloud-native community is growing in India, too, particularly in Bangalore. "A lot of startups and new companies are starting to run their infrastructure on Kubernetes," he says.
- And when people ask him about Crowdfire’s experience, he has this advice to offer: "Kubernetes is a great piece of technology, but it might not be right for you, especially if you have just one or two services or your app isn’t easy to run in a containerized environment," he says. "Assess your situation and the value that Kubernetes provides before going all in. If you do decide to use Kubernetes, make sure you understand the components that run under the hood and what role they play in smoothly running the cluster. Another thing to consider is if your apps are ‘Kubernetes-ready,’ meaning if they have proper health checks and handle termination signals to shut down gracefully."
- And if your company fits that profile, go for it. Crowdfire clearly did—and is now reaping the benefits. "In the 15 months that we’ve been using Kubernetes, it has been amazing for us," says Singh. "It enabled us to iterate quickly, increase development speed and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."
-
-
-
-
+
+
"Kubernetes has helped us reduce the deployment time from 15 minutes to less than a minute," says Singh. "Due to Kubernetes's self-healing nature, the operations team doesn't need to do any manual intervention in case of a node or pod failure." Plus, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it's finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines."
+
+{{< case-studies/quote author="Amanpreet Singh, Software Engineer at Crowdfire" >}}
+"In the 15 months that we've been using Kubernetes, it has been amazing for us. It enabled us to iterate quickly, increase development speed, and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+"If you build it, they will come."
+{{< /case-studies/lead >}}
+
+
For most content creators, only half of that movie quote may ring true. Sure, platforms like WordPress, YouTube and Shopify have made it simple for almost anyone to start publishing new content online, but attracting an audience isn't as easy. Crowdfire "helps users publish their content to all possible places where their audience exists," says Amanpreet Singh, a Software Engineer at the company based in Mumbai, India. Crowdfire has gained more than 16 million users—from bloggers and artists to makers and small businesses—since its launch in 2010.
+
+
With that kind of growth—and a high demand from users for new features and continuous improvements—the Crowdfire team struggled to keep up behind the scenes. In 2015, they moved their monolith Java application to Amazon Web Services Elastic Beanstalk and started breaking it down into microservices.
+
+
It was a good first step, but the team soon realized they needed to go further down the cloud-native path, which would lead them to Kubernetes. "It was okay for our use cases initially, but as the number of services and development teams increased and we scaled further, deploy times, self-healing capabilities and resource utilization started to become problematic," says Singh, who leads the infrastructure team at Crowdfire. "We realized that we needed a more cloud-native approach to deal with these issues."
+
+
As he looked around for solutions, Singh had a checklist of what Crowdfire needed. "We wanted to keep some things separate so they could be shipped independent of other things; this would help remove blockers and let different teams work at their own pace," he says. "We also make a lot of data-driven decisions, so shipping a feature and its iterations quickly was a must."
+
+
Kubernetes checked all the boxes and then some. "One of the best things was the built-in service discovery," he says. "When you have a bunch of microservices that need to call each other, having internal DNS readily available and service IPs and ports automatically set as environment variables help a lot." Plus, he adds, "Kubernetes's opinionated approach made it easier to get started."
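For readers new to the features Singh is praising, both discovery mechanisms come with any Service object: the cluster DNS add-on publishes the Service's name, and the kubelet injects `<SVCNAME>_SERVICE_HOST`/`<SVCNAME>_SERVICE_PORT` environment variables into containers started after the Service exists. A minimal sketch; the `user-api` name and ports are illustrative, not Crowdfire's actual services:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-api
spec:
  selector:
    app: user-api        # routes to pods carrying this label
  ports:
  - port: 80             # port other microservices call
    targetPort: 8080     # port the user-api containers listen on
```

Other pods in the same namespace can then call `http://user-api` (or `user-api.<namespace>.svc.cluster.local` across namespaces), and containers created after the Service exists see `USER_API_SERVICE_HOST` and `USER_API_SERVICE_PORT` in their environment, which is the combination Singh describes.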
+
+{{< case-studies/quote image="/images/case-studies/crowdfire/banner3.jpg" >}}
+"We realized that we needed a more cloud-native approach to deal with these issues," says Singh. The team decided to implement a custom setup of Kubernetes based on Terraform and Ansible."
+{{< /case-studies/quote >}}
+
+
There was another compelling business reason for the cloud-native approach. "In today's world of ever-changing business requirements, using cloud native technology provides a variety of options to choose from—even the ability to run services in a hybrid cloud environment," says Singh. "Businesses can keep services in a region closest to the users, and thus benefit from high-availability and resiliency."
+
+
So in February 2016, Singh set up a test Kubernetes cluster using the kube-up scripts provided. "I explored the features and was able to deploy an application pretty easily," he says. "However, it seemed like a black box since I didn't understand the components completely, and had no idea what the kube-up script did under the hood. So when it broke, it was hard to find the issue and fix it."
+
+
To get a better understanding, Singh dove into the internals of Kubernetes, reading the docs and even some of the code. And he looked to the Kubernetes community for more insight. "I used to stay up a little late every night (a lot of users were active only when it's night here in India) and would try to answer questions on the Kubernetes community Slack from users who were getting started," he says. "I would also follow other conversations closely. I must admit I was able to avoid a lot of issues in our setup because I knew others had faced the same issues."
+
+
Based on the knowledge he gained, Singh decided to implement a custom setup of Kubernetes based on Terraform and Ansible. "I wrote Terraform to launch Kubernetes master and nodes (Auto Scaling Groups) and an Ansible playbook to install the required components," he says. (The company recently switched to using prebaked AMIs to make the node bringup faster, and is planning to change its networking layer.)
+
+{{< case-studies/quote image="/images/case-studies/crowdfire/banner4.jpg" >}}
+"Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes's self-healing nature, the operations team doesn't need to do any manual intervention in case of a node or pod failure."
+{{< /case-studies/quote >}}
+
+
First, the team migrated a few staging services from Elastic Beanstalk to the new Kubernetes staging cluster, and then set up a production cluster a month later to deploy some services. The results were convincing. "By the end of March 2016, we established that all the new services must be deployed on Kubernetes," says Singh. "Kubernetes helped us reduce the deployment time from 15 minutes to less than a minute. Due to Kubernetes's self-healing nature, the operations team doesn't need to do any manual intervention in case of a node or pod failure." On top of that, he says, "Dev-Prod parity has improved since developers can experiment with options in dev/staging clusters, and when it's finalized, they just commit the config changes in the respective code repositories. These changes automatically get replicated on the production cluster via CI/CD pipelines. This brings more visibility into the changes being made, and keeping an audit trail."
+
+
Over the next six months, the team worked on migrating all the services from Elastic Beanstalk to Kubernetes, except for the few that were deprecated and would soon be terminated anyway. The services were moved one at a time, and their performance was monitored for two to three days each. Today, "We're completely migrated and we run all new services on Kubernetes," says Singh.
+
+
The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have decreased by as much as 50%.
+
+
All 30 engineers at Crowdfire were onboarded at once. "I gave an internal talk where I shared the basic components and demoed the usage of kubectl," says Singh. "Everyone was excited and happy about using Kubernetes. Developers have more control and visibility into their applications running in production now. Most of all, they're happy with the low deploy times and self-healing services."
+
+
And they're much more productive, too. "Where we used to do about 5 deployments per day," says Singh, "now we're doing 30+ production and 50+ staging deployments almost every day."
+
+{{< case-studies/quote >}}
+The impact has been considerable: With Kubernetes, the company has experienced a 90% cost savings on Elastic Load Balancer, which is now only used for their public, user-facing services. Their EC2 operating expenses have decreased by as much as 50%.
+{{< /case-studies/quote >}}
+
+
Singh notes that almost all of the engineers interact with the staging cluster on a daily basis, and that has created a cultural change at Crowdfire. "Developers are more aware of the cloud infrastructure now," he says. "They've started following cloud best practices like better health checks, structured logs to stdout [standard output], and config via files or environment variables."
+
+
With Crowdfire's commitment to Kubernetes, Singh is looking to expand the company's cloud-native stack. The team already uses Prometheus for monitoring, and he says he is evaluating Linkerd and Envoy Proxy as a way to "get more metrics about request latencies and failures, and handle them better." Other CNCF projects, including OpenTracing and gRPC, are also on his radar.
+
+
Singh has found that the cloud-native community is growing in India, too, particularly in Bangalore. "A lot of startups and new companies are starting to run their infrastructure on Kubernetes," he says.
+
+
And when people ask him about Crowdfire's experience, he has this advice to offer: "Kubernetes is a great piece of technology, but it might not be right for you, especially if you have just one or two services or your app isn't easy to run in a containerized environment," he says. "Assess your situation and the value that Kubernetes provides before going all in. If you do decide to use Kubernetes, make sure you understand the components that run under the hood and what role they play in smoothly running the cluster. Another thing to consider is if your apps are 'Kubernetes-ready,' meaning if they have proper health checks and handle termination signals to shut down gracefully."
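To make Singh's two "Kubernetes-ready" criteria concrete: health checks map to readiness and liveness probes in the pod spec, and graceful shutdown means handling the SIGTERM the kubelet sends before the grace period expires. A minimal sketch under those assumptions; the image, paths, port, and timings are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  terminationGracePeriodSeconds: 30   # time after SIGTERM before the kubelet sends SIGKILL
  containers:
  - name: web
    image: example.com/web:1.0
    ports:
    - containerPort: 8080
    readinessProbe:                   # keeps the pod out of Service endpoints until it passes
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                    # restarts the container if it stops answering
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 15
```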
+
+
And if your company fits that profile, go for it. Crowdfire clearly did—and is now reaping the benefits. "In the 15 months that we've been using Kubernetes, it has been amazing for us," says Singh. "It enabled us to iterate quickly, increase development speed and continuously deliver new features and bug fixes to our users, while keeping our operational costs and infrastructure management overhead under control."
diff --git a/content/ko/case-studies/golfnow/index.html b/content/ko/case-studies/golfnow/index.html
index f4bf4d4f278c2..9e82d90cd10fd 100644
--- a/content/ko/case-studies/golfnow/index.html
+++ b/content/ko/case-studies/golfnow/index.html
@@ -1,125 +1,89 @@
---
title: GolfNow Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_golfnow.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/golfnow/banner1.jpg
+heading_title_logo: /images/golfnow_logo.png
+subheading: >
+ Saving Time and Money with Cloud Native Infrastructure
+case_study_details:
+ - Company: GolfNow
+ - Location: Orlando, Florida
+ - Industry: Golf Industry Technology and Services Provider
---
-
-
-CASE STUDY:
-
-Saving Time and Money with Cloud Native Infrastructure
-
-
-
-
- Company GolfNow Location Orlando, Florida Industry Golf Industry Technology and Services Provider
-
-
-
-
-
-
-
-
-
Challenge
- A member of the NBC Sports Group, GolfNow is the golf industry’s technology and services leader, managing 10 different products, as well as the largest e-commerce tee time marketplace in the world. As its business began expanding rapidly and globally, GolfNow’s monolithic application became problematic. "We kept growing our infrastructure vertically rather than horizontally, and the cost of doing business became problematic," says Sheriff Mohamed, GolfNow’s Director, Architecture. "We wanted the ability to more easily expand globally."
-
-
-
-
-
Solution
- Turning to microservices and containerization, GolfNow began moving its applications and databases from third-party services to its own clusters running on Docker and Kubernetes.
-
-
Impact
- The results were immediate. While maintaining the same capacity—and beyond, during peak periods—GolfNow saw its infrastructure costs for the first application virtually cut in half.
-
-
-
-
-
-
-
- "With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally. We were basically wasting money and doubling the cost of our infrastructure."
- SHERIFF MOHAMED, DIRECTOR, ARCHITECTURE AT GOLFNOW
-
-
-
-
-
-
It’s not every day that you can say you’ve slashed an operating expense by half.
-
- But Sheriff Mohamed and Josh Chandler did just that when they helped lead their company, GolfNow, on a journey from a monolithic to a containerized, cloud native infrastructure managed by Kubernetes.
-
- A top-performing business within the NBC Sports Group, GolfNow is a technology and services company with the largest tee time marketplace in the world. GolfNow serves 5 million active golfers across 10 different products. In recent years, the business had grown so fast that the infrastructure supporting their giant monolithic application (written in C#.NET and backed by SQL Server database management system) could not keep up. "With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally," says Sheriff, GolfNow’s Director, Architecture. "Our costs were growing exponentially. And on top of that, we had to build a Disaster Recovery (DR) environment, which then meant we’d have to copy exactly what we had in our original data center to another data center that was just the standby. We were basically wasting money and doubling the cost of our infrastructure."
-
- In moving just the first of GolfNow’s important applications—a booking engine for golf courses and B2B marketing platform—from third-party services to their own Kubernetes environment, "our bill went down drastically," says Sheriff.
-
- The path to those stellar results began in late 2014. In order to support GolfNow’s global growth, the team decided that the company needed to have multiple data centers and the ability to quickly and easily re-route traffic as needed. "From there we knew that we needed to go in a direction of breaking things apart, microservices, and containerization," says Sheriff. "At the time we were trying to get away from C#.NET and SQL Server since it didn’t run very well on Linux, where everything container was running smoothly."
-
- To that end, the team shifted to working with Node.js, the open-source, cross-platform JavaScript runtime environment for developing tools and applications, and MongoDB, the open-source database program. At the time, Docker, the platform for deploying applications in containers, was still new. But once the team began experimenting with it, Sheriff says, "we realized that was the way we wanted to go, especially since that’s the way the industry is heading."
-
-
-
-
-
- "The team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, 'Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn’t have to pay extra money at all.'"
-
-
-
-
-
- GolfNow’s dev team ran an "internal, low-key" proof of concept and were won over. "We really liked how easy it was to be able to pass containers around to each other and have them up and running in no time, exactly the way it was running on my machine," says Sheriff. "Because that is always the biggest gripe that Ops has with developers, right? ‘It worked on my machine!’ But then we started getting to the point of, ‘How do we make sure that these things stay up and running?’"
- That led the team on a quest to find the right orchestration system for the company’s needs. Sheriff says the first few options they tried were either too heavy or "didn’t feel quite right." In late summer 2015, they discovered the just-released Kubernetes, which Sheriff immediately liked for its ease of use. "We did another proof of concept," he says, "and Kubernetes won because of the fact that the community backing was there, built on top of what Google had already done."
-
- But before they could go with Kubernetes, NBC, GolfNow’s parent company, also asked them to comparison shop with another company. Sheriff and his team liked the competing company’s platform user interface, but didn’t like that its platform would not allow containers to run natively on Docker. With no clear decision in sight, Sheriff’s VP at GolfNow, Steve McElwee, set up a three-month trial during which a GolfNow team (consisting of Sheriff and Josh, who’s now Lead Architect, Open Platforms) would build out a Kubernetes environment, and a large NBC team would build out one with the other company’s platform.
-
- "We spun up the cluster and we tried to get everything to run the way we wanted it to run," Sheriff says. "The biggest thing that we took away from it is that not only did we want our applications to run within Kubernetes and Docker, we also wanted our databases to run there. We literally wanted our entire infrastructure to run within Kubernetes."
-
- At the time there was nothing in the community to help them get Kafka and MongoDB clusters running within a Kubernetes and Docker environment, so Sheriff and Josh figured it out on their own, taking a full month to get it right. "Everything started rolling from there," Sheriff says. "We were able to get all our applications connected, and we finished our side of the proof of concept a month in advance. My VP was like, ‘Alright, it’s over. Kubernetes wins.’"
-
- The next step, beginning in January 2016, was getting everything working in production. The team focused first on one application that was already written in Node.js and MongoDB. A booking engine for golf courses and B2B marketing platform, the application was already going in the microservice direction but wasn’t quite finished yet. At the time, it was running in Heroku Compose and other third-party services—resulting in a large monthly bill.
-
-
-
-
-
-
- "'The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven’t come from the Kubernetes world you wouldn’t believe me.' Sheriff puts it in these terms: 'Before Kubernetes I wasn’t sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I’ve been sleeping at night.'"
-
-
-
-
-
- "The goal was to take all of that out and put it within this new platform we’ve created with Kubernetes on Google Compute Engine (GCE)," says Sheriff. "So we ended up building piece by piece, in parallel, what was out in Heroku and Compose, in our Kubernetes cluster. Then, literally, just switched configs in the background. So in Heroku we had the app running hitting a Compose database. We’d take the config, change it and make it hit the database that was running in our cluster."
-
- Using this procedure, they were able to migrate piecemeal, without any downtime. The first migration was done during off hours, but to test the limits, the team migrated the second database in the middle of the day, when lots of users were running the application. "We did it," Sheriff says, "and again it was successful. Nobody noticed."
-
- After three weeks of monitoring to make sure everything was running stable, the team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, "Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn’t have to pay extra money at all."
-
- Not only were they saving money, but they were also saving time. "I had a meeting this morning about migrating some applications from one cluster to another," says Josh. "I spent about 2 hours explaining the process. The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven’t come from the Kubernetes world you wouldn’t believe me." Sheriff puts it in these terms: "Before Kubernetes I wasn’t sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I’ve been sleeping at night."
-
- A small percentage of the applications on GolfNow have been migrated over to the Kubernetes environment. "Our Core Team is rewriting a lot of the .NET applications into .NET Core [which is compatible with Linux and Docker] so that we can run them within containers," says Sheriff.
-
- Looking ahead, Sheriff and his team want to spend 2017 continuing to build a whole platform around Kubernetes with Drone, an open-source continuous delivery platform, to make it more developer-centric. "Now they’re able to manage configuration, they’re able to manage their deployments and things like that, making all these subteams that are now creating all these microservices, be self sufficient," he says. "So it can pull us away from applications and allow us to just make sure the cluster is running and healthy, and then actually migrate that over to our Ops team."
-
-
-
-
-
-
- "Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. 'This is The Six Million Dollar Man of the cloud right now,' adds Josh. 'Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They’re faster, they’re more resilient.'"
-
-
-
-
-
- And long-term, Sheriff has an even bigger goal for getting more people into the Kubernetes fold. "We’re actually trying to make this platform generic enough so that any of our sister companies can use it if they wish," he says. "Most definitely I think it can be used as a model. I think the way we migrated into it, the way we built it out, are all ways that I think other companies can learn from, and should not be afraid of."
-
- The GolfNow team is also giving back to the Kubernetes community by open-sourcing a bot framework that Josh built. "We noticed that the dashboard user interface is actually moving a lot faster than when we started," says Sheriff. "However we realized what we needed was something that’s more of a bot that really helps us administer Kubernetes as a whole through Slack." Josh explains: "With the Kubernetes-Slack integration, you can essentially hook into a cluster and the issue commands and edit configurations. We’ve tried to simplify the security configuration as much as possible. We hope this will be our major thank you to Kubernetes, for everything you’ve given us."
-
- Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. The lessons they’ve learned: "You’ve got to have buy-in from your boss," says Sheriff. "Another big deal is having two to three people dedicated to this type of endeavor. You can’t have people who are half in, half out." And if you don’t have buy-in from the get go, proving it out will get you there.
-
- "This is The Six Million Dollar Man of the cloud right now," adds Josh. "Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They’re faster, they’re more resilient."
-
-
-
+
Challenge
+
+
A member of the NBC Sports Group, GolfNow is the golf industry's technology and services leader, managing 10 different products, as well as the largest e-commerce tee time marketplace in the world. As its business began expanding rapidly and globally, GolfNow's monolithic application became problematic. "We kept growing our infrastructure vertically rather than horizontally, and the cost of doing business became problematic," says Sheriff Mohamed, GolfNow's Director, Architecture. "We wanted the ability to more easily expand globally."
+
+
Solution
+
+
Turning to microservices and containerization, GolfNow began moving its applications and databases from third-party services to its own clusters running on Docker and Kubernetes.
+
+
Impact
+
+
The results were immediate. While maintaining the same capacity—and beyond, during peak periods—GolfNow saw its infrastructure costs for the first application virtually cut in half.
+
+{{< case-studies/quote author="SHERIFF MOHAMED, DIRECTOR, ARCHITECTURE AT GOLFNOW" >}}
+"With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally. We were basically wasting money and doubling the cost of our infrastructure."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+It's not every day that you can say you've slashed an operating expense by half.
+{{< /case-studies/lead >}}
+
+
But Sheriff Mohamed and Josh Chandler did just that when they helped lead their company, GolfNow, on a journey from a monolithic to a containerized, cloud native infrastructure managed by Kubernetes.
+
+
A top-performing business within the NBC Sports Group, GolfNow is a technology and services company with the largest tee time marketplace in the world. GolfNow serves 5 million active golfers across 10 different products. In recent years, the business had grown so fast that the infrastructure supporting their giant monolithic application (written in C#.NET and backed by SQL Server database management system) could not keep up. "With our growth we obviously needed to expand our infrastructure, and we kept growing vertically rather than horizontally," says Sheriff, GolfNow's Director, Architecture. "Our costs were growing exponentially. And on top of that, we had to build a Disaster Recovery (DR) environment, which then meant we'd have to copy exactly what we had in our original data center to another data center that was just the standby. We were basically wasting money and doubling the cost of our infrastructure."
+
+
In moving just the first of GolfNow's important applications—a booking engine for golf courses and B2B marketing platform—from third-party services to their own Kubernetes environment, "our bill went down drastically," says Sheriff.
+
+
The path to those stellar results began in late 2014. In order to support GolfNow's global growth, the team decided that the company needed to have multiple data centers and the ability to quickly and easily re-route traffic as needed. "From there we knew that we needed to go in a direction of breaking things apart, microservices, and containerization," says Sheriff. "At the time we were trying to get away from C#.NET and SQL Server since it didn't run very well on Linux, where everything container-based was running smoothly."
+
+
To that end, the team shifted to working with Node.js, the open-source, cross-platform JavaScript runtime environment for developing tools and applications, and MongoDB, the open-source database program. At the time, Docker, the platform for deploying applications in containers, was still new. But once the team began experimenting with it, Sheriff says, "we realized that was the way we wanted to go, especially since that's the way the industry is heading."
+
+{{< case-studies/quote image="/images/case-studies/golfnow/banner3.jpg" >}}
+"The team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, 'Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn't have to pay extra money at all.'"
+{{< /case-studies/quote >}}
+
+
GolfNow's dev team ran an "internal, low-key" proof of concept and were won over. "We really liked how easy it was to be able to pass containers around to each other and have them up and running in no time, exactly the way it was running on my machine," says Sheriff. "Because that is always the biggest gripe that Ops has with developers, right? 'It worked on my machine!' But then we started getting to the point of, 'How do we make sure that these things stay up and running?'"
+
+
That led the team on a quest to find the right orchestration system for the company's needs. Sheriff says the first few options they tried were either too heavy or "didn't feel quite right." In late summer 2015, they discovered the just-released Kubernetes, which Sheriff immediately liked for its ease of use. "We did another proof of concept," he says, "and Kubernetes won because of the fact that the community backing was there, built on top of what Google had already done."
+
+
But before they could go with Kubernetes, NBC, GolfNow's parent company, also asked them to comparison shop with another company. Sheriff and his team liked the competing company's platform user interface, but didn't like that its platform would not allow containers to run natively on Docker. With no clear decision in sight, Sheriff's VP at GolfNow, Steve McElwee, set up a three-month trial during which a GolfNow team (consisting of Sheriff and Josh, who's now Lead Architect, Open Platforms) would build out a Kubernetes environment, and a large NBC team would build out one with the other company's platform.
+
+
"We spun up the cluster and we tried to get everything to run the way we wanted it to run," Sheriff says. "The biggest thing that we took away from it is that not only did we want our applications to run within Kubernetes and Docker, we also wanted our databases to run there. We literally wanted our entire infrastructure to run within Kubernetes."
+
+
At the time there was nothing in the community to help them get Kafka and MongoDB clusters running within a Kubernetes and Docker environment, so Sheriff and Josh figured it out on their own, taking a full month to get it right. "Everything started rolling from there," Sheriff says. "We were able to get all our applications connected, and we finished our side of the proof of concept a month in advance. My VP was like, 'Alright, it's over. Kubernetes wins.'"
+
+
The next step, beginning in January 2016, was getting everything working in production. The team focused first on one application that was already written in Node.js and MongoDB. A booking engine for golf courses and B2B marketing platform, the application was already going in the microservice direction but wasn't quite finished yet. At the time, it was running in Heroku, Compose, and other third-party services—resulting in a large monthly bill.
+
+{{< case-studies/quote image="/images/case-studies/golfnow/banner4.jpg" >}}
+"'The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven't come from the Kubernetes world you wouldn't believe me.' Sheriff puts it in these terms: 'Before Kubernetes I wasn't sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I've been sleeping at night.'"
+{{< /case-studies/quote >}}
+
+
"The goal was to take all of that out and put it within this new platform we've created with Kubernetes on Google Compute Engine (GCE)," says Sheriff. "So we ended up building piece by piece, in parallel, what was out in Heroku and Compose, in our Kubernetes cluster. Then, literally, just switched configs in the background. So in Heroku we had the app running hitting a Compose database. We'd take the config, change it and make it hit the database that was running in our cluster."
+
+
Using this procedure, they were able to migrate piecemeal, without any downtime. The first migration was done during off hours, but to test the limits, the team migrated the second database in the middle of the day, when lots of users were running the application. "We did it," Sheriff says, "and again it was successful. Nobody noticed."
+
+
After three weeks of monitoring to make sure everything was running stable, the team migrated the rest of the application into their Kubernetes cluster. And the impact was immediate: On top of cutting monthly costs by a large percentage, says Sheriff, "Running at the same capacity and during our peak time, we were able to horizontally grow. Since we were using our VMs more efficiently with containers, we didn't have to pay extra money at all."
+
+
Not only were they saving money, but they were also saving time. "I had a meeting this morning about migrating some applications from one cluster to another," says Josh. "I spent about 2 hours explaining the process. The time I spent actually moving the applications was under 30 seconds! We can move data centers in just incredible amounts of time. If you haven't come from the Kubernetes world you wouldn't believe me." Sheriff puts it in these terms: "Before Kubernetes I wasn't sleeping at night, literally. I was woken up all the time, because things were down. After Kubernetes, I've been sleeping at night."
+
+
A small percentage of the applications at GolfNow have been migrated over to the Kubernetes environment. "Our Core Team is rewriting a lot of the .NET applications into .NET Core [which is compatible with Linux and Docker] so that we can run them within containers," says Sheriff.
+
+
Looking ahead, Sheriff and his team want to spend 2017 continuing to build a whole platform around Kubernetes with Drone, an open-source continuous delivery platform, to make it more developer-centric. "Now they're able to manage configuration, they're able to manage their deployments and things like that, making all these subteams that are now creating all these microservices be self-sufficient," he says. "So it can pull us away from applications and allow us to just make sure the cluster is running and healthy, and then actually migrate that over to our Ops team."
+
+{{< case-studies/quote >}}
+"Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. 'This is The Six Million Dollar Man of the cloud right now,' adds Josh. 'Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They're faster, they're more resilient.'"
+{{< /case-studies/quote >}}
+
+
And long-term, Sheriff has an even bigger goal for getting more people into the Kubernetes fold. "We're actually trying to make this platform generic enough so that any of our sister companies can use it if they wish," he says. "Most definitely I think it can be used as a model. I think the way we migrated into it, the way we built it out, are all ways that I think other companies can learn from, and should not be afraid of."
+
+
The GolfNow team is also giving back to the Kubernetes community by open-sourcing a bot framework that Josh built. "We noticed that the dashboard user interface is actually moving a lot faster than when we started," says Sheriff. "However we realized what we needed was something that's more of a bot that really helps us administer Kubernetes as a whole through Slack." Josh explains: "With the Kubernetes-Slack integration, you can essentially hook into a cluster and then issue commands and edit configurations. We've tried to simplify the security configuration as much as possible. We hope this will be our major thank you to Kubernetes, for everything you've given us."
+
+
Having gone from complete newbies to production-ready in three months, the GolfNow team is eager to encourage other companies to follow their lead. The lessons they've learned: "You've got to have buy-in from your boss," says Sheriff. "Another big deal is having two to three people dedicated to this type of endeavor. You can't have people who are half in, half out." And if you don't have buy-in from the get-go, proving it out will get you there.
+
+
"This is The Six Million Dollar Man of the cloud right now," adds Josh. "Just try it out, watch it happen. I feel like the proof is in the pudding when you look at these kinds of application stacks. They're faster, they're more resilient."
diff --git a/content/ko/case-studies/haufegroup/index.html b/content/ko/case-studies/haufegroup/index.html
index f4256ff569b4a..580a282683cfa 100644
--- a/content/ko/case-studies/haufegroup/index.html
+++ b/content/ko/case-studies/haufegroup/index.html
@@ -1,112 +1,85 @@
---
title: Haufe Group Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_haufegroup.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/haufegroup/banner1.jpg
+heading_title_logo: /images/haufegroup_logo.png
+subheading: >
+ Paving the Way for Cloud Native for Midsize Companies
+case_study_details:
+ - Company: Haufe Group
+ - Location: Freiburg, Germany
+ - Industry: Media and Software
---
+
Challenge
-
-
CASE STUDY:
Paving the Way for Cloud Native for Midsize Companies
+
Founded in 1930 as a traditional publisher, Haufe Group has grown into a media and software company with 95 percent of its sales from digital products. Over the years, the company has gone from having "hardware in the basement" to outsourcing its infrastructure operations and IT. More recently, the development of new products, from Internet portals for tax experts to personnel training software, has created demands for increased speed, reliability and scalability. "We need to be able to move faster," says Solution Architect Martin Danielsson. "Adapting workloads is something that we really want to be able to do."
-
+
Solution
+
Haufe Group began its cloud-native journey when Microsoft Azure became available in Europe; the company needed cloud deployments for its desktop apps with bandwidth-heavy download services. "After that, it has been different projects trying out different things," says Danielsson. Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy.
-
- Company Haufe Group Location Freiburg, Germany Industry Media and Software
-
+
A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker. The company is now getting ready to go live with two services in production using Kubernetes orchestration on Microsoft Azure and Amazon Web Services. The team is also working on breaking up one of their core Java Enterprise desktop products into microservices to allow for better evolvability and dynamic scaling in the cloud.
-
+
Impact
+
With the ability to adapt workloads, Danielsson says, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost." Plus, shorter release times have had a major impact. "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," he says. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
-
+{{< case-studies/quote author="Martin Danielsson, Solution Architect, Haufe Group" >}}
+"Over the next couple of years, people won't even think that much about it when they want to run containers. Kubernetes is going to be the go-to solution."
+{{< /case-studies/quote >}}
-
-
-
Challenge
- Founded in 1930 as a traditional publisher, Haufe Group has grown into a media and software company with 95 percent of its sales from digital products. Over the years, the company has gone from having "hardware in the basement" to outsourcing its infrastructure operations and IT. More recently, the development of new products, from Internet portals for tax experts to personnel training software, has created demands for increased speed, reliability and scalability. "We need to be able to move faster," says Solution Architect Martin Danielsson. "Adapting workloads is something that we really want to be able to do."
-
-
-
Solution
- Haufe Group began its cloud-native journey when Microsoft Azure became available in Europe; the company needed cloud deployments for its desktop apps with bandwidth-heavy download services. "After that, it has been different projects trying out different things," says Danielsson. Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy.
-
-
- A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker. The company is now getting ready to go live with two services in production using Kubernetes orchestration on Microsoft Azure and Amazon Web Services. The team is also working on breaking up one of their core Java Enterprise desktop products into microservices to allow for better evolvability and dynamic scaling in the cloud.
-
-
-
Impact
- With the ability to adapt workloads, Danielsson says, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost." Plus, shorter release times have had a major impact. "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," he says. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
-
-
-
-
-
-
-
-
- "Over the next couple of years, people won’t even think that much about it when they want to run containers. Kubernetes is going to be the go-to solution." - Martin Danielsson, Solution Architect, Haufe Group
-
-
-
-
-
-
-
More than 80 years ago, Haufe Group was founded as a traditional publishing company, printing books and commentary on paper.
By the 1990s, though, the company’s leaders recognized that the future was digital, and to their credit, were able to transform Haufe Group into a media and software business that now gets 95 percent of its sales from digital products. "Among the German companies doing this, we were one of the early adopters," says Martin Danielsson, Solution Architect for Haufe Group.
- And now they’re leading the way for midsize companies embracing cloud-native technology like Kubernetes. "The really big companies like Ticketmaster and Google get it right, and the startups get it right because they’re faster," says Danielsson. "We’re in this big lump of companies in the middle with a lot of legacy, a lot of structure, a lot of culture that does not easily fit the cloud technologies. We’re just 1,500 people, but we have hundreds of customer-facing applications. So we’re doing things that will be relevant for many companies of our size or even smaller."
- Many of those legacy challenges stemmed from simply following the technology trends of the times. "We used to do full DevOps," he says. In the 1990s and 2000s, "that meant that you had your hardware in the basement. And then 10 years ago, the hype of the moment was to outsource application operations, outsource everything, and strip down your IT department to take away the distraction of all these hardware things. That’s not our area of expertise. We didn’t want to be an infrastructure provider. And now comes the backlash of that."
- Haufe Group began feeling the pain as they were developing more new products, from Internet portals for tax experts to personnel training software, that have created demands for increased speed, reliability and scalability. "Right now, we have this break in workflows, where we go from writing concepts to developing, handing it over to production and then handing that over to your host provider," he says. "And then when things go bad we have no clue what went wrong. We definitely want to take back control, and we want to move a lot faster. Adapting workloads is something that we really want to be able to do."
- Those needs led them to explore cloud-native technology. Their first foray into the cloud was doing deployments in Microsoft Azure, once it became available in Europe, for desktop products that had built-in download services. Hosting expenses for such bandwidth-heavy services were too high, so the company turned to the cloud. "After that, it has been different projects trying out different things," says Danielsson.
-
-
-
-
-
- "We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didn’t fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."
-
-
-
-
-
-
- Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy. A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker.
- Some experiments went further than others; German regulations about sensitive data proved to be a road block in moving some workloads to Azure and Amazon Web Services. "Due to our history, Germany is really strict with things like personally identifiable data," Danielsson says.
- These experiments took on new life with the arrival of the Azure Sovereign Cloud for Germany (an Azure clone run by the German T-Systems provider). With the availability of Azure.de—which conforms to Germany’s privacy regulations—teams started to seriously consider deploying production loads in Docker into the cloud. "We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didn’t fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."
- In parallel, Danielsson had built an API management system with the aim of supporting CI/CD scenarios, aspects of which were missing in off-the-shelf API management products. With a foundation based on Mashape’s Kong gateway, it is open-sourced as wicked.haufe.io. He put wicked.haufe.io to use with his product team.
Otherwise, Danielsson says his philosophy was "don’t try to reinvent the wheel all the time. Go for what’s there and 99 percent of the time it will be enough. And if you think you really need something custom or additional, think perhaps once or twice again. One of the things that I find so amazing with this cloud-native framework is that everything ties in."
- Currently, Haufe Group is working on two projects using Kubernetes in production. One is a new mobile application for researching legislation and tax laws. "We needed a way to take out functionality from a legacy core and put an application on top of that with an API gateway—a lot of moving parts that screams containers," says Danielsson. So the team moved the build pipeline away from "deploying to some old, huge machine that you could deploy anything to" and onto a Kubernetes cluster where there would be automatic CI/CD "with feature branches and all these things that were a bit tedious in the past."
-
-
-
-
-
- "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
-
-
-
-
-
- It was a proof of concept effort, and the proof was in the pudding. "Everyone was really impressed at what we accomplished in a week," says Danielsson. "We did these kinds of integrations just to make sure that we got a handle on how Kubernetes works. If you can create optimism and buzz around something, it’s half won. And if the developers and project managers know this is working, you’re more or less done." Adds Reinhardt: "You need to create some very visible, quick wins in order to overcome the status quo."
- The impact on the speed of deployment was clear: "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
- The potential impact on cost was another bonus. "Hosting applications is quite expensive, so moving to the cloud is something that we really want to be able to do," says Danielsson. With the ability to adapt workloads, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost."
- Just as importantly, Danielsson says, there’s added flexibility: "When we try to move or rework applications that are really crucial, it’s often tricky to validate whether the path we want to take is going to work out well. In order to validate that, we would need to reproduce the environment and really do testing, and that’s prohibitively expensive and simply not doable with traditional host providers. Cloud native gives us the ability to do risky changes and validate them in a cost-effective way."
- As word of the two successful test projects spread throughout the company, interest in Kubernetes has grown. "We want to be able to support our developers in running Kubernetes clusters but we’re not there yet, so we allow them to do it as long as they’re aware that they are on their own," says Danielsson. "So that’s why we are also looking at things like [the managed Kubernetes platform] CoreOS Tectonic, Azure Container Service, ECS, etc. These kinds of services will be a lot more relevant to midsize companies that want to leverage cloud native but don’t have the IT departments or the structure around that."
- In the next year and a half, Danielsson says the company will be working on moving one of their legacy desktop products, a web app for researching legislation and tax laws originally built in Java Enterprise, onto cloud-native technology. "We’re doing a microservice split out right now so that we can independently deploy the different parts," he says. The main website, which provides free content for customers, is also moving to cloud native.
-
-
-
-
-
-
- "the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."
-
-
-
-
-
-
- But with these goals, Danielsson believes there are bigger cultural challenges that need to be constantly addressed. The move to new technology, not to mention a shift toward DevOps, means a lot of change for employees. "The roles were rather fixed in the past," he says. "You had developers, you had project leads, you had testers. And now you get into these really, really important things like test automation. Testers aren’t actually doing click testing anymore, and they have to write automated testing. And if you really want to go full-blown CI/CD, all these little pieces have to work together so that you get the confidence to do a check in, and know this check in is going to land in production, because if I messed up, some test is going to break. This is a really powerful thing because whatever you do, whenever you merge something into the trunk or to the master, this is going live. And that’s where you either get the people or they run away screaming."
- Danielsson understands that it may take some people much longer to get used to the new ways.
- "Culture is nothing that you can force on people," he says. "You have to live it for yourself. You have to evangelize. You have to show the advantages time and time again: This is how you can do it, this is what you get from it." To that end, his team has scheduled daylong workshops for the staff, bringing in outside experts to talk about everything from API to Devops to cloud.
- For every person who runs away screaming, many others get drawn in. "Get that foot in the door and make them really interested in this stuff," says Danielsson. "Usually it catches on. We have people you never would have expected chanting, ‘Docker Docker Docker’ now. It’s cool to see them realize that there is a world outside of their Python libraries. It’s awesome to see them really work with Kubernetes."
- Ultimately, Reinhardt says, "the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."
-
-
-
+{{< case-studies/lead >}}
+More than 80 years ago, Haufe Group was founded as a traditional publishing company, printing books and commentary on paper.
+{{< /case-studies/lead >}}
+
+
By the 1990s, though, the company's leaders recognized that the future was digital, and to their credit, were able to transform Haufe Group into a media and software business that now gets 95 percent of its sales from digital products. "Among the German companies doing this, we were one of the early adopters," says Martin Danielsson, Solution Architect for Haufe Group.
+
+
And now they're leading the way for midsize companies embracing cloud-native technology like Kubernetes. "The really big companies like Ticketmaster and Google get it right, and the startups get it right because they're faster," says Danielsson. "We're in this big lump of companies in the middle with a lot of legacy, a lot of structure, a lot of culture that does not easily fit the cloud technologies. We're just 1,500 people, but we have hundreds of customer-facing applications. So we're doing things that will be relevant for many companies of our size or even smaller."
+
+
Many of those legacy challenges stemmed from simply following the technology trends of the times. "We used to do full DevOps," he says. In the 1990s and 2000s, "that meant that you had your hardware in the basement. And then 10 years ago, the hype of the moment was to outsource application operations, outsource everything, and strip down your IT department to take away the distraction of all these hardware things. That's not our area of expertise. We didn't want to be an infrastructure provider. And now comes the backlash of that."
+
+
Haufe Group began feeling the pain as they were developing more new products, from Internet portals for tax experts to personnel training software, that have created demands for increased speed, reliability and scalability. "Right now, we have this break in workflows, where we go from writing concepts to developing, handing it over to production and then handing that over to your host provider," he says. "And then when things go bad we have no clue what went wrong. We definitely want to take back control, and we want to move a lot faster. Adapting workloads is something that we really want to be able to do."
+
+
Those needs led them to explore cloud-native technology. Their first foray into the cloud was doing deployments in Microsoft Azure, once it became available in Europe, for desktop products that had built-in download services. Hosting expenses for such bandwidth-heavy services were too high, so the company turned to the cloud. "After that, it has been different projects trying out different things," says Danielsson.
+
+{{< case-studies/quote image="/images/case-studies/haufegroup/banner3.jpg" >}}
+"We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didn't fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."
+{{< /case-studies/quote >}}
+
+
Two years ago, Holger Reinhardt joined Haufe Group as CTO and rapidly re-oriented the traditional host provider-based approach toward a cloud and API-first strategy. A core part of this strategy was a strong mandate to embrace infrastructure-as-code across the entire software deployment lifecycle via Docker. Some experiments went further than others; German regulations about sensitive data proved to be a roadblock in moving some workloads to Azure and Amazon Web Services. "Due to our history, Germany is really strict with things like personally identifiable data," Danielsson says.
+
+
These experiments took on new life with the arrival of the Azure Sovereign Cloud for Germany (an Azure clone run by the German T-Systems provider). With the availability of Azure.de—which conforms to Germany's privacy regulations—teams started to seriously consider deploying production loads in Docker into the cloud. "We have been doing containers for the last two years, and we really got the hang of how they work," says Danielsson. "But it was always for development and test, never in production, because we didn't fully understand how that would work. And to me, Kubernetes was definitely the technology that solved that."
+
+
In parallel, Danielsson had built an API management system with the aim of supporting CI/CD scenarios, aspects of which were missing in off-the-shelf API management products. With a foundation based on Mashape's Kong gateway, it is open-sourced as wicked.haufe.io. He put wicked.haufe.io to use with his product team.
+
+
Otherwise, Danielsson says his philosophy was "don't try to reinvent the wheel all the time. Go for what's there and 99 percent of the time it will be enough. And if you think you really need something custom or additional, think perhaps once or twice again. One of the things that I find so amazing with this cloud-native framework is that everything ties in."
+
+
Currently, Haufe Group is working on two projects using Kubernetes in production. One is a new mobile application for researching legislation and tax laws. "We needed a way to take out functionality from a legacy core and put an application on top of that with an API gateway—a lot of moving parts that screams containers," says Danielsson. So the team moved the build pipeline away from "deploying to some old, huge machine that you could deploy anything to" and onto a Kubernetes cluster where there would be automatic CI/CD "with feature branches and all these things that were a bit tedious in the past."
+
+{{< case-studies/quote image="/images/case-studies/haufegroup/banner4.jpg" >}}
+"Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
+{{< /case-studies/quote >}}
+
+
It was a proof of concept effort, and the proof was in the pudding. "Everyone was really impressed at what we accomplished in a week," says Danielsson. "We did these kinds of integrations just to make sure that we got a handle on how Kubernetes works. If you can create optimism and buzz around something, it's half won. And if the developers and project managers know this is working, you're more or less done." Adds Reinhardt: "You need to create some very visible, quick wins in order to overcome the status quo."
+
+
The impact on the speed of deployment was clear: "Before, we had to announce at least a week in advance when we wanted to do a release because there was a huge checklist of things that you had to do," says Danielsson. "By going cloud native, we have the infrastructure in place to be able to automate all of these things. Now we can get a new release done in half an hour instead of days."
+
+
The potential impact on cost was another bonus. "Hosting applications is quite expensive, so moving to the cloud is something that we really want to be able to do," says Danielsson. With the ability to adapt workloads, teams "will be able to scale down to around half the capacity at night, saving 30 percent of the hardware cost."
+
+
Just as importantly, Danielsson says, there's added flexibility: "When we try to move or rework applications that are really crucial, it's often tricky to validate whether the path we want to take is going to work out well. In order to validate that, we would need to reproduce the environment and really do testing, and that's prohibitively expensive and simply not doable with traditional host providers. Cloud native gives us the ability to do risky changes and validate them in a cost-effective way."
+
+
As word of the two successful test projects spread throughout the company, interest in Kubernetes has grown. "We want to be able to support our developers in running Kubernetes clusters but we're not there yet, so we allow them to do it as long as they're aware that they are on their own," says Danielsson. "So that's why we are also looking at things like [the managed Kubernetes platform] CoreOS Tectonic, Azure Container Service, ECS, etc. These kinds of services will be a lot more relevant to midsize companies that want to leverage cloud native but don't have the IT departments or the structure around that."
+
+
In the next year and a half, Danielsson says the company will be working on moving one of their legacy desktop products, a web app for researching legislation and tax laws originally built in Java Enterprise, onto cloud-native technology. "We're doing a microservice split out right now so that we can independently deploy the different parts," he says. The main website, which provides free content for customers, is also moving to cloud native.
+
+{{< case-studies/quote >}}
+"the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."
+{{< /case-studies/quote >}}
+
+
But with these goals, Danielsson believes there are bigger cultural challenges that need to be constantly addressed. The move to new technology, not to mention a shift toward DevOps, means a lot of change for employees. "The roles were rather fixed in the past," he says. "You had developers, you had project leads, you had testers. And now you get into these really, really important things like test automation. Testers aren't actually doing click testing anymore, and they have to write automated testing. And if you really want to go full-blown CI/CD, all these little pieces have to work together so that you get the confidence to do a check in, and know this check in is going to land in production, because if I messed up, some test is going to break. This is a really powerful thing because whatever you do, whenever you merge something into the trunk or to the master, this is going live. And that's where you either get the people or they run away screaming." Danielsson understands that it may take some people much longer to get used to the new ways.
+
+
"Culture is nothing that you can force on people," he says. "You have to live it for yourself. You have to evangelize. You have to show the advantages time and time again: This is how you can do it, this is what you get from it." To that end, his team has scheduled daylong workshops for the staff, bringing in outside experts to talk about everything from API to Devops to cloud.
+
+
For every person who runs away screaming, many others get drawn in. "Get that foot in the door and make them really interested in this stuff," says Danielsson. "Usually it catches on. We have people you never would have expected chanting, 'Docker Docker Docker' now. It's cool to see them realize that there is a world outside of their Python libraries. It's awesome to see them really work with Kubernetes."
+
+
Ultimately, Reinhardt says, "the execution of a strategy requires alignment of culture, structure and technology. Only if those three dimensions are aligned can you successfully execute a transformation into microservices and cloud-native architectures. And it is only then that the Cloud will pay the dividends in much faster speeds in product innovation and much lower operational costs."
diff --git a/content/ko/case-studies/huawei/index.html b/content/ko/case-studies/huawei/index.html
index 29de86f5c4ef4..ec1cd212f2fcf 100644
--- a/content/ko/case-studies/huawei/index.html
+++ b/content/ko/case-studies/huawei/index.html
@@ -1,101 +1,73 @@
---
title: Huawei Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_huawei.css
----
-
-
CASE STUDY:
Embracing Cloud Native as a User – and a Vendor
-
+new_case_study_styles: true
+heading_background: /images/case-studies/huawei/banner1.jpg
+heading_title_logo: /images/huawei_logo.png
+subheading: >
+ Embracing Cloud Native as a User – and a Vendor
+case_study_details:
+ - Company: Huawei
+ - Location: Shenzhen, China
+ - Industry: Telecommunications Equipment
+---
-
- Company Huawei Location Shenzhen, China Industry Telecommunications Equipment
-
+
Challenge
-
+
A multinational company that's the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, Huawei has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. "It's very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge," says Peixin Hou, the company's Chief Software Architect and Community Director for Open Source. "We wanted to move into a more agile and decent practice."
-
+
Solution
-
-
-
Challenge
- A multinational company that’s the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, Huawei has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. "It’s very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge," says Peixin Hou, the company’s Chief Software Architect and Community Director for Open Source. "We wanted to move into a more agile and decent practice."
-
+
After deciding to use container technology, Huawei began moving the internal I.T. department's applications to run on Kubernetes. So far, about 30 percent of these applications have been transferred to cloud native.
-
-
Solution
- After deciding to use container technology, Huawei began moving the internal I.T. department’s applications to run on Kubernetes. So far, about 30 percent of these applications have been transferred to cloud native.
-
-
Impact
- "By the end of 2016, Huawei’s internal I.T. department managed more than 4,000 nodes with tens of thousands containers using a Kubernetes-based Platform as a Service (PaaS) solution," says Hou. "The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold." For the bottom line, he says, "We also see significant operating expense spending cut, in some circumstances 20-30 percent, which we think is very helpful for our business." Given the results Huawei has had internally – and the demand it is seeing externally – the company has also built the technologies into FusionStage™, the PaaS solution it offers its customers.
-
-
-
-
-
-
-
-
- "If you’re a vendor, in order to convince your customer, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology." - Peixin Hou, chief software architect and community director for open source
-
-
-
-
-
-
- Huawei’s Kubernetes journey began with one developer.
- Over two years ago, one of the engineers employed by the networking and telecommunications giant became interested in Kubernetes, the technology for managing application containers across clusters of hosts, and started contributing to its open source community. As the technology developed and the community grew, he kept telling his managers about it.
- And as fate would have it, at the same time, Huawei was looking for a better orchestration system for its internal enterprise I.T. department, which supports every business flow processing. "We have more than 180,000 employees worldwide, and a complicated internal procedure, so probably every week this department needs to develop some new applications," says Peixin Hou, Huawei’s Chief Software Architect and Community Director for Open Source. "Very often our I.T. departments need to launch tens of thousands of containers, with tasks running across thousands of nodes across the world. It’s very much a distributed system, so we found that managing all of the tasks in a more consistent way is always a challenge."
- In the past, Huawei had used virtual machines to encapsulate applications, but "every time when we start a VM," Hou says, "whether because it’s a new service or because it was a service that was shut down because of some abnormal node functioning, it takes a lot of time." Huawei turned to containerization, so the timing was right to try Kubernetes. It took a year to adopt that engineer’s suggestion – the process "is not overnight," says Hou – but once in use, he says, "Kubernetes basically solved most of our problems. Before, the time of deployment took about a week, now it only takes minutes. The developers are happy. That department is also quite happy."
- Hou sees great benefits to the company that come with using this technology: "Kubernetes brings agility, scale-out capability, and DevOps practice to the cloud-based applications," he says. "It provides us with the ability to customize the scheduling architecture, which makes possible the affinity between container tasks that gives greater efficiency. It supports multiple container formats. It has extensive support for various container networking solutions and container storage."
-
-
-
-
-
- "Kubernetes basically solved most of our problems. Before, the time of deployment took about a week, now it only takes minutes. The developers are happy. That department is also quite happy."
-
-
-
-
-
- And not least of all, there’s an impact on the bottom line. Says Hou: "We also see significant operating expense spending cut in some circumstances 20-30 percent, which is very helpful for our business."
- Pleased with those initial results, and seeing a demand for cloud native technologies from its customers, Huawei doubled down on Kubernetes. In the spring of 2016, the company became not only a user but also a vendor.
- "We built the Kubernetes technologies into our solutions," says Hou, referring to Huawei’s FusionStage™ PaaS offering. "Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them. We’ve started to work with some Chinese banks, and we see a lot of interest from our customers like China Mobile and Deutsche Telekom."
- "If you’re just a user, you’re just a user," adds Hou. "But if you’re a vendor, in order to even convince your customers, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology. We provide customer wisdom." While Huawei has its own private cloud, many of its customers run cross-cloud applications using Huawei’s solutions. It’s a big selling point that most of the public cloud providers now support Kubernetes. "This makes the cross-cloud transition much easier than with other solutions," says Hou.
-
-
-
-
-
- "Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them."
-
-
-
-
-
- Within Huawei itself, once his team completes the transition of the internal business procedure department to Kubernetes, Hou is looking to convince more departments to move over to the cloud native development cycle and practice. "We have a lot of software developers, so we will provide them with our platform as a service solution, our own product," he says. "We would like to see significant cuts in their iteration cycle."
- Having overseen the initial move to Kubernetes at Huawei, Hou has advice for other companies considering the technology: "When you start to design the architecture of your application, think about cloud native, think about microservice architecture from the beginning," he says. "I think you will benefit from that."
- But if you already have legacy applications, "start from some microservice-friendly part of those applications first, parts that are relatively easy to be decomposed into simpler pieces and are relatively lightweight," Hou says. "Don’t think from day one that within how many days I want to move the whole architecture, or move everything into microservices. Don’t put that as a kind of target. You should do it in a gradual manner. And I would say for legacy applications, not every piece would be suitable for microservice architecture. No need to force it."
- After all, as enthusiastic as Hou is about Kubernetes at Huawei, he estimates that "in the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There’s still 20 percent that’s not, but it’s fine. If we can make 80 percent of our workload really be cloud native, to have agility, it’s a much better world at the end of the day."
-
-
-
-
-
-
- "In the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There’s still 20 percent that’s not, but it’s fine. If we can make 80 percent of our workload really be cloud native, to have agility, it’s a much better world at the end of the day."
-
-
-
-
-
- In the nearer future, Hou is looking forward to new features that are being developed around Kubernetes, not least of all the ones that Huawei is contributing to. Huawei engineers have worked on the federation feature (which puts multiple Kubernetes clusters in a single framework to be managed seamlessly), scheduling, container networking and storage, and a just-announced technology called Container Ops, which is a DevOps pipeline engine. "This will put every DevOps job into a container," he explains. "And then this container mechanism is running using Kubernetes, but is also used to test Kubernetes. With that mechanism, we can make the containerized DevOps jobs be created, shared and managed much more easily than before."
- Still, Hou sees this technology as only halfway to its full potential. First and foremost, he’d like to expand the scale it can orchestrate, which is important for supersized companies like Huawei – as well as some of its customers.
- Hou proudly notes that two years after that first Huawei engineer became a contributor to and evangelist for Kubernetes, Huawei is now a top contributor to the community. "We’ve learned that the more you contribute to the community," he says, "the more you get back."
-
-
-
+
+
"By the end of 2016, Huawei's internal I.T. department managed more than 4,000 nodes with tens of thousands containers using a Kubernetes-based Platform as a Service (PaaS) solution," says Hou. "The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold." For the bottom line, he says, "We also see significant operating expense spending cut, in some circumstances 20-30 percent, which we think is very helpful for our business." Given the results Huawei has had internally – and the demand it is seeing externally – the company has also built the technologies into FusionStage™, the PaaS solution it offers its customers.
+
+{{< case-studies/quote author="Peixin Hou, chief software architect and community director for open source" >}}
+"If you're a vendor, in order to convince your customer, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology."
+{{< /case-studies/quote >}}
+
+
Huawei's Kubernetes journey began with one developer. Over two years ago, one of the engineers employed by the networking and telecommunications giant became interested in Kubernetes, the technology for managing application containers across clusters of hosts, and started contributing to its open source community. As the technology developed and the community grew, he kept telling his managers about it.
+
+
And as fate would have it, at the same time, Huawei was looking for a better orchestration system for its internal enterprise I.T. department, which supports every business flow processing. "We have more than 180,000 employees worldwide, and a complicated internal procedure, so probably every week this department needs to develop some new applications," says Peixin Hou, Huawei's Chief Software Architect and Community Director for Open Source. "Very often our I.T. departments need to launch tens of thousands of containers, with tasks running across thousands of nodes across the world. It's very much a distributed system, so we found that managing all of the tasks in a more consistent way is always a challenge."
+
+
In the past, Huawei had used virtual machines to encapsulate applications, but "every time when we start a VM," Hou says, "whether because it's a new service or because it was a service that was shut down because of some abnormal node functioning, it takes a lot of time." Huawei turned to containerization, so the timing was right to try Kubernetes. It took a year to adopt that engineer's suggestion – the process "is not overnight," says Hou – but once in use, he says, "Kubernetes basically solved most of our problems. Before, the time of deployment took about a week, now it only takes minutes. The developers are happy. That department is also quite happy."
+
+
Hou sees great benefits to the company that come with using this technology: "Kubernetes brings agility, scale-out capability, and DevOps practice to the cloud-based applications," he says. "It provides us with the ability to customize the scheduling architecture, which makes possible the affinity between container tasks that gives greater efficiency. It supports multiple container formats. It has extensive support for various container networking solutions and container storage."
+
+{{< case-studies/quote image="/images/case-studies/huawei/banner3.jpg" >}}
+"Kubernetes basically solved most of our problems. Before, the time of deployment took about a week, now it only takes minutes. The developers are happy. That department is also quite happy."
+{{< /case-studies/quote >}}
+
+
And not least of all, there's an impact on the bottom line. Says Hou: "We also see significant operating expense spending cut, in some circumstances 20-30 percent, which is very helpful for our business."
+
+
Pleased with those initial results, and seeing a demand for cloud native technologies from its customers, Huawei doubled down on Kubernetes. In the spring of 2016, the company became not only a user but also a vendor.
+
+
"We built the Kubernetes technologies into our solutions," says Hou, referring to Huawei's FusionStage™ PaaS offering. "Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them. We've started to work with some Chinese banks, and we see a lot of interest from our customers like China Mobile and Deutsche Telekom."
+
+
"If you're just a user, you're just a user," adds Hou. "But if you're a vendor, in order to even convince your customers, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology. We provide customer wisdom." While Huawei has its own private cloud, many of its customers run cross-cloud applications using Huawei's solutions. It's a big selling point that most of the public cloud providers now support Kubernetes. "This makes the cross-cloud transition much easier than with other solutions," says Hou.
+
+{{< case-studies/quote image="/images/case-studies/huawei/banner4.jpg" >}}
+"Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them."
+{{< /case-studies/quote >}}
+
+
Within Huawei itself, once his team completes the transition of the internal business procedure department to Kubernetes, Hou is looking to convince more departments to move over to the cloud native development cycle and practice. "We have a lot of software developers, so we will provide them with our platform as a service solution, our own product," he says. "We would like to see significant cuts in their iteration cycle."
+
+
Having overseen the initial move to Kubernetes at Huawei, Hou has advice for other companies considering the technology: "When you start to design the architecture of your application, think about cloud native, think about microservice architecture from the beginning," he says. "I think you will benefit from that."
+
+
But if you already have legacy applications, "start from some microservice-friendly part of those applications first, parts that are relatively easy to be decomposed into simpler pieces and are relatively lightweight," Hou says. "Don't think from day one that within how many days I want to move the whole architecture, or move everything into microservices. Don't put that as a kind of target. You should do it in a gradual manner. And I would say for legacy applications, not every piece would be suitable for microservice architecture. No need to force it."
+
+
+After all, as enthusiastic as Hou is about Kubernetes at Huawei, he estimates that "in the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There's still 20 percent that's not, but it's fine. If we can make 80 percent of our workload really be cloud native, to have agility, it's a much better world at the end of the day."
+
+{{< case-studies/quote >}}
+"In the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There's still 20 percent that's not, but it's fine. If we can make 80 percent of our workload really be cloud native, to have agility, it's a much better world at the end of the day."
+{{< /case-studies/quote >}}
+
+
+In the nearer future, Hou is looking forward to new features that are being developed around Kubernetes, not least of all the ones that Huawei is contributing to. Huawei engineers have worked on the federation feature (which puts multiple Kubernetes clusters in a single framework to be managed seamlessly), scheduling, container networking and storage, and a just-announced technology called Container Ops, which is a DevOps pipeline engine. "This will put every DevOps job into a container," he explains. "And then this container mechanism is running using Kubernetes, but is also used to test Kubernetes. With that mechanism, we can make the containerized DevOps jobs be created, shared and managed much more easily than before."
+
+
+Still, Hou sees this technology as only halfway to its full potential. First and foremost, he'd like to expand the scale it can orchestrate, which is important for supersized companies like Huawei – as well as some of its customers.
+
+
+Hou proudly notes that two years after that first Huawei engineer became a contributor to and evangelist for Kubernetes, Huawei is now a top contributor to the community. "We've learned that the more you contribute to the community," he says, "the more you get back."
diff --git a/content/ko/case-studies/ibm/index.html b/content/ko/case-studies/ibm/index.html
index e9a78a944371f..aa3f108e1cfba 100644
--- a/content/ko/case-studies/ibm/index.html
+++ b/content/ko/case-studies/ibm/index.html
@@ -1,108 +1,80 @@
---
title: IBM Case Study
-
linkTitle: IBM
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
logo: ibm_featured_logo.svg
featured: false
+
+new_case_study_styles: true
+heading_background: /images/case-studies/ibm/banner1.jpg
+heading_title_logo: /images/ibm_logo.png
+subheading: >
+ Building an Image Trust Service on Kubernetes with Notary and TUF
+case_study_details:
+ - Company: IBM
+ - Location: Armonk, New York
+ - Industry: Cloud Computing
---
-
-
-CASE STUDY:
-Building an Image Trust Service on Kubernetes with Notary and TUF
+
+Challenge
-
+
+IBM Cloud offers public, private, and hybrid cloud functionality across a diverse set of runtimes from its OpenWhisk-based function as a service (FaaS) offering, managed Kubernetes and containers, to Cloud Foundry platform as a service (PaaS). These runtimes are combined with the power of the company's enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, such as IBM's Weather Company API and data services. In late 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.
-
- Company IBM Location Armonk, New York Industry Cloud Computing
-
+
+Solution
-
-
-
-
-
-Challenge
- IBM Cloud offers public, private, and hybrid cloud functionality across a diverse set of runtimes from its OpenWhisk-based function as a service (FaaS) offering, managed Kubernetes and containers, to Cloud Foundry platform as a service (PaaS). These runtimes are combined with the power of the company’s enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services. Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, including capabilities such as IBM’s Weather Company API and data services. In the later part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.
-
-
-Solution
- The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the Cloud Native Computing Foundation (CNCF) open source project Notary, according to Michael Hough, a software developer with the IBM Cloud Container Registry team. Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM’s trust story, since it makes it possible for users to consume the company’s Notary offering from within their IKS clusters. The offering is that Notary server runs in IBM’s cloud, and then Portieris runs inside the IKS cluster. This enables users to be able to have their IKS cluster verify that the image they're loading containers from contains exactly what they expect it to, and Portieris is what allows an IKS cluster to apply that verification.
+
+The work on this new service culminated in its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the Cloud Native Computing Foundation (CNCF) open source project Notary, according to Michael Hough, a software developer with the IBM Cloud Container Registry team. Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM's trust story, since it makes it possible for users to consume the company's Notary offering from within their IKS clusters. In this offering, the Notary server runs in IBM's cloud, and Portieris runs inside the IKS cluster. This enables users to have their IKS cluster verify that the image they're loading containers from contains exactly what they expect it to; Portieris is what allows an IKS cluster to apply that verification.
-
+
+Impact
-
+
+IBM's intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. "Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem," Hough says. The company had not been offering image signing before, and Notary is the tool it used to implement that capability. "We had a multi-tenant Docker Registry with private image hosting," Hough says. "The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose."
+{{< case-studies/quote author="Michael Hough, a software developer with the IBM Container Registry team" >}}
+"We see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects—no matter the originating vendor or project."
+{{< /case-studies/quote >}}
-
-Impact
- IBM's intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. "Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem," Hough says. The company had not been offering image signing before, and Notary is the tool it used to implement that capability. "We had a multi-tenant Docker Registry with private image hosting," Hough says. "The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose."
-
-
-
-
-
-
-
-
- "We see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects—no matter the originating vendor or project." - Michael Hough, a software developer with the IBM Container Registry team
-
-
-
-
-
-Docker had already created the Notary project as an implementation of The Update Framework (TUF), and this implementation of TUF provided the capabilities for Docker Content Trust.
-"After contribution to CNCF of both TUF and Notary, we perceived that it was becoming the de facto standard for image signing in the container ecosystem", says Michael Hough, a software developer with the IBM Cloud Container Registry team.
-
-The key reason for selecting Notary was that it was already compatible with the existing authentication stack IBM’s container registry was using. So was the design of TUF, which does not require the registry team to have to enter the business of key management. Both of these were "attractive design decisions that confirmed our choice of Notary," he says.
-
-The introduction of Notary to implement image signing capability in IBM Cloud encourages increased security across IBM's cloud platform, "where we expect it will include both the signing of official IBM images as well as expected use by security-conscious enterprise customers," Hough says. "When combined with security policy implementations, we expect an increased use of deployment policies in CI/CD pipelines that allow for fine-grained control of service deployment based on image signers."
-The availability of image signing "is a huge benefit to security-conscious customers who require this level of image provenance and security," Hough says. "With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment."
-
-
-
-
-
-
- "Image signing is one key part of our Kubernetes container service offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem"
- Michael Hough, a software developer with the IBM Cloud Container Registry team
-
-
-
-
- Now that the Notary-implemented service is generally available in IBM’s public cloud as a component of its existing IBM Cloud Container Registry, it is deployed as a highly available service across five IBM Cloud regions. This high-availability deployment has three instances across two zones in each of the five regions, load balanced with failover support. "We have also deployed it with end-to-end TLS support through to our back-end IBM Cloudant persistence storage service," Hough says.
-
- The IBM team has created and open sourced a Kubernetes admission controller called Portieris, which uses Notary signing information combined with customer-defined security policies to control image deployment into their cluster. "We are hoping to drive adoption of Portieris through its use of our Notary offering," Hough says.
-
- IBM has been a key player in the creation and support of open source foundations, including CNCF. Todd Moore, IBM's vice president of Open Technology, is the current CNCF governing board chair and a number of IBMers are active across many of the CNCF member projects.
-
-
-
-
-
-
-
- "With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment."
- Michael Hough, a software developer with the IBM Cloud Container Registry team
-
-
-
-
-
-
- "Given that, we see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects—no matter the originating vendor or project," Hough says. Because the entire cloud native world is a fast-moving area with many competing vendors and solutions, "we see the CNCF model as an arbiter of openness and fair play across the ecosystem," he says.
-
-With both TUF and Notary as part of CNCF, IBM expects there to be standardization around these capabilities beyond just de facto standards for signing and provenance. IBM has determined to not simply consume Notary, but also to contribute to the open source project where applicable. "IBMers have contributed a CouchDB backend to support our use of IBM Cloudant as the persistent store; and are working on generalization of the pkcs11 provider, allowing support of other security hardware devices beyond Yubikey," Hough says.
-
-
-
-
-
- "There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage."
- Michael Hough, a software developer with the IBM Cloud Container Registry team
-
-What advice does Hough have for other companies that are looking to deploy Notary or a cloud native infrastructure?
-
-"While this is true for many areas of cloud native infrastructure software, we found that a high-availability, multi-region deployment of Notary requires a solid implementation to handle certificate management and rotation," he says. "There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage."
-
-
-
-
+{{< case-studies/lead >}}
+Docker had already created the Notary project as an implementation of The Update Framework (TUF), and this implementation of TUF provided the capabilities for Docker Content Trust.
+{{< /case-studies/lead >}}
+
+
"After contribution to CNCF of both TUF and Notary, we perceived that it was becoming the de facto standard for image signing in the container ecosystem", says Michael Hough, a software developer with the IBM Cloud Container Registry team.
+
+
+The key reason for selecting Notary was that it was already compatible with the existing authentication stack IBM's container registry was using. So was the design of TUF, which does not require the registry team to enter the business of key management. Both of these were "attractive design decisions that confirmed our choice of Notary," he says.
+
+
+The introduction of Notary to implement image signing capability in IBM Cloud encourages increased security across IBM's cloud platform, "where we expect it will include both the signing of official IBM images as well as expected use by security-conscious enterprise customers," Hough says. "When combined with security policy implementations, we expect an increased use of deployment policies in CI/CD pipelines that allow for fine-grained control of service deployment based on image signers."
+
+
+The availability of image signing "is a huge benefit to security-conscious customers who require this level of image provenance and security," Hough says. "With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment."
+
+{{< case-studies/quote
+ image="/images/case-studies/ibm/banner3.jpg"
+ author="Michael Hough, a software developer with the IBM Cloud Container Registry team"
+>}}
+"Image signing is one key part of our Kubernetes container service offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem"
+{{< /case-studies/quote >}}
+
+
+Now that the Notary-implemented service is generally available in IBM's public cloud as a component of its existing IBM Cloud Container Registry, it is deployed as a highly available service across five IBM Cloud regions. This high-availability deployment has three instances across two zones in each of the five regions, load balanced with failover support. "We have also deployed it with end-to-end TLS support through to our back-end IBM Cloudant persistence storage service," Hough says.
+
+
+The IBM team has created and open sourced a Kubernetes admission controller called Portieris, which uses Notary signing information combined with customer-defined security policies to control image deployment into their cluster. "We are hoping to drive adoption of Portieris through its use of our Notary offering," Hough says.
+
+
+IBM has been a key player in the creation and support of open source foundations, including CNCF. Todd Moore, IBM's vice president of Open Technology, is the current CNCF governing board chair and a number of IBMers are active across many of the CNCF member projects.
+
+{{< case-studies/quote
+ image="/images/case-studies/ibm/banner4.jpg"
+ author="Michael Hough, a software developer with the IBM Cloud Container Registry team"
+>}}
+"With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment."
+{{< /case-studies/quote >}}
+
+
"Given that, we see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects—no matter the originating vendor or project," Hough says. Because the entire cloud native world is a fast-moving area with many competing vendors and solutions, "we see the CNCF model as an arbiter of openness and fair play across the ecosystem," he says.
+
+
+With both TUF and Notary as part of CNCF, IBM expects there to be standardization around these capabilities beyond just de facto standards for signing and provenance. IBM has determined to not simply consume Notary, but also to contribute to the open source project where applicable. "IBMers have contributed a CouchDB backend to support our use of IBM Cloudant as the persistent store; and are working on generalization of the pkcs11 provider, allowing support of other security hardware devices beyond Yubikey," Hough says.
+
+{{< case-studies/quote author="Michael Hough, a software developer with the IBM Cloud Container Registry team" >}}
+"There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage."
+{{< /case-studies/quote >}}
+
+
+What advice does Hough have for other companies that are looking to deploy Notary or a cloud native infrastructure?
+
+
"While this is true for many areas of cloud native infrastructure software, we found that a high-availability, multi-region deployment of Notary requires a solid implementation to handle certificate management and rotation," he says. "There are new projects addressing these challenges, including within CNCF. We will definitely be following these advancements with interest. We found the Notary community to be an active and friendly community open to changes, such as our addition of a CouchDB backend for persistent storage."
diff --git a/content/ko/case-studies/ing/index.html b/content/ko/case-studies/ing/index.html
index 943daec2dec8f..037ba9775d90d 100644
--- a/content/ko/case-studies/ing/index.html
+++ b/content/ko/case-studies/ing/index.html
@@ -5,95 +5,74 @@
cid: caseStudies
weight: 50
featured: true
-css: /css/style_case_studies.css
quote: >
- The big cloud native promise to our business is the ability to go from idea to production within 48 hours. We are some years away from this, but that’s quite feasible to us.
+ The big cloud native promise to our business is the ability to go from idea to production within 48 hours. We are some years away from this, but that's quite feasible to us.
+
+new_case_study_styles: true
+heading_background: /images/case-studies/ing/banner1.jpg
+heading_title_logo: /images/ing_logo.png
+subheading: >
+ Driving Banking Innovation with Cloud Native
+case_study_details:
+ - Company: ING
+ - Location: Amsterdam, Netherlands
+ - Industry: Finance
---
+
+Challenge
+
+
+After undergoing an agile transformation, ING realized it needed a standardized platform to support the work their developers were doing. "Our DevOps teams got empowered to be autonomous," says Infrastructure Architect Thijs Ebbers. "It has benefits; you get all kinds of ideas. But a lot of teams are going to devise the same wheel. Teams started tinkering with Docker, Docker Swarm, Kubernetes, Mesos. Well, it's not really useful for a company to have one hundred wheels, instead of one good wheel."
+
+
+Solution
+
+
+Using Kubernetes for container orchestration and Docker for containerization, the ING team began building an internal public cloud for its CI/CD pipeline and green-field applications. The pipeline, which has been built on Mesos Marathon, will be migrated onto Kubernetes. The bank-account management app Yolt, serving the U.K. (and soon France and Italy) market, is already live, hosted on a Kubernetes framework. At least two greenfield projects currently on the Kubernetes framework will be going into production later this year. By the end of 2018, the company plans to have converted a number of APIs used in the banking customer experience to cloud native APIs and host these on the Kubernetes-based platform.
+
+
+Impact
+
+
"Cloud native technologies are helping our speed, from getting an application to test to acceptance to production," says Infrastructure Architect Onno Van der Voort. "If you walk around ING now, you see all these DevOps teams, doing stand-ups, demoing. They try to get new functionality out there really fast. We held a hackathon for one of our existing components and basically converted it to cloud native within 2.5 days, though of course the tail takes more time before code is fully production ready."
+
+{{< case-studies/quote author="Thijs Ebbers, Infrastructure Architect, ING">}}
+"The big cloud native promise to our business is the ability to go from idea to production within 48 hours. We are some years away from this, but that's quite feasible to us."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+ING has long embraced innovation in banking, launching the internet-based ING Direct in 1997.
+{{< /case-studies/lead >}}
+
+
+In that same spirit, the company underwent an agile transformation a few years ago. "Our DevOps teams got empowered to be autonomous," says Infrastructure Architect Thijs Ebbers. "It has benefits; you get all kinds of ideas. But a lot of teams are going to devise the same wheel. Teams started tinkering with Docker, Docker Swarm, Kubernetes, Mesos. Well, it's not really useful for a company to have one hundred wheels, instead of one good wheel."
+
+
+Looking to standardize the deployment process within the company's strict security guidelines, the team looked at several solutions and found that in the past year, "Kubernetes won the container management framework wars," says Ebbers. "We decided to standardize ING on a Kubernetes framework." Everything is run on premise due to banking regulations, he adds, but "we will be building an internal public cloud. We are trying to get on par with what public clouds are doing. That's one of the reasons we got Kubernetes."
+
+
+They also embraced Docker to address a major pain point in ING's CI/CD pipeline. Before containerization, "Every development team had to order a VM, and it was quite a heavy delivery model for them," says Infrastructure Architect Onno Van der Voort. "Another use case for containerization is when the application travels through the pipeline, they fire up Docker containers to do test work against the applications and after they've done the work, the containers get killed again."
+
+{{< case-studies/quote
+ image="/images/case-studies/ing/banner3.jpg"
+ author="Thijs Ebbers, Infrastructure Architect, ING"
+>}}
+"We decided to standardize ING on a Kubernetes framework." Everything is run on premise due to banking regulations, he adds, but "we will be building an internal public cloud. We are trying to get on par with what public clouds are doing. That's one of the reasons we got Kubernetes."
+{{< /case-studies/quote >}}
+
+
+Because of industry regulations, applications are only allowed to go through the pipeline, where compliance is enforced, rather than be deployed directly into a container. "We have to run the complete platform of services we need, many routing from different places," says Van der Voort. "We need this Kubernetes framework for deploying the containers, with all those components, monitoring, logging. It's complex." For that reason, ING has chosen to start on the OpenShift Origin Kubernetes distribution.
+
+
Already, "cloud native technologies are helping our speed, from getting an application to test to acceptance to production," says Van der Voort. "If you walk around ING now, you see all these DevOps teams, doing stand-ups, demoing. They try to get new functionality out there really fast. We held a hackathon for one of our existing components and basically converted it to cloud native within 2.5 days, though of course the tail takes more time before code is fully production ready."
+
+
+The pipeline, which has been built on Mesos Marathon, will be migrated onto Kubernetes. Some legacy applications are also being rewritten as cloud native in order to run on the framework. At least two smaller greenfield projects built on Kubernetes will go into production this year. By the end of 2018, the company plans to have converted a number of APIs used in the banking customer experience to cloud native APIs and host these on the Kubernetes-based platform.
+
+{{< case-studies/quote
+ image="/images/case-studies/ing/banner4.jpg"
+ author="Onno Van der Voort, Infrastructure Architect, ING"
+>}}
+"We have to run the complete platform of services we need, many routing from different places. We need this Kubernetes framework for deploying the containers, with all those components, monitoring, logging. It's complex."
+{{< /case-studies/quote >}}
+
+
+The team, however, doesn't see the bank's back-end systems going onto the Kubernetes platform. "Our philosophy is it only makes sense to move things to cloud if they are cloud native," says Van der Voort. "If you have traditional architecture, build traditional patterns, it doesn't hold any value to go to the cloud." Adds Cloud Platform Architect Alfonso Fernandez-Barandiaran: "ING has a strategy about where we will go, in order to improve our agility. So it's not about how cool this technology is, it's about finding the right technology and the right approach."
+
+
+The Kubernetes framework will be hosting some greenfield projects that are high priority for ING: applications the company is developing in response to PSD2, the European Commission directive requiring more innovative online and mobile payments that went into effect at the beginning of 2018. For example, a bank-account management app called Yolt, serving the U.K. market (and soon France and Italy), was built on a Kubernetes platform and has gone into production. ING is also developing blockchain-enabled applications that will live on the Kubernetes platform. "We've been contacted by a lot of development teams that have ideas with what they want to do with containers," says Ebbers.
+
+{{< case-studies/quote author="Alfonso Fernandez-Barandiaran, Cloud Platform Architect, ING" >}}
+Even with the particular requirements that come in banking, ING has managed to take a lead in technology and innovation. "Every time we have constraints, we look for maybe a better way that we can use this technology."
+{{< /case-studies/quote >}}
+
+
+Even with the particular requirements that come in banking, ING has managed to take a lead in technology and innovation. "Every time we have constraints, we look for maybe a better way that we can use this technology," says Fernandez-Barandiaran.
-
-
-CASE STUDY:
-Driving Banking Innovation with Cloud Native
-
-
-
-
-
- Company ING Location Amsterdam, Netherlands
- Industry Finance
-
-
-
-
-
-
-
-Challenge
- After undergoing an agile transformation, ING realized it needed a standardized platform to support the work their developers were doing. "Our DevOps teams got empowered to be autonomous," says Infrastructure Architect Thijs Ebbers. "It has benefits; you get all kinds of ideas. But a lot of teams are going to devise the same wheel. Teams started tinkering with Docker, Docker Swarm, Kubernetes, Mesos. Well, it’s not really useful for a company to have one hundred wheels, instead of one good wheel.
-
-
-
-Solution
- Using Kubernetes for container orchestration and Docker for containerization, the ING team began building an internal public cloud for its CI/CD pipeline and green-field applications. The pipeline, which has been built on Mesos Marathon, will be migrated onto Kubernetes. The bank-account management app Yolt in the U.K. (and soon France and Italy) market already is live hosted on a Kubernetes framework. At least two greenfield projects currently on the Kubernetes framework will be going into production later this year. By the end of 2018, the company plans to have converted a number of APIs used in the banking customer experience to cloud native APIs and host these on the Kubernetes-based platform.
-
-
-
-
-
-
-
-Impact
- "Cloud native technologies are helping our speed, from getting an application to test to acceptance to production," says Infrastructure Architect Onno Van der Voort. "If you walk around ING now, you see all these DevOps teams, doing stand-ups, demoing. They try to get new functionality out there really fast. We held a hackathon for one of our existing components and basically converted it to cloud native within 2.5 days, though of course the tail takes more time before code is fully production ready."
-
-
-
-
-
-
- "The big cloud native promise to our business is the ability to go from idea to production within 48 hours. We are some years away from this, but that’s quite feasible to us."
-
-— Thijs Ebbers, Infrastructure Architect, ING
-
-
-
-
-
-ING has long embraced innovation in banking, launching the internet-based ING Direct in 1997.
-In that same spirit, the company underwent an agile transformation a few years ago. "Our DevOps teams got empowered to be autonomous," says Infrastructure Architect Thijs Ebbers. "It has benefits; you get all kinds of ideas. But a lot of teams are going to devise the same wheel. Teams started tinkering with Docker, Docker Swarm, Kubernetes, Mesos. Well, it’s not really useful for a company to have one hundred wheels, instead of one good wheel."
- Looking to standardize the deployment process within the company’s strict security guidelines, the team looked at several solutions and found that in the past year, "Kubernetes won the container management framework wars," says Ebbers. "We decided to standardize ING on a Kubernetes framework." Everything is run on premise due to banking regulations, he adds, but "we will be building an internal public cloud. We are trying to get on par with what public clouds are doing. That’s one of the reasons we got Kubernetes."
- They also embraced Docker to address a major pain point in ING’s CI/CD pipeline. Before containerization, "Every development team had to order a VM, and it was quite a heavy delivery model for them," says Infrastructure Architect Onno Van der Voort. "Another use case for containerization is when the application travels through the pipeline, they fire up Docker containers to do test work against the applications and after they’ve done the work, the containers get killed again."
-
-
-
-
-
- "We decided to standardize ING on a Kubernetes framework." Everything is run on premise due to banking regulations, he adds, but "we will be building an internal public cloud. We are trying to get on par with what public clouds are doing. That’s one of the reasons we got Kubernetes."
-
-— Thijs Ebbers, Infrastructure Architect, ING
-
-
-
-
- Because of industry regulations, applications are only allowed to go through the pipeline, where compliance is enforced, rather than be deployed directly into a container. "We have to run the complete platform of services we need, many routing from different places," says Van der Voort. "We need this Kubernetes framework for deploying the containers, with all those components, monitoring, logging. It’s complex." For that reason, ING has chosen to start on the OpenShift Origin Kubernetes distribution.
- Already, "cloud native technologies are helping our speed, from getting an application to test to acceptance to production," says Van der Voort. "If you walk around ING now, you see all these DevOps teams, doing stand-ups, demoing. They try to get new functionality out there really fast. We held a hackathon for one of our existing components and basically converted it to cloud native within 2.5 days, though of course the tail takes more time before code is fully production ready."
- The pipeline, which has been built on Mesos Marathon, will be migrated onto Kubernetes. Some legacy applications are also being rewritten as cloud native in order to run on the framework. At least two smaller greenfield projects built on Kubernetes will go into production this year. By the end of 2018, the company plans to have converted a number of APIs used in the banking customer experience to cloud native APIs and host these on the Kubernetes-based platform.
-
-
-
-
-
- "We have to run the complete platform of services we need, many routing from different places. We need this Kubernetes framework for deploying the containers, with all those components, monitoring, logging. It’s complex."
-— Onno Van der Voort, Infrastructure Architect, ING
-
-
-
-
-
- The team, however, doesn’t see the bank’s back-end systems going onto the Kubernetes platform. "Our philosophy is it only makes sense to move things to cloud if they are cloud native," says Van der Voort. "If you have traditional architecture, build traditional patterns, it doesn’t hold any value to go to the cloud." Adds Cloud Platform Architect Alfonso Fernandez-Barandiaran: "ING has a strategy about where we will go, in order to improve our agility. So it’s not about how cool this technology is, it’s about finding the right technology and the right approach."
- The Kubernetes framework will be hosting some greenfield projects that are high priority for ING: applications the company is developing in response to PSD2, the European Commission directive requiring more innovative online and mobile payments that went into effect at the beginning of 2018. For example, a bank-account management app called Yolt, serving the U.K. market (and soon France and Italy), was built on a Kubernetes platform and has gone into production. ING is also developing blockchain-enabled applications that will live on the Kubernetes platform. "We’ve been contacted by a lot of development teams that have ideas with what they want to do with containers," says Ebbers.
-
-
-
-
-
-Even with the particular requirements that come in banking, ING has managed to take a lead in technology and innovation. "Every time we have constraints, we look for maybe a better way that we can use this technology."
-— Alfonso Fernandez-Barandiaran, Cloud Platform Architect, ING
-
-
-
- Even with the particular requirements that come in banking, ING has managed to take a lead in technology and innovation. "Every time we have constraints, we look for maybe a better way that we can use this technology," says Fernandez-Barandiaran.
- The results, after all, are worth the effort. "The big cloud native promise to our business is the ability to go from idea to production within 48 hours," says Ebbers. "That would require all these projects to be mature. We are some years away from this, but that’s quite feasible to us."
-
-
-
-
+
+The results, after all, are worth the effort. "The big cloud native promise to our business is the ability to go from idea to production within 48 hours," says Ebbers. "That would require all these projects to be mature. We are some years away from this, but that's quite feasible to us."
diff --git a/content/ko/case-studies/naic/index.html b/content/ko/case-studies/naic/index.html
index 3deb91e4808a0..89ef6cb8ded8e 100644
--- a/content/ko/case-studies/naic/index.html
+++ b/content/ko/case-studies/naic/index.html
@@ -1,113 +1,87 @@
---
title: NAIC Case Study
-
linkTitle: NAIC
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
logo: naic_featured_logo.png
featured: false
+
+new_case_study_styles: true
+heading_background: /images/case-studies/naic/banner1.jpg
+heading_title_logo: /images/naic_logo.png
+subheading: >
+ A Culture and Technology Transition Enabled by Kubernetes
+case_study_details:
+ - Company: NAIC
+ - Location: Washington, DC
+ - Industry: Regulatory
---
-
-
-CASE STUDY:
-A Culture and Technology Transition Enabled by Kubernetes
+
+Challenge
+
+
+The National Association of Insurance Commissioners (NAIC), the U.S. standard-setting and regulatory support organization, was looking for a way to deliver new services faster to provide more value for members and staff. It also needed greater agility to improve productivity internally.
-
+
+Solution
-
- Company National Association of Insurance Commissioners (NAIC) Location Washington, DC Industry Regulatory
-
+
+Beginning in 2016, they started using Cloud Native Computing Foundation (CNCF) tools such as Prometheus. NAIC began hosting internal systems and development systems on Kubernetes at the beginning of 2018, as part of a broad move toward the public cloud. "Our culture and technology transition is a strategy embraced by our top leaders," says Dan Barker, Chief Enterprise Architect. "It has already proven successful by allowing us to accelerate our value pipeline by more than double while decreasing our costs by more than half. We are also seeing customer satisfaction increase as we add more and more applications to these new technologies."
-
-
-
-
-
-Challenge
- The National Association of Insurance Commissioners (NAIC), the U.S. standard-setting and regulatory support organization, was looking for a way to deliver new services faster to provide more value for members and staff. It also needed greater agility to improve productivity internally.
-
-
-Solution
- Beginning in 2016, they started using Cloud Native Computing Foundation (CNCF) tools such as Prometheus. NAIC began hosting internal systems and development systems on Kubernetes at the beginning of 2018, as part of a broad move toward the public cloud. "Our culture and technology transition is a strategy embraced by our top leaders," says Dan Barker, Chief Enterprise Architect. "It has already proven successful by allowing us to accelerate our value pipeline by more than double while decreasing our costs by more than half. We are also seeing customer satisfaction increase as we add more and more applications to these new technologies."
+
+Impact
+
+Leveraging Kubernetes, "our development teams can create rapid prototypes far faster than they used to," Barker says. Applications running on Kubernetes are more resilient than those running in other environments. The deployment of open source solutions is helping influence company culture, as NAIC becomes a more open and transparent organization.
+
"We completed a small prototype in two days that would have previously taken at least a month," Barker says. Resiliency is currently measured in how much downtime systems have. "They've basically had none, and the occasional issue is remedied in minutes," he says.
-
+{{< case-studies/quote author="Dan Barker, Chief Enterprise Architect, NAIC" >}}
+"Our culture and technology transition is a strategy embraced by our top leaders. It has already proven successful by allowing us to accelerate our value pipeline by more than double while decreasing our costs by more than half. We are also seeing customer satisfaction increase as we add more and more applications to these new technologies."
+{{< /case-studies/quote >}}
-
+
+NAIC—which was created and overseen by the chief insurance regulators from the 50 states, the District of Columbia and five U.S. territories—provides a means through which state insurance regulators establish standards and best practices, conduct peer reviews, and coordinate their regulatory oversight. Their staff supports these efforts and represents the collective views of regulators in the United States and internationally. NAIC members, together with the organization's central resources, form the national system of state-based insurance regulation in the United States.
+
+The organization has been using the cloud for years, and wanted to find more ways to quickly deliver new services that provide more value for members and staff. They looked to Kubernetes for a solution. Within NAIC, several groups are leveraging Kubernetes, one being the Platform Engineering Team. "The team building out these tools are not only deploying and operating Kubernetes, but they're also using them," Barker says. "In fact, we're using GitLab to deploy Kubernetes with a pipeline using kops. This team was created from developers, operators, and quality engineers from across the company, so their jobs have changed quite a bit."
-
-Impact
- Leveraging Kubernetes, "our development teams can create rapid prototypes far faster than they used to," Barker said. Applications running on Kubernetes are more resilient than those running in other environments. The deployment of open source solutions is helping influence company culture, as NAIC becomes a more open and transparent organization.
-
- "We completed a small prototype in two days that would have previously taken at least a month," Barker says. Resiliency is currently measured in how much downtime systems have. "They’ve basically had none, and the occasional issue is remedied in minutes," he says.
-
-
-
-
-
-
-
-
- "Our culture and technology transition is a strategy embraced by our top leaders. It has already proven successful by allowing us to accelerate our value pipeline by more than double while decreasing our costs by more than half. We are also seeing customer satisfaction increase as we add more and more applications to these new technologies." - Dan Barker, Chief Enterprise Architect, NAIC
-
-
-
-
- NAIC—which was created and overseen by the chief insurance regulators from the 50 states, the District of Columbia and five U.S. territories—provides a means through which state insurance regulators establish standards and best practices, conduct peer reviews, and coordinate their regulatory oversight. Their staff supports these efforts and represents the collective views of regulators in the United States and internationally. NAIC members, together with the organization’s central resources, form the national system of state-based insurance regulation in the United States.
-The organization has been using the cloud for years, and wanted to find more ways to quickly deliver new services that provide more value for members and staff. They looked to Kubernetes for a solution. Within NAIC, several groups are leveraging Kubernetes, one being the Platform Engineering Team. "The team building out these tools are not only deploying and operating Kubernetes, but they’re also using them," Barker says. "In fact, we’re using GitLab to deploy Kubernetes with a pipeline using kops. This team was created from developers, operators, and quality engineers from across the company, so their jobs have changed quite a bit."
-In addition, NAIC is onboarding teams to the new platform, and those teams have seen a lot of change in how they work and what they can do. "They now have more power in creating their own infrastructure and deploying their own applications," Barker says. They also use pipelines to facilitate their currently manual processes. NAIC has consumers who are using GitLab heavily, and they’re starting to use Kubernetes to deploy simple applications that help their internal processes.
-
-
-
-
-
-
- "In our experience, vendor lock-in and tooling that is highly specific results in less resilient technology with fewer minds working to solve problems and grow the community." - Dan Barker, Chief Enterprise Architect, NAIC
-
-
-
-
- "We needed greater agility to enable our own productivity internally," he says. "We decided it was right for us to move everything to the public cloud [Amazon Web Services] to help with that process and be able to access many of the native tools that allows us to move faster by not needing to build everything."
-The NAIC also wanted to be cloud-agnostic, "and Kubernetes helps with this for our compute layer," Barker says. "Compute is pretty standard across the clouds, and now we can take advantage of any of them while getting all of the other features Kubernetes offers."
-The NAIC currently hosts internal systems and development systems on Kubernetes, and has already seen how impactful it can be. "Our development teams can create rapid prototypes in minutes instead of weeks," Barker says. "This recently happened with an internal tool that had no measurable wait time on the infrastructure. It was solely development bound. There is now a central shared resource that lives in AWS, which means it can grow as needed."
-The native integrations into Kubernetes at NAIC has made it easy to write code and have it running in minutes instead of weeks. Applications running on Kubernetes have also proven to be more resilient than those running in other environments. "We even have teams using this to create more internal tools to help with communication or automating some of their current tasks," Barker says.
-
-"We knew that Kubernetes had become the de facto standard for container orchestration," he says. "Two major factors for selecting this were the three major cloud vendors hosting their own versions and having it hosted in a neutral party as fully open source."
-
-As for other CNCF projects, NAIC is using Prometheus on a small scale and hopes to continue using it moving forward because of the seamless integration with Kubernetes. The Association also is considering gRPC as its internal communications standard, Envoy in conjunction with Istio for service mesh, OpenTracing and Jaeger for tracing aggregation, and Fluentd with its Elasticsearch cluster.
-
-
-
-
-
- "We knew that Kubernetes had become the de facto standard for container orchestration. Two major factors for selecting this were the three major cloud vendors hosting their own versions and having it hosted in a neutral party as fully open source."
- Dan Barker, Chief Enterprise Architect, NAIC
-
-
-
-
-
-
-
-The open governance and broad industry participation in CNCF provided a comfort level with the technology, Barker says. "We also see it as helping to influence our own company culture," he says. "We’re moving to be a more open and transparent company, and we are encouraging our staff to get involved with the different working groups and codebases. We recently became CNCF members to help further our commitment to community contribution and transparency."
-Factors such as vendor-neutrality and cross-industry investment were important in the selection. "In our experience, vendor lock-in and tooling that is highly specific results in less resilient technology with fewer minds working to solve problems and grow the community," Barker says.
-NAIC is a largely Oracle shop, Barker says, and has been running mostly Java on JBoss. "However, we have years of history with other applications," he says. "Some of these have been migrated by completely rewriting the application, while others are just being modified slightly to fit into this new paradigm."
-Running on AWS cloud, the Association has not specifically taken a microservices approach. "We are moving to microservices where practical, but we haven’t found that it’s a necessity to operate them within Kubernetes," Barker says
-All of its databases are currently running within public cloud services, but they have explored eventually running those in Kubernetes, as it makes sense. "We’re doing this to get more reuse from common components and to limit our failure domains to something more manageable and observable," Barker says.
-
-
-
-
-
-
- "We have been able to move much faster at lower cost than we were able to in the past," Barker says. "We were able to complete one of our projects in a year, when the previous version took over two years. And the new project cost $500,000 while the original required $3 million, and with fewer defects. We are also able to push out new features much faster."
- Dan Barker, Chief Enterprise Architect, NAIC
-
-
-
-
-
-NAIC has seen a significant business impact from its efforts. "We have been able to move much faster at lower cost than we were able to in the past," Barker says. "We were able to complete one of our projects in a year, when the previous version took over two years. And the new project cost $500,000 while the original required $3 million, and with fewer defects. We are also able to push out new features much faster."
-He says the organization is moving toward continuous deployment "because the business case makes sense. The research is becoming very hard to argue with. We want to reduce our batch sizes and optimize on delivering value to customers and not feature count. This is requiring a larger cultural shift than just a technology shift."
-NAIC is "becoming more open and transparent, as well as more resilient to failure," Barker says. "Even our customers are wanting more and more of this and trying to figure out how they can work with us to accomplish our mutual goals faster. Members of the insurance industry have reached out so that we can better learn together and grow as an industry."
-
-
-
-
+
+In addition, NAIC is onboarding teams to the new platform, and those teams have seen a lot of change in how they work and what they can do. "They now have more power in creating their own infrastructure and deploying their own applications," Barker says. They also use pipelines to facilitate their currently manual processes. NAIC has consumers who are using GitLab heavily, and they're starting to use Kubernetes to deploy simple applications that help their internal processes.
+
+{{< case-studies/quote
+ image="/images/case-studies/naic/banner3.jpg"
+ author="Dan Barker, Chief Enterprise Architect, NAIC"
+>}}
+"In our experience, vendor lock-in and tooling that is highly specific results in less resilient technology with fewer minds working to solve problems and grow the community."
+{{< /case-studies/quote >}}
+
+
"We needed greater agility to enable our own productivity internally," he says. "We decided it was right for us to move everything to the public cloud [Amazon Web Services] to help with that process and be able to access many of the native tools that allows us to move faster by not needing to build everything."
+The NAIC also wanted to be cloud-agnostic, "and Kubernetes helps with this for our compute layer," Barker says. "Compute is pretty standard across the clouds, and now we can take advantage of any of them while getting all of the other features Kubernetes offers."
+
+
+The NAIC currently hosts internal systems and development systems on Kubernetes, and has already seen how impactful it can be. "Our development teams can create rapid prototypes in minutes instead of weeks," Barker says. "This recently happened with an internal tool that had no measurable wait time on the infrastructure. It was solely development bound. There is now a central shared resource that lives in AWS, which means it can grow as needed."
+
+
+The native integrations into Kubernetes at NAIC have made it easy to write code and have it running in minutes instead of weeks. Applications running on Kubernetes have also proven to be more resilient than those running in other environments. "We even have teams using this to create more internal tools to help with communication or automating some of their current tasks," Barker says.
+
+
"We knew that Kubernetes had become the de facto standard for container orchestration," he says. "Two major factors for selecting this were the three major cloud vendors hosting their own versions and having it hosted in a neutral party as fully open source."
+
+
+As for other CNCF projects, NAIC is using Prometheus on a small scale and hopes to continue using it moving forward because of the seamless integration with Kubernetes. The Association also is considering gRPC as its internal communications standard, Envoy in conjunction with Istio for service mesh, OpenTracing and Jaeger for tracing aggregation, and Fluentd with its Elasticsearch cluster.
+
+{{< case-studies/quote
+ image="/images/case-studies/naic/banner4.jpg"
+ author="Dan Barker, Chief Enterprise Architect, NAIC"
+>}}
+"We knew that Kubernetes had become the de facto standard for container orchestration. Two major factors for selecting this were the three major cloud vendors hosting their own versions and having it hosted in a neutral party as fully open source."
+{{< /case-studies/quote >}}
+
+
+The open governance and broad industry participation in CNCF provided a comfort level with the technology, Barker says. "We also see it as helping to influence our own company culture," he says. "We're moving to be a more open and transparent company, and we are encouraging our staff to get involved with the different working groups and codebases. We recently became CNCF members to help further our commitment to community contribution and transparency."
+
+
+Factors such as vendor-neutrality and cross-industry investment were important in the selection. "In our experience, vendor lock-in and tooling that is highly specific results in less resilient technology with fewer minds working to solve problems and grow the community," Barker says.
+
+
+NAIC is a largely Oracle shop, Barker says, and has been running mostly Java on JBoss. "However, we have years of history with other applications," he says. "Some of these have been migrated by completely rewriting the application, while others are just being modified slightly to fit into this new paradigm."
+
+
+Running on AWS cloud, the Association has not specifically taken a microservices approach. "We are moving to microservices where practical, but we haven't found that it's a necessity to operate them within Kubernetes," Barker says.
+
+
+All of its databases are currently running within public cloud services, but they have explored eventually running those in Kubernetes, as it makes sense. "We're doing this to get more reuse from common components and to limit our failure domains to something more manageable and observable," Barker says.
+
+{{< case-studies/quote author="Dan Barker, Chief Enterprise Architect, NAIC" >}}
+"We have been able to move much faster at lower cost than we were able to in the past," Barker says. "We were able to complete one of our projects in a year, when the previous version took over two years. And the new project cost $500,000 while the original required $3 million, and with fewer defects. We are also able to push out new features much faster."
+{{< /case-studies/quote >}}
+
+
+NAIC has seen a significant business impact from its efforts. "We have been able to move much faster at lower cost than we were able to in the past," Barker says. "We were able to complete one of our projects in a year, when the previous version took over two years. And the new project cost $500,000 while the original required $3 million, and with fewer defects. We are also able to push out new features much faster."
+
+
+He says the organization is moving toward continuous deployment "because the business case makes sense. The research is becoming very hard to argue with. We want to reduce our batch sizes and optimize on delivering value to customers and not feature count. This is requiring a larger cultural shift than just a technology shift."
+
+
NAIC is "becoming more open and transparent, as well as more resilient to failure," Barker says. "Even our customers are wanting more and more of this and trying to figure out how they can work with us to accomplish our mutual goals faster. Members of the insurance industry have reached out so that we can better learn together and grow as an industry."
diff --git a/content/ko/case-studies/newyorktimes/index.html b/content/ko/case-studies/newyorktimes/index.html
index 53dbd06a55085..ae57a9ec7d7a8 100644
--- a/content/ko/case-studies/newyorktimes/index.html
+++ b/content/ko/case-studies/newyorktimes/index.html
@@ -2,107 +2,73 @@
title: New York Times Case Study
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/newyorktimes/banner1.jpg
+heading_title_logo: /images/newyorktimes_logo.png
+subheading: >
+ The New York Times: From Print to the Web to Cloud Native
+case_study_details:
+ - Company: New York Times
+ - Location: New York, N.Y.
+ - Industry: News Media
---
-
-
CASE STUDY:
The New York Times: From Print to the Web to Cloud Native
-
-
-
-
-
-
- Company New York Times Location New York, N.Y.
- Industry News Media
-
-
-
-
-
-
-
Challenge
- When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. "We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center," says Deep Kapadia, Executive Director, Engineering at The New York Times. Kapadia was tapped to lead a Delivery Engineering Team that would "design for the abstractions that cloud providers offer us."
-
-
-
Solution
- The team decided to use Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
-
-
-
-
-
-
Impact
- Speed of delivery increased. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was "just a few seconds to a couple of minutes," says Engineering Manager Brian Balser. Adds Li: "Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary." Adopting Cloud Native Computing Foundation technologies allows for a more unified approach to deployment across the engineering staff, and portability for the company.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
Challenge
+
+
When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. "We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center," says Deep Kapadia, Executive Director, Engineering at The New York Times. Kapadia was tapped to lead a Delivery Engineering Team that would "design for the abstractions that cloud providers offer us."
+
+
Solution
+
+
The team decided to use Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
+
+
Impact
+
+
Speed of delivery increased. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was "just a few seconds to a couple of minutes," says Engineering Manager Brian Balser. Adds Li: "Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary." Adopting Cloud Native Computing Foundation technologies allows for a more unified approach to deployment across the engineering staff, and portability for the company.
+
+{{< case-studies/quote author="Deep Kapadia, Executive Director, Engineering at The New York Times">}}
+
+
- "I think once you get over the initial hump, things get a lot easier and actually a lot faster." — Deep Kapadia, Executive Director, Engineering at The New York Times
-
-
-
-
- Founded in 1851 and known as the newspaper of record, The New York Times is a digital pioneer: Its first website launched in 1996, before Google even existed. After the company decided a few years ago to move out of its private data centers—including one located in the pricy real estate of Manhattan. It recently took another step into the future by going cloud native.
- At first, the infrastructure team "managed the virtual machines in the Amazon cloud, and they deployed more critical applications in our data centers and the less critical ones on AWS as an experiment," says Deep Kapadia, Executive Director, Engineering at The New York Times. "We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center."
- To get the most out of the cloud, Kapadia was tapped to lead a new Delivery Engineering Team that would "design for the abstractions that cloud providers offer us." In mid-2016, they began looking at the Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
- At the time, says team member Tony Li, a Site Reliability Engineer, "We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?"
- In early 2017, the first production application—the nytimes.com mobile homepage—began running on Kubernetes, serving just 1% of the traffic. Today, almost 100% of the nytimes.com site’s end-user facing applications run on GCP, with the majority on Kubernetes.
-
-
-
-
-
- "We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?"
-
-
-
-
-
- The team found that the speed of delivery was immediately impacted. "Deploying Docker images versus spinning up VMs was quite a lot faster," says Engineering Manager Brian Balser. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was "just a few seconds to a couple of minutes."
- The plan is to get as much as possible, not just the website, running on Kubernetes, and beyond that, moving toward serverless deployments. For instance, The New York Times crossword app was built on Google App Engine, which has been the main platform for the company’s experimentation with serverless. "The hardest part was getting the engineers over the hurdle of how little they had to do," Chief Technology Officer Nick Rockwell recently told The CTO Advisor. "Our experience has been very, very good. We have invested a lot of work into deploying apps on container services, and I’m really excited about experimenting with deploying those on App Engine Flex and AWS Fargate and seeing how that feels, because that’s a great migration path."
- There are some exceptions to the move to cloud native, of course. "We have the print publishing business as well," says Kapadia. "A lot of that is definitely not going down the cloud-native path because they’re using vendor software and even special machinery that prints the physical paper. But even those teams are looking at things like App Engine and Kubernetes if they can."
- Kapadia acknowledges that there was a steep learning curve for some engineers, but "I think once you get over the initial hump, things get a lot easier and actually a lot faster."
-
-
-
-
-
- "Right now, every team is running a small Kubernetes cluster, but it would be nice if we could all live in a larger ecosystem," says Kapadia. "Then we can harness the power of things like service mesh proxies that can actually do a lot of instrumentation between microservices, or service-to-service orchestration. Those are the new things that we want to experiment with as we go forward."
-
-
-
-
-
- At The New York Times, they did. As teams started sharing their own best practices with each other, "We’re no longer the bottleneck for figuring out certain things," Kapadia says. "Most of the infrastructure and systems were managed by a centralized function. We’ve sort of blown that up, partly because Google and Amazon have tools that allow us to do that. We provide teams with complete ownership of their Google Cloud Platform projects, and give them a set of sensible defaults or standards. We let them know, ‘If this works for you as is, great! If not, come talk to us and we’ll figure out how to make it work for you.’"
- As a result, "It’s really allowed teams to move at a much more rapid pace than they were able to in the past," says Kapadia. Adds Li: "The use of GKE means each team can get their own compute cluster, reducing the number of individual instances they have to care about since developers can treat the cluster as a whole. Because the ticket-based workflow was removed from requesting resources and connections, developers can just call an API to get what they want. Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary."
- Another benefit to adopting Kubernetes: allowing for a more unified approach to deployment across the engineering staff. "Before, many teams were building their own tools for deployment," says Balser. With Kubernetes—as well as the other CNCF projects The New York Times uses, including Fluentd to collect logs for all of its AWS servers, gRPC for its Publishing Pipeline, Prometheus, and Envoy—"we can benefit from the advances that each of these technologies make, instead of trying to catch up."
-
-
-
-
-
-
-Li calls the Cloud Native Computing Foundation’s projects "a northern star that we can all look at and follow."
-
-
-
-
- These open-source technologies have given the company more portability. "CNCF has enabled us to follow an industry standard," says Kapadia. "It allows us to think about whether we want to move away from our current service providers. Most of our applications are connected to Fluentd. If we wish to switch our logging provider from provider A to provider B we can do that. We’re running Kubernetes in GCP today, but if we want to run it in Amazon or Azure, we could potentially look into that as well."
- Li calls the Cloud Native Computing Foundation’s projects "a northern star that we can all look at and follow." Led by that star, the team is looking ahead to a year of onboarding the remaining half of the 40 or so product engineering teams to extract even more value out of the technology. "Right now, every team is running a small Kubernetes cluster, but it would be nice if we could all live in a larger ecosystem," says Kapadia. "Then we can harness the power of things like service mesh proxies that can actually do a lot of instrumentation between microservices, or service-to-service orchestration. Those are the new things that we want to experiment with as we go forward."
-
-
-
+"I think once you get over the initial hump, things get a lot easier and actually a lot faster."
+{{< /case-studies/quote >}}
+
+
Founded in 1851 and known as the newspaper of record, The New York Times is a digital pioneer: Its first website launched in 1996, before Google even existed. After the company decided a few years ago to move out of its private data centers, including one located in the pricy real estate of Manhattan, it recently took another step into the future by going cloud native.
+
+
At first, the infrastructure team "managed the virtual machines in the Amazon cloud, and they deployed more critical applications in our data centers and the less critical ones on AWS as an experiment," says Deep Kapadia, Executive Director, Engineering at The New York Times. "We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center."
+
+
To get the most out of the cloud, Kapadia was tapped to lead a new Delivery Engineering Team that would "design for the abstractions that cloud providers offer us." In mid-2016, they began looking at the Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
+
+
At the time, says team member Tony Li, a Site Reliability Engineer, "We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?"
+
+
In early 2017, the first production application—the nytimes.com mobile homepage—began running on Kubernetes, serving just 1% of the traffic. Today, almost 100% of the nytimes.com site's end-user facing applications run on GCP, with the majority on Kubernetes.
+
+{{< case-studies/quote image="/images/case-studies/newyorktimes/banner3.jpg" >}}
+"We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?"
+{{< /case-studies/quote >}}
+
+
The team found that the speed of delivery was immediately impacted. "Deploying Docker images versus spinning up VMs was quite a lot faster," says Engineering Manager Brian Balser. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was "just a few seconds to a couple of minutes."
+
+
The plan is to get as much as possible, not just the website, running on Kubernetes, and beyond that, moving toward serverless deployments. For instance, The New York Times crossword app was built on Google App Engine, which has been the main platform for the company's experimentation with serverless. "The hardest part was getting the engineers over the hurdle of how little they had to do," Chief Technology Officer Nick Rockwell recently told The CTO Advisor. "Our experience has been very, very good. We have invested a lot of work into deploying apps on container services, and I'm really excited about experimenting with deploying those on App Engine Flex and AWS Fargate and seeing how that feels, because that's a great migration path."
+
+
There are some exceptions to the move to cloud native, of course. "We have the print publishing business as well," says Kapadia. "A lot of that is definitely not going down the cloud-native path because they're using vendor software and even special machinery that prints the physical paper. But even those teams are looking at things like App Engine and Kubernetes if they can."
+
+
Kapadia acknowledges that there was a steep learning curve for some engineers, but "I think once you get over the initial hump, things get a lot easier and actually a lot faster."
+
+{{< case-studies/quote image="/images/case-studies/newyorktimes/banner4.jpg" >}}
+"Right now, every team is running a small Kubernetes cluster, but it would be nice if we could all live in a larger ecosystem," says Kapadia. "Then we can harness the power of things like service mesh proxies that can actually do a lot of instrumentation between microservices, or service-to-service orchestration. Those are the new things that we want to experiment with as we go forward."
+{{< /case-studies/quote >}}
+
+
At The New York Times, they did. As teams started sharing their own best practices with each other, "We're no longer the bottleneck for figuring out certain things," Kapadia says. "Most of the infrastructure and systems were managed by a centralized function. We've sort of blown that up, partly because Google and Amazon have tools that allow us to do that. We provide teams with complete ownership of their Google Cloud Platform projects, and give them a set of sensible defaults or standards. We let them know, 'If this works for you as is, great! If not, come talk to us and we'll figure out how to make it work for you.'"
+
+
As a result, "It's really allowed teams to move at a much more rapid pace than they were able to in the past," says Kapadia. Adds Li: "The use of GKE means each team can get their own compute cluster, reducing the number of individual instances they have to care about since developers can treat the cluster as a whole. Because the ticket-based workflow was removed from requesting resources and connections, developers can just call an API to get what they want. Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary."
+
+
Another benefit to adopting Kubernetes: allowing for a more unified approach to deployment across the engineering staff. "Before, many teams were building their own tools for deployment," says Balser. With Kubernetes—as well as the other CNCF projects The New York Times uses, including Fluentd to collect logs for all of its AWS servers, gRPC for its Publishing Pipeline, Prometheus, and Envoy—"we can benefit from the advances that each of these technologies make, instead of trying to catch up."
+
+{{< case-studies/quote >}}
+Li calls the Cloud Native Computing Foundation's projects "a northern star that we can all look at and follow."
+{{< /case-studies/quote >}}
+
+
These open-source technologies have given the company more portability. "CNCF has enabled us to follow an industry standard," says Kapadia. "It allows us to think about whether we want to move away from our current service providers. Most of our applications are connected to Fluentd. If we wish to switch our logging provider from provider A to provider B we can do that. We're running Kubernetes in GCP today, but if we want to run it in Amazon or Azure, we could potentially look into that as well."
+
+
Li calls the Cloud Native Computing Foundation's projects "a northern star that we can all look at and follow." Led by that star, the team is looking ahead to a year of onboarding the remaining half of the 40 or so product engineering teams to extract even more value out of the technology. "Right now, every team is running a small Kubernetes cluster, but it would be nice if we could all live in a larger ecosystem," says Kapadia. "Then we can harness the power of things like service mesh proxies that can actually do a lot of instrumentation between microservices, or service-to-service orchestration. Those are the new things that we want to experiment with as we go forward."
diff --git a/content/ko/case-studies/nordstrom/index.html b/content/ko/case-studies/nordstrom/index.html
index 788453de35a06..73bc4e147e055 100644
--- a/content/ko/case-studies/nordstrom/index.html
+++ b/content/ko/case-studies/nordstrom/index.html
@@ -2,109 +2,74 @@
title: Nordstrom Case Study
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/nordstrom/banner1.jpg
+heading_title_logo: /images/nordstrom_logo.png
+subheading: >
+ Finding Millions in Potential Savings in a Tough Retail Climate
+case_study_details:
+ - Company: Nordstrom
+ - Location: Seattle, Washington
+ - Industry: Retail
---
-
-
CASE STUDY:
Finding Millions in Potential Savings in a Tough Retail Climate
-
-
-
-
-
-
-
- Company Nordstrom Location Seattle, Washington Industry Retail
-
-
-
-
-
-
-
Challenge
- Nordstrom wanted to increase the efficiency and speed of its technology operations, which includes the Nordstrom.com e-commerce site. At the same time, Nordstrom Technology was looking for ways to tighten its technology operational costs.
-
-
-
Solution
- After embracing a DevOps transformation and launching a continuous integration/continuous deployment (CI/CD) project four years ago, the company reduced its deployment time from three months to 30 minutes. But they wanted to go even faster across environments, so they began their cloud native journey, adopting Docker containers orchestrated with Kubernetes.
-
-
-
-
-
-
-
-
-
Impact
- Nordstrom Technology developers using Kubernetes now deploy faster and can "just focus on writing applications," says Dhawal Patel, a senior engineer on the team building a Kubernetes enterprise platform for Nordstrom. Furthermore, the team has increased Ops efficiency, improving CPU utilization from 5x to 12x depending on the workload. "We run thousands of virtual machines (VMs), but aren’t effectively using all those resources," says Patel. "With Kubernetes, without even trying to make our cluster efficient, we are currently at a 10x increase."
-
-
-
-
-
-
- "We are always looking for ways to optimize and provide more value through technology. With Kubernetes we are showcasing two types of efficiency that we can bring: Dev efficiency and Ops efficiency. It’s a win-win."
- -— Dhawal Patel, senior engineer at Nordstrom
-
-
-
-
- When Dhawal Patel joined Nordstrom five years ago as an application developer for the retailer’s website, he realized there was an opportunity to help speed up development cycles.
-
- In those early DevOps days, Nordstrom Technology still followed a traditional model of silo teams and functions. "As a developer, I was spending more time fixing environments than writing code and adding value to business," Patel says. "I was passionate about that—so I was given the opportunity to help fix it."
-
- The company was eager to move faster, too, and in 2013 launched the first continuous integration/continuous deployment (CI/CD) project. That project was the first step in Nordstrom’s cloud native journey.
-
- Dev and Ops team members built a CI/CD pipeline, working with the company’s servers on premise. The team chose Chef, and wrote cookbooks that automated virtual IP creation, servers, and load balancing. "After we completed the project, deployment went from three months to 30 minutes," says Patel. "We still had multiple environments—dev, test, staging, then production—so with each environment running the Chef cookbooks, it took 30 minutes. It was a huge achievement at that point."
-
But new environments still took too long to turn up, so the next step was working in the cloud. Today, Nordstrom Technology has built an enterprise platform that allows the company’s 1,500 developers to deploy applications running as Docker containers in the cloud, orchestrated with Kubernetes.
-
-
-
-
-
- "We made a bet that Kubernetes was going to take off, informed by early indicators of community support and project velocity, so we rebuilt our system with Kubernetes at the core,"
-
-
-
-
-
-"The cloud provided faster access to resources, because it took weeks for us to get a virtual machine (VM) on premises," says Patel. "But now we can do the same thing in only five minutes."
-
-Nordstrom’s first foray into scheduling containers on a cluster was a homegrown system based on CoreOS fleet. They began doing a few proofs of concept projects with that system until Kubernetes 1.0 was released when they made the switch. "We made a bet that Kubernetes was going to take off, informed by early indicators of community support and project velocity, so we rebuilt our system with Kubernetes at the core," says Marius Grigoriu, Sr. Manager of the Kubernetes team at Nordstrom.
-While Kubernetes is often thought as a platform for microservices, the first application to launch on Kubernetes in a critical production role at Nordstrom was Jira. "It was not the ideal microservice we were hoping to get as our first application," Patel admits, "but the team that was working on it was really passionate about Docker and Kubernetes, and they wanted to try it out. They had their application running on premises, and wanted to move it to Kubernetes."
-
-The benefits were immediate for the teams that came on board. "Teams running on our Kubernetes cluster loved the fact that they had fewer issues to worry about. They didn’t need to manage infrastructure or operating systems," says Grigoriu. "Early adopters loved the declarative nature of Kubernetes. They loved the reduced surface area they had to deal with."
-
-
-
-
-
- "Teams running on our Kubernetes cluster loved the fact that they had fewer issues to worry about. They didn’t need to manage infrastructure or operating systems," says Grigoriu. "Early adopters loved the declarative nature of Kubernetes. They loved the reduced surface area they had to deal with."
-
-
-
-
-
- To support these early adopters, Patel’s team began growing the cluster and building production-grade services. "We integrated with Prometheus for monitoring, with a Grafana front end; we used Fluentd to push logs to Elasticsearch, so that gives us log aggregation," says Patel. The team also added dozens of open-source components, including CNCF projects and has made contributions to Kubernetes, Terraform, and kube2iam.
-
-There are now more than 60 development teams running Kubernetes in Nordstrom Technology, and as success stories have popped up, more teams have gotten on board. "Our initial customer base, the ones who were willing to try this out, are now going and evangelizing to the next set of users," says Patel. "One early adopter had Docker containers and he was not sure how to run it in production. We sat with him and within 15 minutes we deployed it in production. He thought it was amazing, and more people in his org started coming in."
-
-For Nordstrom Technology, going cloud-native has vastly improved development and operational efficiency. The developers using Kubernetes now deploy faster and can focus on building value in their applications. One such team started with a 25-minute merge to deploy by launching virtual machines in the cloud. Switching to Kubernetes was a 5x speedup in their process, improving their merge to deploy time to 5 minutes.
-
-
-
-
- "With Kubernetes, without even trying to make our cluster efficient, we are currently at 40 percent CPU utilization—a 10x increase. we are running 2600+ customer pods that would have been 2600+ VMs if they had gone directly to the cloud. We are running them on 40 VMs now, so that’s a huge reduction in operational overhead."
-
-
-
-
- Speed is great, and easily demonstrated, but perhaps the bigger impact lies in the operational efficiency. "We run thousands of VMs on AWS, and their overall average CPU utilization is about four percent," says Patel. "With Kubernetes, without even trying to make our cluster efficient, we are currently at 40 percent CPU utilization—a 10x increase. We are running 2600+ customer pods that would have been 2600+ VMs if they had gone directly to the cloud. We are running them on 40 VMs now, so that’s a huge reduction in operational overhead."
-
- Nordstrom Technology is also exploring running Kubernetes on bare metal on premises. "If we can build an on-premises Kubernetes cluster," says Patel, "we could bring the power of cloud to provision resources fast on-premises. Then for the developer, their interface is Kubernetes; they might not even realize or care that their services are now deployed on premises because they’re only working with Kubernetes."
- For that reason, Patel is eagerly following Kubernetes’ development of multi-cluster capabilities. "With cluster federation, we can have our on-premise as the primary cluster and the cloud as a secondary burstable cluster," he says. "So, when there is an anniversary sale or Black Friday sale, and we need more containers - we can go to the cloud."
-
- That kind of possibility—as well as the impact that Grigoriu and Patel’s team has already delivered using Kubernetes—is what led Nordstrom on its cloud native journey in the first place. "The way the retail environment is today, we are trying to build responsiveness and flexibility where we can," says Grigoriu. "Kubernetes makes it easy to: bring efficiency to both the Dev and Ops side of the equation. It’s a win-win."
-
-
-
+
Challenge
+
+
Nordstrom wanted to increase the efficiency and speed of its technology operations, which includes the Nordstrom.com e-commerce site. At the same time, Nordstrom Technology was looking for ways to tighten its technology operational costs.
+
+
Solution
+
+
After embracing a DevOps transformation and launching a continuous integration/continuous deployment (CI/CD) project four years ago, the company reduced its deployment time from three months to 30 minutes. But they wanted to go even faster across environments, so they began their cloud native journey, adopting Docker containers orchestrated with Kubernetes.
+
+
Impact
+
+
Nordstrom Technology developers using Kubernetes now deploy faster and can "just focus on writing applications," says Dhawal Patel, a senior engineer on the team building a Kubernetes enterprise platform for Nordstrom. Furthermore, the team has increased Ops efficiency, improving CPU utilization from 5x to 12x depending on the workload. "We run thousands of virtual machines (VMs), but aren't effectively using all those resources," says Patel. "With Kubernetes, without even trying to make our cluster efficient, we are currently at a 10x increase."
+
+{{< case-studies/quote author="Dhawal Patel, senior engineer at Nordstrom" >}}
+"We are always looking for ways to optimize and provide more value through technology. With Kubernetes we are showcasing two types of efficiency that we can bring: Dev efficiency and Ops efficiency. It's a win-win."
+{{< /case-studies/quote >}}
+
+
When Dhawal Patel joined Nordstrom five years ago as an application developer for the retailer's website, he realized there was an opportunity to help speed up development cycles.
+
+
In those early DevOps days, Nordstrom Technology still followed a traditional model of silo teams and functions. "As a developer, I was spending more time fixing environments than writing code and adding value to business," Patel says. "I was passionate about that—so I was given the opportunity to help fix it."
+
+
The company was eager to move faster, too, and in 2013 launched the first continuous integration/continuous deployment (CI/CD) project. That project was the first step in Nordstrom's cloud native journey.
+
+
Dev and Ops team members built a CI/CD pipeline, working with the company's servers on premises. The team chose Chef and wrote cookbooks that automated virtual IP creation, servers, and load balancing. "After we completed the project, deployment went from three months to 30 minutes," says Patel. "We still had multiple environments—dev, test, staging, then production—so with each environment running the Chef cookbooks, it took 30 minutes. It was a huge achievement at that point."
+
+
But new environments still took too long to turn up, so the next step was working in the cloud. Today, Nordstrom Technology has built an enterprise platform that allows the company's 1,500 developers to deploy applications running as Docker containers in the cloud, orchestrated with Kubernetes.
+
+{{< case-studies/quote image="/images/case-studies/nordstrom/banner3.jpg" >}}
+"We made a bet that Kubernetes was going to take off, informed by early indicators of community support and project velocity, so we rebuilt our system with Kubernetes at the core,"
+{{< /case-studies/quote >}}
+
+
"The cloud provided faster access to resources, because it took weeks for us to get a virtual machine (VM) on premises," says Patel. "But now we can do the same thing in only five minutes."
+
+
Nordstrom's first foray into scheduling containers on a cluster was a homegrown system based on CoreOS fleet. They began doing a few proof-of-concept projects with that system until Kubernetes 1.0 was released, when they made the switch. "We made a bet that Kubernetes was going to take off, informed by early indicators of community support and project velocity, so we rebuilt our system with Kubernetes at the core," says Marius Grigoriu, Sr. Manager of the Kubernetes team at Nordstrom.
+
+
While Kubernetes is often thought of as a platform for microservices, the first application to launch on Kubernetes in a critical production role at Nordstrom was Jira. "It was not the ideal microservice we were hoping to get as our first application," Patel admits, "but the team that was working on it was really passionate about Docker and Kubernetes, and they wanted to try it out. They had their application running on premises, and wanted to move it to Kubernetes."
+
+
The benefits were immediate for the teams that came on board. "Teams running on our Kubernetes cluster loved the fact that they had fewer issues to worry about. They didn't need to manage infrastructure or operating systems," says Grigoriu. "Early adopters loved the declarative nature of Kubernetes. They loved the reduced surface area they had to deal with."
+
+{{< case-studies/quote image="/images/case-studies/nordstrom/banner4.jpg" >}}
+"Teams running on our Kubernetes cluster loved the fact that they had fewer issues to worry about. They didn't need to manage infrastructure or operating systems," says Grigoriu. "Early adopters loved the declarative nature of Kubernetes. They loved the reduced surface area they had to deal with."
+{{< /case-studies/quote >}}
+
+
To support these early adopters, Patel's team began growing the cluster and building production-grade services. "We integrated with Prometheus for monitoring, with a Grafana front end; we used Fluentd to push logs to Elasticsearch, so that gives us log aggregation," says Patel. The team also added dozens of open-source components, including CNCF projects, and has made contributions to Kubernetes, Terraform, and kube2iam.
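In practice, "integrated with Prometheus" usually means a scrape configuration that discovers workloads through the Kubernetes API. The sketch below is illustrative only, not Nordstrom's actual setup; the job name and the opt-in annotation are the community's conventional defaults, assumed here for the example.

```yaml
# Illustrative only, not Nordstrom's actual configuration: a Prometheus
# scrape job that discovers pods via the Kubernetes API and keeps only
# the pods that opt in with the conventional annotation.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod    # enumerate every pod in the cluster
    relabel_configs:
      # Keep a target only if the pod is annotated prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

Grafana then reads Prometheus as a data source, while Fluentd typically runs as a DaemonSet on each node to ship container logs to Elasticsearch.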
+
+
There are now more than 60 development teams running Kubernetes in Nordstrom Technology, and as success stories have popped up, more teams have gotten on board. "Our initial customer base, the ones who were willing to try this out, are now going and evangelizing to the next set of users," says Patel. "One early adopter had Docker containers and he was not sure how to run it in production. We sat with him and within 15 minutes we deployed it in production. He thought it was amazing, and more people in his org started coming in."
+
+
For Nordstrom Technology, going cloud-native has vastly improved development and operational efficiency. The developers using Kubernetes now deploy faster and can focus on building value in their applications. One such team started with a 25-minute merge-to-deploy process that worked by launching virtual machines in the cloud. Switching to Kubernetes gave them a 5x speedup, improving their merge-to-deploy time to 5 minutes.
+
+{{< case-studies/quote >}}
+"With Kubernetes, without even trying to make our cluster efficient, we are currently at 40 percent CPU utilization—a 10x increase. we are running 2600+ customer pods that would have been 2600+ VMs if they had gone directly to the cloud. We are running them on 40 VMs now, so that's a huge reduction in operational overhead."
+{{< /case-studies/quote >}}
+
+
Speed is great, and easily demonstrated, but perhaps the bigger impact lies in the operational efficiency. "We run thousands of VMs on AWS, and their overall average CPU utilization is about four percent," says Patel. "With Kubernetes, without even trying to make our cluster efficient, we are currently at 40 percent CPU utilization—a 10x increase. We are running 2600+ customer pods that would have been 2600+ VMs if they had gone directly to the cloud. We are running them on 40 VMs now, so that's a huge reduction in operational overhead."
+
+
Nordstrom Technology is also exploring running Kubernetes on bare metal on premises. "If we can build an on-premises Kubernetes cluster," says Patel, "we could bring the power of cloud to provision resources fast on-premises. Then for the developer, their interface is Kubernetes; they might not even realize or care that their services are now deployed on premises because they're only working with Kubernetes."
+
+
For that reason, Patel is eagerly following Kubernetes' development of multi-cluster capabilities. "With cluster federation, we can have our on-premise as the primary cluster and the cloud as a secondary burstable cluster," he says. "So, when there is an anniversary sale or Black Friday sale and we need more containers, we can go to the cloud."
+
+
That kind of possibility—as well as the impact that Grigoriu and Patel's team has already delivered using Kubernetes—is what led Nordstrom on its cloud native journey in the first place. "The way the retail environment is today, we are trying to build responsiveness and flexibility where we can," says Grigoriu. "Kubernetes makes it easy to bring efficiency to both the Dev and Ops side of the equation. It's a win-win."
Challenge
+
+
In the spring of 2015, Northwestern Mutual acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual's leading products and services and meld it with LearnVest's digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company's existing infrastructure had been optimized for batch workflows hosted on on-prem networks; deployments were very traditional, focused on following a process instead of providing deployment agility. "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website so our end-customers have the experience they expect," says Williams.
-
+
Solution
-
+
The platform team came up with a plan for using the public cloud (AWS), Docker containers, and Kubernetes for orchestration. "Kubernetes gave us that base framework so teams can be very autonomous in what they're building and deliver very quickly and frequently," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. The team also built and open-sourced Kanali, a Kubernetes-native API management tool that uses OpenTracing, Jaeger, and gRPC.
-
- Company Northwestern Mutual Location Milwaukee, WI Industry Insurance and Financial Services
-
+
Impact
-
-
-
-
-
Challenge
- In the spring of 2015, Northwestern Mutual acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual’s leading products and services and meld it with LearnVest’s digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company’s existing infrastructure had been optimized for batch workflows hosted on on-prem networks; deployments were very traditional, focused on following a process instead of providing deployment agility. "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website so our end-customers have the experience they expect," says Williams.
-
-
Solution
- The platform team came up with a plan for using the public cloud (AWS), Docker containers, and Kubernetes for orchestration. "Kubernetes gave us that base framework so teams can be very autonomous in what they’re building and deliver very quickly and frequently," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. The team also built and open-sourced Kanali, a Kubernetes-native API management tool that uses OpenTracing, Jaeger, and gRPC.
+
Before, infrastructure deployments could take weeks; now they are done in a matter of minutes. The number of deployments has increased dramatically, from about 24 a year to over 500 in just the first 10 months of 2017. Availability has also increased: There used to be a six-hour control window for commits every Sunday morning, as well as other periods of general maintenance, during which outages could happen. "Now we have eliminated the planned outage windows," says Bryan Pfremmer, App Platform Teams Manager, Northwestern Mutual. Kanali has had an impact on the bottom line. The vendor API management product that the company previously used required 23 servers, "dedicated, to only API management," says Pfremmer. "Now it's all integrated in the existing stack and running as another deployment on Kubernetes. And that's just one environment. Between the three that we had plus the test, that's hard dollar savings."
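To make "running as another deployment on Kubernetes" concrete, replacing dedicated servers amounts to a manifest of roughly the shape sketched below. Everything in it (name, image, port, replica count) is a placeholder for illustration, not Kanali's published configuration.

```yaml
# Placeholder sketch of "another deployment on Kubernetes"; the name,
# image, port, and replica count are illustrative, not Kanali's values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-management            # hypothetical name
spec:
  replicas: 2                     # two pods instead of 23 dedicated servers
  selector:
    matchLabels:
      app: api-management
  template:
    metadata:
      labels:
        app: api-management
    spec:
      containers:
        - name: proxy
          image: example.com/api-management:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Unlike the 23 dedicated servers, such a deployment consumes only what its pods request from the shared cluster, which is where the "hard dollar savings" come from.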
+{{< case-studies/quote author="Frank Greco Jr., Cloud Native Engineer at Northwestern Mutual">}}
+"In a large enterprise, you're going to have people using Kubernetes, but then you're also going to have people using WAS and .NET. You may not be at a point where your whole stack can be cloud native. What if you can take your API management tool and make it cloud native, but still proxy to legacy systems? Using different pieces that are cloud native, open source and Kubernetes native, you can do pretty innovative stuff."
+{{< /case-studies/quote >}}
-
+{{< case-studies/lead >}}
+For more than 160 years, Northwestern Mutual has maintained its industry leadership in part by keeping a strong focus on risk management.
+{{< /case-studies/lead >}}
-
+
For many years, the company took a similar approach to managing its technology and has recently undergone a digital transformation to advance the company's digital strategy, including making a lot of noise in the cloud-native world.
-
Impact
- Before, infrastructure deployments could take weeks; now, it is done in a matter of minutes. The number of deployments has increased dramatically, from about 24 a year to over 500 in just the first 10 months of 2017. Availability has also increased: There used to be a six-hour control window for commits every Sunday morning, as well as other periods of general maintenance, during which outages could happen. "Now we have eliminated the planned outage windows," says Bryan Pfremmer, App Platform Teams Manager, Northwestern Mutual. Kanali has had an impact on the bottom line. The vendor API management product that the company previously used required 23 servers, "dedicated, to only API management," says Pfremmer. "Now it’s all integrated in the existing stack and running as another deployment on Kubernetes. And that’s just one environment. Between the three that we had plus the test, that’s hard dollar savings."
-
-
-
-
-
-
-
-"In a large enterprise, you’re going to have people using Kubernetes, but then you’re also going to have people using WAS and .NET. You may not be at a point where your whole stack can be cloud native. What if you can take your API management tool and make it cloud native, but still proxy to legacy systems? Using different pieces that are cloud native, open source and Kubernetes native, you can do pretty innovative stuff." — Frank Greco Jr., Cloud Native Engineer at Northwestern Mutual
-
-
-
-
For more than 160 years, Northwestern Mutual has maintained its industry leadership in part by keeping a strong focus on risk management.
- For many years, the company took a similar approach to managing its technology and has recently undergone a digital transformation to advance the company’s digital strategy - including making a lot of noise in the cloud-native world.
-In the spring of 2015, this insurance and financial services company acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual’s leading products and services and meld it with LearnVest’s digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company’s existing infrastructure had been optimized for batch workflows hosted on an on-premise datacenter; deployments were very traditional and had to many manual steps that were error prone.
-In order to give the company’s 4.5 million clients the digital experience they’d come to expect, says Williams, "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website. We essentially said, 'You build the system that you think is necessary to support a new, modern-facing one.’ That’s why we departed from anything legacy."
-
-
-
-
-
-
- "Kubernetes has definitely been the right choice for us. It gave us that base framework so teams can be autonomous in what they’re building and deliver very quickly and frequently."
-
-
-
-
-
- Williams and the rest of the platform team decided that the first step would be to start moving from private data centers to AWS. With a new microservice architecture in mind—and the freedom to implement what was best for the organization—they began using Docker containers. After looking into the various container orchestration options, they went with Kubernetes, even though it was still in beta at the time. "There was some debate whether we should build something ourselves, or just leverage that product and evolve with it," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. "Kubernetes has definitely been the right choice for us. It gave us that base framework so teams can be autonomous in what they’re building and deliver very quickly and frequently."
-As early adopters, the team had to do a lot of work with Ansible scripts to stand up the cluster. "We had a lot of hard security requirements given the nature of our business," explains Bryan Pfremmer, App Platform Teams Manager, Northwestern Mutual. "We found ourselves running a configuration that very few other people ever tried." The client experience group was the first to use the new platform; today, a few hundred of the company’s 1,500 engineers are using it and more are eager to get on board.
-The results have been dramatic. Before, infrastructure deployments could take two weeks; now, it is done in a matter of minutes. Now with a focus on Infrastructure automation, and self-service, "You can take an app to production in that same day if you want to," says Pfremmer.
-
-
-
-
-
-
-"Now, developers have autonomy, they can use this whenever they want, however they want. It becomes more valuable the more instrumentation downstream that happens, as we mature in it."
-
-
+
In the spring of 2015, this insurance and financial services company acquired a fintech startup, LearnVest, and decided to take "Northwestern Mutual's leading products and services and meld it with LearnVest's digital experience and innovative financial planning platform," says Brad Williams, Director of Engineering for Client Experience, Northwestern Mutual. The company's existing infrastructure had been optimized for batch workflows hosted in an on-premise datacenter; deployments were very traditional and had too many manual steps that were error-prone.
+
+
In order to give the company's 4.5 million clients the digital experience they'd come to expect, says Williams, "We had to build a platform that was elastically scalable, but also much more responsive, so we could quickly get data to the client website. We essentially said, 'You build the system that you think is necessary to support a new, modern-facing one.' That's why we departed from anything legacy."
-
-
- The process used to be so cumbersome that minor bug releases would be bundled with feature releases. With the new streamlined system enabled by Kubernetes, the number of deployments has increased from about 24 a year to more than 500 in just the first 10 months of 2017. Availability has also been improved: There used to be a six-hour control window for commits every early Sunday morning, as well as other periods of general maintenance, during which outages could happen. "Now there’s no planned outage window," notes Pfremmer.
-Northwestern Mutual built that API management tool—called Kanali—and open sourced it in the summer of 2017. The team took on the project because it was a key capability for what they were building and prior the solution worked in an "anti-cloud native way that was different than everything else we were doing," says Greco. Now API management is just another container deployed to Kubernetes along with a separate Jaeger deployment.
-Now the engineers using the Kubernetes deployment platform have the added benefit of visibility in production—and autonomy. Before, a centralized team and would have to run a trace. "Now, developers have autonomy, they can use this whenever they want, however they want. It becomes more valuable the more instrumentation downstream that happens, as we mature in it." says Greco.
+{{< case-studies/quote image="/images/case-studies/northwestern/banner3.jpg" >}}
+"Kubernetes has definitely been the right choice for us. It gave us that base framework so teams can be autonomous in what they're building and deliver very quickly and frequently."
+{{< /case-studies/quote >}}
+
Williams and the rest of the platform team decided that the first step would be to start moving from private data centers to AWS. With a new microservice architecture in mind—and the freedom to implement what was best for the organization—they began using Docker containers. After looking into the various container orchestration options, they went with Kubernetes, even though it was still in beta at the time. "There was some debate whether we should build something ourselves, or just leverage that product and evolve with it," says Northwestern Mutual Cloud Native Engineer Frank Greco Jr. "Kubernetes has definitely been the right choice for us. It gave us that base framework so teams can be autonomous in what they're building and deliver very quickly and frequently."
+
+
As early adopters, the team had to do a lot of work with Ansible scripts to stand up the cluster. "We had a lot of hard security requirements given the nature of our business," explains Bryan Pfremmer, App Platform Teams Manager, Northwestern Mutual. "We found ourselves running a configuration that very few other people ever tried." The client experience group was the first to use the new platform; today, a few hundred of the company's 1,500 engineers are using it and more are eager to get on board.
+
+
The results have been dramatic. Before, infrastructure deployments could take two weeks; now they are done in a matter of minutes. With a focus on infrastructure automation and self-service, "You can take an app to production in that same day if you want to," says Pfremmer.
+
+{{< case-studies/quote image="/images/case-studies/northwestern/banner4.jpg" >}}
+"Now, developers have autonomy, they can use this whenever they want, however they want. It becomes more valuable the more instrumentation downstream that happens, as we mature in it."
+{{< /case-studies/quote >}}
-
+
The process used to be so cumbersome that minor bug releases would be bundled with feature releases. With the new streamlined system enabled by Kubernetes, the number of deployments has increased from about 24 a year to more than 500 in just the first 10 months of 2017. Availability has also been improved: There used to be a six-hour control window for commits every early Sunday morning, as well as other periods of general maintenance, during which outages could happen. "Now there's no planned outage window," notes Pfremmer.
-
-
- "We’re trying to make what we’re doing known so that we can find people who are like, 'Yeah, that’s interesting. I want to come do it!’"
-
-
+
Northwestern Mutual built that API management tool—called Kanali—and open sourced it in the summer of 2017. The team took on the project because it was a key capability for what they were building, and the prior solution worked in an "anti-cloud native way that was different than everything else we were doing," says Greco. Now API management is just another container deployed to Kubernetes, along with a separate Jaeger deployment.
-
- But the team didn’t stop there. "In a large enterprise, you’re going to have people using Kubernetes, but then you’re also going to have people using WAS and .NET," says Greco. "You may not be at a point where your whole stack can be cloud native. What if you can take your API management tool and make it cloud native, but still proxy to legacy systems? Using different pieces that are cloud native, open source and Kubernetes native, you can do pretty innovative stuff."
- As the team continues to improve its stack and share its Kubernetes best practices, it feels that Northwestern Mutual’s reputation as a technology-first company is evolving too. "No one would think a company that’s 160-plus years old is foraying this deep into the cloud and infrastructure stack," says Pfremmer. And they’re hoping that means they’ll be able to attract new talent. "We’re trying to make what we’re doing known so that we can find people who are like, 'Yeah, that’s interesting. I want to come do it!’"
+
Now the engineers using the Kubernetes deployment platform have the added benefit of visibility in production—and autonomy. Before, a centralized team would have to run a trace. "Now, developers have autonomy, they can use this whenever they want, however they want. It becomes more valuable the more instrumentation downstream that happens, as we mature in it," says Greco.
+{{< case-studies/quote >}}
+"We're trying to make what we're doing known so that we can find people who are like, 'Yeah, that's interesting. I want to come do it!'"
+{{< /case-studies/quote >}}
-
+
But the team didn't stop there. "In a large enterprise, you're going to have people using Kubernetes, but then you're also going to have people using WAS and .NET," says Greco. "You may not be at a point where your whole stack can be cloud native. What if you can take your API management tool and make it cloud native, but still proxy to legacy systems? Using different pieces that are cloud native, open source and Kubernetes native, you can do pretty innovative stuff."
-
+
As the team continues to improve its stack and share its Kubernetes best practices, it feels that Northwestern Mutual's reputation as a technology-first company is evolving too. "No one would think a company that's 160-plus years old is foraying this deep into the cloud and infrastructure stack," says Pfremmer. And they're hoping that means they'll be able to attract new talent. "We're trying to make what we're doing known so that we can find people who are like, 'Yeah, that's interesting. I want to come do it!'"
diff --git a/content/ko/case-studies/ocado/index.html b/content/ko/case-studies/ocado/index.html
index 79ac9bf3a826a..59374f820e6d2 100644
--- a/content/ko/case-studies/ocado/index.html
+++ b/content/ko/case-studies/ocado/index.html
@@ -1,99 +1,83 @@
---
title: Ocado Case Study
-
linkTitle: Ocado
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
logo: ocado_featured_logo.png
featured: true
weight: 4
quote: >
- People at Ocado Technology have been quite amazed. They ask, ‘Can we do this on a Dev cluster?’ and 10 minutes later we have rolled out something that is deployed across the cluster. The speed from idea to implementation to deployment is amazing.
+ People at Ocado Technology have been quite amazed. They ask, 'Can we do this on a Dev cluster?' and 10 minutes later we have rolled out something that is deployed across the cluster. The speed from idea to implementation to deployment is amazing.
+
+new_case_study_styles: true
+heading_background: /images/case-studies/ocado/banner1.jpg
+heading_title_logo: /images/ocado_logo.png
+subheading: >
+ Ocado: Running Grocery Warehouses with a Cloud Native Platform
+case_study_details:
+ - Company: Ocado Technology
+ - Location: Hatfield, England
+ - Industry: Grocery retail technology and platforms
---
-
-
CASE STUDY:
Ocado: Running Grocery Warehouses with a Cloud Native Platform
-
+
Challenge
-
- Company Ocado Technology Location Hatfield, England Industry Grocery retail technology and platforms
-
+
The world's largest online-only grocery retailer, Ocado developed the Ocado Smart Platform to manage its own operations, from websites to warehouses, and is now licensing the technology to other retailers such as Kroger. To set up the first warehouses for the platform, Ocado shifted from virtual machines and Puppet infrastructure to Docker containers, using CoreOS's fleet scheduler to provision all the services on its OpenStack-based private cloud on bare metal. As the Smart Platform grew and "fleet was going end-of-life," says Platform Engineer Mike Bryant, "we started looking for a more complete platform, with all of these disparate infrastructure services being brought together in one unified API."
-
-
-
-
-
Challenge
- The world’s largest online-only grocery retailer, Ocado developed the Ocado Smart Platform to manage its own operations, from websites to warehouses, and is now licensing the technology to other retailers such as Kroger. To set up the first warehouses for the platform, Ocado shifted from virtual machines and Puppet infrastructure to Docker containers, using CoreOS’s fleet scheduler to provision all the services on its OpenStack-based private cloud on bare metal. As the Smart Platform grew and "fleet was going end-of-life," says Platform Engineer Mike Bryant, "we started looking for a more complete platform, with all of these disparate infrastructure services being brought together in one unified API."
-
Solution
- The team decided to migrate from fleet to Kubernetes on Ocado’s private cloud. The Kubernetes stack currently uses kubeadm for bootstrapping, CNI with Weave Net for networking, Prometheus Operator for monitoring, Fluentd for logging, and OpenTracing for distributed tracing. The first app on Kubernetes, a business-critical service in the warehouses, went into production in the summer of 2017, with a mass migration continuing into 2018. Hundreds of Ocado engineers working on the Smart Platform are now deploying on Kubernetes.
+
Solution
-
+
The team decided to migrate from fleet to Kubernetes on Ocado's private cloud. The Kubernetes stack currently uses kubeadm for bootstrapping, CNI with Weave Net for networking, Prometheus Operator for monitoring, Fluentd for logging, and OpenTracing for distributed tracing. The first app on Kubernetes, a business-critical service in the warehouses, went into production in the summer of 2017, with a mass migration continuing into 2018. Hundreds of Ocado engineers working on the Smart Platform are now deploying on Kubernetes.
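For a sense of what bootstrapping this stack looks like, kubeadm is driven by a small configuration file. The version and pod subnet below are assumptions made for the sketch (10.32.0.0/12 is Weave Net's default allocation range), not Ocado's actual values.

```yaml
# Assumed values for illustration, not Ocado's actual settings. The pod
# subnet matches Weave Net's default allocation range so the CNI plugin
# and kubeadm agree on pod addressing.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0       # assumed version for the sketch
networking:
  podSubnet: 10.32.0.0/12        # Weave Net's default IP allocation range
```

A control plane brought up with `kubeadm init --config` against a file like this would then have the Weave Net manifest applied on top, with Prometheus Operator and Fluentd layered in as cluster add-ons.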
-
+
Impact
+
With Kubernetes, "the speed from idea to implementation to deployment is amazing," says Bryant. "I've seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month." And because there are no longer restrictive deployment windows in the warehouses, the rate of deployments has gone from as few as two per week to dozens per week. Ocado has also achieved cost savings because Kubernetes gives the team the ability to have more fine-grained resource allocation. Says DevOps Team Leader Kevin McCormack: "We have more confidence in the resource allocation/separation features of Kubernetes, so we have been able to migrate from around 10 fleet clusters to one Kubernetes cluster." The team also uses Prometheus and Grafana to visualize resource allocation, and makes the data available to developers. "The increased visibility offered by Prometheus means developers are more aware of what they are using and how their use impacts others, especially since we now have one shared cluster," says McCormack. "I'd estimate that we use about 15-25% less hardware resources to host the same applications in Kubernetes in our test environments."
-
-Impact
- With Kubernetes, "the speed from idea to implementation to deployment is amazing," says Bryant. "I’ve seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month." And because there are no longer restrictive deployment windows in the warehouses, the rate of deployments has gone from as few as two per week to dozens per week. Ocado has also achieved cost savings because Kubernetes gives the team the ability to have more fine-grained resource allocation. Says DevOps Team Leader Kevin McCormack: "We have more confidence in the resource allocation/separation features of Kubernetes, so we have been able to migrate from around 10 fleet clusters to one Kubernetes cluster." The team also uses Prometheus and Grafana to visualize resource allocation, and makes the data available to developers. "The increased visibility offered by Prometheus means developers are more aware of what they are using and how their use impacts others, especially since we now have one shared cluster," says McCormack. "I’d estimate that we use about 15-25% less hardware resources to host the same applications in Kubernetes in our test environments."
-
-
-
-
-
-
-
- "People at Ocado Technology have been quite amazed. They ask, ‘Can we do this on a Dev cluster?’ and 10 minutes later we have rolled out something that is deployed across the cluster. The speed from idea to implementation to deployment is amazing." - Mike Bryant, Platform Engineer, Ocado
-
-
-
-
-
-When it was founded in 2000, Ocado was an online-only grocery retailer in the U.K. In the years since, it has expanded from delivering produce to families to providing technology to other grocery retailers.
-The company began developing its Ocado Smart Platform to manage its own operations, from websites to warehouses, and is now licensing the technology to other grocery chains around the world, such as Kroger. To set up the first warehouses on the platform, Ocado shifted from virtual machines and Puppet infrastructure to Docker containers, using CoreOS’s fleet scheduler to provision all the services on its OpenStack-based private cloud on bare metal. As the Smart Platform grew, and "fleet was going end-of-life," says Platform Engineer Mike Bryant, "we started looking for a more complete platform, with all of these disparate infrastructure services being brought together in one unified API."
-Bryant had already been using Kubernetes with Code for Life, a children’s education project that’s part of Ocado’s charity arm. "We really liked it, so we started looking at it seriously for our production workloads," says Bryant. The team that managed fleet had researched orchestration solutions and landed on Kubernetes as well. "We were looking for a platform with wide adoption, and that was where the momentum was," says DevOps Team Leader Kevin McCormack. The two paths converged, and "We didn’t even go through any proof-of-concept stage. The Code for Life work served that purpose," says Bryant.
-
-
-
-
-
- "We were looking for a platform with wide adoption, and that was where the momentum was, the two paths converged, and we didn’t even go through any proof-of-concept stage. The Code for Life work served that purpose,"
- Kevin McCormack, DevOps Team Leader, Ocado
-
-
-
-
- In the summer of 2016, the team began migrating from fleet to Kubernetes on Ocado’s private cloud. The Kubernetes stack currently uses kubeadm for bootstrapping, CNI with Weave Net for networking, Prometheus Operator for monitoring, Fluentd for logging, and OpenTracing for distributed tracing.
- The first app on Kubernetes, a business-critical service in the warehouses, went into production a year later. Once that app was running smoothly, a mass migration continued into 2018. Hundreds of Ocado engineers working on the Smart Platform are now deploying on Kubernetes, and the platform is live in Ocado’s warehouses, managing tens of thousands of orders a week. At full capacity, Ocado’s latest warehouse in Erith, southeast London, will deliver more than 200,000 orders per week, making it the world’s largest facility for online grocery.
- There are about 150 microservices now running on Kubernetes, with multiple instances of many of them. "We’re not just deploying all these microservices at once. We’re deploying them all for one warehouse, and then they’re all being deployed again for the next warehouse, and again and again," says Bryant.
- The move to Kubernetes was eye-opening for many people at Ocado Technology. "In the early days of putting the platform into our test infrastructure, the technical architect asked what network performance was like on Weave Net with encryption turned on," recalls Bryant. "So we found a Docker container for iPerf, wrote a daemon set, deployed it. A few moments later, we’ve deployed the entire thing across this cluster. He was pretty blown away by that."
-
-
-
-
-
- "The unified API of Kubernetes means this is all in one place, and it’s one flow for approval and rollout. I’ve seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month."
- Mike Bryant, Platform Engineer, Ocado
-
-
-
-
-
-
- Indeed, the impact has been profound. "Prior to containerization, we had quite restrictive deployment windows in our warehouses," says Bryant. "Moving to microservices, we’ve been able to deploy much more frequently. We’ve been able to move towards continuous delivery in a number of areas. In our older warehouse, new application deployments involve talking to a bunch of different teams for different levels of the stack: from VM provisioning, to storage, to load balancers, and so on. The unified API of Kubernetes means this is all in one place, and it’s one flow for approval and rollout. I’ve seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month."
- The rate of deployment has gone from as few as two per week to dozens per week. "With Kubernetes, some of our development teams have been able to deploy their application to production on the new platform without us noticing," says Bryant, "which means they’re faster at doing what they need to do and we have less work."
- Ocado has also achieved cost savings because Kubernetes gives the team the ability to have more fine-grained resource allocation. "That lets us shrink quite a lot of our deployments from being per-core VM deployments to having fractions of the core," says Bryant. Adds McCormack: "We have more confidence in the resource allocation/separation features of Kubernetes, so we have been able to migrate from around 10 fleet clusters to one Kubernetes cluster. This means we use our hardware better since if we have to always have two nodes of excess capacity available in case of node failures then we only need two extra instead of 20."
-
-
-
-
-
- "CNCF have provided us with support of different technologies. We’ve been able to adopt those in a very easy fashion. We do like that CNCF is vendor agnostic. We’re not being asked to commit to this one way of doing things. The vast diversity of viewpoints in CNCF lead to better technology."
- Mike Bryant, Platform Engineer, Ocado
-
-
-
-
-
- The team also uses Prometheus and Grafana to visualize resource allocation, and makes the data available to developers. "The increased visibility offered by Prometheus means developers are more aware of what they are using and how their use impacts others, especially since we now have one shared cluster," says McCormack. "I’d estimate that we use about 15-25% less hardware resource to host the same applications in Kubernetes in our test environments."
- One of the broader benefits of cloud native, says Bryant, is the unified API. "We have one method of doing our deployments that covers the wide range of things we need to do, and we can extend the API," he says. In addition to using Prometheus Operator, the Ocado team has started writing its own operators, some of which have been open sourced. Plus, "CNCF has provided us with support of these different technologies. We’ve been able to adopt those in a very easy fashion. We do like that CNCF is vendor agnostic. We’re not being asked to commit to this one way of doing things. The vast diversity of viewpoints in the CNCF leads to better technology."
- Ocado’s own technology, in the form of its Smart Platform, will soon be used around the world. And cloud native plays a crucial role in this global expansion. "I wouldn’t have wanted to try it without Kubernetes," says Bryant. "Kubernetes has made it so much nicer, especially to have that consistent way of deploying all of the applications, then taking the same thing and being able to replicate it. It’s very valuable."
-
-
-
+{{< case-studies/quote author="Mike Bryant, Platform Engineer, Ocado" >}}
+"People at Ocado Technology have been quite amazed. They ask, 'Can we do this on a Dev cluster?' and 10 minutes later we have rolled out something that is deployed across the cluster. The speed from idea to implementation to deployment is amazing."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+When it was founded in 2000, Ocado was an online-only grocery retailer in the U.K. In the years since, it has expanded from delivering produce to families to providing technology to other grocery retailers.
+{{< /case-studies/lead >}}
+
+
The company began developing its Ocado Smart Platform to manage its own operations, from websites to warehouses, and is now licensing the technology to other grocery chains around the world, such as Kroger. To set up the first warehouses on the platform, Ocado shifted from virtual machines and Puppet infrastructure to Docker containers, using CoreOS's fleet scheduler to provision all the services on its OpenStack-based private cloud on bare metal. As the Smart Platform grew, and "fleet was going end-of-life," says Platform Engineer Mike Bryant, "we started looking for a more complete platform, with all of these disparate infrastructure services being brought together in one unified API."
+
+
Bryant had already been using Kubernetes with Code for Life, a children's education project that's part of Ocado's charity arm. "We really liked it, so we started looking at it seriously for our production workloads," says Bryant. The team that managed fleet had researched orchestration solutions and landed on Kubernetes as well. "We were looking for a platform with wide adoption, and that was where the momentum was," says DevOps Team Leader Kevin McCormack. The two paths converged, and "We didn't even go through any proof-of-concept stage. The Code for Life work served that purpose," says Bryant.
+
+{{< case-studies/quote
+ image="/images/case-studies/ocado/banner3.jpg"
+ author="Kevin McCormack, DevOps Team Leader, Ocado"
+>}}
+"We were looking for a platform with wide adoption, and that was where the momentum was, the two paths converged, and we didn't even go through any proof-of-concept stage. The Code for Life work served that purpose,"
+{{< /case-studies/quote >}}
+
+
In the summer of 2016, the team began migrating from fleet to Kubernetes on Ocado's private cloud. The Kubernetes stack currently uses kubeadm for bootstrapping, CNI with Weave Net for networking, Prometheus Operator for monitoring, Fluentd for logging, and OpenTracing for distributed tracing.
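
That stack is driven almost entirely by declarative configuration. As a rough sketch (not Ocado's actual configuration), bootstrapping a cluster with kubeadm so that a pod network add-on such as Weave Net can be layered on top might look like this:

```yaml
# Illustrative only: a minimal kubeadm configuration for a cluster that
# will use a pod-network add-on such as Weave Net.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.32.0.0/12   # Weave Net's default pod address range
```

After `kubeadm init --config kubeadm-config.yaml`, the CNI plugin, Prometheus Operator, and Fluentd are all installed as ordinary Kubernetes workloads in the same declarative way.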
+
+
The first app on Kubernetes, a business-critical service in the warehouses, went into production a year later. Once that app was running smoothly, a mass migration continued into 2018. Hundreds of Ocado engineers working on the Smart Platform are now deploying on Kubernetes, and the platform is live in Ocado's warehouses, managing tens of thousands of orders a week. At full capacity, Ocado's latest warehouse in Erith, southeast London, will deliver more than 200,000 orders per week, making it the world's largest facility for online grocery.
+
+
There are about 150 microservices now running on Kubernetes, with multiple instances of many of them. "We're not just deploying all these microservices at once. We're deploying them all for one warehouse, and then they're all being deployed again for the next warehouse, and again and again," says Bryant.
+
+
The move to Kubernetes was eye-opening for many people at Ocado Technology. "In the early days of putting the platform into our test infrastructure, the technical architect asked what network performance was like on Weave Net with encryption turned on," recalls Bryant. "So we found a Docker container for iPerf, wrote a daemon set, deployed it. A few moments later, we've deployed the entire thing across this cluster. He was pretty blown away by that."
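
A DaemonSet is what makes that ten-minute rollout possible: it runs one copy of a pod on every node in the cluster. A minimal sketch along the lines of that iPerf test (the image and names here are illustrative, not Ocado's actual manifest) might look like:

```yaml
# Illustrative only: run an iperf server on every node so node-to-node
# network throughput can be measured across the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: iperf-test
spec:
  selector:
    matchLabels:
      app: iperf-test
  template:
    metadata:
      labels:
        app: iperf-test
    spec:
      containers:
      - name: iperf
        image: networkstatic/iperf3   # hypothetical image choice
        args: ["-s"]                  # run as a server for client pods to test against
        ports:
        - containerPort: 5201
```

A single `kubectl apply -f iperf-daemonset.yaml` schedules the pod onto every node, and deleting the DaemonSet tears the whole test down just as quickly.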
+
+{{< case-studies/quote
+ image="/images/case-studies/ocado/banner4.jpg"
+ author="Mike Bryant, Platform Engineer, Ocado"
+>}}
+"The unified API of Kubernetes means this is all in one place, and it's one flow for approval and rollout. I've seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month."
+{{< /case-studies/quote >}}
+
+
Indeed, the impact has been profound. "Prior to containerization, we had quite restrictive deployment windows in our warehouses," says Bryant. "Moving to microservices, we've been able to deploy much more frequently. We've been able to move towards continuous delivery in a number of areas. In our older warehouse, new application deployments involve talking to a bunch of different teams for different levels of the stack: from VM provisioning, to storage, to load balancers, and so on. The unified API of Kubernetes means this is all in one place, and it's one flow for approval and rollout. I've seen features go from development to production inside of a week now. In the old world, a new application deployment could easily take over a month."
+
+
The rate of deployment has gone from as few as two per week to dozens per week. "With Kubernetes, some of our development teams have been able to deploy their application to production on the new platform without us noticing," says Bryant, "which means they're faster at doing what they need to do and we have less work."
+
+
Ocado has also achieved cost savings because Kubernetes gives the team the ability to have more fine-grained resource allocation. "That lets us shrink quite a lot of our deployments from being per-core VM deployments to having fractions of the core," says Bryant. Adds McCormack: "We have more confidence in the resource allocation/separation features of Kubernetes, so we have been able to migrate from around 10 fleet clusters to one Kubernetes cluster. This means we use our hardware better since if we have to always have two nodes of excess capacity available in case of node failures then we only need two extra instead of 20."
+
+{{< case-studies/quote author="Mike Bryant, Platform Engineer, Ocado" >}}
+"CNCF have provided us with support of different technologies. We've been able to adopt those in a very easy fashion. We do like that CNCF is vendor agnostic. We're not being asked to commit to this one way of doing things. The vast diversity of viewpoints in CNCF lead to better technology."
+{{< /case-studies/quote >}}
+
+
The team also uses Prometheus and Grafana to visualize resource allocation, and makes the data available to developers. "The increased visibility offered by Prometheus means developers are more aware of what they are using and how their use impacts others, especially since we now have one shared cluster," says McCormack. "I'd estimate that we use about 15-25% less hardware resources to host the same applications in Kubernetes in our test environments."
+
+
One of the broader benefits of cloud native, says Bryant, is the unified API. "We have one method of doing our deployments that covers the wide range of things we need to do, and we can extend the API," he says. In addition to using Prometheus Operator, the Ocado team has started writing its own operators, some of which have been open sourced. Plus, "CNCF has provided us with support of these different technologies. We've been able to adopt those in a very easy fashion. We do like that CNCF is vendor agnostic. We're not being asked to commit to this one way of doing things. The vast diversity of viewpoints in the CNCF leads to better technology."
+
+
Ocado's own technology, in the form of its Smart Platform, will soon be used around the world. And cloud native plays a crucial role in this global expansion. "I wouldn't have wanted to try it without Kubernetes," says Bryant. "Kubernetes has made it so much nicer, especially to have that consistent way of deploying all of the applications, then taking the same thing and being able to replicate it. It's very valuable."
diff --git a/content/ko/case-studies/openAI/index.html b/content/ko/case-studies/openAI/index.html
index 1b95ec5f35758..543c1dee648c8 100644
--- a/content/ko/case-studies/openAI/index.html
+++ b/content/ko/case-studies/openAI/index.html
@@ -2,98 +2,68 @@
title: OpenAI Case Study
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/openAI/banner1.jpg
+heading_title_logo: /images/openAI_logo.png
+subheading: >
+ Launching and Scaling Up Experiments, Made Simple
+case_study_details:
+ - Company: OpenAI
+ - Location: San Francisco, California
+ - Industry: Artificial Intelligence Research
---
-
-
-CASE STUDY:
-Launching and Scaling Up Experiments, Made Simple
+
Challenge
+
+
An artificial intelligence research lab, OpenAI needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to easily scale. Portability, speed, and cost were the main drivers.
+
+
Solution
-
+
OpenAI began running Kubernetes on top of AWS in 2016, and in early 2017 migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity. "We use Kubernetes mainly as a batch scheduling system and rely on our autoscaler to dynamically scale up and down our cluster," says Christopher Berner, Head of Infrastructure. "This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration."
-
+
Impact
-
- Company OpenAI Location San Francisco, California Industry Artificial Intelligence Research
-
+
The company has benefited from greater portability: "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters," says Berner. Being able to use its own data centers when appropriate is "lowering costs and providing us access to hardware that we wouldn't necessarily have access to in the cloud," he adds. "As long as the utilization is high, the costs are much lower there." Launching experiments also takes far less time: "One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days. In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
-
-
-
-
-
-Challenge
- An artificial intelligence research lab, OpenAI needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to easily scale. Portability, speed, and cost were the main drivers.
-
-Solution
- OpenAI began running Kubernetes on top of AWS in 2016, and in early 2017 migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity. "We use Kubernetes mainly as a batch scheduling system and rely on our autoscaler to dynamically scale up and down our cluster," says Christopher Berner, Head of Infrastructure. "This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration."
-
-Impact
- The company has benefited from greater portability: "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters," says Berner. Being able to use its own data centers when appropriate is "lowering costs and providing us access to hardware that we wouldn’t necessarily have access to in the cloud," he adds. "As long as the utilization is high, the costs are much lower there." Launching experiments also takes far less time: "One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days. In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
-
+{{< case-studies/quote >}}
+Check out "Building the Infrastructure that Powers the Future of AI," presented by Vicki Cheung and Jonas Schneider, Members of Technical Staff at OpenAI, at KubeCon/CloudNativeCon Europe 2017.
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+From experiments in robotics to old-school video game play research, OpenAI's work in artificial intelligence technology is meant to be shared.
+{{< /case-studies/lead >}}
-
-
-
-
-
-
-
-
-
-
-
-Check out "Building the Infrastructure that Powers the Future of AI" presented by Vicki Cheung, Member of Technical Staff & Jonas Schneider, Member of Technical Staff at OpenAI from KubeCon/CloudNativeCon Europe 2017.
-
-
-
-
-
-
-From experiments in robotics to old-school video game play research, OpenAI’s work in artificial intelligence technology is meant to be shared.
- With a mission to ensure powerful AI systems are safe, OpenAI cares deeply about open source—both benefiting from it and contributing safety technology into it. "The research that we do, we want to spread it as widely as possible so everyone can benefit," says OpenAI’s Head of Infrastructure Christopher Berner. The lab’s philosophy—as well as its particular needs—lent itself to embracing an open source, cloud native strategy for its deep learning infrastructure.
- OpenAI started running Kubernetes on top of AWS in 2016, and a year later, migrated the Kubernetes clusters to Azure. "We probably use Kubernetes differently from a lot of people," says Berner. "We use it for batch scheduling and as a workload manager for the cluster. It’s a way of coordinating a large number of containers that are all connected together. We rely on our autoscaler to dynamically scale up and down our cluster. This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration."
- In the past year, Berner has overseen the launch of several Kubernetes clusters in OpenAI’s own data centers. "We run them in a hybrid model where the control planes—the Kubernetes API servers, etcd and everything—are all in Azure, and then all of the Kubernetes nodes are in our own data center," says Berner. "The cloud is really convenient for managing etcd and all of the masters, and having backups and spinning up new nodes if anything breaks. This model allows us to take advantage of lower costs and have the availability of more specialized hardware in our own data center."
-
-
-
-
-
-
- OpenAI’s experiments take advantage of Kubernetes’ benefits, including portability. "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters..."
-
-
-
-
-
- Different teams at OpenAI currently run a couple dozen projects. While the largest-scale workloads manage bare cloud VMs directly, most of OpenAI’s experiments take advantage of Kubernetes’ benefits, including portability. "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters," says Berner. The on-prem clusters are generally "used for workloads where you need lots of GPUs, something like training an ImageNet model. Anything that’s CPU heavy, that’s run in the cloud. But we also have a number of teams that run their experiments both in Azure and in our own data centers, just depending on which cluster has free capacity, and that’s hugely valuable."
- Berner has made the Kubernetes clusters available to all OpenAI teams to use if it’s a good fit. "I’ve worked a lot with our games team, which at the moment is doing research on classic console games," he says. "They had been running a bunch of their experiments on our dev servers, and they had been trying out Google cloud, managing their own VMs. We got them to try out our first on-prem Kubernetes cluster, and that was really successful. They’ve now moved over completely to it, and it has allowed them to scale up their experiments by 10x, and do that without needing to invest significant engineering time to figure out how to manage more machines. A lot of people are now following the same path."
-
-
-
-
-
-"One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days," says Berner. "In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
-
-
+
With a mission to ensure powerful AI systems are safe, OpenAI cares deeply about open source—both benefiting from it and contributing safety technology into it. "The research that we do, we want to spread it as widely as possible so everyone can benefit," says OpenAI's Head of Infrastructure Christopher Berner. The lab's philosophy—as well as its particular needs—lent itself to embracing an open source, cloud native strategy for its deep learning infrastructure.
-
-
- That path has been simplified by frameworks and tools that two of OpenAI’s teams have developed to handle interaction with Kubernetes. "You can just write some Python code, fill out a bit of configuration with exactly how many machines you need and which types, and then it will prepare all of those specifications and send it to the Kube cluster so that it gets launched there," says Berner. "And it also provides a bit of extra monitoring and better tooling that’s designed specifically for these machine learning projects."
- The impact that Kubernetes has had at OpenAI is impressive. With Kubernetes, the frameworks and tooling, including the autoscaler, in place, launching experiments takes far less time. "One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days," says Berner. "In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
- Plus, the flexibility they now have to use their on-prem Kubernetes cluster when appropriate is "lowering costs and providing us access to hardware that we wouldn’t necessarily have access to in the cloud," he says. "As long as the utilization is high, the costs are much lower in our data center. To an extent, you can also customize your hardware to exactly what you need."
+
OpenAI started running Kubernetes on top of AWS in 2016, and a year later, migrated the Kubernetes clusters to Azure. "We probably use Kubernetes differently from a lot of people," says Berner. "We use it for batch scheduling and as a workload manager for the cluster. It's a way of coordinating a large number of containers that are all connected together. We rely on our autoscaler to dynamically scale up and down our cluster. This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration."
+
+
In the past year, Berner has overseen the launch of several Kubernetes clusters in OpenAI's own data centers. "We run them in a hybrid model where the control planes—the Kubernetes API servers, etcd and everything—are all in Azure, and then all of the Kubernetes nodes are in our own data center," says Berner. "The cloud is really convenient for managing etcd and all of the masters, and having backups and spinning up new nodes if anything breaks. This model allows us to take advantage of lower costs and have the availability of more specialized hardware in our own data center."
+
+{{< case-studies/quote image="/images/case-studies/openAI/banner3.jpg" >}}
+OpenAI's experiments take advantage of Kubernetes' benefits, including portability. "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters..."
+{{< /case-studies/quote >}}
+
+
Different teams at OpenAI currently run a couple dozen projects. While the largest-scale workloads manage bare cloud VMs directly, most of OpenAI's experiments take advantage of Kubernetes' benefits, including portability. "Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters," says Berner. The on-prem clusters are generally "used for workloads where you need lots of GPUs, something like training an ImageNet model. Anything that's CPU heavy, that's run in the cloud. But we also have a number of teams that run their experiments both in Azure and in our own data centers, just depending on which cluster has free capacity, and that's hugely valuable."
+
+
Berner has made the Kubernetes clusters available to all OpenAI teams to use if it's a good fit. "I've worked a lot with our games team, which at the moment is doing research on classic console games," he says. "They had been running a bunch of their experiments on our dev servers, and they had been trying out Google cloud, managing their own VMs. We got them to try out our first on-prem Kubernetes cluster, and that was really successful. They've now moved over completely to it, and it has allowed them to scale up their experiments by 10x, and do that without needing to invest significant engineering time to figure out how to manage more machines. A lot of people are now following the same path."
+
+{{< case-studies/quote image="/images/case-studies/openAI/banner4.jpg" >}}
+"One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days," says Berner. "In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
+{{< /case-studies/quote >}}
+
That path has been simplified by frameworks and tools that two of OpenAI's teams have developed to handle interaction with Kubernetes. "You can just write some Python code, fill out a bit of configuration with exactly how many machines you need and which types, and then it will prepare all of those specifications and send it to the Kube cluster so that it gets launched there," says Berner. "And it also provides a bit of extra monitoring and better tooling that's designed specifically for these machine learning projects."
-
+
The impact that Kubernetes has had at OpenAI is impressive. With Kubernetes, the frameworks and tooling, including the autoscaler, in place, launching experiments takes far less time. "One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days," says Berner. "In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work."
-
-
- "Research teams can now take advantage of the frameworks we’ve built on top of Kubernetes, which make it easy to launch experiments, scale them by 10x or 50x, and take little effort to manage." — CHRISTOPHER BERNER, HEAD OF INFRASTRUCTURE FOR OPENAI
-
-
+
Plus, the flexibility they now have to use their on-prem Kubernetes cluster when appropriate is "lowering costs and providing us access to hardware that we wouldn't necessarily have access to in the cloud," he says. "As long as the utilization is high, the costs are much lower in our data center. To an extent, you can also customize your hardware to exactly what you need."
-
+{{< case-studies/quote author="CHRISTOPHER BERNER, HEAD OF INFRASTRUCTURE FOR OPENAI" >}}
+"Research teams can now take advantage of the frameworks we've built on top of Kubernetes, which make it easy to launch experiments, scale them by 10x or 50x, and take little effort to manage."
+{{< /case-studies/quote >}}
- OpenAI is also benefiting from other technologies in the CNCF cloud-native ecosystem. gRPC is used by many of its systems for communications between different services, and Prometheus is in place "as a debugging tool if things go wrong," says Berner. "We actually haven’t had any real problems in our Kubernetes clusters recently, so I don’t think anyone has looked at our Prometheus monitoring in a while. If something breaks, it will be there."
- One of the things Berner continues to focus on is Kubernetes’ ability to scale, which is essential to deep learning experiments. OpenAI has been able to push one of its Kubernetes clusters on Azure up to more than 2,500 nodes. "I think we’ll probably hit the 5,000-machine number that Kubernetes has been tested at before too long," says Berner, adding, "We’re definitely hiring if you’re excited about working on these things!"
-
+
OpenAI is also benefiting from other technologies in the CNCF cloud-native ecosystem. gRPC is used by many of its systems for communications between different services, and Prometheus is in place "as a debugging tool if things go wrong," says Berner. "We actually haven't had any real problems in our Kubernetes clusters recently, so I don't think anyone has looked at our Prometheus monitoring in a while. If something breaks, it will be there."
-
+
One of the things Berner continues to focus on is Kubernetes' ability to scale, which is essential to deep learning experiments. OpenAI has been able to push one of its Kubernetes clusters on Azure up to more than 2,500 nodes. "I think we'll probably hit the 5,000-machine number that Kubernetes has been tested at before too long," says Berner, adding, "We're definitely hiring if you're excited about working on these things!"
diff --git a/content/ko/case-studies/peardeck/index.html b/content/ko/case-studies/peardeck/index.html
index 688754a6200f9..43982776778fc 100644
--- a/content/ko/case-studies/peardeck/index.html
+++ b/content/ko/case-studies/peardeck/index.html
@@ -1,111 +1,87 @@
---
title: Pear Deck Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_peardeck.css
+
+new_case_study_styles: true
+case_study_details:
+  - Company: Pear Deck
+  - Location: Iowa City, Iowa
+  - Industry: Educational Software
---
-
- Company Pear Deck Location Iowa City, Iowa Industry Educational Software
-
+
Challenge
-
- The three-year-old startup provides a web app for teachers to interact with their students in the classroom. The JavaScript app was built on Google’s web app development platform Firebase, using Heroku. As the user base steadily grew, so did the development team. "We outgrew Heroku when we started wanting to have multiple services, and the deploying story got pretty horrendous. We were frustrated that we couldn’t have the developers quickly stage a version," says CEO Riley Eynon-Lynch. "Tracing and monitoring became basically impossible." On top of that, many of Pear Deck’s customers are behind government firewalls and connect through Firebase, not Pear Deck’s servers, making troubleshooting even more difficult.
+
The three-year-old startup provides a web app for teachers to interact with their students in the classroom. The JavaScript app was built on Google's web app development platform Firebase, using Heroku. As the user base steadily grew, so did the development team. "We outgrew Heroku when we started wanting to have multiple services, and the deploying story got pretty horrendous. We were frustrated that we couldn't have the developers quickly stage a version," says CEO Riley Eynon-Lynch. "Tracing and monitoring became basically impossible." On top of that, many of Pear Deck's customers are behind government firewalls and connect through Firebase, not Pear Deck's servers, making troubleshooting even more difficult.
+
Solution
-
- The new cloud native stack immediately improved the development workflow, speeding up deployments. Prometheus gave Pear Deck "a lot of confidence, knowing that people are still logging into the app and using it all the time," says Eynon-Lynch. "The biggest impact is being able to work as a team on the configuration in git in a pull request, and the biggest confidence comes from the solidity of the abstractions and the trust that we have in Kubernetes actually making our yaml files a reality."
-
+
The team began moving the app from Heroku to Docker containers running on Google Kubernetes Engine, orchestrated by Kubernetes and monitored with Prometheus.
-
-
-
-
- "We didn’t even realize how stressed out we were about our lack of insight into what was happening with the app. I’m really excited and have more and more confidence in the actual state of our application for our actual users, and not just what the CPU graphs are saying, because of Prometheus and Kubernetes."
- – RILEY EYNON-LYNCH, CEO OF PEAR DECK
-
-
-
-
-
-
-With the speed befitting a startup, Pear Deck delivered its first prototype to customers within three months of incorporating.
-As a former high school math teacher, CEO Riley Eynon-Lynch felt an urgency to provide a tech solution to classes where instructors struggle to interact with every student in a short amount of time. "Pear Deck is an app that students can use to interact with the teacher all at once," he says. "When the teacher asks a question, instead of just the kid at the front of the room answering again, everybody can answer every single question. It’s a huge fundamental shift in the messaging to the students about how much we care about them and how much they are a part of the classroom."
-Eynon-Lynch and his partners quickly built a JavaScript web app on Google’s web app development platform Firebase, and launched the minimum viable product [MVP] on Heroku "because it was fast and easy," he says. "We made everything as easy as we could."
-
-But once it launched, the user base began growing steadily at a rate of 30 percent a month. "Our Heroku bill was getting totally insane," Eynon-Lynch says. But even more crucially, as the company hired more developers to keep pace, "we outgrew Heroku. We wanted to have multiple services and the deploying story got pretty horrendous. We were frustrated that we couldn’t have the developers quickly stage a version. Tracing and monitoring became basically impossible."
-
-On top of that, many of Pear Deck’s customers are behind government firewalls and connect through Firebase, not Pear Deck’s servers, making troubleshooting even more difficult.
-
-The team began looking around for another solution, and finally decided in early 2016 to start moving the app from Heroku to Docker containers running on Google Kubernetes Engine, orchestrated by Kubernetes and monitored with Prometheus.
-
-
-
-
-
- "When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch.
-
-
-
-
-
- They had considered other options like Google’s App Engine (which they were already using for one service) and Amazon’s Elastic Compute Cloud (EC2), while experimenting with running one small service that wasn’t accessible to the Internet in Kubernetes. "When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch. "We didn’t really consider Terraform and the other competitors because the abstractions offered by Kubernetes just jumped off the page to us."
- Once the team started porting its Heroku apps into Kubernetes, which was "super easy," he says, the impact was immediate. "Before, to make a new version of the app meant going to Heroku and reconfiguring 10 new services, so basically no one was willing to do it, and we never staged things," he says. "Now we can deploy our exact same configuration in lots of different clusters in 30 seconds. We have a full set up that’s always running, and then any of our developers or designers can stage new versions with one command, including their recent changes. We stage all the time now, and everyone stopped talking about how cool it is because it’s become invisible how great it is."
-
- Along with Kubernetes came Prometheus. "Until pretty recently we didn’t have any kind of visibility into aggregate server metrics or performance," says Eynon-Lynch. The team had tried to use Google Kubernetes Engine’s Stackdriver monitoring, but had problems making it work, and considered New Relic. When they started looking at Prometheus in the fall of 2016, "the fit between the abstractions in Prometheus and the way we think about how our system works, was so clear and obvious," he says.
- The integration with Kubernetes made set-up easy. Once Helm installed Prometheus, "We started getting a graph of the health of all our Kubernetes nodes and pods immediately. I think we were pretty hooked at that point," Eynon-Lynch says. "Then we got our own custom instrumentation working in 15 minutes, and had an actively updated count of requests that we could do, rate on and get a sense of how many users are connected at a given point. And then it was another hour before we had alarms automatically showing up in our Slack channel. All that was in one afternoon. And it was an afternoon of gasping with delight, basically!"
-
-
-
-
-
- "We started getting a graph of the health of all our Kubernetes nodes and pods immediately. I think we were pretty hooked at that point," Eynon-Lynch says. "Then we got our own custom instrumentation working in 15 minutes, and had an actively updated count of requests that we could do, rate on and get a sense of how many users are connected at a given point. And then it was another hour before we had alarms automatically showing up in our Slack channel. All that was in one afternoon. And it was an afternoon of gasping with delight, basically!"
-
-
-
-
-
- With Pear Deck’s specific challenges—traffic through Firebase as well as government firewalls—Prometheus was a game-changer. "We didn’t even realize how stressed out we were about our lack of insight into what was happening with the app," Eynon-Lynch says. Before, when a customer would report that the app wasn’t working, the team had to manually investigate the problem without knowing whether customers were affected all over the world, or whether Firebase was down, and where.
- To help solve that problem, the team wrote a script that pings Firebase from several different geographical locations, and then reports the responses to Prometheus in a histogram. "A huge impact that Prometheus had on us was just an amazing sigh of relief, of feeling like we knew what was happening," he says. "It took 45 minutes to implement [the Firebase alarm] because we knew that we had this trustworthy metrics platform in Prometheus. We weren’t going to have to figure out, ‘Where do we send these metrics? How do we aggregate the metrics? How do we understand them?’"
- Plus, Prometheus has allowed Pear Deck to build alarms for business goals. One measures the rate of successful app loads and goes off if the day’s loads are less than 90 percent of the loads from seven days before. "We run a JavaScript app behind ridiculous firewalls and all kinds of crazy browser extensions messing with it—Chrome will push a feature that breaks some CSS that we’re using," Eynon-Lynch says. "So that gives us a lot of confidence, and we at least know that people are still logging into the app and using it all the time."
- Now, when a customer complains, and none of the alarms have gone off, the team can feel confident that it’s not a widespread problem. "Just to be sure, we can go and double check the graphs and say, ‘Yep, there’s currently 10,000 people connected to that Firebase node. It’s definitely working. Let’s investigate your network settings, customer,’" he says. "And we can pass that back off to our support reps instead of the whole development team freaking out that Firebase is down."
- Pear Deck is also giving back to the community, building and open-sourcing a metrics aggregator that enables end-user monitoring in Prometheus. "We can measure, for example, the time to interactive-dom on the web clients," he says. "The users all report that to our aggregator, then the aggregator reports to Prometheus. So we can set an alarm for some client side errors."
- Most of Pear Deck’s services have now been moved onto Kubernetes. And all of the team’s new code is going on Kubernetes. "Kubernetes lets us experiment with service configurations and stage them on a staging cluster all at once, and test different scenarios and talk about them as a development team looking at code, not just talking about the steps we would eventually take as humans," says Eynon-Lynch.
-
-
-
-
-
- "A huge impact that Prometheus had on us was just an amazing sigh of relief, of feeling like we knew what was happening. It took 45 minutes to implement [the Firebase alarm] because we knew that we had this trustworthy metrics platform in Prometheus...in terms of the cloud, Kubernetes and Prometheus have so much to offer," he says.
-
-
-
-
-
- Looking ahead, the team is planning to explore autoscaling on Kubernetes. With users all over the world but mostly in the United States, there are peaks and valleys in the traffic. One service that’s still on App Engine can get as many as 10,000 requests a second during the day but far less at night. "We pay for the same servers at night, so I understand there’s autoscaling that we can be taking advantage of," he says. "Implementing it is a big worry, exposing the rest of our Kubernetes cluster to us and maybe messing that up. But it’s definitely our intention to move everything over, because now none of the developers want to work on that app anymore because it’s such a pain to deploy it."
-
-They’re also eager to explore the work that Kubernetes is doing with stateful sets. "Right now all of the services we run in Kubernetes are stateless, and Google basically runs our databases for us and manages backups," Eynon-Lynch says. "But we’re interested in building our own web-socket solution that doesn’t have to be super stateful but will have maybe an hour’s worth of state on it."
-
-That project will also involve Prometheus, for a dark launch of web socket connections. "We don’t know how reliable web socket connections behind all these horrible firewalls will be to our servers," he says. "We don’t know what work Firebase has done to make them more reliable. So I’m really looking forward to trying to get persistent connections with web sockets to our clients and have optional tools to understand if it’s working. That’s our next new adventure, into stateful servers."
-
-As for Prometheus, Eynon-Lynch thinks the company has only gotten started. "We haven’t instrumented all our important features, especially those that depend on third parties," he says. "We have to wait for those third parties to tell us they’re down, which sometimes they don’t do for a long time. So I’m really excited and have more and more confidence in the actual state of our application for our actual users, and not just what the CPU graphs are saying, because of Prometheus and Kubernetes."
-
-For a spry startup that’s continuing to grow rapidly—and yes, they’re hiring!—Pear Deck is notably satisfied with how its infrastructure has evolved in the cloud native ecosystem. "Usually I have some angsty thing where I want to get to the new, better technology," says Eynon-Lynch, "but in terms of the cloud, Kubernetes and Prometheus have so much to offer."
-
-
-
-
+
Impact
+
The new cloud native stack immediately improved the development workflow, speeding up deployments. Prometheus gave Pear Deck "a lot of confidence, knowing that people are still logging into the app and using it all the time," says Eynon-Lynch. "The biggest impact is being able to work as a team on the configuration in git in a pull request, and the biggest confidence comes from the solidity of the abstractions and the trust that we have in Kubernetes actually making our yaml files a reality."
+
+{{< case-studies/quote author="RILEY EYNON-LYNCH, CEO OF PEAR DECK" >}}
+"We didn't even realize how stressed out we were about our lack of insight into what was happening with the app. I'm really excited and have more and more confidence in the actual state of our application for our actual users, and not just what the CPU graphs are saying, because of Prometheus and Kubernetes."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+With the speed befitting a startup, Pear Deck delivered its first prototype to customers within three months of incorporating.
+{{< /case-studies/lead >}}
+
+
As a former high school math teacher, CEO Riley Eynon-Lynch felt an urgency to provide a tech solution to classes where instructors struggle to interact with every student in a short amount of time. "Pear Deck is an app that students can use to interact with the teacher all at once," he says. "When the teacher asks a question, instead of just the kid at the front of the room answering again, everybody can answer every single question. It's a huge fundamental shift in the messaging to the students about how much we care about them and how much they are a part of the classroom."
+
+
Eynon-Lynch and his partners quickly built a JavaScript web app on Google's web app development platform Firebase, and launched the minimum viable product [MVP] on Heroku "because it was fast and easy," he says. "We made everything as easy as we could."
+
+
But once it launched, the user base began growing steadily at a rate of 30 percent a month. "Our Heroku bill was getting totally insane," Eynon-Lynch says. But even more crucially, as the company hired more developers to keep pace, "we outgrew Heroku. We wanted to have multiple services and the deploying story got pretty horrendous. We were frustrated that we couldn't have the developers quickly stage a version. Tracing and monitoring became basically impossible."
+
+
On top of that, many of Pear Deck's customers are behind government firewalls and connect through Firebase, not Pear Deck's servers, making troubleshooting even more difficult.
+
+
The team began looking around for another solution, and finally decided in early 2016 to start moving the app from Heroku to Docker containers running on Google Kubernetes Engine, orchestrated by Kubernetes and monitored with Prometheus.
+
+{{< case-studies/quote image="/images/case-studies/peardeck/banner1.jpg" >}}
+"When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch.
+{{< /case-studies/quote >}}
+
+
They had considered other options like Google's App Engine (which they were already using for one service) and Amazon's Elastic Compute Cloud (EC2), while experimenting with running one small service that wasn't accessible to the Internet in Kubernetes. "When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch. "We didn't really consider Terraform and the other competitors because the abstractions offered by Kubernetes just jumped off the page to us."
+
+
Once the team started porting its Heroku apps into Kubernetes, which was "super easy," he says, the impact was immediate. "Before, to make a new version of the app meant going to Heroku and reconfiguring 10 new services, so basically no one was willing to do it, and we never staged things," he says. "Now we can deploy our exact same configuration in lots of different clusters in 30 seconds. We have a full setup that's always running, and then any of our developers or designers can stage new versions with one command, including their recent changes. We stage all the time now, and everyone stopped talking about how cool it is because it's become invisible how great it is."
+
+
Along with Kubernetes came Prometheus. "Until pretty recently we didn't have any kind of visibility into aggregate server metrics or performance," says Eynon-Lynch. The team had tried to use Google Kubernetes Engine's Stackdriver monitoring, but had problems making it work, and considered New Relic. When they started looking at Prometheus in the fall of 2016, "the fit between the abstractions in Prometheus and the way we think about how our system works, was so clear and obvious," he says.
+
+
The integration with Kubernetes made set-up easy. Once Helm installed Prometheus, "We started getting a graph of the health of all our Kubernetes nodes and pods immediately. I think we were pretty hooked at that point," Eynon-Lynch says. "Then we got our own custom instrumentation working in 15 minutes, and had an actively updated count of requests that we could do, rate on and get a sense of how many users are connected at a given point. And then it was another hour before we had alarms automatically showing up in our Slack channel. All that was in one afternoon. And it was an afternoon of gasping with delight, basically!"
+
+{{< case-studies/quote image="/images/case-studies/peardeck/banner2.jpg" >}}
+"We started getting a graph of the health of all our Kubernetes nodes and pods immediately. I think we were pretty hooked at that point," Eynon-Lynch says. "Then we got our own custom instrumentation working in 15 minutes, and had an actively updated count of requests that we could do, rate on and get a sense of how many users are connected at a given point. And then it was another hour before we had alarms automatically showing up in our Slack channel. All that was in one afternoon. And it was an afternoon of gasping with delight, basically!"
+{{< /case-studies/quote >}}
+
+
With Pear Deck's specific challenges—traffic through Firebase as well as government firewalls—Prometheus was a game-changer. "We didn't even realize how stressed out we were about our lack of insight into what was happening with the app," Eynon-Lynch says. Before, when a customer would report that the app wasn't working, the team had to manually investigate the problem without knowing whether customers were affected all over the world, or whether Firebase was down, and where.
+
+
To help solve that problem, the team wrote a script that pings Firebase from several different geographical locations, and then reports the responses to Prometheus in a histogram. "A huge impact that Prometheus had on us was just an amazing sigh of relief, of feeling like we knew what was happening," he says. "It took 45 minutes to implement [the Firebase alarm] because we knew that we had this trustworthy metrics platform in Prometheus. We weren't going to have to figure out, 'Where do we send these metrics? How do we aggregate the metrics? How do we understand them?'"
+
+
Plus, Prometheus has allowed Pear Deck to build alarms for business goals. One measures the rate of successful app loads and goes off if the day's loads are less than 90 percent of the loads from seven days before. "We run a JavaScript app behind ridiculous firewalls and all kinds of crazy browser extensions messing with it—Chrome will push a feature that breaks some CSS that we're using," Eynon-Lynch says. "So that gives us a lot of confidence, and we at least know that people are still logging into the app and using it all the time."
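
In Prometheus terms, an alarm like that can be expressed with the `offset` modifier, which compares the current data against the same window a week earlier. A minimal sketch of such an alerting rule, assuming a hypothetical `app_loads_total` counter (Pear Deck's real metric names aren't public):

```yaml
# Illustrative only: a Prometheus alerting rule in the spirit of the
# business-goal alarm described above.
groups:
- name: business-goals
  rules:
  - alert: AppLoadsDropped
    # Fire if the past hour's successful app loads fall below 90% of the
    # loads seen in the same hour seven days earlier.
    expr: >
      sum(increase(app_loads_total[1h]))
        < 0.9 * sum(increase(app_loads_total[1h] offset 7d))
    for: 30m
    annotations:
      summary: App loads are below 90% of the same time last week
```

Comparing against the same hour a week earlier, rather than a fixed threshold, keeps the alarm meaningful despite daily and weekly traffic cycles.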
+
+
Now, when a customer complains, and none of the alarms have gone off, the team can feel confident that it's not a widespread problem. "Just to be sure, we can go and double check the graphs and say, 'Yep, there's currently 10,000 people connected to that Firebase node. It's definitely working. Let's investigate your network settings, customer,'" he says. "And we can pass that back off to our support reps instead of the whole development team freaking out that Firebase is down."
+
+
Pear Deck is also giving back to the community, building and open-sourcing a metrics aggregator that enables end-user monitoring in Prometheus. "We can measure, for example, the time to interactive-dom on the web clients," he says. "The users all report that to our aggregator, then the aggregator reports to Prometheus. So we can set an alarm for some client side errors."
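The aggregator they open-sourced isn't reproduced here, but the shape of the idea (an HTTP endpoint that turns client-side reports into a Prometheus histogram) can be sketched in a few lines; the metric name, ports, and wire format below are all invented:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

from prometheus_client import Histogram, start_http_server

# Hypothetical histogram of time-to-interactive reported by browsers.
TTI = Histogram(
    "client_time_to_interactive_seconds",
    "Time to interactive reported by web clients",
)

class ReportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Assume the body is a plain number of seconds, e.g. "1.84".
        length = int(self.headers.get("Content-Length", 0))
        TTI.observe(float(self.rfile.read(length).decode()))
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    start_http_server(9100)  # /metrics endpoint for Prometheus to scrape
    HTTPServer(("", 8080), ReportHandler).serve_forever()
```

Prometheus then scrapes the aggregator like any other target, so alarms on client-side metrics work the same way as the server-side ones.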
+
+
Most of Pear Deck's services have now been moved onto Kubernetes. And all of the team's new code is going on Kubernetes. "Kubernetes lets us experiment with service configurations and stage them on a staging cluster all at once, and test different scenarios and talk about them as a development team looking at code, not just talking about the steps we would eventually take as humans," says Eynon-Lynch.
+
+{{< case-studies/quote >}}
+"A huge impact that Prometheus had on us was just an amazing sigh of relief, of feeling like we knew what was happening. It took 45 minutes to implement [the Firebase alarm] because we knew that we had this trustworthy metrics platform in Prometheus...in terms of the cloud, Kubernetes and Prometheus have so much to offer," he says.
+{{< /case-studies/quote >}}
+
+
Looking ahead, the team is planning to explore autoscaling on Kubernetes. With users all over the world but mostly in the United States, there are peaks and valleys in the traffic. One service that's still on App Engine can get as many as 10,000 requests a second during the day but far fewer at night. "We pay for the same servers at night, so I understand there's autoscaling that we can be taking advantage of," he says. "Implementing it is a big worry, exposing the rest of our Kubernetes cluster to us and maybe messing that up. But it's definitely our intention to move everything over, because now none of the developers want to work on that app anymore because it's such a pain to deploy it."
+
+
They're also eager to explore the work that Kubernetes is doing with stateful sets. "Right now all of the services we run in Kubernetes are stateless, and Google basically runs our databases for us and manages backups," Eynon-Lynch says. "But we're interested in building our own web-socket solution that doesn't have to be super stateful but will have maybe an hour's worth of state on it."
+
+
That project will also involve Prometheus, for a dark launch of web socket connections. "We don't know how reliable web socket connections behind all these horrible firewalls will be to our servers," he says. "We don't know what work Firebase has done to make them more reliable. So I'm really looking forward to trying to get persistent connections with web sockets to our clients and have optional tools to understand if it's working. That's our next new adventure, into stateful servers."
+
+
As for Prometheus, Eynon-Lynch thinks the company has only gotten started. "We haven't instrumented all our important features, especially those that depend on third parties," he says. "We have to wait for those third parties to tell us they're down, which sometimes they don't do for a long time. So I'm really excited and have more and more confidence in the actual state of our application for our actual users, and not just what the CPU graphs are saying, because of Prometheus and Kubernetes."
+
+
For a spry startup that's continuing to grow rapidly—and yes, they're hiring!—Pear Deck is notably satisfied with how its infrastructure has evolved in the cloud native ecosystem. "Usually I have some angsty thing where I want to get to the new, better technology," says Eynon-Lynch, "but in terms of the cloud, Kubernetes and Prometheus have so much to offer."
diff --git a/content/ko/case-studies/pearson/index.html b/content/ko/case-studies/pearson/index.html
index 78f70228e5a30..501bcea8e776c 100644
--- a/content/ko/case-studies/pearson/index.html
+++ b/content/ko/case-studies/pearson/index.html
@@ -3,85 +3,81 @@
linkTitle: Pearson
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
featured: false
quote: >
- We’re already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online.
+ We're already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online.
+
+new_case_study_styles: true
+heading_background: /images/case-studies/pearson/banner1.jpg
+heading_title_logo: /images/pearson_logo.png
+subheading: >
+ Reinventing the World's Largest Education Company With Kubernetes
+case_study_details:
+ - Company: Pearson
+ - Location: Global
+ - Industry: Education
---
-
-
CASE STUDY:
Reinventing the World’s Largest Education Company With Kubernetes
-
-
-
- Company Pearson Location Global
- Industry Education
-
-
-
-
-
-
Challenge
- A global education company serving 75 million learners, Pearson set a goal to more than double that number, to 200 million, by 2025. A key part of this growth is in digital learning experiences, and Pearson was having difficulty in scaling and adapting to its growing online audience. They needed an infrastructure platform that would be able to scale quickly and deliver products to market faster.
-
-
Solution
- "To transform our infrastructure, we had to think beyond simply enabling automated provisioning," says Chris Jackson, Director for Cloud Platforms & SRE at Pearson. "We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way." The team chose Docker container technology and Kubernetes orchestration "because of its flexibility, ease of management and the way it would improve our engineers’ productivity."
-
-
-
-
Impact
- With the platform, there has been substantial improvements in productivity and speed of delivery. "In some cases, we’ve gone from nine months to provision physical assets in a data center to just a few minutes to provision and get a new idea in front of a customer," says John Shirley, Lead Site Reliability Engineer for the Cloud Platform Team. Jackson estimates they’ve achieved 15-20% developer productivity savings. Before, outages were an issue during their busiest time of year, the back-to-school period. Now, there’s high confidence in their ability to meet aggressive customer SLAs.
-
-
-
-
-
-
- "We’re already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online."
— Chris Jackson, Director for Cloud Platforms & SRE at Pearson
-
-
-
-
- In 2015, Pearson was already serving 75 million learners as the world’s largest education company, offering curriculum and assessment tools for Pre-K through college and beyond. Understanding that innovating the digital education experience was the key to the future of all forms of education, the company set out to increase its reach to 200 million people by 2025.
- That goal would require a transformation of its existing infrastructure, which was in data centers. In some cases, it took nine months to provision physical assets. In order to adapt to the demands of its growing online audience, Pearson needed an infrastructure platform that would be able to scale quickly and deliver business-critical products to market faster. "We had to think beyond simply enabling automated provisioning," says Chris Jackson, Director for Cloud Platforms & SRE at Pearson. "We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way."
- With 400 development groups and diverse brands with varying business and technical needs, Pearson embraced Docker container technology so that each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Jackson chose Kubernetes orchestration "because of its flexibility, ease of management and the way it would improve our engineers’ productivity," he says.
- The team adopted Kubernetes when it was still version 1.2 and are still going strong now on 1.7; they use Terraform and Ansible to deploy it on to basic AWS primitives. "We were trying to understand how we can create value for Pearson from this technology," says Ben Somogyi, Principal Architect for the Cloud Platforms. "It turned out that Kubernetes’ benefits are huge. We’re trying to help our applications development teams that use our platform go faster, so we filled that gap with a CI/CD pipeline that builds their images for them, standardizes them, patches everything up, allows them to deploy their different environments onto the cluster, and obfuscating the details of how difficult the work underneath the covers is."
-
-
-
-
- "Your internal customers need to feel like they are choosing the very best option for them. We are experiencing this first hand in the growth of adoption. We are seeing triple-digit, year-on-year growth of the service."
— Chris Jackson, Director for Cloud Platforms & SRE at Pearson
-
-
-
-
- That work resulted in two tools for building and deploying applications in the cluster that Pearson has open sourced. "We’re an education company, so we want to share what we can," says Somogyi.
- Now that development teams no longer have to worry about infrastructure, there have been substantial improvements in productivity and speed of delivery. "In some cases, we’ve gone from nine months to provision physical assets in a data center to just a few minutes to provision and to get a new idea in front of a customer," says John Shirley, Lead Site Reliability Engineer for the Cloud Platform Team.
- According to Jackson, the Cloud Platforms team can "provision a new proof-of-concept environment for a development team in minutes, and then they can take that to production as quickly as they are able to. This is the value proposition of all major technology services, and we had to compete like one to become our developers’ preferred choice. Just because you work for the same company, you do not have the right to force people into a mediocre service. Your internal customers need to feel like they are choosing the very best option for them. We are experiencing this first hand in the growth of adoption. We are seeing triple-digit, year-on-year growth of the service."
- Jackson estimates they’ve achieved a 15-20% boost in productivity for developer teams who adopt the platform. They also see a reduction in the number of customer-impacting incidents. Plus, says Jackson, "Teams who were previously limited to 1-2 releases per academic year can now ship code multiple times per day!"
-
-
-
-
- "Teams who were previously limited to 1-2 releases per academic year can now ship code multiple times per day!"
— Chris Jackson, Director for Cloud Platforms & SRE at Pearson
-
-
-
-
- Availability has also been positively impacted. The back-to-school period is the company’s busiest time of year, and "you have to keep applications up," says Somogyi. Before, this was a pain point for the legacy infrastructure. Now, for the applications that have been migrated to the Kubernetes platform, "We have 100% uptime. We’re not worried about 9s. There aren’t any. It’s 100%, which is pretty astonishing for us, compared to some of the existing platforms that have legacy challenges," says Shirley.
-
- "You can’t even begin to put a price on how much that saves the company," Jackson explains. "A reduction in the number of support cases takes load out of our operations. The customer sentiment of having a reliable product drives customer retention and growth. It frees us to think about investing more into our digital transformation and taking a better quality of education to a global scale."
-
- The platform itself is also being broken down, "so we can quickly release smaller pieces of the platform, like upgrading our Kubernetes or all the different modules that make up our platform," says Somogyi. "One of the big focuses in 2018 is this scheme of delivery to update the platform itself."
-
- Guided by Pearson’s overarching goal of getting to 200 million users, the team has run internal tests of the platform’s scalability. "We had a challenge: 28 million requests within a 10 minute period," says Shirley. "And we demonstrated that we can hit that, with an acceptable latency. We saw that we could actually get that pretty readily, and we scaled up in just a few seconds, using open source tools entirely. Shout out to Locustfor that one. So that’s amazing."
-
-
-
- "We have 100% uptime. We’re not worried about 9s. There aren’t any. It’s 100%, which is pretty astonishing for us, compared to some of the existing platforms that have legacy challenges. You can’t even begin to put a price on how much that saves the company."
— Benjamin Somogyi, Principal Systems Architect at Pearson
-
-
- In just two years, "We’re already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure," says Jackson. "But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online."
- So far, about 15 production products are running on the new platform, including Pearson’s new flagship digital education service, the Global Learning Platform. The Cloud Platform team continues to prepare, onboard and support customers that are a good fit for the platform. Some existing products will be refactored into 12-factor apps, while others are being developed so that they can live on the platform from the get-go. "There are challenges with bringing in new customers of course, because we have to help them to see a different way of developing, a different way of building," says Shirley.
- But, he adds, "It is our corporate motto: Always Learning. We encourage those teams that haven’t started a cloud native journey, to see the future of technology, to learn, to explore. It will pique your interest. Keep learning."
-
-
+
+
Challenge
+
+
A global education company serving 75 million learners, Pearson set a goal to more than double that number, to 200 million, by 2025. A key part of this growth is in digital learning experiences, and Pearson was having difficulty in scaling and adapting to its growing online audience. They needed an infrastructure platform that would be able to scale quickly and deliver products to market faster.
+
+
Solution
+
+
"To transform our infrastructure, we had to think beyond simply enabling automated provisioning," says Chris Jackson, Director for Cloud Platforms & SRE at Pearson. "We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way." The team chose Docker container technology and Kubernetes orchestration "because of its flexibility, ease of management and the way it would improve our engineers' productivity."
+
+
Impact
+
+
With the platform, there have been substantial improvements in productivity and speed of delivery. "In some cases, we've gone from nine months to provision physical assets in a data center to just a few minutes to provision and get a new idea in front of a customer," says John Shirley, Lead Site Reliability Engineer for the Cloud Platform Team. Jackson estimates they've achieved 15-20% developer productivity savings. Before, outages were an issue during their busiest time of year, the back-to-school period. Now, there's high confidence in their ability to meet aggressive customer SLAs.
+
+{{< case-studies/quote author="Chris Jackson, Director for Cloud Platforms & SRE at Pearson" >}}
+"We're already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure. But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online."
+{{< /case-studies/quote >}}
+
+
In 2015, Pearson was already serving 75 million learners as the world's largest education company, offering curriculum and assessment tools for Pre-K through college and beyond. Understanding that innovating the digital education experience was the key to the future of all forms of education, the company set out to increase its reach to 200 million people by 2025.
+
+
That goal would require a transformation of its existing infrastructure, which was in data centers. In some cases, it took nine months to provision physical assets. In order to adapt to the demands of its growing online audience, Pearson needed an infrastructure platform that would be able to scale quickly and deliver business-critical products to market faster. "We had to think beyond simply enabling automated provisioning," says Chris Jackson, Director for Cloud Platforms & SRE at Pearson. "We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way."
+
+
With 400 development groups and diverse brands with varying business and technical needs, Pearson embraced Docker container technology so that each brand could experiment with building new types of content using their preferred technologies, and then deliver it using containers. Jackson chose Kubernetes orchestration "because of its flexibility, ease of management and the way it would improve our engineers' productivity," he says.
+
+
The team adopted Kubernetes when it was still version 1.2 and are still going strong now on 1.7; they use Terraform and Ansible to deploy it on to basic AWS primitives. "We were trying to understand how we can create value for Pearson from this technology," says Ben Somogyi, Principal Architect for the Cloud Platforms. "It turned out that Kubernetes' benefits are huge. We're trying to help our applications development teams that use our platform go faster, so we filled that gap with a CI/CD pipeline that builds their images for them, standardizes them, patches everything up, allows them to deploy their different environments onto the cluster, and obfuscating the details of how difficult the work underneath the covers is."
+
+{{< case-studies/quote
+ image="/images/case-studies/pearson/banner3.jpg"
+ author="Chris Jackson, Director for Cloud Platforms & SRE at Pearson"
+>}}
+"Your internal customers need to feel like they are choosing the very best option for them. We are experiencing this first hand in the growth of adoption. We are seeing triple-digit, year-on-year growth of the service."
+{{< /case-studies/quote >}}
+
+
That work resulted in two tools for building and deploying applications in the cluster that Pearson has open sourced. "We're an education company, so we want to share what we can," says Somogyi.
+
+
Now that development teams no longer have to worry about infrastructure, there have been substantial improvements in productivity and speed of delivery. "In some cases, we've gone from nine months to provision physical assets in a data center to just a few minutes to provision and to get a new idea in front of a customer," says John Shirley, Lead Site Reliability Engineer for the Cloud Platform Team.
+
+
According to Jackson, the Cloud Platforms team can "provision a new proof-of-concept environment for a development team in minutes, and then they can take that to production as quickly as they are able to. This is the value proposition of all major technology services, and we had to compete like one to become our developers' preferred choice. Just because you work for the same company, you do not have the right to force people into a mediocre service. Your internal customers need to feel like they are choosing the very best option for them. We are experiencing this first hand in the growth of adoption. We are seeing triple-digit, year-on-year growth of the service."
+
+
Jackson estimates they've achieved a 15-20% boost in productivity for developer teams who adopt the platform. They also see a reduction in the number of customer-impacting incidents. Plus, says Jackson, "Teams who were previously limited to 1-2 releases per academic year can now ship code multiple times per day!"
+
+{{< case-studies/quote
+ image="/images/case-studies/pearson/banner4.jpg"
+ author="Chris Jackson, Director for Cloud Platforms & SRE at Pearson"
+>}}
+"Teams who were previously limited to 1-2 releases per academic year can now ship code multiple times per day!"
+{{< /case-studies/quote >}}
+
+
Availability has also been positively impacted. The back-to-school period is the company's busiest time of year, and "you have to keep applications up," says Somogyi. Before, this was a pain point for the legacy infrastructure. Now, for the applications that have been migrated to the Kubernetes platform, "We have 100% uptime. We're not worried about 9s. There aren't any. It's 100%, which is pretty astonishing for us, compared to some of the existing platforms that have legacy challenges," says Shirley.
+
+
"You can't even begin to put a price on how much that saves the company," Jackson explains. "A reduction in the number of support cases takes load out of our operations. The customer sentiment of having a reliable product drives customer retention and growth. It frees us to think about investing more into our digital transformation and taking a better quality of education to a global scale."
+
+
The platform itself is also being broken down, "so we can quickly release smaller pieces of the platform, like upgrading our Kubernetes or all the different modules that make up our platform," says Somogyi. "One of the big focuses in 2018 is this scheme of delivery to update the platform itself."
+
+
Guided by Pearson's overarching goal of getting to 200 million users, the team has run internal tests of the platform's scalability. "We had a challenge: 28 million requests within a 10 minute period," says Shirley. "And we demonstrated that we can hit that, with an acceptable latency. We saw that we could actually get that pretty readily, and we scaled up in just a few seconds, using open source tools entirely. Shout out to Locust for that one. So that's amazing."
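A Locust scenario for that kind of test can be only a few lines. A minimal, hypothetical locustfile using the current Locust API (the endpoint path is invented; Pearson's actual test is not published):

```python
from locust import HttpUser, between, task

class Learner(HttpUser):
    wait_time = between(1, 3)  # simulated think time between requests

    @task
    def view_course(self):
        # Hypothetical endpoint standing in for a real product URL.
        self.client.get("/api/courses/demo")
```

Run it with `locust -f locustfile.py --host https://staging.example.com` and ramp up simulated users from Locust's web UI.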
+
+{{< case-studies/quote author="Benjamin Somogyi, Principal Systems Architect at Pearson" >}}
+"We have 100% uptime. We're not worried about 9s. There aren't any. It's 100%, which is pretty astonishing for us, compared to some of the existing platforms that have legacy challenges. You can't even begin to put a price on how much that saves the company."
+{{< /case-studies/quote >}}
+
+
In just two years, "We're already seeing tremendous benefits with Kubernetes—improved engineering productivity, faster delivery of applications and a simplified infrastructure," says Jackson. "But this is just the beginning. Kubernetes will help transform the way that educational content is delivered online."
+
+
So far, about 15 production products are running on the new platform, including Pearson's new flagship digital education service, the Global Learning Platform. The Cloud Platform team continues to prepare, onboard and support customers that are a good fit for the platform. Some existing products will be refactored into 12-factor apps, while others are being developed so that they can live on the platform from the get-go. "There are challenges with bringing in new customers of course, because we have to help them to see a different way of developing, a different way of building," says Shirley.
+
+
But, he adds, "It is our corporate motto: Always Learning. We encourage those teams that haven't started a cloud native journey, to see the future of technology, to learn, to explore. It will pique your interest. Keep learning."
diff --git a/content/ko/case-studies/pinterest/index.html b/content/ko/case-studies/pinterest/index.html
index e4be7031bbe32..b95fff054a03c 100644
--- a/content/ko/case-studies/pinterest/index.html
+++ b/content/ko/case-studies/pinterest/index.html
@@ -3,106 +3,82 @@
linkTitle: Pinterest
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
featured: false
weight: 30
quote: >
We are in the position to run things at scale, in a public cloud environment, and test things out in way that a lot of people might not be able to do.
+
+new_case_study_styles: true
+heading_background: /images/case-studies/pinterest/banner1.jpg
+heading_title_logo: /images/pinterest_logo.png
+subheading: >
+ Pinning Its Past, Present, and Future on Cloud Native
+case_study_details:
+ - Company: Pinterest
+ - Location: San Francisco, California
+ - Industry: Web and Mobile App
---
+
Challenge
+
+
After eight years in existence, Pinterest had grown into 1,000 microservices and multiple layers of infrastructure and diverse set-up tools and platforms. In 2016 the company launched a roadmap towards a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
-
-
CASE STUDY:
Pinning Its Past, Present, and Future on Cloud Native
+
Solution
-
+
The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.
-
+
Impact
-
- Company Pinterest Location San Francisco, California Industry Web and Mobile App
-
+
"By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. "We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."
-
-
-
-
-
-
Challenge
- After eight years in existence, Pinterest had grown into 1,000 microservices and multiple layers of infrastructure and diverse set-up tools and platforms. In 2016 the company launched a roadmap towards a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
+{{< case-studies/quote author="Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest" >}}
+"So far it's been good, especially the elasticity around how we can configure our Jenkins workloads on that Kubernetes shared cluster. That is the win we were pushing for."
+{{< /case-studies/quote >}}
-
+{{< case-studies/lead >}}
+Pinterest was born on the cloud—running on AWS since day one in 2010—but even cloud native companies can experience some growing pains.
+{{< /case-studies/lead >}}
-
Solution
- The first phase involved moving services to Docker containers. Once these services went into production in early 2017, the team began looking at orchestration to help create efficiencies and manage them in a decentralized way. After an evaluation of various solutions, Pinterest went with Kubernetes.
+
Since its launch, Pinterest has become a household name, with more than 200 million active monthly users and 100 billion objects saved. Underneath the hood, there are 1,000 microservices running and hundreds of thousands of data jobs.
-
+
With such growth came layers of infrastructure and diverse set-up tools and platforms for the different workloads, resulting in an inconsistent and complex end-to-end developer experience, and ultimately less velocity to get to production. So in 2016, the company launched a roadmap toward a new compute platform, led by the vision of having the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
-
Impact
- "By moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest. "We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."
-
-
-
-
-
-
-
-
-"So far it’s been good, especially the elasticity around how we can configure our Jenkins workloads on that Kubernetes shared cluster. That is the win we were pushing for."
— Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group at Pinterest
-
-
-
-
-
- Pinterest was born on the cloud—running on AWS since day one in 2010—but even cloud native companies can experience some growing pains. Since its launch, Pinterest has become a household name, with more than 200 million active monthly users and 100 billion objects saved. Underneath the hood, there are 1,000 microservices running and hundreds of thousands of data jobs.
-With such growth came layers of infrastructure and diverse set-up tools and platforms for the different workloads, resulting in an inconsistent and complex end-to-end developer experience, and ultimately less velocity to get to production.
-So in 2016, the company launched a roadmap toward a new compute platform, led by the vision of having the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
-The first phase involved moving to Docker. "Pinterest has been heavily running on virtual machines, on EC2 instances directly, for the longest time," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group. "To solve the problem around packaging software and not make engineers own portions of the fleet and those kinds of challenges, we standardized the packaging mechanism and then moved that to the container on top of the VM. Not many drastic changes. We didn’t want to boil the ocean at that point."
-
-
-
-
-
- "Though Kubernetes lacked certain things we wanted, we realized that by the time we get to productionizing many of those things, we’ll be able to leverage what the community is doing."
— MICHEAL BENEDICT, PRODUCT MANAGER FOR THE CLOUD AND THE DATA INFRASTRUCTURE GROUP AT PINTEREST
-
-
-
-
-
-The first service that was migrated was the monolith API fleet that powers most of Pinterest. At the same time, Benedict’s infrastructure governance team built chargeback and capacity planning systems to analyze how the company uses its virtual machines on AWS. "It became clear that running on VMs is just not sustainable with what we’re doing," says Benedict. "A lot of resources were underutilized. There were efficiency efforts, which worked fine at a certain scale, but now you have to move to a more decentralized way of managing that. So orchestration was something we thought could help solve that piece."
-That led to the second phase of the roadmap. In July 2017, after an eight-week evaluation period, the team chose Kubernetes over other orchestration platforms. "Kubernetes lacked certain things at the time—for example, we wanted Spark on Kubernetes," says Benedict. "But we realized that the dev cycles we would put in to even try building that is well worth the outcome, both for Pinterest as well as the community. We’ve been in those conversations in the Big Data SIG. We realized that by the time we get to productionizing many of those things, we’ll be able to leverage what the community is doing."
-At the beginning of 2018, the team began onboarding its first use case into the Kubernetes system: Jenkins workloads. "Although we have builds happening during a certain period of the day, we always need to allocate peak capacity," says Benedict. "They don’t have any auto-scaling capabilities, so that capacity stays constant. It is difficult to speed up builds because ramping up takes more time. So given those kind of concerns, we thought that would be a perfect use case for us to work on."
-
-
-
-
-
-
-"So far it’s been good, especially the elasticity around how we can configure our Jenkins workloads on Kubernetes shared cluster. That is the win we were pushing for."
— MICHEAL BENEDICT, PRODUCT MANAGER FOR THE CLOUD AND THE DATA INFRASTRUCTURE GROUP AT PINTEREST
-
-
-
-
-
- They ramped up the cluster, and working with a team of four people, got the Jenkins Kubernetes cluster ready for production. "We still have our static Jenkins cluster," says Benedict, "but on Kubernetes, we are doing similar builds, testing the entire pipeline, getting the artifact ready and just doing the comparison to see, how much time did it take to build over here. Is the SLA okay, is the artifact generated correct, are there issues there?"
-"So far it’s been good," he adds, "especially the elasticity around how we can configure our Jenkins workloads on Kubernetes shared cluster. That is the win we were pushing for."
-By the end of Q1 2018, the team successfully migrated Jenkins Master to run natively on Kubernetes and also collaborated on the Jenkins Kubernetes Plugin to manage the lifecycle of workers. "We’re currently building the entire Pinterest JVM stack (one of the larger monorepos at Pinterest which was recently bazelized) on this new cluster," says Benedict. "At peak, we run thousands of pods on a few hundred nodes. Overall, by moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins. We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."
-
-
-
-
-
-
- "We are in the position to run things at scale, in a public cloud environment, and test things out in way that a lot of people might not be able to do."
— MICHEAL BENEDICT, PRODUCT MANAGER FOR THE CLOUD AND THE DATA INFRASTRUCTURE GROUP AT PINTEREST
-
-
-
-
- Benedict points to a "pretty robust roadmap" going forward. In addition to the Pinterest big data team’s experiments with Spark on Kubernetes, the company collaborated with Amazon’s EKS team on an ENI/CNI plug in.
-Once the Jenkins cluster is up and running out of dark mode, Benedict hopes to establish best practices, including having governance primitives established—including integration with the chargeback system—before moving on to migrating the next service. "We have a healthy pipeline of use-cases to be on-boarded. After Jenkins, we want to enable support for Tensorflow and Apache Spark. At some point, we aim to move the company’s monolithic API service. If we move that and understand the complexity around that, it builds our confidence," says Benedict. "It sets us up for migration of all our other services."
-After years of being a cloud native pioneer, Pinterest is eager to share its ongoing journey. "We are in the position to run things at scale, in a public cloud environment, and test things out in way that a lot of people might not be able to do," says Benedict. "We’re in a great position to contribute back some of those learnings."
-
-
-
-
-
-
+
The first phase involved moving to Docker. "Pinterest has been heavily running on virtual machines, on EC2 instances directly, for the longest time," says Micheal Benedict, Product Manager for the Cloud and the Data Infrastructure Group. "To solve the problem around packaging software and not make engineers own portions of the fleet and those kinds of challenges, we standardized the packaging mechanism and then moved that to the container on top of the VM. Not many drastic changes. We didn't want to boil the ocean at that point."
+
+{{< case-studies/quote
+ image="/images/case-studies/pinterest/banner3.jpg"
+ author="MICHEAL BENEDICT, PRODUCT MANAGER FOR THE CLOUD AND THE DATA INFRASTRUCTURE GROUP AT PINTEREST"
+>}}
+"Though Kubernetes lacked certain things we wanted, we realized that by the time we get to productionizing many of those things, we'll be able to leverage what the community is doing."
+{{< /case-studies/quote >}}
+
+
The first service that was migrated was the monolith API fleet that powers most of Pinterest. At the same time, Benedict's infrastructure governance team built chargeback and capacity planning systems to analyze how the company uses its virtual machines on AWS. "It became clear that running on VMs is just not sustainable with what we're doing," says Benedict. "A lot of resources were underutilized. There were efficiency efforts, which worked fine at a certain scale, but now you have to move to a more decentralized way of managing that. So orchestration was something we thought could help solve that piece."
+
+
That led to the second phase of the roadmap. In July 2017, after an eight-week evaluation period, the team chose Kubernetes over other orchestration platforms. "Kubernetes lacked certain things at the time—for example, we wanted Spark on Kubernetes," says Benedict. "But we realized that the dev cycles we would put in to even try building that is well worth the outcome, both for Pinterest as well as the community. We've been in those conversations in the Big Data SIG. We realized that by the time we get to productionizing many of those things, we'll be able to leverage what the community is doing."
+
+
At the beginning of 2018, the team began onboarding its first use case into the Kubernetes system: Jenkins workloads. "Although we have builds happening during a certain period of the day, we always need to allocate peak capacity," says Benedict. "They don't have any auto-scaling capabilities, so that capacity stays constant. It is difficult to speed up builds because ramping up takes more time. So given those kind of concerns, we thought that would be a perfect use case for us to work on."
+
+{{< case-studies/quote
+ image="/images/case-studies/pinterest/banner4.jpg"
+ author="MICHEAL BENEDICT, PRODUCT MANAGER FOR THE CLOUD AND THE DATA INFRASTRUCTURE GROUP AT PINTEREST"
+>}}
+"So far it's been good, especially the elasticity around how we can configure our Jenkins workloads on Kubernetes shared cluster. That is the win we were pushing for."
+{{< /case-studies/quote >}}
+
+
They ramped up the cluster, and working with a team of four people, got the Jenkins Kubernetes cluster ready for production. "We still have our static Jenkins cluster," says Benedict, "but on Kubernetes, we are doing similar builds, testing the entire pipeline, getting the artifact ready and just doing the comparison to see, how much time did it take to build over here. Is the SLA okay, is the artifact generated correct, are there issues there?"
+
+
"So far it's been good," he adds, "especially the elasticity around how we can configure our Jenkins workloads on Kubernetes shared cluster. That is the win we were pushing for."
+
+
By the end of Q1 2018, the team successfully migrated Jenkins Master to run natively on Kubernetes and also collaborated on the Jenkins Kubernetes Plugin to manage the lifecycle of workers. "We're currently building the entire Pinterest JVM stack (one of the larger monorepos at Pinterest which was recently bazelized) on this new cluster," says Benedict. "At peak, we run thousands of pods on a few hundred nodes. Overall, by moving to Kubernetes the team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of a complicated piece of infrastructure such as Jenkins. We not only saw reduced build times but also huge efficiency wins. For instance, the team reclaimed over 80 percent of capacity during non-peak hours. As a result, the Jenkins Kubernetes cluster now uses 30 percent less instance-hours per-day when compared to the previous static cluster."
+
+{{< case-studies/quote author="MICHEAL BENEDICT, PRODUCT MANAGER FOR THE CLOUD AND THE DATA INFRASTRUCTURE GROUP AT PINTEREST" >}}
+"We are in the position to run things at scale, in a public cloud environment, and test things out in way that a lot of people might not be able to do."
+{{< /case-studies/quote >}}
+
+
Benedict points to a "pretty robust roadmap" going forward. In addition to the Pinterest big data team's experiments with Spark on Kubernetes, the company collaborated with Amazon's EKS team on an ENI/CNI plug-in.
+
+
Once the Jenkins cluster is up and running out of dark mode, Benedict hopes to establish best practices, including having governance primitives established—including integration with the chargeback system—before moving on to migrating the next service. "We have a healthy pipeline of use-cases to be on-boarded. After Jenkins, we want to enable support for Tensorflow and Apache Spark. At some point, we aim to move the company's monolithic API service. If we move that and understand the complexity around that, it builds our confidence," says Benedict. "It sets us up for migration of all our other services."
+
+
After years of being a cloud native pioneer, Pinterest is eager to share its ongoing journey. "We are in the position to run things at scale, in a public cloud environment, and test things out in way that a lot of people might not be able to do," says Benedict. "We're in a great position to contribute back some of those learnings."
diff --git a/content/ko/case-studies/slingtv/index.html b/content/ko/case-studies/slingtv/index.html
index 349ed8c2de9c7..6de46f35ff29c 100644
--- a/content/ko/case-studies/slingtv/index.html
+++ b/content/ko/case-studies/slingtv/index.html
@@ -3,108 +3,77 @@
linkTitle: Sling TV
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
featured: true
weight: 49
quote: >
I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.
----
-
-
-
-
CASE STUDY:
Sling TV: Marrying Kubernetes and AI to Enable Proper Web Scale
-
-
-
-
-
- Company Sling TV Location Englewood, Colorado Industry Streaming television
-
+new_case_study_styles: true
+heading_background: /images/case-studies/slingtv/banner1.jpg
+heading_title_logo: /images/slingtv_logo.png
+subheading: >
+ Sling TV: Marrying Kubernetes and AI to Enable Proper Web Scale
+case_study_details:
+ - Company: Sling TV
+ - Location: Englewood, Colorado
+ - Industry: Streaming television
+---
-
-
-
-
-
Challenge
- Launched by DISH Network in 2015, Sling TV experienced great customer growth from the beginning. After just a year, “we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future,” says Brad Linder, Sling TV’s Cloud Native & Big Data Evangelist. The company has particular challenges: “We take live TV and distribute it over the internet out to a user’s device that we do not control,” says Linder. “In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer’s service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and good customer experience at web scale.”
+
Challenge
-
+
Launched by DISH Network in 2015, Sling TV experienced great customer growth from the beginning. After just a year, "we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future," says Brad Linder, Sling TV's Cloud Native & Big Data Evangelist. The company has particular challenges: "We take live TV and distribute it over the internet out to a user's device that we do not control," says Linder. "In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer's service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and good customer experience at web scale."
-
Solution
- Led by the belief that “the cloud native architectures and patterns really give us a lot of flexibility in meeting the needs of that sort of customer base,” Linder partnered with Rancher Labs to build Sling TV’s next-generation platform around Kubernetes. “We are going to need to enable a hybrid cloud strategy including multiple public clouds and an on-premise VMWare multi data center environment to meet the needs of the business at some point, so getting that sort of abstraction was a real goal,” he says. “That is one of the biggest reasons why we picked Kubernetes.” The team launched its first applications on Kubernetes in Sling TV’s two internal data centers. The push to enable AWS as a data center option is underway and should be available by the end of 2018. The team has added Prometheus for monitoring and Jaeger for tracing, to work alongside the company’s existing tool sets: Zenoss, New Relic and ELK.
+
Solution
-
-
-
+
Led by the belief that "the cloud native architectures and patterns really give us a lot of flexibility in meeting the needs of that sort of customer base," Linder partnered with Rancher Labs to build Sling TV's next-generation platform around Kubernetes. "We are going to need to enable a hybrid cloud strategy including multiple public clouds and an on-premise VMWare multi data center environment to meet the needs of the business at some point, so getting that sort of abstraction was a real goal," he says. "That is one of the biggest reasons why we picked Kubernetes." The team launched its first applications on Kubernetes in Sling TV's two internal data centers. The push to enable AWS as a data center option is underway and should be available by the end of 2018. The team has added Prometheus for monitoring and Jaeger for tracing, to work alongside the company's existing tool sets: Zenoss, New Relic and ELK.
Impact
- “We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps,” says Linder. “We have really enabled a platform thinking based approach to allowing applications to consume common tools. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale.”
-
-
-
-
-
-
-
-
- “I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.”
— Brad Linder, Cloud Native & Big Data Evangelist for Sling TV
-
-
-
-
-
The beauty of streaming television, like the service offered by Sling TV, is that you can watch it from any device you want, wherever you want.
Of course, from the provider side of things, that creates a particular set of challenges
-“We take live TV and distribute it over the internet out to a user’s device that we do not control,” says Brad Linder, Sling TV’s Cloud Native & Big Data Evangelist. “In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer’s service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and we have to do it at web scale.”
-Indeed, Sling TV experienced great customer growth from the beginning of its launch by DISH Network in 2015. After just a year, “we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future,” says Linder. Tasked with building a next-generation web scale platform for the “personalized customer experience,” Linder has spent the past year bringing Kubernetes to Sling TV.
-Led by the belief that “the cloud native architectures and patterns really give us a lot of flexibility in meeting the needs of our customers,” Linder partnered with Rancher Labs to build the platform around Kubernetes. “They have really helped us get our head around how to use Kubernetes,” he says. “We needed the flexibility to enable our use case versus just a simple orchestrater. Enabling our future in a way that did not give us vendor lock-in was also a key part of our strategy. I think that is part of the Rancher value proposition.”
+
"We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps," says Linder. "We have really enabled a platform thinking based approach to allowing applications to consume common tools. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale."
+{{< case-studies/quote author="Brad Linder, Cloud Native & Big Data Evangelist for Sling TV" >}}
+"I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables."
+{{< /case-studies/quote >}}
-
-
-
-
- “We needed the flexibility to enable our use case versus just a simple orchestrater. Enabling our future in a way that did not give us vendor lock-in was also a key part of our strategy. I think that is part of the Rancher value proposition.”
— Brad Linder, Cloud Native & Big Data Evangelist for Sling TV
-
-
-
-
+{{< case-studies/lead >}}
+The beauty of streaming television, like the service offered by Sling TV, is that you can watch it from any device you want, wherever you want.
+{{< /case-studies/lead >}}
-One big reason he chose Kubernetes was getting a level of abstraction that would enable the company to “enable a hybrid cloud strategy including multiple public clouds and an on-premise VMWare multi data center environment to meet the needs of the business,” he says. Another factor was how much the Kubernetes ecosystem has matured over the past couple of years. “We have spent a lot of time and energy around making logging, monitoring and alerting production ready to give us insights into applications’ well-being,” says Linder. The team has added Prometheus for monitoring and Jaeger for tracing, to work alongside the company’s existing tool sets: Zenoss, New Relic and ELK.
-With the emphasis on common tooling, “We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps,” says Linder. “We have really enabled a platform thinking based approach to allowing applications to consume common tools and services. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale.”
-
-
-
-
-“We have to be able to react to changes and hiccups in the matrix. It is the foundation for our ability to deliver a high-quality service for our customers."
— Brad Linder, Cloud Native & Big Data Evangelist for Sling TV
-
-
+
Of course, from the provider side of things, that creates a particular set of challenges. "We take live TV and distribute it over the internet out to a user's device that we do not control," says Brad Linder, Sling TV's Cloud Native & Big Data Evangelist. "In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer's service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and we have to do it at web scale."
-
+
Indeed, Sling TV experienced great customer growth from the beginning of its launch by DISH Network in 2015. After just a year, "we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future," says Linder. Tasked with building a next-generation web scale platform for the "personalized customer experience," Linder has spent the past year bringing Kubernetes to Sling TV.
-
- The team launched its first applications on Kubernetes in Sling TV’s two internal data centers in the early part of Q1 2018 and began to enable AWS as a data center option. The company plans to expand into other public clouds in the future.
-The first application that went into production is a web socket-based back-end notification service. “It allows back-end changes to trigger messages to our clients in the field without the polling,” says Linder. “We are talking about very high volumes of messages with this application. Without something like Kubernetes to be able to scale up and down, as well as just support that overall workload, that is pretty hard to do. I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.”
- Linder oversees three teams working together on building the next-generation platform: a platform engineering team; an enterprise middleware services team; and a big data and analytics team. “We have really tried to bring everything together to be able to have a client application interact with a cloud native middleware layer. That middleware layer must run on a platform, consume platform services and then have logs and events monitored by an artificial agent to keep things running smoothly,” says Linder.
+
Led by the belief that "the cloud native architectures and patterns really give us a lot of flexibility in meeting the needs of our customers," Linder partnered with Rancher Labs to build the platform around Kubernetes. "They have really helped us get our head around how to use Kubernetes," he says. "We needed the flexibility to enable our use case versus just a simple orchestrater. Enabling our future in a way that did not give us vendor lock-in was also a key part of our strategy. I think that is part of the Rancher value proposition."
+{{< case-studies/quote
+ image="/images/case-studies/slingtv/banner3.jpg"
+ author="Brad Linder, Cloud Native & Big Data Evangelist for Sling TV"
+>}}
+"We needed the flexibility to enable our use case versus just a simple orchestrater. Enabling our future in a way that did not give us vendor lock-in was also a key part of our strategy. I think that is part of the Rancher value proposition."
+{{< /case-studies/quote >}}
-
+
One big reason he chose Kubernetes was getting a level of abstraction that would enable the company to "enable a hybrid cloud strategy including multiple public clouds and an on-premise VMWare multi data center environment to meet the needs of the business," he says. Another factor was how much the Kubernetes ecosystem has matured over the past couple of years. "We have spent a lot of time and energy around making logging, monitoring and alerting production ready to give us insights into applications' well-being," says Linder. The team has added Prometheus for monitoring and Jaeger for tracing, to work alongside the company's existing tool sets: Zenoss, New Relic and ELK.
-
-
- This undertaking is about “trying to marry Kubernetes with AI to enable web scale that just works".
— BRAD LINDER, CLOUD NATIVE & BIG DATA EVANGELIST FOR SLING TV
-
-
+
With the emphasis on common tooling, "We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps," says Linder. "We have really enabled a platform-thinking-based approach to allowing applications to consume common tools and services. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale."
-
- Ultimately, this undertaking is about “trying to marry Kubernetes with AI to enable web scale that just works,” he adds. “We want the artificial agents and the big data platform using the actual logs and events coming out of the applications, Kubernetes, the infrastructure, backing services and changes to the environment to make decisions like, ‘Hey we need more capacity for this service so please add more nodes.’ From a platform perspective, if you are truly doing web scale stuff and you are not using AI and big data, in my opinion, you are going to implode under your own weight. It is not a question of if, it is when. If you are in a ‘millions of users’ sort of environment, that implosion is going to be catastrophic. We are on our way to this goal and have learned a lot along the way.”
-For Sling TV, moving to cloud native has been exactly what they needed. “We have to be able to react to changes and hiccups in the matrix,” says Linder. “It is the foundation for our ability to deliver a high-quality service for our customers. Building intelligent platforms, tools and clients in the field consuming those services has got to be part of all of this. In my eyes that is a big part of what cloud native is all about. It is taking these distributed, potentially unreliable entities and enabling a robust customer experience they expect.”
+{{< case-studies/quote
+ image="/images/case-studies/slingtv/banner4.jpg"
+ author="Brad Linder, Cloud Native & Big Data Evangelist for Sling TV"
+>}}
+"We have to be able to react to changes and hiccups in the matrix. It is the foundation for our ability to deliver a high-quality service for our customers."
+{{< /case-studies/quote >}}
+
The team launched its first applications on Kubernetes in Sling TV's two internal data centers in the early part of Q1 2018 and began to enable AWS as a data center option. The company plans to expand into other public clouds in the future.
+
The first application that went into production is a web socket-based back-end notification service. "It allows back-end changes to trigger messages to our clients in the field without the polling," says Linder. "We are talking about very high volumes of messages with this application. Without something like Kubernetes to be able to scale up and down, as well as just support that overall workload, that is pretty hard to do. I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables."
+
Linder oversees three teams working together on building the next-generation platform: a platform engineering team; an enterprise middleware services team; and a big data and analytics team. "We have really tried to bring everything together to be able to have a client application interact with a cloud native middleware layer. That middleware layer must run on a platform, consume platform services and then have logs and events monitored by an artificial agent to keep things running smoothly," says Linder.
+{{< case-studies/quote author="BRAD LINDER, CLOUD NATIVE & BIG DATA EVANGELIST FOR SLING TV">}}
+This undertaking is about "trying to marry Kubernetes with AI to enable web scale that just works."
+{{< /case-studies/quote >}}
-
+
Ultimately, this undertaking is about "trying to marry Kubernetes with AI to enable web scale that just works," he adds. "We want the artificial agents and the big data platform using the actual logs and events coming out of the applications, Kubernetes, the infrastructure, backing services and changes to the environment to make decisions like, 'Hey we need more capacity for this service so please add more nodes.' From a platform perspective, if you are truly doing web scale stuff and you are not using AI and big data, in my opinion, you are going to implode under your own weight. It is not a question of if, it is when. If you are in a 'millions of users' sort of environment, that implosion is going to be catastrophic. We are on our way to this goal and have learned a lot along the way."
-
+
For Sling TV, moving to cloud native has been exactly what the company needed. "We have to be able to react to changes and hiccups in the matrix," says Linder. "It is the foundation for our ability to deliver a high-quality service for our customers. Building intelligent platforms, tools and clients in the field consuming those services has got to be part of all of this. In my eyes that is a big part of what cloud native is all about. It is taking these distributed, potentially unreliable entities and enabling a robust customer experience they expect."
diff --git a/content/ko/case-studies/squarespace/index.html b/content/ko/case-studies/squarespace/index.html
index 27340835f43bd..461e466d8c200 100644
--- a/content/ko/case-studies/squarespace/index.html
+++ b/content/ko/case-studies/squarespace/index.html
@@ -2,100 +2,70 @@
title: Squarespace Case Study
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/squarespace/banner1.jpg
+heading_title_logo: /images/squarespace_logo.png
+subheading: >
+ Squarespace: Gaining Productivity and Resilience with Kubernetes
+case_study_details:
+ - Company: Squarespace
+ - Location: New York, N.Y.
+ - Industry: Software as a Service, Website-Building Platform
---
-
-
CASE STUDY:
Squarespace: Gaining Productivity and Resilience with Kubernetes
-
+
Challenge
+
+
Moving from a monolith to microservices in 2014 "solved a problem on the development side, but it pushed that problem to the infrastructure team," says Kevin Lynch, Staff Engineer on the Site Reliability team at Squarespace. "The infrastructure deployment process on our 5,000 VM hosts was slowing everyone down."
+
+
Solution
+
+
The team experimented with container orchestration platforms, and found that Kubernetes "answered all the questions that we had," says Lynch. The company began running Kubernetes in its data centers in 2016.
-
+
Impact
-
- Company Squarespace Location New York, N.Y. Industry Software as a Service, Website-Building Platform
-
+
Since Squarespace moved to Kubernetes, in conjunction with modernizing its networking stack, deployment time has been reduced by almost 85%. Before, their VM deployment would take half an hour; now, says Lynch, "someone can generate a templated application, deploy it within five minutes, and have actual instances containerized, running in our staging environment at that point." Because of that, "productivity time is the big cost saver," he adds. "When we started the Kubernetes project, we had probably a dozen microservices. Today there are twice that in the pipeline being actively worked on." Resilience has also been improved with Kubernetes: "If a node goes down, it's rescheduled immediately and there's no performance impact."
-
-
-
-
-
Challenge
- Moving from a monolith to microservices in 2014 "solved a problem on the development side, but it pushed that problem to the infrastructure team," says Kevin Lynch, Staff Engineer on the Site Reliability team at Squarespace. "The infrastructure deployment process on our 5,000 VM hosts was slowing everyone down."
+{{< case-studies/quote author="Kevin Lynch, Staff Engineer on the Site Reliability team at Squarespace" >}}
+
-
Solution
-The team experimented with container orchestration platforms, and found that Kubernetes "answered all the questions that we had," says Lynch. The company began running Kubernetes in its data centers in 2016.
+"Once you prove that Kubernetes solves one problem, everyone immediately starts solving other problems without you even having to evangelize it."
+{{< /case-studies/quote >}}
-
+{{< case-studies/lead >}}
+Since it was started in a dorm room in 2003, Squarespace has made it simple for millions of people to create their own websites.
+{{< /case-studies/lead >}}
-
+
Behind the scenes, though, the company's monolithic Java application was making things not so simple for its developers to keep improving the platform. So in 2014, the company decided to "go down the microservices path," says Kevin Lynch, staff engineer on Squarespace's Site Reliability team. "But we were always deploying our applications in vCenter VMware VMs [in our own data centers]. Microservices solved a problem on the development side, but it pushed that problem to the Infrastructure team. The infrastructure deployment process on our 5,000 VM hosts was slowing everyone down."
-
Impact
-Since Squarespace moved to Kubernetes, in conjunction with modernizing its networking stack, deployment time has been reduced by almost 85%. Before, their VM deployment would take half an hour; now, says Lynch, "someone can generate a templated application, deploy it within five minutes, and have actual instances containerized, running in our staging environment at that point." Because of that, "productivity time is the big cost saver," he adds. "When we started the Kubernetes project, we had probably a dozen microservices. Today there are twice that in the pipeline being actively worked on." Resilience has also been improved with Kubernetes: "If a node goes down, it’s rescheduled immediately and there’s no performance impact."
-
-
-
-
-
-
-
-
-
"Once you prove that Kubernetes solves one problem, everyone immediately starts solving other problems without you even having to evangelize it."
- — Kevin Lynch, Staff Engineer on the Site Reliability team at Squarespace
-
-
-
-
-
Since it was started in a dorm room in 2003, Squarespace has made it simple for millions of people to create their own websites.
Behind the scenes, though, the company’s monolithic Java application was making things not so simple for its developers to keep improving the platform. So in 2014, the company decided to "go down the microservices path," says Kevin Lynch, staff engineer on Squarespace’s Site Reliability team. "But we were always deploying our applications in vCenter VMware VMs [in our own data centers]. Microservices solved a problem on the development side, but it pushed that problem to the Infrastructure team. The infrastructure deployment process on our 5,000 VM hosts was slowing everyone down."
- After experimenting with another container orchestration platform and "breaking it in very painful ways," Lynch says, the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had." Deploying it in the data center rather than the public cloud was their biggest challenge, and at the time, not a lot of other companies were doing that. "We had to figure out how to deploy this in our infrastructure for ourselves, and we had to integrate it with our other applications," says Lynch.
- At the same time, Squarespace’s Network Engineering team was modernizing its networking stack, switching from a traditional layer-two network to a layer-three spine-and-leaf network. "It mapped beautifully with what we wanted to do with Kubernetes," says Lynch. "It gives us the ability to have our servers communicate directly with the top-of-rack switches. We use Calico for CNI networking for Kubernetes, so we can announce all these individual Kubernetes pod IP addresses and have them integrate seamlessly with our other services that are still provisioned in the VMs."
-
-
-
-
-
- After experimenting with another container orchestration platform and "breaking it in very painful ways," Lynch says, the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had."
-
-
-
-
-
- Within a couple months, they had a stable cluster for their internal use, and began rolling out Kubernetes for production. They also added Zipkin and CNCF projects Prometheus and fluentd to their cloud native stack. "We switched to Kubernetes, a new world, and we revamped all our other tooling as well," says Lynch. "It allowed us to streamline our process, so we can now easily create an entire microservice project from templates, generate the code and deployment pipeline for that, generate the Docker file, and then immediately just ship a workable, deployable project to Kubernetes." Deployments across Dev/QA/Stage/Prod were also "simplified drastically," Lynch adds. "Now there is little configuration variation."
-
- And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment. "From end to end that probably took half an hour, and that’s not accounting for the fact that an infrastructure engineer would be responsible for doing that, so there’s some business delay in there as well."
-
- With faster deployments, "productivity time is the big cost saver," says Lynch. "We had a team that was implementing a new file storage service, and they just started integrating that with our storage back end without our involvement"—which wouldn’t have been possible before Kubernetes. He adds: "When we started the Kubernetes project, we had probably a dozen microservices. Today there are twice that in the pipeline being actively worked on."
-
-
-
-
-
-
- "We switched to Kubernetes, a new world....It allowed us to streamline our process, so we can now easily create an entire microservice project from templates," Lynch says. And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment.
-
-
-
-
-
- There’s also been a positive impact on the application’s resilience. "When we’re deploying VMs, we have to build tooling to ensure that a service is spread across racks appropriately and can withstand failure," he says. "Kubernetes just does it. If a node goes down, it’s rescheduled immediately and there’s no performance impact."
-
- Another big benefit is autoscaling. "It wasn’t really possible with the way we’ve been using VMware," says Lynch, "but now we can just add the appropriate autoscaling features via Kubernetes directly, and boom, it’s scaling up as demand increases. And it worked out of the box."
-
- For others starting out with Kubernetes, Lynch says his best advice is to "fail fast": "Once you’ve planned things out, just execute. Kubernetes has been really great for trying something out quickly and seeing if it works or not."
-
-
-
-
-
-
- "When we’re deploying VMs, we have to build tooling to ensure that a service is spread across racks appropriately and can withstand failure," he says. "Kubernetes just does it. If a node goes down, it’s rescheduled immediately and there’s no performance impact."
-
-
-
-
- Lynch and his team are planning to open source some of the tools they’ve developed to extend Kubernetes and use it as an API itself. The first tool injects dependent applications as containers in a pod. "When you ship an application, usually it comes along with a whole bunch of dependent applications that need to be shipped with that, for example, fluentd for logging," he explains. With this tool, the developer doesn’t need to worry about the configurations.
-
- Going forward, all new services at Squarespace are going into Kubernetes, and the end goal is to convert everything it can. About a quarter of existing services have been migrated. "Our monolithic application is going to be the last one, just because it’s so big and complex," says Lynch. "But now I’m seeing other services get moved over, like the file storage service. Someone just did it and it worked—painlessly. So I believe if we tackle it, it’s probably going to be a lot easier than we fear. Maybe I should just take my own advice and fail fast!"
-
-
-
-
+
After experimenting with another container orchestration platform and "breaking it in very painful ways," Lynch says, the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had." Deploying it in the data center rather than the public cloud was their biggest challenge, and at the time, not a lot of other companies were doing that. "We had to figure out how to deploy this in our infrastructure for ourselves, and we had to integrate it with our other applications," says Lynch.
+
+
At the same time, Squarespace's Network Engineering team was modernizing its networking stack, switching from a traditional layer-two network to a layer-three spine-and-leaf network. "It mapped beautifully with what we wanted to do with Kubernetes," says Lynch. "It gives us the ability to have our servers communicate directly with the top-of-rack switches. We use Calico for CNI networking for Kubernetes, so we can announce all these individual Kubernetes pod IP addresses and have them integrate seamlessly with our other services that are still provisioned in the VMs."
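For readers unfamiliar with that pattern, Calico's peering with top-of-rack switches is declared through BGPPeer resources. A minimal sketch, with hypothetical rack labels, switch address and AS number (not Squarespace's actual configuration):

```yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rack1-tor
spec:
  nodeSelector: rack == 'rack1'   # hypothetical node label marking rack membership
  peerIP: 10.0.1.1                # hypothetical top-of-rack switch address
  asNumber: 64512                 # hypothetical AS number for the fabric
```

With one such peer per rack, each node advertises its pod IPs directly to its rack's switch, which is what lets pods and the services still running in VMs reach each other without an overlay.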
+
+{{< case-studies/quote image="/images/case-studies/squarespace/banner3.jpg" >}}
+After experimenting with another container orchestration platform and "breaking it in very painful ways," Lynch says, the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had."
+{{< /case-studies/quote >}}
+
+
Within a couple of months, they had a stable cluster for their internal use, and began rolling out Kubernetes for production. They also added Zipkin and CNCF projects Prometheus and fluentd to their cloud native stack. "We switched to Kubernetes, a new world, and we revamped all our other tooling as well," says Lynch. "It allowed us to streamline our process, so we can now easily create an entire microservice project from templates, generate the code and deployment pipeline for that, generate the Docker file, and then immediately just ship a workable, deployable project to Kubernetes." Deployments across Dev/QA/Stage/Prod were also "simplified drastically," Lynch adds. "Now there is little configuration variation."
+
+
And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment. "From end to end that probably took half an hour, and that's not accounting for the fact that an infrastructure engineer would be responsible for doing that, so there's some business delay in there as well."
+
+
With faster deployments, "productivity time is the big cost saver," says Lynch. "We had a team that was implementing a new file storage service, and they just started integrating that with our storage back end without our involvement"—which wouldn't have been possible before Kubernetes. He adds: "When we started the Kubernetes project, we had probably a dozen microservices. Today there are twice that in the pipeline being actively worked on."
+
+{{< case-studies/quote image="/images/case-studies/squarespace/banner4.jpg" >}}
+"We switched to Kubernetes, a new world....It allowed us to streamline our process, so we can now easily create an entire microservice project from templates," Lynch says. And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment.
+{{< /case-studies/quote >}}
+
+
There's also been a positive impact on the application's resilience. "When we're deploying VMs, we have to build tooling to ensure that a service is spread across racks appropriately and can withstand failure," he says. "Kubernetes just does it. If a node goes down, it's rescheduled immediately and there's no performance impact."
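One way Kubernetes expresses that rack-aware spreading declaratively is a topology spread constraint. A minimal sketch, assuming nodes carry a hypothetical `rack` label and using made-up names (not Squarespace's manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service            # hypothetical service name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      topologySpreadConstraints:
        - maxSkew: 1               # rack counts may differ by at most one replica
          topologyKey: rack        # hypothetical node label identifying the rack
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: example-service
      containers:
        - name: app
          image: registry.example.com/example-service:1.0   # hypothetical image
```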
+
+
Another big benefit is autoscaling. "It wasn't really possible with the way we've been using VMware," says Lynch, "but now we can just add the appropriate autoscaling features via Kubernetes directly, and boom, it's scaling up as demand increases. And it worked out of the box."
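The out-of-the-box scaling Lynch describes is typically expressed as a HorizontalPodAutoscaler. A minimal sketch with hypothetical names and thresholds, using the `autoscaling/v2` API available in recent Kubernetes releases:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-service            # hypothetical, matching a Deployment of that name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```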
+
+
For others starting out with Kubernetes, Lynch says his best advice is to "fail fast": "Once you've planned things out, just execute. Kubernetes has been really great for trying something out quickly and seeing if it works or not."
+
+{{< case-studies/quote >}}
+"When we're deploying VMs, we have to build tooling to ensure that a service is spread across racks appropriately and can withstand failure," he says. "Kubernetes just does it. If a node goes down, it's rescheduled immediately and there's no performance impact."
+{{< /case-studies/quote >}}
+
+
Lynch and his team are planning to open source some of the tools they've developed to extend Kubernetes and use it as an API itself. The first tool injects dependent applications as containers in a pod. "When you ship an application, usually it comes along with a whole bunch of dependent applications that need to be shipped with that, for example, fluentd for logging," he explains. With this tool, the developer doesn't need to worry about the configurations.
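As a sketch of the pattern such a tool automates (hypothetical image names and paths, not the actual output of Squarespace's tool), injecting fluentd as a logging sidecar yields a pod spec along these lines:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: registry.example.com/example-app:1.0   # hypothetical application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app                   # the app writes its logs here
    - name: fluentd                                 # injected dependent container
      image: fluent/fluentd:v1.16-1
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true                            # fluentd tails the shared log directory
  volumes:
    - name: logs
      emptyDir: {}                                  # shared scratch volume for log handoff
```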
+
+
Going forward, all new services at Squarespace are going into Kubernetes, and the end goal is to convert everything it can. About a quarter of existing services have been migrated. "Our monolithic application is going to be the last one, just because it's so big and complex," says Lynch. "But now I'm seeing other services get moved over, like the file storage service. Someone just did it and it worked—painlessly. So I believe if we tackle it, it's probably going to be a lot easier than we fear. Maybe I should just take my own advice and fail fast!"
diff --git a/content/ko/case-studies/wikimedia/index.html b/content/ko/case-studies/wikimedia/index.html
index abc74e3ee33fa..b10002af838a3 100644
--- a/content/ko/case-studies/wikimedia/index.html
+++ b/content/ko/case-studies/wikimedia/index.html
@@ -1,96 +1,66 @@
---
title: Wikimedia Case Study
-
-class: gridPage
+case_study_styles: true
cid: caseStudies
+
+new_case_study_styles: true
+heading_title_text: Wikimedia
+use_gradient_overlay: true
+subheading: >
+ Using Kubernetes to Build Tools to Improve the World's Wikis
+case_study_details:
+ - Company: Wikimedia
+ - Location: San Francisco, CA
---
-
-
Wikimedia Case Study
-
-
-
-
-
-
Using Kubernetes to Build Tools to Improve the World's Wikis
-
- The non-profit Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia. To help users maintain and use wikis, it runs Wikimedia Tool Labs, a hosting environment for community developers working on tools and bots to help editors and other volunteers do their work, including reducing vandalism. The community around Wikimedia Tool Labs began forming nearly 10 years ago.
-
-
-
-
- "Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it's grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It's like a big ball of mud — you really can't see through it. With Kubernetes, we're simplifying the environment and making it easier for developers to build the tools that make wikis run better."
-
-
— Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs
-
-
-
-
-
-
-
-
-
-
Challenges:
-
-
Simplify a complex, difficult-to-manage infrastructure
-
Allow developers to continue writing tools and bots using existing techniques
-
-
-
-
Why Kubernetes:
-
-
Wikimedia Tool Labs chose Kubernetes because it can mimic existing workflows, while reducing complexity
-
-
-
-
Approach:
-
-
Migrate old systems and a complex infrastructure to Kubernetes
-
-
-
-
Results:
-
-
20 percent of web tools that account for more than 40 percent of web traffic now run on Kubernetes
-
A 25-node cluster that keeps up with each new Kubernetes release
-
Thousands of lines of old code have been deleted, thanks to Kubernetes
-
-
-
-
-
-
-
-
-
-
Using Kubernetes to provide tools for maintaining wikis
-
- Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It's incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."
-
-
- To solve the problem, Wikimedia Tool Labs migrated parts of its infrastructure to Kubernetes, in preparation for eventually moving its entire system. Yuvi said Kubernetes greatly simplifies maintenance. The goal is to allow developers creating bots and other tools to use whatever development methods they want, but make it easier for the Wikimedia Tool Labs to maintain the required infrastructure for hosting and sharing them.
-
-
- "With Kubernetes, I've been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users' code also runs in a more stable way than previously," says Yuvi.
-
-
-
-
-
-
-
-
-
Simplifying infrastructure and keeping wikis running better
-
- Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don't have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues.
-
-
- In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs' web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.
-
-
- "Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive," says Yuvi.
-
-
-
-
+
The non-profit Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia. To help users maintain and use wikis, it runs Wikimedia Tool Labs, a hosting environment for community developers working on tools and bots to help editors and other volunteers do their work, including reducing vandalism. The community around Wikimedia Tool Labs began forming nearly 10 years ago.
+
+{{< case-studies/quote author="Yuvi Panda, operations engineer at Wikimedia Foundation and Wikimedia Tool Labs">}}
+
+
+
+"Wikimedia Tool Labs is vital for making sure wikis all around the world work as well as they possibly can. Because it's grown organically for almost 10 years, it has become an extremely challenging environment and difficult to maintain. It's like a big ball of mud — you really can't see through it. With Kubernetes, we're simplifying the environment and making it easier for developers to build the tools that make wikis run better."
+{{< /case-studies/quote >}}
+
+
Challenges
+
+
+
Simplify a complex, difficult-to-manage infrastructure
+
Allow developers to continue writing tools and bots using existing techniques
+
+
+
Why Kubernetes
+
+
+
Wikimedia Tool Labs chose Kubernetes because it can mimic existing workflows, while reducing complexity
+
+
+
Approach
+
+
+
Migrate old systems and a complex infrastructure to Kubernetes
+
+
+
Results
+
+
+
20 percent of web tools that account for more than 40 percent of web traffic now run on Kubernetes
+
A 25-node cluster that keeps up with each new Kubernetes release
+
Thousands of lines of old code have been deleted, thanks to Kubernetes
+
+
+
Using Kubernetes to provide tools for maintaining wikis
+
+
Wikimedia Tool Labs is run by a staff of four-and-a-half paid employees and two volunteers. The infrastructure didn't make it easy or intuitive for developers to build bots and other tools to make wikis work more easily. Yuvi says, "It's incredibly chaotic. We have lots of Perl and Bash duct tape on top of it. Everything is super fragile."
+
+
To solve the problem, Wikimedia Tool Labs migrated parts of its infrastructure to Kubernetes, in preparation for eventually moving its entire system. Yuvi said Kubernetes greatly simplifies maintenance. The goal is to allow developers creating bots and other tools to use whatever development methods they want, but make it easier for the Wikimedia Tool Labs to maintain the required infrastructure for hosting and sharing them.
+
+
"With Kubernetes, I've been able to remove a lot of our custom-made code, which makes everything easier to maintain. Our users' code also runs in a more stable way than previously," says Yuvi.
+
+
Simplifying infrastructure and keeping wikis running better
+
+
Wikimedia Tool Labs has seen great success with the initial Kubernetes deployment. Old code is being simplified and eliminated, contributing developers don't have to change the way they write their tools and bots, and those tools and bots run in a more stable fashion than they have in the past. The paid staff and volunteers are able to better keep up with fixing issues.
+
+
In the future, with a more complete migration to Kubernetes, Wikimedia Tool Labs expects to make it even easier to host and maintain the bots and tools that help run wikis across the world. The tool labs already host approximately 1,300 tools and bots from 800 volunteers, with many more being submitted every day. Twenty percent of the tool labs' web tools that account for more than 60 percent of web traffic now run on Kubernetes. The tool labs has a 25-node cluster that keeps up with each new Kubernetes release. Many existing web tools are migrating to Kubernetes.
+
+
"Our goal is to make sure that people all over the world can share knowledge as easily as possible. Kubernetes helps with that, by making it easier for wikis everywhere to have the tools they need to thrive," says Yuvi.
diff --git a/content/ko/case-studies/wink/index.html b/content/ko/case-studies/wink/index.html
index 3f47f8c779d42..1f8ea4c89da25 100644
--- a/content/ko/case-studies/wink/index.html
+++ b/content/ko/case-studies/wink/index.html
@@ -1,109 +1,87 @@
---
title: Wink Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_wink.css
+
+new_case_study_styles: true
+heading_background: /images/case-studies/wink/banner1.jpg
+heading_title_logo: /images/wink_logo.png
+subheading: >
+ Cloud-Native Infrastructure Keeps Your Smart Home Connected
+case_study_details:
+ - Company: Wink
+ - Location: New York, N.Y.
+ - Industry: Internet of Things Platform
---
-
-
CASE STUDY:
-
Cloud-Native Infrastructure Keeps Your Smart Home Connected
-
-
-
-
-
- Company Wink Location New York, N.Y. Industry Internet of Things Platform
-
-
-
-
-
-
-
-
-
Challenge
- Building a low-latency, highly reliable infrastructure to serve communications between millions of connected smart-home devices and the company’s consumer hubs and mobile app, with an emphasis on horizontal scalability, the ability to encrypt everything quickly and connections that could be easily brought back up if anything went wrong.
-
-
Solution
- Across-the-board use of a Kubernetes-Docker-CoreOS Container Linux stack.
-
-
-
-
Impact
- "Two of the biggest American retailers [Home Depot and Walmart] are carrying and promoting the brand and the hardware,” Wink Head of Engineering Kit Klein says proudly – though he adds that "it really comes with a lot of pressure. It’s not a retail situation where you have a lot of tech enthusiasts. These are everyday people who want something that works and have no tolerance for technical excuses.” And that’s further testament to how much faith Klein has in the infrastructure that the Wink team has built. With 80 percent of Wink’s workload running on a unified stack of Kubernetes-Docker-CoreOS, the company has put itself in a position to continually innovate and improve its products and services. Committing to this technology, says Klein, "makes building on top of the infrastructure relatively easy.”
-
-
-
-
-
-
-
-
- "It’s not proprietary, it’s totally open, it’s really portable. You can run all the workloads across different cloud providers. You can easily run a hybrid AWS or even bring in your own data center. That’s the benefit of having everything unified on one open source Kubernetes-Docker-CoreOS Container Linux stack. There are massive security benefits if you only have one Linux distro/machine image to validate. The benefits are enormous because you save money, and you save time.”
- KIT KLEIN, HEAD OF ENGINEERING, WINK
-
-
-
-
-
-
-
How many people does it take to turn on a light bulb?
-
- Kit Klein whips out his phone to demonstrate. With a few swipes, the head of engineering at Wink pulls up the smart-home app created by the New York City-based company and taps the light button. "Honestly when you’re holding the phone and you’re hitting the light,” he says, "by the time you feel the pressure of your finger on the screen, it’s on. It takes as long as the signal to travel to your brain.”
- Sure, it takes just one finger and less than 200 milliseconds to turn on the light – or lock a door or change a thermostat. But what allows Wink to help consumers manage their connected smart-home products with such speed and ease is a sophisticated, cloud native infrastructure that Klein and his team built and continue to develop using a unified stack of CoreOS, the open-source operating system designed for clustered deployments, and Kubernetes, an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. "When you have a big, complex network of interdependent microservices that need to be able to discover each other, and need to be horizontally scalable and tolerant to failure, that’s what this is really optimized for,” says Klein. "A lot of people end up relying on proprietary services [offered by some big cloud providers] to do some of this stuff, but what you get by adopting CoreOS/Kubernetes is portability, to not be locked in to anyone. You can really make your own fate.”
- Indeed, Wink did. The company’s mission statement is to make the connected home accessible – that is, user-friendly for non-technical owners, affordable and perhaps most importantly, reliable. "If you can’t trust that when you hit the switch, you know a light is going to go on, or if you’re remote and you’re checking on your house and that information isn’t accurate, then the convenience of the system is lost,” says Klein. "So that’s where the infrastructure comes in.”
- Wink was incubated within Quirky, a company that developed crowd-sourced inventions. The Wink app was first introduced in 2013, and at the time, it controlled only a few consumer products such as the PivotPower Strip that Quirky produced in collaboration with GE. As smart-home products proliferated, Wink was launched in 2014 in Home Depot stores nationwide. Its first project: a hub that could integrate with smart products from about a dozen brands like Honeywell and Chamberlain. The biggest challenge would be to build the infrastructure to serve all those communications between the hub and the products, with a focus on maximizing reliability and minimizing latency.
- "When we originally started out, we were moving very fast trying to get the first product to market, the minimum viable product,” says Klein. "Lots of times you go down a path and end up having to backtrack and try different things. But in this particular case, we did a lot of the work up front, which led to us making a really sound decision to deploy it on CoreOS Container Linux. And that was very early in the life of it.”
-
-
-
-
-
-
- "...what you get by adopting CoreOS/Kubernetes is portability, to not be locked in to anyone. You can really make your own fate.”
-
-
-
-
-
- Concern number one: Wink’s products need to connect to consumer devices in people’s homes, behind a firewall. "You don’t have an end point like a URL, and you don’t even know what ports are open behind that firewall,” Klein explains. "So you essentially need to have this thing wake up and talk to your system and then open real-time, bidirectional communication between the cloud and the device. And it’s really, really important that it’s persistent because you want to decrease as much as possible the overhead of sending a message – you never know when someone is going to turn on the lights.”
- With the earliest version of the Wink Hub, when you decided to turn your lights on or off, the request would be sent to the cloud and then executed. Subsequent updates to Wink’s software enabled local control, cutting latency down to about 10 milliseconds for many devices. But with the need for cloud-enabled integrations of an ever-growing ecosystem of smart home products, low-latency internet connectivity is still a critical consideration.
-
-
"You essentially need to have this thing wake up and talk to your system and then open real-time, bidirectional communication between the cloud and the device. And it’s really, really important that it’s persistent...you never know when someone is going to turn on the lights.”
- In addition, Wink had other requirements: horizontal scalability, the ability to encrypt everything quickly, connections that could be easily brought back up if something went wrong. "Looking at this whole structure we started, we decided to make a secure socket-based service,” says Klein. "We’ve always used, I would say, some sort of clustering technology to deploy our services and so the decision we came to was, this thing is going to be containerized, running on Docker.”
- At the time – just over two years ago – Docker wasn’t yet widely used, but as Klein points out, "it was certainly understood by the people who were on the frontier of technology. We started looking at potential technologies that existed. One of the limiting factors was that we needed to deploy multi-port non-http/https services. It wasn’t really appropriate for some of the early cluster technology. We liked the project a lot and we ended up using it on other stuff for a while, but initially it was too targeted toward http workloads.”
- Once Wink’s backend engineering team decided on a Dockerized workload, they had to make decisions about the OS and the container orchestration platform. "Obviously you can’t just start the containers and hope everything goes well,” Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure.”
-
-
-
-
-
-
- "Obviously you can’t just start the containers and hope everything goes well,” Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure.”
-
-
-
-
-
- Wink considered building directly on a general purpose Linux distro like Ubuntu (which would have required installing tools to run a containerized workload) and cluster management systems like Mesos (which was targeted toward enterprises with larger teams/workloads), but ultimately set their sights on CoreOS Container Linux. "A container-optimized Linux distribution system was exactly what we needed,” he says. "We didn’t have to futz around with trying to take something like a Linux distro and install everything. It’s got a built-in container orchestration system, which is Fleet, and an easy-to-use API. It’s not as feature-rich as some of the heavier solutions, but we realized that, at that moment, it was exactly what we needed.”
- Wink’s hub (along with a revamped app) was introduced in July 2014 with a short-term deployment, and within the first month, they had moved the service to the Dockerized CoreOS deployment. Since then, they’ve moved almost every other piece of their infrastructure – from third-party cloud-to-cloud integrations to their customer service and payment portals – onto CoreOS Container Linux clusters.
- Using this setup did require some customization. "Fleet is really nice as a basic container orchestration system, but it doesn’t take care of routing, sharing configurations, secrets, et cetera, among instances of a service,” Klein says. "All of those layers of functionality can be implemented, of course, but if you don’t want to spend a lot of time writing unit files manually – which of course nobody does – you need to create a tool to automate some of that, which we did.”
- Wink quickly embraced the Kubernetes container cluster manager when it was launched in 2015 and integrated with CoreOS core technology, and as promised, it ended up providing the features Wink wanted and had planned to build. "If not for Kubernetes, we likely would have taken the logic and library we implemented for the automation tool that we created, and would have used it in a higher level abstraction and tool that could be used by non-DevOps engineers from the command line to create and manage clusters,” Klein says. "But Kubernetes made that totally unnecessary – and is written and maintained by people with a lot more experience in cluster management than us, so all the better.” Now, an estimated 80 percent of Wink’s workload is run on Kubernetes on top of CoreOS Container Linux.
-
-
-
-
-
-
- "Stay close to the development. Understand why decisions are being made. If you understand the intent behind the project, from the technological intent to a certain philosophical intent, then it helps you understand how to build your system in harmony with those systems as opposed to trying to work against it.”
-
-
-
-
-
- Wink’s reasons for going all in are clear: "It’s not proprietary, it’s totally open, it’s really portable,” Klein says. "You can run all the workloads across different cloud providers. You can easily run a hybrid AWS or even bring in your own data center. That’s the benefit of having everything unified on one Kubernetes-Docker-CoreOS Container Linux stack. There are massive security benefits if you only have one Linux distro to try to validate. The benefits are enormous because you save money, you save time.”
- Klein concedes that there are tradeoffs in every technology decision. "Cutting-edge technology is going to be scary for some people,” he says. "In order to take advantage of this, you really have to keep up with the technology. You can’t treat it like it’s a black box. Stay close to the development. Understand why decisions are being made. If you understand the intent behind the project, from the technological intent to a certain philosophical intent, then it helps you understand how to build your system in harmony with those systems as opposed to trying to work against it.”
- Wink, which was acquired by Flex in 2015, now controls 2.3 million connected devices in households all over the country. What’s next for the company? A new version of the hub - Wink Hub 2 - hit shelves last November – and is being offered for the first time at Walmart stores in addition to Home Depot. "Two of the biggest American retailers are carrying and promoting the brand and the hardware,” Klein says proudly – though he adds that "it really comes with a lot of pressure. It’s not a retail situation where you have a lot of tech enthusiasts. These are everyday people who want something that works and have no tolerance for technical excuses.” And that’s further testament to how much faith Klein has in the infrastructure that the Wink team has have built.
- Wink’s engineering team has grown exponentially since its early days, and behind the scenes, Klein is most excited about the machine learning Wink is using. "We built [a system of] containerized small sections of the data pipeline that feed each other and can have multiple outputs,” he says. "It’s like data pipelines as microservices.” Again, Klein points to having a unified stack running on CoreOS Container Linux and Kubernetes as the primary driver for the innovations to come. "You’re not reinventing the wheel every time,” he says. "You can just get down to work.”
-
+
Challenge
+
+
Building a low-latency, highly reliable infrastructure to serve communications between millions of connected smart-home devices and the company's consumer hubs and mobile app, with an emphasis on horizontal scalability, the ability to encrypt everything quickly and connections that could be easily brought back up if anything went wrong.
+
+
Solution
+
+
Across-the-board use of a Kubernetes-Docker-CoreOS Container Linux stack.
+
+
Impact
+
+
"Two of the biggest American retailers [Home Depot and Walmart] are carrying and promoting the brand and the hardware," Wink Head of Engineering Kit Klein says proudly – though he adds that "it really comes with a lot of pressure. It's not a retail situation where you have a lot of tech enthusiasts. These are everyday people who want something that works and have no tolerance for technical excuses." And that's further testament to how much faith Klein has in the infrastructure that the Wink team has built. With 80 percent of Wink's workload running on a unified stack of Kubernetes-Docker-CoreOS, the company has put itself in a position to continually innovate and improve its products and services. Committing to this technology, says Klein, "makes building on top of the infrastructure relatively easy."
+
+{{< case-studies/quote author="KIT KLEIN, HEAD OF ENGINEERING, WINK" >}}
+"It's not proprietary, it's totally open, it's really portable. You can run all the workloads across different cloud providers. You can easily run a hybrid AWS or even bring in your own data center. That's the benefit of having everything unified on one open source Kubernetes-Docker-CoreOS Container Linux stack. There are massive security benefits if you only have one Linux distro/machine image to validate. The benefits are enormous because you save money, and you save time."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+How many people does it take to turn on a light bulb?
+{{< /case-studies/lead >}}
+
+
Kit Klein whips out his phone to demonstrate. With a few swipes, the head of engineering at Wink pulls up the smart-home app created by the New York City-based company and taps the light button. "Honestly when you're holding the phone and you're hitting the light," he says, "by the time you feel the pressure of your finger on the screen, it's on. It takes as long as the signal to travel to your brain."
+
+
Sure, it takes just one finger and less than 200 milliseconds to turn on the light – or lock a door or change a thermostat. But what allows Wink to help consumers manage their connected smart-home products with such speed and ease is a sophisticated, cloud native infrastructure that Klein and his team built and continue to develop using a unified stack of CoreOS, the open-source operating system designed for clustered deployments, and Kubernetes, an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. "When you have a big, complex network of interdependent microservices that need to be able to discover each other, and need to be horizontally scalable and tolerant to failure, that's what this is really optimized for," says Klein. "A lot of people end up relying on proprietary services [offered by some big cloud providers] to do some of this stuff, but what you get by adopting CoreOS/Kubernetes is portability, to not be locked in to anyone. You can really make your own fate."
+
+
Indeed, Wink did. The company's mission statement is to make the connected home accessible – that is, user-friendly for non-technical owners, affordable and perhaps most importantly, reliable. "If you can't trust that when you hit the switch, you know a light is going to go on, or if you're remote and you're checking on your house and that information isn't accurate, then the convenience of the system is lost," says Klein. "So that's where the infrastructure comes in."
+
+
Wink was incubated within Quirky, a company that developed crowd-sourced inventions. The Wink app was first introduced in 2013, and at the time, it controlled only a few consumer products such as the PivotPower Strip that Quirky produced in collaboration with GE. As smart-home products proliferated, Wink was launched in 2014 in Home Depot stores nationwide. Its first project: a hub that could integrate with smart products from about a dozen brands like Honeywell and Chamberlain. The biggest challenge would be to build the infrastructure to serve all those communications between the hub and the products, with a focus on maximizing reliability and minimizing latency.
+
+
"When we originally started out, we were moving very fast trying to get the first product to market, the minimum viable product," says Klein. "Lots of times you go down a path and end up having to backtrack and try different things. But in this particular case, we did a lot of the work up front, which led to us making a really sound decision to deploy it on CoreOS Container Linux. And that was very early in the life of it."
+
+{{< case-studies/quote image="/images/case-studies/wink/banner3.jpg">}}
+"...what you get by adopting CoreOS/Kubernetes is portability, to not be locked in to anyone. You can really make your own fate."
+{{< /case-studies/quote >}}
+
+
Concern number one: Wink's products need to connect to consumer devices in people's homes, behind a firewall. "You don't have an end point like a URL, and you don't even know what ports are open behind that firewall," Klein explains. "So you essentially need to have this thing wake up and talk to your system and then open real-time, bidirectional communication between the cloud and the device. And it's really, really important that it's persistent because you want to decrease as much as possible the overhead of sending a message – you never know when someone is going to turn on the lights."
+
+
With the earliest version of the Wink Hub, when you decided to turn your lights on or off, the request would be sent to the cloud and then executed. Subsequent updates to Wink's software enabled local control, cutting latency down to about 10 milliseconds for many devices. But with the need for cloud-enabled integrations of an ever-growing ecosystem of smart home products, low-latency internet connectivity is still a critical consideration.
+
+{{< case-studies/lead >}}
+"You essentially need to have this thing wake up and talk to your system and then open real-time, bidirectional communication between the cloud and the device. And it's really, really important that it's persistent...you never know when someone is going to turn on the lights."
+{{< /case-studies/lead >}}
+
+
In addition, Wink had other requirements: horizontal scalability, the ability to encrypt everything quickly, connections that could be easily brought back up if something went wrong. "Looking at this whole structure we started, we decided to make a secure socket-based service," says Klein. "We've always used, I would say, some sort of clustering technology to deploy our services and so the decision we came to was, this thing is going to be containerized, running on Docker."
+
+
At the time – just over two years ago – Docker wasn't yet widely used, but as Klein points out, "it was certainly understood by the people who were on the frontier of technology. We started looking at potential technologies that existed. One of the limiting factors was that we needed to deploy multi-port non-http/https services. It wasn't really appropriate for some of the early cluster technology. We liked the project a lot and we ended up using it on other stuff for a while, but initially it was too targeted toward http workloads."
+
+
Once Wink's backend engineering team decided on a Dockerized workload, they had to make decisions about the OS and the container orchestration platform. "Obviously you can't just start the containers and hope everything goes well," Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure."
+
+{{< case-studies/quote image="/images/case-studies/wink/banner4.jpg" >}}
+"Obviously you can't just start the containers and hope everything goes well," Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure."
+{{< /case-studies/quote >}}
+
+
Wink considered building directly on a general-purpose Linux distro like Ubuntu (which would have required installing tools to run a containerized workload) and cluster management systems like Mesos (which was targeted toward enterprises with larger teams/workloads), but ultimately set their sights on CoreOS Container Linux. "A container-optimized Linux distribution system was exactly what we needed," he says. "We didn't have to futz around with trying to take something like a Linux distro and install everything. It's got a built-in container orchestration system, which is Fleet, and an easy-to-use API. It's not as feature-rich as some of the heavier solutions, but we realized that, at that moment, it was exactly what we needed."
+
+
Wink's hub (along with a revamped app) was introduced in July 2014 with a short-term deployment, and within the first month, they had moved the service to the Dockerized CoreOS deployment. Since then, they've moved almost every other piece of their infrastructure – from third-party cloud-to-cloud integrations to their customer service and payment portals – onto CoreOS Container Linux clusters.
+
+
Using this setup did require some customization. "Fleet is really nice as a basic container orchestration system, but it doesn't take care of routing, sharing configurations, secrets, et cetera, among instances of a service," Klein says. "All of those layers of functionality can be implemented, of course, but if you don't want to spend a lot of time writing unit files manually – which of course nobody does – you need to create a tool to automate some of that, which we did."
+
+
Wink quickly embraced the Kubernetes container cluster manager when it was launched in 2015 and integrated with CoreOS core technology, and as promised, it ended up providing the features Wink wanted and had planned to build. "If not for Kubernetes, we likely would have taken the logic and library we implemented for the automation tool that we created, and would have used it in a higher level abstraction and tool that could be used by non-DevOps engineers from the command line to create and manage clusters," Klein says. "But Kubernetes made that totally unnecessary – and is written and maintained by people with a lot more experience in cluster management than us, so all the better." Now, an estimated 80 percent of Wink's workload is run on Kubernetes on top of CoreOS Container Linux.
+
+{{< case-studies/quote >}}
+"Stay close to the development. Understand why decisions are being made. If you understand the intent behind the project, from the technological intent to a certain philosophical intent, then it helps you understand how to build your system in harmony with those systems as opposed to trying to work against it."
+{{< /case-studies/quote >}}
+
+
Wink's reasons for going all in are clear: "It's not proprietary, it's totally open, it's really portable," Klein says. "You can run all the workloads across different cloud providers. You can easily run a hybrid AWS or even bring in your own data center. That's the benefit of having everything unified on one Kubernetes-Docker-CoreOS Container Linux stack. There are massive security benefits if you only have one Linux distro to try to validate. The benefits are enormous because you save money, you save time."
+
+
Klein concedes that there are tradeoffs in every technology decision. "Cutting-edge technology is going to be scary for some people," he says. "In order to take advantage of this, you really have to keep up with the technology. You can't treat it like it's a black box. Stay close to the development. Understand why decisions are being made. If you understand the intent behind the project, from the technological intent to a certain philosophical intent, then it helps you understand how to build your system in harmony with those systems as opposed to trying to work against it."
+
+
Wink, which was acquired by Flex in 2015, now controls 2.3 million connected devices in households all over the country. What's next for the company? A new version of the hub – Wink Hub 2 – hit shelves last November and is being offered for the first time at Walmart stores in addition to Home Depot. "Two of the biggest American retailers are carrying and promoting the brand and the hardware," Klein says proudly – though he adds that "it really comes with a lot of pressure. It's not a retail situation where you have a lot of tech enthusiasts. These are everyday people who want something that works and have no tolerance for technical excuses." And that's further testament to how much faith Klein has in the infrastructure that the Wink team has built.
+
+
Wink's engineering team has grown exponentially since its early days, and behind the scenes, Klein is most excited about the machine learning Wink is using. "We built [a system of] containerized small sections of the data pipeline that feed each other and can have multiple outputs," he says. "It's like data pipelines as microservices." Again, Klein points to having a unified stack running on CoreOS Container Linux and Kubernetes as the primary driver for the innovations to come. "You're not reinventing the wheel every time," he says. "You can just get down to work."
diff --git a/content/ko/case-studies/workiva/index.html b/content/ko/case-studies/workiva/index.html
index 1c09503bfb351..d8945bfd3431d 100644
--- a/content/ko/case-studies/workiva/index.html
+++ b/content/ko/case-studies/workiva/index.html
@@ -3,111 +3,91 @@
linkTitle: Workiva
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
draft: true
featured: true
weight: 20
quote: >
With OpenTracing, my team was able to look at a trace and make optimization suggestions to another team without ever looking at their code.
+
+new_case_study_styles: true
+heading_background: /images/case-studies/workiva/banner1.jpg
+heading_title_logo: /images/workiva_logo.png
+subheading: >
+ Using OpenTracing to Help Pinpoint the Bottlenecks
+case_study_details:
+ - Company: Workiva
+ - Location: Ames, Iowa
+ - Industry: Enterprise Software
---
-
-
CASE STUDY:
Using OpenTracing to Help Pinpoint the Bottlenecks
+
Challenge
-
+
Workiva offers a cloud-based platform for managing and reporting business data. This SaaS product, Wdesk, is used by more than 70 percent of the Fortune 500 companies. As the company made the shift from a monolith to a more distributed, microservice-based system, "We had a number of people working on this, all on different teams, so we needed to identify what the issues were and where the bottlenecks were," says Senior Software Architect MacLeod Broad. With back-end code running on Google App Engine, Google Compute Engine, as well as Amazon Web Services, Workiva needed a tracing system that was agnostic of platform. While preparing one of the company's first products utilizing AWS, which involved a "sync and link" feature that linked data from spreadsheets built in the new application with documents created in the old application on Workiva's existing system, Broad's team found an ideal use case for tracing: There were circular dependencies, and optimizations often turned out to be micro-optimizations that didn't impact overall speed.
-
+
Solution
-
- Company Workiva Location Ames, Iowa Industry Enterprise Software
-
+
Broad's team introduced the platform-agnostic distributed tracing system OpenTracing to help them pinpoint the bottlenecks.
-
-
-
-
-
Challenge
- Workiva offers a cloud-based platform for managing and reporting business data. This SaaS product, Wdesk, is used by more than 70 percent of the Fortune 500 companies. As the company made the shift from a monolith to a more distributed, microservice-based system, "We had a number of people working on this, all on different teams, so we needed to identify what the issues were and where the bottlenecks were," says Senior Software Architect MacLeod Broad. With back-end code running on Google App Engine, Google Compute Engine, as well as Amazon Web Services, Workiva needed a tracing system that was agnostic of platform. While preparing one of the company’s first products utilizing AWS, which involved a "sync and link" feature that linked data from spreadsheets built in the new application with documents created in the old application on Workiva’s existing system, Broad’s team found an ideal use case for tracing: There were circular dependencies, and optimizations often turned out to be micro-optimizations that didn’t impact overall speed.
+
Impact
-
+
Now used throughout the company, OpenTracing produced immediate results. Software Engineer Michael Davis reports: "Tracing has given us immediate, actionable insight into how to improve our service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
-
+{{< case-studies/quote author="MacLeod Broad, Senior Software Architect at Workiva" >}}
+"With OpenTracing, my team was able to look at a trace and make optimization suggestions to another team without ever looking at their code."
+{{< /case-studies/quote >}}
-
-
Solution
- Broad’s team introduced the platform-agnostic distributed tracing system OpenTracing to help them pinpoint the bottlenecks.
-
-
Impact
- Now used throughout the company, OpenTracing produced immediate results. Software Engineer Michael Davis reports: "Tracing has given us immediate, actionable insight into how to improve our service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
-
-
-
-
-
-
-
-"With OpenTracing, my team was able to look at a trace and make optimization suggestions to another team without ever looking at their code." — MacLeod Broad, Senior Software Architect at Workiva
-
-
-
-
-
-
Last fall, MacLeod Broad’s platform team at Workiva was prepping one of the company’s first products utilizing Amazon Web Services when they ran into a roadblock.
- Early on, Workiva’s backend had run mostly on Google App Engine. But things changed along the way as Workiva’s SaaS offering, Wdesk, a cloud-based platform for managing and reporting business data, grew its customer base to more than 70 percent of the Fortune 500 companies. "As customer needs grew and the product offering expanded, we started to leverage a wider offering of services such as Amazon Web Services as well as other Google Cloud Platform services, creating a multi-vendor environment."
-With this new product, there was a "sync and link" feature by which data "went through a whole host of services starting with the new spreadsheet system [Amazon Aurora] into what we called our linking system, and then pushed through http to our existing system, and then a number of calculations would go on, and the results would be transmitted back into the new system," says Broad. "We were trying to optimize that for speed. We thought we had made this great optimization and then it would turn out to be a micro optimization, which didn’t really affect the overall speed of things."
-The challenges faced by Broad’s team may sound familiar to other companies that have also made the shift from monoliths to more distributed, microservice-based systems. "We had a number of people working on this, all on different teams, so it was difficult to get our head around what the issues were and where the bottlenecks were," says Broad.
- "Each service team was going through different iterations of their architecture and it was very hard to follow what was actually going on in each teams’ system," he adds. "We had circular dependencies where we’d have three or four different service teams unsure of where the issues really were, requiring a lot of back and forth communication. So we wasted a lot of time saying, ‘What part of this is slow? Which part of this is sometimes slow depending on the use case? Which part is degrading over time? Which part of this process is asynchronous so it doesn’t really matter if it’s long-running or not? What are we doing that’s redundant, and which part of this is buggy?’"
-
-
-
-
-
-
- "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level. Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on." — MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA
-
-
-
-
-
-Simply put, it was an ideal use case for tracing. "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level," says Broad. "Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on."
-With Workiva’s back-end code running on Google Compute Engine as well as App Engine and AWS, Broad knew that he needed a tracing system that was platform agnostic. "We were looking at different tracing solutions," he says, "and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use."
-Once they introduced OpenTracing into this first use case, Broad says, "The trace made it super obvious where the bottlenecks were." Even though everyone had assumed it was Workiva’s existing code that was slowing things down, that wasn’t exactly the case. "It looked like the existing code was slow only because it was reaching out to our next-generation services, and they were taking a very long time to service all those requests," says Broad. "On the waterfall graph you can see the exact same work being done on every request when it was calling back in. So every service request would look the exact same for every response being paged out. And then it was just a no-brainer of, ‘Why is it doing all this work again?’"
-Using the insight OpenTracing gave them, "My team was able to look at a trace and make optimization suggestions to another team without ever looking at their code," says Broad. "The way we named our traces gave us insight whether it’s doing a SQL call or it’s making an RPC. And so it was really easy to say, ‘OK, we know that it’s going to page through all these requests. Do the work once and stuff it in cache.’ And we were done basically. All those calls became sub-second calls immediately."
-
-
-
-
-
-
-
-"We were looking at different tracing solutions and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use." — MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA
-
-
-
-
-
- After the success of the first use case, everyone involved in the trial went back and fully instrumented their products. Tracing was added to a few more use cases. "We wanted to get through the initial implementation pains early without bringing the whole department along for the ride," says Broad. "Now, a lot of teams add it when they’re starting up a new service. We’re really pushing adoption now more than we were before."
-Some teams were won over quickly. "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service," says Software Engineer Michael Davis. "Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
-Most of Workiva’s major products are now traced using OpenTracing, with data pushed into Google StackDriver. Even the products that aren’t fully traced have some components and libraries that are.
-Broad points out that because some of the engineers were working on App Engine and already had experience with the platform’s Appstats library for profiling performance, it didn’t take much to get them used to using OpenTracing. But others were a little more reluctant. "The biggest hindrance to adoption I think has been the concern about how much latency is introducing tracing [and StackDriver] going to cost," he says. "People are also very concerned about adding middleware to whatever they’re working on. Questions about passing the context around and how that’s done were common. A lot of our Go developers were fine with it, because they were already doing that in one form or another. Our Java developers were not super keen on doing that because they’d used other systems that didn’t require that."
-But the benefits clearly outweighed the concerns, and today, Workiva’s official policy is to use tracing."
-In fact, Broad believes that tracing naturally fits in with Workiva’s existing logging and metrics systems. "This was the way we presented it internally, and also the way we designed our use," he says. "Our traces are logged in the exact same mechanism as our app metric and logging data, and they get pushed the exact same way. So we treat all that data exactly the same when it’s being created and when it’s being recorded. We have one internal library that we use for logging, telemetry, analytics and tracing."
-
-
-
-
-
-
- "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix." — Michael Davis, Software Engineer, Workiva
-
-
-
-
- For Workiva, OpenTracing has become an essential tool for zeroing in on optimizations and determining what’s actually a micro-optimization by observing usage patterns. "On some projects we often assume what the customer is doing, and we optimize for these crazy scale cases that we hit 1 percent of the time," says Broad. "It’s been really helpful to be able to say, ‘OK, we’re adding 100 milliseconds on every request that does X, and we only need to add that 100 milliseconds if it’s the worst of the worst case, which only happens one out of a thousand requests or one out of a million requests."
-Unlike many other companies, Workiva also traces the client side. "For us, the user experience is important—it doesn’t matter if the RPC takes 100 milliseconds if it still takes 5 seconds to do the rendering to show it in the browser," says Broad. "So for us, those client times are important. We trace it to see what parts of loading take a long time. We’re in the middle of working on a definition of what is ‘loaded.’ Is it when you have it, or when it’s rendered, or when you can interact with it? Those are things we’re planning to use tracing for to keep an eye on and to better understand."
-That also requires adjusting for differences in external and internal clocks. "Before time correcting, it was horrible; our traces were more misleading than anything," says Broad. "So we decided that we would return a timestamp on the response headers, and then have the client reorient its time based on that—not change its internal clock but just calculate the offset on the response time to when the client got it. And if you end up in an impossible situation where a client RPC spans 210 milliseconds but the time on the response time is outside of that window, then we have to reorient that."
-Broad is excited about the impact OpenTracing has already had on the company, and is also looking ahead to what else the technology can enable. One possibility is using tracing to update documentation in real time. "Keeping documentation up to date with reality is a big challenge," he says. "Say, we just ran a trace simulation or we just ran a smoke test on this new deploy, and the architecture doesn’t match the documentation. We can find whose responsibility it is and let them know and have them update it. That’s one of the places I’d like to get in the future with tracing."
-
-
-
-
+{{< case-studies/lead >}}
+Last fall, MacLeod Broad's platform team at Workiva was prepping one of the company's first products utilizing Amazon Web Services when they ran into a roadblock.
+{{< /case-studies/lead >}}
+
+
Early on, Workiva's backend had run mostly on Google App Engine. But things changed along the way as Workiva's SaaS offering, Wdesk, a cloud-based platform for managing and reporting business data, grew its customer base to more than 70 percent of the Fortune 500 companies. "As customer needs grew and the product offering expanded, we started to leverage a wider offering of services such as Amazon Web Services as well as other Google Cloud Platform services, creating a multi-vendor environment."
+
+
With this new product, there was a "sync and link" feature by which data "went through a whole host of services starting with the new spreadsheet system [Amazon Aurora] into what we called our linking system, and then pushed through http to our existing system, and then a number of calculations would go on, and the results would be transmitted back into the new system," says Broad. "We were trying to optimize that for speed. We thought we had made this great optimization and then it would turn out to be a micro optimization, which didn't really affect the overall speed of things."
+
+
The challenges faced by Broad's team may sound familiar to other companies that have also made the shift from monoliths to more distributed, microservice-based systems. "We had a number of people working on this, all on different teams, so it was difficult to get our head around what the issues were and where the bottlenecks were," says Broad.
+
+
"Each service team was going through different iterations of their architecture and it was very hard to follow what was actually going on in each teams' system," he adds. "We had circular dependencies where we'd have three or four different service teams unsure of where the issues really were, requiring a lot of back and forth communication. So we wasted a lot of time saying, 'What part of this is slow? Which part of this is sometimes slow depending on the use case? Which part is degrading over time? Which part of this process is asynchronous so it doesn't really matter if it's long-running or not? What are we doing that's redundant, and which part of this is buggy?'"
+
+{{< case-studies/quote
+ image="/images/case-studies/workiva/banner3.jpg"
+ author="MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA"
+>}}
+"A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level. Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it's a lot faster than never figuring out the problem and just moving on."
+{{< /case-studies/quote >}}
+
+
Simply put, it was an ideal use case for tracing. "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level," says Broad. "Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it's a lot faster than never figuring out the problem and just moving on."
+
+
With Workiva's back-end code running on Google Compute Engine as well as App Engine and AWS, Broad knew that he needed a tracing system that was platform agnostic. "We were looking at different tracing solutions," he says, "and we decided that because it seemed to be a very evolving market, we didn't want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use."
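
That vendor neutrality is easy to see in code. Below is a minimal Go sketch using the opentracing-go library (the operation name and tag are invented for illustration, not taken from Workiva's code): the instrumentation talks only to the OpenTracing API, and whichever concrete tracer the team settles on is registered once at startup.

```go
package main

import (
	opentracing "github.com/opentracing/opentracing-go"
)

func main() {
	// Register whichever OpenTracing-compatible tracer the team picks
	// (Jaeger, LightStep, a StackDriver adapter, ...). Nothing below
	// changes when the backend does; with no tracer registered, the
	// API falls back to a built-in no-op tracer.
	// opentracing.SetGlobalTracer(someConcreteTracer)

	span := opentracing.GlobalTracer().StartSpan("sync_and_link.push") // hypothetical name
	defer span.Finish()
	span.SetTag("peer.service", "legacy-docs") // hypothetical tag value
}
```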
+
+
Once they introduced OpenTracing into this first use case, Broad says, "The trace made it super obvious where the bottlenecks were." Even though everyone had assumed it was Workiva's existing code that was slowing things down, that wasn't exactly the case. "It looked like the existing code was slow only because it was reaching out to our next-generation services, and they were taking a very long time to service all those requests," says Broad. "On the waterfall graph you can see the exact same work being done on every request when it was calling back in. So every service request would look the exact same for every response being paged out. And then it was just a no-brainer of, 'Why is it doing all this work again?'"
+
+
Using the insight OpenTracing gave them, "My team was able to look at a trace and make optimization suggestions to another team without ever looking at their code," says Broad. "The way we named our traces gave us insight whether it's doing a SQL call or it's making an RPC. And so it was really easy to say, 'OK, we know that it's going to page through all these requests. Do the work once and stuff it in cache.' And we were done basically. All those calls became sub-second calls immediately."
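
As a rough illustration of that fix (a sketch of the technique, not Workiva's actual code), here is a small Go memoization wrapper: the first paged request pays for the expensive call, and every identical request afterwards is served from the cache.

```go
package main

import (
	"fmt"
	"sync"
)

// memo caches the result of an expensive lookup so identical paged
// requests trigger the upstream work only once.
type memo struct {
	mu      sync.Mutex
	results map[string]string
	fetch   func(key string) string // the expensive call being deduplicated
}

func (m *memo) get(key string) string {
	m.mu.Lock()
	defer m.mu.Unlock()
	if v, ok := m.results[key]; ok {
		return v // cache hit: no repeated round trip
	}
	v := m.fetch(key)
	m.results[key] = v
	return v
}

func main() {
	calls := 0
	m := &memo{
		results: map[string]string{},
		fetch: func(key string) string {
			calls++ // stands in for the slow RPC/SQL work seen in the trace
			return "result for " + key
		},
	}
	m.get("page-1")
	m.get("page-1")                        // served from cache
	fmt.Println("upstream calls:", calls) // prints: upstream calls: 1
}
```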
+
+{{< case-studies/quote
+ image="/images/case-studies/workiva/banner4.jpg"
+ author="MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA"
+>}}
+"We were looking at different tracing solutions and we decided that because it seemed to be a very evolving market, we didn't want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use."
+{{< /case-studies/quote >}}
+
+
After the success of the first use case, everyone involved in the trial went back and fully instrumented their products. Tracing was added to a few more use cases. "We wanted to get through the initial implementation pains early without bringing the whole department along for the ride," says Broad. "Now, a lot of teams add it when they're starting up a new service. We're really pushing adoption now more than we were before."
+
+
Some teams were won over quickly. "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service," says Software Engineer Michael Davis. "Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
+
+
Most of Workiva's major products are now traced using OpenTracing, with data pushed into Google StackDriver. Even the products that aren't fully traced have some components and libraries that are.
+
+
Broad points out that because some of the engineers were working on App Engine and already had experience with the platform's Appstats library for profiling performance, it didn't take much to get them used to using OpenTracing. But others were a little more reluctant. "The biggest hindrance to adoption I think has been the concern about how much latency is introducing tracing [and StackDriver] going to cost," he says. "People are also very concerned about adding middleware to whatever they're working on. Questions about passing the context around and how that's done were common. A lot of our Go developers were fine with it, because they were already doing that in one form or another. Our Java developers were not super keen on doing that because they'd used other systems that didn't require that."
+
+
But the benefits clearly outweighed the concerns, and today, Workiva's official policy is to use tracing.
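
The context-passing that gave some teams pause amounts to a small, regular pattern in practice. Here is a hedged Go sketch with opentracing-go (the function and operation names are made up): each layer starts a child span from the incoming context and hands the derived context down.

```go
package main

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
)

func handleRequest(ctx context.Context) {
	// Starts a span as a child of whatever span is already in ctx
	// (or a new root span if there is none).
	span, ctx := opentracing.StartSpanFromContext(ctx, "linker.handle_request")
	defer span.Finish()
	loadDocument(ctx) // the derived ctx carries the span downstream
}

func loadDocument(ctx context.Context) {
	span, _ := opentracing.StartSpanFromContext(ctx, "linker.load_document")
	defer span.Finish()
	span.SetTag("component", "sql") // naming convention is illustrative only
}

func main() {
	handleRequest(context.Background())
}
```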
+
+
In fact, Broad believes that tracing naturally fits in with Workiva's existing logging and metrics systems. "This was the way we presented it internally, and also the way we designed our use," he says. "Our traces are logged in the exact same mechanism as our app metric and logging data, and they get pushed the exact same way. So we treat all that data exactly the same when it's being created and when it's being recorded. We have one internal library that we use for logging, telemetry, analytics and tracing."
+
+{{< case-studies/quote author="Michael Davis, Software Engineer, Workiva" >}}
+"Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
+{{< /case-studies/quote >}}
+
+
For Workiva, OpenTracing has become an essential tool for zeroing in on optimizations and determining what's actually a micro-optimization by observing usage patterns. "On some projects we often assume what the customer is doing, and we optimize for these crazy scale cases that we hit 1 percent of the time," says Broad. "It's been really helpful to be able to say, 'OK, we're adding 100 milliseconds on every request that does X, and we only need to add that 100 milliseconds if it's the worst of the worst case, which only happens one out of a thousand requests or one out of a million requests.'"
+
+
Unlike many other companies, Workiva also traces the client side. "For us, the user experience is important—it doesn't matter if the RPC takes 100 milliseconds if it still takes 5 seconds to do the rendering to show it in the browser," says Broad. "So for us, those client times are important. We trace it to see what parts of loading take a long time. We're in the middle of working on a definition of what is 'loaded.' Is it when you have it, or when it's rendered, or when you can interact with it? Those are things we're planning to use tracing for to keep an eye on and to better understand."
+
+
That also requires adjusting for differences in external and internal clocks. "Before time correcting, it was horrible; our traces were more misleading than anything," says Broad. "So we decided that we would return a timestamp on the response headers, and then have the client reorient its time based on that—not change its internal clock but just calculate the offset on the response time to when the client got it. And if you end up in an impossible situation where a client RPC spans 210 milliseconds but the time on the response time is outside of that window, then we have to reorient that."
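
One plausible shape for that correction, sketched in Go (this is an assumption about the mechanics, not Workiva's code): estimate the server/client offset from the server timestamp on the response and the client's own send/receive times, and flag responses whose server time falls outside the RPC window.

```go
package main

import (
	"fmt"
	"time"
)

// clockOffset estimates how far the server clock is from the client
// clock, using the server timestamp returned on the response and the
// midpoint of the client-observed RPC window.
func clockOffset(sentAt, receivedAt, serverAt time.Time) (time.Duration, bool) {
	midpoint := sentAt.Add(receivedAt.Sub(sentAt) / 2)
	offset := serverAt.Sub(midpoint)
	// An "impossible" response: the server timestamp lies outside the
	// window in which the client knows the RPC happened.
	impossible := serverAt.Before(sentAt) || serverAt.After(receivedAt)
	return offset, impossible
}

func main() {
	sent := time.Now()
	received := sent.Add(210 * time.Millisecond)
	server := sent.Add(400 * time.Millisecond) // outside the 210ms window
	offset, impossible := clockOffset(sent, received, server)
	fmt.Printf("offset=%v, needs reorienting=%v\n", offset, impossible)
}
```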
+
+
Broad is excited about the impact OpenTracing has already had on the company, and is also looking ahead to what else the technology can enable. One possibility is using tracing to update documentation in real time. "Keeping documentation up to date with reality is a big challenge," he says. "Say, we just ran a trace simulation or we just ran a smoke test on this new deploy, and the architecture doesn't match the documentation. We can find whose responsibility it is and let them know and have them update it. That's one of the places I'd like to get in the future with tracing."
diff --git a/content/ko/case-studies/ygrene/index.html b/content/ko/case-studies/ygrene/index.html
index c07443249a5f9..9afc1ec45b23e 100644
--- a/content/ko/case-studies/ygrene/index.html
+++ b/content/ko/case-studies/ygrene/index.html
@@ -1,111 +1,82 @@
---
title: Ygrene Case Study
-
linkTitle: Ygrene
case_study_styles: true
cid: caseStudies
-css: /css/style_case_studies.css
logo: ygrene_featured_logo.png
featured: true
weight: 48
quote: >
- We had to change some practices and code, and the way things were built, but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That’s very fast for a finance company.
+ We had to change some practices and code, and the way things were built, but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That's very fast for a finance company.
+
+new_case_study_styles: true
+heading_background: /images/case-studies/ygrene/banner1.jpg
+heading_title_logo: /images/ygrene_logo.png
+subheading: >
+ Ygrene: Using Cloud Native to Bring Security and Scalability to the Finance Industry
+case_study_details:
+ - Company: Ygrene
+ - Location: Petaluma, Calif.
+ - Industry: Clean energy financing
---
-
-
CASE STUDY:
Ygrene: Using Cloud Native to Bring Security and Scalability to the Finance Industry
-
-
-
-
-
-
- Company Ygrene Location Petaluma, Calif. Industry Clean energy financing
-
-
-
-
-
-
-
Challenge
- A PACE (Property Assessed Clean Energy) financing company, Ygrene has funded more than $1 billion in loans since 2010. In order to approve and process those loans, "We have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data," says Ygrene Development Manager Austin Adams. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically. We had a really unstable system that became overwhelmed with requests just for doing background data processing in real time. The performance the users saw was very poor. We needed a solution that wouldn’t require us to make huge refactors to the code base." As a finance company, Ygrene also needed to ensure that they were shipping their applications securely.
-
-
+
Challenge
-
Solution
- Moving from an Engine Yard platform and Amazon Elastic Beanstalk, the Ygrene team embraced cloud native technologies and practices: Kubernetes to help scale out vertically and distribute workloads, Notary to put in build-time controls and get trust on the Docker images being used with third-party dependencies, and Fluentd for "observing every part of our stack," all running on Amazon EC2 Spot.
+
A PACE (Property Assessed Clean Energy) financing company, Ygrene has funded more than $1 billion in loans since 2010. In order to approve and process those loans, "We have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data," says Ygrene Development Manager Austin Adams. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically. We had a really unstable system that became overwhelmed with requests just for doing background data processing in real time. The performance the users saw was very poor. We needed a solution that wouldn't require us to make huge refactors to the code base." As a finance company, Ygrene also needed to ensure that they were shipping their applications securely.
-
+
Solution
-
+
Moving from an Engine Yard platform and Amazon Elastic Beanstalk, the Ygrene team embraced cloud native technologies and practices: Kubernetes to help scale out vertically and distribute workloads, Notary to put in build-time controls and get trust on the Docker images being used with third-party dependencies, and Fluentd for "observing every part of our stack," all running on Amazon EC2 Spot.
Impact
- Before, deployments typically took three to four hours, and two or three months’ worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for the overall deploy with smoke testing. And "we’re able to deploy three or four times a week, with just one week’s or two days’ worth of work," Adams says. "We’re deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed." Additionally, by using the kops project, Ygrene can now run its Kubernetes clusters with AWS EC2 Spot, at a tenth of the previous cost. These cloud native technologies have "changed the game for scalability, observability, and security—we’re adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn’t tell our investors and team members that we knew what was going on."
+
Before, deployments typically took three to four hours, and two or three months' worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for the overall deploy with smoke testing. And "we're able to deploy three or four times a week, with just one week's or two days' worth of work," Adams says. "We're deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed." Additionally, by using the kops project, Ygrene can now run its Kubernetes clusters with AWS EC2 Spot, at a tenth of the previous cost. These cloud native technologies have "changed the game for scalability, observability, and security—we're adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn't tell our investors and team members that we knew what was going on."
-
+{{< case-studies/quote author="Austin Adams, Development Manager, Ygrene Energy Fund" >}}
+"CNCF projects are helping Ygrene determine the security and observability standards for the entire PACE industry. We're an emerging finance industry, and without these projects, especially Kubernetes, we couldn't be the industry leader that we are today."
+{{< /case-studies/quote >}}
-
-
-
-
-"CNCF projects are helping Ygrene determine the security and observability standards for the entire PACE industry. We’re an emerging finance industry, and without these projects, especially Kubernetes, we couldn’t be the industry leader that we are today."
— Austin Adams, Development Manager, Ygrene Energy Fund
+{{< case-studies/lead >}}
+In less than a decade, Ygrene has funded more than $1 billion in loans for renewable energy projects.
+{{< /case-studies/lead >}}
-
-
+
A PACE (Property Assessed Clean Energy) financing company, "We take the equity in a home or a commercial building, and use it to finance property improvements for anything that saves electricity, produces electricity, saves water, or reduces carbon emissions," says Development Manager Austin Adams.
-
-
In less than a decade, Ygrene has funded more than $1 billion in loans for renewable energy projects.
A PACE (Property Assessed Clean Energy) financing company, "We take the equity in a home or a commercial building, and use it to finance property improvements for anything that saves electricity, produces electricity, saves water, or reduces carbon emissions," says Development Manager Austin Adams.
-In order to approve those loans, the company processes an enormous amount of underwriting data. "We have tons of different points that we have to validate about the property, about the company, or about the person," Adams says. "So we have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data in real time."
-By 2017, deployments and scalability had become pain points. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically," he says. Migrating to AWS Elastic Beanstalk didn’t solve the problem: "The Scala services needed a lot of data from the main Ruby on Rails services and from different vendors, so they were asking for information from our Ruby services at a rate that those services couldn’t handle. We had lots of configuration misses with Elastic Beanstalk as well. It just came to a head, and we realized we had a really unstable system."
+
In order to approve those loans, the company processes an enormous amount of underwriting data. "We have tons of different points that we have to validate about the property, about the company, or about the person," Adams says. "So we have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data in real time."
-
-
-
-
- "CNCF has been an amazing incubator for so many projects. Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. It’s actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable."
— Austin Adams, Development Manager, Ygrene Energy Fund
-
-
-
-
+
By 2017, deployments and scalability had become pain points. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically," he says. Migrating to AWS Elastic Beanstalk didn't solve the problem: "The Scala services needed a lot of data from the main Ruby on Rails services and from different vendors, so they were asking for information from our Ruby services at a rate that those services couldn't handle. We had lots of configuration misses with Elastic Beanstalk as well. It just came to a head, and we realized we had a really unstable system."
-Adams along with the rest of the team set out to find a solution that would be transformational, but "wouldn’t require us to make huge refactors to the code base," he says. And as a finance company, Ygrene needed security as much as scalability. They found the answer by embracing cloud native technologies: Kubernetes to help scale out vertically and distribute workloads, Notary to achieve reliable security at every level, and Fluentd for observability. "Kubernetes was where the community was going, and we wanted to be future proof," says Adams.
-With Kubernetes, the team was able to quickly containerize the Ygrene application with Docker. "We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That’s very fast for a finance company."
-How? Cloud native has "changed the game for scalability, observability, and security—we’re adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn’t tell our investors and team members that we knew what was going on."
-Notary, in particular, "has been a godsend," says Adams. "We need to know that our attack surface on third-party dependencies is low, or at least managed. We use it as a trust system and we also use it as a separation, so production images are signed by Notary, but some development images we don’t sign. That is to ensure that they can’t get into the production cluster. We’ve been using it in the test cluster to feel more secure about our builds."
+{{< case-studies/quote
+ image="/images/case-studies/ygrene/banner3.jpg"
+ author="Austin Adams, Development Manager, Ygrene Energy Fund"
+>}}
+"CNCF has been an amazing incubator for so many projects. Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. It's actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable."
+{{< /case-studies/quote >}}
+
Adams along with the rest of the team set out to find a solution that would be transformational, but "wouldn't require us to make huge refactors to the code base," he says. And as a finance company, Ygrene needed security as much as scalability. They found the answer by embracing cloud native technologies: Kubernetes to help scale out vertically and distribute workloads, Notary to achieve reliable security at every level, and Fluentd for observability. "Kubernetes was where the community was going, and we wanted to be future proof," says Adams.
+
With Kubernetes, the team was able to quickly containerize the Ygrene application with Docker. "We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That's very fast for a finance company."
-
-
-
-
-"We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That’s very fast for a finance company."
-
-
+
How? Cloud native has "changed the game for scalability, observability, and security—we're adding new data sources that are very secure," says Adams. "Without Kubernetes, Notary, and Fluentd, we couldn't tell our investors and team members that we knew what was going on."
-
-
- By using the kops project, Ygrene was able to move from Elastic Beanstalk to running its Kubernetes clusters on AWS EC2 Spot, at a tenth of the previous cost. "In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups."
-That also helped them mitigate the risk that comes with running in the public cloud. "We figured out, essentially, that if we’re able to select instance classes using EC2 Spot that had an extremely low likelihood of interruption and zero history of interruption, and we’re willing to pay a price high enough, that we could virtually get the same guarantee using Kubernetes because we have enough nodes," says Software Engineer Zach Arnold, who led the migration to Kubernetes. "Now that we’ve re-architected these pieces of the application to not live on the same server, we can push out to many different servers and have a more stable deployment."
-As a result, the team can now ship code any time of day. "That was risky because it could bring down your whole loan management software with it," says Arnold. "But we now can deploy safely and securely during the day."
+
Notary, in particular, "has been a godsend," says Adams. "We need to know that our attack surface on third-party dependencies is low, or at least managed. We use it as a trust system and we also use it as a separation, so production images are signed by Notary, but some development images we don't sign. That is to ensure that they can't get into the production cluster. We've been using it in the test cluster to feel more secure about our builds."
+{{< case-studies/quote image="/images/case-studies/ygrene/banner4.jpg" >}}
+"We had to change some practices and code, and the way things were built," Adams says, "but we were able to get our main systems onto Kubernetes in a month or so, and then into production within two months. That's very fast for a finance company."
+{{< /case-studies/quote >}}
-
+
By using the kops project, Ygrene was able to move from Elastic Beanstalk to running its Kubernetes clusters on AWS EC2 Spot, at a tenth of the previous cost. "In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups."
-
-
- "In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups."
-
-
+
That also helped them mitigate the risk that comes with running in the public cloud. "We figured out, essentially, that if we're able to select instance classes using EC2 Spot that had an extremely low likelihood of interruption and zero history of interruption, and we're willing to pay a price high enough, that we could virtually get the same guarantee using Kubernetes because we have enough nodes," says Software Engineer Zach Arnold, who led the migration to Kubernetes. "Now that we've re-architected these pieces of the application to not live on the same server, we can push out to many different servers and have a more stable deployment."
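
A toy Go version of that selection heuristic (the data, fields, and thresholds are entirely hypothetical): keep only Spot instance classes with a clean interruption history, a very low estimated risk, and a price the team is willing to pay, then rely on node count for the availability guarantee.

```go
package main

import "fmt"

// spotClass is a hypothetical record of an EC2 Spot instance class.
type spotClass struct {
	name          string
	interruptions int     // historical interruption count
	interruptRisk float64 // estimated likelihood of interruption
	currentPrice  float64 // $/hour
}

// eligible keeps classes with zero interruption history, very low
// estimated risk, and a price under the team's ceiling.
func eligible(classes []spotClass, maxRisk, priceCeiling float64) []spotClass {
	var out []spotClass
	for _, c := range classes {
		if c.interruptions == 0 && c.interruptRisk <= maxRisk && c.currentPrice <= priceCeiling {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	classes := []spotClass{
		{"m5.2xlarge", 0, 0.01, 0.17},
		{"c5.4xlarge", 3, 0.12, 0.31},
	}
	for _, c := range eligible(classes, 0.05, 0.40) {
		fmt.Println("schedulable on:", c.name)
	}
}
```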
-
- Before, deployments typically took three to four hours, and two or three months’ worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for an overall deploy with smoke testing. And "we’re able to deploy three or four times a week, with just one week’s or two days’ worth of work," Adams says. "We’re deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down for 30 minutes to an hour, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed."
-Cloud native also affected how Ygrene’s 50+ developers and contractors work. Adams and Arnold spent considerable time "teaching people to think distributed out of the box," says Arnold. "We ended up picking what we call the Four S’s of Shipping: safely, securely, stably, and speedily." (For more on the security piece of it, see their article on their "continuous hacking" strategy.) As for the engineers, says Adams, "they have been able to advance as their software has advanced. I think that at the end of the day, the developers feel better about what they’re doing, and they also feel more connected to the modern software development community."
-Looking ahead, Adams is excited to explore more CNCF projects, including SPIFFE and SPIRE. "CNCF has been an amazing incubator for so many projects," he says. "Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. It’s actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable."
+
As a result, the team can now ship code any time of day. "That was risky because it could bring down your whole loan management software with it," says Arnold. "But we now can deploy safely and securely during the day."
+{{< case-studies/quote >}}
+"In order to scale before, we would need to up our instance sizes, incurring high cost for low value," says Adams. "Now with Kubernetes and kops, we are able to scale horizontally on Spot with multiple instance groups."
+{{< /case-studies/quote >}}
+
Before, deployments typically took three to four hours, and two or three months' worth of work would be deployed at low-traffic times every week or two weeks. Now, they take five minutes for Kubernetes, and an hour for an overall deploy with smoke testing. And "we're able to deploy three or four times a week, with just one week's or two days' worth of work," Adams says. "We're deploying during the work week, in the daytime and without any downtime. We had to ask for business approval to take the systems down for 30 minutes to an hour, even in the middle of the night, because people could be doing loans. Now we can deploy, ship code, and migrate databases, all without taking the system down. The company gets new features without worrying that some business will be lost or delayed."
-
+
Cloud native also affected how Ygrene's 50+ developers and contractors work. Adams and Arnold spent considerable time "teaching people to think distributed out of the box," says Arnold. "We ended up picking what we call the Four S's of Shipping: safely, securely, stably, and speedily." (For more on the security piece of it, see their article on their "continuous hacking" strategy.) As for the engineers, says Adams, "they have been able to advance as their software has advanced. I think that at the end of the day, the developers feel better about what they're doing, and they also feel more connected to the modern software development community."
-
+
Looking ahead, Adams is excited to explore more CNCF projects, including SPIFFE and SPIRE. "CNCF has been an amazing incubator for so many projects," he says. "Now we look at its webpage regularly to find out if there are any new, awesome, high-quality projects we can implement into our stack. It's actually become a hub for us for knowing what software we need to be looking at to make our systems more secure or more scalable."
diff --git a/content/ko/case-studies/zalando/index.html b/content/ko/case-studies/zalando/index.html
index 49bad6ff9e596..23363da4011a0 100644
--- a/content/ko/case-studies/zalando/index.html
+++ b/content/ko/case-studies/zalando/index.html
@@ -1,101 +1,83 @@
---
title: Zalando Case Study
-
case_study_styles: true
cid: caseStudies
-css: /css/style_zalando.css
----
-
-
CASE STUDY:
Europe’s Leading Online Fashion Platform Gets Radical with Cloud Native
-
- Company Zalando Location Berlin, Germany Industry Online Fashion
-
+
Zalando, Europe's leading online fashion platform, has experienced exponential growth since it was founded in 2008. In 2015, with plans to further expand its original e-commerce site to include new services and products, Zalando embarked on a radical transformation resulting in autonomous self-organizing teams. This change required an infrastructure that could scale with the growth of the engineering organization. Zalando's technology department began rewriting its applications to be cloud-ready and started moving its infrastructure from on-premise data centers to the cloud. While orchestration wasn't immediately considered, as teams migrated to Amazon Web Services (AWS): "We saw the pain teams were having with infrastructure and Cloud Formation on AWS," says Henning Jacobs, Head of Developer Productivity. "There's still too much operational overhead for the teams and compliance." To provide better support, cluster management was brought into play.
-
-
-
-
-
Challenge
- Zalando, Europe’s leading online fashion platform, has experienced exponential growth since it was founded in 2008. In 2015, with plans to further expand its original e-commerce site to include new services and products, Zalando embarked on a radical transformation resulting in autonomous self-organizing teams. This change requires an infrastructure that could scale with the growth of the engineering organization. Zalando’s technology department began rewriting its applications to be cloud-ready and started moving its infrastructure from on-premise data centers to the cloud. While orchestration wasn’t immediately considered, as teams migrated to Amazon Web Services (AWS): "We saw the pain teams were having with infrastructure and Cloud Formation on AWS," says Henning Jacobs, Head of Developer Productivity. "There’s still too much operational overhead for the teams and compliance. " To provide better support, cluster management was brought into play.
+
Solution
-
+
The company now runs its Docker containers on AWS using Kubernetes orchestration.
-
-
Solution
- The company now runs its Docker containers on AWS using Kubernetes orchestration.
-
Impact
- With the old infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs on the Linux kernel. This makes a lot of people pretty happy. The engineers love autonomy."
-
-
-
-
-
- "We envision all Zalando delivery teams running their containerized applications on a state-of-the-art, reliable and scalable cluster infrastructure provided by Kubernetes." - Henning Jacobs, Head of Developer Productivity at Zalando
-
-
-
-
-
When Henning Jacobs arrived at Zalando in 2010, the company was just two years old with 180 employees running an online store for European shoppers to buy fashion items.
- "It started as a PHP e-commerce site which was easy to get started with, but was not scaling with the business' needs" says Jacobs, Head of Developer Productivity at Zalando.
- At that time, the company began expanding beyond its German origins into other European markets. Fast-forward to today and Zalando now has more than 14,000 employees, 3.6 billion Euro in revenue for 2016 and operates across 15 countries. "With growth in all dimensions, and constant scaling, it has been a once-in-a-lifetime experience," he says.
- Not to mention a unique opportunity for an infrastructure specialist like Jacobs. Just after he joined, the company began rewriting all their applications in-house. "That was generally our strategy," he says. "For example, we started with our own logistics warehouses but at first you don’t know how to do logistics software, so you have some vendor software. And then we replaced it with our own because with off-the-shelf software you’re not competitive. You need to optimize these processes based on your specific business needs."
- In parallel to rewriting their applications, Zalando had set a goal of expanding beyond basic e-commerce to a platform offering multi-tenancy, a dramatic increase in assortments and styles, same-day delivery and even your own personal online stylist.
- The need to scale ultimately led the company on a cloud-native journey. As did its embrace of a microservices-based software architecture that gives engineering teams more autonomy and ownership of projects. "This move to the cloud was necessary because in the data center you couldn’t have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app," Jacobs says.
-
-
-
-
-
- "This move to the cloud was necessary because in the data center you couldn’t have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app."
-
-
-
-
-
- Zalando began moving its infrastructure from two on-premise data centers to the cloud, requiring the migration of older applications for cloud-readiness. "We decided to have a clean break," says Jacobs. "Our Amazon Web Services infrastructure was set up like so: Every team has its own AWS account, which is completely isolated, meaning there’s no ‘lift and shift.’ You basically have to rewrite your application to make it cloud-ready even down to the persistence layer. We bravely went back to the drawing board and redid everything, first choosing Docker as a common containerization, then building the infrastructure from there."
- The company decided to hold off on orchestration at the beginning, but as teams were migrated to AWS, "we saw the pain teams were having with infrastructure and cloud formation on AWS," says Jacobs.
- Zalandos 200+ autonomous engineering teams decide what technologies to use and could operate their own applications using their own AWS accounts. This setup proved to be a compliance challenge. Even with strict rules-of-play and automated compliance checks in place, engineering teams and IT-compliance were overburdened addressing compliance issues. "Violations appear for non-compliant behavior, which we detect when scanning the cloud infrastructure," says Jacobs. "Everything is possible and nothing enforced, so you have to live with violations (and resolve them) instead of preventing the error in the first place. This means overhead for teams—and overhead for compliance and operations. It also takes time to spin up new EC2 instances on AWS, which affects our deployment velocity."
- The team realized they needed to "leverage the value you get from cluster management," says Jacobs. When they first looked at Platform as a Service (PaaS) options in 2015, the market was fragmented; but "now there seems to be a clear winner. It seemed like a good bet to go with Kubernetes."
- The transition to Kubernetes started in 2016 during Zalando’s Hack Week where participants deployed their projects to a Kubernetes cluster. From there, 60 members of the tech infrastructure department were onboarded, and then engineering teams were brought on one at a time. "We always start by talking with them and make sure everyone’s expectations are clear," says Jacobs. "Then we conduct some Kubernetes training, which is mostly training for our CI/CD setup, because the user interface for our users is primarily through the CI/CD system. But they have to know fundamental Kubernetes concepts and the API. This is followed by a weekly sync with each team to check their progress. Once they have something in production, we want to see if everything is fine on top of what we can improve."
-
-
-
-
-
- Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs.
-
-
-
-
-
- At the moment, Zalando is running an initial 40 Kubernetes clusters with plans to scale for the foreseeable future.
- Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs. "The self-healing infrastructure provides a frictionless experience with higher-level abstractions built upon low-level best practices. We envision all Zalando delivery teams will run their containerized applications on a state-of-the-art reliable and scalable cluster infrastructure provided by Kubernetes."
- With the old on-premise infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs in the Linux kernel. This makes a lot of people pretty happy. The engineers love the autonomy."
- There were a few challenges in Zalando’s Kubernetes implementation. "We are a team of seven people providing clusters to different engineering teams, and our goal is to provide a rock-solid experience for all of them," says Jacobs. "We don’t want pet clusters. We don’t want to have to understand what workload they have; it should just work out of the box. With that in mind, cluster autoscaling is important. There are many different ways of doing cluster management, and this is not part of the core. So we created two components to provision clusters, have a registry for clusters, and to manage the whole cluster life cycle."
- Jacobs’s team also worked to improve the Kubernetes-AWS integration. "Thus you're very restricted. You need infrastructure to scale each autonomous team’s idea."
- Plus, "there are still a lot of best practices missing," says Jacobs. The team, for example, recently solved a pod security policy issue. "There was already a concept in Kubernetes but it wasn’t documented, so it was kind of tricky," he says. The large Kubernetes community was a big help to resolve the issue. To help other companies start down the same path, Jacobs compiled his team’s learnings in a document called Running Kubernetes in Production.
-
-
-
-
-
-
- "The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years... We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey."
-
-
-
-
- In the end, Kubernetes made it possible for Zalando to introduce and maintain the new products the company envisioned to grow its platform. "The fashion advice product used Scala, and there were struggles to make this possible with our former infrastructure," says Jacobs. "It was a workaround, and that team needed more and more support from the platform team, just because they used different technologies. Now with Kubernetes, it’s autonomous. Whatever the workload is, that team can just go their way, and Kubernetes prevents other bottlenecks."
- Looking ahead, Jacobs sees Zalando’s new infrastructure as a great enabler for other things the company has in the works, from its new logistics software, to a platform feature connecting brands, to products dreamed up by data scientists. "One vision is if you watch the next James Bond movie and see the suit he’s wearing, you should be able to automatically order it, and have it delivered to you within an hour," says Jacobs. "It’s about connecting the full fashion sphere. This is definitely not possible if you have a bottleneck with everyone running in the same data center and thus very restricted. You need infrastructure to scale each autonomous team’s idea."
- For other companies considering this technology, Jacobs says he wouldn’t necessarily advise doing it exactly the same way Zalando did. "It’s okay to do so if you’re ready to fail at some things," he says. "You need to set the right expectations. Not everything will work. Rewriting apps and this type of organizational change can be disruptive. The first product we moved was critical. There were a lot of dependencies, and it took longer than expected. Maybe we should have started with something less complicated, less business critical, just to get our toes wet."
- But once they got to the other side "it was clear for everyone that there’s no big alternative," Jacobs adds. "The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years. Zalando Technology benefits from migrating to Kubernetes as we are able to leverage our existing knowledge to create an engineering platform offering flexibility and speed to our engineers while significantly reducing the operational overhead. We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey."
-
-
-
-
+
+
With the old infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs on the Linux kernel. This makes a lot of people pretty happy. The engineers love autonomy."
+
+{{< case-studies/quote author="Henning Jacobs, Head of Developer Productivity at Zalando" >}}
+"We envision all Zalando delivery teams running their containerized applications on a state-of-the-art, reliable and scalable cluster infrastructure provided by Kubernetes."
+{{< /case-studies/quote >}}
+
+{{< case-studies/lead >}}
+When Henning Jacobs arrived at Zalando in 2010, the company was just two years old with 180 employees running an online store for European shoppers to buy fashion items.
+{{< /case-studies/lead >}}
+
+
"It started as a PHP e-commerce site which was easy to get started with, but was not scaling with the business' needs" says Jacobs, Head of Developer Productivity at Zalando.
+
+
At that time, the company began expanding beyond its German origins into other European markets. Fast-forward to today and Zalando now has more than 14,000 employees, 3.6 billion Euro in revenue for 2016 and operates across 15 countries. "With growth in all dimensions, and constant scaling, it has been a once-in-a-lifetime experience," he says.
+
+
Not to mention a unique opportunity for an infrastructure specialist like Jacobs. Just after he joined, the company began rewriting all their applications in-house. "That was generally our strategy," he says. "For example, we started with our own logistics warehouses but at first you don't know how to do logistics software, so you have some vendor software. And then we replaced it with our own because with off-the-shelf software you're not competitive. You need to optimize these processes based on your specific business needs."
+
+
In parallel to rewriting their applications, Zalando had set a goal of expanding beyond basic e-commerce to a platform offering multi-tenancy, a dramatic increase in assortments and styles, same-day delivery and even your own personal online stylist.
+
+
The need to scale ultimately led the company on a cloud-native journey. As did its embrace of a microservices-based software architecture that gives engineering teams more autonomy and ownership of projects. "This move to the cloud was necessary because in the data center you couldn't have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app," Jacobs says.
+
+{{< case-studies/quote image="/images/case-studies/zalando/banner3.jpg" >}}
+"This move to the cloud was necessary because in the data center you couldn't have autonomous teams. You have the same infrastructure and it was very homogeneous, so you could only run your Java or Python app."
+{{< /case-studies/quote >}}
+
+
Zalando began moving its infrastructure from two on-premise data centers to the cloud, requiring the migration of older applications for cloud-readiness. "We decided to have a clean break," says Jacobs. "Our Amazon Web Services infrastructure was set up like so: Every team has its own AWS account, which is completely isolated, meaning there's no 'lift and shift.' You basically have to rewrite your application to make it cloud-ready even down to the persistence layer. We bravely went back to the drawing board and redid everything, first choosing Docker as a common containerization, then building the infrastructure from there."
+
+
The company decided to hold off on orchestration at the beginning, but as teams were migrated to AWS, "we saw the pain teams were having with infrastructure and cloud formation on AWS," says Jacobs.
+
+
Zalando's 200+ autonomous engineering teams decided what technologies to use and operated their own applications using their own AWS accounts. This setup proved to be a compliance challenge. Even with strict rules-of-play and automated compliance checks in place, engineering teams and IT-compliance were overburdened addressing compliance issues. "Violations appear for non-compliant behavior, which we detect when scanning the cloud infrastructure," says Jacobs. "Everything is possible and nothing enforced, so you have to live with violations (and resolve them) instead of preventing the error in the first place. This means overhead for teams—and overhead for compliance and operations. It also takes time to spin up new EC2 instances on AWS, which affects our deployment velocity."
+
+
The team realized they needed to "leverage the value you get from cluster management," says Jacobs. When they first looked at Platform as a Service (PaaS) options in 2015, the market was fragmented; but "now there seems to be a clear winner. It seemed like a good bet to go with Kubernetes."
+
+
The transition to Kubernetes started in 2016 during Zalando's Hack Week where participants deployed their projects to a Kubernetes cluster. From there, 60 members of the tech infrastructure department were onboarded, and then engineering teams were brought on one at a time. "We always start by talking with them and make sure everyone's expectations are clear," says Jacobs. "Then we conduct some Kubernetes training, which is mostly training for our CI/CD setup, because the user interface for our users is primarily through the CI/CD system. But they have to know fundamental Kubernetes concepts and the API. This is followed by a weekly sync with each team to check their progress. Once they have something in production, we want to see if everything is fine on top of what we can improve."
+
+{{< case-studies/quote image="/images/case-studies/zalando/banner4.jpg" >}}
+Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs.
+{{< /case-studies/quote >}}
+
+
At the moment, Zalando is running an initial 40 Kubernetes clusters with plans to scale for the foreseeable future. Once Zalando began migrating applications to Kubernetes, the results were immediate. "Kubernetes is a cornerstone for our seamless end-to-end developer experience. We are able to ship ideas to production using a single consistent and declarative API," says Jacobs. "The self-healing infrastructure provides a frictionless experience with higher-level abstractions built upon low-level best practices. We envision all Zalando delivery teams will run their containerized applications on a state-of-the-art reliable and scalable cluster infrastructure provided by Kubernetes."
+
+
With the old on-premise infrastructure "it was difficult to properly embrace new technologies, and DevOps teams were considered to be a bottleneck," says Jacobs. "Now, with this cloud infrastructure, they have this packaging format, which can contain anything that runs in the Linux kernel. This makes a lot of people pretty happy. The engineers love the autonomy."
+
+
There were a few challenges in Zalando's Kubernetes implementation. "We are a team of seven people providing clusters to different engineering teams, and our goal is to provide a rock-solid experience for all of them," says Jacobs. "We don't want pet clusters. We don't want to have to understand what workload they have; it should just work out of the box. With that in mind, cluster autoscaling is important. There are many different ways of doing cluster management, and this is not part of the core. So we created two components to provision clusters, have a registry for clusters, and to manage the whole cluster life cycle."
+
+
Jacobs's team also worked to improve the Kubernetes-AWS integration. "Thus you're very restricted. You need infrastructure to scale each autonomous team's idea." Plus, "there are still a lot of best practices missing," says Jacobs. The team, for example, recently solved a pod security policy issue. "There was already a concept in Kubernetes but it wasn't documented, so it was kind of tricky," he says. The large Kubernetes community was a big help to resolve the issue. To help other companies start down the same path, Jacobs compiled his team's learnings in a document called Running Kubernetes in Production.
+
+{{< case-studies/quote >}}
+"The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years... We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey."
+{{< /case-studies/quote >}}
+
+
In the end, Kubernetes made it possible for Zalando to introduce and maintain the new products the company envisioned to grow its platform. "The fashion advice product used Scala, and there were struggles to make this possible with our former infrastructure," says Jacobs. "It was a workaround, and that team needed more and more support from the platform team, just because they used different technologies. Now with Kubernetes, it's autonomous. Whatever the workload is, that team can just go their way, and Kubernetes prevents other bottlenecks."
+
+
Looking ahead, Jacobs sees Zalando's new infrastructure as a great enabler for other things the company has in the works, from its new logistics software, to a platform feature connecting brands, to products dreamed up by data scientists. "One vision is if you watch the next James Bond movie and see the suit he's wearing, you should be able to automatically order it, and have it delivered to you within an hour," says Jacobs. "It's about connecting the full fashion sphere. This is definitely not possible if you have a bottleneck with everyone running in the same data center and thus very restricted. You need infrastructure to scale each autonomous team's idea."
+
+
For other companies considering this technology, Jacobs says he wouldn't necessarily advise doing it exactly the same way Zalando did. "It's okay to do so if you're ready to fail at some things," he says. "You need to set the right expectations. Not everything will work. Rewriting apps and this type of organizational change can be disruptive. The first product we moved was critical. There were a lot of dependencies, and it took longer than expected. Maybe we should have started with something less complicated, less business critical, just to get our toes wet."
+
+
But once they got to the other side "it was clear for everyone that there's no big alternative," Jacobs adds. "The Kubernetes API allows us to run applications in a cloud provider-agnostic way, which gives us the freedom to revisit IaaS providers in the coming years. Zalando Technology benefits from migrating to Kubernetes as we are able to leverage our existing knowledge to create an engineering platform offering flexibility and speed to our engineers while significantly reducing the operational overhead. We expect the Kubernetes API to be the global standard for PaaS infrastructure and are excited about the continued journey."
From 6d27247c1eb0b76bbe702e9ddd956db987abb15f Mon Sep 17 00:00:00 2001
From: Jerry Park
Date: Sun, 18 Oct 2020 16:55:36 +0900
Subject: [PATCH 35/50] Fourth Korean l10n work for release-1.19
- Fix ko glossary managed service title (#24621)
- Translate reference/glossary/service-broker.md in Korean (#24632)
- Translate reference/command-line-tools-reference/kubelet-authentication-authorization.md into Korean (#24623)
- Update outdated files in the dev-1.19-ko.4 branch (#24622)
- Translate setup/production-environment/tools/kubeadm/self-hosting/ into Korean (#24655)
- Translate reference/kubectl/kubectl.md into Korean (#24482)
- docs: fix typo (#24713)
- Translate connecting-frontend-backend to Korean (#24422)
- Translate reference/kubectl/conventions.md into Korean (#24614)
- Translate k8s 1.19 release note in Korean (#24633)
Co-authored-by: seokho-son
Co-authored-by: santachopa
Co-authored-by: kosehy@gmail.com
Co-authored-by: Jerry Park
Co-authored-by: markruler
Co-authored-by: noel
Co-authored-by: coolguyhong
Co-authored-by: chhanz
Co-authored-by: bluefriday
---
content/ko/_index.html | 4 +-
content/ko/docs/_index.md | 2 +
.../ko/docs/concepts/architecture/nodes.md | 2 +-
.../docs/concepts/configuration/configmap.md | 76 +-
.../manage-resources-containers.md | 7 +
.../containers/container-lifecycle-hooks.md | 56 +-
.../docs/concepts/containers/runtime-class.md | 4 +-
.../docs/concepts/extend-kubernetes/_index.md | 2 +-
.../compute-storage-net/network-plugins.md | 21 +-
.../extend-kubernetes/extend-cluster.md | 2 +-
.../concepts/extend-kubernetes/operator.md | 2 +-
content/ko/docs/concepts/overview/_index.md | 2 +
.../docs/concepts/overview/kubernetes-api.md | 17 +-
.../concepts/overview/what-is-kubernetes.md | 2 +
.../working-with-objects/namespaces.md | 3 -
.../pod-overhead.md | 0
.../ingress-controllers.md | 4 +-
.../concepts/services-networking/ingress.md | 2 +-
.../concepts/services-networking/service.md | 6 +-
.../concepts/storage/persistent-volumes.md | 41 +-
content/ko/docs/concepts/workloads/_index.md | 51 +
.../concepts/workloads/controllers/_index.md | 2 +-
.../workloads/controllers/deployment.md | 2 +-
.../controllers/garbage-collection.md | 2 +-
.../concepts/workloads/controllers/job.md | 2 +-
.../ko/docs/concepts/workloads/pods/_index.md | 2 +-
.../workloads/pods/init-containers.md | 4 +-
.../concepts/workloads/pods/pod-lifecycle.md | 9 +-
.../pods/pod-topology-spread-constraints.md | 2 +-
content/ko/docs/contribute/_index.md | 9 +-
.../docs/contribute/review/for-approvers.md | 5 +-
.../access-authn-authz/authorization.md | 2 +-
.../kubelet-authentication-authorization.md | 83 +
content/ko/docs/reference/glossary/kubelet.md | 6 +-
.../reference/glossary/managed-service.md | 2 +-
.../docs/reference/glossary/service-broker.md | 22 +
.../ko/docs/reference/kubectl/conventions.md | 62 +
content/ko/docs/reference/kubectl/kubectl.md | 370 +++
.../reference/using-api/client-libraries.md | 1 +
.../setup/best-practices/multiple-zones.md | 502 +---
.../container-runtimes.md | 112 +-
.../production-environment/tools/_index.md | 2 +-
.../production-environment/tools/kops.md | 2 +-
.../tools/kubeadm/self-hosting.md | 67 +
.../windows/intro-windows-in-kubernetes.md | 4 +-
content/ko/docs/setup/release/notes.md | 2603 +++++++++++++++++
content/ko/docs/sitemap.md | 114 -
.../connecting-frontend-backend.md | 212 ++
...port-forward-access-application-cluster.md | 10 +-
.../web-ui-dashboard.md | 2 +-
.../cilium-network-policy.md | 2 +-
.../configure-pod-initialization.md | 15 +-
.../configure-volume-storage.md | 2 +-
.../resource-metrics-pipeline.md | 12 +-
.../horizontal-pod-autoscale-walkthrough.md | 8 +-
.../horizontal-pod-autoscale.md | 5 +-
content/ko/docs/tasks/tools/_index.md | 35 +-
.../ko/docs/tasks/tools/install-kubectl.md | 2 +-
.../ko/docs/tasks/tools/install-minikube.md | 262 --
content/ko/docs/tutorials/hello-minikube.md | 3 +
.../create-cluster/cluster-intro.html | 14 +-
.../explore/explore-intro.html | 2 +-
.../guestbook/redis-master-service.yaml | 3 +-
content/ko/examples/pods/init-containers.yaml | 2 +-
content/ko/examples/service/access/Dockerfile | 4 +
.../ko/examples/service/access/frontend.conf | 11 +
.../ko/examples/service/access/frontend.yaml | 39 +
.../service/access/hello-service.yaml | 12 +
content/ko/examples/service/access/hello.yaml | 24 +
69 files changed, 3997 insertions(+), 983 deletions(-)
rename content/ko/docs/concepts/{configuration => scheduling-eviction}/pod-overhead.md (100%)
create mode 100644 content/ko/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md
create mode 100644 content/ko/docs/reference/glossary/service-broker.md
create mode 100644 content/ko/docs/reference/kubectl/conventions.md
create mode 100644 content/ko/docs/reference/kubectl/kubectl.md
create mode 100644 content/ko/docs/setup/production-environment/tools/kubeadm/self-hosting.md
create mode 100644 content/ko/docs/setup/release/notes.md
delete mode 100644 content/ko/docs/sitemap.md
create mode 100644 content/ko/docs/tasks/access-application-cluster/connecting-frontend-backend.md
delete mode 100644 content/ko/docs/tasks/tools/install-minikube.md
create mode 100644 content/ko/examples/service/access/Dockerfile
create mode 100644 content/ko/examples/service/access/frontend.conf
create mode 100644 content/ko/examples/service/access/frontend.yaml
create mode 100644 content/ko/examples/service/access/hello-service.yaml
create mode 100644 content/ko/examples/service/access/hello.yaml
diff --git a/content/ko/_index.html b/content/ko/_index.html
index fd7053211b55f..93dc84d13b0d9 100644
--- a/content/ko/_index.html
+++ b/content/ko/_index.html
@@ -2,11 +2,13 @@
title: "운영 수준의 컨테이너 오케스트레이션"
abstract: "자동화된 컨테이너 배포, 스케일링과 관리"
cid: home
+sitemap:
+ priority: 1.0
---
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
-### [쿠버네티스(K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}})는 컨테이너화된 애플리케이션을 자동으로 배포, 스케일링 및 관리해주는 오픈소스 시스템입니다.
+[쿠버네티스(K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}})는 컨테이너화된 애플리케이션을 자동으로 배포, 스케일링 및 관리해주는 오픈소스 시스템입니다.
애플리케이션을 구성하는 컨테이너들의 쉬운 관리 및 발견을 위해서 컨테이너들을 논리적인 단위로 그룹화합니다. 쿠버네티스는 [Google에서 15년간 프로덕션 워크로드 운영한 경험](http://queue.acm.org/detail.cfm?id=2898444)을 토대로 구축되었으며, 커뮤니티에서 제공한 최상의 아이디어와 방법들이 결합되어 있습니다.
{{% /blocks/feature %}}
diff --git a/content/ko/docs/_index.md b/content/ko/docs/_index.md
index 8a2dbc31a5e14..af7043cf619cd 100644
--- a/content/ko/docs/_index.md
+++ b/content/ko/docs/_index.md
@@ -1,4 +1,6 @@
---
linktitle: 쿠버네티스 문서
title: 문서
+sitemap:
+ priority: 1.0
---
diff --git a/content/ko/docs/concepts/architecture/nodes.md b/content/ko/docs/concepts/architecture/nodes.md
index eab7eb0de29fc..d27845e654819 100644
--- a/content/ko/docs/concepts/architecture/nodes.md
+++ b/content/ko/docs/concepts/architecture/nodes.md
@@ -12,7 +12,7 @@ weight: 10
{{< glossary_tooltip text="파드" term_id="pod" >}}를
실행하는데 필요한 서비스가 포함되어 있다.
-일반적으로 클러스터에는 여러개의 노드가 있으며, 학습 또는 리소스가 제한되는
+일반적으로 클러스터에는 여러 개의 노드가 있으며, 학습 또는 리소스가 제한되는
환경에서는 하나만 있을 수도 있다.
노드의 [컴포넌트](/ko/docs/concepts/overview/components/#노드-컴포넌트)에는
diff --git a/content/ko/docs/concepts/configuration/configmap.md b/content/ko/docs/concepts/configuration/configmap.md
index 0c89f83bdf1e6..d26f800f8c7b3 100644
--- a/content/ko/docs/concepts/configuration/configmap.md
+++ b/content/ko/docs/concepts/configuration/configmap.md
@@ -16,7 +16,6 @@ weight: 20
{{< /caution >}}
-
## 사용 동기
@@ -24,25 +23,38 @@ weight: 20
예를 들어, 자신의 컴퓨터(개발용)와 클라우드(실제 트래픽 처리)에서
실행할 수 있는 애플리케이션을 개발한다고 가정해보자.
-`DATABASE_HOST` 라는
-환경 변수를 찾기 위해 코드를 작성한다. 로컬에서는 해당 변수를
-`localhost` 로 설정한다. 클라우드에서는, 데이터베이스
-컴포넌트를 클러스터에 노출하는 쿠버네티스 {{< glossary_tooltip text="서비스" term_id="service" >}}를 참조하도록
-설정한다.
-
+`DATABASE_HOST` 라는 환경 변수를 찾기 위해 코드를 작성한다.
+로컬에서는 해당 변수를 `localhost` 로 설정한다. 클라우드에서는, 데이터베이스
+컴포넌트를 클러스터에 노출하는 쿠버네티스 {{< glossary_tooltip text="서비스" term_id="service" >}}를
+참조하도록 설정한다.
이를 통해 클라우드에서 실행 중인 컨테이너 이미지를 가져와
필요한 경우 정확히 동일한 코드를 로컬에서 디버깅할 수 있다.
+컨피그맵은 많은 양의 데이터를 보유하도록 설계되지 않았다. 컨피그맵에 저장된
+데이터는 1MiB를 초과할 수 없다. 이 제한보다 큰 설정을
+저장해야 하는 경우, 볼륨을 마운트하는 것을 고려하거나 별도의
+데이터베이스 또는 파일 서비스를 사용할 수 있다.
+
## 컨피그맵 오브젝트
컨피그맵은 다른 오브젝트가 사용할 구성을 저장할 수 있는
API [오브젝트](/ko/docs/concepts/overview/working-with-objects/kubernetes-objects/)이다.
-`spec` 이 있는 대부분의 쿠버네티스 오브젝트와 달리,
-컨피그맵에는 항목(키)과 해당 값을 저장하는 `data` 섹션이 있다.
+`spec` 이 있는 대부분의 쿠버네티스 오브젝트와 달리, 컨피그맵에는 `data` 및 `binaryData`
+필드가 있다. 이러한 필드는 키-값 쌍을 값으로 허용한다. `data` 필드와
+`binaryData` 는 모두 선택 사항이다. `data` 필드는
+UTF-8 바이트 시퀀스를 포함하도록 설계되었으며 `binaryData` 필드는 바이너리 데이터를
+포함하도록 설계되었다.
컨피그맵의 이름은 유효한
[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름)이어야 한다.
+`data` 또는 `binaryData` 필드 아래의 각 키는
+영숫자 문자, `-`, `_` 또는 `.` 으로 구성되어야 한다. `data` 에 저장된 키는
+`binaryData` 필드의 키와 겹치지 않아야 한다.
+
+v1.19부터 컨피그맵 정의에 `immutable` 필드를 추가하여
+[변경할 수 없는 컨피그맵](#configmap-immutable)을 만들 수 있다.
+
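+다음은 최소한의 스케치이다. As a minimal sketch of the fields described above (the ConfigMap name and keys are illustrative, not from the original), a single ConfigMap can combine `data`, `binaryData`, and `immutable`; `binaryData` values are base64-encoded, and keys under the two fields must not overlap:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example-config   # hypothetical name
+data:
+  # UTF-8 string values
+  database_host: "localhost"
+binaryData:
+  # base64-encoded binary payload (decodes to "hello")
+  blob.bin: aGVsbG8=
+# marks this ConfigMap immutable (v1.19+)
+immutable: true
+```
+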
## 컨피그맵과 파드
컨피그맵을 참조하는 파드 `spec` 을 작성하고 컨피그맵의 데이터를
@@ -62,7 +74,7 @@ data:
# 속성과 비슷한 키; 각 키는 간단한 값으로 매핑됨
player_initial_lives: "3"
ui_properties_file_name: "user-interface.properties"
- #
+
# 파일과 비슷한 키
game.properties: |
enemy.types=aliens,monsters
@@ -94,6 +106,7 @@ data:
기술을 사용하여 다른 네임스페이스의 컨피그맵에 접근할 수도 있다.
다음은 `game-demo` 의 값을 사용하여 파드를 구성하는 파드 예시이다.
+
```yaml
apiVersion: v1
kind: Pod
@@ -102,7 +115,8 @@ metadata:
spec:
containers:
- name: demo
- image: game.example/demo-game
+ image: alpine
+ command: ["sleep", "3600"]
env:
# 환경 변수 정의
- name: PLAYER_INITIAL_LIVES # 참고로 여기서는 컨피그맵의 키 이름과
@@ -134,7 +148,6 @@ spec:
path: "user-interface.properties"
```
-
컨피그맵은 단일 라인 속성(single line property) 값과 멀티 라인의 파일과 비슷한(multi-line file-like) 값을
구분하지 않는다.
더 중요한 것은 파드와 다른 오브젝트가 이러한 값을 소비하는 방식이다.
@@ -153,7 +166,6 @@ spec:
노출되지 않고, 시스템의 다른 부분에서도 사용할 수 있다. 예를 들어,
컨피그맵은 시스템의 다른 부분이 구성을 위해 사용해야 하는 데이터를 보유할 수 있다.
-{{< note >}}
컨피그맵을 사용하는 가장 일반적인 방법은 동일한 네임스페이스의
파드에서 실행되는 컨테이너에 대한 설정을 구성하는 것이다. 컨피그맵을
별도로 사용할 수도 있다.
@@ -162,16 +174,23 @@ spec:
컨피그맵에 기반한 동작을 조정하는 {{< glossary_tooltip text="애드온" term_id="addons" >}}이나
{{< glossary_tooltip text="오퍼레이터" term_id="operator-pattern" >}}를
사용할 수도 있다.
-{{< /note >}}
### 파드에서 컨피그맵을 파일로 사용하기
파드의 볼륨에서 컨피그맵을 사용하려면 다음을 수행한다.
-1. 컨피그맵을 생성하거나 기존 컨피그맵을 사용한다. 여러 파드가 동일한 컨피그맵을 참조할 수 있다.
-1. 파드 정의를 수정해서 `.spec.volumes[]` 아래에 볼륨을 추가한다. 볼륨 이름은 원하는 대로 정하고, 컨피그맵 오브젝트를 참조하도록 `.spec.volumes[].configMap.name` 필드를 설정한다.
-1. 컨피그맵이 필요한 각 컨테이너에 `.spec.containers[].volumeMounts[]` 를 추가한다. `.spec.containers[].volumeMounts[].readOnly = true` 를 설정하고 컨피그맵이 연결되기를 원하는 곳에 사용하지 않은 디렉터리 이름으로 `.spec.containers[].volumeMounts[].mountPath` 를 지정한다.
-1. 프로그램이 해당 디렉터리에서 파일을 찾도록 이미지 또는 커맨드 라인을 수정한다. 컨피그맵의 `data` 맵 각 키는 `mountPath` 아래의 파일 이름이 된다.
+1. 컨피그맵을 생성하거나 기존 컨피그맵을 사용한다. 여러 파드가 동일한 컨피그맵을
+ 참조할 수 있다.
+1. 파드 정의를 수정해서 `.spec.volumes[]` 아래에 볼륨을 추가한다. 볼륨 이름은
+ 원하는 대로 정하고, 컨피그맵 오브젝트를 참조하도록 `.spec.volumes[].configMap.name`
+ 필드를 설정한다.
+1. 컨피그맵이 필요한 각 컨테이너에 `.spec.containers[].volumeMounts[]` 를
+ 추가한다. `.spec.containers[].volumeMounts[].readOnly = true` 를 설정하고
+ 컨피그맵이 연결되기를 원하는 곳에 사용하지 않은 디렉터리 이름으로
+ `.spec.containers[].volumeMounts[].mountPath` 를 지정한다.
+1. 프로그램이 해당 디렉터리에서 파일을 찾도록 이미지 또는 커맨드 라인을
+ 수정한다. 컨피그맵의 `data` 맵 각 키는 `mountPath` 아래의
+ 파일 이름이 된다.
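+
+As a compact sketch of the four steps above (the mount path is illustrative, and the `game-demo` ConfigMap is assumed from the earlier example), a fuller example follows below:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-demo-pod   # hypothetical name
+spec:
+  containers:
+    - name: demo
+      image: alpine
+      command: ["sleep", "3600"]
+      volumeMounts:
+        # step 3: mount the volume read-only at an unused directory
+        - name: config
+          mountPath: "/config"
+          readOnly: true
+  volumes:
+    # step 2: a volume that references the ConfigMap object
+    - name: config
+      configMap:
+        name: game-demo
+```
+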
다음은 볼륨에 컨피그맵을 마운트하는 파드의 예시이다.
@@ -225,12 +244,14 @@ kubelet은 모든 주기적인 동기화에서 마운트된 컨피그맵이 최
데이터 변경을 방지하면 다음과 같은 이점이 있다.
- 애플리케이션 중단을 일으킬 수 있는 우발적(또는 원하지 않는) 업데이트로부터 보호
-- immutable로 표시된 컨피그맵에 대한 감시를 중단하여, kube-apiserver의 부하를 크게 줄임으로써 클러스터의 성능을 향상시킴
+- immutable로 표시된 컨피그맵에 대한 감시를 중단하여, kube-apiserver의 부하를 크게 줄임으로써
+ 클러스터의 성능을 향상시킴
+
+이 기능은 `ImmutableEphemeralVolumes`
+[기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)에 의해 제어된다.
+`immutable` 필드를 `true` 로 설정하여 변경할 수 없는 컨피그맵을 생성할 수 있다.
+다음은 예시이다.
-이 기능은 v1.19부터 기본적으로 활성화된 `ImmutableEphemeralVolumes` [기능
-게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)에
-의해 제어된다. `immutable` 필드를 `true` 로 설정하여
-변경할 수 없는 컨피그맵을 생성할 수 있다. 다음은 예시이다.
```yaml
apiVersion: v1
kind: ConfigMap
@@ -242,15 +263,14 @@ immutable: true
```
{{< note >}}
-컨피그맵 또는 시크릿을 immutable로 표시하면, 이 변경 사항을 되돌리거나
-`data` 필드 내용을 변경할 수 _없다_. 컨피그맵만 삭제하고 다시 작성할 수 있다.
-기존 파드는 삭제된 컨피그맵에 대한 마운트 지점을 유지하며, 이러한 파드를 다시 작성하는
-것을 권장한다.
+컨피그맵을 immutable로 표시하면, 이 변경 사항을 되돌리거나
+`data` 또는 `binaryData` 필드 내용을 변경할 수 _없다_. 컨피그맵만
+삭제하고 다시 작성할 수 있다. 기존 파드는 삭제된 컨피그맵에 대한 마운트 지점을
+유지하므로, 이러한 파드를 다시 작성하는 것을 권장한다.
{{< /note >}}
## {{% heading "whatsnext" %}}
-
* [시크릿](/ko/docs/concepts/configuration/secret/)에 대해 읽어본다.
* [컨피그맵을 사용하도록 파드 구성하기](/docs/tasks/configure-pod-container/configure-pod-configmap/)를 읽어본다.
* 코드를 구성에서 분리하려는 동기를 이해하려면
diff --git a/content/ko/docs/concepts/configuration/manage-resources-containers.md b/content/ko/docs/concepts/configuration/manage-resources-containers.md
index 1b8a3d1393a71..1f01481270827 100644
--- a/content/ko/docs/concepts/configuration/manage-resources-containers.md
+++ b/content/ko/docs/concepts/configuration/manage-resources-containers.md
@@ -47,6 +47,13 @@ feature:
또는 강제적(시스템이 컨테이너가 제한을 초과하지 않도록 방지)으로 구현할 수 있다. 런타임마다
다른 방식으로 동일한 제약을 구현할 수 있다.
+{{< note >}}
+컨테이너가 자체 메모리 제한을 지정하지만, 메모리 요청을 지정하지 않는 경우, 쿠버네티스는
+제한과 일치하는 메모리 요청을 자동으로 할당한다. 마찬가지로, 컨테이너가 자체 CPU 제한을
+지정하지만, CPU 요청을 지정하지 않는 경우, 쿠버네티스는 제한과 일치하는 CPU 요청을 자동으로
+할당한다.
+{{< /note >}}
+
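+A minimal sketch of the note above (the Pod name, image, and values are illustrative): the container sets only `limits`, so Kubernetes automatically assigns matching `requests`.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: limits-only-demo   # hypothetical name
+spec:
+  containers:
+    - name: app
+      image: nginx
+      resources:
+        limits:
+          memory: "256Mi"
+          cpu: "500m"
+        # no requests given: requests default to the limits above
+```
+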
## 리소스 타입
*CPU* 와 *메모리* 는 각각 *리소스 타입* 이다. 리소스 타입에는 기본 단위가 있다.
diff --git a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md
index 0fe4bb9b9daa9..662ac71522d05 100644
--- a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md
+++ b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md
@@ -6,7 +6,7 @@ weight: 30
-이 페이지는 kubelet이 관리하는 컨테이너가 관리 라이프사이클 동안의 이벤트에 의해 발동되는 코드를 실행하기 위해서
+이 페이지는 kubelet이 관리하는 컨테이너가 관리 라이프사이클 동안의 이벤트에 의해 발동되는 코드를 실행하기 위해서
컨테이너 라이프사이클 훅 프레임워크를 사용하는 방법에 대해서 설명한다.
@@ -16,9 +16,9 @@ weight: 30
## 개요
-Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로그래밍 언어 프레임워크와 유사하게,
+Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로그래밍 언어 프레임워크와 유사하게,
쿠버네티스도 컨테이너에 라이프사이클 훅을 제공한다.
-훅은 컨테이너가 관리 라이프사이클의 이벤트를 인지하고 상응하는
+훅은 컨테이너가 관리 라이프사이클의 이벤트를 인지하고 상응하는
라이프사이클 훅이 실행될 때 핸들러에 구현된 코드를 실행할 수 있게 한다.
## 컨테이너 훅
@@ -33,12 +33,12 @@ Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로
`PreStop`
-이 훅은 API 요청이나 활성 프로브(liveness probe) 실패, 선점, 자원 경합 등의 관리 이벤트로 인해 컨테이너가 종료되기 직전에 호출된다. 컨테이너가 이미 terminated 또는 completed 상태인 경우에는 preStop 훅 요청이 실패한다.
-그것은 동기적인 동작을 의미하는, 차단(blocking)을 수행하고 있으므로,
-컨테이너를 삭제하기 위한 호출이 전송되기 전에 완료되어야한다.
+이 훅은 API 요청이나 활성 프로브(liveness probe) 실패, 선점, 자원 경합 등의 관리 이벤트로 인해 컨테이너가 종료되기 직전에 호출된다. 컨테이너가 이미 terminated 또는 completed 상태인 경우에는 preStop 훅 요청이 실패한다.
+그것은 동기적인 동작을 의미하는, 차단(blocking)을 수행하고 있으므로,
+컨테이너를 중지하기 위한 신호가 전송되기 전에 완료되어야 한다.
파라미터는 핸들러에 전달되지 않는다.
-종료 동작에 더 자세한 대한 설명은
+종료 동작에 더 자세한 대한 설명은
[파드의 종료](/ko/docs/concepts/workloads/pods/pod-lifecycle/#파드의-종료)에서 찾을 수 있다.
### 훅 핸들러 구현
@@ -52,34 +52,46 @@ Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로
### 훅 핸들러 실행
-컨테이너 라이프사이클 관리 훅이 호출되면,
-쿠버네티스 관리 시스템은 해당 훅이 등록된 컨테이너에서 핸들러를 실행한다.
+컨테이너 라이프사이클 관리 훅이 호출되면,
+쿠버네티스 관리 시스템은 훅 동작에 따라 핸들러를 실행하고,
+`exec` 와 `tcpSocket` 은 컨테이너에서 실행되고, `httpGet` 은 kubelet 프로세스에 의해 실행된다.
-훅 핸들러 호출은 해당 컨테이너를 포함하고 있는 파드의 맥락과 동기적으로 동작한다.
-이것은 `PostStart` 훅에 대해서,
+훅 핸들러 호출은 해당 컨테이너를 포함하고 있는 파드의 컨텍스트와 동기적으로 동작한다.
+이것은 `PostStart` 훅에 대해서,
훅이 컨테이너 엔트리포인트와는 비동기적으로 동작함을 의미한다.
-그러나, 만약 해당 훅이 너무 오래 동작하거나 어딘가에 걸려 있다면,
+그러나, 만약 해당 훅이 너무 오래 동작하거나 어딘가에 걸려 있다면,
컨테이너는 `running` 상태에 이르지 못한다.
-이러한 동작은 `PreStop` 훅에 대해서도 비슷하게 일어난다.
-만약 훅이 실행되던 도중에 매달려 있다면,
-파드의 단계(phase)는 `Terminating` 상태에 머물고 해당 훅은 파드의 `terminationGracePeriodSeconds`가 끝난 다음에 종료된다.
+`PreStop` 훅은 컨테이너 중지 신호에서 비동기적으로
+실행되지 않는다. 훅은 신호를 보내기 전에 실행을
+완료해야 한다.
+실행 중에 `PreStop` 훅이 중단되면,
+파드의 단계는 `Terminating` 이며 `terminationGracePeriodSeconds` 가
+만료된 후 파드가 종료될 때까지 남아 있다.
+이 유예 기간은 `PreStop` 훅이 실행되고 컨테이너가
+정상적으로 중지되는 데 걸리는 총 시간에 적용된다.
+예를 들어, `terminationGracePeriodSeconds` 가 60이고, 훅이
+완료되는 데 55초가 걸리고, 컨테이너가 신호를 수신한 후
+정상적으로 중지하는 데 10초가 걸리는 경우, `terminationGracePeriodSeconds` 가
+이 두 가지 일이 발생하는 데 걸리는 총 시간(55+10)보다 작기 때문에,
+컨테이너는 정상적으로 중지되기 전에 종료된다.
+
만약 `PostStart` 또는 `PreStop` 훅이 실패하면,
그것은 컨테이너를 종료시킨다.
사용자는 훅 핸들러를 가능한 한 가볍게 만들어야 한다.
-그러나, 컨테이너가 멈추기 전 상태를 저장하는 것과 같이,
+그러나, 컨테이너가 멈추기 전 상태를 저장하는 것과 같이,
오래 동작하는 커맨드가 의미 있는 경우도 있다.
### 훅 전달 보장
-훅 전달은 *한 번 이상* 으로 의도되어 있는데,
-이는 `PostStart` 또는 `PreStop`와 같은 특정 이벤트에 대해서,
+훅 전달은 *한 번 이상* 으로 의도되어 있는데,
+이는 `PostStart` 또는 `PreStop`와 같은 특정 이벤트에 대해서,
훅이 여러 번 호출될 수 있다는 것을 의미한다.
이것을 올바르게 처리하는 것은 훅의 구현에 달려 있다.
일반적으로, 전달은 단 한 번만 이루어진다.
-예를 들어, HTTP 훅 수신기가 다운되어 트래픽을 받을 수 없는 경우에도,
+예를 들어, HTTP 훅 수신기가 다운되어 트래픽을 받을 수 없는 경우에도,
재전송을 시도하지 않는다.
그러나, 드문 경우로, 이중 전달이 발생할 수 있다.
예를 들어, 훅을 전송하는 도중에 kubelet이 재시작된다면,
@@ -88,8 +100,8 @@ Kubelet이 구동된 후에 해당 훅은 재전송될 것이다.
### 디버깅 훅 핸들러
훅 핸들러의 로그는 파드 이벤트로 노출되지 않는다.
-만약 핸들러가 어떠한 이유로 실패하면, 핸들러는 이벤트를 방송한다.
-`PostStart`의 경우, 이것은 `FailedPostStartHook` 이벤트이며,
+만약 핸들러가 어떠한 이유로 실패하면, 핸들러는 이벤트를 방송한다.
+`PostStart`의 경우, 이것은 `FailedPostStartHook` 이벤트이며,
`PreStop`의 경우, 이것은 `FailedPreStopHook` 이벤트이다.
이 이벤트는 `kubectl describe pod <파드_이름>`를 실행하면 볼 수 있다.
다음은 이 커맨드 실행을 통한 이벤트 출력의 몇 가지 예다.
@@ -117,5 +129,3 @@ Events:
* [컨테이너 환경](/ko/docs/concepts/containers/container-environment/)에 대해 더 배우기.
* [컨테이너 라이프사이클 이벤트에 핸들러 부착](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)
실습 경험하기.
-
-
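+
+As a sketch tying this page's pieces together (the image, command strings, and timings are illustrative), a Pod can register `postStart` and `preStop` exec handlers; per the grace-period discussion above, the `preStop` handler must finish within `terminationGracePeriodSeconds`:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: lifecycle-demo   # hypothetical name
+spec:
+  terminationGracePeriodSeconds: 60
+  containers:
+    - name: app
+      image: nginx
+      lifecycle:
+        postStart:
+          # runs asynchronously with the container entrypoint
+          exec:
+            command: ["/bin/sh", "-c", "echo started > /tmp/started"]
+        preStop:
+          # must complete before the stop signal is sent
+          exec:
+            command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
+```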
diff --git a/content/ko/docs/concepts/containers/runtime-class.md b/content/ko/docs/concepts/containers/runtime-class.md
index 743027b0d44a6..315b83ce9916e 100644
--- a/content/ko/docs/concepts/containers/runtime-class.md
+++ b/content/ko/docs/concepts/containers/runtime-class.md
@@ -178,7 +178,7 @@ PodOverhead를 사용하려면, PodOverhead [기능 게이트](/ko/docs/referenc
## {{% heading "whatsnext" %}}
-- [런타임클래스 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md)
-- [런타임클래스 스케줄링 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md)
+- [런타임클래스 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
+- [런타임클래스 스케줄링 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
- [파드 오버헤드](/ko/docs/concepts/configuration/pod-overhead/) 개념에 대해 읽기
- [파드 오버헤드 기능 설계](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
diff --git a/content/ko/docs/concepts/extend-kubernetes/_index.md b/content/ko/docs/concepts/extend-kubernetes/_index.md
index 4bab0ad21439d..ec09e00446182 100644
--- a/content/ko/docs/concepts/extend-kubernetes/_index.md
+++ b/content/ko/docs/concepts/extend-kubernetes/_index.md
@@ -91,7 +91,7 @@ kubectl에서
1. 사용자는 종종 `kubectl`을 사용하여 쿠버네티스 API와 상호 작용한다. [Kubectl 플러그인](/ko/docs/tasks/extend-kubectl/kubectl-plugins/)은 kubectl 바이너리를 확장한다. 개별 사용자의 로컬 환경에만 영향을 미치므로 사이트 전체 정책을 적용할 수는 없다.
2. apiserver는 모든 요청을 처리한다. apiserver의 여러 유형의 익스텐션 포인트는 요청을 인증하거나, 콘텐츠를 기반으로 요청을 차단하거나, 콘텐츠를 편집하고, 삭제 처리를 허용한다. 이 내용은 [API 접근 익스텐션](#api-접근-익스텐션) 섹션에 설명되어 있다.
-3. apiserver는 다양한 종류의 *리소스* 를 제공한다. `pods`와 같은 *빌트인 리소스 종류* 는 쿠버네티스 프로젝트에 의해 정의되며 변경할 수 없다. 직접 정의한 리소스를 추가할 수도 있고, [커스텀 리소스](#사용자-정의-유형) 섹션에 설명된대로 *커스텀 리소스* 라고 부르는 다른 프로젝트에서 정의한 리소스를 추가할 수도 있다. 커스텀 리소스는 종종 API 접근 익스텐션과 함께 사용된다.
+3. apiserver는 다양한 종류의 *리소스* 를 제공한다. `pods`와 같은 *빌트인 리소스 종류* 는 쿠버네티스 프로젝트에 의해 정의되며 변경할 수 없다. 직접 정의한 리소스를 추가할 수도 있고, [커스텀 리소스](#사용자-정의-유형) 섹션에 설명된 대로 *커스텀 리소스* 라고 부르는 다른 프로젝트에서 정의한 리소스를 추가할 수도 있다. 커스텀 리소스는 종종 API 접근 익스텐션과 함께 사용된다.
4. 쿠버네티스 스케줄러는 파드를 배치할 노드를 결정한다. 스케줄링을 확장하는 몇 가지 방법이 있다. 이들은 [스케줄러 익스텐션](#스케줄러-익스텐션) 섹션에 설명되어 있다.
5. 쿠버네티스의 많은 동작은 API-Server의 클라이언트인 컨트롤러(Controller)라는 프로그램으로 구현된다. 컨트롤러는 종종 커스텀 리소스와 함께 사용된다.
6. kubelet은 서버에서 실행되며 파드가 클러스터 네트워크에서 자체 IP를 가진 가상 서버처럼 보이도록 한다. [네트워크 플러그인](#네트워크-플러그인)을 사용하면 다양한 파드 네트워킹 구현이 가능하다.
diff --git a/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
index df26abccc5af2..76c2bdd003792 100644
--- a/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
+++ b/content/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
@@ -7,21 +7,17 @@ weight: 10
-{{< feature-state state="alpha" >}}
-{{< caution >}}알파 기능은 빨리 변경될 수 있다. {{< /caution >}}
-
쿠버네티스의 네트워크 플러그인은 몇 가지 종류가 있다.
-* CNI 플러그인: 상호 운용성을 위해 설계된 appc/CNI 명세를 준수한다.
+* CNI 플러그인: 상호 운용성을 위해 설계된 [컨테이너 네트워크 인터페이스](https://github.com/containernetworking/cni)(CNI) 명세를 준수한다.
+* 쿠버네티스는 CNI 명세의 [v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) 릴리스를 따른다.
* Kubenet 플러그인: `bridge` 와 `host-local` CNI 플러그인을 사용하여 기본 `cbr0` 구현한다.
-
-
## 설치
-kubelet에는 단일 기본 네트워크 플러그인과 전체 클러스터에 공통된 기본 네트워크가 있다. 플러그인은 시작할 때 플러그인을 검색하고, 찾은 것을 기억하며, 파드 라이프사이클에서 적절한 시간에 선택한 플러그인을 실행한다(rkt는 자체 CNI 플러그인을 관리하므로 Docker에만 해당됨). 플러그인 사용 시 명심해야 할 두 가지 Kubelet 커맨드라인 파라미터가 있다.
+kubelet에는 단일 기본 네트워크 플러그인과 전체 클러스터에 공통된 기본 네트워크가 있다. 플러그인은 시작할 때 플러그인을 검색하고, 찾은 것을 기억하며, 파드 라이프사이클에서 적절한 시간에 선택한 플러그인을 실행한다(CRI는 자체 CNI 플러그인을 관리하므로 도커에만 해당됨). 플러그인 사용 시 명심해야 할 두 가지 Kubelet 커맨드라인 파라미터가 있다.
* `cni-bin-dir`: Kubelet은 시작할 때 플러그인에 대해 이 디렉터리를 검사한다.
* `network-plugin`: `cni-bin-dir` 에서 사용할 네트워크 플러그인. 플러그인 디렉터리에서 검색한 플러그인이 보고된 이름과 일치해야 한다. CNI 플러그인의 경우, 이는 단순히 "cni"이다.
@@ -30,7 +26,7 @@ kubelet에는 단일 기본 네트워크 플러그인과 전체 클러스터에
파드 네트워킹을 구성하고 정리하기 위해 [`NetworkPlugin` 인터페이스](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go)를 제공하는 것 외에도, 플러그인은 kube-proxy에 대한 특정 지원이 필요할 수 있다. iptables 프록시는 분명히 iptables에 의존하며, 플러그인은 컨테이너 트래픽이 iptables에 사용 가능하도록 해야 한다. 예를 들어, 플러그인이 컨테이너를 리눅스 브릿지에 연결하는 경우, 플러그인은 `net/bridge/bridge-nf-call-iptables` sysctl을 `1` 로 설정하여 iptables 프록시가 올바르게 작동하는지 확인해야 한다. 플러그인이 리눅스 브리지를 사용하지 않는 경우(그러나 Open vSwitch나 다른 메커니즘과 같은 기능을 사용함) 컨테이너 트래픽이 프록시에 대해 적절하게 라우팅되도록 해야 한다.
-kubelet 네트워크 플러그인이 지정되지 않은 경우, 기본적으로 `noop` 플러그인이 사용되며, `net/bridge/bridge-nf-call-iptables=1` 을 설정하여 간단한 구성(브릿지가 있는 Docker 등)이 iptables 프록시에서 올바르게 작동하도록 한다.
+kubelet 네트워크 플러그인이 지정되지 않은 경우, 기본적으로 `noop` 플러그인이 사용되며, `net/bridge/bridge-nf-call-iptables=1` 을 설정하여 간단한 구성(브릿지가 있는 도커 등)이 iptables 프록시에서 올바르게 작동하도록 한다.
### CNI
@@ -146,7 +142,7 @@ Kubenet은 `cbr0` 라는 리눅스 브리지를 만들고 각 쌍의 호스트
최상의 네트워킹 성능을 얻으려면 MTU를 항상 올바르게 구성해야 한다. 네트워크 플러그인은 일반적으로 합리적인 MTU를
유추하려고 시도하지만, 때로는 로직에 따라 최적의 MTU가 지정되지 않는다. 예를 들어,
-Docker 브리지나 다른 인터페이스에 작은 MTU가 지정되어 있으면, kubenet은 현재 해당 MTU를 선택한다. 또는
+도커 브리지나 다른 인터페이스에 작은 MTU가 지정되어 있으면, kubenet은 현재 해당 MTU를 선택한다. 또는
IPSEC 캡슐화를 사용하는 경우, MTU를 줄여야 하며, 이 계산은 대부분의
네트워크 플러그인에서 범위를 벗어난다.
@@ -161,10 +157,3 @@ AWS에서 `eth0` MTU는 일반적으로 9001이므로, `--network-plugin-mtu=900
* `--network-plugin=cni` 는 `--cni-bin-dir`(기본값 `/opt/cni/bin`)에 있는 실제 CNI 플러그인 바이너리와 `--cni-conf-dir`(기본값 `/etc/cni/net.d`)에 있는 CNI 플러그인 구성과 함께 `cni` 네트워크 플러그인을 사용하도록 지정한다.
* `--network-plugin=kubenet` 은 `/opt/cni/bin` 또는 `cni-bin-dir` 에 있는 CNI `bridge` 및 `host-local` 플러그인과 함께 kubenet 네트워크 플러그인을 사용하도록 지정한다.
* 현재 kubenet 네트워크 플러그인에서만 사용하는 `--network-plugin-mtu=9001` 은 사용할 MTU를 지정한다.
-
-
-
-## {{% heading "whatsnext" %}}
-
-
-
diff --git a/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md
index d214a2c0cf3dc..3fe6451755037 100644
--- a/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md
+++ b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md
@@ -92,7 +92,7 @@ kubectl에서
1. 사용자는 종종 `kubectl`을 사용하여 쿠버네티스 API와 상호 작용한다. [Kubectl 플러그인](/ko/docs/tasks/extend-kubectl/kubectl-plugins/)은 kubectl 바이너리를 확장한다. 개별 사용자의 로컬 환경에만 영향을 미치므로 사이트 전체 정책을 적용할 수는 없다.
2. apiserver는 모든 요청을 처리한다. apiserver의 여러 유형의 익스텐션 포인트는 요청을 인증하거나, 콘텐츠를 기반으로 요청을 차단하거나, 콘텐츠를 편집하고, 삭제 처리를 허용한다. 이 내용은 [API 접근 익스텐션](/ko/docs/concepts/extend-kubernetes/extend-cluster/#api-접근-익스텐션) 섹션에 설명되어 있다.
-3. apiserver는 다양한 종류의 *리소스* 를 제공한다. `pods`와 같은 *빌트인 리소스 종류* 는 쿠버네티스 프로젝트에 의해 정의되며 변경할 수 없다. 직접 정의한 리소스를 추가할 수도 있고, [커스텀 리소스](/ko/docs/concepts/extend-kubernetes/extend-cluster/#사용자-정의-유형) 섹션에 설명된대로 *커스텀 리소스* 라고 부르는 다른 프로젝트에서 정의한 리소스를 추가할 수도 있다. 커스텀 리소스는 종종 API 접근 익스텐션과 함께 사용된다.
+3. apiserver는 다양한 종류의 *리소스* 를 제공한다. `pods`와 같은 *빌트인 리소스 종류* 는 쿠버네티스 프로젝트에 의해 정의되며 변경할 수 없다. 직접 정의한 리소스를 추가할 수도 있고, [커스텀 리소스](/ko/docs/concepts/extend-kubernetes/extend-cluster/#사용자-정의-유형) 섹션에 설명된 대로 *커스텀 리소스* 라고 부르는 다른 프로젝트에서 정의한 리소스를 추가할 수도 있다. 커스텀 리소스는 종종 API 접근 익스텐션과 함께 사용된다.
4. 쿠버네티스 스케줄러는 파드를 배치할 노드를 결정한다. 스케줄링을 확장하는 몇 가지 방법이 있다. 이들은 [스케줄러 익스텐션](/ko/docs/concepts/extend-kubernetes/#스케줄러-익스텐션) 섹션에 설명되어 있다.
5. 쿠버네티스의 많은 동작은 API-Server의 클라이언트인 컨트롤러(Controller)라는 프로그램으로 구현된다. 컨트롤러는 종종 커스텀 리소스와 함께 사용된다.
6. kubelet은 서버에서 실행되며 파드가 클러스터 네트워크에서 자체 IP를 가진 가상 서버처럼 보이도록 한다. [네트워크 플러그인](/ko/docs/concepts/extend-kubernetes/extend-cluster/#네트워크-플러그인)을 사용하면 다양한 파드 네트워킹 구현이 가능하다.
diff --git a/content/ko/docs/concepts/extend-kubernetes/operator.md b/content/ko/docs/concepts/extend-kubernetes/operator.md
index 9e55dfae04575..f6c80d80679ce 100644
--- a/content/ko/docs/concepts/extend-kubernetes/operator.md
+++ b/content/ko/docs/concepts/extend-kubernetes/operator.md
@@ -122,7 +122,7 @@ kubectl edit SampleDB/example-database # 일부 설정을 수동으로 변경하
* [kubebuilder](https://book.kubebuilder.io/) 사용하기
* 웹훅(WebHook)과 함께 [Metacontroller](https://metacontroller.app/)를
사용하여 직접 구현하기
- * [오퍼레이터 프레임워크](https://github.com/operator-framework/getting-started) 사용하기
+ * [오퍼레이터 프레임워크](https://operatorframework.io) 사용하기
* 다른 사람들이 사용할 수 있도록 자신의 오퍼레이터를 [게시](https://operatorhub.io/)하기
* 오퍼레이터 패턴을 소개한 [CoreOS 원본 기사](https://coreos.com/blog/introducing-operators.html) 읽기
* 오퍼레이터 구축을 위한 모범 사례에 대한 구글 클라우드(Google Cloud)의 [기사](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) 읽기
diff --git a/content/ko/docs/concepts/overview/_index.md b/content/ko/docs/concepts/overview/_index.md
index 0b3df10062a03..353b222785596 100755
--- a/content/ko/docs/concepts/overview/_index.md
+++ b/content/ko/docs/concepts/overview/_index.md
@@ -2,4 +2,6 @@
title: "개요"
weight: 20
description: 쿠버네티스와 그 컴포넌트에 대한 하이-레벨(high-level) 개요를 제공한다.
+sitemap:
+ priority: 0.9
---
diff --git a/content/ko/docs/concepts/overview/kubernetes-api.md b/content/ko/docs/concepts/overview/kubernetes-api.md
index b26efc8085d8b..65662da9d0be9 100644
--- a/content/ko/docs/concepts/overview/kubernetes-api.md
+++ b/content/ko/docs/concepts/overview/kubernetes-api.md
@@ -39,6 +39,7 @@ OpenAPI 규격은 `/openapi/v2` 엔드포인트에서만 제공된다.
다음과 같은 요청 헤더를 사용해서 응답 형식을 요청할 수 있다.
+
Valid request header values for OpenAPI v2 queries
Header
@@ -66,7 +67,6 @@ OpenAPI 규격은 `/openapi/v2` 엔드포인트에서만 제공된다.
serves application/json
-
Valid request header values for OpenAPI v2 queries
쿠버네티스는 주로 클러스터 내부 통신을 위해 대안적인
@@ -100,13 +100,22 @@ API가 시스템 리소스 및 동작에 대한 명확하고 일관된 보기를
수명 종료 및/또는 실험적 API에 대한 접근을
제어할 수 있도록 한다.
-API 버전 수준 정의에 대한 자세한 내용은
-[API 버전 레퍼런스](/ko/docs/reference/using-api/api-overview/#api-버전-규칙)를 참조한다.
-
보다 쉽게 발전하고 API를 확장하기 위해, 쿠버네티스는
[활성화 또는 비활성화](/ko/docs/reference/using-api/api-overview/#api-그룹-활성화-또는-비활성화-하기)가
가능한 [API 그룹](/ko/docs/reference/using-api/api-overview/#api-그룹)을 구현한다.
+API 리소스는 해당 API 그룹, 리소스 유형, 네임스페이스
+(네임스페이스 리소스용) 및 이름으로 구분된다. API 서버는 여러 API 버전을 통해 동일한
+기본 데이터를 제공하고 API 버전 간의 변환을 투명하게
+처리할 수 있다. 이 모든 다른 버전은 실제로
+동일한 리소스의 표현이다. 예를 들어, 동일한 리소스에 대해 두 가지
+버전 `v1` 과 `v1beta1` 이 있다고 가정한다. 그런 다음 `v1beta1` 버전에서
+생성된 오브젝트를 `v1beta1` 또는 `v1` 버전에서 읽고 업데이트하고
+삭제할 수 있다.
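+
+As a purely hypothetical illustration of the paragraph above (`example.k8s.io` and `Widget` are invented for this sketch, and this `v1`/`v1beta1` pair is not a real served resource), the same stored object is exposed as two interchangeable representations:
+
+```yaml
+# created via the v1beta1 representation...
+apiVersion: example.k8s.io/v1beta1
+kind: Widget
+metadata:
+  name: demo
+---
+# ...and readable, updatable, and deletable via the v1 representation
+apiVersion: example.k8s.io/v1
+kind: Widget
+metadata:
+  name: demo
+```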
+
+API 버전 수준 정의에 대한 자세한 내용은
+[API 버전 레퍼런스](/ko/docs/reference/using-api/api-overview/#api-버전-규칙)를 참조한다.
+
## API 확장
쿠버네티스 API는 다음 두 가지 방법 중 하나로 확장할 수 있다.
diff --git a/content/ko/docs/concepts/overview/what-is-kubernetes.md b/content/ko/docs/concepts/overview/what-is-kubernetes.md
index ad083dd49df08..7d6c8c1f0eb6a 100644
--- a/content/ko/docs/concepts/overview/what-is-kubernetes.md
+++ b/content/ko/docs/concepts/overview/what-is-kubernetes.md
@@ -7,6 +7,8 @@ weight: 10
card:
name: concepts
weight: 10
+sitemap:
+ priority: 0.9
---
diff --git a/content/ko/docs/concepts/overview/working-with-objects/namespaces.md b/content/ko/docs/concepts/overview/working-with-objects/namespaces.md
index 3d11774cce13e..905375bdc5628 100644
--- a/content/ko/docs/concepts/overview/working-with-objects/namespaces.md
+++ b/content/ko/docs/concepts/overview/working-with-objects/namespaces.md
@@ -24,9 +24,6 @@ weight: 30
네임스페이스는 클러스터 자원을 ([리소스 쿼터](/ko/docs/concepts/policy/resource-quotas/)를 통해) 여러 사용자 사이에서 나누는 방법이다.
-이후 버전의 쿠버네티스에서는 같은 네임스페이스의 오브젝트는 기본적으로
-동일한 접근 제어 정책을 갖게 된다.
-
동일한 소프트웨어의 다른 버전과 같이 약간 다른 리소스를 분리하기 위해
여러 네임스페이스를 사용할 필요는 없다. 동일한 네임스페이스 내에서 리소스를
구별하기 위해 [레이블](/ko/docs/concepts/overview/working-with-objects/labels/)을
diff --git a/content/ko/docs/concepts/configuration/pod-overhead.md b/content/ko/docs/concepts/scheduling-eviction/pod-overhead.md
similarity index 100%
rename from content/ko/docs/concepts/configuration/pod-overhead.md
rename to content/ko/docs/concepts/scheduling-eviction/pod-overhead.md
diff --git a/content/ko/docs/concepts/services-networking/ingress-controllers.md b/content/ko/docs/concepts/services-networking/ingress-controllers.md
index 5d6dbefafe8b8..72e077de6c286 100644
--- a/content/ko/docs/concepts/services-networking/ingress-controllers.md
+++ b/content/ko/docs/concepts/services-networking/ingress-controllers.md
@@ -44,9 +44,9 @@ kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의
* [NGINX, Inc.](https://www.nginx.com/)는
[쿠버네티스를 위한 NGINX 인그레스 컨트롤러](https://www.nginx.com/products/nginx/kubernetes-ingress-controller)에 대한 지원과 유지 보수를 제공한다.
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/)는 쿠버네티스 인그레스와 같은 유스케이스를 포함하는 서비스 구성을 위한 HTTP 라우터와 리버스 프록시는 사용자 정의 프록시를 빌드하기 위한 라이브러리로 설계되었다.
-* [Traefik](https://github.com/containous/traefik)은
+* [Traefik](https://github.com/traefik/traefik)은
모든 기능([Let's Encrypt](https://letsencrypt.org), secrets, http2, 웹 소켓)을 갖춘 인그레스 컨트롤러로,
- [Containous](https://containo.us/services)에서 상업적인 지원을 제공한다.
+ [Traefik Labs](https://traefik.io)에서 상업적인 지원을 제공한다.
## 여러 인그레스 컨트롤러 사용
diff --git a/content/ko/docs/concepts/services-networking/ingress.md b/content/ko/docs/concepts/services-networking/ingress.md
index 7b552c424f45d..c4d18ada98e03 100644
--- a/content/ko/docs/concepts/services-networking/ingress.md
+++ b/content/ko/docs/concepts/services-networking/ingress.md
@@ -408,7 +408,7 @@ type: kubernetes.io/tls
인그레스에서 시크릿을 참조하면 인그레스 컨트롤러가 TLS를 사용하여
클라이언트에서 로드 밸런서로 채널을 보호하도록 지시한다. 생성한
-TLS 시크릿이 `sslexample.foo.com` 의 정규화 된 도메인 이름(FQDN)이라고
+TLS 시크릿이 `https-example.foo.com` 의 정규화 된 도메인 이름(FQDN)이라고
하는 일반 이름(CN)을 포함하는 인증서에서 온 것인지 확인해야 한다.
{{< codenew file="service/networking/tls-example-ingress.yaml" >}}
diff --git a/content/ko/docs/concepts/services-networking/service.md b/content/ko/docs/concepts/services-networking/service.md
index ee0553a46763f..e79762e9e724b 100644
--- a/content/ko/docs/concepts/services-networking/service.md
+++ b/content/ko/docs/concepts/services-networking/service.md
@@ -234,7 +234,7 @@ DNS 레코드를 구성하고, 라운드-로빈 이름 확인 방식을
이 모드에서는, kube-proxy는 쿠버네티스 마스터의 서비스, 엔드포인트 오브젝트의
추가와 제거를 감시한다. 각 서비스는 로컬 노드에서
포트(임의로 선택됨)를 연다. 이 "프록시 포트"에 대한 모든
-연결은 (엔드포인트를 통해 보고된대로) 서비스의 백엔드 파드 중 하나로 프록시된다.
+연결은 (엔드포인트를 통해 보고된 대로) 서비스의 백엔드 파드 중 하나로 프록시된다.
kube-proxy는 사용할 백엔드 파드를 결정할 때 서비스의
`SessionAffinity` 설정을 고려한다.
@@ -879,6 +879,10 @@ Classic ELB의 연결 드레이닝은
# 이 값은 service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
# 값 보다 작아야한다. 기본값은 5이며, 2와 60 사이여야 한다.
+ service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
+ # 생성된 ELB에 추가할 기존 보안 그룹 목록.
+ # service.beta.kubernetes.io/aws-load-balancer-extra-security-groups 어노테이션과 달리, 이는 이전에 ELB에 할당된 다른 모든 보안 그룹을 대체한다.
+
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
# ELB에 추가될 추가 보안 그룹(security group) 목록
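+
+In context, these annotations sit on a `LoadBalancer` Service. A minimal sketch using the security-group annotation added above (the Service name, selector, and port are illustrative; the security group ID is taken from the example values):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: example-lb   # hypothetical name
+  annotations:
+    # unlike the extra-security-groups annotation, this replaces
+    # any security groups previously assigned to the ELB
+    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
+spec:
+  type: LoadBalancer
+  selector:
+    app: example
+  ports:
+    - port: 80
+```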
diff --git a/content/ko/docs/concepts/storage/persistent-volumes.md b/content/ko/docs/concepts/storage/persistent-volumes.md
index 12f35aa4282ba..4cf129a3e6675 100644
--- a/content/ko/docs/concepts/storage/persistent-volumes.md
+++ b/content/ko/docs/concepts/storage/persistent-volumes.md
@@ -140,7 +140,7 @@ Events:
기본 볼륨 플러그인에서 지원하는 경우 `Recycle` 반환 정책은 볼륨에서 기본 스크럽(`rm -rf /thevolume/*`)을 수행하고 새 클레임에 다시 사용할 수 있도록 한다.
그러나 관리자는 [레퍼런스](/docs/reference/command-line-tools-reference/kube-controller-manager/)에
-설명된대로 쿠버네티스 컨트롤러 관리자 커맨드라인 인자(command line arguments)를
+설명된 대로 쿠버네티스 컨트롤러 관리자 커맨드라인 인자(command line arguments)를
사용하여 사용자 정의 재활용 파드 템플릿을 구성할 수 있다.
사용자 정의 재활용 파드 템플릿에는 아래 예와 같이 `volumes` 명세가
포함되어야 한다.
@@ -168,6 +168,45 @@ spec:
그러나 `volumes` 부분의 사용자 정의 재활용 파드 템플릿에 지정된 특정 경로는 재활용되는 볼륨의 특정 경로로 바뀐다.
+### 퍼시스턴트볼륨 예약
+
+컨트롤 플레인은 클러스터에서 [퍼시스턴트볼륨클레임을 일치하는 퍼시스턴트볼륨에 바인딩](#바인딩)할
+수 있다. 그러나, PVC를 특정 PV에 바인딩하려면, 미리 바인딩해야 한다.
+
+퍼시스턴트볼륨클레임에서 퍼시스턴트볼륨을 지정하여, 특정 PV와 PVC 간의 바인딩을 선언한다.
+퍼시스턴트볼륨이 존재하고 `claimRef` 필드를 통해 퍼시스턴트볼륨클레임을 예약하지 않은 경우, 퍼시스턴트볼륨 및 퍼시스턴트볼륨클레임이 바인딩된다.
+
+바인딩은 노드 선호도(affinity)를 포함하여 일부 볼륨 일치(matching) 기준과 관계없이 발생한다.
+컨트롤 플레인은 여전히 [스토리지 클래스](https://kubernetes.io/ko/docs/concepts/storage/storage-classes/), 접근 모드 및 요청된 스토리지 크기가 유효한지 확인한다.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: foo-pvc
+ namespace: foo
+spec:
+ volumeName: foo-pv
+ ...
+```
+
+이 메서드는 퍼시스턴트볼륨에 대한 바인딩 권한을 보장하지 않는다. 다른 퍼시스턴트볼륨클레임에서 지정한 PV를 사용할 수 있는 경우, 먼저 해당 스토리지 볼륨을 예약해야 한다. PV의 `claimRef` 필드에 관련 퍼시스턴트볼륨클레임을 지정하여 다른 PVC가 바인딩할 수 없도록 한다.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: foo-pv
+spec:
+ claimRef:
+ name: foo-pvc
+ namespace: foo
+ ...
+```
+
+이는 기존 PV를 재사용하는 경우를 포함하여 `claimPolicy` 가
+`Retain` 으로 설정된 퍼시스턴트볼륨을 사용하려는 경우에 유용하다.
+
### 퍼시스턴트 볼륨 클레임 확장
{{< feature-state for_k8s_version="v1.11" state="beta" >}}
diff --git a/content/ko/docs/concepts/workloads/_index.md b/content/ko/docs/concepts/workloads/_index.md
index c898502b39503..a020124b6dc89 100644
--- a/content/ko/docs/concepts/workloads/_index.md
+++ b/content/ko/docs/concepts/workloads/_index.md
@@ -3,4 +3,55 @@ title: "워크로드"
weight: 50
description: >
쿠버네티스에서 배포할 수 있는 가장 작은 컴퓨트 오브젝트인 파드와, 이를 실행하는 데 도움이 되는 하이-레벨(higher-level) 추상화
+no_list: true
---
+
+{{< glossary_definition term_id="workload" length="short" >}}
+워크로드가 단일 컴포넌트이거나 함께 작동하는 여러 컴포넌트이든 관계없이, 쿠버네티스에서는 워크로드를 일련의
+[파드](/ko/docs/concepts/workloads/pods) 집합 내에서 실행한다.
+쿠버네티스에서 파드는 클러스터에서 실행 중인 {{< glossary_tooltip text="컨테이너" term_id="container" >}}
+집합을 나타낸다.
+
+파드에는 정의된 라이프사이클이 있다. 예를 들어, 일단 파드가 클러스터에서 실행되고
+해당 파드가 실행 중인 {{< glossary_tooltip text="노드" term_id="node" >}}에서
+심각한 오류가 발생하게 되면 해당 노드의 모든 파드가 실패한다. 쿠버네티스는 이 수준의 실패를
+최종적으로 처리한다. 나중에 노드가 복구되더라도 새 파드를 만들어야 한다.
+
+그러나, 작업이 훨씬 쉽도록, 각 파드를 직접 관리할 필요는 없도록 만들었다.
+대신, 사용자를 대신하여 파드 집합을 관리하는 _워크로드 리소스_ 를 사용할 수 있다.
+이러한 리소스는 지정한 상태와 일치하도록 올바른 수의 올바른 파드 유형이
+실행되고 있는지 확인하는 {{< glossary_tooltip term_id="controller" text="컨트롤러" >}}를
+구성한다.
+
+이러한 워크로드 리소스에는 다음이 포함된다.
+
+* [디플로이먼트(Deployment)](/ko/docs/concepts/workloads/controllers/deployment/) 및 [레플리카셋(ReplicaSet)](/ko/docs/concepts/workloads/controllers/replicaset/)
+ (레거시 리소스 {{< glossary_tooltip text="레플리케이션컨트롤러(ReplicationController)" term_id="replication-controller" >}}를 대체);
+* [스테이트풀셋(StatefulSet)](/ko/docs/concepts/workloads/controllers/statefulset/);
+* 스토리지 드라이버 또는 네트워크 플러그인과 같은 노드-로컬 기능을 제공하는
+ 파드를 실행하기 위한 [데몬셋(DaemonSet)](/ko/docs/concepts/workloads/controllers/daemonset/)
+* 완료될 때까지 실행되는 작업에 대한
+ [잡(Job)](/ko/docs/concepts/workloads/controllers/job/) 및
+ [크론잡(CronJob)](/ko/docs/concepts/workloads/controllers/cronjob/)
+
+관련성을 찾을 수 있는 두 가지 지원 개념도 있다.
+* [가비지(Garbage) 수집](/ko/docs/concepts/workloads/controllers/garbage-collection/)은 _소유하는 리소스_ 가
+ 제거된 후 클러스터에서 오브젝트를 정리한다.
+* [_time-to-live after finished_ 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/)가
+ 완료된 이후 정의된 시간이 경과되면 잡을 제거한다.
+
+## {{% heading "whatsnext" %}}
+
+각 리소스에 대해 읽을 수 있을 뿐만 아니라, 리소스와 관련된 특정 작업에 대해서도 알아볼 수 있다.
+
+* [디플로이먼트를 사용하여 스테이트리스(stateless) 애플리케이션 실행](/docs/tasks/run-application/run-stateless-application-deployment/)
+* 스테이트풀(stateful) 애플리케이션을 [단일 인스턴스](/ko/docs/tasks/run-application/run-single-instance-stateful-application/)
+ 또는 [복제된 세트](/docs/tasks/run-application/run-replicated-stateful-application/)로 실행
+* [크론잡을 사용하여 자동화된 작업 실행](/ko/docs/tasks/job/automated-tasks-with-cron-jobs/)
+
+일단 애플리케이션이 실행되면, 인터넷에서 [서비스](/ko/docs/concepts/services-networking/service/)로
+사용하거나, 웹 애플리케이션의 경우에만
+[인그레스(Ingress)](/ko/docs/concepts/services-networking/ingress)를 이용하여 사용할 수 있다.
+
+[구성](/ko/docs/concepts/configuration/) 페이지를 방문하여 구성에서 코드를 분리하는 쿠버네티스의
+메커니즘에 대해 알아볼 수도 있다.
diff --git a/content/ko/docs/concepts/workloads/controllers/_index.md b/content/ko/docs/concepts/workloads/controllers/_index.md
index 8193613bfef58..1c4271adf9e8e 100644
--- a/content/ko/docs/concepts/workloads/controllers/_index.md
+++ b/content/ko/docs/concepts/workloads/controllers/_index.md
@@ -1,4 +1,4 @@
---
-title: "컨트롤러"
+title: "워크로드 리소스"
weight: 20
---
diff --git a/content/ko/docs/concepts/workloads/controllers/deployment.md b/content/ko/docs/concepts/workloads/controllers/deployment.md
index 745a33b52d031..0c6a27803132e 100644
--- a/content/ko/docs/concepts/workloads/controllers/deployment.md
+++ b/content/ko/docs/concepts/workloads/controllers/deployment.md
@@ -1015,7 +1015,7 @@ echo $?
### 실패한 디플로이먼트에서의 운영
완료된 디플로이먼트에 적용되는 모든 행동은 실패한 디플로이먼트에도 적용된다.
-디플로이먼트 파드 템플릿에서 여러개의 수정사항을 적용해야하는 경우 스케일 업/다운 하거나, 이전 수정 버전으로 롤백하거나, 일시 중지할 수 있다.
+디플로이먼트 파드 템플릿에서 여러 개의 수정사항을 적용해야하는 경우 스케일 업/다운 하거나, 이전 수정 버전으로 롤백하거나, 일시 중지할 수 있다.
## 정책 초기화
diff --git a/content/ko/docs/concepts/workloads/controllers/garbage-collection.md b/content/ko/docs/concepts/workloads/controllers/garbage-collection.md
index 21bfce8fc2d02..cf6641958e939 100644
--- a/content/ko/docs/concepts/workloads/controllers/garbage-collection.md
+++ b/content/ko/docs/concepts/workloads/controllers/garbage-collection.md
@@ -152,7 +152,7 @@ kubectl delete replicaset my-repset --cascade=false
### 디플로이먼트에 대한 추가 참고
1.7 이전에서는 디플로이먼트와 캐스케이딩 삭제를 사용하면 반드시 `propagationPolicy: Foreground`
-를 사용해서 생성된 레플리카셋 뿐만 아니라 해당 파드도 삭제해야 한다. 만약 이 _propagationPolicy_
+를 사용해서 생성된 레플리카셋뿐만 아니라 해당 파드도 삭제해야 한다. 만약 이 _propagationPolicy_
유형을 사용하지 않는다면, 레플리카셋만 삭제되고 파드는 분리된 상태로 남을 것이다.
더 많은 정보는 [kubeadm/#149](https://github.com/kubernetes/kubeadm/issues/149#issuecomment-284766613)를 본다.
diff --git a/content/ko/docs/concepts/workloads/controllers/job.md b/content/ko/docs/concepts/workloads/controllers/job.md
index 50d446a00eb92..5c505a32860de 100644
--- a/content/ko/docs/concepts/workloads/controllers/job.md
+++ b/content/ko/docs/concepts/workloads/controllers/job.md
@@ -200,7 +200,7 @@ _작업 큐_ 잡은 `.spec.completions` 를 설정하지 않은 상태로 두고
두 번 시작하는 경우가 있다는 점을 참고한다.
`.spec.parallelism` 그리고 `.spec.completions` 를 모두 1보다 크게 지정한다면 한번에
-여러개의 파드가 실행될 수 있다. 따라서 파드는 동시성에 대해서도 관대(tolerant)해야 한다.
+여러 개의 파드가 실행될 수 있다. 따라서 파드는 동시성에 대해서도 관대(tolerant)해야 한다.
### 파드 백오프(backoff) 실패 정책
diff --git a/content/ko/docs/concepts/workloads/pods/_index.md b/content/ko/docs/concepts/workloads/pods/_index.md
index 6eeabfd48e5ec..5c5748e24e8cb 100644
--- a/content/ko/docs/concepts/workloads/pods/_index.md
+++ b/content/ko/docs/concepts/workloads/pods/_index.md
@@ -13,7 +13,7 @@ card:
_파드(Pod)_ 는 쿠버네티스에서 생성하고 관리할 수 있는 배포 가능한 가장 작은 컴퓨팅 단위이다.
_파드_ (고래 떼(pod of whales)나 콩꼬투리(pea pod)와 마찬가지로)는 하나 이상의
-{{< glossary_tooltip text="컨테이너" term_id="container" >}}의 그룹이다.
+[컨테이너](/ko/docs/concepts/containers/)의 그룹이다.
이 그룹은 스토리지/네트워크를 공유하고, 해당 컨테이너를 구동하는 방식에 대한 명세를 갖는다. 파드의 콘텐츠는 항상 함께 배치되고,
함께 스케줄되며, 공유 콘텍스트에서 실행된다. 파드는
애플리케이션 별 "논리 호스트"를 모델링한다. 여기에는 상대적으로 밀접하게 결합된 하나 이상의
diff --git a/content/ko/docs/concepts/workloads/pods/init-containers.md b/content/ko/docs/concepts/workloads/pods/init-containers.md
index 8d0ca2008a7f3..3267f5b04d156 100644
--- a/content/ko/docs/concepts/workloads/pods/init-containers.md
+++ b/content/ko/docs/concepts/workloads/pods/init-containers.md
@@ -26,8 +26,8 @@ weight: 40
* 초기화 컨테이너는 항상 완료를 목표로 실행된다.
* 각 초기화 컨테이너는 다음 초기화 컨테이너가 시작되기 전에 성공적으로 완료되어야 한다.
-만약 파드를 위한 초기화 컨테이너가 실패한다면, 쿠버네티스는 초기화 컨테이너가 성공할 때까지 파드를
-반복적으로 재시작한다. 그러나, 만약 파드의 `restartPolicy` 를 절대 하지 않음(Never)으로 설정했다면, 파드는 재시작되지 않는다.
+만약 파드의 초기화 컨테이너가 실패하면, kubelet은 초기화 컨테이너가 성공할 때까지 반복적으로 재시작한다.
+그러나, 만약 파드의 `restartPolicy` 를 절대 하지 않음(Never)으로 설정하고, 해당 파드를 시작하는 동안 초기화 컨테이너가 실패하면, 쿠버네티스는 전체 파드를 실패한 것으로 처리한다.
컨테이너를 초기화 컨테이너로 지정하기 위해서는,
파드 스펙에 앱 `containers` 배열과 나란히 `initContainers` 필드를
diff --git a/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md
index e0579af164b46..00aed3c13d8b5 100644
--- a/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md
+++ b/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md
@@ -13,7 +13,8 @@ weight: 30
파드가 실행되는 동안, kubelet은 일종의 오류를 처리하기 위해 컨테이너를 다시
시작할 수 있다. 파드 내에서, 쿠버네티스는 다양한 컨테이너
-[상태](#컨테이너-상태)와 핸들을 추적한다.
+[상태](#컨테이너-상태)를 추적하고 파드를 다시 정상 상태로 만들기 위해 취할 조치를
+결정한다.
쿠버네티스 API에서 파드는 명세와 실제 상태를 모두 가진다.
파드 오브젝트의 상태는 일련의 [파드 조건](#파드의-조건)으로 구성된다.
@@ -314,7 +315,7 @@ kubelet은 실행 중인 컨테이너들에 대해서 선택적으로 세 가지
### 언제 스타트업 프로브를 사용해야 하는가?
-{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.18" state="beta" >}}
스타트업 프로브는 서비스를 시작하는 데 오랜 시간이 걸리는 컨테이너가 있는
파드에 유용하다. 긴 활성 간격을 설정하는 대신, 컨테이너가 시작될 때
@@ -342,7 +343,9 @@ kubelet은 실행 중인 컨테이너들에 대해서 선택적으로 세 가지
종료를 시도한다.
일반적으로, 컨테이너 런타임은 각 컨테이너의 기본 프로세스에 TERM 신호를
-전송한다. 일단 유예 기간이 만료되면, KILL 시그널이 나머지 프로세스로
+전송한다. 많은 컨테이너 런타임은 컨테이너 이미지에 정의된 `STOPSIGNAL` 값을 존중하며
+TERM 대신 이 값을 보낸다.
+일단 유예 기간이 만료되면, KILL 시그널이 나머지 프로세스로
전송되고, 그런 다음 파드는
{{< glossary_tooltip text="API 서버" term_id="kube-apiserver" >}}로부터 삭제된다. 프로세스가
종료될 때까지 기다리는 동안 kubelet 또는 컨테이너 런타임의 관리 서비스가 다시 시작되면, 클러스터는
diff --git a/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
index 31587a99e64dd..6a38cb3f6c06d 100644
--- a/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
+++ b/content/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
@@ -192,7 +192,7 @@ graph BT
{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
-이 경우에는, 첫번째 제약 조건에 부합시키려면, 신규 파드는 오직 "zoneB"에만 배치할 수 있다. 두 번째 제약 조건에서는 신규 파드는 오직 "node4"에만 배치할 수 있다. 그런 다음 두 가지 제약 조건의 결과는 AND 가 되므로, 실행 가능한 유일한 옵션은 "node4"에 배치하는 것이다.
+이 경우에는, 첫 번째 제약 조건에 부합시키려면, 신규 파드는 오직 "zoneB"에만 배치할 수 있다. 두 번째 제약 조건에서는 신규 파드는 오직 "node4"에만 배치할 수 있다. 그런 다음 두 가지 제약 조건의 결과는 AND 가 되므로, 실행 가능한 유일한 옵션은 "node4"에 배치하는 것이다.
다중 제약 조건은 충돌로 이어질 수 있다. 3개의 노드를 가지는 클러스터 하나가 2개의 영역에 걸쳐 있다고 가정한다.
diff --git a/content/ko/docs/contribute/_index.md b/content/ko/docs/contribute/_index.md
index d5f424cdc70cb..4b14ad3c803ea 100644
--- a/content/ko/docs/contribute/_index.md
+++ b/content/ko/docs/contribute/_index.md
@@ -13,6 +13,13 @@ card:
+*쿠버네티스는 신규 및 숙련된 모든 기여자의 개선을 환영합니다!*
+
+{{< note >}}
+쿠버네티스에 기여하는 일반적인 방법에 대한 자세한 내용은
+[기여자 문서](https://www.kubernetes.dev/docs/)를 참고한다.
+{{< /note >}}
+
이 웹사이트는 [쿠버네티스 SIG Docs](/ko/docs/contribute/#sig-docs에-참여)에 의해서 관리됩니다.
쿠버네티스 문서 기여자들은
@@ -22,8 +29,6 @@ card:
- 문서를 번역합니다.
- 쿠버네티스 릴리스 주기에 맞추어 문서 부분을 관리하고 발행합니다.
-쿠버네티스 문서는 새롭고 경험이 풍부한 모든 기여자의 개선을 환영합니다!
-
## 시작하기
diff --git a/content/ko/docs/contribute/review/for-approvers.md b/content/ko/docs/contribute/review/for-approvers.md
index 6ceffc9ef49a0..ff1c33f3834c4 100644
--- a/content/ko/docs/contribute/review/for-approvers.md
+++ b/content/ko/docs/contribute/review/for-approvers.md
@@ -190,7 +190,7 @@ SIG Docs가 처리 방법을 문서화할 정도로 다음과 같은 유형의
문서에 대한 일부 이슈는 실제로 기본 코드와 관련된 이슈이거나, 튜토리얼과
같은 무언가가 작동하지 않을 때 도움을 요청하는 것이다.
-문서와 관련이 없는 이슈의 경우, `kind/support` 레이블과 함께 요청자에게 지원받을 수 있는 곳(슬랙, Stack Overflow)을
+문서와 관련이 없는 이슈의 경우, `triage/support` 레이블과 함께 요청자에게 지원받을 수 있는 곳(슬랙, Stack Overflow)을
알려주며 이슈를 닫고, 기능 관련 버그에 대한 이슈인 경우,
관련 리포지터리를 코멘트로 남긴다(`kubernetes/kubernetes` 는
시작하기 좋은 곳이다).
@@ -221,6 +221,3 @@ https://github.com/kubernetes/kubernetes 에서
문서에 대한 이슈인 경우 이 이슈를 다시 여십시오.
```
-
-
-
diff --git a/content/ko/docs/reference/access-authn-authz/authorization.md b/content/ko/docs/reference/access-authn-authz/authorization.md
index f85b1635e656d..3fe13d7ced5b8 100644
--- a/content/ko/docs/reference/access-authn-authz/authorization.md
+++ b/content/ko/docs/reference/access-authn-authz/authorization.md
@@ -47,7 +47,7 @@ weight: 60
* **Resource** - 접근 중인 리소스의 ID 또는 이름(리소스 요청만 해당) -- `get`, `update`, `patch`, `delete` 동사를 사용하는 리소스 요청의 경우 리소스 이름을 지정해야 한다.
* **Subresource** - 접근 중인 하위 리소스(리소스 요청만 해당).
* **Namespace** - 접근 중인 오브젝트의 네임스페이스(네임스페이스에 할당된 리소스 요청만 해당)
- * **API group** - 접근 중인 {{< glossary_tooltip text="API 그룹" term_id="api-group" >}}(리소스 요청에만 해당). 빈 문자열은 [핵심(core) API 그룹](/ko/docs/concepts/overview/kubernetes-api/)을 지정한다.
+ * **API group** - 접근 중인 {{< glossary_tooltip text="API 그룹" term_id="api-group" >}}(리소스 요청에만 해당). 빈 문자열은 [핵심(core) API 그룹](/ko/docs/reference/using-api/api-overview/#api-그룹)을 지정한다.
## 요청 동사 결정
diff --git a/content/ko/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md b/content/ko/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md
new file mode 100644
index 0000000000000..61f0b35d1fbbc
--- /dev/null
+++ b/content/ko/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md
@@ -0,0 +1,83 @@
+---
+title: Kubelet 인증/인가
+---
+
+
+## 개요
+
+kubelet의 HTTPS 엔드포인트는 다양한 민감도의 데이터에 대한 접근을 노출시키며,
+노드와 컨테이너 내에서 다양한 수준의 권한으로 작업을 수행할 수 있도록 허용한다.
+
+이 문서는 kubelet의 HTTPS 엔드포인트에 대한 접근을 인증하고 인가하는 방법을 설명한다.
+
+## Kubelet 인증
+
+기본적으로, 구성된 다른 인증 방법에 의해 거부되지 않은 kubelet의 HTTPS 엔드포인트에 대한
+요청은 익명의 요청으로 처리되며, `system:anonymous`의 사용자 이름과
+`system:unauthenticated`의 그룹이 부여된다.
+
+익명의 접근을 비활성화하고 인증되지 않은 요청에 `401 Unauthorized` 응답을 보내려면 아래를 참고한다.
+
+* `--anonymous-auth=false` 플래그로 kubelet을 시작
+
+kubelet의 HTTPS 엔드포인트에 대한 X509 클라이언트 인증서 인증을 활성화하려면 아래를 참고한다.
+
+* `--client-ca-file` 플래그로 kubelet을 시작하여, 클라이언트 인증서를 확인할 수 있는 CA 번들을 제공
+* `--kubelet-client-certificate` 및 `--kubelet-client-key` 플래그로 apiserver를 시작
+* 자세한 내용은 [apiserver 인증 문서](/docs/reference/access-authn-authz/authentication/#x509-client-certs)를 참고
+
+API bearer 토큰(서비스 계정 토큰 포함)을 kubelet의 HTTPS 엔드포인트 인증에 사용하려면 아래를 참고한다.
+
+* API 서버에서 `authentication.k8s.io/v1beta1` API 그룹이 사용 가능한지 확인
+* `--authentication-token-webhook` 및 `--kubeconfig` 플래그로 kubelet을 시작
+* kubelet은 구성된 API 서버의 `TokenReview` API를 호출하여 bearer 토큰에서 사용자 정보를 결정
+
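+위의 인증 방법들을 함께 구성했다고 가정하면, kubelet 실행 플래그는 다음과 같은
+형태의 스케치가 될 수 있다(인증서와 kubeconfig 파일 경로는 가정한 예시 값이다).
+
+```shell
+# 익명 접근을 비활성화하고, X509 클라이언트 인증서와
+# API bearer 토큰 인증을 함께 활성화하는 예시
+kubelet \
+  --anonymous-auth=false \
+  --client-ca-file=/etc/kubernetes/pki/ca.crt \
+  --authentication-token-webhook \
+  --kubeconfig=/etc/kubernetes/kubelet.conf
+```
+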
+## Kubelet 인가
+
+성공적으로 인증된 모든 요청(익명 요청 포함)은 이후 인가 과정을 거친다. 기본 인가 모드는 모든 요청을 허용하는 `AlwaysAllow` 이다.
+
+kubelet API에 대한 접근을 세분화하는 데는 다양한 이유가 있다.
+
+* 익명 인증을 사용할 수 있지만, 익명 사용자의 kubelet API 호출 기능은 제한되어야 함
+* bearer 토큰 인증을 사용할 수 있지만, 임의의 API 사용자(API 계정)의 kubelet API 호출 기능은 제한되어야 함
+* 클라이언트 인증을 사용할 수 있지만, 구성된 CA에서 서명한 일부 클라이언트 인증서만 kubelet API를 사용하도록 허용해야 함
+
+kubelet API에 대한 접근을 세분화하려면, 인가를 API 서버에 위임한다.
+
+* `authorization.k8s.io/v1beta1` API 그룹이 API 서버에서 사용 가능한지 확인
+* `--authorization-mode=Webhook` 및 `--kubeconfig` 플래그로 kubelet을 시작
+* kubelet은 구성된 API 서버의 `SubjectAccessReview` API를 호출하여 각각의 요청이 인가되었는지 여부를 확인
+
+kubelet은 API 요청을 apiserver와 동일한 [요청 속성](/ko/docs/reference/access-authn-authz/authorization/#요청-속성-검토) 접근 방식을 사용하여 인가한다.
+
+동사는 들어오는 요청의 HTTP 동사로부터 결정된다.
+
+HTTP 동사 | 요청 동사
+----------|---------------
+POST | create
+GET, HEAD | get
+PUT | update
+PATCH | patch
+DELETE | delete
+
+리소스 및 하위 리소스는 들어오는 요청의 경로로부터 결정된다.
+
+Kubelet API | 리소스 | 하위 리소스
+-------------|----------|------------
+/stats/\* | nodes | stats
+/metrics/\* | nodes | metrics
+/logs/\* | nodes | log
+/spec/\* | nodes | spec
+*all others* | nodes | proxy
+
+네임스페이스와 API 그룹 속성은 항상 빈 문자열이며,
+리소스 이름은 항상 kubelet의 `Node` API 오브젝트 이름이다.
+
+이 모드로 실행할 때는, apiserver에 지정된 `--kubelet-client-certificate` 및
+`--kubelet-client-key` 플래그로 식별되는 사용자에게 다음 속성에 대한 권한이 있는지 확인한다.
+
+* verb=\*, resource=nodes, subresource=proxy
+* verb=\*, resource=nodes, subresource=stats
+* verb=\*, resource=nodes, subresource=log
+* verb=\*, resource=nodes, subresource=spec
+* verb=\*, resource=nodes, subresource=metrics
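+
+예를 들어, apiserver가 `--kubelet-client-certificate`로 `kube-apiserver-kubelet-client`라는
+사용자 이름(가정한 예시 이름)을 사용한다면, 다음과 같이 해당 권한을 점검해 볼 수 있다.
+
+```shell
+# apiserver의 kubelet 클라이언트 사용자가 nodes의 하위 리소스에
+# 접근할 수 있는지 가장(impersonation)을 통해 확인한다.
+kubectl auth can-i get nodes --subresource=proxy --as=kube-apiserver-kubelet-client
+kubectl auth can-i get nodes --subresource=stats --as=kube-apiserver-kubelet-client
+```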
diff --git a/content/ko/docs/reference/glossary/kubelet.md b/content/ko/docs/reference/glossary/kubelet.md
index 671a50173bad3..caf6ba894b102 100644
--- a/content/ko/docs/reference/glossary/kubelet.md
+++ b/content/ko/docs/reference/glossary/kubelet.md
@@ -6,14 +6,12 @@ full_link: /docs/reference/generated/kubelet
short_description: >
클러스터의 각 노드에서 실행되는 에이전트. Kubelet은 파드에서 컨테이너가 확실하게 동작하도록 관리한다.
-aka:
+aka:
tags:
- fundamental
-- core-object
---
클러스터의 각 {{< glossary_tooltip text="노드" term_id="node" >}}에서 실행되는 에이전트. Kubelet은 {{< glossary_tooltip text="파드" term_id="pod" >}}에서 {{< glossary_tooltip text="컨테이너" term_id="container" >}}가 확실하게 동작하도록 관리한다.
-
+
Kubelet은 다양한 메커니즘을 통해 제공된 파드 스펙(PodSpec)의 집합을 받아서 컨테이너가 해당 파드 스펙에 따라 건강하게 동작하는 것을 확실히 한다. Kubelet은 쿠버네티스를 통해 생성되지 않는 컨테이너는 관리하지 않는다.
-
diff --git a/content/ko/docs/reference/glossary/managed-service.md b/content/ko/docs/reference/glossary/managed-service.md
index d34e2832e31a3..7ecf8aa1d8579 100644
--- a/content/ko/docs/reference/glossary/managed-service.md
+++ b/content/ko/docs/reference/glossary/managed-service.md
@@ -1,5 +1,5 @@
---
-title: 매니지드 서비스
+title: 매니지드 서비스(Managed Service)
id: managed-service
date: 2018-04-12
full_link:
diff --git a/content/ko/docs/reference/glossary/service-broker.md b/content/ko/docs/reference/glossary/service-broker.md
new file mode 100644
index 0000000000000..bd671848984ef
--- /dev/null
+++ b/content/ko/docs/reference/glossary/service-broker.md
@@ -0,0 +1,22 @@
+---
+title: 서비스 브로커(Service Broker)
+id: service-broker
+date: 2018-04-12
+full_link:
+short_description: >
+ 서드파티에서 제공하고 유지 관리하는 일련의 매니지드 서비스에 대한 엔드포인트이다.
+
+aka:
+tags:
+- extension
+---
+ 서드파티에서 제공하고 유지 관리하는 일련의 {{< glossary_tooltip text="매니지드 서비스" term_id="managed-service" >}}에 대한 엔드포인트이다.
+
+
+
+{{< glossary_tooltip text="서비스 브로커" term_id="service-broker" >}}는
+[오픈 서비스 브로커 API 명세](https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md)를
+구현하고 애플리케이션이 매니지드 서비스를 사용할 수 있도록 표준 인터페이스를 제공한다.
+[서비스 카탈로그](/ko/docs/concepts/extend-kubernetes/service-catalog/)는
+서비스 브로커가 제공하는 매니지드 서비스의 목록과 프로비전, 바인딩하는 방법을 제공한다.
+
diff --git a/content/ko/docs/reference/kubectl/conventions.md b/content/ko/docs/reference/kubectl/conventions.md
new file mode 100644
index 0000000000000..de771c2d65c48
--- /dev/null
+++ b/content/ko/docs/reference/kubectl/conventions.md
@@ -0,0 +1,62 @@
+---
+title: kubectl 사용 규칙
+
+
+content_type: concept
+---
+
+
+`kubectl`에 대한 권장 사용 규칙.
+
+
+
+
+## 재사용 가능한 스크립트에서 `kubectl` 사용
+
+스크립트의 안정적인 출력을 위해서
+
+* `-o name`, `-o json`, `-o yaml`, `-o go-template` 혹은 `-o jsonpath`와 같은 머신 지향(machine-oriented) 출력 양식 중 하나를 요청한다.
+* 예를 들어 `jobs.v1.batch/myjob`과 같이 전체 버전을 사용한다. 이를 통해 `kubectl`이 시간이 지남에 따라 변경될 수 있는 기본 버전을 사용하지 않도록 한다.
+* 문맥, 설정 또는 기타 암묵적 상태에 의존하지 않는다.
+
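+예를 들어, 위 규칙을 따르는 스크립트는 다음과 같은 형태가 될 수 있다
+(`my-context` 콘텍스트와 `myjob` 잡 이름은 가정한 예시 값이다).
+
+```shell
+#!/usr/bin/env bash
+set -euo pipefail
+
+# 전체 버전(jobs.v1.batch)과 머신 지향 출력(-o jsonpath)을 사용하고,
+# --context 를 명시해서 암묵적 상태에 의존하지 않는다.
+succeeded=$(kubectl --context=my-context get jobs.v1.batch/myjob \
+  -o jsonpath='{.status.succeeded}')
+echo "성공한 파드 수: ${succeeded:-0}"
+```
+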
+## 모범 사례
+
+### `kubectl run`
+
+`kubectl run`으로 infrastructure as code를 충족시키기 위해서
+
+* 버전이 명시된 태그로 이미지를 태그하고 그 태그를 새로운 버전으로 이동하지 않는다. 예를 들어, `:latest`가 아닌 `:v1234`, `v1.2.3`, `r03062016-1-4`를 사용한다(자세한 정보는 [구성 모범 사례](/ko/docs/concepts/configuration/overview/#컨테이너-이미지)를 참고한다).
+* 많은 파라미터가 적용된 이미지를 위한 스크립트를 작성한다.
+* 필요하지만 `kubectl run` 플래그로는 표현할 수 없는 기능이 있는 경우, 소스 코드 버전 관리 시스템에 저장된 구성 파일로 전환한다.
+
+`--dry-run` 플래그를 사용하여 실제로 제출하지 않고 클러스터로 보낼 오브젝트를 미리 볼 수 있다.
+
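+예를 들어, 다음 명령은 파드를 실제로 생성하지 않고 매니페스트만 출력한다
+(kubectl v1.18 이후에는 `--dry-run=client` 처럼 값을 명시한다. 이미지 태그는 예시이다).
+
+```shell
+kubectl run nginx --image=nginx:1.19.2 --dry-run=client -o yaml
+```
+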
+{{< note >}}
+모든 `kubectl` 생성기(generator)는 사용 중단(deprecated)되었다. 생성기 [목록](https://v1-17.docs.kubernetes.io/docs/reference/kubectl/conventions/#generators) 및 사용 방법은 쿠버네티스 v1.17 문서를 참고한다.
+{{< /note >}}
+
+#### 생성기
+
+`kubectl create --dry-run -o yaml` 커맨드를 통해 다음과 같은 리소스를 생성할 수 있다.
+```
+ clusterrole 클러스터롤(ClusterRole)를 생성한다.
+ clusterrolebinding 특정 클러스터롤에 대한 클러스터롤바인딩(ClusterRoleBinding)을 생성한다.
+ configmap 로컬 파일, 디렉토리 또는 문자 그대로의 값으로 컨피그맵(ConfigMap)을 생성한다.
+ cronjob 지정된 이름으로 크론잡(CronJob)을 생성한다.
+ deployment 지정된 이름으로 디플로이먼트(Deployment)를 생성한다.
+ job 지정된 이름으로 잡(Job)을 생성한다.
+ namespace 지정된 이름으로 네임스페이스(Namespace)를 생성한다.
+ poddisruptionbudget 지정된 이름으로 pod disruption budget을 생성한다.
+ priorityclass 지정된 이름으로 프라이어리티클래스(PriorityClass)를 생성한다.
+ quota 지정된 이름으로 쿼터(Quota)를 생성한다.
+ role 단일 규칙으로 롤(Role)을 생성한다.
+ rolebinding 특정 롤 또는 클러스터롤에 대한 롤바인딩(RoleBinding)을 생성한다.
+ secret 지정된 하위 커맨드를 사용하여 시크릿(Secret)을 생성한다.
+ service 지정된 하위 커맨드를 사용하여 서비스(Service)를 생성한다.
+ serviceaccount 지정된 이름으로 서비스어카운트(ServiceAccount)를 생성한다.
+```
+
+### `kubectl apply`
+
+* `kubectl apply`를 사용해서 리소스를 생성하거나 업데이트할 수 있다. kubectl apply를 사용하여 리소스를 업데이트하는 방법에 대한 자세한 정보는 [Kubectl 책](https://kubectl.docs.kubernetes.io)을 참고한다.
+
+
diff --git a/content/ko/docs/reference/kubectl/kubectl.md b/content/ko/docs/reference/kubectl/kubectl.md
new file mode 100644
index 0000000000000..fe60f87d8ffef
--- /dev/null
+++ b/content/ko/docs/reference/kubectl/kubectl.md
@@ -0,0 +1,370 @@
+---
+title: kubectl
+content_type: tool-reference
+weight: 30
+---
+
+## {{% heading "synopsis" %}}
+
+
+kubectl은 쿠버네티스 클러스터 관리자를 제어한다.
+
+ 자세한 정보는 https://kubernetes.io/docs/reference/kubectl/overview/ 에서 확인한다.
+
+```
+kubectl [flags]
+```
+
+## {{% heading "options" %}}
+
+플래그 | 기본값 | 설명
+-------|--------|------
+`--default-not-ready-toleration-seconds int` | 300 | 아직 톨러레이션(toleration)이 없는 모든 파드에 기본적으로 추가되는 notReady:NoExecute에 대한 톨러레이션의 tolerationSeconds를 나타낸다.
+`--default-unreachable-toleration-seconds int` | 300 | 아직 톨러레이션이 없어서 기본인 unreachable:NoExecute가 추가된 모든 파드에 대한 톨러레이션의 tolerationSeconds를 나타낸다.
+`-h, --help` | - | kubectl에 대한 도움말
+`--insecure-skip-tls-verify` | - | true인 경우, 서버 인증서의 유효성을 확인하지 않는다. 이렇게 하면 사용자의 HTTPS 연결이 안전하지 않게 된다.
+`--kubeconfig string` | - | CLI 요청에 사용할 kubeconfig 파일의 경로이다.
+`--log-backtrace-at traceLocation` | :0 | 로깅이 file:N에 도달했을 때 스택 트레이스를 내보낸다.
+`--log-dir string` | - | 비어 있지 않으면, 이 디렉터리에 로그 파일을 작성한다.
+`--log-file string` | - | 비어 있지 않으면, 이 로그 파일을 사용한다.
+`--log-file-max-size uint` | 1800 | 로그 파일이 커질 수 있는 최대 크기를 정의한다. 단위는 메가 바이트이다. 값이 0이면, 파일의 최대 크기는 무제한이다.
+`--log-flush-frequency duration` | 5s | 로그를 비우는 간격의 최대 시간(초)
+`--logtostderr` | true | 파일 대신 표준 에러에 기록
+`--match-server-version` | - | 클라이언트 버전과 일치하는 서버 버전 필요
+`-n, --namespace string` | - | 지정된 경우, 해당 네임스페이스가 CLI 요청의 범위가 됨
+`--password string` | - | API 서버에 대한 기본 인증을 위한 비밀번호
+`--profile string` | "none" | 캡처할 프로파일의 이름. (none\|cpu\|heap\|goroutine\|threadcreate\|block\|mutex) 중 하나
+`--profile-output string` | "profile.pprof" | 프로파일을 쓸 파일의 이름
+`--request-timeout string` | "0" | 단일 서버 요청을 포기하기 전에 대기하는 시간이다. 0이 아닌 값에는 해당 시간 단위(예: 1s, 2m, 3h)가 포함되어야 한다. 값이 0이면 요청 시간이 초과되지 않는다.
+`-s, --server string` | - | 쿠버네티스 API 서버의 주소와 포트
+`--skip-headers` | - | true이면, 로그 메시지에서 헤더 접두사를 사용하지 않는다.
+`--skip-log-headers` | - | true이면, 로그 파일을 열 때 헤더를 사용하지 않는다.
+`--stderrthreshold severity` | 2 | 이 임계값 이상의 로그는 표준 에러로 이동한다.
+`--tls-server-name string` | - | 서버 인증서 유효성 검사에 사용할 서버 이름. 제공되지 않으면, 서버에 접속하는 데 사용되는 호스트 이름이 사용된다.
+`--token string` | - | API 서버 인증을 위한 베어러(Bearer) 토큰
+`--user string` | - | 사용할 kubeconfig 사용자의 이름
+`--username string` | - | API 서버에 대한 기본 인증을 위한 사용자 이름
+`-v, --v Level` | - | 로그 수준의 자세한 정도를 나타내는 숫자
+`--version version[=true]` | - | 버전 정보를 출력하고 종료
+`--vmodule moduleSpec` | - | 파일 필터링 로깅을 위한 쉼표로 구분된 pattern=N 설정 목록
+`--warnings-as-errors` | - | 서버에서 받은 경고를 오류로 처리하고 0이 아닌 종료 코드로 종료
+
+
+## {{% heading "seealso" %}}
+
+* [kubectl alpha](/docs/reference/generated/kubectl/kubectl-commands#alpha) - 알파 기능에 대한 커맨드
+* [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands#annotate) - 리소스에 대한 어노테이션 업데이트
+* [kubectl api-resources](/docs/reference/generated/kubectl/kubectl-commands#api-resources) - 서버에서 지원되는 API 리소스 출력
+* [kubectl api-versions](/docs/reference/generated/kubectl/kubectl-commands#api-versions) - "그룹/버전" 형식으로 서버에서 지원되는 API 버전을 출력
+* [kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) - 파일명 또는 표준 입력으로 리소스에 구성 적용
+* [kubectl attach](/docs/reference/generated/kubectl/kubectl-commands#attach) - 실행 중인 컨테이너에 연결
+* [kubectl auth](/docs/reference/generated/kubectl/kubectl-commands#auth) - 권한 검사
+* [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands#autoscale) - 디플로이먼트(Deployment), 레플리카셋(ReplicaSet) 또는 레플리케이션컨트롤러(ReplicationController) 자동 스케일링
+* [kubectl certificate](/docs/reference/generated/kubectl/kubectl-commands#certificate) - 인증서 리소스 수정
+* [kubectl cluster-info](/docs/reference/generated/kubectl/kubectl-commands#cluster-info) - 클러스터 정보 표시
+* [kubectl completion](/docs/reference/generated/kubectl/kubectl-commands#completion) - 지정된 셸(bash 또는 zsh)에 대한 셸 완성 코드 출력
+* [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config) - kubeconfig 파일 수정
+* [kubectl convert](/docs/reference/generated/kubectl/kubectl-commands#convert) - 다른 API 버전 간에 구성 파일 변환
+* [kubectl cordon](/docs/reference/generated/kubectl/kubectl-commands#cordon) - 노드를 unschedulable로 표시
+* [kubectl cp](/docs/reference/generated/kubectl/kubectl-commands#cp) - 컨테이너 간에 파일과 디렉터리 복사
+* [kubectl create](/docs/reference/generated/kubectl/kubectl-commands#create) - 파일 또는 표준 입력에서 리소스를 생성
+* [kubectl delete](/docs/reference/generated/kubectl/kubectl-commands#delete) - 파일명, 표준 입력, 리소스 및 이름, 또는 리소스 및 레이블 셀렉터로 리소스 삭제
+* [kubectl describe](/docs/reference/generated/kubectl/kubectl-commands#describe) - 특정 리소스 또는 리소스 그룹의 세부 정보를 표시
+* [kubectl diff](/docs/reference/generated/kubectl/kubectl-commands#diff) - 적용 예정 버전과 라이브 버전 비교
+* [kubectl drain](/docs/reference/generated/kubectl/kubectl-commands#drain) - 유지 보수 준비 중 노드 드레인
+* [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands#edit) - 서버에서 리소스 편집
+* [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands#exec) - 컨테이너에서 커맨드 실행
+* [kubectl explain](/docs/reference/generated/kubectl/kubectl-commands#explain) - 리소스의 문서
+* [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands#expose) - 레플리케이션 컨트롤러, 서비스, 디플로이먼트 또는 파드를 가져와서 새로운 쿠버네티스 서비스로 노출
+* [kubectl get](/docs/reference/generated/kubectl/kubectl-commands#get) - 하나 이상의 리소스 표시
+* [kubectl kustomize](/docs/reference/generated/kubectl/kubectl-commands#kustomize) - 디렉터리 또는 원격 URL에서 kustomization 대상을 빌드
+* [kubectl label](/docs/reference/generated/kubectl/kubectl-commands#label) - 리소스의 레이블 업데이트
+* [kubectl logs](/docs/reference/generated/kubectl/kubectl-commands#logs) - 파드의 컨테이너에 대한 로그 출력
+* [kubectl options](/docs/reference/generated/kubectl/kubectl-commands#options) - 모든 커맨드에서 상속된 플래그 목록을 출력
+* [kubectl patch](/docs/reference/generated/kubectl/kubectl-commands#patch) - 전략적 병합 패치를 사용하여 리소스 필드를 업데이트
+* [kubectl plugin](/docs/reference/generated/kubectl/kubectl-commands#plugin) - 플러그인과 상호 작용하기 위한 유틸리티를 제공
+* [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands#port-forward) - 하나 이상의 로컬 포트를 파드로 전달
+* [kubectl proxy](/docs/reference/generated/kubectl/kubectl-commands#proxy) - 쿠버네티스 API 서버에 대한 프록시 실행
+* [kubectl replace](/docs/reference/generated/kubectl/kubectl-commands#replace) - 파일명 또는 표준 입력으로 리소스 교체
+* [kubectl rollout](/docs/reference/generated/kubectl/kubectl-commands#rollout) - 리소스 롤아웃 관리
+* [kubectl run](/docs/reference/generated/kubectl/kubectl-commands#run) - 클러스터에서 특정 이미지 실행
+* [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands#scale) - 디플로이먼트, 레플리카셋 또는 레플리케이션 컨트롤러의 새 크기 설정
+* [kubectl set](/docs/reference/generated/kubectl/kubectl-commands#set) - 오브젝트에 특정 기능 설정
+* [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) - 하나 이상의 노드에서 테인트(taint) 업데이트
+* [kubectl top](/docs/reference/generated/kubectl/kubectl-commands#top) - 리소스(CPU/메모리/스토리지) 사용량을 표시
+* [kubectl uncordon](/docs/reference/generated/kubectl/kubectl-commands#uncordon) - 노드를 schedulable로 표시
+* [kubectl version](/docs/reference/generated/kubectl/kubectl-commands#version) - 클라이언트 및 서버 버전 정보 출력
+* [kubectl wait](/docs/reference/generated/kubectl/kubectl-commands#wait) - 실험적(experimental) 기능: 하나 이상의 리소스에 대해서 특정 조건이 만족될 때까지 대기(wait)
diff --git a/content/ko/docs/reference/using-api/client-libraries.md b/content/ko/docs/reference/using-api/client-libraries.md
index 45d43567d7edf..9a6e07293f9a2 100644
--- a/content/ko/docs/reference/using-api/client-libraries.md
+++ b/content/ko/docs/reference/using-api/client-libraries.md
@@ -60,6 +60,7 @@ API 호출 또는 요청/응답 타입을 직접 구현할 필요는 없다.
| PHP | [github.com/allansun/kubernetes-php-client](https://github.com/allansun/kubernetes-php-client) |
| PHP | [github.com/maclof/kubernetes-client](https://github.com/maclof/kubernetes-client) |
| PHP | [github.com/travisghansen/kubernetes-client-php](https://github.com/travisghansen/kubernetes-client-php) |
+| PHP | [github.com/renoki-co/php-k8s](https://github.com/renoki-co/php-k8s) |
| Python | [github.com/eldarion-gondor/pykube](https://github.com/eldarion-gondor/pykube) |
| Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) |
| Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) |
diff --git a/content/ko/docs/setup/best-practices/multiple-zones.md b/content/ko/docs/setup/best-practices/multiple-zones.md
index 0693169e5d761..189dbed654078 100644
--- a/content/ko/docs/setup/best-practices/multiple-zones.md
+++ b/content/ko/docs/setup/best-practices/multiple-zones.md
@@ -1,398 +1,140 @@
---
-title: 여러 영역에서 구동
-weight: 10
+title: 여러 영역에서 실행
+weight: 20
content_type: concept
---
-이 페이지는 여러 영역에서 어떻게 클러스터를 구동하는지 설명한다.
-
-
+이 페이지에서는 여러 영역에서 쿠버네티스를 실행하는 방법을 설명한다.
-## 소개
-
-Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
-(GCE calls them simply "zones", AWS calls them "availability zones", here we'll refer to them as "zones").
-This is a lightweight version of a broader Cluster Federation feature (previously referred to by the affectionate
-nickname ["Ubernetes"](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/multicluster/federation.md)).
-Full Cluster Federation allows combining separate
-Kubernetes clusters running in different regions or cloud providers
-(or on-premises data centers). However, many
-users simply want to run a more available Kubernetes cluster in multiple zones
-of their single cloud provider, and this is what the multizone support in 1.2 allows
-(this previously went by the nickname "Ubernetes Lite").
-
-Multizone support is deliberately limited: a single Kubernetes cluster can run
-in multiple zones, but only within the same region (and cloud provider). Only
-GCE and AWS are currently supported automatically (though it is easy to
-add similar support for other clouds or even bare metal, by simply arranging
-for the appropriate labels to be added to nodes and volumes).
-
-
-## 기능
-
-When nodes are started, the kubelet automatically adds labels to them with
-zone information.
-
-Kubernetes will automatically spread the pods in a replication controller
-or service across nodes in a single-zone cluster (to reduce the impact of
-failures.) With multiple-zone clusters, this spreading behavior is
-extended across zones (to reduce the impact of zone failures.) (This is
-achieved via `SelectorSpreadPriority`). This is a best-effort
-placement, and so if the zones in your cluster are heterogeneous
-(e.g. different numbers of nodes, different types of nodes, or
-different pod resource requirements), this might prevent perfectly
-even spreading of your pods across zones. If desired, you can use
-homogeneous zones (same number and types of nodes) to reduce the
-probability of unequal spreading.
-
-When persistent volumes are created, the `PersistentVolumeLabel`
-admission controller automatically adds zone labels to them. The scheduler (via the
-`VolumeZonePredicate` predicate) will then ensure that pods that claim a
-given volume are only placed into the same zone as that volume, as volumes
-cannot be attached across zones.
-
-## 제한 사항
-
-There are some important limitations of the multizone support:
-
-* We assume that the different zones are located close to each other in the
-network, so we don't perform any zone-aware routing. In particular, traffic
-that goes via services might cross zones (even if some pods backing that service
-exist in the same zone as the client), and this may incur additional latency and cost.
-
-* Volume zone-affinity will only work with a `PersistentVolume`, and will not
-work if you directly specify an EBS volume in the pod spec (for example).
-
-* Clusters cannot span clouds or regions (this functionality will require full
-federation support).
-
-* Although your nodes are in multiple zones, kube-up currently builds
-a single master node by default. While services are highly
-available and can tolerate the loss of a zone, the control plane is
-located in a single zone. Users that want a highly available control
-plane should follow the [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) instructions.
-
-### Volume limitations
-The following limitations are addressed with [topology-aware volume binding](/ko/docs/concepts/storage/storage-classes/#볼륨-바인딩-모드).
-
-* StatefulSet volume zone spreading when using dynamic provisioning is currently not compatible with
- pod affinity or anti-affinity policies.
-
-* If the name of the StatefulSet contains dashes ("-"), volume zone spreading
- may not provide a uniform distribution of storage across zones.
-
-* When specifying multiple PVCs in a Deployment or Pod spec, the StorageClass
- needs to be configured for a specific single zone, or the PVs need to be
- statically provisioned in a specific zone. Another workaround is to use a
- StatefulSet, which will ensure that all the volumes for a replica are
- provisioned in the same zone.
-
-## 연습
-
-We're now going to walk through setting up and using a multi-zone
-cluster on both GCE & AWS. To do so, you bring up a full cluster
-(specifying `MULTIZONE=true`), and then you add nodes in additional zones
-by running `kube-up` again (specifying `KUBE_USE_EXISTING_MASTER=true`).
-
-### 클러스터 가져오기
-
-Create the cluster as normal, but pass MULTIZONE to tell the cluster to manage multiple zones; creating nodes in us-central1-a.
+## 배경
-GCE:
+쿠버네티스는 단일 쿠버네티스 클러스터가 여러 장애 영역에서
+실행될 수 있도록 설계되었다. 일반적으로 이러한 영역은 _지역(region)_ 이라는
+논리적 그룹으로 묶인다. 주요 클라우드 제공자는 지역을 일관된 기능 집합을
+제공하는 장애 영역 집합(_가용성 영역_ 이라고도 함)으로
+정의한다. 지역 내에서 각 영역은 동일한 API 및
+서비스를 제공한다.
-```shell
-curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a NUM_NODES=3 bash
-```
+일반적인 클라우드 아키텍처는 한 영역의 장애가 다른 영역의 서비스도
+손상시킬 가능성을 최소화하는 것을 목표로 한다.
-AWS:
+## 컨트롤 플레인 동작
-```shell
-curl -sS https://get.k8s.io | MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NUM_NODES=3 bash
-```
+모든 [컨트롤 플레인 컴포넌트](/ko/docs/concepts/overview/components/#컨트롤-플레인-컴포넌트)는
+컴포넌트별로 복제되는 교환 가능한 리소스 풀로 실행되는 것을
+지원한다.
-This step brings up a cluster as normal, still running in a single zone
-(but `MULTIZONE=true` has enabled multi-zone capabilities).
-
-### 라벨이 지정된 노드 확인
-
-View the nodes; you can see that they are labeled with zone information.
-They are all in `us-central1-a` (GCE) or `us-west-2a` (AWS) so far. The
-labels are `failure-domain.beta.kubernetes.io/region` for the region,
-and `failure-domain.beta.kubernetes.io/zone` for the zone:
-
-```shell
-kubectl get nodes --show-labels
-```
-
-The output is similar to this:
-
-```shell
-NAME STATUS ROLES AGE VERSION LABELS
-kubernetes-master Ready,SchedulingDisabled 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
-kubernetes-minion-87j9 Ready 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
-kubernetes-minion-9vlv Ready 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-a12q Ready 6m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
-```
-
-### 두번째 영역에 더 많은 노드 추가하기
-
-Let's add another set of nodes to the existing cluster, reusing the
-existing master, running in a different zone (us-central1-b or us-west-2b).
-We run kube-up again, but by specifying `KUBE_USE_EXISTING_MASTER=true`
-kube-up will not create a new master, but will reuse one that was previously
-created instead.
-
-GCE:
-
-```shell
-KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-b NUM_NODES=3 kubernetes/cluster/kube-up.sh
-```
-
-On AWS we also need to specify the network CIDR for the additional
-subnet, along with the master internal IP address:
-
-```shell
-KUBE_USE_EXISTING_MASTER=true MULTIZONE=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2b NUM_NODES=3 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh
-```
-
-
-View the nodes again; 3 more nodes should have launched and be tagged
-in us-central1-b:
-
-```shell
-kubectl get nodes --show-labels
-```
-
-The output is similar to this:
-
-```shell
-NAME STATUS ROLES AGE VERSION LABELS
-kubernetes-master Ready,SchedulingDisabled 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
-kubernetes-minion-281d Ready 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
-kubernetes-minion-87j9 Ready 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
-kubernetes-minion-9vlv Ready 16m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-a12q Ready 17m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
-kubernetes-minion-pp2f Ready 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
-kubernetes-minion-wf8i Ready 2m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
-```
-
-### 볼륨 어피니티
-
-Create a volume using the dynamic volume creation (only PersistentVolumes are supported for zone affinity):
-
-```bash
-kubectl apply -f - <<EOF
-EOF
-```
-
-{{< note >}}
-For version 1.3+ Kubernetes will distribute dynamic PV claims across
-the configured zones. For version 1.2, dynamic persistent volumes were
-always created in the zone of the cluster master
-(here us-central1-a / us-west-2a); that issue
-([#23330](https://github.com/kubernetes/kubernetes/issues/23330))
-was addressed in 1.3+.
+쿠버네티스는 API 서버 엔드포인트에 대한 교차 영역 복원성을 제공하지
+않는다. DNS 라운드-로빈, SRV 레코드 또는 상태 확인 기능이 있는
+써드파티 로드 밸런싱 솔루션을 포함하여 다양한 기술을 사용하여
+클러스터 API 서버의 가용성을 향상시킬 수 있다.
{{< /note >}}
-Now let's validate that Kubernetes automatically labeled the zone & region the PV was created in.
-
-```shell
-kubectl get pv --show-labels
-```
-
-The output is similar to this:
-
-```shell
-NAME CAPACITY ACCESSMODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
-pv-gce-mj4gm 5Gi RWO Retain Bound default/claim1 manual 46s failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a
-```
-
-So now we will create a pod that uses the persistent volume claim.
-Because GCE PDs / AWS EBS volumes cannot be attached across zones,
-this means that this pod can only be created in the same zone as the volume:
-
-```yaml
-kubectl apply -f - <<EOF
-EOF
-```
-
-```shell
-kubernetes-minion-9vlv Ready 34m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-281d Ready 20m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
-kubernetes-minion-olsh Ready 3m v1.13.0 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
-```
-
-
-Load-balancers span all zones in a cluster; the guestbook-go example
-includes an example load-balanced service:
-
-```shell
-kubectl describe service guestbook | grep LoadBalancer.Ingress
-```
-
-The output is similar to this:
-
-```shell
-LoadBalancer Ingress: 130.211.126.21
-```
-
-Set the above IP:
-
-```shell
-export IP=130.211.126.21
-```
-
-Explore with curl via IP:
-
-```shell
-curl -s http://${IP}:3000/env | grep HOSTNAME
-```
-
-The output is similar to this:
-
-```shell
- "HOSTNAME": "guestbook-44sep",
-```
-
-Again, explore multiple times:
-
-```shell
-(for i in `seq 20`; do curl -s http://${IP}:3000/env | grep HOSTNAME; done) | sort | uniq
-```
-
-The output is similar to this:
-
-```shell
- "HOSTNAME": "guestbook-44sep",
- "HOSTNAME": "guestbook-hum5n",
- "HOSTNAME": "guestbook-ppm40",
-```
-
-The load balancer correctly targets all the pods, even though they are in multiple zones.
-
-### 클러스터 강제 종료
-
-When you're done, clean up:
-
-GCE:
-
-```shell
-KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-f kubernetes/cluster/kube-down.sh
-KUBERNETES_PROVIDER=gce KUBE_USE_EXISTING_MASTER=true KUBE_GCE_ZONE=us-central1-b kubernetes/cluster/kube-down.sh
-KUBERNETES_PROVIDER=gce KUBE_GCE_ZONE=us-central1-a kubernetes/cluster/kube-down.sh
-```
-
-AWS:
-
-```shell
-KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c kubernetes/cluster/kube-down.sh
-KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh
-KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh
-```
+## 노드 동작
+
+쿠버네티스는 클러스터의 여러 노드에 걸쳐
+워크로드 리소스(예: {{< glossary_tooltip text="디플로이먼트(Deployment)" term_id="deployment" >}}
+또는 {{< glossary_tooltip text="스테이트풀셋(StatefulSet)" term_id="statefulset" >}})에
+대한 파드를 자동으로 분배한다. 이러한 분배는
+실패에 대한 영향을 줄이는 데 도움이 된다.
+
+노드가 시작되면, 각 노드의 kubelet이 쿠버네티스 API에서
+특정 kubelet을 나타내는 노드 오브젝트에
+{{< glossary_tooltip text="레이블" term_id="label" >}}을 자동으로 추가한다.
+이러한 레이블에는
+[영역 정보](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)가 포함될 수 있다.
+
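+예를 들어, 다음과 같이 각 노드에 설정된 영역/지역 레이블을 열(column)로 확인할 수 있다.
+
+```shell
+kubectl get nodes -L topology.kubernetes.io/zone -L topology.kubernetes.io/region
+```
+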
+클러스터가 여러 영역 또는 지역에 걸쳐있는 경우,
+[파드 토폴로지 분배 제약 조건](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/)과
+함께 노드 레이블을 사용하여
+파드가 장애 도메인(지역, 영역, 특정 노드) 간 클러스터에
+분산되는 방식을 제어할 수 있다.
+이러한 힌트를 통해
+{{< glossary_tooltip text="스케줄러" term_id="kube-scheduler" >}}는
+더 나은 예상 가용성을 위해 파드를 배치할 수 있으므로, 상관 관계가 있는
+오류가 전체 워크로드에 영향을 미칠 위험을 줄일 수 있다.
+
+예를 들어, 가능할 때마다 스테이트풀셋의
+3개 복제본이 모두 서로 다른 영역에서 실행되도록 제약 조건을
+설정할 수 있다. 각 워크로드에 사용 중인
+가용 영역을 명시적으로 정의하지 않고 이를 선언적으로
+정의할 수 있다.
+
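+다음은 그러한 제약 조건을 표현한 최소한의 스케치이다
+(`foo` 라는 이름, `app: foo` 레이블, 예시 이미지는 모두 가정한 값이다).
+
+```shell
+kubectl apply -f - <<EOF
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: foo
+spec:
+  serviceName: foo
+  replicas: 3
+  selector:
+    matchLabels:
+      app: foo
+  template:
+    metadata:
+      labels:
+        app: foo
+    spec:
+      # 3개의 복제본이 영역 간에 고르게(치우침 최대 1) 분산되도록 한다.
+      topologySpreadConstraints:
+      - maxSkew: 1
+        topologyKey: topology.kubernetes.io/zone
+        whenUnsatisfiable: DoNotSchedule
+        labelSelector:
+          matchLabels:
+            app: foo
+      containers:
+      - name: app
+        image: k8s.gcr.io/pause:3.2
+EOF
+```
+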
+### 여러 영역에 노드 분배
+
+쿠버네티스의 코어는 사용자를 위해 노드를 생성하지 않는다. 사용자가 직접 수행하거나,
+[클러스터 API](https://cluster-api.sigs.k8s.io/)와 같은 도구를 사용하여
+사용자 대신 노드를 관리해야 한다.
+
+클러스터 API와 같은 도구를 사용하면 여러 장애 도메인에서
+클러스터의 워커 노드로 실행할 머신 집합과 전체 영역 서비스 중단 시
+클러스터를 자동으로 복구하는 규칙을 정의할 수 있다.
+
+## 파드에 대한 수동 영역 할당
+
+생성한 파드와 디플로이먼트, 스테이트풀셋, 잡(Job)과
+같은 워크로드 리소스의 파드 템플릿에 [노드 셀렉터 제약 조건](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#노드-셀렉터-nodeselector)을
+적용할 수 있다.
+
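+예를 들어, 다음은 특정 영역의 노드에만 스케줄되는 파드의 스케치이다
+(파드 이름과 영역 값은 가정한 예시이다).
+
+```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: zonal-pod
+spec:
+  # 해당 영역 레이블을 가진 노드에만 스케줄된다.
+  nodeSelector:
+    topology.kubernetes.io/zone: us-central1-a
+  containers:
+  - name: app
+    image: k8s.gcr.io/pause:3.2
+EOF
+```
+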
+## 영역에 대한 스토리지 접근
+
+퍼시스턴트 볼륨이 생성되면, `PersistentVolumeLabel`
+[어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/)는
+특정 영역에 연결된 모든 퍼시스턴트볼륨(PersistentVolume)에 영역 레이블을 자동으로
+추가한다. 그런 다음 {{< glossary_tooltip text="스케줄러" term_id="kube-scheduler" >}}는
+`NoVolumeZoneConflict` 프레디케이트(predicate)를 통해 주어진 퍼시스턴트볼륨을 요구하는 파드가
+해당 볼륨과 동일한 영역에만 배치되도록 한다.
+
+퍼시스턴트볼륨클레임(PersistentVolumeClaims)에 대해, 해당 클래스의 스토리지가
+사용할 수 있는 장애 도메인(영역)을 지정하는
+{{< glossary_tooltip text="스토리지클래스(StorageClass)" term_id="storage-class" >}}를 지정할 수 있다.
+장애 도메인 또는 영역을 인식하는 스토리지클래스 구성에 대한 자세한 내용은
+[허용된 토폴로지](/ko/docs/concepts/storage/storage-classes/#허용된-토폴로지)를 참고한다.
+
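+다음은 허용된 토폴로지를 지정한 스토리지클래스의 스케치이다
+(이름, 프로비저너, 영역 값은 가정한 예시이며, 인-트리 GCE PD 프로비저너는
+이 시점에는 `failure-domain.beta.kubernetes.io/zone` 레이블 키를 사용한다).
+
+```shell
+kubectl apply -f - <<EOF
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: zoned-storage
+provisioner: kubernetes.io/gce-pd
+volumeBindingMode: WaitForFirstConsumer
+allowedTopologies:
+- matchLabelExpressions:
+  - key: failure-domain.beta.kubernetes.io/zone
+    values:
+    - us-central1-a
+    - us-central1-b
+EOF
+```
+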
+## 네트워킹
+
+쿠버네티스가 스스로 영역-인지(zone-aware) 네트워킹을 포함하지는 않는다.
+[네트워크 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)을
+사용하여 클러스터 네트워킹을 구성할 수 있으며, 해당 네트워크 솔루션에는 영역별 요소가
+있을 수 있다. 예를 들어, 클라우드 제공자가
+`type=LoadBalancer` 를 사용하여 서비스를 지원하는 경우, 로드 밸런서는 지정된 연결을 처리하는
+로드 밸런서 요소와 동일한 영역에서 실행 중인 파드로만 트래픽을 보낼 수 있다.
+자세한 내용은 클라우드 제공자의 문서를 확인한다.
+
+사용자 정의 또는 온-프레미스 배포의 경우, 비슷한 고려 사항이 적용된다.
+다른 장애 영역 처리를 포함한 {{< glossary_tooltip text="서비스" term_id="service" >}}와
+{{< glossary_tooltip text="인그레스(Ingress)" term_id="ingress" >}} 동작은
+클러스터가 설정된 방식에 따라 달라진다.
+
+## 장애 복구
+
+클러스터를 설정할 때, 한 지역의 모든 장애 영역이 동시에
+오프라인 상태가 되는 경우에도 설정에서 서비스를 복원할 수 있는지
+여부와 그 방법을 고려해야 할 수도 있다. 예를 들어, 한 영역에 파드를 실행할 수 있는
+노드가 적어도 하나 이상 있다는 것에 의존하는가?
+클러스터에 중요한 복구 작업이 클러스터 내에 정상 노드가
+적어도 하나 이상 있다는 것에 의존하지 않는지 확인한다. 예를 들어, 모든 노드가
+비정상인 경우, 하나 이상의 노드를 서비스 가능한 상태로 되돌릴 수 있을 만큼 복구를 완료할 수 있도록 특별한
+{{< glossary_tooltip text="톨러레이션(toleration)" term_id="toleration" >}}으로
+복구 작업을 실행해야 할 수 있다.
+
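+예를 들어, 다음은 모든 테인트(taint)를 허용하는 톨러레이션으로 실행되는
+복구용 파드의 스케치이다(이름과 이미지는 가정한 예시이다).
+
+```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: repair-job
+spec:
+  # 모든 테인트를 허용하므로, 비정상으로 표시된 노드에도 스케줄될 수 있다.
+  tolerations:
+  - operator: "Exists"
+  containers:
+  - name: repair
+    image: k8s.gcr.io/pause:3.2
+EOF
+```
+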
+쿠버네티스는 이 문제에 대한 답을 제공하지 않는다. 그러나,
+고려해야 할 사항이다.
+
+## {{% heading "whatsnext" %}}
+
+스케줄러가 구성된 제약 조건을 준수하면서, 클러스터에 파드를 배치하는 방법을 알아보려면,
+[스케줄링과 축출(eviction)](/ko/docs/concepts/scheduling-eviction/)을 참고한다.
diff --git a/content/ko/docs/setup/production-environment/container-runtimes.md b/content/ko/docs/setup/production-environment/container-runtimes.md
index 0437815d1c6b9..0bc616419b758 100644
--- a/content/ko/docs/setup/production-environment/container-runtimes.md
+++ b/content/ko/docs/setup/production-environment/container-runtimes.md
@@ -29,9 +29,6 @@ weight: 10
다른 운영 체제의 경우, 해당 플랫폼과 관련된 문서를 찾아보자.
{{< /note >}}
-이 가이드의 모든 명령은 `root`로 실행해야 한다.
-예를 들어,`sudo`로 접두사를 붙이거나, `root` 사용자가 되어 명령을 실행한다.
-
### Cgroup 드라이버
리눅스 배포판의 init 시스템이 systemd인 경우, init 프로세스는
@@ -74,18 +71,18 @@ kubelet을 재시작 하는 것은 에러를 해결할 수 없을 것이다.
# (도커 CE 설치)
## 리포지터리 설정
### apt가 HTTPS 리포지터리를 사용할 수 있도록 해주는 패키지 설치
-apt-get update && apt-get install -y \
+sudo apt-get update && sudo apt-get install -y \
apt-transport-https ca-certificates curl software-properties-common gnupg2
```
```shell
# 도커 공식 GPG 키 추가
-curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
```shell
# 도커 apt 리포지터리 추가.
-add-apt-repository \
+sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
@@ -93,7 +90,7 @@ add-apt-repository \
```shell
# 도커 CE 설치.
-apt-get update && apt-get install -y \
+sudo apt-get update && sudo apt-get install -y \
containerd.io=1.2.13-2 \
docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
@@ -101,7 +98,7 @@ apt-get update && apt-get install -y \
```shell
# 도커 데몬 설정
-cat > /etc/docker/daemon.json <<EOF
+cat <<EOF | sudo tee /etc/docker/daemon.json
@@ -203,17 +200,17 @@ CRI-O 메이저와 마이너 버전은 쿠버네티스 메이저와 마이너
### 선행 조건
```shell
-modprobe overlay
-modprobe br_netfilter
+sudo modprobe overlay
+sudo modprobe br_netfilter
# 요구되는 sysctl 파라미터 설정, 이 설정은 재부팅 간에도 유지된다.
-cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
+cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
@@ -235,16 +232,19 @@ sysctl --system
그런 다음, 아래를 실행한다.
```shell
-echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
-echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
+cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
+deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
+EOF
+cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
+deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
+EOF
-cat > /etc/modules-load.d/containerd.conf <<EOF
+cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
-cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
+cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
-mkdir -p /etc/containerd
-containerd config default > /etc/containerd/config.toml
+sudo mkdir -p /etc/containerd
+sudo containerd config default > /etc/containerd/config.toml
```
```shell
# containerd 재시작
-systemctl restart containerd
+sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="CentOS/RHEL 7.4+" %}}
@@ -414,30 +414,30 @@ systemctl restart containerd
# (containerd 설치)
## 리포지터리 설정
### 필요한 패키지 설치
-yum install -y yum-utils device-mapper-persistent-data lvm2
+sudo yum install -y yum-utils device-mapper-persistent-data lvm2
```
```shell
## 도커 리포지터리 추가
-yum-config-manager \
+sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
```
```shell
## containerd 설치
-yum update -y && yum install -y containerd.io
+sudo yum update -y && sudo yum install -y containerd.io
```
```shell
## containerd 설정
-mkdir -p /etc/containerd
-containerd config default > /etc/containerd/config.toml
+sudo mkdir -p /etc/containerd
+sudo containerd config default > /etc/containerd/config.toml
```
```shell
# containerd 재시작
-systemctl restart containerd
+sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="윈도우 (PowerShell)" %}}
diff --git a/content/ko/docs/setup/production-environment/tools/_index.md b/content/ko/docs/setup/production-environment/tools/_index.md
index 5beb1d5a9da43..9b334ba2621b1 100644
--- a/content/ko/docs/setup/production-environment/tools/_index.md
+++ b/content/ko/docs/setup/production-environment/tools/_index.md
@@ -1,4 +1,4 @@
---
-title: Installing Kubernetes with deployment tools
+title: 배포 도구로 쿠버네티스 설치하기
weight: 30
---
diff --git a/content/ko/docs/setup/production-environment/tools/kops.md b/content/ko/docs/setup/production-environment/tools/kops.md
index 0df78ff5804f8..4ec5386d2f1bb 100644
--- a/content/ko/docs/setup/production-environment/tools/kops.md
+++ b/content/ko/docs/setup/production-environment/tools/kops.md
@@ -198,7 +198,7 @@ kops는 클러스터에 사용될 설정을 생성할것이다. 여기서 주의
만약 kops사용이 처음이라면, 얼마 걸리지 않으니 이들을 시험해 본다. 인스턴스 그룹은
쿠버네티스 노드로 등록된 인스턴스의 집합을 말한다. AWS상에서는 auto-scaling-groups를
-통해 만들어진다. 사용자는 여러개의 인스턴스 그룹을 관리할 수 있는데,
+통해 만들어진다. 사용자는 여러 개의 인스턴스 그룹을 관리할 수 있는데,
예를 들어, spot과 on-demand 인스턴스 조합 또는 GPU 와 non-GPU 인스턴스의 조합으로 구성할 수 있다.
diff --git a/content/ko/docs/setup/production-environment/tools/kubeadm/self-hosting.md b/content/ko/docs/setup/production-environment/tools/kubeadm/self-hosting.md
new file mode 100644
index 0000000000000..9763c7decdf62
--- /dev/null
+++ b/content/ko/docs/setup/production-environment/tools/kubeadm/self-hosting.md
@@ -0,0 +1,67 @@
+---
+reviewers:
+title: 컨트롤 플레인을 자체 호스팅하기 위해 쿠버네티스 클러스터 구성하기
+content_type: concept
+weight: 100
+---
+
+
+
+### 쿠버네티스 컨트롤 플레인 자체 호스팅하기 {#self-hosting}
+
+kubeadm은 실험적으로 _자체 호스팅_ 된 쿠버네티스 컨트롤 플레인을 만들 수 있도록
+해준다. API 서버, 컨트롤러 매니저 및 스케줄러와 같은 주요 구성 요소가 정적(static) 파일을
+통해 kubelet에 구성된 [스태틱(static) 파드](/docs/tasks/configure-pod-container/static-pod/)
+대신 쿠버네티스 API를 통해 구성된 [데몬셋(DaemonSet) 파드](/docs/concepts/workloads/controllers/daemonset/)
+로 실행된다.
+
+자체 호스팅된 클러스터를 만들려면 [kubeadm alpha selfhosting pivot](/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-selfhosting)
+명령어를 확인한다.
+
+
+
+#### 주의사항
+
+{{< caution >}}
+이 기능은 클러스터를 지원되지 않는 상태로 전환하여 더 이상 클러스터를 관리할 수 없게 만든다.
+여기에는 `kubeadm upgrade`도 포함된다.
+{{< /caution >}}
+
+1. 1.8 이후 버전에서 자체 호스팅은 몇 가지 중요한 한계가 있다.
+ 특히 자체 호스팅된 클러스터는 수동 조정 없이는
+ _컨트롤 플레인 노드를 재부팅하고 나서 복구할 수 없다._
+
+1. 기본적으로 자체 호스팅된 컨트롤 플레인 파드는
+ [`hostPath`](/docs/concepts/storage/volumes/#hostpath) 볼륨에서 불러 온
+ 자격 증명에 의존한다. 초기 생성을 제외하고, 이러한 자격 증명은 kubeadm에 의해
+ 관리되지 않는다.
+
+1. 컨트롤 플레인의 자체 호스팅된 부분에는 스태틱 파드로 실행되는 etcd가
+ 포함되지 않는다.
+
+#### 프로세스
+
+자체 호스팅 부트스트랩 프로세스는 [kubeadm 설계
+문서](https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.9.md#optional-self-hosting)에 기록되어 있다.
+
+요약하면 `kubeadm alpha selfhosting`은 다음과 같이 작동한다.
+
+ 1. 부트스트랩 스태틱 컨트롤 플레인이 실행되고 정상 상태가 될 때까지 기다린다.
+ 이것은 자체 호스팅이 없는 `kubeadm init` 프로세스와 동일하다.
+
+ 1. 스태틱 컨트롤 플레인 파드 매니페스트를 사용하여 자체 호스팅된 컨트롤
+ 플레인을 실행할 데몬셋 매니페스트 집합을 구성한다. 또한 필요한 경우
+ 해당 매니페스트를 수정한다. 예를 들어, 시크릿을 위한 새로운 볼륨을
+ 추가한다.
+
+ 1. `kube-system` 네임스페이스에 데몬셋을 생성하고 결과 파드가 실행될 때까지
+ 대기한다.
+
+ 1. 일단 자체 호스팅된 파드가 동작하면 관련 스태틱 파드가 삭제되고
+ kubeadm은 계속해서 다음 구성 요소를 설치한다.
+ 이것은 kubelet이 스태틱 파드를 멈추게 한다.
+
+ 1. 기존의 컨트롤 플레인이 멈추면 새롭게 자체 호스팅된 컨트롤 플레인은
+    리스닝 포트에 바인딩하고 활성화될 수 있다.
+
+
diff --git a/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index 1aea3d6ccfac6..b41120f7bec57 100644
--- a/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -259,7 +259,7 @@ CSI 노드 플러그인(특히 블록 디바이스 또는 공유 파일시스템
오버 프로비저닝을 방지하는 모범 사례는 윈도우, 도커 및 쿠버네티스 프로세스를 고려하여 최소 2GB의 시스템 예약 메모리로 kubelet을 구성하는 것이다.
-플래그의 동작은 아래에 설명된대로 다르게 동작한다.
+플래그의 동작은 아래에 설명된 대로 다르게 동작한다.
* `--kubelet-reserve`, `--system-reserve`, `--eviction-hard` 플래그는 Node Allocatable 업데이트
* `--enforce-node-allocable`을 사용한 축출(Eviction)은 구현되지 않았다.
@@ -570,7 +570,7 @@ PodSecurityContext 필드는 윈도우에서 작동하지 않는다. 참조를
Get-NetAdapter | ? Name -Like "vEthernet (Ethernet*"
```
- 호스트 네트워크 어댑터가 "Ethernet"이 아닌 경우, 종종 start.ps1 스크립트의 [InterfaceName](https://github.com/microsoft/SDN/blob/master/Kubernetes/flannel/start.ps1#L6) 파라미터를 수정하는 것이 좋다. 그렇지 않으면 `start-kubelet.ps1` 스크립트의 출력을 참조하여 가상 네트워크 생성 중에 오류가 있는지 확인한다.
+ 호스트 네트워크 어댑터가 "Ethernet"이 아닌 경우, 종종 start.ps1 스크립트의 [InterfaceName](https://github.com/microsoft/SDN/blob/master/Kubernetes/flannel/start.ps1#L7) 파라미터를 수정하는 것이 좋다. 그렇지 않으면 `start-kubelet.ps1` 스크립트의 출력을 참조하여 가상 네트워크 생성 중에 오류가 있는지 확인한다.
1. 내 파드가 "Container Creating"에서 멈췄거나 계속해서 다시 시작된다.
diff --git a/content/ko/docs/setup/release/notes.md b/content/ko/docs/setup/release/notes.md
new file mode 100644
index 0000000000000..9fec5907b946c
--- /dev/null
+++ b/content/ko/docs/setup/release/notes.md
@@ -0,0 +1,2603 @@
+---
+title: v1.19 릴리스 노트
+weight: 10
+card:
+ name: release-notes
+ weight: 20
+ anchors:
+ - anchor: "#"
+ title: 현재 릴리스 노트
+ - anchor: "#긴급-업그레이드-노트"
+ title: 긴급 업그레이드 노트
+---
+
+
+
+# v1.19.0
+
+[문서](https://docs.k8s.io)
+
+## v1.19.0 다운로드
+
+파일명 | sha512 해시
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes.tar.gz) | `448b941e973a519a500eb24786f6deb7eebd0e1ecb034941e382790ff69dfc2838715a222cfc53bea7b75f2c6aedc7425eded4aad69bf88773393155c737f9c0`
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-src.tar.gz) | `47d253e6eb1f6da730f4f3885e205e6bfde88ffe66d92915465108c9eaf8e3c5d1ef515f8bf804a726db057433ecd25008ecdef624ee68ad9c103d1c7a615aad`
+
+### 클라이언트 바이너리
+
+파일명 | sha512 해시
+-------- | -----------
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-client-darwin-amd64.tar.gz) | `7093a34298297e46bcd1ccb77a9c83ca93b8ccb63ce2099d3d8cd8911ccc384470ac202644843406f031c505a8960d247350a740d683d8910ca70a0b58791a1b`
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-client-linux-386.tar.gz) | `891569cff7906732a42b20b86d1bf20a9fe873f87b106e717a5c0f80728b5823c2a00c7ccea7ec368382509f095735089ddd582190bc51dcbbcef6b8ebdbd5cc`
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-client-linux-amd64.tar.gz) | `1590d4357136a71a70172e32820c4a68430d1b94cf0ac941ea17695fbe0c5440d13e26e24a2e9ebdd360c231d4cd16ffffbbe5b577c898c78f7ebdc1d8d00fa3`
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-client-linux-arm.tar.gz) | `bc0fb19fb6af47f591adc64b5a36d3dffcadc35fdfd77a4a222e037dbd2ee53fafb84f13c4e307910cfa36b3a46704063b42a14ceaad902755ec14c492ccd51d`
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-client-linux-arm64.tar.gz) | `6ff47f4fdfb3b5f2bfe18fd792fe9bfc747f06bf52de062ee803cda87ac4a98868d8e1211742e32dd443a4bdb770018bbdde704dae6abfc6d80c02bdfb4e0311`
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-client-linux-ppc64le.tar.gz) | `d8816518adc3a7fc00f996f23ff84e6782a3ebbba7ef37ba44def47b0e6506fefeeaf37d0e197cecf0deb5bd1a8f9dd1ba82af6c29a6b9d21b8e62af965b6b81`
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-client-linux-s390x.tar.gz) | `662fd4618f2b747d2b0951454b9148399f6cd25d3ca7c40457b6e02cb20df979138cad8cccd18fc8b265d9426c90828d3f0b2a6b40d9cd1a1bdc17219e35ed33`
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-client-windows-386.tar.gz) | `d90cb92eb33ecbfab7a0e3a2da60ab10fc59132e4bc9abe0a1461a13222b5016704a7cfe0bf9bcf5d4ec55f505ffbbf53162dfe570e8f210e3f68b0d3a6bf7e3`
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-client-windows-amd64.tar.gz) | `6ec32a8a62b69363a524c4f8db765ff4bd16ea7e5b0eb04aa5a667f8653eda18c357a97513d9e12f0ba1612516acb150deffb6e3608633c62b97a15b6efa7cc0`
+
+### 서버 바이너리
+
+파일명 | sha512 해시
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-server-linux-amd64.tar.gz) | `7c268bd58e67d3c5016f3fcc9f4b6d2da7558af5a2c708ff3baf767b39e847e3d35d4fd2fa0f640bedbfb09a445036cafbe2f04357a88dada405cfc2ded76972`
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-server-linux-arm.tar.gz) | `fcbf8d9004f1cd244a82b685abaf81f9638c3cc1373d78e705050042cfa6a004f8eed92f4721539dcd169c55b662d10416af19cff7537a8dfef802dc41b4088b`
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-server-linux-arm64.tar.gz) | `e21f54a35ff29e919e98fe81758f654ea735983d5a9d08dab9484598b116843830a86ceb5cf0a23d27b7f9aba77e5f0aa107c171a0837ba781d508ebbea76f55`
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-server-linux-ppc64le.tar.gz) | `c7014c782683f8f612c7805654b632aab4c5dce895ee8f9ef24360616e24240ce59ddf3cf27c3170df5450d8fe14fbca3fb7cddfc9b74ae37943081f0fa4b6b3`
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-server-linux-s390x.tar.gz) | `3ac2d6b273e5b650f63260aae164fc6781ad5760f63cca911f5db9652c4bf32e7e7b25728987befc6dfda89c5c56969681b75f12b17141527d4e1d12f3d41f3c`
+
+### 노드 바이너리
+
+파일명 | sha512 해시
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-node-linux-amd64.tar.gz) | `d5e21432a4ab019f00cd1a52bbbdb00feb3db2ce96b41a58b1ee27d8847c485f5d0efe13036fd1155469d6d15f5873a5a892ecc0198f1bae1bf5b586a0129e75`
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-node-linux-arm.tar.gz) | `bd57adf060813b06be2b33439d6f60d13630c0251ef96ba473274073200ea118f5622ec31ed714cc57bd9da410655e958a7700a5742ae7e4b6406ab12fbf21f3`
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-node-linux-arm64.tar.gz) | `3ee70abc0a5cbf1ef5dde0d27055f4d17084585c36a2cf41e3fd925d206df0b583f50dc1c118472f198788b65b2c447aa40ad41646b88791659d2dfb69b3890b`
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-node-linux-ppc64le.tar.gz) | `0f4368f229c082b2a75e7089a259e487d60b20bc8edf650dd7ca0fe23c51632397c2ef24c9c6cef078c95fce70d9229a5b4ff682c34f65a44bc4be3329c8ccde`
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-node-linux-s390x.tar.gz) | `8f0b6839fc0ad51300221fa7f32134f8c687073715cc0839f7aacb21a075c66dab113369707d03e9e0e53be62ca2e1bdf04d4b26cff805ae9c7a5a4b864e3eae`
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0/kubernetes-node-windows-amd64.tar.gz) | `587651158c9999e64e06186ef2e65fe14d46ffdae28c5d8ee6261193bfe4967717f997ebe13857fa1893bbf492e1cc1f816bce86a94c6df9b7a0264848391397`
+
+## v1.18.0 이후 체인지로그
+
+## 새로운 소식 (주요 테마)
+
+### 사용 중단 경고
+
+SIG API Machinery는 `kubectl` 사용자 및 API 사용자에게 표시되는 [사용 중단된 API 사용 시 경고](https://kubernetes.io/docs/reference/using-api/deprecation-policy/#rest-resources-aka-api-objects)와
+클러스터 관리자에게 표시되는 메트릭을 구현하였다.
+사용 중단된 API에 대한 요청은 대상 제거 릴리스 및 대체 API를 포함하는 경고와 함께 반환된다.
+경고는 [어드미션 웹훅](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admissionreview-response-warning)에서도
+반환될 수 있으며, [사용자 정의 리소스의 사용 중단된 버전](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation)에 대해 지정된다.
+
+### 영구적인 베타(beta) 회피하기
+
+쿠버네티스 1.20에서부터, SIG Architecture는 9개월 이내에 모든 REST API들을 베타에서 안정 버전으로 전환하는 새로운 정책을 구현한다. 새로운 정책의 이면에 있는 아이디어는 기능이 오랫동안 베타 상태로 유지되는 것을 피하기 위함이다. 일단 새로운 API가 베타 단계에 진입하면, 9개월 동안 다음 중 하나를 수행하게 된다.
+
+ - GA에 도달하고 베타를 사용 중단하거나, 혹은
+ - 새로운 베타 버전을 만든다 _(그리고 이전 베타를 사용 중단한다)_.
+
+REST API의 9개월 카운트다운이 끝나면, 다음 쿠버네티스 릴리스에서 해당 API 버전이 사용 중단된다. 더 자세한 내용은 [쿠버네티스 블로그](https://kubernetes.io/blog/2020/08/21/moving-forward-from-beta/)에서 확인할 수 있다.
+
+### 워크로드 및 노드 디버깅을 위한 확장된 CLI 지원
+
+SIG CLI는 `kubectl` 로 디버깅을 확장하여, 다음의 두 가지 새로운 디버깅 워크플로(workflows)를 지원한다. 복사본을 생성하여 워크로드를 디버깅하고, 호스트 네임스페이스에 컨테이너를 생성하여 노드를 디버깅한다. 이는 아래의 작업을 할 때 유용하다.
+ - 임시(Ephemeral) 컨테이너가 활성화되지 않은 클러스터에서 디버그 컨테이너 삽입
+ - busybox와 같은 이미지나 `sleep 1d` 와 같은 명령어를 변경하여 쉽게 디버깅할 수 있도록, 손상된 컨테이너를 수정하여 `kubectl exec` 를 사용할 수 있는 시간 확보
+ - 노드의 호스트 파일시스템에서 구성 파일 검사
+
+이러한 새로운 워크플로에는 신규 클러스터 기능이 필요하지 않으므로, `kubectl alpha debug` 를 통해 기존 클러스터로 테스트할 수 있다. `kubectl` 을 사용한 디버깅에 대해 사용자의 의견을 기대한다. 이슈를 열거나, [#sig-cli](https://kubernetes.slack.com/messages/sig-cli)를 방문하거나 기능 개선에 대한 이슈 [#1441](https://features.k8s.io/1441)에 코멘트를 남겨서 SIG CLI와 연락할 수 있다.
+
+### 구조화된 로깅
+
+SIG Instrumentation은 로그 메시지의 구조와 쿠버네티스 오브젝트에 대한 참조를 표준화했다. 구조화된 로깅을 사용하면 로그를 더 쉽게 구문 분석, 처리, 저장, 질의 및 분석할 수 있다. klog 라이브러리의 새 메소드는 로그 메시지 구조를 강제 적용한다.
+
+### 엔드포인트슬라이스(EndpointSlices)가 기본적으로 활성화
+
+엔드포인트슬라이스는 엔드포인트(Endpoint) API에 대한 확장성(scalability)과 확장 가능성(extensibility)을 갖춘 훌륭한 신규 대안 API이다. 엔드포인트슬라이스는 서비스를 지원하는 파드에 대한 IP 주소, 포트, 준비성 게이트(readiness gate) 상태 및 토폴로지 정보를 추적한다.
+
+쿠버네티스 1.19에서 이 기능은, 엔드포인트 대신 엔드포인트슬라이스에서 읽는 kube-proxy와 함께 기본적으로 활성화된다. 이는 대부분 눈에 띄지 않는 변경이지만, 대규모 클러스터에서는 눈에 띄게 확장성이 향상된다. 또한 토폴로지 인식 라우팅과 같은 향후 쿠버네티스 릴리스에서 중요한 신규 기능을 사용할 수 있다.
+
+### 인그레스(ingress)를 안정 기능(General Availability)으로 전환
+
+SIG Network는 널리 사용되고 있는 [인그레스 API](https://kubernetes.io/ko/docs/concepts/services-networking/ingress/)를 쿠버네티스 1.19의 안정 기능으로 전환했다. 이 변경은 쿠버네티스 기여자들의 수 년간의 노력을 인정하고, 쿠버네티스의 향후 네트워킹 API에 대한 추가 작업을 위한 기반을 마련한다.
+
+### seccomp를 안정 기능으로 전환
+
+쿠버네티스에 대한 seccomp(보안 컴퓨팅 모드) 지원이 안정 기능(GA)로 전환되었다. 이 기능은 파드(모든 컨테이너에 적용) 또는 단일 컨테이너에 대한 시스템 호출을 제한하여 워크로드 보안을 강화하는 데 사용될 수 있다.
+
+기술적으로 이것은 일급 객체(first class)인 `seccompProfile` 필드가 파드 및 컨테이너 `securityContext` 오브젝트에 추가되었음을 의미한다.
+
+```yaml
+securityContext:
+ seccompProfile:
+ type: RuntimeDefault|Localhost|Unconfined # 셋 중 하나를 선택
+ localhostProfile: my-profiles/profile-allow.json # type == Localhost 일때만 필요
+```
+
+`seccomp.security.alpha.kubernetes.io/pod` 및 `container.seccomp.security.alpha.kubernetes.io/...` 어노테이션에 대한 지원은 이제 사용 중단되었으며, 쿠버네티스 v1.22.0에서 제거된다. 현재 자동 버전 차이(skew)에 대한 처리는 새로운 필드를 어노테이션으로 변환하며 그 반대의 경우도 동일하게 동작한다. 즉, 클러스터의 기존 워크로드를 변환하는데 별도의 조치가 필요하지 않다.
+
+새로운 [Kubernetes.io의 문서 페이지][seccomp-docs]에서 seccomp로 컨테이너 시스템 호출을 제한하는 방법에 대한 자세한 정보를 찾을 수 있다.
+
+[seccomp-docs]: https://kubernetes.io/docs/tutorials/clusters/seccomp/
+
+
+### 운영용 이미지가 커뮤니티 제어로 이동
+
+쿠버네티스 v1.19부터 쿠버네티스 컨테이너 이미지는 `{asia,eu,us}.gcr.io/k8s-artifacts-prod` 에 있는
+커뮤니티 제어 기반의 스토리지 버킷에 저장된다. `k8s.gcr.io` 가상 도메인(vanity domain)이 새 버킷으로
+업데이트되었다. 이는 커뮤니티 제어 하에 운영 아티팩트(production artifacts)를 가져온다.
+
+### KubeSchedulerConfiguration를 베타로 전환
+
+SIG Scheduling은 `KubeSchedulerConfiguration` 를 베타로 전환했다. [KubeSchedulerConfiguration](https://kubernetes.io/docs/reference/scheduling/config) 기능을 사용하면 kube-scheduler의 알고리즘 및 기타 설정을 조정할 수 있다. 남은 구성을 다시 작성하지 않고도, 선택한 일정 단계에서 특정 기능(플러그인에 포함된)을 쉽게 활성화하거나 비활성화할 수 있다. 또한 단일 kube-scheduler 인스턴스는 프로파일이라고 불리는 다양한 구성을 제공한다. 파드는 `.spec.schedulerName` 필드를 통해 스케줄링하려는 프로파일을 선택할 수 있다.
+
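+다음은 두 개의 프로파일을 정의하는 구성 파일의 스케치이다
+(파일 경로와 `no-scoring-scheduler` 프로파일 이름은 가정한 예시이며,
+이 파일은 kube-scheduler의 `--config` 플래그로 전달한다).
+
+```shell
+cat <<EOF | sudo tee /etc/kubernetes/scheduler-config.yaml
+apiVersion: kubescheduler.config.k8s.io/v1beta1
+kind: KubeSchedulerConfiguration
+profiles:
+- schedulerName: default-scheduler
+- schedulerName: no-scoring-scheduler
+  plugins:
+    # 이 프로파일에서는 모든 스코어링 플러그인을 비활성화한다.
+    score:
+      disabled:
+      - name: '*'
+EOF
+```
+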
+### CSI 마이그레이션 - AzureDisk 및 vSphere (베타)
+
+인-트리(In-tree) 볼륨 플러그인 및 모든 클라우드 공급자의 의존성이 쿠버네티스 코어 밖으로 이동된다. CSI 마이그레이션 기능을 사용하면 모든 볼륨 작업을 해당 CSI 드라이버로 라우팅하여 코드가 제거된 경우에도 레거시 API를 사용하는 기존 볼륨이 계속 동작할 수 있다. 이 기능의 AzureDisk 및 vSphere 구현이 베타로 승격되었다.
+
+### 스토리지 용량 추적
+
+전통적으로 쿠버네티스 스케줄러는 추가 퍼시스턴트 스토리지가 클러스터의 모든 곳에서 사용할 수 있고 그 용량이 무한하다고 가정했다. 토폴로지 제약 사항은 첫 번째 요점을 다루었지만, 남은 스토리지 용량이 새 파드를 시작하기에 충분하지 않을 수 있다는 점을 고려하지 않은 채로 파드 스케줄링이 수행되었다. 새로운 알파(alpha) 기능인 [스토리지 용량 추적](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking)은 CSI 드라이버에 API를 추가하여, 스토리지 용량을 전달하고 파드의 노드를 선택할 때 쿠버네티스 스케줄러에서 해당 정보를 사용한다. 이 기능은 용량이 더 제한된 로컬 볼륨 및 기타 볼륨 유형에 대한 동적 프로비저닝을 지원하기 위한 디딤돌 역할이다.
+
+### CSI 볼륨 상태 모니터링
+
+CSI 상태 모니터링의 알파 버전은 쿠버네티스 1.19와 함께 릴리스된다. 이 기능을 사용하면 CSI 드라이버가 기본 스토리지 시스템의 비정상적인 볼륨 조건을 쿠버네티스와 공유하여 PVC 또는 파드의 이벤트를 전달할 수 있다. 이 기능은 쿠버네티스 개별 볼륨 상태 문제를 프로그래밍 방식으로 감지하고 해결하기 위한 디딤돌 역할이다.
+
+### Generic ephemeral volumes
+
+Kubernetes provides volume plugins whose lifecycle is tied to a Pod and which can be used as scratch space (for example, the built-in "empty dir" volume type) or to load some data into a Pod (for example, the built-in ConfigMap and Secret volume types, or "CSI inline volumes"). The new [generic ephemeral volumes](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1698-generic-ephemeral-volumes) alpha feature allows any existing storage driver that supports dynamic provisioning to be used as an ephemeral volume whose lifecycle is bound to the Pod. As sketched after this list, it can be used to:
+ - provide scratch storage that is different from the root disk, for example persistent memory or a separate local disk on that node
+ - support all StorageClass parameters for volume provisioning
+ - support all features supported by PersistentVolumeClaims, such as storage capacity tracking, snapshots and restore, and volume resizing
+
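+A minimal Pod sketch (the StorageClass name `scratch` is illustrative, and the alpha `GenericEphemeralVolume` feature gate must be enabled):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: scratch-demo
+spec:
+  containers:
+    - name: app
+      image: busybox
+      command: ["sleep", "3600"]
+      volumeMounts:
+        - mountPath: /scratch
+          name: scratch-volume
+  volumes:
+    - name: scratch-volume
+      ephemeral:                  # provisioned as a PVC owned by the Pod
+        volumeClaimTemplate:
+          spec:
+            accessModes: ["ReadWriteOnce"]
+            storageClassName: scratch
+            resources:
+              requests:
+                storage: 1Gi
+```
+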
+### Immutable Secrets and ConfigMaps (beta)
+
+Secret and ConfigMap volumes can be marked as immutable (see the sketch below), which significantly reduces load on the API server in clusters with many Secret and ConfigMap volumes.
+See [ConfigMap](https://kubernetes.io/ko/docs/concepts/configuration/configmap/) and [Secret](https://kubernetes.io/ko/docs/concepts/configuration/secret/) for more information.
+
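+For example, marking a ConfigMap immutable is a single top-level field (the name and data here are illustrative):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: app-config
+immutable: true        # kubelets stop watching it, reducing API server load
+data:
+  LOG_LEVEL: "info"
+```
+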
+### CSI proxy for Windows
+
+The CSI proxy for Windows graduates to beta with the 1.19 release. The CSI proxy enables CSI drivers to run on Windows by letting containers on Windows perform privileged storage operations. In beta, the CSI proxy for Windows supports storage drivers that use direct-attached disks and SMB.
+
+### Dashboard v2
+
+SIG UI has released v2 of the Kubernetes Dashboard add-on. You can find the latest release in the [kubernetes/dashboard](https://github.com/kubernetes/dashboard/releases) repository. The Kubernetes Dashboard now includes CRD support, new translations, and an updated version of AngularJS.
+
+### Windows containerd support graduates to beta
+
+Support for Windows containers on containerd, first introduced in Kubernetes 1.18, graduates to beta in this release. This includes added support for Windows Server version 2004 (full version compatibility can be found in the [Windows documentation](https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#cri-containerd)).
+
+SIG Windows also made several additions in this release:
+ - support for Direct Server Return (DSR) mode, which lets large numbers of Services scale efficiently
+ - Windows containers now honor CPU limits
+ - performance improvements for metrics and summary stats collection
+
+### Increasing the Kubernetes support window to one year
+
+Starting with Kubernetes 1.19, bug-fix support via patch releases for a Kubernetes minor release has increased from 9 months to 1 year.
+
+A survey conducted in early 2019 by the Long Term Support (LTS) Working Group showed that a significant subset of Kubernetes end-users fail to upgrade within the previous 9-month support period.
+A yearly support period provides the cushion end-users appear to desire, and is more in harmony with familiar annual planning cycles.
+
+## Known Issues
+
+The new storage capacity tracking alpha feature is known to be affected by a limitation of the WaitForFirstConsumer volume binding mode; see [#94217](https://github.com/kubernetes/kubernetes/issues/94217).
+
+## Urgent Upgrade Notes
+
+### (No, really, you MUST read this before you upgrade)
+
+- ACTION REQUIRED: Switched the core master base image (kube-controller-manager) from debian to distroless. If you rely on Flex volume support that uses scripts, build your own image with the required packages (such as bash). ([#91329](https://github.com/kubernetes/kubernetes/pull/91329), [@dims](https://github.com/dims)) [SIG Cloud Provider, Release, Storage and Testing]
+- ACTION REQUIRED: Support for basic authentication via the --basic-auth-file flag has been removed. Users should migrate to the --token-auth-file flag for similar functionality. ([#89069](https://github.com/kubernetes/kubernetes/pull/89069), [@enj](https://github.com/enj)) [SIG API Machinery]
+ - The Azure blob disk feature (`kind`: `Shared`, `Dedicated`) has been deprecated; you should use `kind`: `Managed` in the `kubernetes.io/azure-disk` storage class. ([#92905](https://github.com/kubernetes/kubernetes/pull/92905), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+ - CVE-2020-8559 (Medium): privilege escalation from a compromised node to the cluster. See https://github.com/kubernetes/kubernetes/issues/92914 for more details.
+   The API server will no longer proxy non-101 responses for upgrade requests. This could break proxied backends (such as an extension API server) that respond to upgrade requests with a non-101 response code. ([#92941](https://github.com/kubernetes/kubernetes/pull/92941), [@tallclair](https://github.com/tallclair)) [SIG API Machinery]
+ - Kubeadm no longer sets the deprecated '--cgroup-driver' flag in /var/lib/kubelet/kubeadm-flags.env; it is set in the kubelet's config.yaml file instead. If you have this flag in the /var/lib/kubelet/kubeadm-flags.env or /etc/default/kubelet (/etc/sysconfig/kubelet for RPMs) file, remove it and specify the value using KubeletConfiguration instead. ([#90513](https://github.com/kubernetes/kubernetes/pull/90513), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+ - Kubeadm now respects and uses the etcd version specified by the user in the ClusterConfiguration. Users who do not want to stick to the version pinned in the ClusterConfiguration should edit their kubeadm-config ConfigMap and delete it. ([#89588](https://github.com/kubernetes/kubernetes/pull/89588), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+ - Kubeadm now respects a user-specified resolvConf value even when the systemd-resolved service is active. Kubeadm no longer sets the '--resolv-conf' flag in the /var/lib/kubelet/kubeadm-flags.env file. If you have this flag in the /var/lib/kubelet/kubeadm-flags.env or /etc/default/kubelet (/etc/sysconfig/kubelet for RPMs) file, remove it and specify the value using KubeletConfiguration instead. ([#90394](https://github.com/kubernetes/kubernetes/pull/90394), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+ - Kubeadm: the "kubeadm init phase kubelet-start" phase was moved to a later point in the init workflow, after the "kubeconfig" phase. This makes kubeadm start the kubelet only after the KubeletConfiguration configuration file has been created, and fixes a problem where init systems like OpenRC could crashloop the kubelet service.
+ - The 'kubeadm config upload' command has finally been removed after a full GA deprecation cycle. If you still need it, use 'kubeadm init phase upload-config' instead. ([#92610](https://github.com/kubernetes/kubernetes/pull/92610), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+ - Upgrading kubescheduler.config.k8s.io/v1alpha2 to kubescheduler.config.k8s.io/v1beta1:
+
+ - `.bindTimeoutSeconds` was moved as part of the plugin args for the `VolumeBinding` plugin and
+   can be configured separately per [profile](#profiles) (see the sketch after this list).
+ - `.extenders` was updated to conform to API standards. Note that:
+   - `.extenders` decoding is case-sensitive; all fields are affected.
+   - `.extenders[*].httpTimeout` is of type `metav1.Duration`.
+   - `.extenders[*].enableHttps` was renamed to `.extenders[*].enableHTTPS`.
+ - `RequestedToCapacityRatio` args decoding is case-sensitive; all fields are affected.
+ - The `DefaultPodTopologySpread` [plugin](#scheduling-plugins) was renamed to `SelectorSpread`.
+ - The `Unreserve` extension point was removed from the profile definition. All `Reserve`
+   plugins implement an `Unreserve` call.
+ - `.disablePreemption` was removed. Users can disable preemption by disabling the
+   "DefaultPreemption" PostFilter plugin. ([#91420](https://github.com/kubernetes/kubernetes/pull/91420), [@pancernik](https://github.com/pancernik)) [SIG Scheduling]
+
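+As a sketch of the v1beta1 shape described above (the extender URL, verb, and timeout values are illustrative):
+
+```yaml
+apiVersion: kubescheduler.config.k8s.io/v1beta1
+kind: KubeSchedulerConfiguration
+profiles:
+  - schedulerName: default-scheduler
+    pluginConfig:
+      - name: VolumeBinding
+        args:
+          bindTimeoutSeconds: 600   # formerly the top-level .bindTimeoutSeconds
+extenders:
+  - urlPrefix: "https://extender.example.com:443"
+    filterVerb: filter
+    httpTimeout: 30s                # metav1.Duration encoding
+    enableHTTPS: true               # renamed from enableHttps
+```
+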
+## Changes by Kind
+
+### Deprecation
+
+- Support for in-tree vSphere volume migration to the vSphere CSI driver has been added. The in-tree vSphere volume plugin is deprecated and will be removed in a future release.
+
+  Users self-deploying Kubernetes on vSphere should enable the CSIMigration + CSIMigrationvSphere features and install the vSphere CSI driver (https://github.com/kubernetes-sigs/vsphere-csi-driver) to avoid disruption to existing Pod and PVC objects. Users should start using the vSphere CSI driver directly for any new volumes.
+
+  CSI migration for vSphere volumes requires a minimum vSphere vCenter/ESXi version of 7.0u1 and a minimum hardware version of VM version 15.
+
+  The vSAN raw policy parameters are deprecated in the in-tree vSphere volume plugin and will be removed in a future release. ([#90911](https://github.com/kubernetes/kubernetes/pull/90911), [@divyenpatel](https://github.com/divyenpatel)) [SIG API Machinery, Node and Storage]
+- Apiextensions.k8s.io/v1beta1 is deprecated in favor of apiextensions.k8s.io/v1. ([#90673](https://github.com/kubernetes/kubernetes/pull/90673), [@deads2k](https://github.com/deads2k)) [SIG API Machinery]
+- Apiregistration.k8s.io/v1beta1 is deprecated in favor of apiregistration.k8s.io/v1. ([#90672](https://github.com/kubernetes/kubernetes/pull/90672), [@deads2k](https://github.com/deads2k)) [SIG API Machinery]
+- Authentication.k8s.io/v1beta1 and authorization.k8s.io/v1beta1 are deprecated in 1.19 in favor of the v1 level and will be removed in 1.22. ([#90458](https://github.com/kubernetes/kubernetes/pull/90458), [@deads2k](https://github.com/deads2k)) [SIG API Machinery and Auth]
+- Autoscaling/v2beta1 is deprecated in favor of autoscaling/v2beta2. ([#90463](https://github.com/kubernetes/kubernetes/pull/90463), [@deads2k](https://github.com/deads2k)) [SIG Autoscaling]
+- Coordination.k8s.io/v1beta1 is deprecated in 1.19 and will be removed in 1.22; use v1 instead. ([#90559](https://github.com/kubernetes/kubernetes/pull/90559), [@deads2k](https://github.com/deads2k)) [SIG Scalability]
+- Ensure that nodeExpansion CSI calls include the volume capability and staging target fields.
+
+  Calling NodeExpandVolume between NodeStage and NodePublish is deprecated for CSI volumes. CSI drivers must support NodeExpandVolume being called after NodePublish if they have the node EXPAND_VOLUME capability. ([#86968](https://github.com/kubernetes/kubernetes/pull/86968), [@gnufied](https://github.com/gnufied)) [SIG Storage]
+- Feature: Azure disk migration graduates to beta in 1.19. The CSIMigration feature gate moves to beta (on by default), and CSIMigrationAzureDisk moves to beta (off by default, since it requires the AzureDisk CSI driver to be installed).
+  The in-tree AzureDisk plugin "kubernetes.io/azure-disk" is now deprecated and will be removed in 1.23. Users should enable the CSIMigration + CSIMigrationAzureDisk features and install the AzureDisk CSI driver (https://github.com/kubernetes-sigs/azuredisk-csi-driver) so that existing Pod and PVC objects are not disrupted at that time.
+  Users should start using the AzureDisk CSI driver directly for all new volumes. ([#90896](https://github.com/kubernetes/kubernetes/pull/90896), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Kube-apiserver: the componentstatus API is deprecated. This API provided the status of the etcd, kube-scheduler, and kube-controller-manager components, but only worked when those components were local to the API server and when kube-scheduler and kube-controller-manager exposed unsecured health endpoints. Instead of this API, etcd health is included in the kube-apiserver health check, and kube-scheduler/kube-controller-manager health checks can be made directly against those components' health endpoints. ([#93570](https://github.com/kubernetes/kubernetes/pull/93570), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Apps and Cluster Lifecycle]
+- Kubeadm: the `kubeadm config view` command is deprecated and will be removed in a future release. Use `kubectl get cm -o yaml -n kube-system kubeadm-config` to get the kubeadm configuration directly. ([#92740](https://github.com/kubernetes/kubernetes/pull/92740), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: the "kubeadm alpha kubelet config enable-dynamic" command is deprecated. To continue using this feature, see "Dynamic Kubelet Configuration" on k8s.io. ([#92881](https://github.com/kubernetes/kubernetes/pull/92881), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: the `--experimental-kustomize` feature is deprecated in favor of `--experimental-patches`. The supported patch formats are the same as "kubectl patch". Patches are read from files in a directory and can be applied to kubeadm components during init/join/upgrade. For the time being, only patches for static Pods are supported. ([#92017](https://github.com/kubernetes/kubernetes/pull/92017), [@neolit123](https://github.com/neolit123))
+- Kubeadm: the deprecated "--use-api" flag for the "kubeadm alpha certs renew" command has been removed. ([#90143](https://github.com/kubernetes/kubernetes/pull/90143), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubernetes no longer supports building hyperkube images. ([#88676](https://github.com/kubernetes/kubernetes/pull/88676), [@dims](https://github.com/dims)) [SIG Cluster Lifecycle and Release]
+- The --export flag has been removed from the kubectl get command. ([#88649](https://github.com/kubernetes/kubernetes/pull/88649), [@oke-py](https://github.com/oke-py)) [SIG CLI and Testing]
+- The scheduler's alpha 'ResourceLimitsPriorityFunction' feature has been removed entirely due to lack of use. ([#91883](https://github.com/kubernetes/kubernetes/pull/91883), [@SataQiu](https://github.com/SataQiu)) [SIG Scheduling and Testing]
+- Storage.k8s.io/v1beta1 is deprecated in favor of storage.k8s.io/v1. ([#90671](https://github.com/kubernetes/kubernetes/pull/90671), [@deads2k](https://github.com/deads2k)) [SIG Storage]
+
+### API Change
+
+- A new alpha-level field, `SupportsFsGroup`, has been introduced for CSIDriver to specify whether it supports volume ownership and permission modifications. The `CSIVolumeSupportFSGroup` feature gate must be enabled to use this field. ([#92001](https://github.com/kubernetes/kubernetes/pull/92001), [@huffmanca](https://github.com/huffmanca)) [SIG API Machinery, CLI and Storage]
+- A Pod version-skew strategy for seccomp profiles has been added to synchronize the deprecated annotations with the new API server fields. See the corresponding [section of the KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190717-seccomp-ga.md#version-skew-strategy) for a detailed description. ([#91408](https://github.com/kubernetes/kubernetes/pull/91408), [@saschagrunert](https://github.com/saschagrunert)) [SIG Apps, Auth, CLI and Node]
+- Added the ability to disable the accelerator/GPU metrics collected by the kubelet. ([#91930](https://github.com/kubernetes/kubernetes/pull/91930), [@RenaudWasTaken](https://github.com/RenaudWasTaken)) [SIG Node]
+- Admission webhooks can now return warning messages that are surfaced to API clients, using the `.response.warnings` field of the admission review response. ([#92667](https://github.com/kubernetes/kubernetes/pull/92667), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Testing]
+- The conditions of the CertificateSigningRequest API have been updated (a sketch follows this list of changes):
+  - A `status` field has been added. It defaults to `True` and may only be set to `True` for the `Approved`, `Denied`, and `Failed` conditions.
+  - A `lastTransitionTime` field has been added.
+  - A `Failed` condition type has been added so signers can indicate permanent failure; this condition can be added via the `certificatesigningrequests/status` subresource.
+  - The `Approved` and `Denied` conditions are mutually exclusive.
+  - The `Approved`, `Denied`, and `Failed` conditions can no longer be removed from a CSR. ([#90191](https://github.com/kubernetes/kubernetes/pull/90191), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Apps, Auth, CLI and Node]
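+
+  As a sketch, an approved condition under the updated API might look like this (reason and message are illustrative):
+
+  ```yaml
+  status:
+    conditions:
+      - type: Approved
+        status: "True"          # only True is allowed for Approved/Denied/Failed
+        reason: AutoApproved
+        message: approved by the example controller
+        lastTransitionTime: "2020-08-26T00:00:00Z"
+  ```
+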
+- Cluster admins can now turn off the /logs endpoint in the kubelet by setting enableSystemLogHandler to false in the kubelet configuration file, as in the sketch below. enableSystemLogHandler can only be set to true if enableDebuggingHandlers is also set to true. ([#87273](https://github.com/kubernetes/kubernetes/pull/87273), [@SaranBalaji90](https://github.com/SaranBalaji90)) [SIG Node]
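+
+  A minimal sketch of the corresponding kubelet configuration file:
+
+  ```yaml
+  apiVersion: kubelet.config.k8s.io/v1beta1
+  kind: KubeletConfiguration
+  enableDebuggingHandlers: true
+  enableSystemLogHandler: false   # turns the kubelet's /logs endpoint off
+  ```
+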
+- Custom Endpoints are now mirrored to EndpointSlices by a new EndpointSliceMirroring controller. ([#91637](https://github.com/kubernetes/kubernetes/pull/91637), [@robscott](https://github.com/robscott)) [SIG API Machinery, Apps, Auth, Cloud Provider, Instrumentation, Network and Testing]
+- CustomResourceDefinitions now support marking a version as deprecated by setting `spec.versions[*].deprecated` to `true`, and optionally overriding the default deprecation warning with the `spec.versions[*].deprecationWarning` field, as in the sketch below. ([#92329](https://github.com/kubernetes/kubernetes/pull/92329), [@liggitt](https://github.com/liggitt)) [SIG API Machinery]
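+
+  A sketch of a CRD version marked as deprecated (the group, names, and warning text are illustrative):
+
+  ```yaml
+  apiVersion: apiextensions.k8s.io/v1
+  kind: CustomResourceDefinition
+  metadata:
+    name: widgets.example.com
+  spec:
+    group: example.com
+    scope: Namespaced
+    names:
+      plural: widgets
+      singular: widget
+      kind: Widget
+    versions:
+      - name: v1alpha1
+        served: true
+        storage: true
+        deprecated: true                 # clients receive a warning when using this version
+        deprecationWarning: "example.com/v1alpha1 Widget is deprecated; migrate to a newer version"
+        schema:
+          openAPIV3Schema:
+            type: object
+            x-kubernetes-preserve-unknown-fields: true
+  ```
+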
+- Fixed a bug in the EnvVarSource API documentation. ([#91194](https://github.com/kubernetes/kubernetes/pull/91194), [@wawa0210](https://github.com/wawa0210)) [SIG Apps]
+- Fixed a bug where a reflector could not recover from "Too large resource version" errors. ([#92537](https://github.com/kubernetes/kubernetes/pull/92537), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery]
+- Fixed: log timestamps now keep a fixed width by including trailing zeros. ([#91207](https://github.com/kubernetes/kubernetes/pull/91207), [@iamchuckss](https://github.com/iamchuckss)) [SIG Apps and Node]
+- Generic ephemeral volumes, a new alpha feature behind the `GenericEphemeralVolume` feature gate, provide a more flexible alternative to `EmptyDir` volumes: as with `EmptyDir`, a volume is created and deleted for each Pod automatically by Kubernetes. But because the normal provisioning process is used (`PersistentVolumeClaim`), storage can be provided by third-party storage vendors, and all of the usual volume features work. The volume does not need to be empty; for example, restoring from a snapshot is supported. ([#92784](https://github.com/kubernetes/kubernetes/pull/92784), [@pohly](https://github.com/pohly)) [SIG API Machinery, Apps, Auth, CLI, Instrumentation, Node, Scheduling, Storage and Testing]
+- Kubernetes now requires Go 1.14.4 or newer to build. ([#92438](https://github.com/kubernetes/kubernetes/pull/92438), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Auth, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation, Network, Node, Release, Storage and Testing]
+- managedFields are now hidden from kubectl edit. ([#91946](https://github.com/kubernetes/kubernetes/pull/91946), [@soltysh](https://github.com/soltysh)) [SIG CLI]
+- K8s.io/apimachinery - scheme.Convert() now uses only explicitly registered conversions; the default reflection-based conversion is no longer available. The `+k8s:conversion-gen` tag can be used with the `k8s.io/code-generator` component to generate conversions. ([#90018](https://github.com/kubernetes/kubernetes/pull/90018), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery, Apps and Testing]
+- Kube-proxy: added a `--bind-address-hard-fail` flag to treat port binding failures as fatal errors. ([#89350](https://github.com/kubernetes/kubernetes/pull/89350), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle and Network]
+- Kubebuilder validation tags are now set on metav1.Condition for CRD generation. ([#92660](https://github.com/kubernetes/kubernetes/pull/92660), [@damemi](https://github.com/damemi)) [SIG API Machinery]
+- The kubelet's --runonce option is now also available as `runOnce` in the kubelet config file. ([#89128](https://github.com/kubernetes/kubernetes/pull/89128), [@vincent178](https://github.com/vincent178)) [SIG Node]
+- Kubelet: added a '--logging-format' flag to support structured logging. ([#91532](https://github.com/kubernetes/kubernetes/pull/91532), [@afrouzMashaykhi](https://github.com/afrouzMashaykhi)) [SIG API Machinery, Cluster Lifecycle, Instrumentation and Node]
+- Kubernetes is now built with golang 1.15.0-rc.1.
+  - The deprecated, legacy behavior of treating the CommonName field of X.509 serving certificates as a host name when no Subject Alternative Names are present is now disabled by default. It can be temporarily re-enabled by adding the value x509ignoreCN=0 to the GODEBUG environment variable. ([#93264](https://github.com/kubernetes/kubernetes/pull/93264), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery, Auth, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation, Network, Node, Release, Scalability, Storage and Testing]
+- Promoted the immutable Secrets/ConfigMaps feature to beta and enabled it by default.
+  This allows marking the contents of a Secret or ConfigMap object as immutable by setting its `Immutable` field. ([#89594](https://github.com/kubernetes/kubernetes/pull/89594), [@wojtek-t](https://github.com/wojtek-t)) [SIG Apps and Testing]
+- Removed `BindTimeoutSeconds` from the scheduler configuration `KubeSchedulerConfiguration`. ([#91580](https://github.com/kubernetes/kubernetes/pull/91580), [@cofyc](https://github.com/cofyc)) [SIG Scheduling and Testing]
+- Removed kubescheduler.config.k8s.io/v1alpha1. ([#89298](https://github.com/kubernetes/kubernetes/pull/89298), [@gavinfish](https://github.com/gavinfish)) [SIG Scheduling]
+- Reserve plugins that fail during scheduling now trigger the Unreserve extension point. ([#92391](https://github.com/kubernetes/kubernetes/pull/92391), [@adtac](https://github.com/adtac)) [SIG Scheduling and Testing]
+- Resolved a regression in the handling of `metadata.managedFields` on update/patch requests submitted by older API clients. ([#91748](https://github.com/kubernetes/kubernetes/pull/91748), [@apelisse](https://github.com/apelisse))
+- Scheduler: optionally check the available storage capacity before scheduling a Pod that has unbound volumes. (Alpha feature behind the new `CSIStorageCapacity` feature gate; works only with CSI drivers and depends on feature support in the CSI driver deployment.) ([#92387](https://github.com/kubernetes/kubernetes/pull/92387), [@pohly](https://github.com/pohly)) [SIG API Machinery, Apps, Auth, Scheduling, Storage and Testing]
+- Seccomp support has graduated to GA. A new `seccompProfile` field is added to the Pod and container securityContext objects. Support for the `seccomp.security.alpha.kubernetes.io/pod` and `container.seccomp.security.alpha.kubernetes.io/...` annotations is deprecated and will be removed in v1.22. ([#91381](https://github.com/kubernetes/kubernetes/pull/91381), [@pjbgf](https://github.com/pjbgf)) [SIG Apps, Auth, Node, Release, Scheduling and Testing]
+- The ServiceAppProtocol feature gate is now beta and enabled by default, adding the new AppProtocol field to Services and Endpoints. ([#90023](https://github.com/kubernetes/kubernetes/pull/90023), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- SetHostnameAsFQDN is a new field in PodSpec. When set to true,
+  the fully qualified domain name (FQDN) of the Pod is set as the hostname of its containers.
+  In Linux containers, this means setting the FQDN in the hostname field of the kernel
+  (the nodename field of the utsname struct). In Windows containers, this means setting the
+  hostname registry value under the registry key HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters to the FQDN.
+  If a Pod does not have an FQDN, this has no effect (a sketch follows). ([#91699](https://github.com/kubernetes/kubernetes/pull/91699), [@javidiaz](https://github.com/javidiaz)) [SIG Apps, Network, Node and Testing]
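+
+  A sketch of a Pod using the new field (assumes a headless Service named `demo-svc` exists so the Pod has an FQDN, and that the `SetHostnameAsFQDN` feature gate is enabled):
+
+  ```yaml
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: fqdn-demo
+  spec:
+    hostname: demo
+    subdomain: demo-svc
+    setHostnameAsFQDN: true     # container hostname becomes the Pod's FQDN
+    containers:
+      - name: app
+        image: busybox
+        command: ["sh", "-c", "hostname; sleep 3600"]
+  ```
+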
+- The CertificateSigningRequest API has been promoted to certificates.k8s.io/v1 with the following changes:
+  - `spec.signerName` is now required, and creating `kubernetes.io/legacy-unknown` requests via the `certificates.k8s.io/v1` API is not allowed.
+  - `spec.usages` is now required, may not contain duplicate values, and must only contain known usages.
+  - `status.conditions` may not contain duplicate types.
+  - `status.conditions[*].status` is now required.
+  - `status.certificate` must be PEM-encoded and contain only CERTIFICATE blocks. ([#91685](https://github.com/kubernetes/kubernetes/pull/91685), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Architecture, Auth, CLI and Testing]
+- The HugePageStorageMediumSize feature gate is now on by default, allowing multiple sizes of huge page resources to be used at the container level. ([#90592](https://github.com/kubernetes/kubernetes/pull/90592), [@bart0sh](https://github.com/bart0sh)) [SIG Node]
+- The kubelet's --node-status-max-images option is now available via the nodeStatusMaxImages field in the kubelet config file. ([#91275](https://github.com/kubernetes/kubernetes/pull/91275), [@knabben](https://github.com/knabben)) [SIG Node]
+- The kubelet's --seccomp-profile-root option is now marked as deprecated. ([#91182](https://github.com/kubernetes/kubernetes/pull/91182), [@knabben](https://github.com/knabben)) [SIG Node]
+- The kubelet's `--bootstrap-checkpoint-path` option has been removed. ([#91577](https://github.com/kubernetes/kubernetes/pull/91577), [@knabben](https://github.com/knabben)) [SIG Apps and Node]
+- The kubelet's `--cloud-provider` and `--cloud-config` options are now marked as deprecated. ([#90408](https://github.com/kubernetes/kubernetes/pull/90408), [@knabben](https://github.com/knabben)) [SIG Cloud Provider and Node]
+- The kubelet's `--enable-server` and `--provider-id` options are now available via the `enableServer` and `providerID` fields in the kubelet config file, respectively. ([#90494](https://github.com/kubernetes/kubernetes/pull/90494), [@knabben](https://github.com/knabben)) [SIG Node]
+- The kubelet's `--kernel-memcg-notification` option is now available via the kernelMemcgNotification field in the kubelet config file. ([#91863](https://github.com/kubernetes/kubernetes/pull/91863), [@knabben](https://github.com/knabben)) [SIG Cloud Provider, Node and Testing]
+- The kubelet's `--really-crash-for-testing` and `--chaos-chance` options are now marked as deprecated. ([#90499](https://github.com/kubernetes/kubernetes/pull/90499), [@knabben](https://github.com/knabben)) [SIG Node]
+- The kubelet's `--volume-plugin-dir` option is now available via the `VolumePluginDir` field in the kubelet config file. ([#88480](https://github.com/kubernetes/kubernetes/pull/88480), [@savitharaghunathan](https://github.com/savitharaghunathan)) [SIG Node]
+- The `DefaultIngressClass` feature has graduated to GA. The `--feature-gate` parameter will be removed in 1.20. ([#91957](https://github.com/kubernetes/kubernetes/pull/91957), [@cmluciano](https://github.com/cmluciano)) [SIG API Machinery, Apps, Network and Testing]
+- The alpha `DynamicAuditing` feature gate and `auditregistration.k8s.io/v1alpha1` API have been removed and are no longer supported. ([#91502](https://github.com/kubernetes/kubernetes/pull/91502), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Auth and Testing]
+- Signers managed by kube-controller-manager can now have their own signing certificate and key. See the help for `--cluster-signing-[signer-name]-{cert,key}-file`. `--cluster-signing-{cert,key}-file` is still the default. ([#90822](https://github.com/kubernetes/kubernetes/pull/90822), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Apps and Auth]
+- The unused `series.state` field, deprecated since v1.14, has been removed from the `events.k8s.io/v1beta1` and `v1` Event types. ([#90449](https://github.com/kubernetes/kubernetes/pull/90449), [@wojtek-t](https://github.com/wojtek-t)) [SIG Apps]
+- The Unreserve extension point for scheduler plugins has been merged into the Reserve extension point. ([#92200](https://github.com/kubernetes/kubernetes/pull/92200), [@adtac](https://github.com/adtac)) [SIG Scheduling and Testing]
+- Updated Golang to v1.14.4. ([#88638](https://github.com/kubernetes/kubernetes/pull/88638), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery, Cloud Provider, Release and Testing]
+- Updated the API documentation of Service.Spec.IPFamily to note that its exact
+  semantics may change before dual-stack goes GA, and that users should look at a
+  Service's ClusterIP or Endpoints, not IPFamily, to determine whether an existing
+  Service is IPv4, IPv6, or dual-stack. ([#91527](https://github.com/kubernetes/kubernetes/pull/91527), [@danwinship](https://github.com/danwinship)) [SIG Apps and Network]
+- Users can now configure a resource prefix so that resource groups are ignored. ([#88842](https://github.com/kubernetes/kubernetes/pull/88842), [@angao](https://github.com/angao)) [SIG Node and Scheduling]
+- The `Ingress` and `IngressClass` resources have graduated to `networking.k8s.io/v1`. The Ingress and IngressClass types in the `extensions/v1beta1` and `networking.k8s.io/v1beta1` API versions are deprecated and will no longer be served in 1.22+. Persisted objects can be accessed via the `networking.k8s.io/v1` API. Notable changes in v1 Ingress objects (v1beta1 field names are unchanged):
+  - `spec.backend` -> `spec.defaultBackend`
+  - `serviceName` -> `service.name`
+  - `servicePort` -> `service.port.name` (for string values)
+  - `servicePort` -> `service.port.number` (for numeric values)
+  - `pathType` no longer has a default value in v1; "Exact", "Prefix", or "ImplementationSpecific" must be specified
+  Other Ingress API updates (a v1 sketch follows this item):
+  - backends can now be resource or service backends
+  - `path` is no longer required to be a valid regular expression ([#89778](https://github.com/kubernetes/kubernetes/pull/89778), [@cmluciano](https://github.com/cmluciano)) [SIG API Machinery, Apps, CLI, Network and Testing]
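+
+  A sketch of a v1 Ingress using the renamed fields (the host-less rule and service names are illustrative):
+
+  ```yaml
+  apiVersion: networking.k8s.io/v1
+  kind: Ingress
+  metadata:
+    name: example
+  spec:
+    defaultBackend:              # spec.backend in v1beta1
+      service:
+        name: fallback
+        port:
+          number: 80
+    rules:
+      - http:
+          paths:
+            - path: /app
+              pathType: Prefix   # no default in v1; must be specified
+              backend:
+                service:
+                  name: app
+                  port:
+                    number: 8080
+  ```
+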
+- The `NodeResourcesLeastAllocated` and `NodeResourcesMostAllocated` plugins now support customized weights for CPU and memory. ([#90544](https://github.com/kubernetes/kubernetes/pull/90544), [@chendave](https://github.com/chendave)) [SIG Scheduling]
+- A `PostFilter` type has been added to the v1beta1 scheduler configuration API. ([#91547](https://github.com/kubernetes/kubernetes/pull/91547), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+- `RequestedToCapacityRatioArgs` encoding is now strict. ([#91603](https://github.com/kubernetes/kubernetes/pull/91603), [@pancernik](https://github.com/pancernik)) [SIG Scheduling]
+- Scheduler `Extender` encoding in `v1beta1` is case-sensitive (it was case-insensitive in `v1alpha1`/`v1alpha2`), the `httpTimeout` field uses duration encoding (for example, one second is specified as `"1s"`), and the `enableHttps` field from `v1alpha1`/`v1alpha2` has been renamed to `enableHTTPS`. ([#91625](https://github.com/kubernetes/kubernetes/pull/91625), [@pancernik](https://github.com/pancernik)) [SIG Scheduling]
+
+### Feature
+
+- A defaultpreemption plugin is now registered and enabled in the scheduler by default, replacing the previously hard-coded Pod preemption logic. ([#92049](https://github.com/kubernetes/kubernetes/pull/92049), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling and Testing]
+- A new extension point, `PostFilter`, has been introduced to the scheduler framework; it runs after the Filter phase to resolve scheduling filter failures. A typical implementation is running preemption logic. ([#91314](https://github.com/kubernetes/kubernetes/pull/91314), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling and Testing]
+- ACTION REQUIRED: In CoreDNS v1.7.0, [metrics have been renamed](https://github.com/coredns/coredns/blob/master/notes/coredns-1.7.0.md#metric-changes), which is backward incompatible with existing reporting formulas that use the old metric names. Adjust your formulas to the new names before upgrading.
+
+  Kubeadm now includes CoreDNS version v1.7.0. Some of the major changes include:
+  - Fixed a bug that could cause CoreDNS to stop updating service records.
+  - Fixed a bug in the forward plugin where only the first upstream server was always selected regardless of the policy set.
+  - Removed the already deprecated `resyncperiod` and `upstream` options from the kubernetes plugin.
+  - Includes Prometheus metric name changes (to bring them in line with the standard Prometheus metrics naming conventions), which are backward incompatible with existing reporting formulas that use the old metric names.
+  - The federation plugin (which allows for v1 Kubernetes federation) has been removed.
+  More details are available in https://coredns.io/2020/06/15/coredns-1.7.0-release/ ([#92651](https://github.com/kubernetes/kubernetes/pull/92651), [@rajansandeep](https://github.com/rajansandeep)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- API requests to deprecated API versions now receive a warning header in the API response, and a metric indicating use of deprecated APIs is surfaced:
+  - `kubectl` prints the warnings to standard error and accepts a `--warnings-as-errors` option to treat warnings as fatal errors.
+  - `k8s.io/client-go` outputs warnings to standard error by default; override this per-client with `config.WarningHandler` or per-process with `rest.SetDefaultWarningHandler()`.
+  - `kube-apiserver` sets an `apiserver_requested_deprecated_apis` gauge metric to `1`, labeled with `group`, `version`, `resource`, `subresource`, and `removed_release` for requested deprecated APIs. ([#73032](https://github.com/kubernetes/kubernetes/pull/73032), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, CLI, Instrumentation and Testing]
+- Added a --logging-format flag to component-base. The default value is "text", which uses klog unchanged. ([#89683](https://github.com/kubernetes/kubernetes/pull/89683), [@yuzhiquan](https://github.com/yuzhiquan)) [SIG Instrumentation]
+- Added a --port flag to the kubectl create deployment command. ([#91113](https://github.com/kubernetes/kubernetes/pull/91113), [@soltysh](https://github.com/soltysh)) [SIG CLI and Testing]
+- Added a .import-restrictions file to cmd/cloud-controller-manager. ([#90630](https://github.com/kubernetes/kubernetes/pull/90630), [@nilo19](https://github.com/nilo19)) [SIG API Machinery and Cloud Provider]
+- Added annotations to the CRI-API ImageSpec object. ([#90061](https://github.com/kubernetes/kubernetes/pull/90061), [@marosset](https://github.com/marosset)) [SIG Node and Windows]
+- Added an attempts label to the scheduler's PodSchedulingDuration metric. ([#92650](https://github.com/kubernetes/kubernetes/pull/92650), [@ahg-g](https://github.com/ahg-g)) [SIG Instrumentation and Scheduling]
+- Added client-side and server-side dry-run support to the kubectl scale command. ([#89666](https://github.com/kubernetes/kubernetes/pull/89666), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- Added a selector to kubectl diff. ([#90857](https://github.com/kubernetes/kubernetes/pull/90857), [@sethpollack](https://github.com/sethpollack)) [SIG CLI]
+- Added support for cgroups v2 node validation. ([#89901](https://github.com/kubernetes/kubernetes/pull/89901), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle and Node]
+- Added support for hugepages pre-allocated in different sizes at the node level. ([#89252](https://github.com/kubernetes/kubernetes/pull/89252), [@odinuge](https://github.com/odinuge)) [SIG Apps and Node]
+- Added tag support for the Azure File driver. ([#92825](https://github.com/kubernetes/kubernetes/pull/92825), [@ZeroMagic](https://github.com/ZeroMagic)) [SIG Cloud Provider and Storage]
+- Added tag support for the Azure Disk driver. ([#92356](https://github.com/kubernetes/kubernetes/pull/92356), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Added a --privileged flag to the kubectl run command. ([#90569](https://github.com/kubernetes/kubernetes/pull/90569), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- Added a new `GetPreferredAllocation()` call to the `v1beta1` device plugin API. ([#92665](https://github.com/kubernetes/kubernetes/pull/92665), [@klueska](https://github.com/klueska)) [SIG Node and Testing]
+- Added Windows support for configuring session affinity for Kubernetes services.
+  Requires: [Windows Server vNext Insider Preview Build 19551](https://blogs.windows.com/windowsexperience/2020/01/28/announcing-windows-server-vnext-insider-preview-build-19551/) (or higher) ([#91701](https://github.com/kubernetes/kubernetes/pull/91701), [@elweb9858](https://github.com/elweb9858)) [SIG Network and Windows]
+- Added kube-apiserver metrics: apiserver_current_inflight_request_measures and, when API Priority and Fairness is enabled, windowed_request_stats. ([#91177](https://github.com/kubernetes/kubernetes/pull/91177), [@MikeSpreitzer](https://github.com/MikeSpreitzer)) [SIG API Machinery, Instrumentation and Testing]
+- Added the service.beta.kubernetes.io/aws-load-balancer-target-node-labels annotation to target nodes in AWS load balancer services. ([#90943](https://github.com/kubernetes/kubernetes/pull/90943), [@foobarfran](https://github.com/foobarfran)) [SIG Cloud Provider]
+- Added a set of debugging endpoints under the "/debug/flowcontrol/*" prefix to dump the internal states of the flow-control system at different granularities. ([#90967](https://github.com/kubernetes/kubernetes/pull/90967), [@yue9944882](https://github.com/yue9944882)) [SIG API Machinery]
+- Added a profile label to kube-scheduler's framework_extension_point_duration_seconds metric. ([#92268](https://github.com/kubernetes/kubernetes/pull/92268), [@alculquicondor](https://github.com/alculquicondor)) [SIG Instrumentation and Scheduling]
+- Added a profile label to kube-scheduler's schedule_attempts_total metric.
+  - Added result and profile labels to e2e_scheduling_duration_seconds; unschedulable and error attempts are now recorded. ([#92202](https://github.com/kubernetes/kubernetes/pull/92202), [@alculquicondor](https://github.com/alculquicondor)) [SIG Instrumentation and Scheduling]
+- Audit events for API requests to deprecated API versions now include a `"k8s.io/deprecated": "true"` audit annotation. If a target removal release is identified, the audit event also includes a `"k8s.io/removal-release": "<major>.<minor>"` audit annotation. ([#92842](https://github.com/kubernetes/kubernetes/pull/92842), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Instrumentation]
+- Bumped the dashboard to v2.0.1. ([#91526](https://github.com/kubernetes/kubernetes/pull/91526), [@maciaszczykm](https://github.com/maciaszczykm)) [SIG Cloud Provider]
+- The cloud node-controller now uses InstancesV2. ([#91319](https://github.com/kubernetes/kubernetes/pull/91319), [@gongguan](https://github.com/gongguan)) [SIG Apps, Cloud Provider, Scalability and Storage]
+- Dependencies: bumped Golang to 1.13.9.
+  - Build: removed the kube-cross image building. ([#89275](https://github.com/kubernetes/kubernetes/pull/89275), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- Detailed scheduler score results are now displayed at verbosity level 10. ([#89384](https://github.com/kubernetes/kubernetes/pull/89384), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+- E2e.test can now print the list of conformance tests that need to pass for a cluster to be considered conformant. ([#88924](https://github.com/kubernetes/kubernetes/pull/88924), [@dims](https://github.com/dims)) [SIG Architecture and Testing]
+- Enable the DefaultPodTopologySpread feature gate to perform default spreading with the PodTopologySpread plugin; this disables the legacy DefaultPodTopologySpread plugin. ([#91793](https://github.com/kubernetes/kubernetes/pull/91793), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- The EndpointSlice controller now waits longer before retrying failed syncs. ([#89438](https://github.com/kubernetes/kubernetes/pull/89438), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- Extended the AWS azToRegion method to support AWS Local Zones. ([#90874](https://github.com/kubernetes/kubernetes/pull/90874), [@Jeffwan](https://github.com/Jeffwan)) [SIG Cloud Provider]
+- Feature: added Azure shared disk support. ([#89511](https://github.com/kubernetes/kubernetes/pull/89511), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Feature: changed the Azure disk api-version. ([#89250](https://github.com/kubernetes/kubernetes/pull/89250), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Feature: support [Azure shared disks](https://docs.microsoft.com/ko-kr/azure/virtual-machines/disks-shared-enable?tabs=azure-cli); a new field (`maxShares`) has been added to the Azure disk storage class:
+
+  ```yaml
+  kind: StorageClass
+  apiVersion: storage.k8s.io/v1
+  metadata:
+    name: shared-disk
+  provisioner: kubernetes.io/azure-disk
+  parameters:
+    skuname: Premium_LRS # only premium SSDs are available for now
+    cachingMode: None # read-only host caching is not available for premium SSDs with maxShares>1
+    maxShares: "2"
+  ```
+
+  ([#89328](https://github.com/kubernetes/kubernetes/pull/89328), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Improved IPVS proxy performance by running `EnsureDummyInterface` only when the virtual server address is not already bound. ([#92609](https://github.com/kubernetes/kubernetes/pull/92609), [@andrewsykim](https://github.com/andrewsykim)) [SIG Network]
+- Kube-proxy now supports EndpointSlices on Windows via the EndpointSliceProxying feature gate. ([#90909](https://github.com/kubernetes/kubernetes/pull/90909), [@kumarvin123](https://github.com/kumarvin123)) [SIG Network and Windows]
+- Kube-proxy now supports IPv6DualStack on Windows via the IPv6DualStack feature gate. ([#90853](https://github.com/kubernetes/kubernetes/pull/90853), [@kumarvin123](https://github.com/kumarvin123)) [SIG Network, Node and Windows]
+- Kube-addon-manager has been bumped to v9.1.1, allowing the default list of allowed resources to be overridden (https://github.com/kubernetes/kubernetes/pull/91018). ([#91240](https://github.com/kubernetes/kubernetes/pull/91240), [@tosi3k](https://github.com/tosi3k)) [SIG Cloud Provider, Scalability and Testing]
+- Kube-apiserver backed by etcd3 now exposes a metric showing the database file size. ([#89151](https://github.com/kubernetes/kubernetes/pull/89151), [@jingyih](https://github.com/jingyih)) [SIG API Machinery]
+- Kube-apiserver, kube-scheduler, and kube-controller-manager now use the SO_REUSEPORT socket option when listening on the addresses defined by the --bind-address and --secure-port flags on Unix systems (Windows is not supported). This allows multiple instances of these processes to run on a single host with the same configuration, enabling graceful changes/restarts with zero downtime. ([#88893](https://github.com/kubernetes/kubernetes/pull/88893), [@invidian](https://github.com/invidian)) [SIG API Machinery, Scheduling and Testing]
+- Kube-apiserver: the NodeRestriction admission plugin now restricts the node labels a kubelet may set when creating a new Node to the ones kubelets are permitted to set via the `--node-labels` parameter in 1.16+. ([#90307](https://github.com/kubernetes/kubernetes/pull/90307), [@liggitt](https://github.com/liggitt)) [SIG Auth and Node]
+- Kube-controller-manager: added a '--logging-format' flag to support structured logging. ([#91521](https://github.com/kubernetes/kubernetes/pull/91521), [@SataQiu](https://github.com/SataQiu)) [SIG API Machinery and Instrumentation]
+- Kube-controller-manager: the `--experimental-cluster-signing-duration` flag is marked as deprecated for removal in v1.22 and is replaced by `--cluster-signing-duration`. ([#91154](https://github.com/kubernetes/kubernetes/pull/91154), [@liggitt](https://github.com/liggitt)) [SIG Auth and Cloud Provider]
+- Kube-proxy now uses EndpointSlices instead of Endpoints by default on Linux. The feature can be enabled on Windows via the new alpha `WindowsEndpointSliceProxying` feature gate. ([#92736](https://github.com/kubernetes/kubernetes/pull/92736), [@robscott](https://github.com/robscott)) [SIG Network]
+- Kube-scheduler: added a '--logging-format' flag to support structured logging. ([#91522](https://github.com/kubernetes/kubernetes/pull/91522), [@SataQiu](https://github.com/SataQiu)) [SIG API Machinery, Cluster Lifecycle, Instrumentation and Scheduling]
+- Kubeadm now distinguishes between generated and user-supplied component configs, regenerating the former when a configuration upgrade is required. ([#86070](https://github.com/kubernetes/kubernetes/pull/86070), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+- Kubeadm: configurations of manually upgraded components can now be supplied from a YAML file via the --config option during upgrade plan and apply. The existing --config behavior, where the kubeadm config and component configs overwrite everything stored in the cluster, is preserved as well. Which behavior --config uses is now determined by whether a kubeadm config API object (API group "kubeadm.kubernetes.io") is present in the file. ([#91980](https://github.com/kubernetes/kubernetes/pull/91980), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+- Kubeadm: added startup probes for static Pods to protect slow-starting containers. ([#91179](https://github.com/kubernetes/kubernetes/pull/91179), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: the "--csr-only" and "--csr-dir" flags of the "kubeadm init phase certs" subcommands are deprecated; use "kubeadm alpha certs generate-csr" instead. This new command lets you generate new private keys and certificate signing requests for all control-plane components, so the certificates can be signed by an external CA. ([#92183](https://github.com/kubernetes/kubernetes/pull/92183), [@wallrj](https://github.com/wallrj)) [SIG Cluster Lifecycle]
+- Kubeadm: during 'upgrade apply', if the kube-proxy ConfigMap is missing, assume that kube-proxy should not be upgraded. The same applies to a missing kube-dns/coredns ConfigMap for the DNS server addon. This is a temporary workaround until 'upgrade apply' supports phases; once phases are supported, the kube-proxy/dns upgrades should be skippable manually. ([#89593](https://github.com/kubernetes/kubernetes/pull/89593), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: control-plane static Pods have been switched to the "system-node-critical" priority class. ([#90063](https://github.com/kubernetes/kubernetes/pull/90063), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: upgrade plan now prints a table indicating the state of known component configs before the upgrade. ([#88124](https://github.com/kubernetes/kubernetes/pull/88124), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+- Kubectl commands can now use abbreviated resource names (no need to type the full resource name), e.g. `kubectl taint no`. ([#88723](https://github.com/kubernetes/kubernetes/pull/88723), [@wawa0210](https://github.com/wawa0210)) [SIG CLI]
+- Kubelet: the following metrics have been renamed:
+  kubelet_running_container_count --> kubelet_running_containers
+  kubelet_running_pod_count --> kubelet_running_pods ([#92407](https://github.com/kubernetes/kubernetes/pull/92407), [@RainbowMango](https://github.com/RainbowMango)) [SIG API Machinery, Cluster Lifecycle, Instrumentation and Node]
+- Kubelets configured to rotate client certificates now expose a `certificate_manager_server_ttl_seconds` gauge metric indicating the remaining seconds until certificate expiration. ([#91148](https://github.com/kubernetes/kubernetes/pull/91148), [@liggitt](https://github.com/liggitt)) [SIG Auth and Node]
+- A new score-calculation function has been introduced so PodTopologySpreading spreads Pods better. ([#90475](https://github.com/kubernetes/kubernetes/pull/90475), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Minor improvements to utility libraries; no action is required. ([#92440](https://github.com/kubernetes/kubernetes/pull/92440), [@luigibk](https://github.com/luigibk)) [SIG Network]
+- PodTolerationRestriction: the namespace whitelist is now surfaced in errors. ([#87582](https://github.com/kubernetes/kubernetes/pull/87582), [@mrueg](https://github.com/mrueg)) [SIG Scheduling]
+- Provider-specific notes: vsphere: vsphere.conf - a new option to disable credential secret management, for performance reasons. ([#90836](https://github.com/kubernetes/kubernetes/pull/90836), [@Danil-Grigorev](https://github.com/Danil-Grigorev)) [SIG Cloud Provider]
+- Renamed pod_preemption_metrics to preemption_metrics. ([#93256](https://github.com/kubernetes/kubernetes/pull/93256), [@ahg-g](https://github.com/ahg-g)) [SIG Instrumentation and Scheduling]
+- Rest.Config now supports flags that override proxy configuration that was previously only configurable through environment variables. ([#81443](https://github.com/kubernetes/kubernetes/pull/81443), [@mikedanese](https://github.com/mikedanese)) [SIG API Machinery and Node]
+- PodTopologySpreading scores now show less variation as maxSkew increases. ([#90820](https://github.com/kubernetes/kubernetes/pull/90820), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Server-side apply behavior has been regularized for cases where a field is removed from the applied configuration. Removed fields that have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as handing over the `replicas` field to an HPA without resetting it to the default, are documented in [Transferring Ownership](https://kubernetes.io/docs/reference/using-api/api-concepts/#transferring-ownership). ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
+- Service controller: the LB node pool is now synced only when relevant fields of a node change. ([#90769](https://github.com/kubernetes/kubernetes/pull/90769), [@andrewsykim](https://github.com/andrewsykim)) [SIG Apps and Network]
+- The CSIMigrationvSphere feature gate has moved to beta.
+  Users should enable the CSIMigration + CSIMigrationvSphere features and install the vSphere CSI driver (https://github.com/kubernetes-sigs/vsphere-csi-driver) to move workloads from the in-tree vSphere plugin "kubernetes.io/vsphere-volume" to the vSphere CSI driver.
+
+  Requires: vSphere vCenter/ESXi version 7.0u1, HW version: VM version 15 ([#92816](https://github.com/kubernetes/kubernetes/pull/92816), [@divyenpatel](https://github.com/divyenpatel)) [SIG Cloud Provider and Storage]
+- The `kubectl create deployment` command now supports a --replicas flag. ([#91562](https://github.com/kubernetes/kubernetes/pull/91562), [@zhouya0](https://github.com/zhouya0))
+- Supports a seamless upgrade from client-side apply to server-side apply, and the corresponding downgrade. ([#90187](https://github.com/kubernetes/kubernetes/pull/90187), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG API Machinery and Testing]
+- Supports asynchronous creation/updating of VMSS. ([#89248](https://github.com/kubernetes/kubernetes/pull/89248), [@nilo19](https://github.com/nilo19)) [SIG Cloud Provider]
+- Supports running on hosts using the cgroups v2 unified mode. ([#85218](https://github.com/kubernetes/kubernetes/pull/85218), [@giuseppe](https://github.com/giuseppe)) [SIG Node]
+- Switched the core master base images (kube-apiserver, kube-scheduler) from debian to distroless. ([#90674](https://github.com/kubernetes/kubernetes/pull/90674), [@dims](https://github.com/dims)) [SIG Cloud Provider, Release and Scalability]
+- Switched the etcd image (including migration scripts) from debian to distroless. ([#91171](https://github.com/kubernetes/kubernetes/pull/91171), [@dims](https://github.com/dims)) [SIG API Machinery and Cloud Provider]
+- The RotateKubeletClientCertificate feature gate has been promoted to GA; the kubelet --feature-gate RotateKubeletClientCertificate parameter will be removed in 1.20. ([#91780](https://github.com/kubernetes/kubernetes/pull/91780), [@liggitt](https://github.com/liggitt)) [SIG Auth and Node]
+- The SCTPSupport feature is now enabled by default. ([#88932](https://github.com/kubernetes/kubernetes/pull/88932), [@janosi](https://github.com/janosi)) [SIG Network]
+- The `certificatesigningrequests/approval` subresource now supports patch API requests. ([#91558](https://github.com/kubernetes/kubernetes/pull/91558), [@liggitt](https://github.com/liggitt)) [SIG Auth and Testing]
+- The metric labels of `kubernetes_build_info` have been changed from camel case to snake case:
+  - gitVersion --> git_version
+  - gitCommit --> git_commit
+  - gitTreeState --> git_tree_state
+  - buildDate --> build_date
+  - goVersion --> go_version
+
+  This change happens in `kube-apiserver`, `kube-scheduler`, `kube-proxy`, and `kube-controller-manager`. ([#91805](https://github.com/kubernetes/kubernetes/pull/91805), [@RainbowMango](https://github.com/RainbowMango)) [SIG API Machinery, Cluster Lifecycle and Instrumentation]
+- Trace output in the apiserver logs is more organized and comprehensive. Traces are nested, and the entire filter chain is instrumented for all non-long-running request endpoints (e.g. authentication checks are included). ([#88936](https://github.com/kubernetes/kubernetes/pull/88936), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Scheduling]
+- In addition to being sent right before a timeout, watch bookmarks are now also sent periodically when requested. ([#90560](https://github.com/kubernetes/kubernetes/pull/90560), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery]
+- Updated cri-tools to v1.18.0. ([#89720](https://github.com/kubernetes/kubernetes/pull/89720), [@saschagrunert](https://github.com/saschagrunert)) [SIG Cloud Provider, Cluster Lifecycle, Release and Scalability]
+- Updated the etcd client side to v3.4.4. ([#89169](https://github.com/kubernetes/kubernetes/pull/89169), [@jingyih](https://github.com/jingyih)) [SIG API Machinery and Cloud Provider]
+- Updated the etcd client side to v3.4.7. ([#89822](https://github.com/kubernetes/kubernetes/pull/89822), [@jingyih](https://github.com/jingyih)) [SIG API Machinery and Cloud Provider]
+- Updated the etcd client side to v3.4.9. ([#92075](https://github.com/kubernetes/kubernetes/pull/92075), [@jingyih](https://github.com/jingyih)) [SIG API Machinery, Cloud Provider and Instrumentation]
+- Updated azure-sdk to v40.2.0. ([#89105](https://github.com/kubernetes/kubernetes/pull/89105), [@andyzhangx](https://github.com/andyzhangx)) [SIG CLI, Cloud Provider, Cluster Lifecycle, Instrumentation, Storage and Testing]
+- `kubectl port-forward` now warns the user that it does not support UDP. ([#91616](https://github.com/kubernetes/kubernetes/pull/91616), [@knight42](https://github.com/knight42)) [SIG CLI]
+- The weight of the spreading scheduling score in PodTopologySpread has been doubled. ([#91258](https://github.com/kubernetes/kubernetes/pull/91258), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- `EventRecorder()` is now exposed in the `FrameworkHandle` interface, so scheduler plugin developers can choose to record cluster-level events. ([#92010](https://github.com/kubernetes/kubernetes/pull/92010), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+- The `kubectl alpha debug` command now supports debugging Pods by creating a copy of the target Pod. ([#90094](https://github.com/kubernetes/kubernetes/pull/90094), [@aylei](https://github.com/aylei)) [SIG CLI]
+- The `kubectl alpha debug` command now supports debugging nodes by creating a debugging container running in the node's host namespaces. ([#92310](https://github.com/kubernetes/kubernetes/pull/92310), [@verb](https://github.com/verb)) [SIG CLI]
+- `local-up-cluster.sh` now installs the CSI snapshotter by default; it can be disabled with `ENABLE_CSI_SNAPSHOTTER=false`. ([#91504](https://github.com/kubernetes/kubernetes/pull/91504), [@pohly](https://github.com/pohly))
+- The `maxThreshold` of the `ImageLocality` plugin is now scaled by the number of images in the Pod, which can help differentiate node priorities for Pods with multiple images. ([#91138](https://github.com/kubernetes/kubernetes/pull/91138), [@chendave](https://github.com/chendave)) [SIG Scheduling]
+
+### Documentation
+
+- Updated the instructions for deploying the sample app. ([#82785](https://github.com/kubernetes/kubernetes/pull/82785), [@ashish-billore](https://github.com/ashish-billore)) [SIG API Machinery]
+
+### Failing Test
+
+- The default of kube-proxy's iptables min-sync-period is now 1 second (it was previously 0). ([#92836](https://github.com/kubernetes/kubernetes/pull/92836), [@aojea](https://github.com/aojea)) [SIG Network]
+
+### Bug or Regression
+
+- PVs from in-tree sources now have ordered requirement values in their node affinity when converted to a CSIPersistentVolumeSource. ([#88987](https://github.com/kubernetes/kubernetes/pull/88987), [@jiahuif](https://github.com/jiahuif)) [SIG Storage]
+- Fixed an apiserver panic caused by the `informer-sync` health checker. ([#93600](https://github.com/kubernetes/kubernetes/pull/93600), [@ialidzhikov](https://github.com/ialidzhikov)) [SIG API Machinery]
+- Fixed an issue where the GCP cloud-controller-manager running outside of the cluster failed to initialize new nodes. ([#90057](https://github.com/kubernetes/kubernetes/pull/90057), [@ialidzhikov](https://github.com/ialidzhikov)) [SIG Apps and Cloud Provider]
+- Avoid GCE API calls when initializing the GCE CloudProvider for kubelets. ([#90218](https://github.com/kubernetes/kubernetes/pull/90218), [@wojtek-t](https://github.com/wojtek-t)) [SIG Cloud Provider and Scalability]
+- Avoid unnecessary GCE API calls when adding IP aliases or reflecting them in Node objects in the GCE cloud provider. ([#90242](https://github.com/kubernetes/kubernetes/pull/90242), [@wojtek-t](https://github.com/wojtek-t)) [SIG Apps, Cloud Provider and Network]
+- Avoid unnecessary scheduling churn when annotations are updated while Pods are being scheduled. ([#90373](https://github.com/kubernetes/kubernetes/pull/90373), [@fabiokung](https://github.com/fabiokung)) [SIG Scheduling]
+- The Azure auth module for kubectl now prompts for a login after the refresh token expires. ([#86481](https://github.com/kubernetes/kubernetes/pull/86481), [@tdihp](https://github.com/tdihp)) [SIG API Machinery and Auth]
+- Azure: fixed concurrency issues that occurred when creating load balancers. ([#89604](https://github.com/kubernetes/kubernetes/pull/89604), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- Azure: switched from a global VMSS VMs cache to a per-VMSS VM cache to prevent throttling in clusters with many attached VMSS. ([#93107](https://github.com/kubernetes/kubernetes/pull/93107), [@bpineau](https://github.com/bpineau)) [SIG Cloud Provider]
+- Azure: set the destination prefix and port for IPv6 inbound security rules. ([#91831](https://github.com/kubernetes/kubernetes/pull/91831), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- Base-images: updated to kube-cross:v1.13.9-5. ([#90963](https://github.com/kubernetes/kubernetes/pull/90963), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- Fixed a bug with AWS NLB services that occurred when the nodePort of an existing servicePort was changed manually. ([#89562](https://github.com/kubernetes/kubernetes/pull/89562), [@M00nF1sh](https://github.com/M00nF1sh)) [SIG Cloud Provider]
+- CSINode initialization no longer blocks kubelet startup when the APIServer is unreachable or the kubelet does not yet have the right credentials. ([#89589](https://github.com/kubernetes/kubernetes/pull/89589), [@jsafrane](https://github.com/jsafrane)) [SIG Storage]
+- CVE-2020-8557 (Medium): node-local denial of service via a container's /etc/hosts file. See https://github.com/kubernetes/kubernetes/issues/93032 for more details. ([#92916](https://github.com/kubernetes/kubernetes/pull/92916), [@joelsmith](https://github.com/joelsmith)) [SIG Node]
+- Client-go: fixed an issue where informers fell back to a full list request when a watch timed out, instead of re-establishing the watch. ([#89652](https://github.com/kubernetes/kubernetes/pull/89652), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Testing]
+- CloudNodeLifecycleController now checks for the existence of a node before checking its shutdown status when monitoring nodes. ([#90737](https://github.com/kubernetes/kubernetes/pull/90737), [@jiahuif](https://github.com/jiahuif)) [SIG Apps and Cloud Provider]
+- Containers that specify a `startupProbe` but no `readinessProbe` were previously considered "ready" before the `startupProbe` completed; they are now considered "not-ready". ([#92196](https://github.com/kubernetes/kubernetes/pull/92196), [@thockin](https://github.com/thockin)) [SIG Node]
+- Cordoned nodes are now deregistered from AWS target groups. ([#85920](https://github.com/kubernetes/kubernetes/pull/85920), [@hoelzro](https://github.com/hoelzro)) [SIG Cloud Provider]
+- Nodes labeled kubernetes.azure.com/managed=false are no longer added to a load balancer's backend pool. ([#93034](https://github.com/kubernetes/kubernetes/pull/93034), [@matthias50](https://github.com/matthias50)) [SIG Cloud Provider]
+- Volume expansion is no longer retried when the CSI driver returns a FailedPrecondition error. ([#92986](https://github.com/kubernetes/kubernetes/pull/92986), [@gnufied](https://github.com/gnufied)) [SIG Node and Storage]
+- Dockershim security: Pod sandboxes are now always run with `no-new-privileges` and the `runtime/default` seccomp profile.
+  Dockershim seccomp: custom profiles can now have smaller seccomp profiles when set at the Pod level. ([#90948](https://github.com/kubernetes/kubernetes/pull/90948), [@pjbgf](https://github.com/pjbgf)) [SIG Node]
+- Dual-stack: fixed a bug where a Service clusterIP did not respect the specified ipFamily. ([#89612](https://github.com/kubernetes/kubernetes/pull/89612), [@SataQiu](https://github.com/SataQiu)) [SIG Network]
+- The EndpointSliceMirroring controller now copies labels from Endpoints to EndpointSlices. ([#93442](https://github.com/kubernetes/kubernetes/pull/93442), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- Ensure Azure availability zones are always in lowercase. ([#89722](https://github.com/kubernetes/kubernetes/pull/89722), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Eviction requests for Pods with a non-zero DeletionTimestamp now always succeed. ([#91342](https://github.com/kubernetes/kubernetes/pull/91342), [@michaelgugino](https://github.com/michaelgugino)) [SIG Apps]
+- CRDs whose resource name is the same as a built-in object's can now be explained. ([#89505](https://github.com/kubernetes/kubernetes/pull/89505), [@knight42](https://github.com/knight42)) [SIG API Machinery, CLI and Testing]
+- Extended kube-apiserver's /readyz with a new "informer-sync" check that verifies that internal informers are synced. ([#92644](https://github.com/kubernetes/kubernetes/pull/92644), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery and Testing]
+- Extended the DSR loadbalancer feature in winkernel kube-proxy to HNS versions 9.3-9.max and 10.2+. ([#93080](https://github.com/kubernetes/kubernetes/pull/93080), [@elweb9858](https://github.com/elweb9858)) [SIG Network]
+- The first Pod with required affinity terms can now only be scheduled onto nodes with matching topology keys. ([#91168](https://github.com/kubernetes/kubernetes/pull/91168), [@ahg-g](https://github.com/ahg-g)) [SIG Scheduling]
+- Fixed the AWS load balancer VPC CIDR calculation for cases where disassociated CIDRs exist. ([#92227](https://github.com/kubernetes/kubernetes/pull/92227), [@M00nF1sh](https://github.com/M00nF1sh)) [SIG Cloud Provider]
+- Fixed InstanceMetadataByProviderID for unmanaged nodes. ([#92572](https://github.com/kubernetes/kubernetes/pull/92572), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fixed an issue where `VirtualMachineScaleSets.virtualMachines.GET` was not allowed when the customer had set the VMSS orchestrationMode. ([#91097](https://github.com/kubernetes/kubernetes/pull/91097), [@feiskyer](https://github.com/feiskyer))
+- Fixed a bug that disallowed the use of IPv6 addresses with leading zeros. ([#89341](https://github.com/kubernetes/kubernetes/pull/89341), [@aojea](https://github.com/aojea)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- Fixed a bug where ExternalTrafficPolicy was not applied to Services with ExternalIPs. ([#90537](https://github.com/kubernetes/kubernetes/pull/90537), [@freehan](https://github.com/freehan)) [SIG Network]
+- Fixed a condition that occurred when expiring nil VM entries in the VMSS cache. ([#92681](https://github.com/kubernetes/kubernetes/pull/92681), [@ArchangelSDY](https://github.com/ArchangelSDY)) [SIG Cloud Provider]
+- Fixed a racing issue that could cause the scheduler to perform unnecessary scheduling attempts. ([#90660](https://github.com/kubernetes/kubernetes/pull/90660), [@Huang-Wei](https://github.com/Huang-Wei))
+- Fixed an issue where containers were restarted when using a volume mount of a modified ConfigMap or Secret subpath. ([#89629](https://github.com/kubernetes/kubernetes/pull/89629), [@fatedier](https://github.com/fatedier)) [SIG Architecture, Storage and Testing]
+- Fixed a port-allocation logic bug where creating a NodePort with a statically allocated portNumber conflicted in multi-master HA clusters. ([#89937](https://github.com/kubernetes/kubernetes/pull/89937), [@aojea](https://github.com/aojea)) [SIG Network and Testing]
+- Fixed a bug where xfs_repair would halt xfs mounts. ([#89444](https://github.com/kubernetes/kubernetes/pull/89444), [@gnufied](https://github.com/gnufied)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Storage]
+- Fixed an issue where the namespace flag did not work for cluster dump info. ([#91890](https://github.com/kubernetes/kubernetes/pull/91890), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fixed a bug in SystemOOM detection for cases where a container is damaged by an OOM. ([#88871](https://github.com/kubernetes/kubernetes/pull/88871), [@dashpole](https://github.com/dashpole)) [SIG Node]
+- Fixed bugs in image filesystem detection, disk metrics for devicemapper, and OOM kill detection on 5.0+ Linux kernels. ([#92919](https://github.com/kubernetes/kubernetes/pull/92919), [@dashpole](https://github.com/dashpole)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Node]
+- Fixed the etcd version migration script in the etcd image. ([#91925](https://github.com/kubernetes/kubernetes/pull/91925), [@wenjiaswe](https://github.com/wenjiaswe)) [SIG API Machinery]
+- Fixed a flaw in the Azure File CSI translation. ([#90162](https://github.com/kubernetes/kubernetes/pull/90162), [@rfranzke](https://github.com/rfranzke)) [SIG Release and Storage]
+- Fixed an "instance not found" issue that occurred when an Azure node was recreated within a short time. ([#93316](https://github.com/kubernetes/kubernetes/pull/93316), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fixed issues that occurred when the size of supported huge pages changed. ([#80831](https://github.com/kubernetes/kubernetes/pull/80831), [@odinuge](https://github.com/odinuge)) [SIG Node and Testing]
+- 준비 상태(readiness)를 보고하기 전까지 APIServices가 HTTP 핸들러에 설치될 때까지 기다리도록 kube-apiserver 시작 작업이 수정되었다. ([#89147](https://github.com/kubernetes/kubernetes/pull/89147), [@sttts](https://github.com/sttts)) [SIG API Machinery]
+- kubectl create --dryrun 클라이언트가 네임스페이스를 무시하는 문제가 수정되었다. ([#90502](https://github.com/kubernetes/kubernetes/pull/90502), [@zhouya0](https://github.com/zhouya0))
+- Fixed being unable to use the --from-file flag with the `kubectl create secret docker-registry` command. ([#90960](https://github.com/kubernetes/kubernetes/pull/90960), [@zhouya0](https://github.com/zhouya0)) [SIG CLI and Testing]
+- Fixed a nil pointer error in the `kubectl describe CSINode` command. ([#89646](https://github.com/kubernetes/kubernetes/pull/89646), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fixed `kubectl describe node` for users who cannot access Lease information. ([#90469](https://github.com/kubernetes/kubernetes/pull/90469), [@uthark](https://github.com/uthark)) [SIG CLI]
+- Fixed the output format of `kubectl describe` for empty annotations. ([#91405](https://github.com/kubernetes/kubernetes/pull/91405), [@iyashu](https://github.com/iyashu)) [SIG CLI]
+- Fixed `kubectl diff` so that it no longer actually persists patches. ([#89795](https://github.com/kubernetes/kubernetes/pull/89795), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- Fixed the --dry-run=client flag of `kubectl run` ignoring the namespace. ([#90785](https://github.com/kubernetes/kubernetes/pull/90785), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Changed the `kubectl version` command to display version information without requiring a config file. ([#89913](https://github.com/kubernetes/kubernetes/pull/89913), [@zhouya0](https://github.com/zhouya0)) [SIG API Machinery and CLI]
+- Fixed the missing `-c` shorthand for the `--container` flag of the `kubectl alpha debug` command. ([#89674](https://github.com/kubernetes/kubernetes/pull/89674), [@superbrothers](https://github.com/superbrothers)) [SIG CLI]
+- Fixed an issue where objects' average values were ignored in the output. ([#89142](https://github.com/kubernetes/kubernetes/pull/89142), [@zhouya0](https://github.com/zhouya0)) [SIG API Machinery]
+- Fixed public IPs not showing up after a public IP is assigned to Azure VMs. ([#90886](https://github.com/kubernetes/kubernetes/pull/90886), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fixed a scheduler crash when a node is removed before its pods. ([#89908](https://github.com/kubernetes/kubernetes/pull/89908), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Fixed a bug in determining the VMSS name and resource group name when updating Azure VMSS for load balancer backendPools. ([#89337](https://github.com/kubernetes/kubernetes/pull/89337), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fixed a throttling issue when the prefix of an Azure VM computer name differs from the VMSS name. ([#92793](https://github.com/kubernetes/kubernetes/pull/92793), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fix: consider Azure deallocated nodes as terminated ([#92257](https://github.com/kubernetes/kubernetes/pull/92257), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: GetLabelsForVolume panic for azure disk PVs ([#92166](https://github.com/kubernetes/kubernetes/pull/92166), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: add Azure File migration support on annotations ([#91093](https://github.com/kubernetes/kubernetes/pull/91093), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Node]
+- Fix: Azure disk dangling attach issue that could cause API throttling ([#90749](https://github.com/kubernetes/kubernetes/pull/90749), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: determine the correct IP based on the IP family ([#93043](https://github.com/kubernetes/kubernetes/pull/93043), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- Fix: do not use the docker config cache if it is empty ([#92330](https://github.com/kubernetes/kubernetes/pull/92330), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: topology issue in Azure disk storage class migration ([#91196](https://github.com/kubernetes/kubernetes/pull/91196), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: disk attach error caused by entries missing from the max count table ([#89768](https://github.com/kubernetes/kubernetes/pull/89768), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: incorrect Azure disk max count ([#92331](https://github.com/kubernetes/kubernetes/pull/92331), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: initial delay when mounting Azure disks and files ([#93052](https://github.com/kubernetes/kubernetes/pull/93052), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: support removal of nodes backed by deleted non-VMSS instances on Azure ([#91184](https://github.com/kubernetes/kubernetes/pull/91184), [@bpineau](https://github.com/bpineau)) [SIG Cloud Provider]
+- Fix: use force detach for Azure disks ([#91948](https://github.com/kubernetes/kubernetes/pull/91948), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fixed a 1.18 regression in wait.Forever that skipped the backoff period on the first iteration. ([#90476](https://github.com/kubernetes/kubernetes/pull/90476), [@zhan849](https://github.com/zhan849)) [SIG API Machinery]
+- Fixed a bug where newObj was mistakenly used as oldObj in EndpointSlice updates. ([#92339](https://github.com/kubernetes/kubernetes/pull/92339), [@fatkun](https://github.com/fatkun)) [SIG Apps and Network]
+- Fixed a bug where running a kubectl command with a jsonpath output expression containing a nested range ignored the expressions following the nested range. ([#88464](https://github.com/kubernetes/kubernetes/pull/88464), [@brianpursley](https://github.com/brianpursley)) [SIG API Machinery]
+- Fixed a bug where reusable CPU and device allocations were not honored when the TopologyManager is enabled. ([#93189](https://github.com/kubernetes/kubernetes/pull/93189), [@klueska](https://github.com/klueska)) [SIG Node]
+- Fixed a performance issue when applying JSON patches to deeply nested objects. ([#92069](https://github.com/kubernetes/kubernetes/pull/92069), [@tapih](https://github.com/tapih)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- Fixed a regression that prevented garbage collection of RBAC role and binding objects. ([#90534](https://github.com/kubernetes/kubernetes/pull/90534), [@apelisse](https://github.com/apelisse)) [SIG Auth]
+- Fixed a regression when running kubectl commands with the --local or --dry-run flags without a kubeconfig file. ([#90243](https://github.com/kubernetes/kubernetes/pull/90243), [@soltysh](https://github.com/soltysh)) [SIG API Machinery, CLI and Testing]
+- Fixed ambiguous behavior when a bearer token (kubectl --token=..) and an exec credential plugin were configured in the same context. The bearer token now takes precedence. ([#91745](https://github.com/kubernetes/kubernetes/pull/91745), [@anderseknert](https://github.com/anderseknert)) [SIG API Machinery, Auth and Testing]
+- Fixed credential mounting for service accounts whose names contain the `.` character. ([#89696](https://github.com/kubernetes/kubernetes/pull/89696), [@nabokihms](https://github.com/nabokihms)) [SIG Auth]
+- Fixed an issue where a pod's nominatedNodeName could not be cleared when its node is deleted. ([#91750](https://github.com/kubernetes/kubernetes/pull/91750), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling and Testing]
+- Fixed a bug where initializing zsh completion returned a non-zero exit code even though the completion was initialized successfully. ([#88165](https://github.com/kubernetes/kubernetes/pull/88165), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- Fixed a memory leak in endpointSliceTracker. ([#92838](https://github.com/kubernetes/kubernetes/pull/92838), [@tnqn](https://github.com/tnqn)) [SIG Apps and Network]
+- Fixed the static mount options of the iSCSI and FibreChannel volume plugins. ([#89172](https://github.com/kubernetes/kubernetes/pull/89172), [@jsafrane](https://github.com/jsafrane)) [SIG Storage]
+- Fixed an issue with lost node data in kube-scheduler on clusters with an imbalanced number of nodes across zones. ([#93355](https://github.com/kubernetes/kubernetes/pull/93355), [@maelk](https://github.com/maelk))
+- Fixed several bugs involving the IPFamily field when creating or
+  updating Services in clusters with the IPv6DualStack feature gate enabled.
+
+  The behavior of the IPFamily field is odd and inconsistent, and is
+  likely to change before the dual-stack feature reaches GA. For now,
+  users should treat this field as "write-only" and should not make any
+  assumptions about a Service based on its current IPFamily value. ([#91400](https://github.com/kubernetes/kubernetes/pull/91400), [@danwinship](https://github.com/danwinship)) [SIG Apps and Network]
+- Fixed the EndpointSlice controller to run without errors on clusters with the OwnerReferencesPermissionEnforcement validating admission plugin enabled. ([#89741](https://github.com/kubernetes/kubernetes/pull/89741), [@marun](https://github.com/marun)) [SIG Auth and Network]
+- Fixed the EndpointSliceController to correctly create endpoints for IPv6-only pods.
+
+  Fixed the EndpointController to allow IPv6 headless services, when the
+  IPv6DualStack feature gate is enabled, by specifying `ipFamily: IPv6`
+  on the Service. (This already worked with the EndpointSliceController.) ([#91399](https://github.com/kubernetes/kubernetes/pull/91399), [@danwinship](https://github.com/danwinship)) [SIG Apps and Network]
+- Fixed an issue when using read-only iSCSI volumes from multiple pods. ([#91738](https://github.com/kubernetes/kubernetes/pull/91738), [@jsafrane](https://github.com/jsafrane)) [SIG Storage and Testing]
+- Fixed a scaling issue with CSI volume attachment by using informers. ([#91307](https://github.com/kubernetes/kubernetes/pull/91307), [@yuga711](https://github.com/yuga711)) [SIG API Machinery, Apps, Node, Storage and Testing]
+- Fixed a bug in defaulting the replicas field of custom resource definitions with the scale subresource enabled. ([#89833](https://github.com/kubernetes/kubernetes/pull/89833), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- Fixed a bug where non-directory hostPath types could be recognized as HostPathFile, and added e2e tests for HostPathType. ([#64829](https://github.com/kubernetes/kubernetes/pull/64829), [@dixudx](https://github.com/dixudx)) [SIG Apps, Storage and Testing]
+- Fixed the 63-second or 1-second connection delays seen with some
+  VXLAN-based network plugins, first widely noticed in 1.16
+  (though some users saw the problem earlier with certain network plugins).
+  If you previously used ethtool to disable checksum offload on the primary
+  network interface to work around this, you can now stop doing that. ([#92035](https://github.com/kubernetes/kubernetes/pull/92035), [@danwinship](https://github.com/danwinship)) [SIG Network and Node]
+- Fixed a 1.17 regression in which Cache-Control headers were dropped from API requests. ([#90468](https://github.com/kubernetes/kubernetes/pull/90468), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Testing]
+- Fixed conversion errors for HorizontalPodAutoscaler objects with invalid annotations. ([#89963](https://github.com/kubernetes/kubernetes/pull/89963), [@liggitt](https://github.com/liggitt)) [SIG Autoscaling]
+- Fixed kubectl so that, when an error occurs, it still applies to all correctly built objects instead of stopping. ([#89848](https://github.com/kubernetes/kubernetes/pull/89848), [@seans3](https://github.com/seans3)) [SIG CLI and Testing]
+- Fixed a regression in CPUManager that released exclusive CPUs at incorrect times. ([#90377](https://github.com/kubernetes/kubernetes/pull/90377), [@cbf123](https://github.com/cbf123)) [SIG Cloud Provider and Node]
+- Fixed a regression in CPUManager that could fail to release the exclusive CPUs of app containers inherited from init containers. ([#90419](https://github.com/kubernetes/kubernetes/pull/90419), [@klueska](https://github.com/klueska)) [SIG Node]
+- Fixed a regression from v1.18.0-rc.1 in `kubectl port-forward` when specifying both a local and a remote port. ([#89401](https://github.com/kubernetes/kubernetes/pull/89401), [@liggitt](https://github.com/liggitt))
+- Fixed a race condition in EndpointSlice controller garbage collection. ([#91311](https://github.com/kubernetes/kubernetes/pull/91311), [@robscott](https://github.com/robscott)) [SIG Apps, Network and Testing]
+- For the GCE cluster provider, fixed a bug where internal-type load balancers could not be created for clusters with more than 1000 nodes in a single zone. ([#89902](https://github.com/kubernetes/kubernetes/pull/89902), [@wojtek-t](https://github.com/wojtek-t)) [SIG Cloud Provider, Network and Scalability]
+- For the external storage e2e test suite, changed the external driver to pick the snapshot provisioner from the VolumeSnapshotClass when a VolumeSnapshotClass is explicitly provided as input. ([#90878](https://github.com/kubernetes/kubernetes/pull/90878), [@saikat-royc](https://github.com/saikat-royc)) [SIG Storage and Testing]
+- Get-kube.sh: fixed the ordering so that binaries are fetched from the correct bucket. ([#91635](https://github.com/kubernetes/kubernetes/pull/91635), [@cpanato](https://github.com/cpanato)) [SIG Release]
+- When printing events, eventTime is now used if firstTimestamp is not set. ([#89999](https://github.com/kubernetes/kubernetes/pull/89999), [@soltysh](https://github.com/soltysh)) [SIG CLI]
+- When the kubelet is started with the parameters cgroupPerQos=false and cgroupRoot=/docker, the node allocatable root is computed as nodeAllocatableRoot=/docker/kubepods, which is wrong; the correct value is /docker, as returned by cm.NodeAllocatableRoot(s.CgroupRoot, s.CgroupDriver).
+
+  The cAdvisor interface is created with these cgroup roots:
+  kubeDeps.CAdvisorInterface, err = cadvisor.New(imageFsInfoProvider, s.RootDirectory, cgroupRoots, cadvisor.UsingLegacyCadvisorStats(s.ContainerRuntime, s.RemoteRuntimeEndpoint))
+  Passing the wrong cgroupRoots to this call leads the eviction manager to stop collecting metrics from /docker, and the kubelet then frequently prints errors such as:
+  E0303 17:25:03.436781 63839 summary_sys_containers.go:47] Failed to get system container stats for "/docker": failed to get cgroup stats for "/docker": failed to get container info for "/docker": unknown container "/docker"
+  E0303 17:25:03.436809 63839 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics ([#88970](https://github.com/kubernetes/kubernetes/pull/88970), [@mysunshine92](https://github.com/mysunshine92)) [SIG Node]
+- In an HA environment, if pods were deleted and recreated during a period when a standby scheduler had lost its connection to the API server, the scheduler cache could be corrupted once that standby scheduler became the master. This PR fixes the issue. ([#91126](https://github.com/kubernetes/kubernetes/pull/91126), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+- Renamed the following metrics in the kubelet resource metrics endpoint /metrics/resource:
+  - node_cpu_usage_seconds --> node_cpu_usage_seconds_total
+  - container_cpu_usage_seconds --> container_cpu_usage_seconds_total
+  This is a partial revert of #86282, which was added in 1.18.0 and originally removed the _total suffix. ([#89540](https://github.com/kubernetes/kubernetes/pull/89540), [@dashpole](https://github.com/dashpole)) [SIG Instrumentation and Node]
+- Ipvs: attempt to set the conn_reuse sysctl only on supported kernels. ([#88541](https://github.com/kubernetes/kubernetes/pull/88541), [@cmluciano](https://github.com/cmluciano)) [SIG Network]
+- Jsonpath support in kubectl/client-go now serializes complex types (maps/slices/structs) as JSON instead of Go syntax. ([#89660](https://github.com/kubernetes/kubernetes/pull/89660), [@pjferrell](https://github.com/pjferrell)) [SIG API Machinery, CLI and Cluster Lifecycle]
+- Kube-aggregator certificates are dynamically reloaded when they change on disk. ([#92791](https://github.com/kubernetes/kubernetes/pull/92791), [@p0lyn0mial](https://github.com/p0lyn0mial)) [SIG API Machinery]
+- Kube-apiserver: fixed scale subresource patch handling so that unnecessary 409 Conflict errors are not returned to clients. ([#90342](https://github.com/kubernetes/kubernetes/pull/90342), [@liggitt](https://github.com/liggitt)) [SIG Apps, Autoscaling and Testing]
+- Kube-apiserver: jsonpath expressions containing consecutive recursive descent operators are no longer evaluated for custom resource printer columns. ([#93408](https://github.com/kubernetes/kubernetes/pull/93408), [@joelsmith](https://github.com/joelsmith)) [SIG API Machinery]
+- Kube-apiserver: multiple comma-separated protocols in a single X-Stream-Protocol-Version header are now recognized, in addition to multiple headers, in compliance with RFC2616. ([#89857](https://github.com/kubernetes/kubernetes/pull/89857), [@tedyu](https://github.com/tedyu)) [SIG API Machinery]
+- The kube-proxy IP family is determined by the nodeIP used by the proxy. The order of precedence is (see the sketch below):
+  1. the configured --bind-address, if the bind address is not 0.0.0.0 or ::
+  2. the primary IP of the Node object, if set
+  3. if no IP is found, the nodeIP defaults to 127.0.0.1 and the IP family to IPv4 ([#91725](https://github.com/kubernetes/kubernetes/pull/91725), [@aojea](https://github.com/aojea)) [SIG Network]
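+
+  As an illustration only (not kube-proxy's actual code; the function and parameter names here are hypothetical), the precedence above can be sketched in Go:
+
+  ```go
+  package main
+
+  import (
+      "fmt"
+      "net"
+  )
+
+  // pickNodeIP mirrors the documented precedence: an explicit, non-wildcard
+  // --bind-address wins; then the Node object's primary IP; then 127.0.0.1.
+  func pickNodeIP(bindAddress, nodePrimaryIP string) net.IP {
+      if ip := net.ParseIP(bindAddress); ip != nil && !ip.IsUnspecified() {
+          return ip // 1. configured --bind-address (not 0.0.0.0 or ::)
+      }
+      if ip := net.ParseIP(nodePrimaryIP); ip != nil {
+          return ip // 2. primary IP of the Node object, if set
+      }
+      return net.ParseIP("127.0.0.1") // 3. fallback; implies the IPv4 family
+  }
+
+  func main() {
+      ip := pickNodeIP("::", "2001:db8::10") // "::" is a wildcard, so rule 2 applies
+      family := "IPv4"
+      if ip.To4() == nil {
+          family = "IPv6"
+      }
+      fmt.Println(ip, family) // 2001:db8::10 IPv6
+  }
+  ```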
+- In dual-stack mode, kube-proxy infers the IP family of a service from its ClusterIP instead of using the `Service.Spec.IPFamily` field. ([#91357](https://github.com/kubernetes/kubernetes/pull/91357), [@aojea](https://github.com/aojea))
+- Kube-up now includes CoreDNS version v1.7.0. The major changes include:
+  - Fixed a bug that could cause CoreDNS to stop updating service records.
+  - Fixed a bug in the forward plugin where only the first upstream server was always selected, regardless of the configured policy.
+  - Removed the already deprecated options `resyncperiod` and `upstream` from the kubernetes plugin.
+  - Includes Prometheus metric name changes (to conform to the standard Prometheus metric naming conventions). These are backward compatible with existing reporting formulas that use the old metric names.
+  - Removed the federation plugin (which allowed for v1 Kubernetes federation).
+  More details are available at https://coredns.io/2020/06/15/coredns-1.7.0-release/ ([#92718](https://github.com/kubernetes/kubernetes/pull/92718), [@rajansandeep](https://github.com/rajansandeep)) [SIG Cloud Provider]
+- Kube-up: fixed the setup of the validating admission webhook credential configuration. ([#91995](https://github.com/kubernetes/kubernetes/pull/91995), [@liggitt](https://github.com/liggitt)) [SIG Cloud Provider and Cluster Lifecycle]
+- Increased the timeout for kubeadm's TLS bootstrap process to complete on join to 5 minutes. ([#89735](https://github.com/kubernetes/kubernetes/pull/89735), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+- Kubeadm: added retries to join/UpdateStatus to make the status update more resilient by repeatedly retrying the operation. ([#91952](https://github.com/kubernetes/kubernetes/pull/91952), [@xlgao-zju](https://github.com/xlgao-zju)) [SIG Cluster Lifecycle]
+- Kubeadm: added the deprecated --port=0 flag to the kube-controller-manager and kube-scheduler manifests to disable insecure serving. Without this flag, the components would by default serve insecurely (e.g. /metrics) on the default node interface (controlled by --address). Users who want to override this behavior and enable insecure serving can pass a custom port via --port=X using kubeadm's "extraArgs" mechanism for these components. ([#92720](https://github.com/kubernetes/kubernetes/pull/92720), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: during "join", do not re-add an etcd member if it already exists in the cluster. ([#92118](https://github.com/kubernetes/kubernetes/pull/92118), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: during "reset", do not remove the only remaining stacked etcd member from the cluster and just proceed with the cleanup of the local etcd storage. ([#91145](https://github.com/kubernetes/kubernetes/pull/91145), [@tnqn](https://github.com/tnqn)) [SIG Cluster Lifecycle]
+- Kubeadm: when checking during join whether a node with the same name already exists in the cluster, make sure the NodeReady condition is properly validated. ([#89602](https://github.com/kubernetes/kubernetes/pull/89602), [@kvaps](https://github.com/kvaps)) [SIG Cluster Lifecycle]
+- Kubeadm: ensure the `image-pull-timeout` flag is honored during the upgrade phase. ([#90328](https://github.com/kubernetes/kubernetes/pull/90328), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: fixed a bug where nodes could not join the cluster due to missing RBAC after upgrading to 1.18.x. ([#89537](https://github.com/kubernetes/kubernetes/pull/89537), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: fixed an incorrect warning about passing control-plane-related flags to 'kubeadm join'. ([#89596](https://github.com/kubernetes/kubernetes/pull/89596), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: improved the robustness of "kubeadm join" when adding etcd members on slower setups. ([#90645](https://github.com/kubernetes/kubernetes/pull/90645), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: duplicate DNS names and IP addresses are now removed from generated certificates. ([#92753](https://github.com/kubernetes/kubernetes/pull/92753), [@QianChenglong](https://github.com/QianChenglong)) [SIG Cluster Lifecycle]
+- Kubectl azure auth: fixed a regression in 1.18.0 where the "spn:" prefix was unexpectedly added to the `apiserver-id` configuration in the kubeconfig file. ([#89706](https://github.com/kubernetes/kubernetes/pull/89706), [@weinong](https://github.com/weinong)) [SIG API Machinery and Auth]
+- Kubectl: fixed a bug where autoscale did not honor the '--name' flag. ([#91855](https://github.com/kubernetes/kubernetes/pull/91855), [@SataQiu](https://github.com/SataQiu)) [SIG CLI]
+- Kubectl: fixed a bug where kubectl scale did not honor the '--timeout' flag. ([#91858](https://github.com/kubernetes/kubernetes/pull/91858), [@SataQiu](https://github.com/SataQiu)) [SIG CLI]
+- Kubelet: fixed a bug where the kubelet help output could not display the correct type of some flags. ([#88515](https://github.com/kubernetes/kubernetes/pull/88515), [@SataQiu](https://github.com/SataQiu)) [SIG Docs and Node]
+- Kuberuntime security: pod sandboxes now always run with the `runtime/default` seccomp profile.
+  kuberuntime seccomp: custom profiles can now have smaller seccomp profiles when set at the pod level. ([#90949](https://github.com/kubernetes/kubernetes/pull/90949), [@pjbgf](https://github.com/pjbgf)) [SIG Node]
+- The kubelet now recognizes the bootstrap certificate signal. ([#92786](https://github.com/kubernetes/kubernetes/pull/92786), [@answer1991](https://github.com/answer1991)) [SIG API Machinery, Auth and Node]
+- Node ([#89677](https://github.com/kubernetes/kubernetes/pull/89677), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- On AWS nodes with multiple network interfaces, the kubelet should now more reliably report addresses from secondary interfaces. ([#91889](https://github.com/kubernetes/kubernetes/pull/91889), [@anguslees](https://github.com/anguslees)) [SIG Cloud Provider]
+- Pod condition updates are skipped for rescheduling attempts. ([#91252](https://github.com/kubernetes/kubernetes/pull/91252), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Pods can now be considered for preemption after their previously nominated node becomes unschedulable or unresolvable. ([#92604](https://github.com/kubernetes/kubernetes/pull/92604), [@soulxu](https://github.com/soulxu))
+- Prevent overflow of the PVC requested size when expanding or creating volumes. ([#90907](https://github.com/kubernetes/kubernetes/pull/90907), [@gnufied](https://github.com/gnufied)) [SIG Cloud Provider and Storage]
+- Allow clusters in private Azure clouds to authenticate to ACR in the same cloud. ([#90425](https://github.com/kubernetes/kubernetes/pull/90425), [@DavidParks8](https://github.com/DavidParks8)) [SIG Cloud Provider]
+- Changed the security group (SG) rule logic for AWS load balancer worker nodes to be deterministic. ([#92224](https://github.com/kubernetes/kubernetes/pull/92224), [@M00nF1sh](https://github.com/M00nF1sh)) [SIG Cloud Provider]
+- Resolved a regression in the handling of metadata.managedFields on create/update/patch requests that do not use server-side apply. ([#91690](https://github.com/kubernetes/kubernetes/pull/91690), [@apelisse](https://github.com/apelisse)) [SIG API Machinery and Testing]
+- Resolved a v1.18.0-rc.1 regression in mounting Windows volumes. ([#89319](https://github.com/kubernetes/kubernetes/pull/89319), [@mboersma](https://github.com/mboersma)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Storage]
+- Resolved an issue using `kubectl certificate approve/deny` against a server serving the v1 CSR API. ([#91691](https://github.com/kubernetes/kubernetes/pull/91691), [@liggitt](https://github.com/liggitt)) [SIG Auth and CLI]
+- Restored the `kubectl apply --prune` behavior without the --namespace flag. Since 1.17, `kubectl apply --prune` pruned only resources in the default namespace (or the one from kubeconfig) or ones explicitly specified with command-line flags. However, this was incompatible with kubectl 1.16, which could prune resources in any namespace appearing in the configuration files. This patch restores the kubectl 1.16 behavior. ([#89551](https://github.com/kubernetes/kubernetes/pull/89551), [@tatsuhiro-t](https://github.com/tatsuhiro-t)) [SIG CLI and Testing]
+- Restores the priority of static control plane pods in the cluster/gce/manifests control-plane manifests. ([#89970](https://github.com/kubernetes/kubernetes/pull/89970), [@liggitt](https://github.com/liggitt)) [SIG Cluster Lifecycle and Node]
+- Reverted the devicemanager for Windows nodes, which was added in 1.19-rc1. ([#93263](https://github.com/kubernetes/kubernetes/pull/93263), [@liggitt](https://github.com/liggitt)) [SIG Node and Windows]
+- Scheduler v1 Policy configuration or algorithm provider settings can be passed along with the v1beta1 ComponentConfig to support transitioning from Policy to ComponentConfig. ([#92531](https://github.com/kubernetes/kubernetes/pull/92531), [@damemi](https://github.com/damemi)) [SIG Scheduling]
+- Scheduling failures caused by no nodes being available are now reported as unschedulable under the `schedule_attempts_total` metric. ([#90989](https://github.com/kubernetes/kubernetes/pull/90989), [@ahg-g](https://github.com/ahg-g)) [SIG Scheduling]
+- Service account tokens bound to pods can now be used during the pod deletion grace period. ([#89583](https://github.com/kubernetes/kubernetes/pull/89583), [@liggitt](https://github.com/liggitt)) [SIG Auth]
+- Service load balancers no longer exclude nodes marked unschedulable from the candidate nodes. The service load balancer exclusion label should be used instead.
+
+  Users upgrading from a 1.18 cluster that has cordoned nodes should set the `node.kubernetes.io/exclude-from-external-load-balancers` label on the affected nodes before upgrading if they want those nodes to remain excluded from service load balancers. ([#90823](https://github.com/kubernetes/kubernetes/pull/90823), [@smarterclayton](https://github.com/smarterclayton)) [SIG Apps, Cloud Provider and Network]
+- The --list option of the kubectl annotate command is now supported. ([#92576](https://github.com/kubernetes/kubernetes/pull/92576), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Sync the LB backend nodes for services of `Type=LoadBalancer` on node add/delete events. ([#81185](https://github.com/kubernetes/kubernetes/pull/81185), [@andrewsykim](https://github.com/andrewsykim))
+- The following components, which expect no non-empty, non-flag arguments, now print an error message and exit if an argument is specified: cloud-controller-manager, kube-apiserver, kube-controller-manager, kube-proxy, kubeadm {alpha|config|token|version}, kubemark. Flags must start with a single dash "-" (U+002D) for the short form or a double dash "--" for the long form. Before this change, invalid flags (for example, ones starting with a non-ASCII dash-like character such as the en dash "–", U+2013) were treated as positional arguments and ignored; a sketch of the distinction follows below. ([#91349](https://github.com/kubernetes/kubernetes/pull/91349), [@neolit123](https://github.com/neolit123)) [SIG API Machinery, Cloud Provider, Cluster Lifecycle, Network and Scheduling]
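+
+  A minimal illustration of the accepted flag prefix (an assumed helper, not the components' actual argument parsing):
+
+  ```go
+  package main
+
+  import (
+      "fmt"
+      "strings"
+  )
+
+  // isFlag reports whether an argument begins with the ASCII hyphen-minus
+  // "-" (U+002D); lookalikes such as the en dash "–" (U+2013) do not count.
+  func isFlag(arg string) bool {
+      return strings.HasPrefix(arg, "-")
+  }
+
+  func main() {
+      fmt.Println(isFlag("--port=0")) // true: parsed as a flag
+      fmt.Println(isFlag("–port=0")) // false: now rejected as a positional argument
+  }
+  ```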
+- The terminationGracePeriodSeconds from the pod spec is now applied to mirror pods. ([#92442](https://github.com/kubernetes/kubernetes/pull/92442), [@tedyu](https://github.com/tedyu)) [SIG Node and Testing]
+- Updated github.com/moby/ipvs to v1.0.1 to resolve an IPVS compatibility issue with older kernels. ([#90555](https://github.com/kubernetes/kubernetes/pull/90555), [@andrewsykim](https://github.com/andrewsykim)) [SIG Network]
+- Pod status updates made via the status subresource now validate that the `status.podIP` and `status.podIPs` fields are correctly formatted (a minimal sketch of such a check follows below). ([#90628](https://github.com/kubernetes/kubernetes/pull/90628), [@liggitt](https://github.com/liggitt)) [SIG Apps and Node]
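+
+  Illustrative only (not the apiserver's actual validation code):
+
+  ```go
+  package main
+
+  import (
+      "fmt"
+      "net"
+  )
+
+  // validPodIP reports whether a status.podIP value parses as an IP address,
+  // the kind of format check now applied on status updates.
+  func validPodIP(ip string) bool {
+      return net.ParseIP(ip) != nil
+  }
+
+  func main() {
+      fmt.Println(validPodIP("10.0.0.12"))  // true
+      fmt.Println(validPodIP("10.0.0.300")) // false: rejected on update
+  }
+  ```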
+- Readiness is now reported only after all CRDs have been confirmed in the discovery endpoint. ([#89145](https://github.com/kubernetes/kubernetes/pull/89145), [@sttts](https://github.com/sttts)) [SIG API Machinery]
+- When evicting pods, pods in the Pending state are removed without checking PDBs. ([#83906](https://github.com/kubernetes/kubernetes/pull/83906), [@michaelgugino](https://github.com/michaelgugino)) [SIG API Machinery, Apps, Node and Scheduling]
+- [Security] Fixed a vulnerability in golang.org/x/text/encoding/unicode ([#92219](https://github.com/kubernetes/kubernetes/pull/92219), [@voor](https://github.com/voor)) [SIG Cloud Provider, Cluster Lifecycle, Instrumentation and Node]
+
+### Other (Cleanup or Flake)
+
+- --cache-dir sets the cache directory for both http and discovery, defaulting to $HOME/.kube/cache. ([#92910](https://github.com/kubernetes/kubernetes/pull/92910), [@soltysh](https://github.com/soltysh)) [SIG API Machinery and CLI]
+- Added `pod.Namespace` to image logs. ([#91945](https://github.com/kubernetes/kubernetes/pull/91945), [@zhipengzuo](https://github.com/zhipengzuo))
+- Added the ability to disable kubeconfig file locking via the DISABLE_KUBECONFIG_LOCK environment variable. ([#92513](https://github.com/kubernetes/kubernetes/pull/92513), [@soltysh](https://github.com/soltysh)) [SIG API Machinery and CLI]
+- Added a discrete test to verify that the conntrack entries of UDP pods are cleaned up. ([#90180](https://github.com/kubernetes/kubernetes/pull/90180), [@JacobTanenbaum](https://github.com/JacobTanenbaum)) [SIG Architecture, Network and Testing]
+- Adjusted the cinder fsType value to `ext4` when no fsType is specified. ([#90608](https://github.com/kubernetes/kubernetes/pull/90608), [@huffmanca](https://github.com/huffmanca)) [SIG Storage]
+- Base-image: use debian-base:v2.1.0. ([#90697](https://github.com/kubernetes/kubernetes/pull/90697), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery and Release]
+- Base-image: use debian-iptables:v12.1.0. ([#90782](https://github.com/kubernetes/kubernetes/pull/90782), [@justaugustus](https://github.com/justaugustus)) [SIG Release]
+- Beta.kubernetes.io/arch has already been deprecated since v1.14 and is targeted for removal in v1.18. ([#89462](https://github.com/kubernetes/kubernetes/pull/89462), [@wawa0210](https://github.com/wawa0210)) [SIG Testing]
+- Build: updated to debian-base@v2.1.2 and debian-iptables@v12.1.1. ([#93667](https://github.com/kubernetes/kubernetes/pull/93667), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery, Release and Testing]
+- Changed beta.kubernetes.io/os to kubernetes.io/os. ([#89460](https://github.com/kubernetes/kubernetes/pull/89460), [@wawa0210](https://github.com/wawa0210)) [SIG Testing and Windows]
+- Changed beta.kubernetes.io/os to kubernetes.io/os. ([#89461](https://github.com/kubernetes/kubernetes/pull/89461), [@wawa0210](https://github.com/wawa0210)) [SIG Cloud Provider and Cluster Lifecycle]
+- Changed the message printed when no resources are found while using `kubectl get` to retrieve non-namespaced resources. ([#89861](https://github.com/kubernetes/kubernetes/pull/89861), [@rccrdpccl](https://github.com/rccrdpccl)) [SIG CLI]
+- CoreDNS no longer supports federation data translation for the kube-dns ConfigMap. ([#92716](https://github.com/kubernetes/kubernetes/pull/92716), [@rajansandeep](https://github.com/rajansandeep)) [SIG Cluster Lifecycle]
+- Deprecated the Heapster-related flags of the kubectl top command;
+  Heapster support in kubectl top has been discontinued. ([#87498](https://github.com/kubernetes/kubernetes/pull/87498), [@serathius](https://github.com/serathius)) [SIG CLI]
+- The unused `--target-ram-mb` flag has been deprecated. ([#91818](https://github.com/kubernetes/kubernetes/pull/91818), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery]
+- Dropped some conformance tests that depend directly on the Kubelet API. ([#90615](https://github.com/kubernetes/kubernetes/pull/90615), [@dims](https://github.com/dims)) [SIG Architecture, Network, Release and Testing]
+- A `WaitingForPodScheduled` event is now sent when an unbound PVC in delayed binding mode is used by a pod. ([#91455](https://github.com/kubernetes/kubernetes/pull/91455), [@cofyc](https://github.com/cofyc)) [SIG Storage]
+- Fix: license issue in the blob disk feature ([#92824](https://github.com/kubernetes/kubernetes/pull/92824), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Improved server-side apply conflict errors by setting dedicated field managers for kubectl subcommands. ([#88885](https://github.com/kubernetes/kubernetes/pull/88885), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- IsFullyQualifiedDomainName() now validates each label based on IsDNS1123Label (see the sketch below). ([#90172](https://github.com/kubernetes/kubernetes/pull/90172), [@nak3](https://github.com/nak3)) [SIG API Machinery and Network]
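+
+  A hedged, self-contained sketch of per-label FQDN validation (the real helpers live in k8s.io/apimachinery; this is not that code):
+
+  ```go
+  package main
+
+  import (
+      "fmt"
+      "regexp"
+      "strings"
+  )
+
+  // dns1123Label matches a single RFC 1123 DNS label.
+  var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)
+
+  // isFullyQualifiedDomainName checks every dot-separated label individually,
+  // mirroring the behavior described in the entry above.
+  func isFullyQualifiedDomainName(name string) bool {
+      name = strings.TrimSuffix(name, ".")
+      labels := strings.Split(name, ".")
+      if len(labels) < 2 {
+          return false // an FQDN needs at least two labels
+      }
+      for _, label := range labels {
+          if len(label) > 63 || !dns1123Label.MatchString(label) {
+              return false
+          }
+      }
+      return true
+  }
+
+  func main() {
+      fmt.Println(isFullyQualifiedDomainName("example.com"))         // true
+      fmt.Println(isFullyQualifiedDomainName("under_score.example")) // false
+  }
+  ```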
+- The network tier of GCE load balancers can now be configured with the Service annotation `cloud.google.com/network-tier: Standard`. ([#88532](https://github.com/kubernetes/kubernetes/pull/88532), [@zioproto](https://github.com/zioproto)) [SIG Cloud Provider, Network and Testing]
+- Kube-aggregator: renamed the aggregator_unavailable_apiservice_count metric to aggregator_unavailable_apiservice_total. ([#88156](https://github.com/kubernetes/kubernetes/pull/88156), [@p0lyn0mial](https://github.com/p0lyn0mial)) [SIG API Machinery]
+- Kube-apiserver: the openapi schemas published for custom resources now reference the standard ListMeta schema definition. ([#92546](https://github.com/kubernetes/kubernetes/pull/92546), [@liggitt](https://github.com/liggitt)) [SIG API Machinery]
+- Kube-proxy exposes a new metric, `kubeproxy_sync_proxy_rules_last_queued_timestamp_seconds`, which indicates the last time a change to kube-proxy was queued to be applied. ([#90175](https://github.com/kubernetes/kubernetes/pull/90175), [@squeed](https://github.com/squeed)) [SIG Instrumentation and Network]
+- Kube-scheduler: the `scheduler_total_preemption_attempts` metric has been renamed to `scheduler_preemption_attempts_total`. ([#91448](https://github.com/kubernetes/kubernetes/pull/91448), [@RainbowMango](https://github.com/RainbowMango)) [SIG API Machinery, Cluster Lifecycle, Instrumentation and Scheduling]
+- Kube-up: critical pods are restricted to the kube-system namespace by default, matching the pre-1.17 behavior. ([#93121](https://github.com/kubernetes/kubernetes/pull/93121), [@liggitt](https://github.com/liggitt)) [SIG Cloud Provider and Scheduling]
+- Kubeadm now passes the IPv6DualStack feature gate via the kubelet component config instead of the kubelet command line. ([#90840](https://github.com/kubernetes/kubernetes/pull/90840), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+- Kubeadm: no longer uses a DaemonSet to pre-pull control plane images during "kubeadm upgrade apply". Upgrading an individual node now pulls the required images via a preflight check. The "--image-pull-timeout" flag of "kubeadm upgrade apply" is now deprecated and will be removed in a future release, following the GA deprecation policy. ([#90788](https://github.com/kubernetes/kubernetes/pull/90788), [@xlgao-zju](https://github.com/xlgao-zju)) [SIG Cluster Lifecycle]
+- Kubeadm: uses two separate checks against /livez and /readyz for the kube-apiserver static pod instead of /healthz. ([#90970](https://github.com/kubernetes/kubernetes/pull/90970), [@johscheuer](https://github.com/johscheuer)) [SIG Cluster Lifecycle]
+- None ([#91597](https://github.com/kubernetes/kubernetes/pull/91597), [@elmiko](https://github.com/elmiko)) [SIG Autoscaling and Testing]
+- Openapi-controller: removed the trailing literal `1` from the rate-limiting metric `APIServiceOpenAPIAggregationControllerQueue1` and renamed it to `open_api_aggregation_controller` to comply with Prometheus best practices. ([#77979](https://github.com/kubernetes/kubernetes/pull/77979), [@s-urbaniak](https://github.com/s-urbaniak)) [SIG API Machinery]
+- Reduced event spam on volume operation errors. ([#89794](https://github.com/kubernetes/kubernetes/pull/89794), [@msau42](https://github.com/msau42)) [SIG Storage]
+- Refactored the local nodeipam range allocator and instrumented the cidrset used to store the allocated CIDRs with the following metrics (a declaration sketch follows below):
+  "cidrset_cidrs_allocations_total",
+  "cidrset_cidrs_releases_total",
+  "cidrset_usage_cidrs",
+  "cidrset_allocation_tries_per_request", ([#90288](https://github.com/kubernetes/kubernetes/pull/90288), [@aojea](https://github.com/aojea)) [SIG Apps, Instrumentation, Network and Scalability]
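+
+  For illustration, this is how one of the listed counters might be declared with prometheus/client_golang (the "clusterCIDR" label name is an assumption for this sketch, not necessarily what the allocator uses):
+
+  ```go
+  package main
+
+  import (
+      "fmt"
+
+      "github.com/prometheus/client_golang/prometheus"
+  )
+
+  // cidrAllocations counts CIDR allocations performed by the cidrset.
+  var cidrAllocations = prometheus.NewCounterVec(
+      prometheus.CounterOpts{
+          Name: "cidrset_cidrs_allocations_total",
+          Help: "Count of CIDRs allocated from the cidrset.",
+      },
+      []string{"clusterCIDR"},
+  )
+
+  func main() {
+      prometheus.MustRegister(cidrAllocations)
+      cidrAllocations.WithLabelValues("10.0.0.0/8").Inc() // record one allocation
+      fmt.Println("recorded one cidrset allocation")
+  }
+  ```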
+- Removed the deprecated --server-dry-run flag from kubectl apply. ([#91308](https://github.com/kubernetes/kubernetes/pull/91308), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- Renamed the DefaultPodTopologySpread plugin to the SelectorSpread plugin to avoid a name collision with the DefaultPodTopologySpread feature gate. ([#92501](https://github.com/kubernetes/kubernetes/pull/92501), [@rakeshreddybandi](https://github.com/rakeshreddybandi)) [SIG Release, Scheduling and Testing]
+- Replaced framework.Failf with ExpectNoError. ([#91811](https://github.com/kubernetes/kubernetes/pull/91811), [@lixiaobing1](https://github.com/lixiaobing1)) [SIG Instrumentation, Storage and Testing]
+- Scheduler PreScore plugins are not executed when one or fewer filtered nodes remain. ([#89370](https://github.com/kubernetes/kubernetes/pull/89370), [@ahg-g](https://github.com/ahg-g)) [SIG Scheduling]
+- "HostPath should give a volume the correct mode" is now excluded from the conformance tests. ([#90861](https://github.com/kubernetes/kubernetes/pull/90861), [@dims](https://github.com/dims)) [SIG Architecture and Testing]
+- The kubelet's `--experimental-allocatable-ignore-eviction` option is now marked as deprecated. ([#91578](https://github.com/kubernetes/kubernetes/pull/91578), [@knabben](https://github.com/knabben)) [SIG Node]
+- The kubelet's `--experimental-mounter-path` and `--experimental-check-node-capabilities-before-mount` options are now marked as deprecated. ([#91373](https://github.com/kubernetes/kubernetes/pull/91373), [@knabben](https://github.com/knabben))
+- This PR adds the ability to generate events when specific failures occur while processing PVs or PVCs. These events help users determine why a failure happened so that they can take the necessary recovery actions. ([#89845](https://github.com/kubernetes/kubernetes/pull/89845), [@yuga711](https://github.com/yuga711)) [SIG Apps]
+- The PodShareProcessNamespace feature gate has been removed, and PodShareProcessNamespace is now always enabled. ([#90099](https://github.com/kubernetes/kubernetes/pull/90099), [@tanjunchen](https://github.com/tanjunchen)) [SIG Node]
+- The `--kubelet-https` flag of kube-apiserver has been deprecated. Connections from kube-apiserver to kubelets now always use `https`. (The kubelet has always used `https` to serve the endpoints the apiserver communicates with, since before v1.0.) ([#91630](https://github.com/kubernetes/kubernetes/pull/91630), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Node]
+- Updated CNI to v0.8.6. ([#91370](https://github.com/kubernetes/kubernetes/pull/91370), [@justaugustus](https://github.com/justaugustus)) [SIG Cloud Provider, Network, Release and Testing]
+- Updated Golang to v1.14.5.
+  - Updated repo-infra to 0.0.7 (to support go1.14.5 and go1.13.13)
+  - Includes:
+    - bazelbuild/bazel-toolchains@3.3.2
+    - bazelbuild/rules_go@v0.22.7 ([#93088](https://github.com/kubernetes/kubernetes/pull/93088), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- Updated Golang to v1.14.6.
+  - Updated repo-infra to 0.0.8 (to support go1.14.6 and go1.13.14)
+  - Includes:
+    - bazelbuild/bazel-toolchains@3.4.0
+    - bazelbuild/rules_go@v0.22.8 ([#93198](https://github.com/kubernetes/kubernetes/pull/93198), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- Updated the corefile-migration library to 1.0.8. ([#91856](https://github.com/kubernetes/kubernetes/pull/91856), [@wawa0210](https://github.com/wawa0210)) [SIG Node]
+- Updated the default etcd server version to 3.4.4. ([#89214](https://github.com/kubernetes/kubernetes/pull/89214), [@jingyih](https://github.com/jingyih)) [SIG API Machinery, Cluster Lifecycle and Testing]
+- Updated the default etcd server version to 3.4.7. ([#89895](https://github.com/kubernetes/kubernetes/pull/89895), [@jingyih](https://github.com/jingyih)) [SIG API Machinery, Cluster Lifecycle and Testing]
+- Updated the default etcd server version to 3.4.9. ([#92349](https://github.com/kubernetes/kubernetes/pull/92349), [@jingyih](https://github.com/jingyih)) [SIG API Machinery, Cloud Provider, Cluster Lifecycle and Testing]
+- Updated go.etcd.io/bbolt to v1.3.5. ([#92350](https://github.com/kubernetes/kubernetes/pull/92350), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery and Cloud Provider]
+- Updated the opencontainers/runtime-spec dependency to v1.0.2. ([#89644](https://github.com/kubernetes/kubernetes/pull/89644), [@saschagrunert](https://github.com/saschagrunert)) [SIG Node]
+- The `beta.kubernetes.io/os` and `beta.kubernetes.io/arch` node labels are deprecated. Node selectors should be updated to use `kubernetes.io/os` and `kubernetes.io/arch`. ([#91046](https://github.com/kubernetes/kubernetes/pull/91046), [@wawa0210](https://github.com/wawa0210)) [SIG Apps and Node]
+- `kubectl config view` now redacts bearer tokens by default, as it already does for client certificates. The `--raw` flag can still be used to output the full contents. ([#88985](https://github.com/kubernetes/kubernetes/pull/88985), [@puerco](https://github.com/puerco))
+
+## Dependencies
+
+### Added
+- cloud.google.com/go/bigquery: v1.0.1
+- cloud.google.com/go/datastore: v1.0.0
+- cloud.google.com/go/pubsub: v1.0.1
+- cloud.google.com/go/storage: v1.0.0
+- dmitri.shuralyov.com/gpu/mtl: 666a987
+- github.com/cespare/xxhash/v2: [v2.1.1](https://github.com/cespare/xxhash/v2/tree/v2.1.1)
+- github.com/checkpoint-restore/go-criu/v4: [v4.0.2](https://github.com/checkpoint-restore/go-criu/v4/tree/v4.0.2)
+- github.com/chzyer/logex: [v1.1.10](https://github.com/chzyer/logex/tree/v1.1.10)
+- github.com/chzyer/readline: [2972be2](https://github.com/chzyer/readline/tree/2972be2)
+- github.com/chzyer/test: [a1ea475](https://github.com/chzyer/test/tree/a1ea475)
+- github.com/containerd/cgroups: [0dbf7f0](https://github.com/containerd/cgroups/tree/0dbf7f0)
+- github.com/containerd/continuity: [aaeac12](https://github.com/containerd/continuity/tree/aaeac12)
+- github.com/containerd/fifo: [a9fb20d](https://github.com/containerd/fifo/tree/a9fb20d)
+- github.com/containerd/go-runc: [5a6d9f3](https://github.com/containerd/go-runc/tree/5a6d9f3)
+- github.com/containerd/ttrpc: [v1.0.0](https://github.com/containerd/ttrpc/tree/v1.0.0)
+- github.com/coreos/bbolt: [v1.3.2](https://github.com/coreos/bbolt/tree/v1.3.2)
+- github.com/coreos/go-systemd/v22: [v22.1.0](https://github.com/coreos/go-systemd/v22/tree/v22.1.0)
+- github.com/cpuguy83/go-md2man/v2: [v2.0.0](https://github.com/cpuguy83/go-md2man/v2/tree/v2.0.0)
+- github.com/docopt/docopt-go: [ee0de3b](https://github.com/docopt/docopt-go/tree/ee0de3b)
+- github.com/go-gl/glfw/v3.3/glfw: [12ad95a](https://github.com/go-gl/glfw/v3.3/glfw/tree/12ad95a)
+- github.com/go-ini/ini: [v1.9.0](https://github.com/go-ini/ini/tree/v1.9.0)
+- github.com/godbus/dbus/v5: [v5.0.3](https://github.com/godbus/dbus/v5/tree/v5.0.3)
+- github.com/ianlancetaylor/demangle: [5e5cf60](https://github.com/ianlancetaylor/demangle/tree/5e5cf60)
+- github.com/ishidawataru/sctp: [7c296d4](https://github.com/ishidawataru/sctp/tree/7c296d4)
+- github.com/moby/ipvs: [v1.0.1](https://github.com/moby/ipvs/tree/v1.0.1)
+- github.com/moby/sys/mountinfo: [v0.1.3](https://github.com/moby/sys/mountinfo/tree/v0.1.3)
+- github.com/moby/term: [672ec06](https://github.com/moby/term/tree/672ec06)
+- github.com/russross/blackfriday/v2: [v2.0.1](https://github.com/russross/blackfriday/v2/tree/v2.0.1)
+- github.com/shurcooL/sanitized_anchor_name: [v1.0.0](https://github.com/shurcooL/sanitized_anchor_name/tree/v1.0.0)
+- github.com/ugorji/go: [v1.1.4](https://github.com/ugorji/go/tree/v1.1.4)
+- github.com/yuin/goldmark: [v1.1.27](https://github.com/yuin/goldmark/tree/v1.1.27)
+- google.golang.org/protobuf: v1.24.0
+- gotest.tools/v3: v3.0.2
+- k8s.io/klog/v2: v2.2.0
+
+### Changed
+- cloud.google.com/go: v0.38.0 → v0.51.0
+- github.com/Azure/azure-sdk-for-go: [v35.0.0+incompatible → v43.0.0+incompatible](https://github.com/Azure/azure-sdk-for-go/compare/v35.0.0...v43.0.0)
+- github.com/Azure/go-autorest/autorest/adal: [v0.5.0 → v0.8.2](https://github.com/Azure/go-autorest/autorest/adal/compare/v0.5.0...v0.8.2)
+- github.com/Azure/go-autorest/autorest/date: [v0.1.0 → v0.2.0](https://github.com/Azure/go-autorest/autorest/date/compare/v0.1.0...v0.2.0)
+- github.com/Azure/go-autorest/autorest/mocks: [v0.2.0 → v0.3.0](https://github.com/Azure/go-autorest/autorest/mocks/compare/v0.2.0...v0.3.0)
+- github.com/Azure/go-autorest/autorest: [v0.9.0 → v0.9.6](https://github.com/Azure/go-autorest/autorest/compare/v0.9.0...v0.9.6)
+- github.com/GoogleCloudPlatform/k8s-cloud-provider: [27a4ced → 7901bc8](https://github.com/GoogleCloudPlatform/k8s-cloud-provider/compare/27a4ced...7901bc8)
+- github.com/Microsoft/go-winio: [v0.4.14 → fc70bd9](https://github.com/Microsoft/go-winio/compare/v0.4.14...fc70bd9)
+- github.com/Microsoft/hcsshim: [672e52e → 5eafd15](https://github.com/Microsoft/hcsshim/compare/672e52e...5eafd15)
+- github.com/alecthomas/template: [a0175ee → fb15b89](https://github.com/alecthomas/template/compare/a0175ee...fb15b89)
+- github.com/alecthomas/units: [2efee85 → c3de453](https://github.com/alecthomas/units/compare/2efee85...c3de453)
+- github.com/beorn7/perks: [v1.0.0 → v1.0.1](https://github.com/beorn7/perks/compare/v1.0.0...v1.0.1)
+- github.com/cilium/ebpf: [95b36a5 → 1c8d4c9](https://github.com/cilium/ebpf/compare/95b36a5...1c8d4c9)
+- github.com/containerd/console: [84eeaae → v1.0.0](https://github.com/containerd/console/compare/84eeaae...v1.0.0)
+- github.com/containerd/containerd: [v1.0.2 → v1.3.3](https://github.com/containerd/containerd/compare/v1.0.2...v1.3.3)
+- github.com/containerd/typeurl: [2a93cfd → v1.0.0](https://github.com/containerd/typeurl/compare/2a93cfd...v1.0.0)
+- github.com/containernetworking/cni: [v0.7.1 → v0.8.0](https://github.com/containernetworking/cni/compare/v0.7.1...v0.8.0)
+- github.com/coredns/corefile-migration: [v1.0.6 → v1.0.10](https://github.com/coredns/corefile-migration/compare/v1.0.6...v1.0.10)
+- github.com/coreos/pkg: [97fdf19 → 399ea9e](https://github.com/coreos/pkg/compare/97fdf19...399ea9e)
+- github.com/docker/docker: [be7ac8b → aa6a989](https://github.com/docker/docker/compare/be7ac8b...aa6a989)
+- github.com/docker/go-connections: [v0.3.0 → v0.4.0](https://github.com/docker/go-connections/compare/v0.3.0...v0.4.0)
+- github.com/evanphx/json-patch: [v4.2.0+incompatible → e83c0a1](https://github.com/evanphx/json-patch/compare/v4.2.0...e83c0a1)
+- github.com/fsnotify/fsnotify: [v1.4.7 → v1.4.9](https://github.com/fsnotify/fsnotify/compare/v1.4.7...v1.4.9)
+- github.com/go-kit/kit: [v0.8.0 → v0.9.0](https://github.com/go-kit/kit/compare/v0.8.0...v0.9.0)
+- github.com/go-logfmt/logfmt: [v0.3.0 → v0.4.0](https://github.com/go-logfmt/logfmt/compare/v0.3.0...v0.4.0)
+- github.com/go-logr/logr: [v0.1.0 → v0.2.0](https://github.com/go-logr/logr/compare/v0.1.0...v0.2.0)
+- github.com/golang/groupcache: [02826c3 → 215e871](https://github.com/golang/groupcache/compare/02826c3...215e871)
+- github.com/golang/protobuf: [v1.3.2 → v1.4.2](https://github.com/golang/protobuf/compare/v1.3.2...v1.4.2)
+- github.com/google/cadvisor: [v0.35.0 → v0.37.0](https://github.com/google/cadvisor/compare/v0.35.0...v0.37.0)
+- github.com/google/go-cmp: [v0.3.0 → v0.4.0](https://github.com/google/go-cmp/compare/v0.3.0...v0.4.0)
+- github.com/google/pprof: [3ea8567 → d4f498a](https://github.com/google/pprof/compare/3ea8567...d4f498a)
+- github.com/googleapis/gax-go/v2: [v2.0.4 → v2.0.5](https://github.com/googleapis/gax-go/v2/compare/v2.0.4...v2.0.5)
+- github.com/googleapis/gnostic: [v0.1.0 → v0.4.1](https://github.com/googleapis/gnostic/compare/v0.1.0...v0.4.1)
+- github.com/gorilla/mux: [v1.7.0 → v1.7.3](https://github.com/gorilla/mux/compare/v1.7.0...v1.7.3)
+- github.com/json-iterator/go: [v1.1.8 → v1.1.10](https://github.com/json-iterator/go/compare/v1.1.8...v1.1.10)
+- github.com/jstemmer/go-junit-report: [af01ea7 → v0.9.1](https://github.com/jstemmer/go-junit-report/compare/af01ea7...v0.9.1)
+- github.com/konsorten/go-windows-terminal-sequences: [v1.0.1 → v1.0.3](https://github.com/konsorten/go-windows-terminal-sequences/compare/v1.0.1...v1.0.3)
+- github.com/kr/pretty: [v0.1.0 → v0.2.0](https://github.com/kr/pretty/compare/v0.1.0...v0.2.0)
+- github.com/mattn/go-isatty: [v0.0.9 → v0.0.4](https://github.com/mattn/go-isatty/compare/v0.0.9...v0.0.4)
+- github.com/matttproud/golang_protobuf_extensions: [v1.0.1 → c182aff](https://github.com/matttproud/golang_protobuf_extensions/compare/v1.0.1...c182aff)
+- github.com/mistifyio/go-zfs: [v2.1.1+incompatible → f784269](https://github.com/mistifyio/go-zfs/compare/v2.1.1...f784269)
+- github.com/mrunalp/fileutils: [7d4729f → abd8a0e](https://github.com/mrunalp/fileutils/compare/7d4729f...abd8a0e)
+- github.com/opencontainers/runc: [v1.0.0-rc10 → 819fcc6](https://github.com/opencontainers/runc/compare/v1.0.0-rc10...819fcc6)
+- github.com/opencontainers/runtime-spec: [v1.0.0 → 237cc4f](https://github.com/opencontainers/runtime-spec/compare/v1.0.0...237cc4f)
+- github.com/opencontainers/selinux: [5215b18 → v1.5.2](https://github.com/opencontainers/selinux/compare/5215b18...v1.5.2)
+- github.com/pkg/errors: [v0.8.1 → v0.9.1](https://github.com/pkg/errors/compare/v0.8.1...v0.9.1)
+- github.com/prometheus/client_golang: [v1.0.0 → v1.7.1](https://github.com/prometheus/client_golang/compare/v1.0.0...v1.7.1)
+- github.com/prometheus/common: [v0.4.1 → v0.10.0](https://github.com/prometheus/common/compare/v0.4.1...v0.10.0)
+- github.com/prometheus/procfs: [v0.0.2 → v0.1.3](https://github.com/prometheus/procfs/compare/v0.0.2...v0.1.3)
+- github.com/rubiojr/go-vhd: [0bfd3b3 → 02e2102](https://github.com/rubiojr/go-vhd/compare/0bfd3b3...02e2102)
+- github.com/sirupsen/logrus: [v1.4.2 → v1.6.0](https://github.com/sirupsen/logrus/compare/v1.4.2...v1.6.0)
+- github.com/spf13/cobra: [v0.0.5 → v1.0.0](https://github.com/spf13/cobra/compare/v0.0.5...v1.0.0)
+- github.com/spf13/viper: [v1.3.2 → v1.4.0](https://github.com/spf13/viper/compare/v1.3.2...v1.4.0)
+- github.com/tmc/grpc-websocket-proxy: [89b8d40 → 0ad062e](https://github.com/tmc/grpc-websocket-proxy/compare/89b8d40...0ad062e)
+- github.com/urfave/cli: [v1.20.0 → v1.22.2](https://github.com/urfave/cli/compare/v1.20.0...v1.22.2)
+- github.com/vishvananda/netlink: [v1.0.0 → v1.1.0](https://github.com/vishvananda/netlink/compare/v1.0.0...v1.1.0)
+- github.com/vishvananda/netns: [be1fbed → 52d707b](https://github.com/vishvananda/netns/compare/be1fbed...52d707b)
+- go.etcd.io/bbolt: v1.3.3 → v1.3.5
+- go.etcd.io/etcd: 3cf2f69 → 18dfb9c
+- go.opencensus.io: v0.21.0 → v0.22.2
+- go.uber.org/atomic: v1.3.2 → v1.4.0
+- golang.org/x/crypto: bac4c82 → 75b2880
+- golang.org/x/exp: 4b39c73 → da58074
+- golang.org/x/image: 0694c2d → cff245a
+- golang.org/x/lint: 959b441 → fdd1cda
+- golang.org/x/mobile: d3739f8 → d2bd2a2
+- golang.org/x/mod: 4bf6d31 → v0.3.0
+- golang.org/x/net: 13f9640 → ab34263
+- golang.org/x/oauth2: 0f29369 → 858c2ad
+- golang.org/x/sys: fde4db3 → ed371f2
+- golang.org/x/text: v0.3.2 → v0.3.3
+- golang.org/x/time: 9d24e82 → 555d28b
+- golang.org/x/tools: 65e3620 → c1934b7
+- golang.org/x/xerrors: a985d34 → 9bdfabe
+- google.golang.org/api: 5213b80 → v0.15.1
+- google.golang.org/appengine: v1.5.0 → v1.6.5
+- google.golang.org/genproto: 24fa4b2 → cb27e3a
+- google.golang.org/grpc: v1.26.0 → v1.27.0
+- gopkg.in/check.v1: 788fd78 → 41f04d3
+- honnef.co/go/tools: v0.0.1-2019.2.2 → v0.0.1-2019.2.3
+- k8s.io/gengo: 36b2048 → 8167cfd
+- k8s.io/kube-openapi: bf4fb3b → 656914f
+- k8s.io/system-validators: v1.0.4 → v1.1.2
+- k8s.io/utils: 0a110f9 → d5654de
+- sigs.k8s.io/apiserver-network-proxy/konnectivity-client: v0.0.7 → v0.0.9
+- sigs.k8s.io/structured-merge-diff/v3: v3.0.0 → 43c19bb
+
+### Removed
+- github.com/OpenPeeDeeP/depguard: [v1.0.1](https://github.com/OpenPeeDeeP/depguard/tree/v1.0.1)
+- github.com/Rican7/retry: [v0.1.0](https://github.com/Rican7/retry/tree/v0.1.0)
+- github.com/StackExchange/wmi: [5d04971](https://github.com/StackExchange/wmi/tree/5d04971)
+- github.com/anmitsu/go-shlex: [648efa6](https://github.com/anmitsu/go-shlex/tree/648efa6)
+- github.com/bazelbuild/bazel-gazelle: [70208cb](https://github.com/bazelbuild/bazel-gazelle/tree/70208cb)
+- github.com/bazelbuild/buildtools: [69366ca](https://github.com/bazelbuild/buildtools/tree/69366ca)
+- github.com/bazelbuild/rules_go: [6dae44d](https://github.com/bazelbuild/rules_go/tree/6dae44d)
+- github.com/bradfitz/go-smtpd: [deb6d62](https://github.com/bradfitz/go-smtpd/tree/deb6d62)
+- github.com/cespare/prettybench: [03b8cfe](https://github.com/cespare/prettybench/tree/03b8cfe)
+- github.com/checkpoint-restore/go-criu: [17b0214](https://github.com/checkpoint-restore/go-criu/tree/17b0214)
+- github.com/client9/misspell: [v0.3.4](https://github.com/client9/misspell/tree/v0.3.4)
+- github.com/coreos/go-etcd: [v2.0.0+incompatible](https://github.com/coreos/go-etcd/tree/v2.0.0)
+- github.com/cpuguy83/go-md2man: [v1.0.10](https://github.com/cpuguy83/go-md2man/tree/v1.0.10)
+- github.com/docker/libnetwork: [c8a5fca](https://github.com/docker/libnetwork/tree/c8a5fca)
+- github.com/gliderlabs/ssh: [v0.1.1](https://github.com/gliderlabs/ssh/tree/v0.1.1)
+- github.com/go-critic/go-critic: [1df3008](https://github.com/go-critic/go-critic/tree/1df3008)
+- github.com/go-lintpack/lintpack: [v0.5.2](https://github.com/go-lintpack/lintpack/tree/v0.5.2)
+- github.com/go-ole/go-ole: [v1.2.1](https://github.com/go-ole/go-ole/tree/v1.2.1)
+- github.com/go-toolsmith/astcast: [v1.0.0](https://github.com/go-toolsmith/astcast/tree/v1.0.0)
+- github.com/go-toolsmith/astcopy: [v1.0.0](https://github.com/go-toolsmith/astcopy/tree/v1.0.0)
+- github.com/go-toolsmith/astequal: [v1.0.0](https://github.com/go-toolsmith/astequal/tree/v1.0.0)
+- github.com/go-toolsmith/astfmt: [v1.0.0](https://github.com/go-toolsmith/astfmt/tree/v1.0.0)
+- github.com/go-toolsmith/astinfo: [9809ff7](https://github.com/go-toolsmith/astinfo/tree/9809ff7)
+- github.com/go-toolsmith/astp: [v1.0.0](https://github.com/go-toolsmith/astp/tree/v1.0.0)
+- github.com/go-toolsmith/pkgload: [v1.0.0](https://github.com/go-toolsmith/pkgload/tree/v1.0.0)
+- github.com/go-toolsmith/strparse: [v1.0.0](https://github.com/go-toolsmith/strparse/tree/v1.0.0)
+- github.com/go-toolsmith/typep: [v1.0.0](https://github.com/go-toolsmith/typep/tree/v1.0.0)
+- github.com/gobwas/glob: [v0.2.3](https://github.com/gobwas/glob/tree/v0.2.3)
+- github.com/godbus/dbus: [2ff6f7f](https://github.com/godbus/dbus/tree/2ff6f7f)
+- github.com/golangci/check: [cfe4005](https://github.com/golangci/check/tree/cfe4005)
+- github.com/golangci/dupl: [3e9179a](https://github.com/golangci/dupl/tree/3e9179a)
+- github.com/golangci/errcheck: [ef45e06](https://github.com/golangci/errcheck/tree/ef45e06)
+- github.com/golangci/go-misc: [927a3d8](https://github.com/golangci/go-misc/tree/927a3d8)
+- github.com/golangci/go-tools: [e32c541](https://github.com/golangci/go-tools/tree/e32c541)
+- github.com/golangci/goconst: [041c5f2](https://github.com/golangci/goconst/tree/041c5f2)
+- github.com/golangci/gocyclo: [2becd97](https://github.com/golangci/gocyclo/tree/2becd97)
+- github.com/golangci/gofmt: [0b8337e](https://github.com/golangci/gofmt/tree/0b8337e)
+- github.com/golangci/golangci-lint: [v1.18.0](https://github.com/golangci/golangci-lint/tree/v1.18.0)
+- github.com/golangci/gosec: [66fb7fc](https://github.com/golangci/gosec/tree/66fb7fc)
+- github.com/golangci/ineffassign: [42439a7](https://github.com/golangci/ineffassign/tree/42439a7)
+- github.com/golangci/lint-1: [ee948d0](https://github.com/golangci/lint-1/tree/ee948d0)
+- github.com/golangci/maligned: [b1d8939](https://github.com/golangci/maligned/tree/b1d8939)
+- github.com/golangci/misspell: [950f5d1](https://github.com/golangci/misspell/tree/950f5d1)
+- github.com/golangci/prealloc: [215b22d](https://github.com/golangci/prealloc/tree/215b22d)
+- github.com/golangci/revgrep: [d9c87f5](https://github.com/golangci/revgrep/tree/d9c87f5)
+- github.com/golangci/unconvert: [28b1c44](https://github.com/golangci/unconvert/tree/28b1c44)
+- github.com/google/go-github: [v17.0.0+incompatible](https://github.com/google/go-github/tree/v17.0.0)
+- github.com/google/go-querystring: [v1.0.0](https://github.com/google/go-querystring/tree/v1.0.0)
+- github.com/gostaticanalysis/analysisutil: [v0.0.3](https://github.com/gostaticanalysis/analysisutil/tree/v0.0.3)
+- github.com/jellevandenhooff/dkim: [f50fe3d](https://github.com/jellevandenhooff/dkim/tree/f50fe3d)
+- github.com/klauspost/compress: [v1.4.1](https://github.com/klauspost/compress/tree/v1.4.1)
+- github.com/logrusorgru/aurora: [a7b3b31](https://github.com/logrusorgru/aurora/tree/a7b3b31)
+- github.com/mattn/go-shellwords: [v1.0.5](https://github.com/mattn/go-shellwords/tree/v1.0.5)
+- github.com/mattn/goveralls: [v0.0.2](https://github.com/mattn/goveralls/tree/v0.0.2)
+- github.com/mesos/mesos-go: [v0.0.9](https://github.com/mesos/mesos-go/tree/v0.0.9)
+- github.com/mitchellh/go-ps: [4fdf99a](https://github.com/mitchellh/go-ps/tree/4fdf99a)
+- github.com/mozilla/tls-observatory: [8791a20](https://github.com/mozilla/tls-observatory/tree/8791a20)
+- github.com/nbutton23/zxcvbn-go: [eafdab6](https://github.com/nbutton23/zxcvbn-go/tree/eafdab6)
+- github.com/pquerna/ffjson: [af8b230](https://github.com/pquerna/ffjson/tree/af8b230)
+- github.com/quasilyte/go-consistent: [c6f3937](https://github.com/quasilyte/go-consistent/tree/c6f3937)
+- github.com/ryanuber/go-glob: [256dc44](https://github.com/ryanuber/go-glob/tree/256dc44)
+- github.com/shirou/gopsutil: [c95755e](https://github.com/shirou/gopsutil/tree/c95755e)
+- github.com/shirou/w32: [bb4de01](https://github.com/shirou/w32/tree/bb4de01)
+- github.com/shurcooL/go-goon: [37c2f52](https://github.com/shurcooL/go-goon/tree/37c2f52)
+- github.com/shurcooL/go: [9e1955d](https://github.com/shurcooL/go/tree/9e1955d)
+- github.com/sourcegraph/go-diff: [v0.5.1](https://github.com/sourcegraph/go-diff/tree/v0.5.1)
+- github.com/tarm/serial: [98f6abe](https://github.com/tarm/serial/tree/98f6abe)
+- github.com/timakin/bodyclose: [87058b9](https://github.com/timakin/bodyclose/tree/87058b9)
+- github.com/ugorji/go/codec: [d75b2dc](https://github.com/ugorji/go/codec/tree/d75b2dc)
+- github.com/ultraware/funlen: [v0.0.2](https://github.com/ultraware/funlen/tree/v0.0.2)
+- github.com/valyala/bytebufferpool: [v1.0.0](https://github.com/valyala/bytebufferpool/tree/v1.0.0)
+- github.com/valyala/fasthttp: [v1.2.0](https://github.com/valyala/fasthttp/tree/v1.2.0)
+- github.com/valyala/quicktemplate: [v1.1.1](https://github.com/valyala/quicktemplate/tree/v1.1.1)
+- github.com/valyala/tcplisten: [ceec8f9](https://github.com/valyala/tcplisten/tree/ceec8f9)
+- go4.org: 417644f
+- golang.org/x/build: 2835ba2
+- golang.org/x/perf: 6e6d33e
+- gopkg.in/airbrake/gobrake.v2: v2.0.9
+- gopkg.in/gemnasium/logrus-airbrake-hook.v2: v2.1.2
+- gotest.tools/gotestsum: v0.3.5
+- grpc.go4.org: 11d0a25
+- k8s.io/klog: v1.0.0
+- k8s.io/repo-infra: v0.0.1-alpha.1
+- mvdan.cc/interfacer: c200402
+- mvdan.cc/lint: adc824a
+- mvdan.cc/unparam: fbb5962
+- sourcegraph.com/sqs/pbtypes: d3ebe8f
+
+
+## Dependencies
+
+### Added
+- cloud.google.com/go/bigquery: v1.0.1
+- cloud.google.com/go/datastore: v1.0.0
+- cloud.google.com/go/pubsub: v1.0.1
+- cloud.google.com/go/storage: v1.0.0
+- dmitri.shuralyov.com/gpu/mtl: 666a987
+- github.com/cespare/xxhash/v2: [v2.1.1](https://github.com/cespare/xxhash/v2/tree/v2.1.1)
+- github.com/checkpoint-restore/go-criu/v4: [v4.0.2](https://github.com/checkpoint-restore/go-criu/v4/tree/v4.0.2)
+- github.com/chzyer/logex: [v1.1.10](https://github.com/chzyer/logex/tree/v1.1.10)
+- github.com/chzyer/readline: [2972be2](https://github.com/chzyer/readline/tree/2972be2)
+- github.com/chzyer/test: [a1ea475](https://github.com/chzyer/test/tree/a1ea475)
+- github.com/containerd/cgroups: [0dbf7f0](https://github.com/containerd/cgroups/tree/0dbf7f0)
+- github.com/containerd/continuity: [aaeac12](https://github.com/containerd/continuity/tree/aaeac12)
+- github.com/containerd/fifo: [a9fb20d](https://github.com/containerd/fifo/tree/a9fb20d)
+- github.com/containerd/go-runc: [5a6d9f3](https://github.com/containerd/go-runc/tree/5a6d9f3)
+- github.com/containerd/ttrpc: [v1.0.0](https://github.com/containerd/ttrpc/tree/v1.0.0)
+- github.com/coreos/bbolt: [v1.3.2](https://github.com/coreos/bbolt/tree/v1.3.2)
+- github.com/coreos/go-systemd/v22: [v22.1.0](https://github.com/coreos/go-systemd/v22/tree/v22.1.0)
+- github.com/cpuguy83/go-md2man/v2: [v2.0.0](https://github.com/cpuguy83/go-md2man/v2/tree/v2.0.0)
+- github.com/docopt/docopt-go: [ee0de3b](https://github.com/docopt/docopt-go/tree/ee0de3b)
+- github.com/go-gl/glfw/v3.3/glfw: [12ad95a](https://github.com/go-gl/glfw/v3.3/glfw/tree/12ad95a)
+- github.com/go-ini/ini: [v1.9.0](https://github.com/go-ini/ini/tree/v1.9.0)
+- github.com/godbus/dbus/v5: [v5.0.3](https://github.com/godbus/dbus/v5/tree/v5.0.3)
+- github.com/ianlancetaylor/demangle: [5e5cf60](https://github.com/ianlancetaylor/demangle/tree/5e5cf60)
+- github.com/ishidawataru/sctp: [7c296d4](https://github.com/ishidawataru/sctp/tree/7c296d4)
+- github.com/moby/ipvs: [v1.0.1](https://github.com/moby/ipvs/tree/v1.0.1)
+- github.com/moby/sys/mountinfo: [v0.1.3](https://github.com/moby/sys/mountinfo/tree/v0.1.3)
+- github.com/moby/term: [672ec06](https://github.com/moby/term/tree/672ec06)
+- github.com/russross/blackfriday/v2: [v2.0.1](https://github.com/russross/blackfriday/v2/tree/v2.0.1)
+- github.com/shurcooL/sanitized_anchor_name: [v1.0.0](https://github.com/shurcooL/sanitized_anchor_name/tree/v1.0.0)
+- github.com/ugorji/go: [v1.1.4](https://github.com/ugorji/go/tree/v1.1.4)
+- github.com/yuin/goldmark: [v1.1.27](https://github.com/yuin/goldmark/tree/v1.1.27)
+- google.golang.org/protobuf: v1.24.0
+- gotest.tools/v3: v3.0.2
+- k8s.io/klog/v2: v2.2.0
+- sigs.k8s.io/structured-merge-diff/v4: v4.0.1
+
+### Changed
+- cloud.google.com/go: v0.38.0 → v0.51.0
+- github.com/Azure/azure-sdk-for-go: [v35.0.0+incompatible → v43.0.0+incompatible](https://github.com/Azure/azure-sdk-for-go/compare/v35.0.0...v43.0.0)
+- github.com/Azure/go-autorest/autorest/adal: [v0.5.0 → v0.8.2](https://github.com/Azure/go-autorest/autorest/adal/compare/v0.5.0...v0.8.2)
+- github.com/Azure/go-autorest/autorest/date: [v0.1.0 → v0.2.0](https://github.com/Azure/go-autorest/autorest/date/compare/v0.1.0...v0.2.0)
+- github.com/Azure/go-autorest/autorest/mocks: [v0.2.0 → v0.3.0](https://github.com/Azure/go-autorest/autorest/mocks/compare/v0.2.0...v0.3.0)
+- github.com/Azure/go-autorest/autorest: [v0.9.0 → v0.9.6](https://github.com/Azure/go-autorest/autorest/compare/v0.9.0...v0.9.6)
+- github.com/GoogleCloudPlatform/k8s-cloud-provider: [27a4ced → 7901bc8](https://github.com/GoogleCloudPlatform/k8s-cloud-provider/compare/27a4ced...7901bc8)
+- github.com/Microsoft/go-winio: [v0.4.14 → fc70bd9](https://github.com/Microsoft/go-winio/compare/v0.4.14...fc70bd9)
+- github.com/Microsoft/hcsshim: [672e52e → 5eafd15](https://github.com/Microsoft/hcsshim/compare/672e52e...5eafd15)
+- github.com/alecthomas/template: [a0175ee → fb15b89](https://github.com/alecthomas/template/compare/a0175ee...fb15b89)
+- github.com/alecthomas/units: [2efee85 → c3de453](https://github.com/alecthomas/units/compare/2efee85...c3de453)
+- github.com/beorn7/perks: [v1.0.0 → v1.0.1](https://github.com/beorn7/perks/compare/v1.0.0...v1.0.1)
+- github.com/cilium/ebpf: [95b36a5 → 1c8d4c9](https://github.com/cilium/ebpf/compare/95b36a5...1c8d4c9)
+- github.com/containerd/console: [84eeaae → v1.0.0](https://github.com/containerd/console/compare/84eeaae...v1.0.0)
+- github.com/containerd/containerd: [v1.0.2 → v1.3.3](https://github.com/containerd/containerd/compare/v1.0.2...v1.3.3)
+- github.com/containerd/typeurl: [2a93cfd → v1.0.0](https://github.com/containerd/typeurl/compare/2a93cfd...v1.0.0)
+- github.com/containernetworking/cni: [v0.7.1 → v0.8.0](https://github.com/containernetworking/cni/compare/v0.7.1...v0.8.0)
+- github.com/coredns/corefile-migration: [v1.0.6 → v1.0.10](https://github.com/coredns/corefile-migration/compare/v1.0.6...v1.0.10)
+- github.com/coreos/pkg: [97fdf19 → 399ea9e](https://github.com/coreos/pkg/compare/97fdf19...399ea9e)
+- github.com/docker/docker: [be7ac8b → aa6a989](https://github.com/docker/docker/compare/be7ac8b...aa6a989)
+- github.com/docker/go-connections: [v0.3.0 → v0.4.0](https://github.com/docker/go-connections/compare/v0.3.0...v0.4.0)
+- github.com/evanphx/json-patch: [v4.2.0+incompatible → v4.9.0+incompatible](https://github.com/evanphx/json-patch/compare/v4.2.0...v4.9.0)
+- github.com/fsnotify/fsnotify: [v1.4.7 → v1.4.9](https://github.com/fsnotify/fsnotify/compare/v1.4.7...v1.4.9)
+- github.com/go-kit/kit: [v0.8.0 → v0.9.0](https://github.com/go-kit/kit/compare/v0.8.0...v0.9.0)
+- github.com/go-logfmt/logfmt: [v0.3.0 → v0.4.0](https://github.com/go-logfmt/logfmt/compare/v0.3.0...v0.4.0)
+- github.com/go-logr/logr: [v0.1.0 → v0.2.0](https://github.com/go-logr/logr/compare/v0.1.0...v0.2.0)
+- github.com/golang/groupcache: [02826c3 → 215e871](https://github.com/golang/groupcache/compare/02826c3...215e871)
+- github.com/golang/protobuf: [v1.3.2 → v1.4.2](https://github.com/golang/protobuf/compare/v1.3.2...v1.4.2)
+- github.com/google/cadvisor: [v0.35.0 → v0.37.0](https://github.com/google/cadvisor/compare/v0.35.0...v0.37.0)
+- github.com/google/go-cmp: [v0.3.0 → v0.4.0](https://github.com/google/go-cmp/compare/v0.3.0...v0.4.0)
+- github.com/google/pprof: [3ea8567 → d4f498a](https://github.com/google/pprof/compare/3ea8567...d4f498a)
+- github.com/googleapis/gax-go/v2: [v2.0.4 → v2.0.5](https://github.com/googleapis/gax-go/v2/compare/v2.0.4...v2.0.5)
+- github.com/googleapis/gnostic: [v0.1.0 → v0.4.1](https://github.com/googleapis/gnostic/compare/v0.1.0...v0.4.1)
+- github.com/gorilla/mux: [v1.7.0 → v1.7.3](https://github.com/gorilla/mux/compare/v1.7.0...v1.7.3)
+- github.com/json-iterator/go: [v1.1.8 → v1.1.10](https://github.com/json-iterator/go/compare/v1.1.8...v1.1.10)
+- github.com/jstemmer/go-junit-report: [af01ea7 → v0.9.1](https://github.com/jstemmer/go-junit-report/compare/af01ea7...v0.9.1)
+- github.com/konsorten/go-windows-terminal-sequences: [v1.0.1 → v1.0.3](https://github.com/konsorten/go-windows-terminal-sequences/compare/v1.0.1...v1.0.3)
+- github.com/kr/pretty: [v0.1.0 → v0.2.0](https://github.com/kr/pretty/compare/v0.1.0...v0.2.0)
+- github.com/mattn/go-isatty: [v0.0.9 → v0.0.4](https://github.com/mattn/go-isatty/compare/v0.0.9...v0.0.4)
+- github.com/matttproud/golang_protobuf_extensions: [v1.0.1 → c182aff](https://github.com/matttproud/golang_protobuf_extensions/compare/v1.0.1...c182aff)
+- github.com/mistifyio/go-zfs: [v2.1.1+incompatible → f784269](https://github.com/mistifyio/go-zfs/compare/v2.1.1...f784269)
+- github.com/mrunalp/fileutils: [7d4729f → abd8a0e](https://github.com/mrunalp/fileutils/compare/7d4729f...abd8a0e)
+- github.com/opencontainers/runc: [v1.0.0-rc10 → 819fcc6](https://github.com/opencontainers/runc/compare/v1.0.0-rc10...819fcc6)
+- github.com/opencontainers/runtime-spec: [v1.0.0 → 237cc4f](https://github.com/opencontainers/runtime-spec/compare/v1.0.0...237cc4f)
+- github.com/opencontainers/selinux: [5215b18 → v1.5.2](https://github.com/opencontainers/selinux/compare/5215b18...v1.5.2)
+- github.com/pkg/errors: [v0.8.1 → v0.9.1](https://github.com/pkg/errors/compare/v0.8.1...v0.9.1)
+- github.com/prometheus/client_golang: [v1.0.0 → v1.7.1](https://github.com/prometheus/client_golang/compare/v1.0.0...v1.7.1)
+- github.com/prometheus/common: [v0.4.1 → v0.10.0](https://github.com/prometheus/common/compare/v0.4.1...v0.10.0)
+- github.com/prometheus/procfs: [v0.0.2 → v0.1.3](https://github.com/prometheus/procfs/compare/v0.0.2...v0.1.3)
+- github.com/rubiojr/go-vhd: [0bfd3b3 → 02e2102](https://github.com/rubiojr/go-vhd/compare/0bfd3b3...02e2102)
+- github.com/sirupsen/logrus: [v1.4.2 → v1.6.0](https://github.com/sirupsen/logrus/compare/v1.4.2...v1.6.0)
+- github.com/spf13/cobra: [v0.0.5 → v1.0.0](https://github.com/spf13/cobra/compare/v0.0.5...v1.0.0)
+- github.com/spf13/viper: [v1.3.2 → v1.4.0](https://github.com/spf13/viper/compare/v1.3.2...v1.4.0)
+- github.com/tmc/grpc-websocket-proxy: [89b8d40 → 0ad062e](https://github.com/tmc/grpc-websocket-proxy/compare/89b8d40...0ad062e)
+- github.com/urfave/cli: [v1.20.0 → v1.22.2](https://github.com/urfave/cli/compare/v1.20.0...v1.22.2)
+- github.com/vishvananda/netlink: [v1.0.0 → v1.1.0](https://github.com/vishvananda/netlink/compare/v1.0.0...v1.1.0)
+- github.com/vishvananda/netns: [be1fbed → 52d707b](https://github.com/vishvananda/netns/compare/be1fbed...52d707b)
+- go.etcd.io/bbolt: v1.3.3 → v1.3.5
+- go.etcd.io/etcd: 3cf2f69 → 17cef6e
+- go.opencensus.io: v0.21.0 → v0.22.2
+- go.uber.org/atomic: v1.3.2 → v1.4.0
+- golang.org/x/crypto: bac4c82 → 75b2880
+- golang.org/x/exp: 4b39c73 → da58074
+- golang.org/x/image: 0694c2d → cff245a
+- golang.org/x/lint: 959b441 → fdd1cda
+- golang.org/x/mobile: d3739f8 → d2bd2a2
+- golang.org/x/mod: 4bf6d31 → v0.3.0
+- golang.org/x/net: 13f9640 → ab34263
+- golang.org/x/oauth2: 0f29369 → 858c2ad
+- golang.org/x/sys: fde4db3 → ed371f2
+- golang.org/x/text: v0.3.2 → v0.3.3
+- golang.org/x/time: 9d24e82 → 555d28b
+- golang.org/x/tools: 65e3620 → c1934b7
+- golang.org/x/xerrors: a985d34 → 9bdfabe
+- google.golang.org/api: 5213b80 → v0.15.1
+- google.golang.org/appengine: v1.5.0 → v1.6.5
+- google.golang.org/genproto: 24fa4b2 → cb27e3a
+- google.golang.org/grpc: v1.26.0 → v1.27.0
+- gopkg.in/check.v1: 788fd78 → 41f04d3
+- honnef.co/go/tools: v0.0.1-2019.2.2 → v0.0.1-2019.2.3
+- k8s.io/gengo: 36b2048 → 8167cfd
+- k8s.io/kube-openapi: bf4fb3b → 6aeccd4
+- k8s.io/system-validators: v1.0.4 → v1.1.2
+- k8s.io/utils: a9aa75a → d5654de
+- sigs.k8s.io/apiserver-network-proxy/konnectivity-client: v0.0.7 → v0.0.9
+
+### Removed
+- github.com/OpenPeeDeeP/depguard: [v1.0.1](https://github.com/OpenPeeDeeP/depguard/tree/v1.0.1)
+- github.com/Rican7/retry: [v0.1.0](https://github.com/Rican7/retry/tree/v0.1.0)
+- github.com/StackExchange/wmi: [5d04971](https://github.com/StackExchange/wmi/tree/5d04971)
+- github.com/anmitsu/go-shlex: [648efa6](https://github.com/anmitsu/go-shlex/tree/648efa6)
+- github.com/bazelbuild/bazel-gazelle: [70208cb](https://github.com/bazelbuild/bazel-gazelle/tree/70208cb)
+- github.com/bazelbuild/buildtools: [69366ca](https://github.com/bazelbuild/buildtools/tree/69366ca)
+- github.com/bazelbuild/rules_go: [6dae44d](https://github.com/bazelbuild/rules_go/tree/6dae44d)
+- github.com/bradfitz/go-smtpd: [deb6d62](https://github.com/bradfitz/go-smtpd/tree/deb6d62)
+- github.com/cespare/prettybench: [03b8cfe](https://github.com/cespare/prettybench/tree/03b8cfe)
+- github.com/checkpoint-restore/go-criu: [17b0214](https://github.com/checkpoint-restore/go-criu/tree/17b0214)
+- github.com/client9/misspell: [v0.3.4](https://github.com/client9/misspell/tree/v0.3.4)
+- github.com/coreos/go-etcd: [v2.0.0+incompatible](https://github.com/coreos/go-etcd/tree/v2.0.0)
+- github.com/cpuguy83/go-md2man: [v1.0.10](https://github.com/cpuguy83/go-md2man/tree/v1.0.10)
+- github.com/docker/libnetwork: [c8a5fca](https://github.com/docker/libnetwork/tree/c8a5fca)
+- github.com/gliderlabs/ssh: [v0.1.1](https://github.com/gliderlabs/ssh/tree/v0.1.1)
+- github.com/go-critic/go-critic: [1df3008](https://github.com/go-critic/go-critic/tree/1df3008)
+- github.com/go-lintpack/lintpack: [v0.5.2](https://github.com/go-lintpack/lintpack/tree/v0.5.2)
+- github.com/go-ole/go-ole: [v1.2.1](https://github.com/go-ole/go-ole/tree/v1.2.1)
+- github.com/go-toolsmith/astcast: [v1.0.0](https://github.com/go-toolsmith/astcast/tree/v1.0.0)
+- github.com/go-toolsmith/astcopy: [v1.0.0](https://github.com/go-toolsmith/astcopy/tree/v1.0.0)
+- github.com/go-toolsmith/astequal: [v1.0.0](https://github.com/go-toolsmith/astequal/tree/v1.0.0)
+- github.com/go-toolsmith/astfmt: [v1.0.0](https://github.com/go-toolsmith/astfmt/tree/v1.0.0)
+- github.com/go-toolsmith/astinfo: [9809ff7](https://github.com/go-toolsmith/astinfo/tree/9809ff7)
+- github.com/go-toolsmith/astp: [v1.0.0](https://github.com/go-toolsmith/astp/tree/v1.0.0)
+- github.com/go-toolsmith/pkgload: [v1.0.0](https://github.com/go-toolsmith/pkgload/tree/v1.0.0)
+- github.com/go-toolsmith/strparse: [v1.0.0](https://github.com/go-toolsmith/strparse/tree/v1.0.0)
+- github.com/go-toolsmith/typep: [v1.0.0](https://github.com/go-toolsmith/typep/tree/v1.0.0)
+- github.com/gobwas/glob: [v0.2.3](https://github.com/gobwas/glob/tree/v0.2.3)
+- github.com/godbus/dbus: [2ff6f7f](https://github.com/godbus/dbus/tree/2ff6f7f)
+- github.com/golangci/check: [cfe4005](https://github.com/golangci/check/tree/cfe4005)
+- github.com/golangci/dupl: [3e9179a](https://github.com/golangci/dupl/tree/3e9179a)
+- github.com/golangci/errcheck: [ef45e06](https://github.com/golangci/errcheck/tree/ef45e06)
+- github.com/golangci/go-misc: [927a3d8](https://github.com/golangci/go-misc/tree/927a3d8)
+- github.com/golangci/go-tools: [e32c541](https://github.com/golangci/go-tools/tree/e32c541)
+- github.com/golangci/goconst: [041c5f2](https://github.com/golangci/goconst/tree/041c5f2)
+- github.com/golangci/gocyclo: [2becd97](https://github.com/golangci/gocyclo/tree/2becd97)
+- github.com/golangci/gofmt: [0b8337e](https://github.com/golangci/gofmt/tree/0b8337e)
+- github.com/golangci/golangci-lint: [v1.18.0](https://github.com/golangci/golangci-lint/tree/v1.18.0)
+- github.com/golangci/gosec: [66fb7fc](https://github.com/golangci/gosec/tree/66fb7fc)
+- github.com/golangci/ineffassign: [42439a7](https://github.com/golangci/ineffassign/tree/42439a7)
+- github.com/golangci/lint-1: [ee948d0](https://github.com/golangci/lint-1/tree/ee948d0)
+- github.com/golangci/maligned: [b1d8939](https://github.com/golangci/maligned/tree/b1d8939)
+- github.com/golangci/misspell: [950f5d1](https://github.com/golangci/misspell/tree/950f5d1)
+- github.com/golangci/prealloc: [215b22d](https://github.com/golangci/prealloc/tree/215b22d)
+- github.com/golangci/revgrep: [d9c87f5](https://github.com/golangci/revgrep/tree/d9c87f5)
+- github.com/golangci/unconvert: [28b1c44](https://github.com/golangci/unconvert/tree/28b1c44)
+- github.com/google/go-github: [v17.0.0+incompatible](https://github.com/google/go-github/tree/v17.0.0)
+- github.com/google/go-querystring: [v1.0.0](https://github.com/google/go-querystring/tree/v1.0.0)
+- github.com/gostaticanalysis/analysisutil: [v0.0.3](https://github.com/gostaticanalysis/analysisutil/tree/v0.0.3)
+- github.com/jellevandenhooff/dkim: [f50fe3d](https://github.com/jellevandenhooff/dkim/tree/f50fe3d)
+- github.com/klauspost/compress: [v1.4.1](https://github.com/klauspost/compress/tree/v1.4.1)
+- github.com/logrusorgru/aurora: [a7b3b31](https://github.com/logrusorgru/aurora/tree/a7b3b31)
+- github.com/mattn/go-shellwords: [v1.0.5](https://github.com/mattn/go-shellwords/tree/v1.0.5)
+- github.com/mattn/goveralls: [v0.0.2](https://github.com/mattn/goveralls/tree/v0.0.2)
+- github.com/mesos/mesos-go: [v0.0.9](https://github.com/mesos/mesos-go/tree/v0.0.9)
+- github.com/mitchellh/go-ps: [4fdf99a](https://github.com/mitchellh/go-ps/tree/4fdf99a)
+- github.com/mozilla/tls-observatory: [8791a20](https://github.com/mozilla/tls-observatory/tree/8791a20)
+- github.com/nbutton23/zxcvbn-go: [eafdab6](https://github.com/nbutton23/zxcvbn-go/tree/eafdab6)
+- github.com/pquerna/ffjson: [af8b230](https://github.com/pquerna/ffjson/tree/af8b230)
+- github.com/quasilyte/go-consistent: [c6f3937](https://github.com/quasilyte/go-consistent/tree/c6f3937)
+- github.com/ryanuber/go-glob: [256dc44](https://github.com/ryanuber/go-glob/tree/256dc44)
+- github.com/shirou/gopsutil: [c95755e](https://github.com/shirou/gopsutil/tree/c95755e)
+- github.com/shirou/w32: [bb4de01](https://github.com/shirou/w32/tree/bb4de01)
+- github.com/shurcooL/go-goon: [37c2f52](https://github.com/shurcooL/go-goon/tree/37c2f52)
+- github.com/shurcooL/go: [9e1955d](https://github.com/shurcooL/go/tree/9e1955d)
+- github.com/sourcegraph/go-diff: [v0.5.1](https://github.com/sourcegraph/go-diff/tree/v0.5.1)
+- github.com/tarm/serial: [98f6abe](https://github.com/tarm/serial/tree/98f6abe)
+- github.com/timakin/bodyclose: [87058b9](https://github.com/timakin/bodyclose/tree/87058b9)
+- github.com/ugorji/go/codec: [d75b2dc](https://github.com/ugorji/go/codec/tree/d75b2dc)
+- github.com/ultraware/funlen: [v0.0.2](https://github.com/ultraware/funlen/tree/v0.0.2)
+- github.com/valyala/bytebufferpool: [v1.0.0](https://github.com/valyala/bytebufferpool/tree/v1.0.0)
+- github.com/valyala/fasthttp: [v1.2.0](https://github.com/valyala/fasthttp/tree/v1.2.0)
+- github.com/valyala/quicktemplate: [v1.1.1](https://github.com/valyala/quicktemplate/tree/v1.1.1)
+- github.com/valyala/tcplisten: [ceec8f9](https://github.com/valyala/tcplisten/tree/ceec8f9)
+- go4.org: 417644f
+- golang.org/x/build: 2835ba2
+- golang.org/x/perf: 6e6d33e
+- gopkg.in/airbrake/gobrake.v2: v2.0.9
+- gopkg.in/gemnasium/logrus-airbrake-hook.v2: v2.1.2
+- gotest.tools/gotestsum: v0.3.5
+- grpc.go4.org: 11d0a25
+- k8s.io/klog: v1.0.0
+- k8s.io/repo-infra: v0.0.1-alpha.1
+- mvdan.cc/interfacer: c200402
+- mvdan.cc/lint: adc824a
+- mvdan.cc/unparam: fbb5962
+- sigs.k8s.io/structured-merge-diff/v3: v3.0.0
+- sourcegraph.com/sqs/pbtypes: d3ebe8f
+
+
+
+# v1.19.0-rc.4
+
+
+## Downloads for v1.19.0-rc.4
+
+### Source Code
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes.tar.gz) | 98bb6e2ac98a3176a9592982fec04b037d189de73cb7175d51596075bfd008c7ec0a2301b9511375626581f864ea74b5731e2ef2b4c70363f1860d11eb827de1
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-src.tar.gz) | d4686f8d07fe6f324f46880a4dc5af9afa314a6b7dca82d0edb50290b769d25d18babcc5257a96a51a046052c7747e324b025a90a36ca5e62f67642bb03c44f6
+
+### Client binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-client-darwin-amd64.tar.gz) | e9184ceb37491764c1ea2ef0b1eca55f27109bb973c7ff7c78e83c5945840baf28fdead21ef861b0c5cb81f4dc39d0af86ed7b17ed6f087f211084d0033dad11
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-client-linux-386.tar.gz) | c9f1ec4e8d9c6245f955b2132c0fae6d851a6a49a5b7a2333c01ba9fafa3c4e8a07c6462e525179c25e308520502544ab4dc570e1b9d0090d58b6d18bcfcba47
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-client-linux-amd64.tar.gz) | d23858b03c3554ad12517ce5f7855ceccaa9425c2d19fbc9cf902c0e796a8182f8b0e8eeeeefff0f46e960dfee96b2a2033a04a3194ac34dfd2a32003775d060
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-client-linux-arm.tar.gz) | a745b3a06fe992713e4d7f921e2f36c5b39222d7b1a3e13299d15925743dd99965c2bdf05b4deda30a6f6232a40588e154fdd83f40d9d260d7ac8f70b18cad48
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-client-linux-arm64.tar.gz) | 719b1f30e4bbb05d638ee78cf0145003a1e783bbd0c2f0952fcb30702dd27dfd44c3bc85baaf9a776e490ed53c638327ed1c0e5a882dc93c24d7cac20e4f1dd0
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-client-linux-ppc64le.tar.gz) | fba0794e9dc0f231da5a4e85e37c2d8260e5205574e0421f5122a7d60a05ca6546912519a28e8d6c241904617234e1b0b5c94f890853ad5ae4e329ef8085a092
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-client-linux-s390x.tar.gz) | edce96e37e37fd2b60e34fe56240461094e5784985790453becdfe09011305fcbd8a50ee5bf6d82a70d208d660796d0ddf160bed0292271b6617049db800962f
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-client-windows-386.tar.gz) | 06c849b35d886bebedfd8d906ac37ccda10e05b06542fefe6440268c5e937f235915e53daafe35076b68e0af0d4ddeab4240da55b09ee52fa26928945f1a2ecd
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-client-windows-amd64.tar.gz) | a13e6ec70f76d6056d5668b678ba6f223e35756cded6c84dfb58e28b3741fecfa7cb5e6ae2239392d770028d1b55ca8eb520c0b24e13fc3bd38720134b472d53
+
+### Server binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-server-linux-amd64.tar.gz) | ff7fbf211c29b94c19466337e6c142e733c8c0bac815a97906168e57d21ad1b2965e4b0033b525de8fed9a91ab943e3eb6d330f8095660e32be2791f8161a6a2
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-server-linux-arm.tar.gz) | 218a35466ebcc0bc0e8eff9bbb2e58f0ac3bec6a75f45a7c1487aa4fc3e2bddb90b74e91a2b81bbbbb1eb1de2df310adab4c07c2a2c38a9973580b4f85734a1f
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-server-linux-arm64.tar.gz) | 8a81d727e63875d18336fda8bb6f570084553fc346b7e7df2fc3e1c04a8ef766f61d96d445537e4660ce2f46b170a04218a4d8a270b3ad373caa3f815c0fec93
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-server-linux-ppc64le.tar.gz) | 9b5afa44ac2e1232cd0c54b3602a2027bc8a08b30809b3ef973f75793b35a596491e6056d7995e493a1e4f48d83389240ac2e609b9f76d2715b8e115e6648716
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-server-linux-s390x.tar.gz) | f3034b2e88b5c1d362d84f78dfd1761d0fc21ada1cd6b1a6132a709c663a1206651df96c948534b3661f6b70b651e33021aced3a7574a0e5fc88ace73fff2a6f
+
+### Node binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-node-linux-amd64.tar.gz) | 2061a8f5bc2060b071564c92b693950eda7768a9ceb874982f0e91aa78284fb477becb55ecf099acae73c447271240caecefc19b3b29024e9b818e0639c2fc70
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-node-linux-arm.tar.gz) | c06b817b191ff9a4b05bf70fc14edcf01d4ded204e489966b1761dd68d45d054028870301e45ebba648c0be097c7c42120867c8b28fdd625c8eb5a5bc3ace71d
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-node-linux-arm64.tar.gz) | 21efb3bf23628546de97210074f48e928fec211b81215eff8b10c3f5f7e79bb5911c1393a66a8363a0183fe299bf98b316c0c2d77a13c8c5b798255c056bd806
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-node-linux-ppc64le.tar.gz) | ce31dd65b9cbfaabdc3c93e8afee0ea5606c64e6bf4452078bee73b1d328d23ebdbc871a00edd16fa4e676406a707cf9113fdaec76489681c379c35c3fd6aeb0
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-node-linux-s390x.tar.gz) | 523a8e1d6e0eff70810e846c171b7f74a4aaecb25237addf541a9f8adb3797402b6e57abf9030f62d5bb333d5f5e8960199a44fe48696a4da98f7ed7d993cde1
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.4/kubernetes-node-windows-amd64.tar.gz) | a7fbcd11ea8b6427e7846e39b2fdeae41d484320faa8f3e9b6a777d87ba62e7399ad0ec6a33d9a4675001898881e444f336eebd5c97b3903dee803834a646f3d
+
+## Changelog since v1.19.0-rc.3
+
+## Changes by Kind
+
+### Deprecation
+
+- Kube-apiserver: the componentstatus API is deprecated. This API provided status of etcd, kube-scheduler, and kube-controller-manager components, but only worked when those components were local to the API server, and when kube-scheduler and kube-controller-manager exposed unsecured health endpoints. Instead of this API, etcd health is included in the kube-apiserver health check and kube-scheduler/kube-controller-manager health checks can be made directly against those components' health endpoints. ([#93570](https://github.com/kubernetes/kubernetes/pull/93570), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Apps and Cluster Lifecycle]
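+
+As a minimal, hedged sketch of the replacement workflow, the Go program below probes the component health endpoints directly with the standard library. The loopback address, the default secure ports (10259 for kube-scheduler, 10257 for kube-controller-manager), and the `/healthz` path are assumptions about a default deployment, not details from the PR:
+
+```go
+// healthprobe.go: check kube-scheduler and kube-controller-manager
+// health directly instead of using the deprecated componentstatus API.
+package main
+
+import (
+    "crypto/tls"
+    "fmt"
+    "net/http"
+    "time"
+)
+
+func main() {
+    // Assumed default secure ports; adjust to your cluster's configuration.
+    endpoints := map[string]string{
+        "kube-scheduler":          "https://127.0.0.1:10259/healthz",
+        "kube-controller-manager": "https://127.0.0.1:10257/healthz",
+    }
+    client := &http.Client{
+        Timeout: 5 * time.Second,
+        // The components serve self-signed certificates by default,
+        // so this sketch skips certificate verification.
+        Transport: &http.Transport{
+            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
+        },
+    }
+    for name, url := range endpoints {
+        resp, err := client.Get(url)
+        if err != nil {
+            fmt.Printf("%s: unreachable: %v\n", name, err)
+            continue
+        }
+        resp.Body.Close()
+        fmt.Printf("%s: %s\n", name, resp.Status)
+    }
+}
+```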
+
+### Bug or Regression
+
+- A panic in the apiserver caused by the `informer-sync` health checker is now fixed. ([#93600](https://github.com/kubernetes/kubernetes/pull/93600), [@ialidzhikov](https://github.com/ialidzhikov)) [SIG API Machinery]
+- EndpointSliceMirroring controller now copies labels from Endpoints to EndpointSlices. ([#93442](https://github.com/kubernetes/kubernetes/pull/93442), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- Kube-apiserver: jsonpath expressions with consecutive recursive descent operators are no longer evaluated for custom resource printer columns ([#93408](https://github.com/kubernetes/kubernetes/pull/93408), [@joelsmith](https://github.com/joelsmith)) [SIG API Machinery]
+
+### Other (Cleanup or Flake)
+
+- Build: Update to debian-base@v2.1.0 and debian-iptables@v12.1.1 ([#93667](https://github.com/kubernetes/kubernetes/pull/93667), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery, Release and Testing]
+
+## Dependencies
+
+### Added
+_Nothing has changed._
+
+### Changed
+- k8s.io/utils: 0bdb4ca → d5654de
+
+### Removed
+_Nothing has changed._
+
+
+
+# v1.19.0-rc.3
+
+
+## Downloads for v1.19.0-rc.3
+
+### Source Code
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes.tar.gz) | 31f98fb8d51c6dfa60e2cf710a35af14bc17a6b3833b3802cebc92586b01996c091943087dc818541fc13ad75f051d20c176d9506fc0c86ab582a9295fb7ed59
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-src.tar.gz) | 4886180edf6287adf9db1cdab1e8439c41296c6b5b40af9c4642bb6cfd1fb894313c6d1210c2b882f1bc40dbfd17ed20c5159ea3a8c79ad2ef7a7630016e99de
+
+### Client binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-client-darwin-amd64.tar.gz) | 19b0f9fe95e135329ce2cb9dd3e95551f3552be035ce7235e055c9d775dfa747c773b0806b5c2eef1e69863368be13adcb4c5ef78ae05af65483434686e9a773
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-client-linux-386.tar.gz) | 219a15b54ba616938960ac38869c87be573e3cd7897e4790c31cdeb819415fcefb4f293fc49d63901b42f70e66555c72a8a774cced7ec15a93592dffef3b1336
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-client-linux-amd64.tar.gz) | 7c5a2163e0e163d3b1819acc7c4475d35b853318dd5a6084ea0785744a92063edf65254b0f0eae0f69f4598561c182033a9722c1b8a61894959333f1357cb1f9
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-client-linux-arm.tar.gz) | 5d48f78da6a54b63d8ea68e983d780c672b546b4a5d1fb2c15033377dd3098f0011516b55cc47db316dacabdbbd3660108014d12899ef1f4a6a03158ef503101
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-client-linux-arm64.tar.gz) | c2db09db465f8ea2bd7b82971a59a2be394b2f9a8c15ff78ab06c3a41d9f1292175de19fdc7450cc28746027d59dc3162cb47b64555e91d324d33d699d89f408
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-client-linux-ppc64le.tar.gz) | f28c9c672bc2c5e780f6fdcf019a5dad7172537e38e2ab7d52a1ea55babb41d296cef97b482133c7fce0634b1aed1b5322d1e0061d30c3848e4c912a7e1ca248
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-client-linux-s390x.tar.gz) | 22844af3c97eb9f36a038c552e9818b8670cd02acc98defe5c131c7f88621014cd51c343c1e0921b88ebbfd9850a5c277f50df78350f7565db4e356521d415d4
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-client-windows-386.tar.gz) | edabe78a1337f73caa81c885d722544fec443f875297291e57608d4f486c897af6c602656048a4325fcc957ce1d7acb1c1cf06cab0bd2e36acee1d6be206d3c6
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-client-windows-amd64.tar.gz) | f1a370b9ec298838e302909dd826760b50b593ee2d2247416d345ff00331973e7b6b29cef69f07d6c1477ab534d6ec9d1bbf5d3c2d1bb9b5b2933e088c8de3f1
+
+### Server binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-server-linux-amd64.tar.gz) | 193c023306d7478c2e43c4039642649c77598c05b07dbc466611e166f0233a7ea2a7f2ff61763b2630988c151a591f44428942d8ee06ce6766641e1dcfaac588
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-server-linux-arm.tar.gz) | c1aa489779fb74855385f24120691771a05b57069064c99471b238e5d541d94d4356e7d2cd5b66c284c46bde1fc3eff2a1cebfcd9e72a78377b76e32a1dbf57a
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-server-linux-arm64.tar.gz) | 73400003571f9f0308051ca448b1f96d83e9d211876a57b572ffb787ad0c3bb5f1e20547d959f0fac51a916cf7f26f8839ddddd55d4a38e59c8270d5eb48a970
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-server-linux-ppc64le.tar.gz) | bebf75d884d024ffebfda40abaa0bfec99a6d4cd3cc0fac904a1c4c190e6eb8fc9412c7790b2f8a2b3cc8ccdf8556d9a93eec37e5c298f8abd62ee41de641a42
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-server-linux-s390x.tar.gz) | 8374dfb689abae31480814d6849aaa51e30666b7203cdcf204d49c9a0344391232f40d135671ec8316e26d1685e1cbbea4b829ff3b9f83c08c2d1ba50cd5aeb2
+
+### Node binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-node-linux-amd64.tar.gz) | 194ee29b012463e6d90c346f76d53f94778f75cc916b0e10a5ee174983fac6e848438e0d9e36a475c5d7ba7b0f3ad5debc039ec8f95fdfb6229843f04dfacb53
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-node-linux-arm.tar.gz) | f0d075eaa84dae7ce2101dfa421021b0bfea235fe606d693e881775cd37ff0b82ca6a419dfe48becd2eddc5f882e97ba838164e6ac5991445225c31f147b4f97
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-node-linux-arm64.tar.gz) | 3dc69981f31b01a2d8c439f7047f73e5699a121501c516ed17b3e91ed358ee97e43fa955eb9e1434cbf7864e51097e76c216075d34f4b455930a44af6c64be5c
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-node-linux-ppc64le.tar.gz) | 4a77720373960a0cc20bbcfcdfe17f8d5ddaaf2e38bad607cfe05831029e8e14559e78cd0b5b80ab9c9268a04a8b6bd54ad7232c29301a1f6a6392fcd38ecedf
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-node-linux-s390x.tar.gz) | 319e684340aab739e3da46c6407851ff1c42463ba176bf190e58faa48d143975f02df3443ac287cdfcf652b5d6b6e6721d9e4f35995c4e705297a97dd777fe7e
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.3/kubernetes-node-windows-amd64.tar.gz) | 1ff22497a3f0844ffa8593a2a444a8fcb45d0123da49fd58e17cfc1477d22be7f6809d923898b6aa7a9ce519b0a6e0825f575f6cf71da5c8a0fa5f6b4d0905b6
+
+## Changelog since v1.19.0-rc.2
+
+## Changes by Kind
+
+### API Change
+
+- Adds the ability to disable Accelerator/GPU metrics collected by Kubelet ([#91930](https://github.com/kubernetes/kubernetes/pull/91930), [@RenaudWasTaken](https://github.com/RenaudWasTaken)) [SIG Node]
+- Kubernetes is now built with golang 1.15.0-rc.1.
+ - The deprecated, legacy behavior of treating the CommonName field on X.509 serving certificates as a host name when no Subject Alternative Names are present is now disabled by default. It can be temporarily re-enabled by adding the value x509ignoreCN=0 to the GODEBUG environment variable. ([#93264](https://github.com/kubernetes/kubernetes/pull/93264), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery, Auth, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation, Network, Node, Release, Scalability, Storage and Testing]
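+
+Since the practical remedy is to serve certificates that carry their host names in Subject Alternative Names, here is a hedged standard-library sketch of generating such a certificate; the host name `example.local` and the one-year validity are placeholders, not values from the change:
+
+```go
+// Generate a self-signed serving certificate whose host name is in the
+// SAN list, so it still verifies once the CommonName fallback is off.
+package main
+
+import (
+    "crypto/ecdsa"
+    "crypto/elliptic"
+    "crypto/rand"
+    "crypto/x509"
+    "crypto/x509/pkix"
+    "encoding/pem"
+    "math/big"
+    "os"
+    "time"
+)
+
+func main() {
+    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
+    if err != nil {
+        panic(err)
+    }
+    template := x509.Certificate{
+        SerialNumber: big.NewInt(1),
+        Subject:      pkix.Name{CommonName: "example.local"},
+        // Verifiers match against the SAN entries below; a certificate
+        // carrying the host name only in CommonName would be rejected.
+        DNSNames:    []string{"example.local"},
+        NotBefore:   time.Now(),
+        NotAfter:    time.Now().Add(365 * 24 * time.Hour),
+        KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
+        ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
+    }
+    der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
+    if err != nil {
+        panic(err)
+    }
+    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
+}
+```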
+
+### Bug or Regression
+
+- Azure: added a per-VMSS VMs cache to prevent throttling on clusters with many attached VMSS instances ([#93107](https://github.com/kubernetes/kubernetes/pull/93107), [@bpineau](https://github.com/bpineau)) [SIG Cloud Provider]
+- Extended DSR loadbalancer feature in winkernel kube-proxy to HNS versions 9.3-9.max, 10.2+ ([#93080](https://github.com/kubernetes/kubernetes/pull/93080), [@elweb9858](https://github.com/elweb9858)) [SIG Network]
+- Fixed "instance not found" issues when an Azure node is recreated within a short time ([#93316](https://github.com/kubernetes/kubernetes/pull/93316), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+
+## Dependencies
+
+### Added
+- github.com/yuin/goldmark: [v1.1.27](https://github.com/yuin/goldmark/tree/v1.1.27)
+
+### Changed
+- github.com/Microsoft/hcsshim: [v0.8.9 → 5eafd15](https://github.com/Microsoft/hcsshim/compare/v0.8.9...5eafd15)
+- github.com/containerd/cgroups: [bf292b2 → 0dbf7f0](https://github.com/containerd/cgroups/compare/bf292b2...0dbf7f0)
+- github.com/urfave/cli: [v1.22.1 → v1.22.2](https://github.com/urfave/cli/compare/v1.22.1...v1.22.2)
+- golang.org/x/crypto: bac4c82 → 75b2880
+- golang.org/x/mod: v0.1.0 → v0.3.0
+- golang.org/x/net: d3edc99 → ab34263
+- golang.org/x/tools: c00d67e → c1934b7
+
+### Removed
+- github.com/godbus/dbus: [ade71ed](https://github.com/godbus/dbus/tree/ade71ed)
+
+
+
+# v1.19.0-rc.2
+
+
+## Downloads for v1.19.0-rc.2
+
+### Source Code
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes.tar.gz) | 7a9fa6af3772be18f8c427d8b96836bd77e271a08fffeba92d01b3fac4bd69d2be1bbc404cdd4fc259dda42b16790a7943eddb7c889b918d7631857e127a724c
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-src.tar.gz) | 60184627a181ac99cd914acb0ba61c22f31b315ef13be5504f3cb43dea1fa84abb2142c8a1ba9e98e037e0d9d2765e8d85bd12903b03a86538d7638ceb6ac5c9
+
+### Client binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-darwin-386.tar.gz) | 03332cd70ce6a9c8e533d93d694da32b549bef486cf73c649bcb1c85fc314b0ac0f95e035de7b54c81112c1ac39029abeb8f246d359384bde2119ea5ea3ebe66
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-darwin-amd64.tar.gz) | e82c2908366cc27cbc1d72f89fdc13414b484dfdf26c39c6180bf2e5734169cc97d77a2d1ac051cdb153582a38f4805e5c5b5b8eb88022c914ffb4ef2a8202d3
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-linux-386.tar.gz) | 948be72e8162ee109c670a88c443ba0907acfd0ffb64df62afe41762717bc2fb9308cbc4eb2012a14e0203197e8576e3700ad9f105379841d46acafad2a4c6dc
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-linux-amd64.tar.gz) | 54e1980b6967dab1e70be2b4df0cd0171f04c92f12dcdf80908b087facb9d5cc1399a7d9455a4a799daa8e9d48b6ad86cb3666a131e5adfcd36b008d25138fa3
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-linux-arm.tar.gz) | 4edcd2e1a866a16b8b0f6228f93b4a61cdd43dca36dcb53a5dbd865cc5a143ef6fd3b8575925acc8af17cff21dee993df9b88db5724320e7b420ca9d0427677f
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-linux-arm64.tar.gz) | 138b215e35cfb5d05bda766763e92352171e018a090d516dbf0c280588c5e6f36228163a75a8147c7bac46e773ad0353daaf550d8fa0e91b1e05c5bc0242531c
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-linux-ppc64le.tar.gz) | 3b8e7f5f1f2e34df5dbb06c12f07f660a2a732846c56d0f4b0a939b8121361d381325565bdda3182ef8951f4c2513a2c255940f97011034063ffb506d5aedeab
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-linux-s390x.tar.gz) | b695cc0695bd324c51084e64bea113aaad3c0b5ba44b5d122db9da6e359a4108008a80944cbe96c405bd2cf57f5f31b3eaf50f33c23d980bdb9f272937c88d1c
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-windows-386.tar.gz) | 8e370a66545cdebe0ae0816afe361c7579c7c6e8ee5652e4e01c6fcc3d6d2a6557101be39be24ceb14302fb30855730894a17f6ae11586759257f12406c653e2
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-client-windows-amd64.tar.gz) | 89e0fe5aac33c991891b08e5a3891ecbda3e038f0ee6a5cdd771ea294ec84292bd5f65f1a895f23e6892ec28f001f66d0166d204bf135cb1aa467ae56ccc1260
+
+### Server binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-server-linux-amd64.tar.gz) | 2b0a1b107bf31913d9deec57eab9d3db2ea512c995ce3b4fe247f91c36fdebcc4484a2f8ff53d40a5bc6a04c7144827b85b40ac90c46a9b0cec8a680700f1b1c
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-server-linux-arm.tar.gz) | 2f1ab3bcacd82a9b6d92e09b7cdd63f57fc44623cdfb517512b634264fed87999d78b8571c7930839381b1ed4793b68343e85956d7a8c5bae77ba8f8ade06afa
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-server-linux-arm64.tar.gz) | ea67613c8356f650891a096881546afb27f00e86a9c777617817583628d38b4725f0f65da3b0732414cbc8f97316b3029a355177342a4b1d94cf02d79542e4cd
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-server-linux-ppc64le.tar.gz) | d1b151f3f47c28ead2304d2477fa25f24d12e3fd80e9d1b3b593db99b9a1c5821db4d089f4f1dd041796ea3fd814000c225a7e75aac1e5891a4e16517bcaceee
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-server-linux-s390x.tar.gz) | 69bf215fdc3ad53834eaa9a918452feb0803dffe381b6e03b73141364a697a576e5ed0242d448616707cb386190c21564fe89f8cf3409a7c621a86d86b2c7680
+
+### Node binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-node-linux-amd64.tar.gz) | 88ae137316bab3bb1dcb6c78a4d725face618d41714400505b97ce9d3fa37a6caa036b9e8508ade6dd679e3a8c483a32aef9e400ab08d86b6bf39bc13f34e435
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-node-linux-arm.tar.gz) | 7eaaf2a2a4ee5181cb4c1567e99b8bf82a3da342799f5d2b34dd7f133313c3e3d2ac5a778110e178161788cb226bd64836fba35fbec21c8384e7725cae9b756c
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-node-linux-arm64.tar.gz) | 4f0ef95abc52da0e5d0c40434f8c324ddfb218a577114c4ead00f2ac1c01439922aee6fe347f702927a73b0166cd8b9f4c491d3a18a1a951d24c9ea7259d2655
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-node-linux-ppc64le.tar.gz) | 0424896e2fedae3a566a5aa2e4af26977a578066d49e3ad66307839c2d2dd1c53d1afcf16b2f6cebf0c74d2d60dbc118e6446d9c02aaab27e95b3a6d26889f51
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-node-linux-s390x.tar.gz) | 294788687a6e6d1ca2e4d56435b1174e4330abe64cc58b1372c3b9caaab4455586da4e3bfc62616b52ea7d678561fb77ce1f8d0023fd7d1e75e1db348c69939c
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.2/kubernetes-node-windows-amd64.tar.gz) | 61389f8c05c682102e3432a2f05f41b11d531124f61443429627f94ef6e970d44240d44d32aa467b814de0b54a17208b2d2696602ba5dd6d30f64db964900230
+
+## Changelog since v1.19.0-rc.1
+
+## Changes by Kind
+
+### API Change
+
+- A new alpha-level field, `SupportsFsGroup`, has been introduced for CSIDrivers to allow them to specify whether they support volume ownership and permission modifications. The `CSIVolumeSupportFSGroup` feature gate must be enabled to allow this field to be used. ([#92001](https://github.com/kubernetes/kubernetes/pull/92001), [@huffmanca](https://github.com/huffmanca)) [SIG API Machinery, CLI and Storage]
+- The kube-controller-manager managed signers can now have distinct signing certificates and keys. See the help about `--cluster-signing-[signer-name]-{cert,key}-file`. `--cluster-signing-{cert,key}-file` is still the default. ([#90822](https://github.com/kubernetes/kubernetes/pull/90822), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Apps and Auth]
+
+### Feature
+
+- Added kube-apiserver metrics: apiserver_current_inflight_request_measures and, when API Priority and Fairness is enabled, windowed_request_stats. ([#91177](https://github.com/kubernetes/kubernetes/pull/91177), [@MikeSpreitzer](https://github.com/MikeSpreitzer)) [SIG API Machinery, Instrumentation and Testing]
+- Rename pod_preemption_metrics to preemption_metrics. ([#93256](https://github.com/kubernetes/kubernetes/pull/93256), [@ahg-g](https://github.com/ahg-g)) [SIG Instrumentation and Scheduling]
+
+### Bug or Regression
+
+- Do not add nodes labeled with kubernetes.azure.com/managed=false to the backend pool of the load balancer. ([#93034](https://github.com/kubernetes/kubernetes/pull/93034), [@matthias50](https://github.com/matthias50)) [SIG Cloud Provider]
+- Do not retry volume expansion if the CSI driver returns a FailedPrecondition error ([#92986](https://github.com/kubernetes/kubernetes/pull/92986), [@gnufied](https://github.com/gnufied)) [SIG Node and Storage]
+- Fix: determine the correct IP config based on IP family ([#93043](https://github.com/kubernetes/kubernetes/pull/93043), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- Fix: initial delay in mounting Azure disk & file ([#93052](https://github.com/kubernetes/kubernetes/pull/93052), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fixed the EndpointSliceController to correctly create endpoints for IPv6-only pods.
+
+ Fixed the EndpointController to allow IPv6 headless services, if the IPv6DualStack
+ feature gate is enabled, by specifying `ipFamily: IPv6` on the service. (This already
+ worked with the EndpointSliceController.) ([#91399](https://github.com/kubernetes/kubernetes/pull/91399), [@danwinship](https://github.com/danwinship)) [SIG Apps and Network]
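+
+As a rough illustration of the service shape that last entry describes, here is a hedged Go sketch using the k8s.io/api types from the v0.19.x line; the single `ipFamily` field and the `IPv6DualStack` gate are as described above, while the service name, selector, and port are placeholders:
+
+```go
+// Build a headless Service pinned to IPv6 via the ipFamily field.
+package main
+
+import (
+    "fmt"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    ipv6 := corev1.IPv6Protocol
+    svc := corev1.Service{
+        ObjectMeta: metav1.ObjectMeta{Name: "demo-headless"},
+        Spec: corev1.ServiceSpec{
+            ClusterIP: corev1.ClusterIPNone, // headless
+            IPFamily:  &ipv6,                // needs the IPv6DualStack feature gate
+            Selector:  map[string]string{"app": "demo"},
+            Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
+        },
+    }
+    fmt.Printf("%+v\n", svc.Spec)
+}
+```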
+
+### Other (Cleanup or Flake)
+
+- Kube-up: defaults to limiting critical pods to the kube-system namespace to match behavior prior to 1.17 ([#93121](https://github.com/kubernetes/kubernetes/pull/93121), [@liggitt](https://github.com/liggitt)) [SIG Cloud Provider and Scheduling]
+- Update Golang to v1.14.5
+ - Update repo-infra to 0.0.7 (to support go1.14.5 and go1.13.13)
+ - Includes:
+ - bazelbuild/bazel-toolchains@3.3.2
+ - bazelbuild/rules_go@v0.22.7 ([#93088](https://github.com/kubernetes/kubernetes/pull/93088), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- Update Golang to v1.14.6
+ - Update repo-infra to 0.0.8 (to support go1.14.6 and go1.13.14)
+ - Includes:
+ - bazelbuild/bazel-toolchains@3.4.0
+ - bazelbuild/rules_go@v0.22.8 ([#93198](https://github.com/kubernetes/kubernetes/pull/93198), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- Update default etcd server version to 3.4.9 ([#92349](https://github.com/kubernetes/kubernetes/pull/92349), [@jingyih](https://github.com/jingyih)) [SIG API Machinery, Cloud Provider, Cluster Lifecycle and Testing]
+
+## Dependencies
+
+### Added
+_Nothing has changed._
+
+### Changed
+- go.etcd.io/etcd: 54ba958 → 18dfb9c
+- k8s.io/utils: 6e3d28b → 0bdb4ca
+
+### Removed
+_Nothing has changed._
+
+
+
+# v1.19.0-rc.1
+
+
+## Downloads for v1.19.0-rc.1
+
+### Source Code
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes.tar.gz) | d4bc1d86ff77a1a8695091207b8181a246c8964ae1dd8967392aae95197c0339c7915a016c017ecab0b9d203b3205221ca766ce568d7ee52947e7f50f057af4f
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-src.tar.gz) | 79af4e01b0d5432f92b026730a0c60523069d312858c30fdcaeaf6ee159c71f3413a5075d82c0acd9b135b7a06d5ecb0c0d38b8a8d0f301a9d9bffb35d22f029
+
+### Client binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-darwin-386.tar.gz) | 7d21bf9733810659576e67986d129208894adea3c571de662dbf80fb822e18abfc1644ea60a4e5fbe244a23b56aa973b76dafe789ead1bf7539f41bdd9bca886
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-darwin-amd64.tar.gz) | b4622e06c09bb08a0dc0115bfcd991c50459c7b772889820648ed1c05a425605d10b71b92c58c119b77baa3bca209f9c75827d2cde69d128a5cfcada5f37be39
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-linux-386.tar.gz) | f51032ad605543f68a2a4da3bede1f3e7be0dd63b03b751fef5f133e8d64bec02bfe7433b75e3d0c4ae122d4e0cf009095800c638d2cc81f6fb81b488f5a6dab
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-linux-amd64.tar.gz) | 48489d22969f69a5015988e596d597c64ea18675649afe55ad119dbbe98ba9a4104d5e323704cf1f3cbdfca3feac629d3813e260a330a72da12f1a794d054f76
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-linux-arm.tar.gz) | d9f8a6f6f3d09be9c08588c2b5153a4d8cc9db496bde3da2f3af472c260566d1391cd8811f2c05d4f302db849a38432f25228d9bbb59aaaf0dfba64b33f8ee8e
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-linux-arm64.tar.gz) | 1c3590750a3f02e0e5845e1135cc3ab990309bdecfe64c089842a134eae57b573488531696796185ed12dde2d6f95d2e3656dd9893d04cd0adbe025513ffff30
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-linux-ppc64le.tar.gz) | 158a562d5dbbe90cd56b5d757823adda1919e9b5db8005fb6e2523358e5a20628d55ec1903c0e317a0d8ac9b9a649eea23d9ea746db22b73d6d580ae8c067077
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-linux-s390x.tar.gz) | 47c140567dc593caf065f295ed6006efcde010a526a96c8d3ef5f3d9a9dc6b413bc197dc822648067fe16299908ada7046c2a8a3213d4296b04b51a264ad40e9
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-windows-386.tar.gz) | e25d7d4ad3e6f6e6cfba181c5871e56de2751f88b640502745f6693ddd86ccd7eef8aebaa30955afdbbd0320a5b51d4e9e17f71baab37a470aac284178a0e21c
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-client-windows-amd64.tar.gz) | fd8463b04b5d7f115104245fa1dd53de6656b349dad4cfd55f239012d4f2c1a8e35aa3f3554138df9ddfea9d7702b51a249f6db698c0cea7c36e5bc98a017fe7
+
+### Server binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-server-linux-amd64.tar.gz) | 96acce78bba3779bef616de28da5d75bc4dc0b52fe0bf03b169c469ade9a8cd38b19c4620d222d67bff9ceeb0c5ebf893f55c1de02356bcebe5689890d0478f7
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-server-linux-arm.tar.gz) | 1e561f3edbc66d2ab7f6f1ffe8dc1c01cec13ee3ba700458bd5d87202723cc832f3305a864a3b569463c96d60c9f60c03b77f210663cc40589e40515b3a32e75
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-server-linux-arm64.tar.gz) | ba8fc011ac0e54cb1a0e0e3ee5f1cff4d877f4fdd75e15bf25b1cf817b3cf2bc85f9809d3cc76e9145f07a837960843ca68bdf02fe970c0043fc9ff7b53da021
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-server-linux-ppc64le.tar.gz) | 1f506676284ab2f6bd3fc8a29a062f4fddf5346ef30be9363f640467c64011144381180c5bf74ec885d2f54524e82e21c745c5d2f1b191948bc40db2a09a2900
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-server-linux-s390x.tar.gz) | 5a7101288d51297c3346d028176b4b997afd8652d6481cec82f8863a91209fec6e8a9286a9bd7543b428e6ef82c1c68a7ce0782191c4682634015a032f749554
+
+### Node binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-node-linux-amd64.tar.gz) | 6852edc9818cb51a7e738e44a8bca2290777320e62518c024962fddd05f7ef390fb5696537068fd75e340bae909602f0bbc2aa5ebf6c487c7b1e990250f16810
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-node-linux-arm.tar.gz) | f13edad4684d0de61e4cd7e524f891c75e0efe1050911d9bf0ee3a77cac28f57dca68fb990df6b5d9646e9b389527cbb861de10e12a84e57788f339db05936cb
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-node-linux-arm64.tar.gz) | 69480150325525459aed212b8c96cb1865598cb5ecbeb57741134142d65e8a96258ec298b86d533ce88d2c499c4ad17e66dd3f0f7b5e9c34882889e9cb384805
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-node-linux-ppc64le.tar.gz) | 774cfa9a4319ede166674d3e8c46900c9319d98ffba5b46684244e4bb15d94d31df8a6681e4711bc744d7e92fd23f207505eda98f43c8e2383107badbd43f289
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-node-linux-s390x.tar.gz) | 7e696988febb1e913129353134191b23c6aa5b0bea7c9c9168116596b827c091a88049ca8b8847dda25ecd4467cca4cc48cae8699635b5e78b83aab482c109f5
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-rc.1/kubernetes-node-windows-amd64.tar.gz) | 067182292d9e17f0d4974051681bedcf5ed6017dc80485541f89ea1f211085714165941a5262a4997b7bfc2bd190f2255df4c1b39f86a3278487248111d83cd4
+
+## Changelog since v1.19.0-rc.0
+
+## Urgent Upgrade Notes
+
+### (No, really, you MUST read this before you upgrade)
+
+ - The Azure blob disk feature (`kind`: `Shared`, `Dedicated`) has been deprecated; use `kind`: `Managed` in the `kubernetes.io/azure-disk` storage class instead. ([#92905](https://github.com/kubernetes/kubernetes/pull/92905), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
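+
+As a hedged sketch of the replacement, the Go snippet below builds the corresponding StorageClass with the k8s.io/api types; the class name and the `storageaccounttype` parameter are illustrative assumptions, and only `kind: Managed` and the provisioner name come from the note above:
+
+```go
+// StorageClass for the in-tree Azure disk provisioner with managed disks.
+package main
+
+import (
+    "fmt"
+
+    storagev1 "k8s.io/api/storage/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    sc := storagev1.StorageClass{
+        ObjectMeta:  metav1.ObjectMeta{Name: "managed-azure-disk"},
+        Provisioner: "kubernetes.io/azure-disk",
+        Parameters: map[string]string{
+            "kind":               "Managed", // replaces the deprecated Shared/Dedicated blob disks
+            "storageaccounttype": "Standard_LRS",
+        },
+    }
+    fmt.Printf("%+v\n", sc)
+}
+```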
+
+## Changes by Kind
+
+### Deprecation
+
+- Kubeadm: deprecate the "kubeadm alpha kubelet config enable-dynamic" command. To continue using the feature, please refer to the guide for "Dynamic Kubelet Configuration" at k8s.io. ([#92881](https://github.com/kubernetes/kubernetes/pull/92881), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+
+### API Change
+
+- Added pod version skew strategy for seccomp profile to synchronize the deprecated annotations with the new API Server fields. Please see the corresponding section [in the KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190717-seccomp-ga.md#version-skew-strategy) for more detailed explanations. ([#91408](https://github.com/kubernetes/kubernetes/pull/91408), [@saschagrunert](https://github.com/saschagrunert)) [SIG Apps, Auth, CLI and Node]
+- Custom Endpoints are now mirrored to EndpointSlices by a new EndpointSliceMirroring controller. ([#91637](https://github.com/kubernetes/kubernetes/pull/91637), [@robscott](https://github.com/robscott)) [SIG API Machinery, Apps, Auth, Cloud Provider, Instrumentation, Network and Testing]
+- Generic ephemeral volumes, a new alpha feature under the `GenericEphemeralVolume` feature gate, provide a more flexible alternative to `EmptyDir` volumes: as with `EmptyDir`, volumes are created and deleted for each pod automatically by Kubernetes. But because the normal provisioning process is used (`PersistentVolumeClaim`), storage can be provided by third-party storage vendors and all of the usual volume features work. Volumes don't need to be empty; for example, restoring from a snapshot is supported. ([#92784](https://github.com/kubernetes/kubernetes/pull/92784), [@pohly](https://github.com/pohly)) [SIG API Machinery, Apps, Auth, CLI, Instrumentation, Node, Scheduling, Storage and Testing]
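+
+To make the new volume shape concrete, here is a hedged Go sketch using the k8s.io/api v0.19.x alpha types; the volume name, label, and size are placeholders, and the exact field layout should be checked against the release's API:
+
+```go
+// A pod volume whose PVC is stamped from an inline template and tied
+// to the pod's lifecycle (GenericEphemeralVolume feature gate).
+package main
+
+import (
+    "fmt"
+
+    corev1 "k8s.io/api/core/v1"
+    "k8s.io/apimachinery/pkg/api/resource"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    scratch := corev1.Volume{
+        Name: "scratch",
+        VolumeSource: corev1.VolumeSource{
+            Ephemeral: &corev1.EphemeralVolumeSource{
+                VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
+                    ObjectMeta: metav1.ObjectMeta{
+                        Labels: map[string]string{"type": "scratch"},
+                    },
+                    Spec: corev1.PersistentVolumeClaimSpec{
+                        AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
+                        Resources: corev1.ResourceRequirements{
+                            Requests: corev1.ResourceList{
+                                corev1.ResourceStorage: resource.MustParse("1Gi"),
+                            },
+                        },
+                    },
+                },
+            },
+        },
+    }
+    fmt.Printf("%+v\n", scratch)
+}
+```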
+
+### Feature
+
+- ACTION REQUIRED: In CoreDNS v1.7.0, [metrics names have been changed](https://github.com/coredns/coredns/blob/master/notes/coredns-1.7.0.md#metric-changes), which is backward incompatible with existing reporting formulas that use the old metrics' names. Adjust your formulas to the new names before upgrading.
+
+ Kubeadm now includes CoreDNS version v1.7.0. Some of the major changes include:
+ - Fixed a bug that could cause CoreDNS to stop updating service records.
+  - Fixed a bug in the forward plugin where only the first upstream server was always selected, regardless of which policy was set.
+  - Removed the already-deprecated options `resyncperiod` and `upstream` from the Kubernetes plugin.
+  - Includes Prometheus metrics name changes (to bring them in line with the standard Prometheus metrics naming convention), which are backward incompatible with existing reporting formulas that use the old metrics' names.
+  - The federation plugin (which allowed for v1 Kubernetes federation) has been removed.
+ More details are available in https://coredns.io/2020/06/15/coredns-1.7.0-release/ ([#92651](https://github.com/kubernetes/kubernetes/pull/92651), [@rajansandeep](https://github.com/rajansandeep)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- Add tags support for Azure File Driver ([#92825](https://github.com/kubernetes/kubernetes/pull/92825), [@ZeroMagic](https://github.com/ZeroMagic)) [SIG Cloud Provider and Storage]
+- Audit events for API requests to deprecated API versions now include a `"k8s.io/deprecated": "true"` audit annotation. If a target removal release is identified, the audit event includes a `"k8s.io/removal-release": "<major>.<minor>"` audit annotation as well (see the sketch after this list). ([#92842](https://github.com/kubernetes/kubernetes/pull/92842), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Instrumentation]
+- The cloud node-controller now uses InstancesV2 ([#91319](https://github.com/kubernetes/kubernetes/pull/91319), [@gongguan](https://github.com/gongguan)) [SIG Apps, Cloud Provider, Scalability and Storage]
+- Kubeadm: deprecate the "--csr-only" and "--csr-dir" flags of the "kubeadm init phase certs" subcommands. Please use "kubeadm alpha certs generate-csr" instead. This new command allows you to generate new private keys and certificate signing requests for all the control-plane components, so that the certificates can be signed by an external CA. ([#92183](https://github.com/kubernetes/kubernetes/pull/92183), [@wallrj](https://github.com/wallrj)) [SIG Cluster Lifecycle]
+- Server-side apply behavior has been regularized in the case where a field is removed from the applied configuration. Removed fields which have no other owners are deleted from the live object, or reset to their default value if they have one. Safe ownership transfers, such as the transfer of a `replicas` field from a user to an HPA without resetting to the default value, are documented in [Transferring Ownership](https://kubernetes.io/docs/reference/using-api/api-concepts/#transferring-ownership) ([#92661](https://github.com/kubernetes/kubernetes/pull/92661), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Testing]
+- Set the CSIMigrationvSphere feature gate to beta.
+  Users should enable the CSIMigration + CSIMigrationvSphere features and install the vSphere CSI Driver (https://github.com/kubernetes-sigs/vsphere-csi-driver) to move workloads from the in-tree vSphere plugin "kubernetes.io/vsphere-volume" to the vSphere CSI Driver.
+
+ Requires: vSphere vCenter/ESXi Version: 7.0u1, HW Version: VM version 15 ([#92816](https://github.com/kubernetes/kubernetes/pull/92816), [@divyenpatel](https://github.com/divyenpatel)) [SIG Cloud Provider and Storage]
+- Support a smooth upgrade from client-side apply to server-side apply without conflicts, as well as support the corresponding downgrade. ([#90187](https://github.com/kubernetes/kubernetes/pull/90187), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG API Machinery and Testing]
+- Trace output in apiserver logs is more organized and comprehensive. Traces are nested, and for all non-long running request endpoints, the entire filter chain is instrumented (e.g. authentication check is included). ([#88936](https://github.com/kubernetes/kubernetes/pull/88936), [@jpbetz](https://github.com/jpbetz)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Scheduling]
+- `kubectl alpha debug` now supports debugging nodes by creating a debugging container running in the node's host namespaces. ([#92310](https://github.com/kubernetes/kubernetes/pull/92310), [@verb](https://github.com/verb)) [SIG CLI]
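+
+As referenced from the audit-annotations entry earlier in this list, here is a hedged standard-library sketch of consuming those annotations from a JSON-lines audit log; the field names follow the audit.k8s.io/v1 Event JSON encoding, while reading from stdin and the one-event-per-line format are assumptions about your audit backend:
+
+```go
+// Scan an audit log (one JSON event per line) for deprecated API usage.
+package main
+
+import (
+    "bufio"
+    "encoding/json"
+    "fmt"
+    "os"
+)
+
+type auditEvent struct {
+    RequestURI  string            `json:"requestURI"`
+    Annotations map[string]string `json:"annotations"`
+}
+
+func main() {
+    scanner := bufio.NewScanner(os.Stdin)
+    for scanner.Scan() {
+        var ev auditEvent
+        if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
+            continue // skip lines that are not audit events
+        }
+        if ev.Annotations["k8s.io/deprecated"] == "true" {
+            fmt.Printf("deprecated API used: %s (removal: %s)\n",
+                ev.RequestURI, ev.Annotations["k8s.io/removal-release"])
+        }
+    }
+}
+```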
+
+### Failing Test
+
+- Kube-proxy iptables min-sync-period defaults to 1 sec. Previously, it was 0. ([#92836](https://github.com/kubernetes/kubernetes/pull/92836), [@aojea](https://github.com/aojea)) [SIG Network]
+
+### Bug or Regression
+
+- Dockershim security: pod sandboxes now always run with `no-new-privileges` and the `runtime/default` seccomp profile.
+  Dockershim seccomp: custom profiles can now be smaller when set at the pod level ([#90948](https://github.com/kubernetes/kubernetes/pull/90948), [@pjbgf](https://github.com/pjbgf)) [SIG Node]
+- Eviction requests for pods that have a non-zero DeletionTimestamp will always succeed ([#91342](https://github.com/kubernetes/kubernetes/pull/91342), [@michaelgugino](https://github.com/michaelgugino)) [SIG Apps]
+- Fix detection of the image filesystem, disk metrics for devicemapper, and detection of OOM kills on 5.0+ Linux kernels. ([#92919](https://github.com/kubernetes/kubernetes/pull/92919), [@dashpole](https://github.com/dashpole)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Node]
+- Fixed memory leak in endpointSliceTracker ([#92838](https://github.com/kubernetes/kubernetes/pull/92838), [@tnqn](https://github.com/tnqn)) [SIG Apps and Network]
+- Kube-aggregator certificates are dynamically loaded on change from disk ([#92791](https://github.com/kubernetes/kubernetes/pull/92791), [@p0lyn0mial](https://github.com/p0lyn0mial)) [SIG API Machinery]
+- Kube-up now includes CoreDNS version v1.7.0. Some of the major changes include:
+ - Fixed a bug that could cause CoreDNS to stop updating service records.
+  - Fixed a bug in the forward plugin where only the first upstream server was always selected, regardless of which policy was set.
+  - Removed the already-deprecated options `resyncperiod` and `upstream` from the Kubernetes plugin.
+  - Includes Prometheus metrics name changes (to bring them in line with the standard Prometheus metrics naming convention), which are backward incompatible with existing reporting formulas that use the old metrics' names.
+  - The federation plugin (which allowed for v1 Kubernetes federation) has been removed.
+ More details are available in https://coredns.io/2020/06/15/coredns-1.7.0-release/ ([#92718](https://github.com/kubernetes/kubernetes/pull/92718), [@rajansandeep](https://github.com/rajansandeep)) [SIG Cloud Provider]
+- The apiserver will no longer proxy non-101 responses for upgrade requests. This could break proxied backends (such as an extension API server) that respond to upgrade requests with a non-101 response code (see the sketch after this list). ([#92941](https://github.com/kubernetes/kubernetes/pull/92941), [@tallclair](https://github.com/tallclair)) [SIG API Machinery]
+- The terminationGracePeriodSeconds from pod spec is respected for the mirror pod. ([#92442](https://github.com/kubernetes/kubernetes/pull/92442), [@tedyu](https://github.com/tedyu)) [SIG Node and Testing]
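+
+As referenced from the upgrade-proxy entry above, the sketch below illustrates the rule in isolation; it is a simplified model of the semantics (the Connection-header check is naive), not the apiserver's actual code:
+
+```go
+// Only a 101 backend response may be forwarded as a protocol upgrade.
+package main
+
+import (
+    "fmt"
+    "net/http"
+)
+
+// isUpgradeRequest reports whether the client asked to switch protocols.
+// Real servers must parse Connection as a token list; this is simplified.
+func isUpgradeRequest(r *http.Request) bool {
+    return r.Header.Get("Connection") == "Upgrade" && r.Header.Get("Upgrade") != ""
+}
+
+// canProxyAsUpgrade applies the new rule: honor the upgrade only when
+// the backend answered 101 Switching Protocols.
+func canProxyAsUpgrade(r *http.Request, backendStatus int) bool {
+    return isUpgradeRequest(r) && backendStatus == http.StatusSwitchingProtocols
+}
+
+func main() {
+    req, _ := http.NewRequest("GET", "https://backend.invalid/exec", nil)
+    req.Header.Set("Connection", "Upgrade")
+    req.Header.Set("Upgrade", "SPDY/3.1")
+    for _, status := range []int{http.StatusSwitchingProtocols, http.StatusOK} {
+        fmt.Printf("backend %d -> proxy as upgrade: %v\n", status, canProxyAsUpgrade(req, status))
+    }
+}
+```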
+
+### Other (Cleanup or Flake)
+
+- `--cache-dir` sets the cache directory for both HTTP and discovery; it defaults to `$HOME/.kube/cache` ([#92910](https://github.com/kubernetes/kubernetes/pull/92910), [@soltysh](https://github.com/soltysh)) [SIG API Machinery and CLI]
+- Fix: license issue in blob disk feature ([#92824](https://github.com/kubernetes/kubernetes/pull/92824), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+
+## Dependencies
+
+### Added
+_Nothing has changed._
+
+### Changed
+- github.com/cilium/ebpf: [9f1617e → 1c8d4c9](https://github.com/cilium/ebpf/compare/9f1617e...1c8d4c9)
+- github.com/coredns/corefile-migration: [v1.0.8 → v1.0.10](https://github.com/coredns/corefile-migration/compare/v1.0.8...v1.0.10)
+- github.com/google/cadvisor: [8450c56 → v0.37.0](https://github.com/google/cadvisor/compare/8450c56...v0.37.0)
+- github.com/json-iterator/go: [v1.1.9 → v1.1.10](https://github.com/json-iterator/go/compare/v1.1.9...v1.1.10)
+- github.com/opencontainers/runc: [1b94395 → 819fcc6](https://github.com/opencontainers/runc/compare/1b94395...819fcc6)
+- github.com/prometheus/client_golang: [v1.6.0 → v1.7.1](https://github.com/prometheus/client_golang/compare/v1.6.0...v1.7.1)
+- github.com/prometheus/common: [v0.9.1 → v0.10.0](https://github.com/prometheus/common/compare/v0.9.1...v0.10.0)
+- github.com/prometheus/procfs: [v0.0.11 → v0.1.3](https://github.com/prometheus/procfs/compare/v0.0.11...v0.1.3)
+- github.com/rubiojr/go-vhd: [0bfd3b3 → 02e2102](https://github.com/rubiojr/go-vhd/compare/0bfd3b3...02e2102)
+- sigs.k8s.io/structured-merge-diff/v3: v3.0.0 → 43c19bb
+
+### Removed
+_Nothing has changed._
+
+
+
+# v1.19.0-beta.2
+
+
+## Downloads for v1.19.0-beta.2
+
+### Source Code
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes.tar.gz) | 806c1734a57dfc1800730fcb25aeb60d50d19d248c0e2a92ede4b6c4565745b4f370d4fd925bef302a96fba89102b7560b8f067240e0f35f6ec6caa29971dea4
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-src.tar.gz) | 507372c6d7ea380ec68ea237141a2b62953a2e1d1d16288f37820b605e33778c5f43ac5a3dedf39f7907d501749916221a8fa4d50be1e5a90b3ce23d36eaa075
+
+### Client binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-darwin-386.tar.gz) | 6d20ca8d37b01213dcb98a1e49d44d414043ce485ae7df9565dfb7914acb1ec42b7aeb0c503b8febc122a8b444c6ed13eec0ff3c88033c6db767e7af5dbbc65d
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-darwin-amd64.tar.gz) | e9caa5463a662869cfc8b9254302641aee9b53fa2119244bd65ef2c66e8c617f7db9b194a672ff80d7bc42256e6560db9fe8a00b2214c0ef023e2d6feed58a3a
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-linux-386.tar.gz) | 48296417fcd2c2f6d01c30dcf66956401ea46455c52a2bbd76feb9b117502ceaa2fb10dae944e087e7038b9fdae5b835497213894760ca01698eb892087490d2
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-linux-amd64.tar.gz) | e2cc7819974316419a8973f0d77050b3262c4e8d078946ff9f6f013d052ec1dd82893313feff6e4493ae0fd3fb62310e6ce4de49ba6e80f8b9979650debf53f2
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-linux-arm.tar.gz) | 484aac48a7a736970ea0766547453b7d37b25ed29fdee771734973e3e080b33f6731eecc458647db962290b512d32546e675e4658287ced3214e87292b98a643
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-linux-arm64.tar.gz) | f793078dc2333825a6679126b279cb0a3415ded8c650478e73c37735c6aa9576b68b2a4165bb77ef475884d50563ea236d8db4c72b2e5552b5418ea06268daae
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-linux-ppc64le.tar.gz) | 4c204b8d3b2717470ee460230b6bdc63725402ad3d24789397934bfe077b94d68041a376864b618e01f541b5bd00d0e63d75aa531a327ab0082c01eb4b9aa5ee
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-linux-s390x.tar.gz) | d0f6e4ddbf122ebcb4c5a980d5f8e33a23213cb438983341870f288afd17e73ec42f0ded55a3a9622c57700e68999228508d449ca206aca85f3254f7622375db
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-windows-386.tar.gz) | a615a7821bba1f8e4115b7981347ed94a79947c78d32c692cd600e21e0de29fedfc4a39dc08ca516f2f35261cf4a6d6ce557008f034e0e1d311fa9e75478ec0c
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-client-windows-amd64.tar.gz) | 34046130c5ebb3afe17e6e3cf88229b8d3281a9ac9c28dece1fd2d49a11b7be011700b74d9b8111dee7d0943e5ebfa208185bae095c2571aa54e0f9201e2cddd
+
+### Server binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-server-linux-amd64.tar.gz) | c922058ce9c665e329d3d4647aac5d2dd22d9a8af63a21e6af98943dfd14f2b90268c53876f42a64093b96499ee1109803868c9aead4c15fd8db4b1bbec58fd9
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-server-linux-arm.tar.gz) | 4f17489b946dc04570bfab87015f2c2401b139b9ee745ed659bc94ccd116f3f23e249f83e19aaa418aa980874fffb478b1ec7340aa25292af758c9eabd4c2022
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-server-linux-arm64.tar.gz) | 69e44a63d15962de95a484e311130d415ebfec16a9da54989afc53a835c5b67de20911d71485950d07259a0f8286a299f4d74f90c73530e905da8dc60e391597
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-server-linux-ppc64le.tar.gz) | 66b30ebad7a8226304150aa42a1bd660a0b3975fecbfd8dbbea3092936454d9f81c8083841cc67c6645ab771383b66c7f980dd65319803078c91436c55d5217a
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-server-linux-s390x.tar.gz) | 0e197280f99654ec9e18ea01a9fc848449213ce28521943bc5d593dd2cac65310b6a918f611ea283b3a0377347eb718e99dd59224b8fad8adb223d483fa9fecb
+
+### Node binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-node-linux-amd64.tar.gz) | f40afee38155c5163ba92e3fa3973263ca975f3b72ac18535799fb29180413542ef86f09c87681161affeef94eb0bd38e7cf571a73ab0f51a88420f1aedeaeec
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-node-linux-arm.tar.gz) | 6088b11767b77f0ec932a9f1aee9f0c7795c3627529f259edf4d8b1be2e1a324a75c89caed65c6aa277c2fd6ee23b3ebeb05901f351cd2dde0a833bbbd6d6d07
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-node-linux-arm64.tar.gz) | e790c491d057721b94d0d2ad22dd5c75400e8602e95276471f20cd2181f52c5be38e66b445d8360e1fb671627217eb0b7735b485715844d0e9908cf3de249464
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-node-linux-ppc64le.tar.gz) | 04f696cfab66f92b4b22c23807a49c344d6a157a9ac3284a267613369b7f9f5887f67902cb8a2949caa204f89fdc65fe442a03c2c454013523f81b56476d39a0
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-node-linux-s390x.tar.gz) | c671e20f69f70ec567fb16bbed2fecac3099998a3365def1e0755e41509531fd65768f7a04015b27b17e6a5884e65cddb82ff30a8374ed011c5e2008817259db
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.2/kubernetes-node-windows-amd64.tar.gz) | 23d712fb2d455b5095d31b9c280d92442f7871786808528a1b39b9babf169dc7ae467f1ee2b2820089d69aa2342441d0290edf4f710808c78277e612f870321d
+
+## Changelog since v1.19.0-beta.1
+
+## Changes by Kind
+
+### Deprecation
+
+- Kubeadm: remove the deprecated "--use-api" flag for "kubeadm alpha certs renew" ([#90143](https://github.com/kubernetes/kubernetes/pull/90143), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
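+
+  Certificate renewal keeps working without the removed flag; a minimal example, run on a control-plane node:
+
+  ```shell
+  # Renew all control-plane certificates locally (no API-based renewal)
+  kubeadm alpha certs renew all
+  ```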
+- Scheduler's alpha feature 'ResourceLimitsPriorityFunction' is completely removed due to lack of usage ([#91883](https://github.com/kubernetes/kubernetes/pull/91883), [@SataQiu](https://github.com/SataQiu)) [SIG Scheduling and Testing]
+
+### API Change
+
+- Remove `BindTimeoutSeconds` from the scheduler configuration `KubeSchedulerConfiguration` ([#91580](https://github.com/kubernetes/kubernetes/pull/91580), [@cofyc](https://github.com/cofyc)) [SIG Scheduling and Testing]
+- Resolve regression in metadata.managedFields handling in update/patch requests submitted by older API clients ([#91748](https://github.com/kubernetes/kubernetes/pull/91748), [@apelisse](https://github.com/apelisse)) [SIG API Machinery and Testing]
+- The CertificateSigningRequest API is promoted to certificates.k8s.io/v1 with the following changes:
+ - `spec.signerName` is now required, and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API
+ - `spec.usages` is now required, may not contain duplicate values, and must only contain known usages
+ - `status.conditions` may not contain duplicate types
+ - `status.conditions[*].status` is now required
+ - `status.certificate` must be PEM-encoded, and contain only CERTIFICATE blocks ([#91685](https://github.com/kubernetes/kubernetes/pull/91685), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Architecture, Auth, CLI and Testing]
+- The Kubelet's `--cloud-provider` and `--cloud-config` options are now marked as deprecated. ([#90408](https://github.com/kubernetes/kubernetes/pull/90408), [@knabben](https://github.com/knabben)) [SIG Cloud Provider and Node]
+
+### Feature
+
+- A new extension point, `PostFilter`, is introduced to the scheduler framework; it runs after the Filter phase to resolve scheduling filter failures. A typical implementation is running preemption logic. ([#91314](https://github.com/kubernetes/kubernetes/pull/91314), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling and Testing]
+- Added --privileged flag to kubectl run ([#90569](https://github.com/kubernetes/kubernetes/pull/90569), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
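+
+  A minimal sketch (pod name, image, and command are placeholders):
+
+  ```shell
+  # Run a one-off privileged pod
+  kubectl run debug --image=busybox --restart=Never --privileged -- sleep 3600
+  ```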
+- Enable the `DefaultPodTopologySpread` feature gate to use the `PodTopologySpread` plugin for default spreading. In doing so, the legacy `DefaultPodTopologySpread` plugin is disabled. ([#91793](https://github.com/kubernetes/kubernetes/pull/91793), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Extend AWS azToRegion method to support Local Zones ([#90874](https://github.com/kubernetes/kubernetes/pull/90874), [@Jeffwan](https://github.com/Jeffwan)) [SIG Cloud Provider]
+- Kube-Proxy now supports IPv6DualStack on Windows with the IPv6DualStack feature gate. ([#90853](https://github.com/kubernetes/kubernetes/pull/90853), [@kumarvin123](https://github.com/kumarvin123)) [SIG Network, Node and Windows]
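+
+  Assuming the cluster is otherwise configured for dual-stack, the gate would be enabled on the Windows node like so:
+
+  ```shell
+  # Feature gate must be enabled explicitly while IPv6DualStack is alpha
+  kube-proxy --feature-gates=IPv6DualStack=true
+  ```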
+- Kube-controller-manager: the `--experimental-cluster-signing-duration` flag is marked as deprecated for removal in v1.22, and is replaced with `--cluster-signing-duration`. ([#91154](https://github.com/kubernetes/kubernetes/pull/91154), [@liggitt](https://github.com/liggitt)) [SIG Auth and Cloud Provider]
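+
+  For example, migrating to the replacement flag (the duration value is illustrative):
+
+  ```shell
+  # One year signing duration, using the non-deprecated flag name
+  kube-controller-manager --cluster-signing-duration=8760h
+  ```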
+- Support kubectl create deployment with replicas ([#91562](https://github.com/kubernetes/kubernetes/pull/91562), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
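+
+  For example (deployment name, image, and replica count are placeholders):
+
+  ```shell
+  # Create a deployment with three replicas in one step
+  kubectl create deployment web --image=nginx --replicas=3
+  ```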
+- The RotateKubeletClientCertificate feature gate has been promoted to GA, and the kubelet --feature-gate RotateKubeletClientCertificate parameter will be removed in 1.20. ([#91780](https://github.com/kubernetes/kubernetes/pull/91780), [@liggitt](https://github.com/liggitt)) [SIG Auth and Node]
+- The metric label name of `kubernetes_build_info` has been updated from camelCase to snake_case:
+ - gitVersion --> git_version
+ - gitCommit --> git_commit
+ - gitTreeState --> git_tree_state
+ - buildDate --> build_date
+ - goVersion --> go_version
+
+  This change happens in `kube-apiserver`, `kube-scheduler`, `kube-proxy` and `kube-controller-manager`. ([#91805](https://github.com/kubernetes/kubernetes/pull/91805), [@RainbowMango](https://github.com/RainbowMango)) [SIG API Machinery, Cluster Lifecycle and Instrumentation]
+- `EventRecorder()` is exposed to `FrameworkHandle` interface so that scheduler plugin developers can choose to log cluster-level events. ([#92010](https://github.com/kubernetes/kubernetes/pull/92010), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+
+### Bug or Regression
+
+- Azure: set dest prefix and port for IPv6 inbound security rule ([#91831](https://github.com/kubernetes/kubernetes/pull/91831), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- Fix etcd version migration script in etcd image. ([#91925](https://github.com/kubernetes/kubernetes/pull/91925), [@wenjiaswe](https://github.com/wenjiaswe)) [SIG API Machinery]
+- Fix issues when the set of supported huge page sizes changes ([#80831](https://github.com/kubernetes/kubernetes/pull/80831), [@odinuge](https://github.com/odinuge)) [SIG Node and Testing]
+- Fix kubectl describe output format for empty annotations. ([#91405](https://github.com/kubernetes/kubernetes/pull/91405), [@iyashu](https://github.com/iyashu)) [SIG CLI]
+- Fixed an issue that a Pod's nominatedNodeName cannot be cleared upon node deletion. ([#91750](https://github.com/kubernetes/kubernetes/pull/91750), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling and Testing]
+- Fixed several bugs involving the IPFamily field when creating or updating services
+ in clusters with the IPv6DualStack feature gate enabled.
+
+ Beware that the behavior of the IPFamily field is strange and inconsistent and will
+ likely be changed before the dual-stack feature goes GA. Users should treat the
+ field as "write-only" for now and should not make any assumptions about a service
+ based on its current IPFamily value. ([#91400](https://github.com/kubernetes/kubernetes/pull/91400), [@danwinship](https://github.com/danwinship)) [SIG Apps and Network]
+- Kube-apiserver: fixes scale subresource patch handling to avoid returning unnecessary 409 Conflict error to clients ([#90342](https://github.com/kubernetes/kubernetes/pull/90342), [@liggitt](https://github.com/liggitt)) [SIG Apps, Autoscaling and Testing]
+- Kube-up: fixes setup of validating admission webhook credential configuration ([#91995](https://github.com/kubernetes/kubernetes/pull/91995), [@liggitt](https://github.com/liggitt)) [SIG Cloud Provider and Cluster Lifecycle]
+- Kubeadm: add a retry loop to the UpdateStatus operation during "kubeadm join", making status updates more resilient ([#91952](https://github.com/kubernetes/kubernetes/pull/91952), [@xlgao-zju](https://github.com/xlgao-zju)) [SIG Cluster Lifecycle]
+- On AWS nodes with multiple network interfaces, kubelet should now more reliably report addresses from secondary interfaces. ([#91889](https://github.com/kubernetes/kubernetes/pull/91889), [@anguslees](https://github.com/anguslees)) [SIG Cloud Provider]
+- Resolve regression in metadata.managedFields handling in create/update/patch requests not using server-side apply ([#91690](https://github.com/kubernetes/kubernetes/pull/91690), [@apelisse](https://github.com/apelisse)) [SIG API Machinery and Testing]
+
+### Other (Cleanup or Flake)
+
+- Deprecate the `--target-ram-mb` flag, which is no longer used for anything. ([#91818](https://github.com/kubernetes/kubernetes/pull/91818), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery]
+- Replace framework.Failf with ExpectNoError ([#91811](https://github.com/kubernetes/kubernetes/pull/91811), [@lixiaobing1](https://github.com/lixiaobing1)) [SIG Instrumentation, Storage and Testing]
+- The Kubelet's `--experimental-allocatable-ignore-eviction` option is now marked as deprecated. ([#91578](https://github.com/kubernetes/kubernetes/pull/91578), [@knabben](https://github.com/knabben)) [SIG Node]
+- Update corefile-migration library to 1.0.8 ([#91856](https://github.com/kubernetes/kubernetes/pull/91856), [@wawa0210](https://github.com/wawa0210)) [SIG Node]
+
+## Dependencies
+
+### Added
+_Nothing has changed._
+
+### Changed
+- github.com/Azure/azure-sdk-for-go: [v40.2.0+incompatible → v43.0.0+incompatible](https://github.com/Azure/azure-sdk-for-go/compare/v40.2.0...v43.0.0)
+- github.com/coredns/corefile-migration: [v1.0.6 → v1.0.8](https://github.com/coredns/corefile-migration/compare/v1.0.6...v1.0.8)
+- k8s.io/klog/v2: v2.0.0 → v2.1.0
+
+### Removed
+_Nothing has changed._
+
+
+
+# v1.19.0-beta.1
+
+
+## Downloads for v1.19.0-beta.1
+
+### Source Code
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes.tar.gz) | c4ab79e987790fbda842310525abecee60861e44374c414159e60d74e85b4dd36d9d49253b8e7f08aec36a031726f9517d0a401fb748e41835ae2dc86aee069d
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-src.tar.gz) | 08d1aadb8a31b35f3bc39f44d8f97b7e98951f833bb87f485f318c6acfdb53539851fbb2d4565036e00b6f620c5b1882c6f9620759c3b36833da1d6b2b0610f2
+
+### Client binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-darwin-386.tar.gz) | 55eb230fdb4e60ded6c456ec6e03363c6d55e145a956aa5eff0c2b38d8ecfe848b4a404169def45d392e747e4d04ee71fe3182ab1e6426110901ccfb2e1bc17f
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-darwin-amd64.tar.gz) | ddc03644081928bd352c40077f2a075961c90a7159964be072b3e05ec170a17d6d78182d90210c18d24d61e75b45eae3d1b1486626db9e28f692dfb33196615c
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-linux-386.tar.gz) | 6e1e00a53289bd9a4d74a61fce4665786051aafe8fef8d1d42de88ba987911bfb7fd5f4a2c3771ae830819546cf9f4badd94fd90c50ca74367c1ace748e8eafd
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-linux-amd64.tar.gz) | 2c4db87c61bc4a528eb2af9246648fc7a015741fe52f551951fda786c252eca1dc48a4325be70e6f80f1560f773b763242334ad4fe06657af290e610f10bc231
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-linux-arm.tar.gz) | 8a2bebf67cbd8f91ba38edc36a239aa50d3e58187827763eb5778a5ca0d9d35be97e193b794bff415e8f5de071e47659033dc0420e038d78cc32e841a417a62a
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-linux-arm64.tar.gz) | f2d0029efc03bf17554c01c11e77b161b8956d9da4b17962ca878378169cbdee04722bbda87279f4b7431c91db0e92bfede45dcc6d971f34d3fe891339b7c47b
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-linux-ppc64le.tar.gz) | 45eb3fe40951ba152f05aa0fe41b7c17ffb91ee3cecb12ec19d2d9cdb467267c1eb5696660687852da314eb8a14a9ebf5f5da21eca252e1c2e3b18dca151ad0d
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-linux-s390x.tar.gz) | 2097ac5d593dd0951a34df9bdf7883b5c228da262042904ee3a2ccfd1f9c955ff6a3a59961850053e41646bce8fc70a023efe9e9fe49f14f9a6276c8da22f907
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-windows-386.tar.gz) | c38b034e8ac3a5972a01f36b184fe1a195f6a422a3c6564f1f3faff858b1220173b6ab934e7b7ec200931fd7d9456e947572620d82d02e7b05fc61a7fb67ec70
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-client-windows-amd64.tar.gz) | 0501694734381914882836e067dc177e8bccd48a4826e286017dc5f858f27cdef348edbb664dda59162f6cd3ac14a9e491e314a3ea032dec43bc77610ce8c8bc
+
+### Server binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-server-linux-amd64.tar.gz) | 0dd2058889eabbf0b05b6fafd593997ff9911467f0fc567c142583adf0474f4d0e2f4024b4906ff9ee4264d1cbbfde66596ccb8c73b3d5bb79f67e5eb4b3258a
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-server-linux-arm.tar.gz) | 9c3a33d7c198116386178a4f8ee7d4df82e810d6f26833f19f93eff112c29f9f89e5ee790013ad1d497856ecb2662ee95a49fc6a41f0d33cc67e431d06135b88
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-server-linux-arm64.tar.gz) | 11f83132f729bec4a4d84fc1983dbd5ddd1643d000dc74c6e05f35637de21533834a572692fc1281c7b0bd29ee93e721fb00e276983e36c327a1950266b17f6d
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-server-linux-ppc64le.tar.gz) | 949334065d968f10207089db6175dcc4bf9432b3b48b120f689cd39c56562a0f4f60d774c95a20a5391d0467140a4c3cb6b2a2dfedccfda6c20f333a63ebcf81
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-server-linux-s390x.tar.gz) | 29e8f6a22969d8ab99bf6d272215f53d8f7a125d0c5c20981dcfe960ed440369f831c71a94bb61974b486421e4e9ed936a9421a1be6f02a40e456daab4995663
+
+### Node binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-node-linux-amd64.tar.gz) | 3d9767e97a40b501f29bbfc652c8fd841eae1dee22a97fdc20115e670081de7fa8e84f6e1be7bbf2376b59c5eef15fb5291415ae2e24ce4c9c5e141faa38c47c
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-node-linux-arm.tar.gz) | 8ccf401e0bd0c59403af49046b49cf556ff164fca12c5233169a80e18cc4367f404fd7edd236bb862bff9fd25b687d48a8d57d5567809b89fd2727549d0dc48f
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-node-linux-arm64.tar.gz) | 3e1fa2bde05a4baec6ddd43cd1994d155a143b9c825ab5dafe766efc305cb1aad92d6026c41c05e9da114a04226361fb6b0510b98e3b05c3ed510da23db403b3
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-node-linux-ppc64le.tar.gz) | 01df4be687f5634afa0ab5ef06f8cee17079264aa452f00a45eccb8ace654c9acc6582f4c74e8242e6ca7715bc48bf2a7d2c4d3d1eef69106f99c8208bc245c4
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-node-linux-s390x.tar.gz) | 5523b0b53c30b478b1a9e1df991607886acdcde8605e1b44ef91c94993ca2256c74f6e38fbdd24918d7dbf7afd5cd73d24a3f7ff911e9762819776cc19935363
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.1/kubernetes-node-windows-amd64.tar.gz) | 8e7ebf000bc8dec1079a775576807c0a11764d20a59e16f89d93c948532ba5e6864efd3e08c3e8cc5bd7e7f97bb65baefbf2f01cb226897abd5e01997a4c4f75
+
+## Changelog since v1.19.0-alpha.3
+
+## Urgent Upgrade Notes
+
+### (No, really, you MUST read this before you upgrade)
+
+ - ACTION REQUIRED: Switch core master base images (kube-controller-manager) from debian to distroless. If you need Flex Volume support using scripts, please build your own image with the required packages (like bash) ([#91329](https://github.com/kubernetes/kubernetes/pull/91329), [@dims](https://github.com/dims)) [SIG Cloud Provider, Release, Storage and Testing]
+ - Kubeadm: Move the "kubeadm init" phase "kubelet-start" later in the init workflow, after the "kubeconfig" phase. This makes kubeadm start the kubelet only after the KubeletConfiguration component config file (/var/lib/kubelet/config.yaml) is generated and solves a problem where init systems like OpenRC cannot crashloop the kubelet service. ([#90892](https://github.com/kubernetes/kubernetes/pull/90892), [@xphoniex](https://github.com/xphoniex)) [SIG Cluster Lifecycle]
+
+## Changes by Kind
+
+### API Change
+
+- CertificateSigningRequest API conditions were updated:
+ - a `status` field was added; this field defaults to `True`, and may only be set to `True` for `Approved`, `Denied`, and `Failed` conditions
+ - a `lastTransitionTime` field was added
+ - a `Failed` condition type was added to allow signers to indicate permanent failure; this condition can be added via the `certificatesigningrequests/status` subresource.
+ - `Approved` and `Denied` conditions are mutually exclusive
+ - `Approved`, `Denied`, and `Failed` conditions can no longer be removed from a CSR ([#90191](https://github.com/kubernetes/kubernetes/pull/90191), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Apps, Auth, CLI and Node]
+- EnvVarSource api doc bug fixes ([#91194](https://github.com/kubernetes/kubernetes/pull/91194), [@wawa0210](https://github.com/wawa0210)) [SIG Apps]
+- Fixed: log timestamps now include trailing zeros to maintain a fixed width ([#91207](https://github.com/kubernetes/kubernetes/pull/91207), [@iamchuckss](https://github.com/iamchuckss)) [SIG Apps and Node]
+- The Kubelet's `--node-status-max-images` option is now available via the Kubelet config file field `nodeStatusMaxImages`. ([#91275](https://github.com/kubernetes/kubernetes/pull/91275), [@knabben](https://github.com/knabben)) [SIG Node]
+- The Kubelet's `--seccomp-profile-root` option is now available via the Kubelet config file field `seccompProfileRoot`. ([#91182](https://github.com/kubernetes/kubernetes/pull/91182), [@knabben](https://github.com/knabben)) [SIG Node]
+- The Kubelet's `--enable-server` and `--provider-id` options are now available via the Kubelet config file fields `enableServer` and `providerID`, respectively. ([#90494](https://github.com/kubernetes/kubernetes/pull/90494), [@knabben](https://github.com/knabben)) [SIG Node]
+- The Kubelet's `--really-crash-for-testing` and `--chaos-chance` options are now marked as deprecated. ([#90499](https://github.com/kubernetes/kubernetes/pull/90499), [@knabben](https://github.com/knabben)) [SIG Node]
+- The alpha `DynamicAuditing` feature gate and `auditregistration.k8s.io/v1alpha1` API have been removed and are no longer supported. ([#91502](https://github.com/kubernetes/kubernetes/pull/91502), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Auth and Testing]
+- `NodeResourcesLeastAllocated` and `NodeResourcesMostAllocated` plugins now support customized weight on the CPU and memory. ([#90544](https://github.com/kubernetes/kubernetes/pull/90544), [@chendave](https://github.com/chendave)) [SIG Scheduling]
+- `PostFilter` type is added to scheduler component config API on version v1beta1. ([#91547](https://github.com/kubernetes/kubernetes/pull/91547), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+- `kubescheduler.config.k8s.io` is now beta ([#91420](https://github.com/kubernetes/kubernetes/pull/91420), [@pancernik](https://github.com/pancernik)) [SIG Scheduling]
+
+### Feature
+
+- Add --logging-format flag for component-base. Defaults to "text" using unchanged klog. ([#89683](https://github.com/kubernetes/kubernetes/pull/89683), [@yuzhiquan](https://github.com/yuzhiquan)) [SIG Instrumentation]
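+
+  A sketch, assuming the component wires in the component-base logging flags (the kubelet is one example):
+
+  ```shell
+  # "text" is the default; structured formats can be selected where supported
+  kubelet --logging-format=text
+  ```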
+- Add --port flag to kubectl create deployment ([#91113](https://github.com/kubernetes/kubernetes/pull/91113), [@soltysh](https://github.com/soltysh)) [SIG CLI and Testing]
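+
+  For example (name, image, and port are placeholders):
+
+  ```shell
+  # Expose container port 8080 in the generated pod template
+  kubectl create deployment api --image=nginx --port=8080
+  ```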
+- Add .import-restrictions file to cmd/cloud-controller-manager. ([#90630](https://github.com/kubernetes/kubernetes/pull/90630), [@nilo19](https://github.com/nilo19)) [SIG API Machinery and Cloud Provider]
+- Add Annotations to CRI-API ImageSpec objects. ([#90061](https://github.com/kubernetes/kubernetes/pull/90061), [@marosset](https://github.com/marosset)) [SIG Node and Windows]
+- Added support on Windows for configuring session affinity of Kubernetes services.
+  Requires: [Windows Server vNext Insider Preview Build 19551](https://blogs.windows.com/windowsexperience/2020/01/28/announcing-windows-server-vnext-insider-preview-build-19551/) (or higher) ([#91701](https://github.com/kubernetes/kubernetes/pull/91701), [@elweb9858](https://github.com/elweb9858)) [SIG Network and Windows]
+- Added service.beta.kubernetes.io/aws-load-balancer-target-node-labels annotation to target nodes in AWS LoadBalancer Services ([#90943](https://github.com/kubernetes/kubernetes/pull/90943), [@foobarfran](https://github.com/foobarfran)) [SIG Cloud Provider]
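+
+  A sketch of applying the annotation; the label-selector value format shown here is an assumption:
+
+  ```shell
+  # Restrict the load balancer to nodes carrying a given label (value format assumed)
+  kubectl annotate service my-lb \
+    service.beta.kubernetes.io/aws-load-balancer-target-node-labels="node-role.kubernetes.io/worker=true"
+  ```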
+- Feat: azure disk migration goes beta in 1.19 ([#90896](https://github.com/kubernetes/kubernetes/pull/90896), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Kube-addon-manager has been updated to v9.1.1 to allow overriding the default list of whitelisted resources (https://github.com/kubernetes/kubernetes/pull/91018) ([#91240](https://github.com/kubernetes/kubernetes/pull/91240), [@tosi3k](https://github.com/tosi3k)) [SIG Cloud Provider, Scalability and Testing]
+- Kubeadm now distinguishes between generated and user supplied component configs, regenerating the former ones if a config upgrade is required ([#86070](https://github.com/kubernetes/kubernetes/pull/86070), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+- Kubeadm: add startup probes for static Pods to protect slow starting containers ([#91179](https://github.com/kubernetes/kubernetes/pull/91179), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubelets configured to rotate client certificates now publish a `certificate_manager_server_ttl_seconds` gauge metric indicating the remaining seconds until certificate expiration. ([#91148](https://github.com/kubernetes/kubernetes/pull/91148), [@liggitt](https://github.com/liggitt)) [SIG Auth and Node]
+- Local-up-cluster.sh now installs the CSI snapshotter by default; this can be disabled with `ENABLE_CSI_SNAPSHOTTER=false`. ([#91504](https://github.com/kubernetes/kubernetes/pull/91504), [@pohly](https://github.com/pohly)) [SIG Storage]
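+
+  For example, to bring up a local cluster without the snapshotter:
+
+  ```shell
+  # Opt out of the CSI snapshotter during local cluster bring-up
+  ENABLE_CSI_SNAPSHOTTER=false ./hack/local-up-cluster.sh
+  ```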
+- Rest.Config now supports a field to override proxy configuration that was previously only configurable through environment variables. ([#81443](https://github.com/kubernetes/kubernetes/pull/81443), [@mikedanese](https://github.com/mikedanese)) [SIG API Machinery and Node]
+- Scores from PodTopologySpreading have reduced differentiation as maxSkew increases. ([#90820](https://github.com/kubernetes/kubernetes/pull/90820), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Service controller: only sync LB node pools when relevant fields in Node changes ([#90769](https://github.com/kubernetes/kubernetes/pull/90769), [@andrewsykim](https://github.com/andrewsykim)) [SIG Apps and Network]
+- Switch core master base images (kube-apiserver, kube-scheduler) from debian to distroless ([#90674](https://github.com/kubernetes/kubernetes/pull/90674), [@dims](https://github.com/dims)) [SIG Cloud Provider, Release and Scalability]
+- Switch etcd image (with migration scripts) from debian to distroless ([#91171](https://github.com/kubernetes/kubernetes/pull/91171), [@dims](https://github.com/dims)) [SIG API Machinery and Cloud Provider]
+- The `certificatesigningrequests/approval` subresource now supports patch API requests ([#91558](https://github.com/kubernetes/kubernetes/pull/91558), [@liggitt](https://github.com/liggitt)) [SIG Auth and Testing]
+- Update cri-tools to v1.18.0 ([#89720](https://github.com/kubernetes/kubernetes/pull/89720), [@saschagrunert](https://github.com/saschagrunert)) [SIG Cloud Provider, Cluster Lifecycle, Release and Scalability]
+- Weight of PodTopologySpread scheduling Score is doubled. ([#91258](https://github.com/kubernetes/kubernetes/pull/91258), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- `maxThreshold` of `ImageLocality` plugin is now scaled by the number of images in the pod, which helps to distinguish the node priorities for pod with several images. ([#91138](https://github.com/kubernetes/kubernetes/pull/91138), [@chendave](https://github.com/chendave)) [SIG Scheduling]
+
+### Bug or Regression
+
+- Add support for TLS 1.3 ciphers: TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256 and TLS_AES_256_GCM_SHA384. ([#90843](https://github.com/kubernetes/kubernetes/pull/90843), [@pjbgf](https://github.com/pjbgf)) [SIG API Machinery, Auth and Cluster Lifecycle]
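+
+  These suites can now be included in the cipher allow list; shown here for kube-apiserver as an illustration:
+
+  ```shell
+  kube-apiserver --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_CHACHA20_POLY1305_SHA256,TLS_AES_256_GCM_SHA384
+  ```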
+- Base-images: Update to kube-cross:v1.13.9-5 ([#90963](https://github.com/kubernetes/kubernetes/pull/90963), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- CloudNodeLifecycleController will check node existence status before shutdown status when monitoring nodes. ([#90737](https://github.com/kubernetes/kubernetes/pull/90737), [@jiahuif](https://github.com/jiahuif)) [SIG Apps and Cloud Provider]
+- First pod with required affinity terms can schedule only on nodes with matching topology keys. ([#91168](https://github.com/kubernetes/kubernetes/pull/91168), [@ahg-g](https://github.com/ahg-g)) [SIG Scheduling]
+- Fix VirtualMachineScaleSets.virtualMachines.GET not allowed issues when customers have set VMSS orchestrationMode. ([#91097](https://github.com/kubernetes/kubernetes/pull/91097), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fix a race condition where the scheduler could perform an unnecessary scheduling attempt. ([#90660](https://github.com/kubernetes/kubernetes/pull/90660), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling and Testing]
+- Fix `kubectl create --dry-run=client` ignoring the specified namespace ([#90502](https://github.com/kubernetes/kubernetes/pull/90502), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix `kubectl create secret docker-registry --from-file` being unusable ([#90960](https://github.com/kubernetes/kubernetes/pull/90960), [@zhouya0](https://github.com/zhouya0)) [SIG CLI and Testing]
+- Fix kubectl describe node for users not having access to lease information. ([#90469](https://github.com/kubernetes/kubernetes/pull/90469), [@uthark](https://github.com/uthark)) [SIG CLI]
+- Fix `kubectl run --dry-run=client` ignoring the specified namespace ([#90785](https://github.com/kubernetes/kubernetes/pull/90785), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix public IP not shown issues after assigning public IP to Azure VMs ([#90886](https://github.com/kubernetes/kubernetes/pull/90886), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fix: add azure file migration support via annotation ([#91093](https://github.com/kubernetes/kubernetes/pull/91093), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Node]
+- Fix: azure disk dangling attach issue which would cause API throttling ([#90749](https://github.com/kubernetes/kubernetes/pull/90749), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: topology issue in azure disk storage class migration ([#91196](https://github.com/kubernetes/kubernetes/pull/91196), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: support removal of nodes backed by deleted non VMSS instances on Azure ([#91184](https://github.com/kubernetes/kubernetes/pull/91184), [@bpineau](https://github.com/bpineau)) [SIG Cloud Provider]
+- Fixed a regression preventing garbage collection of RBAC role and binding objects ([#90534](https://github.com/kubernetes/kubernetes/pull/90534), [@apelisse](https://github.com/apelisse)) [SIG Auth]
+- For external storage e2e test suite, update external driver, to pick snapshot provisioner from VolumeSnapshotClass, when a VolumeSnapshotClass is explicitly provided as an input. ([#90878](https://github.com/kubernetes/kubernetes/pull/90878), [@saikat-royc](https://github.com/saikat-royc)) [SIG Storage and Testing]
+- Get-kube.sh: fix order to get the binaries from the right bucket ([#91635](https://github.com/kubernetes/kubernetes/pull/91635), [@cpanato](https://github.com/cpanato)) [SIG Release]
+- In an HA environment, if a Pod was deleted and recreated while a standby scheduler had lost its connection to the API server, and that standby later became the leader, the scheduler cache could become corrupted. This fixes that issue. ([#91126](https://github.com/kubernetes/kubernetes/pull/91126), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+- Kubeadm: during "reset" do not remove the only remaining stacked etcd member from the cluster and just proceed with the cleanup of the local etcd storage. ([#91145](https://github.com/kubernetes/kubernetes/pull/91145), [@tnqn](https://github.com/tnqn)) [SIG Cluster Lifecycle]
+- Kubeadm: increase robustness for "kubeadm join" when adding etcd members on slower setups ([#90645](https://github.com/kubernetes/kubernetes/pull/90645), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Pod Conditions updates are skipped for re-scheduling attempts ([#91252](https://github.com/kubernetes/kubernetes/pull/91252), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Prevent PVC requested size overflow when expanding or creating a volume ([#90907](https://github.com/kubernetes/kubernetes/pull/90907), [@gnufied](https://github.com/gnufied)) [SIG Cloud Provider and Storage]
+- Resolves an issue using `kubectl certificate approve/deny` against a server serving the v1 CSR API ([#91691](https://github.com/kubernetes/kubernetes/pull/91691), [@liggitt](https://github.com/liggitt)) [SIG Auth and CLI]
+- Scheduling failures due to no nodes available are now reported as unschedulable under the `schedule_attempts_total` metric. ([#90989](https://github.com/kubernetes/kubernetes/pull/90989), [@ahg-g](https://github.com/ahg-g)) [SIG Scheduling]
+- The following components that do not expect non-empty, non-flag arguments will now print an error message and exit if an argument is specified: cloud-controller-manager, kube-apiserver, kube-controller-manager, kube-proxy, kubeadm {alpha|config|token|version}, kubemark. Flags should be prefixed with a single dash "-" (0x2D) for short form or a double dash "--" for long form. Before this change, malformed flags (for example, starting with a non-ASCII dash character such as U+2013: "–") would have been silently treated as positional arguments and ignored. ([#91349](https://github.com/kubernetes/kubernetes/pull/91349), [@neolit123](https://github.com/neolit123)) [SIG API Machinery, Cloud Provider, Cluster Lifecycle, Network and Scheduling]
+- When evicting, Pods in Pending state are removed without checking PDBs. ([#83906](https://github.com/kubernetes/kubernetes/pull/83906), [@michaelgugino](https://github.com/michaelgugino)) [SIG API Machinery, Apps, Node and Scheduling]
+
+### Other (Cleanup or Flake)
+
+- Adds additional testing to ensure that conntrack entries for UDP pods are cleaned up ([#90180](https://github.com/kubernetes/kubernetes/pull/90180), [@JacobTanenbaum](https://github.com/JacobTanenbaum)) [SIG Architecture, Network and Testing]
+- Adjusts the fsType for Cinder volumes to `ext4` if no fsType is specified. ([#90608](https://github.com/kubernetes/kubernetes/pull/90608), [@huffmanca](https://github.com/huffmanca)) [SIG Storage]
+- Change beta.kubernetes.io/os to kubernetes.io/os ([#89461](https://github.com/kubernetes/kubernetes/pull/89461), [@wawa0210](https://github.com/wawa0210)) [SIG Cloud Provider and Cluster Lifecycle]
+- Content-type and verb for request metrics are now bounded to a known set. ([#89451](https://github.com/kubernetes/kubernetes/pull/89451), [@logicalhan](https://github.com/logicalhan)) [SIG API Machinery and Instrumentation]
+- Emit `WaitingForPodScheduled` event if the unbound PVC is in delay binding mode but used by a pod ([#91455](https://github.com/kubernetes/kubernetes/pull/91455), [@cofyc](https://github.com/cofyc)) [SIG Storage]
+- Improve server-side apply conflict errors by setting dedicated kubectl subcommand field managers ([#88885](https://github.com/kubernetes/kubernetes/pull/88885), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- It is now possible to use the service annotation `cloud.google.com/network-tier: Standard` to configure the Network Tier of the GCE Loadbalancer ([#88532](https://github.com/kubernetes/kubernetes/pull/88532), [@zioproto](https://github.com/zioproto)) [SIG Cloud Provider, Network and Testing]
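+
+  For example, annotating an existing LoadBalancer Service (the service name is a placeholder):
+
+  ```shell
+  kubectl annotate service my-lb cloud.google.com/network-tier=Standard
+  ```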
+- Kube-scheduler: The metric name `scheduler_total_preemption_attempts` has been renamed to `scheduler_preemption_attempts_total`. ([#91448](https://github.com/kubernetes/kubernetes/pull/91448), [@RainbowMango](https://github.com/RainbowMango)) [SIG API Machinery, Cluster Lifecycle, Instrumentation and Scheduling]
+- Kubeadm now forwards the IPv6DualStack feature gate using the kubelet component config, instead of the kubelet command line ([#90840](https://github.com/kubernetes/kubernetes/pull/90840), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+- Kubeadm: do not use a DaemonSet for the pre-pull of control-plane images during "kubeadm upgrade apply". Individual node upgrades now pull the required images using a preflight check. The flag "--image-pull-timeout" for "kubeadm upgrade apply" is now deprecated and will be removed in a future release following a GA deprecation policy. ([#90788](https://github.com/kubernetes/kubernetes/pull/90788), [@xlgao-zju](https://github.com/xlgao-zju)) [SIG Cluster Lifecycle]
+- Kubeadm: use two separate checks on /livez and /readyz for the kube-apiserver static Pod instead of using /healthz ([#90970](https://github.com/kubernetes/kubernetes/pull/90970), [@johscheuer](https://github.com/johscheuer)) [SIG Cluster Lifecycle]
+- Remove deprecated --server-dry-run flag from kubectl apply ([#91308](https://github.com/kubernetes/kubernetes/pull/91308), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
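+
+  Use the replacement flag instead, e.g. (the manifest path is a placeholder):
+
+  ```shell
+  kubectl apply --dry-run=server -f manifest.yaml
+  ```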
+- The "HostPath should give a volume the correct mode" is no longer a conformance test ([#90861](https://github.com/kubernetes/kubernetes/pull/90861), [@dims](https://github.com/dims)) [SIG Architecture and Testing]
+- The Kubelet's --experimental-mounter-path and --experimental-check-node-capabilities-before-mount options are now marked as deprecated. ([#91373](https://github.com/kubernetes/kubernetes/pull/91373), [@knabben](https://github.com/knabben)) [SIG Node]
+- The kube-apiserver `--kubelet-https` flag is deprecated. kube-apiserver connections to kubelets now unconditionally use `https` (kubelets have unconditionally used `https` to serve the endpoints the apiserver communicates with since before v1.0). ([#91630](https://github.com/kubernetes/kubernetes/pull/91630), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Node]
+- Update CNI to v0.8.6 ([#91370](https://github.com/kubernetes/kubernetes/pull/91370), [@justaugustus](https://github.com/justaugustus)) [SIG Cloud Provider, Network, Release and Testing]
+- `beta.kubernetes.io/os` and `beta.kubernetes.io/arch` node labels are deprecated. Update node selectors to use `kubernetes.io/os` and `kubernetes.io/arch`. ([#91046](https://github.com/kubernetes/kubernetes/pull/91046), [@wawa0210](https://github.com/wawa0210)) [SIG Apps and Node]
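+
+  For example, selecting nodes by the replacement labels:
+
+  ```shell
+  kubectl get nodes -l kubernetes.io/os=linux,kubernetes.io/arch=amd64
+  ```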
+- base-images: Use debian-base:v2.1.0 ([#90697](https://github.com/kubernetes/kubernetes/pull/90697), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery and Release]
+- base-images: Use debian-iptables:v12.1.0 ([#90782](https://github.com/kubernetes/kubernetes/pull/90782), [@justaugustus](https://github.com/justaugustus)) [SIG Release]
+
+## Dependencies
+
+### Added
+- cloud.google.com/go/bigquery: v1.0.1
+- cloud.google.com/go/datastore: v1.0.0
+- cloud.google.com/go/pubsub: v1.0.1
+- cloud.google.com/go/storage: v1.0.0
+- dmitri.shuralyov.com/gpu/mtl: 666a987
+- github.com/cespare/xxhash/v2: [v2.1.1](https://github.com/cespare/xxhash/v2/tree/v2.1.1)
+- github.com/chzyer/logex: [v1.1.10](https://github.com/chzyer/logex/tree/v1.1.10)
+- github.com/chzyer/readline: [2972be2](https://github.com/chzyer/readline/tree/2972be2)
+- github.com/chzyer/test: [a1ea475](https://github.com/chzyer/test/tree/a1ea475)
+- github.com/containerd/cgroups: [bf292b2](https://github.com/containerd/cgroups/tree/bf292b2)
+- github.com/containerd/continuity: [aaeac12](https://github.com/containerd/continuity/tree/aaeac12)
+- github.com/containerd/fifo: [a9fb20d](https://github.com/containerd/fifo/tree/a9fb20d)
+- github.com/containerd/go-runc: [5a6d9f3](https://github.com/containerd/go-runc/tree/5a6d9f3)
+- github.com/coreos/bbolt: [v1.3.2](https://github.com/coreos/bbolt/tree/v1.3.2)
+- github.com/cpuguy83/go-md2man/v2: [v2.0.0](https://github.com/cpuguy83/go-md2man/v2/tree/v2.0.0)
+- github.com/go-gl/glfw/v3.3/glfw: [12ad95a](https://github.com/go-gl/glfw/v3.3/glfw/tree/12ad95a)
+- github.com/google/renameio: [v0.1.0](https://github.com/google/renameio/tree/v0.1.0)
+- github.com/ianlancetaylor/demangle: [5e5cf60](https://github.com/ianlancetaylor/demangle/tree/5e5cf60)
+- github.com/rogpeppe/go-internal: [v1.3.0](https://github.com/rogpeppe/go-internal/tree/v1.3.0)
+- github.com/russross/blackfriday/v2: [v2.0.1](https://github.com/russross/blackfriday/v2/tree/v2.0.1)
+- github.com/shurcooL/sanitized_anchor_name: [v1.0.0](https://github.com/shurcooL/sanitized_anchor_name/tree/v1.0.0)
+- github.com/ugorji/go: [v1.1.4](https://github.com/ugorji/go/tree/v1.1.4)
+- golang.org/x/mod: v0.1.0
+- google.golang.org/protobuf: v1.23.0
+- gopkg.in/errgo.v2: v2.1.0
+- k8s.io/klog/v2: v2.0.0
+
+### Changed
+- cloud.google.com/go: v0.38.0 → v0.51.0
+- github.com/GoogleCloudPlatform/k8s-cloud-provider: [27a4ced → 7901bc8](https://github.com/GoogleCloudPlatform/k8s-cloud-provider/compare/27a4ced...7901bc8)
+- github.com/Microsoft/hcsshim: [672e52e → v0.8.9](https://github.com/Microsoft/hcsshim/compare/672e52e...v0.8.9)
+- github.com/alecthomas/template: [a0175ee → fb15b89](https://github.com/alecthomas/template/compare/a0175ee...fb15b89)
+- github.com/alecthomas/units: [2efee85 → c3de453](https://github.com/alecthomas/units/compare/2efee85...c3de453)
+- github.com/beorn7/perks: [v1.0.0 → v1.0.1](https://github.com/beorn7/perks/compare/v1.0.0...v1.0.1)
+- github.com/coreos/pkg: [97fdf19 → 399ea9e](https://github.com/coreos/pkg/compare/97fdf19...399ea9e)
+- github.com/go-kit/kit: [v0.8.0 → v0.9.0](https://github.com/go-kit/kit/compare/v0.8.0...v0.9.0)
+- github.com/go-logfmt/logfmt: [v0.3.0 → v0.4.0](https://github.com/go-logfmt/logfmt/compare/v0.3.0...v0.4.0)
+- github.com/golang/groupcache: [02826c3 → 215e871](https://github.com/golang/groupcache/compare/02826c3...215e871)
+- github.com/golang/protobuf: [v1.3.3 → v1.4.2](https://github.com/golang/protobuf/compare/v1.3.3...v1.4.2)
+- github.com/google/cadvisor: [8af10c6 → 6a8d614](https://github.com/google/cadvisor/compare/8af10c6...6a8d614)
+- github.com/google/pprof: [3ea8567 → d4f498a](https://github.com/google/pprof/compare/3ea8567...d4f498a)
+- github.com/googleapis/gax-go/v2: [v2.0.4 → v2.0.5](https://github.com/googleapis/gax-go/v2/compare/v2.0.4...v2.0.5)
+- github.com/json-iterator/go: [v1.1.8 → v1.1.9](https://github.com/json-iterator/go/compare/v1.1.8...v1.1.9)
+- github.com/jstemmer/go-junit-report: [af01ea7 → v0.9.1](https://github.com/jstemmer/go-junit-report/compare/af01ea7...v0.9.1)
+- github.com/prometheus/client_golang: [v1.0.0 → v1.6.0](https://github.com/prometheus/client_golang/compare/v1.0.0...v1.6.0)
+- github.com/prometheus/common: [v0.4.1 → v0.9.1](https://github.com/prometheus/common/compare/v0.4.1...v0.9.1)
+- github.com/prometheus/procfs: [v0.0.5 → v0.0.11](https://github.com/prometheus/procfs/compare/v0.0.5...v0.0.11)
+- github.com/spf13/cobra: [v0.0.5 → v1.0.0](https://github.com/spf13/cobra/compare/v0.0.5...v1.0.0)
+- github.com/spf13/viper: [v1.3.2 → v1.4.0](https://github.com/spf13/viper/compare/v1.3.2...v1.4.0)
+- github.com/tmc/grpc-websocket-proxy: [89b8d40 → 0ad062e](https://github.com/tmc/grpc-websocket-proxy/compare/89b8d40...0ad062e)
+- go.opencensus.io: v0.21.0 → v0.22.2
+- go.uber.org/atomic: v1.3.2 → v1.4.0
+- golang.org/x/exp: 4b39c73 → da58074
+- golang.org/x/image: 0694c2d → cff245a
+- golang.org/x/lint: 959b441 → fdd1cda
+- golang.org/x/mobile: d3739f8 → d2bd2a2
+- golang.org/x/oauth2: 0f29369 → 858c2ad
+- google.golang.org/api: 5213b80 → v0.15.1
+- google.golang.org/appengine: v1.5.0 → v1.6.5
+- google.golang.org/genproto: f3c370f → ca5a221
+- honnef.co/go/tools: e561f67 → v0.0.1-2019.2.3
+- k8s.io/gengo: e0e292d → 8167cfd
+- k8s.io/kube-openapi: e1beb1b → 656914f
+- k8s.io/utils: a9aa75a → 2df71eb
+- sigs.k8s.io/apiserver-network-proxy/konnectivity-client: v0.0.7 → 33b9978
+
+### Removed
+- github.com/coreos/go-etcd: [v2.0.0+incompatible](https://github.com/coreos/go-etcd/tree/v2.0.0)
+- github.com/ugorji/go/codec: [d75b2dc](https://github.com/ugorji/go/codec/tree/d75b2dc)
+- k8s.io/klog: v1.0.0
+
+
+
+# v1.19.0-beta.0
+
+
+## Downloads for v1.19.0-beta.0
+
+### Source Code
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes.tar.gz) | 8c7e820b8bd7a8f742b7560cafe6ae1acc4c9836ae23d1b10d987b4de6a690826be75c68b8f76ec027097e8dfd861afb1d229b3687f0b82afcfe7b4d6481242e
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-src.tar.gz) | 543e9d36fd8b2de3e19631d3295d3a7706e6e88bbd3adb2d558b27b3179a3961455f4f04f0d4a5adcff1466779e1b08023fe64dc2ab39813b37adfbbc779dec7
+
+### Client binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-darwin-386.tar.gz) | 3ef37ef367a8d9803f023f6994d73ff217865654a69778c1ea3f58c88afbf25ff5d8d6bec9c608ac647c2654978228c4e63f30eec2a89d16d60f4a1c5f333b22
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-darwin-amd64.tar.gz) | edb02b0b8d6a1c2167fbce4a85d84fb413566d3a76839fd366801414ca8ad2d55a5417b39b4cac6b65fddf13c1b3259791a607703773241ca22a67945ecb0014
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-linux-386.tar.gz) | dafe93489df7328ae23f4bdf0a9d2e234e18effe7e042b217fe2dd1355e527a54bab3fb664696ed606a8ebedce57da4ee12647ec1befa2755bd4c43d9d016063
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-linux-amd64.tar.gz) | d8e2bf8c9dd665410c2e7ceaa98bc4fc4f966753b7ade91dcef3b5eff45e0dda63bd634610c8761392a7804deb96c6b030c292280bf236b8b29f63b7f1af3737
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-linux-arm.tar.gz) | d590d3d07d0ebbb562bce480c7cbe4e60b99feba24376c216fe73d8b99a246e2cd2acb72abe1427bde3e541d94d55b7688daf9e6961e4cbc6b875ac4eeea6e62
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-linux-arm64.tar.gz) | f9647a99a566c9febd348c1c4a8e5c05326058eab076292a8bb5d3a2b882ee49287903f8e0e036b40af294aa3571edd23e65f3de91330ac9af0c10350b02583d
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-linux-ppc64le.tar.gz) | 662f009bc393734a89203d7956942d849bad29e28448e7baa017d1ac2ec2d26d7290da4a44bccb99ed960b2e336d9d98908c98f8a3d9fe1c54df2d134c799cad
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-linux-s390x.tar.gz) | 61fdf4aff78dcdb721b82a3602bf5bc94d44d51ab6607b255a9c2218bb3e4b57f6e656c2ee0dd68586fb53acbeff800d6fd03e4642dded49735a93356e7c5703
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-windows-386.tar.gz) | 20d1e803b10b3bee09a7a206473ba320cc5f1120278d8f6e0136c388b2720da7264b917cd4738488b1d0a9aa922eb581c1f540715a6c2042c4dd7b217b6a9a0a
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-client-windows-amd64.tar.gz) | b85d729ec269f6aad0b6d2f95f3648fbea84330d2fbfde2267a519bc08c42d70d7b658b0e41c3b0d5f665702a8f1bbb37652753de34708ae3a03e45175c8b92c
+
+### Server binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-server-linux-amd64.tar.gz) | c3641bdb0a8d8eff5086d24b71c6547131092b21f976b080dc48129f91de3da560fed6edf880eab1d205017ad74be716a5b970e4bbc00d753c005e5932b3d319
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-server-linux-arm.tar.gz) | 7c29b8e33ade23a787330d28da22bf056610dae4d3e15574c56c46340afe5e0fdb00126ae3fd64fd70a26d1a87019f47e401682b88fa1167368c7edbecc72ccf
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-server-linux-arm64.tar.gz) | 27cd6042425eb94bb468431599782467ed818bcc51d75e8cb251c287a806b60a5cce50d4ae7525348c5446eaa45f849bc3fe3e6ac7248b54f3ebae8bf6553c3f
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-server-linux-ppc64le.tar.gz) | ede896424eb12ec07dd3756cbe808ca3915f51227e7b927795402943d81a99bb61654fd8f485a838c2faf199d4a55071af5bd8e69e85669a7f4a0b0e84a093cc
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-server-linux-s390x.tar.gz) | 4e48d4f5afd22f0ae6ade7da4877238fd2a5c10ae3dea2ae721c39ac454b0b295e1d7501e26bddee4bc0289e79e33dadca255a52a645bee98cf81acf937db0ef
+
+### Node binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-node-linux-amd64.tar.gz) | 8025bd8deb9586487fcf268bdaf99e8fd9f9433d9e7221c29363d1d66c4cbd55a2c44e6c89bc8133828c6a1aa0c42c2359b74846dfb71765c9ae8f21b8170625
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-node-linux-arm.tar.gz) | 25787d47c8cc1e9445218d3a947b443d261266033187f8b7bc6141ae353a6806503fe72e3626f058236d4cd7f284348d2cc8ccb7a0219b9ddd7c6a336dae360b
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-node-linux-arm64.tar.gz) | ff737a7310057bdfd603f2853b15f79dc2b54a3cbbbd7a8ffd4d9756720fa5a02637ffc10a381eeee58bef61024ff348a49f3044a6dfa0ba99645fda8d08e2da
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-node-linux-ppc64le.tar.gz) | 2b1144c9ae116306a2c3214b02361083a60a349afc804909f95ea85db3660de5025de69a1860e8fc9e7e92ded335c93b74ecbbb20e1f6266078842d4adaf4161
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-node-linux-s390x.tar.gz) | 822ec64aef3d65faa668a91177aa7f5d0c78a83cc1284c5e30629eda448ee4b2874cf4cfa6f3d68ad8eb8029dd035bf9fe15f68cc5aa4b644513f054ed7910ae
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-beta.0/kubernetes-node-windows-amd64.tar.gz) | 3957cae43211df050c5a9991a48e23ac27d20aec117c580c53fc7edf47caf79ed1e2effa969b5b972968a83e9bdba0b20c46705caca0c35571713041481c1966
+
+## Changelog since v1.19.0-alpha.3
+
+## Changes by Kind
+
+### API Change
+ - EnvVarSource api doc bug fixes ([#91194](https://github.com/kubernetes/kubernetes/pull/91194), [@wawa0210](https://github.com/wawa0210)) [SIG Apps]
+ - The Kubelet's `--really-crash-for-testing` and `--chaos-chance` options are now marked as deprecated. ([#90499](https://github.com/kubernetes/kubernetes/pull/90499), [@knabben](https://github.com/knabben)) [SIG Node]
+ - `NodeResourcesLeastAllocated` and `NodeResourcesMostAllocated` plugins now support customized weight on the CPU and memory. ([#90544](https://github.com/kubernetes/kubernetes/pull/90544), [@chendave](https://github.com/chendave)) [SIG Scheduling]
+
+### Feature
+ - Add .import-restrictions file to cmd/cloud-controller-manager. ([#90630](https://github.com/kubernetes/kubernetes/pull/90630), [@nilo19](https://github.com/nilo19)) [SIG API Machinery and Cloud Provider]
+ - Add Annotations to CRI-API ImageSpec objects. ([#90061](https://github.com/kubernetes/kubernetes/pull/90061), [@marosset](https://github.com/marosset)) [SIG Node and Windows]
+ - Kubelets configured to rotate client certificates now publish a `certificate_manager_server_ttl_seconds` gauge metric indicating the remaining seconds until certificate expiration. ([#91148](https://github.com/kubernetes/kubernetes/pull/91148), [@liggitt](https://github.com/liggitt)) [SIG Auth and Node]
+ - Rest.Config now supports a field to override proxy configuration that was previously only configurable through environment variables. ([#81443](https://github.com/kubernetes/kubernetes/pull/81443), [@mikedanese](https://github.com/mikedanese)) [SIG API Machinery and Node]
+ - Scores from PodTopologySpreading have reduced differentiation as maxSkew increases. ([#90820](https://github.com/kubernetes/kubernetes/pull/90820), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+ - Service controller: only sync LB node pools when relevant fields in Node changes ([#90769](https://github.com/kubernetes/kubernetes/pull/90769), [@andrewsykim](https://github.com/andrewsykim)) [SIG Apps and Network]
+ - Switch core master base images (kube-apiserver, kube-scheduler) from debian to distroless ([#90674](https://github.com/kubernetes/kubernetes/pull/90674), [@dims](https://github.com/dims)) [SIG Cloud Provider, Release and Scalability]
+ - Update cri-tools to v1.18.0 ([#89720](https://github.com/kubernetes/kubernetes/pull/89720), [@saschagrunert](https://github.com/saschagrunert)) [SIG Cloud Provider, Cluster Lifecycle, Release and Scalability]
+
+### Bug or Regression
+ - Add support for TLS 1.3 ciphers: TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256 and TLS_AES_256_GCM_SHA384. ([#90843](https://github.com/kubernetes/kubernetes/pull/90843), [@pjbgf](https://github.com/pjbgf)) [SIG API Machinery, Auth and Cluster Lifecycle]
+ - Base-images: Update to kube-cross:v1.13.9-5 ([#90963](https://github.com/kubernetes/kubernetes/pull/90963), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+ - CloudNodeLifecycleController will check node existence status before shutdown status when monitoring nodes. ([#90737](https://github.com/kubernetes/kubernetes/pull/90737), [@jiahuif](https://github.com/jiahuif)) [SIG Apps and Cloud Provider]
+ - First pod with required affinity terms can schedule only on nodes with matching topology keys. ([#91168](https://github.com/kubernetes/kubernetes/pull/91168), [@ahg-g](https://github.com/ahg-g)) [SIG Scheduling]
+ - Fix VirtualMachineScaleSets.virtualMachines.GET not allowed issues when customers have set VMSS orchestrationMode. ([#91097](https://github.com/kubernetes/kubernetes/pull/91097), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+ - Fix a race condition where the scheduler could perform an unnecessary scheduling attempt. ([#90660](https://github.com/kubernetes/kubernetes/pull/90660), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling and Testing]
+ - Fix `kubectl run --dry-run=client` ignoring the specified namespace ([#90785](https://github.com/kubernetes/kubernetes/pull/90785), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+ - Fix public IP not shown issues after assigning public IP to Azure VMs ([#90886](https://github.com/kubernetes/kubernetes/pull/90886), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+ - Fix: azure disk dangling attach issue which would cause API throttling ([#90749](https://github.com/kubernetes/kubernetes/pull/90749), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+ - Fix: support removal of nodes backed by deleted non VMSS instances on Azure ([#91184](https://github.com/kubernetes/kubernetes/pull/91184), [@bpineau](https://github.com/bpineau)) [SIG Cloud Provider]
+ - Fixed a regression preventing garbage collection of RBAC role and binding objects ([#90534](https://github.com/kubernetes/kubernetes/pull/90534), [@apelisse](https://github.com/apelisse)) [SIG Auth]
+ - For the external storage e2e test suite, update the external driver to pick the snapshot provisioner from the VolumeSnapshotClass when one is explicitly provided as input. ([#90878](https://github.com/kubernetes/kubernetes/pull/90878), [@saikat-royc](https://github.com/saikat-royc)) [SIG Storage and Testing]
+ - In an HA environment, if a standby scheduler loses its connection to the API server while a Pod is deleted and recreated, and that scheduler later becomes the leader, the scheduler cache could become corrupted. This is now fixed. ([#91126](https://github.com/kubernetes/kubernetes/pull/91126), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+ - Kubeadm: increase robustness for "kubeadm join" when adding etcd members on slower setups ([#90645](https://github.com/kubernetes/kubernetes/pull/90645), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+ - Prevent PVC requested size overflow when expanding or creating a volume ([#90907](https://github.com/kubernetes/kubernetes/pull/90907), [@gnufied](https://github.com/gnufied)) [SIG Cloud Provider and Storage]
+ - Scheduling failures due to no nodes available are now reported as unschedulable under the `schedule_attempts_total` metric. ([#90989](https://github.com/kubernetes/kubernetes/pull/90989), [@ahg-g](https://github.com/ahg-g)) [SIG Scheduling]
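+
+As a quick way to see which TLS 1.3 suites Go itself supports (the cipher names in the item above map to Go's `crypto/tls` constants), here is a small sketch; it only enumerates the standard library's suite list and makes no Kubernetes API calls.
+
+```go
+package main
+
+import (
+    "crypto/tls"
+    "fmt"
+)
+
+func main() {
+    // Print every cipher suite that Go's crypto/tls supports for TLS 1.3.
+    for _, cs := range tls.CipherSuites() {
+        for _, v := range cs.SupportedVersions {
+            if v == tls.VersionTLS13 {
+                fmt.Println(cs.Name)
+            }
+        }
+    }
+}
+```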
+
+### Other (Cleanup or Flake)
+ - Adds additional testing to ensure that conntrack entries for UDP pods are cleaned up ([#90180](https://github.com/kubernetes/kubernetes/pull/90180), [@JacobTanenbaum](https://github.com/JacobTanenbaum)) [SIG Architecture, Network and Testing]
+ - Adjusts the fsType for cinder volumes to be `ext4` if no fsType is specified. ([#90608](https://github.com/kubernetes/kubernetes/pull/90608), [@huffmanca](https://github.com/huffmanca)) [SIG Storage]
+ - Change beta.kubernetes.io/os to kubernetes.io/os ([#89461](https://github.com/kubernetes/kubernetes/pull/89461), [@wawa0210](https://github.com/wawa0210)) [SIG Cloud Provider and Cluster Lifecycle]
+ - Improve server-side apply conflict errors by setting dedicated kubectl subcommand field managers ([#88885](https://github.com/kubernetes/kubernetes/pull/88885), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+ - It is now possible to use the service annotation `cloud.google.com/network-tier: Standard` to configure the Network Tier of the GCE Loadbalancer ([#88532](https://github.com/kubernetes/kubernetes/pull/88532), [@zioproto](https://github.com/zioproto)) [SIG Cloud Provider, Network and Testing]
+ - Kubeadm now forwards the IPv6DualStack feature gate using the kubelet component config, instead of the kubelet command line ([#90840](https://github.com/kubernetes/kubernetes/pull/90840), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+ - Kubeadm: do not use a DaemonSet for the pre-pull of control-plane images during "kubeadm upgrade apply". Individual node upgrades now pull the required images using a preflight check. The flag "--image-pull-timeout" for "kubeadm upgrade apply" is now deprecated and will be removed in a future release following a GA deprecation policy. ([#90788](https://github.com/kubernetes/kubernetes/pull/90788), [@xlgao-zju](https://github.com/xlgao-zju)) [SIG Cluster Lifecycle]
+ - Kubeadm: use two separate checks on /livez and /readyz for the kube-apiserver static Pod instead of using /healthz ([#90970](https://github.com/kubernetes/kubernetes/pull/90970), [@johscheuer](https://github.com/johscheuer)) [SIG Cluster Lifecycle]
+ - The "HostPath should give a volume the correct mode" is no longer a conformance test ([#90861](https://github.com/kubernetes/kubernetes/pull/90861), [@dims](https://github.com/dims)) [SIG Architecture and Testing]
+ - `beta.kubernetes.io/os` and `beta.kubernetes.io/arch` node labels are deprecated. Update node selectors to use `kubernetes.io/os` and `kubernetes.io/arch` (see the sketch after this list). ([#91046](https://github.com/kubernetes/kubernetes/pull/91046), [@wawa0210](https://github.com/wawa0210)) [SIG Apps and Node]
+ - base-images: Use debian-base:v2.1.0 ([#90697](https://github.com/kubernetes/kubernetes/pull/90697), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery and Release]
+ - base-images: Use debian-iptables:v12.1.0 ([#90782](https://github.com/kubernetes/kubernetes/pull/90782), [@justaugustus](https://github.com/justaugustus)) [SIG Release]
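+
+A short sketch of the node-label migration called out above, building a Pod spec with the GA `kubernetes.io/os` and `kubernetes.io/arch` selectors instead of the deprecated `beta.kubernetes.io/*` labels (the pod name and image are hypothetical):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    pod := corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "demo"}, // hypothetical name
+        Spec: corev1.PodSpec{
+            // Use the GA labels, not beta.kubernetes.io/os or beta.kubernetes.io/arch.
+            NodeSelector: map[string]string{
+                "kubernetes.io/os":   "linux",
+                "kubernetes.io/arch": "amd64",
+            },
+            Containers: []corev1.Container{
+                {Name: "app", Image: "example.com/app:1.0"}, // hypothetical image
+            },
+        },
+    }
+    fmt.Println(pod.Spec.NodeSelector)
+}
+```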
+
+## Dependencies
+
+### Added
+- cloud.google.com/go/bigquery: v1.0.1
+- cloud.google.com/go/datastore: v1.0.0
+- cloud.google.com/go/pubsub: v1.0.1
+- cloud.google.com/go/storage: v1.0.0
+- dmitri.shuralyov.com/gpu/mtl: 666a987
+- github.com/cespare/xxhash/v2: [v2.1.1](https://github.com/cespare/xxhash/v2/tree/v2.1.1)
+- github.com/chzyer/logex: [v1.1.10](https://github.com/chzyer/logex/tree/v1.1.10)
+- github.com/chzyer/readline: [2972be2](https://github.com/chzyer/readline/tree/2972be2)
+- github.com/chzyer/test: [a1ea475](https://github.com/chzyer/test/tree/a1ea475)
+- github.com/coreos/bbolt: [v1.3.2](https://github.com/coreos/bbolt/tree/v1.3.2)
+- github.com/cpuguy83/go-md2man/v2: [v2.0.0](https://github.com/cpuguy83/go-md2man/v2/tree/v2.0.0)
+- github.com/go-gl/glfw/v3.3/glfw: [12ad95a](https://github.com/go-gl/glfw/v3.3/glfw/tree/12ad95a)
+- github.com/google/renameio: [v0.1.0](https://github.com/google/renameio/tree/v0.1.0)
+- github.com/ianlancetaylor/demangle: [5e5cf60](https://github.com/ianlancetaylor/demangle/tree/5e5cf60)
+- github.com/rogpeppe/go-internal: [v1.3.0](https://github.com/rogpeppe/go-internal/tree/v1.3.0)
+- github.com/russross/blackfriday/v2: [v2.0.1](https://github.com/russross/blackfriday/v2/tree/v2.0.1)
+- github.com/shurcooL/sanitized_anchor_name: [v1.0.0](https://github.com/shurcooL/sanitized_anchor_name/tree/v1.0.0)
+- github.com/ugorji/go: [v1.1.4](https://github.com/ugorji/go/tree/v1.1.4)
+- golang.org/x/mod: v0.1.0
+- google.golang.org/protobuf: v1.23.0
+- gopkg.in/errgo.v2: v2.1.0
+- k8s.io/klog/v2: v2.0.0
+
+### Changed
+- cloud.google.com/go: v0.38.0 → v0.51.0
+- github.com/GoogleCloudPlatform/k8s-cloud-provider: [27a4ced → 7901bc8](https://github.com/GoogleCloudPlatform/k8s-cloud-provider/compare/27a4ced...7901bc8)
+- github.com/alecthomas/template: [a0175ee → fb15b89](https://github.com/alecthomas/template/compare/a0175ee...fb15b89)
+- github.com/alecthomas/units: [2efee85 → c3de453](https://github.com/alecthomas/units/compare/2efee85...c3de453)
+- github.com/beorn7/perks: [v1.0.0 → v1.0.1](https://github.com/beorn7/perks/compare/v1.0.0...v1.0.1)
+- github.com/coreos/pkg: [97fdf19 → 399ea9e](https://github.com/coreos/pkg/compare/97fdf19...399ea9e)
+- github.com/go-kit/kit: [v0.8.0 → v0.9.0](https://github.com/go-kit/kit/compare/v0.8.0...v0.9.0)
+- github.com/go-logfmt/logfmt: [v0.3.0 → v0.4.0](https://github.com/go-logfmt/logfmt/compare/v0.3.0...v0.4.0)
+- github.com/golang/groupcache: [02826c3 → 215e871](https://github.com/golang/groupcache/compare/02826c3...215e871)
+- github.com/golang/protobuf: [v1.3.3 → v1.4.2](https://github.com/golang/protobuf/compare/v1.3.3...v1.4.2)
+- github.com/google/cadvisor: [8af10c6 → 6a8d614](https://github.com/google/cadvisor/compare/8af10c6...6a8d614)
+- github.com/google/pprof: [3ea8567 → d4f498a](https://github.com/google/pprof/compare/3ea8567...d4f498a)
+- github.com/googleapis/gax-go/v2: [v2.0.4 → v2.0.5](https://github.com/googleapis/gax-go/v2/compare/v2.0.4...v2.0.5)
+- github.com/json-iterator/go: [v1.1.8 → v1.1.9](https://github.com/json-iterator/go/compare/v1.1.8...v1.1.9)
+- github.com/jstemmer/go-junit-report: [af01ea7 → v0.9.1](https://github.com/jstemmer/go-junit-report/compare/af01ea7...v0.9.1)
+- github.com/prometheus/client_golang: [v1.0.0 → v1.6.0](https://github.com/prometheus/client_golang/compare/v1.0.0...v1.6.0)
+- github.com/prometheus/common: [v0.4.1 → v0.9.1](https://github.com/prometheus/common/compare/v0.4.1...v0.9.1)
+- github.com/prometheus/procfs: [v0.0.5 → v0.0.11](https://github.com/prometheus/procfs/compare/v0.0.5...v0.0.11)
+- github.com/spf13/cobra: [v0.0.5 → v1.0.0](https://github.com/spf13/cobra/compare/v0.0.5...v1.0.0)
+- github.com/spf13/viper: [v1.3.2 → v1.4.0](https://github.com/spf13/viper/compare/v1.3.2...v1.4.0)
+- github.com/tmc/grpc-websocket-proxy: [89b8d40 → 0ad062e](https://github.com/tmc/grpc-websocket-proxy/compare/89b8d40...0ad062e)
+- go.opencensus.io: v0.21.0 → v0.22.2
+- go.uber.org/atomic: v1.3.2 → v1.4.0
+- golang.org/x/exp: 4b39c73 → da58074
+- golang.org/x/image: 0694c2d → cff245a
+- golang.org/x/lint: 959b441 → fdd1cda
+- golang.org/x/mobile: d3739f8 → d2bd2a2
+- golang.org/x/oauth2: 0f29369 → 858c2ad
+- google.golang.org/api: 5213b80 → v0.15.1
+- google.golang.org/appengine: v1.5.0 → v1.6.5
+- google.golang.org/genproto: f3c370f → ca5a221
+- honnef.co/go/tools: e561f67 → v0.0.1-2019.2.3
+- k8s.io/gengo: e0e292d → 8167cfd
+- k8s.io/kube-openapi: e1beb1b → 656914f
+- k8s.io/utils: a9aa75a → 2df71eb
+- sigs.k8s.io/apiserver-network-proxy/konnectivity-client: v0.0.7 → 33b9978
+
+### Removed
+- github.com/coreos/go-etcd: [v2.0.0+incompatible](https://github.com/coreos/go-etcd/tree/v2.0.0)
+- github.com/ugorji/go/codec: [d75b2dc](https://github.com/ugorji/go/codec/tree/d75b2dc)
+- k8s.io/klog: v1.0.0
+
+
+
+# v1.19.0-alpha.3
+
+[Documentation](https://docs.k8s.io)
+
+## Downloads for v1.19.0-alpha.3
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes.tar.gz) | `49df3a77453b759d3262be6883dd9018426666b4261313725017eed42da1bc8dd1af037ec6c11357a6360c0c32c2486490036e9e132c9026f491325ce353c84b`
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-src.tar.gz) | `ddbb0baaf77516dc885c41017f4a8d91d0ff33eeab14009168a1e4d975939ccc6a053a682c2af14346c67fe7b142aa2c1ba32e86a30f2433cefa423764c5332d`
+
+### Client Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-darwin-386.tar.gz) | `c0fb1afb5b22f6e29cf3e5121299d3a5244a33b7663e041209bcc674a0009842b35b9ebdafa5bd6b91a1e1b67fa891e768627b97ea5258390d95250f07c2defc`
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-darwin-amd64.tar.gz) | `f32596863fed32bc8e3f032ef1e4f9f232898ed506624cb1b4877ce2ced2a0821d70b15599258422aa13181ab0e54f38837399ca611ab86cbf3feec03ede8b95`
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-linux-386.tar.gz) | `37290244cee54ff05662c2b14b69445eee674d385e6b05ca0b8c8b410ba047cf054033229c78af91670ca1370807753103c25dbb711507edc1c6beca87bd0988`
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-linux-amd64.tar.gz) | `3753eb28b9d68a47ef91fff3e91215015c28bce12828f81c0bbddbde118fd2cf4d580e474e54b1e8176fa547829e2ed08a4df36bbf83b912c831a459821bd581`
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-linux-arm.tar.gz) | `86b1cdb59a6b4e9de4496e5aa817b1ae7687ac6a93f8b8259cdeb356020773711d360a2ea35f7a8dc1bdd6d31c95e6491abf976afaff3392eb7d2df1008e192c`
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-linux-arm64.tar.gz) | `fbf324e92b93cd8048073b2a627ddc8866020bc4f086604d82bf4733d463411a534d8c8f72565976eb1b32be64aecae8858cd140ef8b7a3c96fcbbf92ca54689`
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-linux-ppc64le.tar.gz) | `7a6551eca17d29efb5d818e360b53ab2f0284e1091cc537e0a7ce39843d0b77579f26eb14bdeca9aa9e0aa0ef92ce1ccde34bdce84b4a5c1e090206979afb0ea`
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-linux-s390x.tar.gz) | `46352be54882cf3edb949b355e71daea839c9b1955ccfe1085590b81326665d81cabde192327d82e56d6a157e224caefdcfbec3364b9f8b18b5da0cfcb97fc0c`
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-windows-386.tar.gz) | `d049bf5f27e5e646ea4aa657aa0a694de57394b0dc60eadf1f7516d1ca6a6db39fc89d34bb6bba0a82f0c140113c2a91c41ad409e0ab41118a104f47eddcb9d2`
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-client-windows-amd64.tar.gz) | `2e585f6f97b86443a6e3a847c8dfaa29c6323f8d5bbfdb86dc7bf5465ba54f64b35ee55a6d38e9be105a67fff39057ad16db3f3b1c3b9c909578517f4da7e51e`
+
+### Server Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-server-linux-amd64.tar.gz) | `8c41c6abf32ba7040c2cc654765d443e615d96891eacf6bcec24146a8aaf79b9206d13358518958e5ec04eb911ade108d4522ebd8603b88b3e3d95e7d5b24e60`
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-server-linux-arm.tar.gz) | `7e54c60bf724e2e3e2cff1197512ead0f73030788877f2f92a7e0deeeabd86e75ce8120eb815bf63909f8a110e647a5fcfddd510efffbd9c339bd0f90caa6706`
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-server-linux-arm64.tar.gz) | `7c57fd80b18be6dd6b6e17558d12ec0c07c06ce248e99837737fdd39b7f5d752597679748dc6294563f30def986ed712a8f469f3ea1c3a4cbe5d63c44f1d41dc`
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-server-linux-ppc64le.tar.gz) | `d22b1d4d8ccf9e9df8f90d35b8d2a1e7916f8d809806743cddc00b15d8ace095c54c61d7c9affd6609a316ee14ba43bf760bfec4276aee8273203aab3e7ac3c1`
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-server-linux-s390x.tar.gz) | `3177c9a2d6bd116d614fa69ff9cb16b822bee4e36e38f93ece6aeb5d118ae67dbe61546c7f628258ad719e763c127ca32437ded70279ea869cfe4869e06cbdde`
+
+### Node Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-node-linux-amd64.tar.gz) | `543248e35c57454bfc4b6f3cf313402d7cf81606b9821a5dd95c6758d55d5b9a42e283a7fb0d45322ad1014e3382aafaee69879111c0799dac31d5c4ad1b8041`
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-node-linux-arm.tar.gz) | `c94bed3861376d3fd41cb7bc93b5a849612bc7346ed918f6b5b634449cd3acef69ff63ca0b6da29f45df68402f64f3d290d7688bc50f46dac07e889219dac30c`
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-node-linux-arm64.tar.gz) | `3649dbca59d08c3922830b7acd8176e8d2f622fbf6379288f3a70045763d5d72c944d241f8a2c57306f23e6e44f7cc3b912554442f77e0f90e9f876f240114a8`
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-node-linux-ppc64le.tar.gz) | `5655d1d48a1ae97352af2d703954c7a28c2d1c644319c4eb24fe19ccc5fb546c30b34cc86d8910f26c88feee88d7583bc085ebfe58916054f73dcf372a824fd9`
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-node-linux-s390x.tar.gz) | `55190804357a687c37d1abb489d5aef7cea209d1c03778548f0aa4dab57a0b98b710fda09ff5c46d0963f2bb674726301d544b359f673df8f57226cafa831ce3`
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.3/kubernetes-node-windows-amd64.tar.gz) | `d8ffbe8dc9a0b0b55db357afa6ef94e6145f9142b1bc505897cac9ee7c950ef527a189397a8e61296e66ce76b020eccb276668256927d2273d6079b9ffebef24`
+
+## Changelog since v1.19.0-alpha.2
+
+## Urgent Upgrade Notes
+
+### (No, really, you MUST read this before you upgrade)
+
+- Kubeadm no longer sets the deprecated '--cgroup-driver' flag in /var/lib/kubelet/kubeadm-flags.env; it is now set in the kubelet config.yaml. If you have this flag in /var/lib/kubelet/kubeadm-flags.env or /etc/default/kubelet (/etc/sysconfig/kubelet for RPMs), please remove it and set the value using KubeletConfiguration ([#90513](https://github.com/kubernetes/kubernetes/pull/90513), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+
+- Kubeadm respects the resolvConf value set by the user even if the systemd-resolved service is active. kubeadm no longer sets the '--resolv-conf' flag in /var/lib/kubelet/kubeadm-flags.env. If you have this flag in /var/lib/kubelet/kubeadm-flags.env or /etc/default/kubelet (/etc/sysconfig/kubelet for RPMs), please remove it and set the value using KubeletConfiguration; see the sketch after these notes ([#90394](https://github.com/kubernetes/kubernetes/pull/90394), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
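+
+Both notes move kubelet settings from flags into the KubeletConfiguration. As a sketch of what that looks like (assuming the `k8s.io/kubelet/config/v1beta1` types and `sigs.k8s.io/yaml`; the concrete values are illustrative, not recommendations):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
+    "sigs.k8s.io/yaml"
+)
+
+func main() {
+    cfg := kubeletv1beta1.KubeletConfiguration{
+        // Values formerly passed via the --cgroup-driver and --resolv-conf flags.
+        CgroupDriver: "systemd",
+        ResolvConf:   "/run/systemd/resolve/resolv.conf",
+    }
+    cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
+    cfg.Kind = "KubeletConfiguration"
+
+    out, err := yaml.Marshal(&cfg)
+    if err != nil {
+        panic(err)
+    }
+    fmt.Print(string(out)) // YAML suitable for the kubelet config file
+}
+```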
+
+## Changes by Kind
+
+### Deprecation
+
+- Apiextensions.k8s.io/v1beta1 is deprecated in favor of apiextensions.k8s.io/v1 ([#90673](https://github.com/kubernetes/kubernetes/pull/90673), [@deads2k](https://github.com/deads2k)) [SIG API Machinery]
+- Apiregistration.k8s.io/v1beta1 is deprecated in favor of apiregistration.k8s.io/v1 ([#90672](https://github.com/kubernetes/kubernetes/pull/90672), [@deads2k](https://github.com/deads2k)) [SIG API Machinery]
+- Authentication.k8s.io/v1beta1 and authorization.k8s.io/v1beta1 are deprecated in 1.19 in favor of v1 levels and will be removed in 1.22 ([#90458](https://github.com/kubernetes/kubernetes/pull/90458), [@deads2k](https://github.com/deads2k)) [SIG API Machinery and Auth]
+- Autoscaling/v2beta1 is deprecated in favor of autoscaling/v2beta2 ([#90463](https://github.com/kubernetes/kubernetes/pull/90463), [@deads2k](https://github.com/deads2k)) [SIG Autoscaling]
+- Coordination.k8s.io/v1beta1 is deprecated in 1.19, targeted for removal in 1.22, use v1 instead. ([#90559](https://github.com/kubernetes/kubernetes/pull/90559), [@deads2k](https://github.com/deads2k)) [SIG Scalability]
+- Storage.k8s.io/v1beta1 is deprecated in favor of storage.k8s.io/v1 ([#90671](https://github.com/kubernetes/kubernetes/pull/90671), [@deads2k](https://github.com/deads2k)) [SIG Storage]
+
+### API Change
+
+- k8s.io/apimachinery: scheme.Convert() now uses only explicitly registered conversions; the default reflection-based conversion is no longer available. `+k8s:conversion-gen` tags can be used with the `k8s.io/code-generator` component to generate conversions. ([#90018](https://github.com/kubernetes/kubernetes/pull/90018), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery, Apps and Testing]
+- Kubelet's --runonce option is now also available in Kubelet's config file as `runOnce`. ([#89128](https://github.com/kubernetes/kubernetes/pull/89128), [@vincent178](https://github.com/vincent178)) [SIG Node]
+- Promote the Immutable Secrets/ConfigMaps feature to Beta and enable it by default.
+  This allows setting the `Immutable` field in a Secret or ConfigMap object to mark their contents as immutable (see the sketch after this list). ([#89594](https://github.com/kubernetes/kubernetes/pull/89594), [@wojtek-t](https://github.com/wojtek-t)) [SIG Apps and Testing]
+- The unused `series.state` field, deprecated since v1.14, is removed from the `events.k8s.io/v1beta1` and `v1` Event types. ([#90449](https://github.com/kubernetes/kubernetes/pull/90449), [@wojtek-t](https://github.com/wojtek-t)) [SIG Apps]
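+
+A minimal client-go sketch of the `Immutable` field promoted to Beta above (secret name, data, and kubeconfig path are hypothetical):
+
+```go
+package main
+
+import (
+    "context"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
+    if err != nil {
+        panic(err)
+    }
+    client, err := kubernetes.NewForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+
+    immutable := true
+    secret := &corev1.Secret{
+        ObjectMeta: metav1.ObjectMeta{Name: "app-credentials"}, // hypothetical name
+        StringData: map[string]string{"token": "example"},
+        // Marks the contents as immutable; subsequent updates to data are rejected.
+        Immutable: &immutable,
+    }
+    if _, err := client.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
+        panic(err)
+    }
+}
+```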
+
+### Feature
+
+- Kube-apiserver: The NodeRestriction admission plugin now restricts Node labels kubelets are permitted to set when creating a new Node to the `--node-labels` parameters accepted by kubelets in 1.16+. ([#90307](https://github.com/kubernetes/kubernetes/pull/90307), [@liggitt](https://github.com/liggitt)) [SIG Auth and Node]
+- `kubectl taint` now supports the abbreviated resource name `no` for nodes (without having to type the full resource name) ([#88723](https://github.com/kubernetes/kubernetes/pull/88723), [@wawa0210](https://github.com/wawa0210)) [SIG CLI]
+- New scoring for PodTopologySpreading that yields better spreading ([#90475](https://github.com/kubernetes/kubernetes/pull/90475), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- No ([#89549](https://github.com/kubernetes/kubernetes/pull/89549), [@happinesstaker](https://github.com/happinesstaker)) [SIG API Machinery, Auth, Instrumentation and Testing]
+- Try to send watch bookmarks (if requested) periodically, in addition to sending them right before the timeout (see the sketch after this list) ([#90560](https://github.com/kubernetes/kubernetes/pull/90560), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery]
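+
+A sketch of how a client opts in to watch bookmarks (using the existing `AllowWatchBookmarks` list option and `watch.Bookmark` event type; kubeconfig path and namespace are hypothetical):
+
+```go
+package main
+
+import (
+    "context"
+    "fmt"
+
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/apimachinery/pkg/watch"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
+    if err != nil {
+        panic(err)
+    }
+    client, err := kubernetes.NewForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+
+    // Request bookmark events; with this change the server may also send
+    // them periodically, not only right before the watch times out.
+    w, err := client.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
+        AllowWatchBookmarks: true,
+    })
+    if err != nil {
+        panic(err)
+    }
+    for ev := range w.ResultChan() {
+        if ev.Type == watch.Bookmark {
+            fmt.Println("bookmark received")
+        }
+    }
+}
+```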
+
+### Bug or Regression
+
+- Avoid GCE API calls when initializing GCE CloudProvider for Kubelets. ([#90218](https://github.com/kubernetes/kubernetes/pull/90218), [@wojtek-t](https://github.com/wojtek-t)) [SIG Cloud Provider and Scalability]
+- Avoid unnecessary scheduling churn when annotations are updated while Pods are being scheduled. ([#90373](https://github.com/kubernetes/kubernetes/pull/90373), [@fabiokung](https://github.com/fabiokung)) [SIG Scheduling]
+- Fix a bug where ExternalTrafficPolicy is not applied to service ExternalIPs. ([#90537](https://github.com/kubernetes/kubernetes/pull/90537), [@freehan](https://github.com/freehan)) [SIG Network]
+- Fixed a regression in wait.Forever that skips the backoff period on the first repeat ([#90476](https://github.com/kubernetes/kubernetes/pull/90476), [@zhan849](https://github.com/zhan849)) [SIG API Machinery]
+- Fixes a bug where a non-directory hostpath could be recognized as HostPathFile, and adds e2e tests for HostPathType ([#64829](https://github.com/kubernetes/kubernetes/pull/64829), [@dixudx](https://github.com/dixudx)) [SIG Apps, Storage and Testing]
+- Fixes a regression in 1.17 that dropped cache-control headers on API requests ([#90468](https://github.com/kubernetes/kubernetes/pull/90468), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Testing]
+- Fixes regression in CPUManager that caused freeing of exclusive CPUs at incorrect times ([#90377](https://github.com/kubernetes/kubernetes/pull/90377), [@cbf123](https://github.com/cbf123)) [SIG Cloud Provider and Node]
+- Fixes regression in CPUManager that had the (rare) possibility to release exclusive CPUs in app containers inherited from init containers. ([#90419](https://github.com/kubernetes/kubernetes/pull/90419), [@klueska](https://github.com/klueska)) [SIG Node]
+- JSONPath support in kubectl / client-go now serializes complex types (maps / slices / structs) as JSON instead of Go syntax (see the sketch after this list). ([#89660](https://github.com/kubernetes/kubernetes/pull/89660), [@pjferrell](https://github.com/pjferrell)) [SIG API Machinery, CLI and Cluster Lifecycle]
+- Kubeadm: ensure `image-pull-timeout` flag is respected during upgrade phase ([#90328](https://github.com/kubernetes/kubernetes/pull/90328), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: fix misleading warning for the kube-apiserver authz modes during "kubeadm init" ([#90064](https://github.com/kubernetes/kubernetes/pull/90064), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Provides a fix to allow a cluster in a private Azure cloud to authenticate to ACR in the same cloud. ([#90425](https://github.com/kubernetes/kubernetes/pull/90425), [@DavidParks8](https://github.com/DavidParks8)) [SIG Cloud Provider]
+- Update github.com/moby/ipvs to v1.0.1 to fix an IPVS compatibility issue with older kernels ([#90555](https://github.com/kubernetes/kubernetes/pull/90555), [@andrewsykim](https://github.com/andrewsykim)) [SIG Network]
+- Updates to pod status via the status subresource now validate that `status.podIP` and `status.podIPs` fields are well-formed. ([#90628](https://github.com/kubernetes/kubernetes/pull/90628), [@liggitt](https://github.com/liggitt)) [SIG Apps and Node]
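+
+A sketch of the JSONPath serialization change using `k8s.io/client-go/util/jsonpath` (assuming that package's `New`/`Parse`/`Execute` API; the input object is made up):
+
+```go
+package main
+
+import (
+    "os"
+
+    "k8s.io/client-go/util/jsonpath"
+)
+
+func main() {
+    obj := map[string]interface{}{
+        "metadata": map[string]interface{}{
+            "labels": map[string]interface{}{"app": "demo", "tier": "web"},
+        },
+    }
+
+    jp := jsonpath.New("labels")
+    if err := jp.Parse("{.metadata.labels}"); err != nil {
+        panic(err)
+    }
+    // The selected map is now rendered as JSON, e.g. {"app":"demo","tier":"web"},
+    // rather than Go syntax such as map[app:demo tier:web].
+    if err := jp.Execute(os.Stdout, obj); err != nil {
+        panic(err)
+    }
+}
+```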
+
+### Other (Cleanup or Flake)
+
+- Drop some conformance tests that rely on Kubelet API directly ([#90615](https://github.com/kubernetes/kubernetes/pull/90615), [@dims](https://github.com/dims)) [SIG Architecture, Network, Release and Testing]
+- Kube-proxy exposes a new metric, `kubeproxy_sync_proxy_rules_last_queued_timestamp_seconds`, that indicates the last time a change for kube-proxy was queued to be applied. ([#90175](https://github.com/kubernetes/kubernetes/pull/90175), [@squeed](https://github.com/squeed)) [SIG Instrumentation and Network]
+- Kubeadm: fix badly formatted error message for small service CIDRs ([#90411](https://github.com/kubernetes/kubernetes/pull/90411), [@johscheuer](https://github.com/johscheuer)) [SIG Cluster Lifecycle]
+- None. ([#90484](https://github.com/kubernetes/kubernetes/pull/90484), [@nilo19](https://github.com/nilo19)) [SIG Cloud Provider]
+- Remove the repeated calculation of nodeName and hostname during kubelet startup; these parameters are now calculated once in the `RunKubelet` method ([#90284](https://github.com/kubernetes/kubernetes/pull/90284), [@wawa0210](https://github.com/wawa0210)) [SIG Node]
+- UI change ([#87743](https://github.com/kubernetes/kubernetes/pull/87743), [@u2takey](https://github.com/u2takey)) [SIG Apps and Node]
+- Update opencontainers/runtime-spec dependency to v1.0.2 ([#89644](https://github.com/kubernetes/kubernetes/pull/89644), [@saschagrunert](https://github.com/saschagrunert)) [SIG Node]
+
+
+# v1.19.0-alpha.2
+
+[Documentation](https://docs.k8s.io)
+
+## Downloads for v1.19.0-alpha.2
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes.tar.gz) | `a1106309d18a5d73882650f8a5cbd1f287436a0dc527136808e5e882f5e98d6b0d80029ff53abc0c06ac240f6b879167437f15906e5309248d536ec1675ed909`
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-src.tar.gz) | `c24c0b2a99ad0d834e0f017d7436fa84c6de8f30e8768ee59b1a418eb66a9b34ed4bcc25e03c04b19ea17366564f4ee6fe55a520fa4d0837e86c0a72fc7328c1`
+
+### Client Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-darwin-386.tar.gz) | `51ede026b0f8338f7fd293fb096772a67f88f23411c3280dff2f9efdd3ad7be7917d5c32ba764162c1a82b14218a90f624271c3cd8f386c8e41e4a9eac28751f`
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-darwin-amd64.tar.gz) | `4ed4358cabbecf724d974207746303638c7f23d422ece9c322104128c245c8485e37d6ffdd9d17e13bb1d8110e870c0fe17dcc1c9e556b69a4df7d34b6ff66d5`
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-linux-386.tar.gz) | `a57b10f146083828f18d809dbe07938b72216fa21083e7dbb9acce7dbcc3e8c51b8287d3bf89e81c8e1af4dd139075c675cc0f6ae7866ef69a3813db09309b97`
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-linux-amd64.tar.gz) | `099247419dd34dc78131f24f1890cc5c6a739e887c88fae96419d980c529456bfd45c4e451ba5b6425320ddc764245a2eab1bd5e2b5121d9a2774bdb5df9438b`
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-linux-arm.tar.gz) | `d12704bc6c821d3afcd206234fbd32e57cefcb5a5d15a40434b6b0ef4781d7fa77080e490678005225f24b116540ff51e436274debf66a6eb2247cd1dc833e6c`
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-linux-arm64.tar.gz) | `da0d110751fa9adac69ed2166eb82b8634989a32b65981eff014c84449047abfb94fe015e2d2e22665d57ff19f673e2c9f6549c578ad1b1e2f18b39871b50b81`
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-linux-ppc64le.tar.gz) | `7ac2b85bba9485dd38aed21895d627d34beb9e3b238e0684a9864f4ce2cfa67d7b3b7c04babc2ede7144d05beacdbe11c28c7d53a5b0041004700b2854b68042`
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-linux-s390x.tar.gz) | `ac447eabc5002a059e614b481d25e668735a7858134f8ad49feb388bb9f9191ff03b65da57bb49811119983e8744c8fdc7d19c184d9232bd6d038fae9eeec7c6`
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-windows-386.tar.gz) | `7c7dac7af329e4515302e7c35d3a19035352b4211942f254a4bb94c582a89d740b214d236ba6e35b9e78945a06b7e6fe8d70da669ecc19a40b7a9e8eaa2c0a28`
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-client-windows-amd64.tar.gz) | `0c89b70a25551123ffdd7c5d3cc499832454745508c5f539f13b4ea0bf6eea1afd16e316560da9cf68e5178ae69d91ccfe6c02d7054588db3fac15c30ed96f4b`
+
+### Server Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-server-linux-amd64.tar.gz) | `3396e6e0516a09999ec26631e305cf0fb1eb0109ca1490837550b7635eb051dd92443de8f4321971fc2b4030ea2d8da4bfe8b85887505dec96e2a136b6a46617`
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-server-linux-arm.tar.gz) | `cdea122a2d8d602ec0c89c1135ecfc27c47662982afc5b94edf4a6db7d759f27d6fe8d8b727bddf798bfec214a50e8d8a6d8eb0bca2ad5b1f72eb3768afd37f1`
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-server-linux-arm64.tar.gz) | `6543186a3f4437fb475fbc6a5f537640ab00afb2a22678c468c3699b3f7493f8b35fb6ca14694406ffc90ff8faad17a1d9d9d45732baa976cb69f4b27281295a`
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-server-linux-ppc64le.tar.gz) | `fde8dfeb9a0b243c8bef5127a9c63bf685429e2ff7e486ac8bae373882b87a4bd1b28a12955e3cce1c04eb0e6a67aabba43567952f9deef943a75fcb157a949c`
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-server-linux-s390x.tar.gz) | `399d004ee4db5d367f37a1fa9ace63b5db4522bd25eeb32225019f3df9b70c715d2159f6556015ddffe8f49aa0f72a1f095f742244637105ddbed3fb09570d0d`
+
+### Node Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-node-linux-amd64.tar.gz) | `fd865c2fcc71796d73c90982f90c789a44a921cf1d56aee692bd00efaa122dcc903b0448f285a06b0a903e809f8310546764b742823fb8d10690d36ec9e27cbd`
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-node-linux-arm.tar.gz) | `63aeb35222241e2a9285aeee4190b4b49c49995666db5cdb142016ca87872e7fdafc9723bc5de1797a45cc7e950230ed27be93ac165b8cda23ca2a9f9233c27a`
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-node-linux-arm64.tar.gz) | `3532574d9babfc064ce90099b514eadfc2a4ce69091f92d9c1a554ead91444373416d1506a35ef557438606a96cf0e5168a83ddd56c92593ea4adaa15b0b56a8`
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-node-linux-ppc64le.tar.gz) | `de59d91e5b0e4549e9a97f3a0243236e97babaed08c70f1a17273abf1966e6127db7546e1f91c3d66e933ce6eeb70bc65632ab473aa2c1be2a853da026c9d725`
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-node-linux-s390x.tar.gz) | `0cb8cf6f8dffd63122376a2f3e8986a2db155494a45430beea7cb5d1180417072428dabebd1af566ea13a4f079d46368c8b549be4b8a6c0f62a974290fd2fdb0`
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.2/kubernetes-node-windows-amd64.tar.gz) | `f1faf695f9f6fded681653f958b48779a2fecf50803af49787acba192441790c38b2b611ec8e238971508c56e67bb078fb423e8f6d9bddb392c199b5ee47937c`
+
+## Changelog since v1.19.0-alpha.1
+
+## Urgent Upgrade Notes
+
+### (No, really, you MUST read this before you upgrade)
+
+- Kubeadm now respects user-specified etcd versions in the ClusterConfiguration and properly uses them. If users do not want to stick to the version specified in the ClusterConfiguration, they should edit the kubeadm-config config map and remove the pinned etcd version. ([#89588](https://github.com/kubernetes/kubernetes/pull/89588), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+
+## Changes by Kind
+
+### API Change
+
+- Kube-proxy: add `--bind-address-hard-fail` flag to treat failure to bind to a port as fatal ([#89350](https://github.com/kubernetes/kubernetes/pull/89350), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle and Network]
+- Remove kubescheduler.config.k8s.io/v1alpha1 ([#89298](https://github.com/kubernetes/kubernetes/pull/89298), [@gavinfish](https://github.com/gavinfish)) [SIG Scheduling]
+- The ServiceAppProtocol feature gate is now beta and enabled by default, adding the new AppProtocol field to Services and Endpoints (see the sketch after this list). ([#90023](https://github.com/kubernetes/kubernetes/pull/90023), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- The Kubelet's `--volume-plugin-dir` option is now available via the Kubelet config file field `VolumePluginDir`. ([#88480](https://github.com/kubernetes/kubernetes/pull/88480), [@savitharaghunathan](https://github.com/savitharaghunathan)) [SIG Node]
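+
+A sketch of the new `AppProtocol` field on a Service port (assuming the `AppProtocol *string` field on `corev1.ServicePort`; names are hypothetical, and the field is gated by the ServiceAppProtocol feature gate, now on by default):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    appProtocol := "https"
+    svc := corev1.Service{
+        ObjectMeta: metav1.ObjectMeta{Name: "web"}, // hypothetical name
+        Spec: corev1.ServiceSpec{
+            Selector: map[string]string{"app": "web"},
+            Ports: []corev1.ServicePort{{
+                Name:        "https",
+                Port:        443,
+                AppProtocol: &appProtocol, // new field on Service/Endpoints ports
+            }},
+        },
+    }
+    fmt.Println(*svc.Spec.Ports[0].AppProtocol)
+}
+```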
+
+### Feature
+
+- Add client-side and server-side dry-run support to kubectl scale ([#89666](https://github.com/kubernetes/kubernetes/pull/89666), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- Add support for cgroups v2 node validation ([#89901](https://github.com/kubernetes/kubernetes/pull/89901), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle and Node]
+- Detailed scheduler scoring result can be printed at verbose level 10. ([#89384](https://github.com/kubernetes/kubernetes/pull/89384), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+- E2e.test can print the list of conformance tests that need to pass for the cluster to be conformant. ([#88924](https://github.com/kubernetes/kubernetes/pull/88924), [@dims](https://github.com/dims)) [SIG Architecture and Testing]
+- Feat: add azure shared disk support ([#89511](https://github.com/kubernetes/kubernetes/pull/89511), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Kube-apiserver backed by etcd3 exports metric showing the database file size. ([#89151](https://github.com/kubernetes/kubernetes/pull/89151), [@jingyih](https://github.com/jingyih)) [SIG API Machinery]
+- Kube-apiserver: The NodeRestriction admission plugin now restricts Node labels kubelets are permitted to set when creating a new Node to the `--node-labels` parameters accepted by kubelets in 1.16+. ([#90307](https://github.com/kubernetes/kubernetes/pull/90307), [@liggitt](https://github.com/liggitt)) [SIG Auth and Node]
+- Kubeadm: during 'upgrade apply', if the kube-proxy ConfigMap is missing, assume that kube-proxy should not be upgraded. Same applies to a missing kube-dns/coredns ConfigMap for the DNS server addon. Note that this is a temporary workaround until 'upgrade apply' supports phases. Once phases are supported the kube-proxy/dns upgrade should be skipped manually. ([#89593](https://github.com/kubernetes/kubernetes/pull/89593), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: switch control-plane static Pods to the "system-node-critical" priority class ([#90063](https://github.com/kubernetes/kubernetes/pull/90063), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Support for running on a host that uses cgroups v2 unified mode ([#85218](https://github.com/kubernetes/kubernetes/pull/85218), [@giuseppe](https://github.com/giuseppe)) [SIG Node]
+- Update etcd client side to v3.4.7 ([#89822](https://github.com/kubernetes/kubernetes/pull/89822), [@jingyih](https://github.com/jingyih)) [SIG API Machinery and Cloud Provider]
+
+### Bug or Regression
+
+- An issue preventing a GCP cloud-controller-manager running out-of-cluster from initializing new Nodes is now fixed. ([#90057](https://github.com/kubernetes/kubernetes/pull/90057), [@ialidzhikov](https://github.com/ialidzhikov)) [SIG Apps and Cloud Provider]
+- Avoid unnecessary GCE API calls when adding IP aliases or reflecting them in the Node object in the GCE cloud provider. ([#90242](https://github.com/kubernetes/kubernetes/pull/90242), [@wojtek-t](https://github.com/wojtek-t)) [SIG Apps, Cloud Provider and Network]
+- Azure: fix a concurrency issue in LB creation ([#89604](https://github.com/kubernetes/kubernetes/pull/89604), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- Fix a bug in the AWS NLB service when the nodePort for an existing servicePort is changed manually. ([#89562](https://github.com/kubernetes/kubernetes/pull/89562), [@M00nF1sh](https://github.com/M00nF1sh)) [SIG Cloud Provider]
+- CSINode initialization no longer crashes kubelet on startup when the API server is not reachable or kubelet does not yet have the right credentials. ([#89589](https://github.com/kubernetes/kubernetes/pull/89589), [@jsafrane](https://github.com/jsafrane)) [SIG Storage]
+- Client-go: resolves an issue with informers falling back to full list requests when timeouts are encountered, rather than re-establishing a watch. ([#89652](https://github.com/kubernetes/kubernetes/pull/89652), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Testing]
+- Dual-stack: fix a bug where the Service clusterIP did not respect the specified ipFamily ([#89612](https://github.com/kubernetes/kubernetes/pull/89612), [@SataQiu](https://github.com/SataQiu)) [SIG Network]
+- Ensure Azure availability zone is always in lower cases. ([#89722](https://github.com/kubernetes/kubernetes/pull/89722), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Explain CRDs whose resource names are the same as built-in objects ([#89505](https://github.com/kubernetes/kubernetes/pull/89505), [@knight42](https://github.com/knight42)) [SIG API Machinery, CLI and Testing]
+- Fix flaws in Azure File CSI translation ([#90162](https://github.com/kubernetes/kubernetes/pull/90162), [@rfranzke](https://github.com/rfranzke)) [SIG Release and Storage]
+- Fix kubectl describe CSINode nil pointer error ([#89646](https://github.com/kubernetes/kubernetes/pull/89646), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix kubectl diff so it doesn't actually persist patches ([#89795](https://github.com/kubernetes/kubernetes/pull/89795), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- Fix `kubectl version` so it prints version info without a config file ([#89913](https://github.com/kubernetes/kubernetes/pull/89913), [@zhouya0](https://github.com/zhouya0)) [SIG API Machinery and CLI]
+- Fix missing `-c` shorthand for `--container` flag of `kubectl alpha debug` ([#89674](https://github.com/kubernetes/kubernetes/pull/89674), [@superbrothers](https://github.com/superbrothers)) [SIG CLI]
+- Fix printers ignoring object average value ([#89142](https://github.com/kubernetes/kubernetes/pull/89142), [@zhouya0](https://github.com/zhouya0)) [SIG API Machinery]
+- Fix scheduler crash when removing node before its pods ([#89908](https://github.com/kubernetes/kubernetes/pull/89908), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Fix: get attach disk error due to missing item in max count table ([#89768](https://github.com/kubernetes/kubernetes/pull/89768), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fixed a bug where executing a kubectl command with a jsonpath output expression that has a nested range would ignore expressions following the nested range. ([#88464](https://github.com/kubernetes/kubernetes/pull/88464), [@brianpursley](https://github.com/brianpursley)) [SIG API Machinery]
+- Fixed a regression running kubectl commands with --local or --dry-run flags when no kubeconfig file is present ([#90243](https://github.com/kubernetes/kubernetes/pull/90243), [@soltysh](https://github.com/soltysh)) [SIG API Machinery, CLI and Testing]
+- Fixed an issue mounting credentials for service accounts whose name contains `.` characters ([#89696](https://github.com/kubernetes/kubernetes/pull/89696), [@nabokihms](https://github.com/nabokihms)) [SIG Auth]
+- Fixed mountOptions in iSCSI and FibreChannel volume plugins. ([#89172](https://github.com/kubernetes/kubernetes/pull/89172), [@jsafrane](https://github.com/jsafrane)) [SIG Storage]
+- Fixed the EndpointSlice controller to run without error on a cluster with the OwnerReferencesPermissionEnforcement validating admission plugin enabled. ([#89741](https://github.com/kubernetes/kubernetes/pull/89741), [@marun](https://github.com/marun)) [SIG Auth and Network]
+- Fixes a bug defining a default value for a replicas field in a custom resource definition that has the scale subresource enabled ([#89833](https://github.com/kubernetes/kubernetes/pull/89833), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- Fixes conversion error for HorizontalPodAutoscaler objects with invalid annotations ([#89963](https://github.com/kubernetes/kubernetes/pull/89963), [@liggitt](https://github.com/liggitt)) [SIG Autoscaling]
+- Fixes kubectl to apply all validly built objects, instead of stopping on error. ([#89848](https://github.com/kubernetes/kubernetes/pull/89848), [@seans3](https://github.com/seans3)) [SIG CLI and Testing]
+- For GCE cluster provider, fix bug of not being able to create internal type load balancer for clusters with more than 1000 nodes in a single zone. ([#89902](https://github.com/kubernetes/kubernetes/pull/89902), [@wojtek-t](https://github.com/wojtek-t)) [SIG Cloud Provider, Network and Scalability]
+- If firstTimestamp is not set use eventTime when printing event ([#89999](https://github.com/kubernetes/kubernetes/pull/89999), [@soltysh](https://github.com/soltysh)) [SIG CLI]
+- Fixed `cm.NodeAllocatableRoot(s.CgroupRoot, s.CgroupDriver)` returning the wrong cgroup root when the kubelet is started with cgroupPerQos=false and cgroupRoot=/docker: it returned nodeAllocatableRoot=/docker/kubepods instead of the correct /docker.
+  Because this value is passed as a cgroup root when creating the cAdvisor interface via
+  `kubeDeps.CAdvisorInterface, err = cadvisor.New(imageFsInfoProvider, s.RootDirectory, cgroupRoots, cadvisor.UsingLegacyCadvisorStats(s.ContainerRuntime, s.RemoteRuntimeEndpoint))`,
+  the wrong cgroupRoots prevented the eviction manager from collecting metrics from /docker, causing the kubelet to frequently log errors such as:
+  E0303 17:25:03.436781 63839 summary_sys_containers.go:47] Failed to get system container stats for "/docker": failed to get cgroup stats for "/docker": failed to get container info for "/docker": unknown container "/docker"
+  E0303 17:25:03.436809 63839 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics ([#88970](https://github.com/kubernetes/kubernetes/pull/88970), [@mysunshine92](https://github.com/mysunshine92)) [SIG Node]
+- In the kubelet resource metrics endpoint at /metrics/resource, the following metrics were renamed:
+  - node_cpu_usage_seconds --> node_cpu_usage_seconds_total
+  - container_cpu_usage_seconds --> container_cpu_usage_seconds_total
+  This is a partial revert of #86282, which was added in 1.18.0 and initially removed the `_total` suffix ([#89540](https://github.com/kubernetes/kubernetes/pull/89540), [@dashpole](https://github.com/dashpole)) [SIG Instrumentation and Node]
+- Kube-apiserver: multiple comma-separated protocols in a single X-Stream-Protocol-Version header are now recognized, in addition to multiple headers, complying with RFC2616 ([#89857](https://github.com/kubernetes/kubernetes/pull/89857), [@tedyu](https://github.com/tedyu)) [SIG API Machinery]
+- Kubeadm increased its timeout for the TLS bootstrapping process to complete upon join to 5 minutes ([#89735](https://github.com/kubernetes/kubernetes/pull/89735), [@rosti](https://github.com/rosti)) [SIG Cluster Lifecycle]
+- Kubeadm: during join when a check is performed that a Node with the same name already exists in the cluster, make sure the NodeReady condition is properly validated ([#89602](https://github.com/kubernetes/kubernetes/pull/89602), [@kvaps](https://github.com/kvaps)) [SIG Cluster Lifecycle]
+- Kubeadm: fix a bug where, after an upgrade to 1.18.x, nodes could not join the cluster due to missing RBAC rules ([#89537](https://github.com/kubernetes/kubernetes/pull/89537), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: fix misleading warning about passing control-plane related flags on 'kubeadm join' ([#89596](https://github.com/kubernetes/kubernetes/pull/89596), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubectl azure authentication: fixed a regression in 1.18.0 where "spn:" prefix was unexpectedly added to the `apiserver-id` configuration in the kubeconfig file ([#89706](https://github.com/kubernetes/kubernetes/pull/89706), [@weinong](https://github.com/weinong)) [SIG API Machinery and Auth]
+- Restore the ability to `kubectl apply --prune` without the --namespace flag. Since 1.17, `kubectl apply --prune` only pruned resources in the default namespace (or the one from kubeconfig or explicitly specified by command-line flag). This was a breaking change from kubectl 1.16, which could prune resources in all namespaces in the config file. This patch restores the kubectl 1.16 behaviour. ([#89551](https://github.com/kubernetes/kubernetes/pull/89551), [@tatsuhiro-t](https://github.com/tatsuhiro-t)) [SIG CLI and Testing]
+- Restores priority of static control plane pods in the cluster/gce/manifests control-plane manifests ([#89970](https://github.com/kubernetes/kubernetes/pull/89970), [@liggitt](https://github.com/liggitt)) [SIG Cluster Lifecycle and Node]
+- Service account tokens bound to pods can now be used during the pod deletion grace period. ([#89583](https://github.com/kubernetes/kubernetes/pull/89583), [@liggitt](https://github.com/liggitt)) [SIG Auth]
+- Sync LB backend nodes for Service Type=LoadBalancer on Add/Delete node events. ([#81185](https://github.com/kubernetes/kubernetes/pull/81185), [@andrewsykim](https://github.com/andrewsykim)) [SIG Apps and Network]
+
+### Other (Cleanup or Flake)
+
+- Change beta.kubernetes.io/os to kubernetes.io/os ([#89460](https://github.com/kubernetes/kubernetes/pull/89460), [@wawa0210](https://github.com/wawa0210)) [SIG Testing and Windows]
+- Changes the not-found message when using `kubectl get` to retrieve non-namespaced resources ([#89861](https://github.com/kubernetes/kubernetes/pull/89861), [@rccrdpccl](https://github.com/rccrdpccl)) [SIG CLI]
+- Node ([#76443](https://github.com/kubernetes/kubernetes/pull/76443), [@mgdevstack](https://github.com/mgdevstack)) [SIG Architecture, Network, Node, Testing and Windows]
+- None. ([#90273](https://github.com/kubernetes/kubernetes/pull/90273), [@nilo19](https://github.com/nilo19)) [SIG Cloud Provider]
+- Reduce event spam during a volume operation error. ([#89794](https://github.com/kubernetes/kubernetes/pull/89794), [@msau42](https://github.com/msau42)) [SIG Storage]
+- Adds functionality to generate events when PV or PVC processing encounters certain failures. The events help users know the reason for the failure so they can take the necessary recovery actions. ([#89845](https://github.com/kubernetes/kubernetes/pull/89845), [@yuga711](https://github.com/yuga711)) [SIG Apps]
+- The PodShareProcessNamespace feature gate has been removed, and the feature is now unconditionally enabled. ([#90099](https://github.com/kubernetes/kubernetes/pull/90099), [@tanjunchen](https://github.com/tanjunchen)) [SIG Node]
+- Update default etcd server version to 3.4.4 ([#89214](https://github.com/kubernetes/kubernetes/pull/89214), [@jingyih](https://github.com/jingyih)) [SIG API Machinery, Cluster Lifecycle and Testing]
+- Update default etcd server version to 3.4.7 ([#89895](https://github.com/kubernetes/kubernetes/pull/89895), [@jingyih](https://github.com/jingyih)) [SIG API Machinery, Cluster Lifecycle and Testing]
+
+
+# v1.19.0-alpha.1
+
+[Documentation](https://docs.k8s.io)
+
+## Downloads for v1.19.0-alpha.1
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes.tar.gz) | `d5930e62f98948e3ae2bc0a91b2cb93c2009202657b9e798e43fcbf92149f50d991af34a49049b2640db729efc635d643d008f4b3dd6c093cac4426ee3d5d147`
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-src.tar.gz) | `5d92125ec3ca26b6b0af95c6bb3289bb7cf60a4bad4e120ccdad06ffa523c239ca8e608015b7b5a1eb789bfdfcedbe0281518793da82a7959081fb04cf53c174`
+
+### Client Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-darwin-386.tar.gz) | `08d307dafdd8e1aa27721f97f038210b33261d1777ea173cc9ed4b373c451801988a7109566425fce32d38df70bdf0be6b8cfff69da768fbd3c303abd6dc13a5`
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-darwin-amd64.tar.gz) | `08c3b722a62577d051e300ebc3c413ead1bd3e79555598a207c704064116087323215fb402bae7584b9ffd08590f36fa8a35f13f8fea1ce92e8f144e3eae3384`
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-linux-386.tar.gz) | `0735978b4d4cb0601171eae3cc5603393c00f032998f51d79d3b11e4020f4decc9559905e9b02ddcb0b6c3f4caf78f779940ebc97996e3b96b98ba378fbe189d`
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-linux-amd64.tar.gz) | `ca55fc431d59c1a0bf1f1c248da7eab65215e438fcac223d4fc3a57fae0205869e1727b2475dfe9b165921417d68ac380a6e42bf7ea6732a34937ba2590931ce`
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-linux-arm.tar.gz) | `4e1aa9e640d7cf0ccaad19377e4c3ca9a60203daa2ce0437d1d40fdea0e43759ef38797e948cdc3c676836b01e83f1bfde51effc0579bf832f6f062518f03f06`
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-linux-arm64.tar.gz) | `fca5df8c2919a9b3d99248120af627d9a1b5ddf177d9a10f04eb4e486c14d4e3ddb72e3abc4733b5078e0d27204a51e2f714424923fb92a5351137f82d87d6ea`
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-linux-ppc64le.tar.gz) | `6a98a4f99aa8b72ec815397c5062b90d5c023092da28fa7bca1cdadf406e2d86e2fd3a0eeab28574064959c6926007423c413d9781461e433705452087430d57`
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-linux-s390x.tar.gz) | `94724c17985ae2dbd3888e6896f300f95fec8dc2bf08e768849e98b05affc4381b322d802f41792b8e6da4708ce1ead2edcb8f4d5299be6267f6559b0d49e484`
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-windows-386.tar.gz) | `5a076bf3a5926939c170a501f8292a38003552848c45c1f148a97605b7ac9843fb660ef81a46abe6d139f4c5eaa342d4b834a799ee7055d5a548d189b31d7124`
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-client-windows-amd64.tar.gz) | `4b395894bfd9cfa0976512d1d58c0056a80bacefc798de294db6d3f363bd5581fd3ce2e4bdc1b902d46c8ce2ac87a98ced56b6b29544c86e8444fb8e9465faea`
+
+### Server Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-server-linux-amd64.tar.gz) | `6720d1b826dc20e56b0314e580403cd967430ff25bdbe08e8bf453fed339557d2a4ace114c2f524e6b6814ec9341ccdea870f784ebb53a52056ca3ab22e5cc36`
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-server-linux-arm.tar.gz) | `f09b295f5a95cc72494eb1c0e9706b237a8523eacda182778e9afdb469704c7eacd29614aff6d3d7aff3bc1783fb277d52ad56a1417f1bd973eeb9bdc8086695`
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-server-linux-arm64.tar.gz) | `24787767abd1d67a4d0234433e1693ea3e1e906364265ee03e58ba203b66583b75d4ce0c4185756fc529997eb9a842d65841962cd228df9c182a469dbd72493d`
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-server-linux-ppc64le.tar.gz) | `a117e609729263d7bd58aac156efa33941f0f9aa651892d1abf32cfa0a984aa495fccd3be8385cae083415bfa8f81942648d5978f72e950103e42184fd0d7527`
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-server-linux-s390x.tar.gz) | `19280a6dc20f019d23344934f8f1ec6aa17c3374b9c569d4c173535a8cd9e298b8afcabe06d232a146c9c7cb4bfe7d1d0e10aa2ab9184ace0b7987e36973aaef`
+
+### Node Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-node-linux-amd64.tar.gz) | `c4b23f113ed13edb91b59a498d15de8b62ff1005243f2d6654a11468511c9d0ebaebb6dc02d2fa505f18df446c9221e77d7fc3147fa6704cde9bec5d6d80b5a3`
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-node-linux-arm.tar.gz) | `8dcf5531a5809576049c455d3c5194f09ddf3b87995df1e8ca4543deff3ffd90a572539daff9aa887e22efafedfcada2e28035da8573e3733c21778e4440677a`
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-node-linux-arm64.tar.gz) | `4b3f4dfee2034ce7d01fef57b8766851fe141fc72da0f9edeb39aca4c7a937e2dccd2c198a83fbb92db7911d81e50a98bd0a17b909645adbeb26e420197db2cd`
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-node-linux-ppc64le.tar.gz) | `df0e87f5e42056db2bbc7ef5f08ecda95d66afc3f4d0bc57f6efcc05834118c39ab53d68595d8f2bb278829e33b9204c5cce718d8bf841ce6cccbb86d0d20730`
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-node-linux-s390x.tar.gz) | `3a6499b008a68da52f8ae12eb694885d9e10a8f805d98f28fc5f7beafea72a8e180df48b5ca31097b2d4779c61ff67216e516c14c2c812163e678518d95f22d6`
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.19.0-alpha.1/kubernetes-node-windows-amd64.tar.gz) | `c311373506cbfa0244ac92a709fbb9bddb46cbeb130733bdb689641ecee6b21a7a7f020eae4856a3f04a3845839dc5e0914cddc3478d55cd3d5af3d7804aa5ba`
+
+## Changelog since v1.19.0-alpha.0
+
+## Urgent Upgrade Notes
+
+### (No, really, you MUST read this before you upgrade)
+
+- The StreamingProxyRedirects feature and `--redirect-container-streaming` flag are deprecated, and will be removed in a future release. The default behavior (proxy streaming requests through the kubelet) will be the only supported option.
+  If you are setting `--redirect-container-streaming=true`, then you must migrate off this configuration; starting in v1.20 the flag can no longer be enabled. If you are not setting the flag, no action is necessary. ([#88290](https://github.com/kubernetes/kubernetes/pull/88290), [@tallclair](https://github.com/tallclair)) [SIG API Machinery and Node]
+
+- `kubectl` no longer defaults to `http://localhost:8080`. If you own one of these legacy clusters, you are *strongly* encouraged to secure your server. If you cannot secure your server and you're a client-go user who was relying on that behavior, you can set `KUBERNETES_MASTER`. Set `--server`, `--kubeconfig`, or `KUBECONFIG` to make it work in `kubectl`. ([#86173](https://github.com/kubernetes/kubernetes/pull/86173), [@soltysh](https://github.com/soltysh)) [SIG API Machinery, CLI and Testing]
+
+## Changes by Kind
+
+### Deprecation
+
+- AlgorithmSource is removed from v1alpha2 Scheduler ComponentConfig ([#87999](https://github.com/kubernetes/kubernetes/pull/87999), [@damemi](https://github.com/damemi)) [SIG Scheduling]
+- The Azure service annotation service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset has been deprecated. Support for it will be removed in a future release. ([#88462](https://github.com/kubernetes/kubernetes/pull/88462), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Kube-proxy: deprecate the `--healthz-port` and `--metrics-port` flags; please use `--healthz-bind-address` and `--metrics-bind-address` instead ([#88512](https://github.com/kubernetes/kubernetes/pull/88512), [@SataQiu](https://github.com/SataQiu)) [SIG Network]
+- Kubeadm: deprecate the usage of the experimental flag '--use-api' under the 'kubeadm alpha certs renew' command. ([#88827](https://github.com/kubernetes/kubernetes/pull/88827), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubernetes no longer supports building hyperkube images ([#88676](https://github.com/kubernetes/kubernetes/pull/88676), [@dims](https://github.com/dims)) [SIG Cluster Lifecycle and Release]
+
+### API Change
+
+- A new IngressClass resource has been added to enable better Ingress configuration. ([#88509](https://github.com/kubernetes/kubernetes/pull/88509), [@robscott](https://github.com/robscott)) [SIG API Machinery, Apps, CLI, Network, Node and Testing]
+- API additions to apiserver types ([#87179](https://github.com/kubernetes/kubernetes/pull/87179), [@Jefftree](https://github.com/Jefftree)) [SIG API Machinery, Cloud Provider and Cluster Lifecycle]
+- Add Scheduling Profiles to kubescheduler.config.k8s.io/v1alpha2 ([#88087](https://github.com/kubernetes/kubernetes/pull/88087), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling and Testing]
+- Added GenericPVCDataSource feature gate to enable using arbitrary custom resources as the data source for a PVC. ([#88636](https://github.com/kubernetes/kubernetes/pull/88636), [@bswartz](https://github.com/bswartz)) [SIG Apps and Storage]
+- Added support for multiple sizes huge pages on a container level ([#84051](https://github.com/kubernetes/kubernetes/pull/84051), [@bart0sh](https://github.com/bart0sh)) [SIG Apps, Node and Storage]
+- Allow user to specify fsgroup permission change policy for pods ([#88488](https://github.com/kubernetes/kubernetes/pull/88488), [@gnufied](https://github.com/gnufied)) [SIG Apps and Storage]
+- AppProtocol is a new field on Service and Endpoints resources, enabled with the ServiceAppProtocol feature gate. ([#88503](https://github.com/kubernetes/kubernetes/pull/88503), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- BlockVolume and CSIBlockVolume features are now GA. ([#88673](https://github.com/kubernetes/kubernetes/pull/88673), [@jsafrane](https://github.com/jsafrane)) [SIG Apps, Node and Storage]
+- Consumers of the 'certificatesigningrequests/approval' API must now grant permission to 'approve' CSRs for the 'signerName' specified on the CSR. More information on the new signerName field can be found at https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/20190607-certificates-api.md#signers ([#88246](https://github.com/kubernetes/kubernetes/pull/88246), [@munnerz](https://github.com/munnerz)) [SIG API Machinery, Apps, Auth, CLI, Node and Testing]
+- CustomResourceDefinition schemas that use `x-kubernetes-list-map-keys` to specify properties that uniquely identify list items must make those properties required or have a default value, to ensure those properties are present for all list items. See https://kubernetes.io/docs/reference/using-api/api-concepts/#merge-strategy for details. ([#88076](https://github.com/kubernetes/kubernetes/pull/88076), [@eloyekunle](https://github.com/eloyekunle)) [SIG API Machinery and Testing]
+- Fixed missing validation of uniqueness of list items in lists with `x-kubernetes-list-type: map` or `x-kubernetes-list-type: set` in CustomResources. ([#84920](https://github.com/kubernetes/kubernetes/pull/84920), [@sttts](https://github.com/sttts)) [SIG API Machinery]
+- Fixes a regression with clients prior to 1.15 not being able to update podIP in pod status, or podCIDR in node spec, against >= 1.16 API servers ([#88505](https://github.com/kubernetes/kubernetes/pull/88505), [@liggitt](https://github.com/liggitt)) [SIG Apps and Network]
+- Ingress: Add Exact and Prefix matching to Ingress PathTypes ([#88587](https://github.com/kubernetes/kubernetes/pull/88587), [@cmluciano](https://github.com/cmluciano)) [SIG Apps, Cluster Lifecycle and Network]
+- Ingress: Add alternate backends via TypedLocalObjectReference ([#88775](https://github.com/kubernetes/kubernetes/pull/88775), [@cmluciano](https://github.com/cmluciano)) [SIG Apps and Network]
+- Ingress: allow wildcard hosts in IngressRule ([#88858](https://github.com/kubernetes/kubernetes/pull/88858), [@cmluciano](https://github.com/cmluciano)) [SIG Network]
+- Introduces optional --detect-local flag to kube-proxy.
+ Currently the only supported value is "cluster-cidr",
+ which is the default if not specified. ([#87748](https://github.com/kubernetes/kubernetes/pull/87748), [@satyasm](https://github.com/satyasm)) [SIG Cluster Lifecycle, Network and Scheduling]
+- Kube-controller-manager and kube-scheduler expose profiling by default to match the kube-apiserver. Use `--enable-profiling=false` to disable. ([#88663](https://github.com/kubernetes/kubernetes/pull/88663), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Cloud Provider and Scheduling]
+- Kube-scheduler can run more than one scheduling profile. Given a pod, the profile is selected by using its `.spec.schedulerName`. ([#88285](https://github.com/kubernetes/kubernetes/pull/88285), [@alculquicondor](https://github.com/alculquicondor)) [SIG Apps, Scheduling and Testing]
+- Move TaintBasedEvictions feature gates to GA ([#87487](https://github.com/kubernetes/kubernetes/pull/87487), [@skilxn-go](https://github.com/skilxn-go)) [SIG API Machinery, Apps, Node, Scheduling and Testing]
+- Moving Windows RunAsUserName feature to GA ([#87790](https://github.com/kubernetes/kubernetes/pull/87790), [@marosset](https://github.com/marosset)) [SIG Apps and Windows]
+- New flag --endpointslice-updates-batch-period in kube-controller-manager can be used to reduce number of endpointslice updates generated by pod changes. ([#88745](https://github.com/kubernetes/kubernetes/pull/88745), [@mborsz](https://github.com/mborsz)) [SIG API Machinery, Apps and Network]
+- The new flag `--show-hidden-metrics-for-version` in the kubelet can be used to show all hidden metrics that were deprecated in the previous minor release. ([#85282](https://github.com/kubernetes/kubernetes/pull/85282), [@serathius](https://github.com/serathius)) [SIG Node]
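+
+  For example, on a v1.18 kubelet (a sketch; the version value is illustrative):
+
+      kubelet --show-hidden-metrics-for-version=1.17
+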
+- Removes ConfigMap as suggestion for IngressClass parameters ([#89093](https://github.com/kubernetes/kubernetes/pull/89093), [@robscott](https://github.com/robscott)) [SIG Network]
+- Scheduler Extenders can now be configured in the v1alpha2 component config ([#88768](https://github.com/kubernetes/kubernetes/pull/88768), [@damemi](https://github.com/damemi)) [SIG Release, Scheduling and Testing]
+- The apiserver/v1alpha1#EgressSelectorConfiguration API is now beta. ([#88502](https://github.com/kubernetes/kubernetes/pull/88502), [@caesarxuchao](https://github.com/caesarxuchao)) [SIG API Machinery]
+- The storage.k8s.io/CSIDriver has moved to GA, and is now available for use. ([#84814](https://github.com/kubernetes/kubernetes/pull/84814), [@huffmanca](https://github.com/huffmanca)) [SIG API Machinery, Apps, Auth, Node, Scheduling, Storage and Testing]
+- VolumePVCDataSource moves to GA in 1.18 release ([#88686](https://github.com/kubernetes/kubernetes/pull/88686), [@j-griffith](https://github.com/j-griffith)) [SIG Apps, CLI and Cluster Lifecycle]
+
+### Feature
+
+- deps: Update to Golang 1.13.9
+ - build: Remove kube-cross image building ([#89275](https://github.com/kubernetes/kubernetes/pull/89275), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- Add --dry-run to kubectl delete, taint, replace ([#88292](https://github.com/kubernetes/kubernetes/pull/88292), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- Add `rest_client_rate_limiter_duration_seconds` metric to component-base to track client side rate limiter latency in seconds. Broken down by verb and URL. ([#88134](https://github.com/kubernetes/kubernetes/pull/88134), [@jennybuckley](https://github.com/jennybuckley)) [SIG API Machinery, Cluster Lifecycle and Instrumentation]
+- Add huge page stats to Allocated resources in "kubectl describe node" ([#80605](https://github.com/kubernetes/kubernetes/pull/80605), [@odinuge](https://github.com/odinuge)) [SIG CLI]
+- Add support for pre allocated huge pages with different sizes, on node level ([#89252](https://github.com/kubernetes/kubernetes/pull/89252), [@odinuge](https://github.com/odinuge)) [SIG Apps and Node]
+- Adds support for NodeCIDR as an argument to --detect-local-mode ([#88935](https://github.com/kubernetes/kubernetes/pull/88935), [@satyasm](https://github.com/satyasm)) [SIG Network]
+- Allow user to specify resource using --filename flag when invoking kubectl exec ([#88460](https://github.com/kubernetes/kubernetes/pull/88460), [@soltysh](https://github.com/soltysh)) [SIG CLI and Testing]
+- Apiserver: added a new flag, --goaway-chance, which is the fraction of requests that will be closed gracefully (GOAWAY) to prevent HTTP/2 clients from getting stuck on a single apiserver.
+  After a connection is closed (GOAWAY received), the client's other in-flight requests are not affected, and the client will reconnect.
+  The flag's minimum value is 0 (off) and its maximum is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.
+  Clusters with a single apiserver, or which don't use a load balancer, should NOT enable this. ([#88567](https://github.com/kubernetes/kubernetes/pull/88567), [@answer1991](https://github.com/answer1991)) [SIG API Machinery]
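+
+  For example, enabling it at the recommended starting point (a sketch):
+
+      kube-apiserver --goaway-chance=.001
+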
+- The Azure Cloud Provider now supports using Azure network resources (Virtual Network, Load Balancer, Public IP, Route Table, Network Security Group, etc.) in a different AAD tenant and subscription than those of the Kubernetes cluster. To use the feature, please see https://github.com/kubernetes-sigs/cloud-provider-azure/blob/master/docs/cloud-provider-config.md#host-network-resources-in-different-aad-tenant-and-subscription. ([#88384](https://github.com/kubernetes/kubernetes/pull/88384), [@bowen5](https://github.com/bowen5)) [SIG Cloud Provider]
+- Azure: add support for single stack IPv6 ([#88448](https://github.com/kubernetes/kubernetes/pull/88448), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- DefaultConstraints can be specified for the PodTopologySpread plugin in the component config ([#88671](https://github.com/kubernetes/kubernetes/pull/88671), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- EndpointSlice controller waits longer to retry failed sync. ([#89438](https://github.com/kubernetes/kubernetes/pull/89438), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- Feat: change azure disk api-version ([#89250](https://github.com/kubernetes/kubernetes/pull/89250), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Feat: support [Azure shared disk](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-shared-enable), added a new field(`maxShares`) in azure disk storage class:
+
+ kind: StorageClass
+ apiVersion: storage.k8s.io/v1
+ metadata:
+ name: shared-disk
+ provisioner: kubernetes.io/azure-disk
+ parameters:
+ skuname: Premium_LRS # Currently only available with premium SSDs.
+ cachingMode: None # ReadOnly host caching is not available for premium SSDs with maxShares>1
+ maxShares: 2 ([#89328](https://github.com/kubernetes/kubernetes/pull/89328), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Kube-apiserver, kube-scheduler and kube-controller-manager now use the SO_REUSEPORT socket option when listening on the address defined by the --bind-address and --secure-port flags, when running on Unix systems (Windows is NOT supported). This allows running multiple instances of those processes on a single host with the same configuration, so they can be updated or restarted gracefully, without causing downtime. ([#88893](https://github.com/kubernetes/kubernetes/pull/88893), [@invidian](https://github.com/invidian)) [SIG API Machinery, Scheduling and Testing]
+- Kubeadm: The ClusterStatus struct present in the kubeadm-config ConfigMap is deprecated and will be removed in a future version. kubeadm will continue to maintain it until it is removed. The same information can be found in the `kubeadm.kubernetes.io/etcd.advertise-client-urls` and `kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint` annotations on the `etcd` and `kube-apiserver` pods, respectively. ([#87656](https://github.com/kubernetes/kubernetes/pull/87656), [@ereslibre](https://github.com/ereslibre)) [SIG Cluster Lifecycle]
+- Kubeadm: add the experimental feature gate PublicKeysECDSA that can be used to create a
+ cluster with ECDSA certificates from "kubeadm init". Renewal of existing ECDSA certificates is
+ also supported using "kubeadm alpha certs renew", but not switching between the RSA and
+ ECDSA algorithms on the fly or during upgrades. ([#86953](https://github.com/kubernetes/kubernetes/pull/86953), [@rojkov](https://github.com/rojkov)) [SIG API Machinery, Auth and Cluster Lifecycle]
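+
+  For example, creating a cluster with ECDSA certificates (a sketch, assuming a fresh control-plane node):
+
+      kubeadm init --feature-gates=PublicKeysECDSA=true
+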
+- Kubeadm: on kubeconfig certificate renewal, keep the embedded CA in sync with the one on disk ([#88052](https://github.com/kubernetes/kubernetes/pull/88052), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: support Windows specific kubelet flags in kubeadm-flags.env ([#88287](https://github.com/kubernetes/kubernetes/pull/88287), [@gab-satchi](https://github.com/gab-satchi)) [SIG Cluster Lifecycle and Windows]
+- Kubeadm: upgrade supports fallback to the nearest known etcd version if an unknown k8s version is passed ([#88373](https://github.com/kubernetes/kubernetes/pull/88373), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- `kubectl cluster-info dump` now displays only a message telling you where the output was written when the output is not standard output. ([#88765](https://github.com/kubernetes/kubernetes/pull/88765), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- The new flag `--show-hidden-metrics-for-version` in kube-scheduler can be used to show all hidden metrics that were deprecated in the previous minor release. ([#84913](https://github.com/kubernetes/kubernetes/pull/84913), [@serathius](https://github.com/serathius)) [SIG Instrumentation and Scheduling]
+- Print NotReady when pod is not ready based on its conditions. ([#88240](https://github.com/kubernetes/kubernetes/pull/88240), [@soltysh](https://github.com/soltysh)) [SIG CLI]
+- Scheduler Extender API is now located under k8s.io/kube-scheduler/extender ([#88540](https://github.com/kubernetes/kubernetes/pull/88540), [@damemi](https://github.com/damemi)) [SIG Release, Scheduling and Testing]
+- Scheduler framework permit plugins now run at the end of the scheduling cycle, after reserve plugins. Waiting on permit will remain in the beginning of the binding cycle. ([#88199](https://github.com/kubernetes/kubernetes/pull/88199), [@mateuszlitwin](https://github.com/mateuszlitwin)) [SIG Scheduling]
+- Signatures on scale client methods have been modified to accept `context.Context` as a first argument. Signatures of Get, Update, and Patch methods have been updated to accept GetOptions, UpdateOptions and PatchOptions respectively. ([#88599](https://github.com/kubernetes/kubernetes/pull/88599), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG API Machinery, Apps, Autoscaling and CLI]
+- Signatures on the dynamic client methods have been modified to accept `context.Context` as a first argument. Signatures of Delete and DeleteCollection methods now accept DeleteOptions by value instead of by reference. ([#88906](https://github.com/kubernetes/kubernetes/pull/88906), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Apps, CLI, Cluster Lifecycle, Storage and Testing]
+- Signatures on the metadata client methods have been modified to accept `context.Context` as a first argument. Signatures of Delete and DeleteCollection methods now accept DeleteOptions by value instead of by reference. ([#88910](https://github.com/kubernetes/kubernetes/pull/88910), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Apps and Testing]
+- Support creating or updating VMSS asynchronously. ([#89248](https://github.com/kubernetes/kubernetes/pull/89248), [@nilo19](https://github.com/nilo19)) [SIG Cloud Provider]
+- The kubelet and the default docker runtime now support running ephemeral containers in the Linux process namespace of a target container. Other container runtimes must implement this feature before it will be available in that runtime. ([#84731](https://github.com/kubernetes/kubernetes/pull/84731), [@verb](https://github.com/verb)) [SIG Node]
+- Update etcd client side to v3.4.4 ([#89169](https://github.com/kubernetes/kubernetes/pull/89169), [@jingyih](https://github.com/jingyih)) [SIG API Machinery and Cloud Provider]
+- Upgrade to azure-sdk v40.2.0 ([#89105](https://github.com/kubernetes/kubernetes/pull/89105), [@andyzhangx](https://github.com/andyzhangx)) [SIG CLI, Cloud Provider, Cluster Lifecycle, Instrumentation, Storage and Testing]
+- Webhooks will have alpha support for network proxy ([#85870](https://github.com/kubernetes/kubernetes/pull/85870), [@Jefftree](https://github.com/Jefftree)) [SIG API Machinery, Auth and Testing]
+- When client certificate files are provided, reload files for new connections, and close connections when a certificate changes. ([#79083](https://github.com/kubernetes/kubernetes/pull/79083), [@jackkleeman](https://github.com/jackkleeman)) [SIG API Machinery, Auth, Node and Testing]
+- When deleting objects using kubectl with the --force flag, you are no longer required to also specify --grace-period=0. ([#87776](https://github.com/kubernetes/kubernetes/pull/87776), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
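+
+  For example, force-deleting a pod now needs only (the pod name is a placeholder):
+
+      kubectl delete pod <pod-name> --force
+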
+- `kubectl` now contains a `kubectl alpha debug` command. This command allows attaching an ephemeral container to a running pod for the purposes of debugging. ([#88004](https://github.com/kubernetes/kubernetes/pull/88004), [@verb](https://github.com/verb)) [SIG CLI]
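+
+  A minimal invocation (a sketch; the pod name and image are illustrative):
+
+      kubectl alpha debug -it <pod-name> --image=busybox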
+
+### Documentation
+
+- Improved error message for incorrect auth field. ([#82829](https://github.com/kubernetes/kubernetes/pull/82829), [@martin-schibsted](https://github.com/martin-schibsted)) [SIG Auth]
+- Update Japanese translation for kubectl help ([#86837](https://github.com/kubernetes/kubernetes/pull/86837), [@inductor](https://github.com/inductor)) [SIG CLI and Docs]
+- Updated the instructions for deploying the sample app. ([#82785](https://github.com/kubernetes/kubernetes/pull/82785), [@ashish-billore](https://github.com/ashish-billore)) [SIG API Machinery]
+- `kubectl plugin` now prints a note how to install krew ([#88577](https://github.com/kubernetes/kubernetes/pull/88577), [@corneliusweig](https://github.com/corneliusweig)) [SIG CLI]
+
+### Other (Bug, Cleanup or Flake)
+
+- A PV set from in-tree source will have ordered requirement values in NodeAffinity when converted to CSIPersistentVolumeSource ([#88987](https://github.com/kubernetes/kubernetes/pull/88987), [@jiahuif](https://github.com/jiahuif)) [SIG Storage]
+- Add delays between goroutines for vm instance update ([#88094](https://github.com/kubernetes/kubernetes/pull/88094), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- Add init containers log to cluster dump info. ([#88324](https://github.com/kubernetes/kubernetes/pull/88324), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Azure VMSS LoadBalancerBackendAddressPools updating has been improved with sequential-sync + concurrent-async requests. ([#88699](https://github.com/kubernetes/kubernetes/pull/88699), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Azure auth module for kubectl now requests login after refresh token expires. ([#86481](https://github.com/kubernetes/kubernetes/pull/86481), [@tdihp](https://github.com/tdihp)) [SIG API Machinery and Auth]
+- AzureFile and CephFS use new Mount library that prevents logging of sensitive mount options. ([#88684](https://github.com/kubernetes/kubernetes/pull/88684), [@saad-ali](https://github.com/saad-ali)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Storage]
+- The `beta.kubernetes.io/arch` label has been deprecated since v1.14 and is targeted for removal in v1.18 ([#89462](https://github.com/kubernetes/kubernetes/pull/89462), [@wawa0210](https://github.com/wawa0210)) [SIG Testing]
+- Build: Enable kube-cross image-building on K8s Infra ([#88562](https://github.com/kubernetes/kubernetes/pull/88562), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- CPU limits are now respected for Windows containers. If a node is over-provisioned, no weighting is used - only limits are respected. ([#86101](https://github.com/kubernetes/kubernetes/pull/86101), [@PatrickLang](https://github.com/PatrickLang)) [SIG Node, Testing and Windows]
+- Client-go certificate manager rotation gained the ability to preserve optional intermediate chains accompanying issued certificates ([#88744](https://github.com/kubernetes/kubernetes/pull/88744), [@jackkleeman](https://github.com/jackkleeman)) [SIG API Machinery and Auth]
+- Cloud provider config CloudProviderBackoffMode has been removed since it won't be used anymore. ([#88463](https://github.com/kubernetes/kubernetes/pull/88463), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Conformance image now depends on stretch-slim instead of debian-hyperkube-base as that image is being deprecated and removed. ([#88702](https://github.com/kubernetes/kubernetes/pull/88702), [@dims](https://github.com/dims)) [SIG Cluster Lifecycle, Release and Testing]
+- Deprecate --generator flag from kubectl create commands ([#88655](https://github.com/kubernetes/kubernetes/pull/88655), [@soltysh](https://github.com/soltysh)) [SIG CLI]
+- Deprecated the `kubectl top` flags related to Heapster and
+  dropped support for Heapster in `kubectl top` ([#87498](https://github.com/kubernetes/kubernetes/pull/87498), [@serathius](https://github.com/serathius)) [SIG CLI]
+- EndpointSlice should not contain endpoints for terminating pods ([#89056](https://github.com/kubernetes/kubernetes/pull/89056), [@andrewsykim](https://github.com/andrewsykim)) [SIG Apps and Network]
+- Evictions due to pods breaching their ephemeral storage limits are now recorded by the `kubelet_evictions` metric and can be alerted on. ([#87906](https://github.com/kubernetes/kubernetes/pull/87906), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node]
+- FIX: prevent apiserver from panicking when failing to load audit webhook config file ([#88879](https://github.com/kubernetes/kubernetes/pull/88879), [@JoshVanL](https://github.com/JoshVanL)) [SIG API Machinery and Auth]
+- Fix /readyz to return error immediately after a shutdown is initiated, before the --shutdown-delay-duration has elapsed. ([#88911](https://github.com/kubernetes/kubernetes/pull/88911), [@tkashem](https://github.com/tkashem)) [SIG API Machinery]
+- Fix a bug that prevented the use of IPv6 addresses with leading zeros ([#89341](https://github.com/kubernetes/kubernetes/pull/89341), [@aojea](https://github.com/aojea)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- Fix a bug where ExternalTrafficPolicy is not applied to service ExternalIPs. ([#88786](https://github.com/kubernetes/kubernetes/pull/88786), [@freehan](https://github.com/freehan)) [SIG Network]
+- Fix a bug where kubenet fails to parse the tc output. ([#83572](https://github.com/kubernetes/kubernetes/pull/83572), [@chendotjs](https://github.com/chendotjs)) [SIG Network]
+- Fix bug with xfs_repair from stopping xfs mount ([#89444](https://github.com/kubernetes/kubernetes/pull/89444), [@gnufied](https://github.com/gnufied)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Storage]
+- Fix `kubectl describe ingress` annotations not being sorted. ([#88394](https://github.com/kubernetes/kubernetes/pull/88394), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix detection of SystemOOMs in which the victim is a container. ([#88871](https://github.com/kubernetes/kubernetes/pull/88871), [@dashpole](https://github.com/dashpole)) [SIG Node]
+- Fix handling of the aws-load-balancer-security-groups annotation. Security groups assigned with this annotation are no longer modified by Kubernetes, which is the behaviour most users expect, and no unnecessary security groups are created anymore when this annotation is used. ([#83446](https://github.com/kubernetes/kubernetes/pull/83446), [@Elias481](https://github.com/Elias481)) [SIG Cloud Provider]
+- Fix invalid VMSS updates due to incorrect cache ([#89002](https://github.com/kubernetes/kubernetes/pull/89002), [@ArchangelSDY](https://github.com/ArchangelSDY)) [SIG Cloud Provider]
+- Fix isCurrentInstance for Windows by removing the dependency of hostname. ([#89138](https://github.com/kubernetes/kubernetes/pull/89138), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fix kube-apiserver startup to wait for APIServices to be installed into the HTTP handler before reporting readiness. ([#89147](https://github.com/kubernetes/kubernetes/pull/89147), [@sttts](https://github.com/sttts)) [SIG API Machinery]
+- Fix kubectl create deployment image name ([#86636](https://github.com/kubernetes/kubernetes/pull/86636), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix missing "apiVersion" for "involvedObject" in Events for Nodes. ([#87537](https://github.com/kubernetes/kubernetes/pull/87537), [@uthark](https://github.com/uthark)) [SIG Apps and Node]
+- Fixed repeated fetching of PVC/PV objects by the kubelet when processing of pod volumes fails. While this prevents hammering the API server in these error scenarios, it means that some errors in processing volume(s) for a pod can now take up to 2-3 minutes before being retried. ([#88141](https://github.com/kubernetes/kubernetes/pull/88141), [@tedyu](https://github.com/tedyu)) [SIG Node and Storage]
+- Fix the VMSS name and resource group name when updating Azure VMSS for LoadBalancer backendPools ([#89337](https://github.com/kubernetes/kubernetes/pull/89337), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fix: add remediation in azure disk attach/detach ([#88444](https://github.com/kubernetes/kubernetes/pull/88444), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: azure file mount timeout issue ([#88610](https://github.com/kubernetes/kubernetes/pull/88610), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: check disk status before delete azure disk ([#88360](https://github.com/kubernetes/kubernetes/pull/88360), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: corrupted mount point in csi driver ([#88569](https://github.com/kubernetes/kubernetes/pull/88569), [@andyzhangx](https://github.com/andyzhangx)) [SIG Storage]
+- Fixed a bug in the TopologyManager. Previously, the TopologyManager would only guarantee alignment if container creation was serialized in some way. Alignment is now guaranteed under all scenarios of container creation. ([#87759](https://github.com/kubernetes/kubernetes/pull/87759), [@klueska](https://github.com/klueska)) [SIG Node]
+- Fixed a data race in kubelet image manager that can cause static pod workers to silently stop working. ([#88915](https://github.com/kubernetes/kubernetes/pull/88915), [@roycaihw](https://github.com/roycaihw)) [SIG Node]
+- Fixed an issue that could cause the kubelet to incorrectly run concurrent pod reconciliation loops and crash. ([#89055](https://github.com/kubernetes/kubernetes/pull/89055), [@tedyu](https://github.com/tedyu)) [SIG Node]
+- Fixed block CSI volume cleanup after timeouts. ([#88660](https://github.com/kubernetes/kubernetes/pull/88660), [@jsafrane](https://github.com/jsafrane)) [SIG Node and Storage]
+- Fixed bug where a nonzero exit code was returned when initializing zsh completion even though zsh completion was successfully initialized ([#88165](https://github.com/kubernetes/kubernetes/pull/88165), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- Fixed cleaning of CSI raw block volumes. ([#87978](https://github.com/kubernetes/kubernetes/pull/87978), [@jsafrane](https://github.com/jsafrane)) [SIG Storage]
+- Fixes conversion error in multi-version custom resources that could cause metadata.generation to increment on no-op patches or updates of a custom resource. ([#88995](https://github.com/kubernetes/kubernetes/pull/88995), [@liggitt](https://github.com/liggitt)) [SIG API Machinery]
+- Fixes issue where you can't attach more than 15 GCE Persistent Disks to c2, n2, m1, m2 machine types. ([#88602](https://github.com/kubernetes/kubernetes/pull/88602), [@yuga711](https://github.com/yuga711)) [SIG Storage]
+- Fixes v1.18.0-rc.1 regression in `kubectl port-forward` when specifying a local and remote port ([#89401](https://github.com/kubernetes/kubernetes/pull/89401), [@liggitt](https://github.com/liggitt)) [SIG CLI]
+- For volumes that allow attaches across multiple nodes, attach and detach operations across different nodes are now executed in parallel. ([#88678](https://github.com/kubernetes/kubernetes/pull/88678), [@verult](https://github.com/verult)) [SIG Apps, Node and Storage]
+- Get-kube.sh uses the gcloud's current local GCP service account for auth when the provider is GCE or GKE instead of the metadata server default ([#88383](https://github.com/kubernetes/kubernetes/pull/88383), [@BenTheElder](https://github.com/BenTheElder)) [SIG Cluster Lifecycle]
+- Golang/x/net has been updated to bring in fixes for CVE-2020-9283 ([#88381](https://github.com/kubernetes/kubernetes/pull/88381), [@BenTheElder](https://github.com/BenTheElder)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- Hide kubectl.kubernetes.io/last-applied-configuration in describe command ([#88758](https://github.com/kubernetes/kubernetes/pull/88758), [@soltysh](https://github.com/soltysh)) [SIG Auth and CLI]
+- In GKE alpha clusters it will be possible to use the service annotation `cloud.google.com/network-tier: Standard` ([#88487](https://github.com/kubernetes/kubernetes/pull/88487), [@zioproto](https://github.com/zioproto)) [SIG Cloud Provider]
+- IPVS: only attempt to set the connection-reuse sysctl on supported kernels ([#88541](https://github.com/kubernetes/kubernetes/pull/88541), [@cmluciano](https://github.com/cmluciano)) [SIG Network]
+- Kube-proxy: on dual-stack mode, if it is not able to get the IP Family of an endpoint, logs it with level InfoV(4) instead of Warning, avoiding flooding the logs for endpoints without addresses ([#88934](https://github.com/kubernetes/kubernetes/pull/88934), [@aojea](https://github.com/aojea)) [SIG Network]
+- Kubeadm now includes CoreDNS version 1.6.7 ([#86260](https://github.com/kubernetes/kubernetes/pull/86260), [@rajansandeep](https://github.com/rajansandeep)) [SIG Cluster Lifecycle]
+- Kubeadm: fix the bug that 'kubeadm upgrade' hangs in single node cluster ([#88434](https://github.com/kubernetes/kubernetes/pull/88434), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubelet: fix the bug that kubelet help information can not show the right type of flags ([#88515](https://github.com/kubernetes/kubernetes/pull/88515), [@SataQiu](https://github.com/SataQiu)) [SIG Docs and Node]
+- Kubelets perform fewer unnecessary pod status update operations on the API server. ([#88591](https://github.com/kubernetes/kubernetes/pull/88591), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node and Scalability]
+- Optimize kubectl version help info ([#88313](https://github.com/kubernetes/kubernetes/pull/88313), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Plugin/PluginConfig and Policy APIs are mutually exclusive when running the scheduler ([#88864](https://github.com/kubernetes/kubernetes/pull/88864), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Removes the deprecated command `kubectl rolling-update` ([#88057](https://github.com/kubernetes/kubernetes/pull/88057), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG Architecture, CLI and Testing]
+- Resolved a regression in v1.18.0-rc.1 mounting windows volumes ([#89319](https://github.com/kubernetes/kubernetes/pull/89319), [@mboersma](https://github.com/mboersma)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Storage]
+- Scheduler PreScore plugins are not executed if one or fewer nodes passed filtering. ([#89370](https://github.com/kubernetes/kubernetes/pull/89370), [@ahg-g](https://github.com/ahg-g)) [SIG Scheduling]
+- Specifying PluginConfig for the same plugin more than once fails scheduler startup.
+
+  Specifying extenders while also configuring `.ignoredResources` for the NodeResourcesFit plugin fails scheduler startup. ([#88870](https://github.com/kubernetes/kubernetes/pull/88870), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Support TLS Server Name overrides in kubeconfig file and via --tls-server-name in kubectl ([#88769](https://github.com/kubernetes/kubernetes/pull/88769), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Auth and CLI]
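+
+  For example (a sketch; the server name is a placeholder matching the serving certificate):
+
+      kubectl get pods --tls-server-name=<name-on-the-serving-certificate>
+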
+- Terminating a restartPolicy=Never pod no longer has a chance to report the pod succeeded when it actually failed. ([#88440](https://github.com/kubernetes/kubernetes/pull/88440), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node and Testing]
+- The EventRecorder from k8s.io/client-go/tools/events will now create events in the default namespace (instead of kube-system) when the related object does not have it set. ([#88815](https://github.com/kubernetes/kubernetes/pull/88815), [@enj](https://github.com/enj)) [SIG API Machinery]
+- The audit event sourceIPs list will now always end with the IP that sent the request directly to the API server. ([#87167](https://github.com/kubernetes/kubernetes/pull/87167), [@tallclair](https://github.com/tallclair)) [SIG API Machinery and Auth]
+- Update Cluster Autoscaler to 1.18.0; changelog: https://github.com/kubernetes/autoscaler/releases/tag/cluster-autoscaler-1.18.0 ([#89095](https://github.com/kubernetes/kubernetes/pull/89095), [@losipiuk](https://github.com/losipiuk)) [SIG Autoscaling and Cluster Lifecycle]
+- Update to use golang 1.13.8 ([#87648](https://github.com/kubernetes/kubernetes/pull/87648), [@ialidzhikov](https://github.com/ialidzhikov)) [SIG Release and Testing]
+- Validate kube-proxy flags --ipvs-tcp-timeout, --ipvs-tcpfin-timeout, --ipvs-udp-timeout ([#88657](https://github.com/kubernetes/kubernetes/pull/88657), [@chendotjs](https://github.com/chendotjs)) [SIG Network]
+- Wait for all CRDs to show up in discovery endpoint before reporting readiness. ([#89145](https://github.com/kubernetes/kubernetes/pull/89145), [@sttts](https://github.com/sttts)) [SIG API Machinery]
+- `kubectl config view` now redacts bearer tokens by default, similar to client certificates. The `--raw` flag can still be used to output full content. ([#88985](https://github.com/kubernetes/kubernetes/pull/88985), [@brianpursley](https://github.com/brianpursley)) [SIG API Machinery and CLI]
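+
+  To compare the redacted view with the full content (the second form prints credentials, so handle with care):
+
+      kubectl config view
+      kubectl config view --raw
+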
diff --git a/content/ko/docs/sitemap.md b/content/ko/docs/sitemap.md
deleted file mode 100644
index c0a8e6c299121..0000000000000
--- a/content/ko/docs/sitemap.md
+++ /dev/null
@@ -1,114 +0,0 @@
----
----
-
-
-
-To filter, click a tag or use the drop-down. To sort in ascending or descending order, click a table header.
-
-
-Filter by concept:
-Filter by object:
-Filter by command:
-
-
-
diff --git a/content/ko/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/ko/docs/tasks/access-application-cluster/connecting-frontend-backend.md
new file mode 100644
index 0000000000000..e79d655d01b2e
--- /dev/null
+++ b/content/ko/docs/tasks/access-application-cluster/connecting-frontend-backend.md
@@ -0,0 +1,212 @@
+---
+title: Connect a Front End to a Back End Using a Service
+content_type: tutorial
+weight: 70
+---
+
+
+
+This task shows how to create a frontend and a backend
+microservice. The backend microservice is a hello greeter. The
+frontend and backend are connected using a Kubernetes
+{{< glossary_tooltip term_id="service" text="Service" >}} object.
+
+## {{% heading "objectives" %}}
+
+* Create and run a microservice using a {{< glossary_tooltip term_id="deployment" text="Deployment" >}} object.
+* Route traffic to the backend using a frontend.
+* Use a Service object to connect the frontend application to the
+  backend application.
+
+## {{% heading "prerequisites" %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+This task uses
+[Services with external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/),
+which require a supported environment. If your environment does not
+support this, you can use a Service of type [NodePort](/ko/docs/concepts/services-networking/service/#nodeport) instead.
+
+
+
+## Creating the backend using a Deployment
+
+The backend is a simple hello greeter microservice. Here is the
+configuration file for the backend Deployment:
+
+{{< codenew file="service/access/hello.yaml" >}}
+
+Create the backend Deployment:
+
+```shell
+kubectl apply -f https://k8s.io/examples/service/access/hello.yaml
+```
+
+View information about the backend Deployment:
+
+```shell
+kubectl describe deployment hello
+```
+
+The output is similar to this:
+
+```
+Name: hello
+Namespace: default
+CreationTimestamp: Mon, 24 Oct 2016 14:21:02 -0700
+Labels: app=hello
+ tier=backend
+ track=stable
+Annotations: deployment.kubernetes.io/revision=1
+Selector: app=hello,tier=backend,track=stable
+Replicas: 7 desired | 7 updated | 7 total | 7 available | 0 unavailable
+StrategyType: RollingUpdate
+MinReadySeconds: 0
+RollingUpdateStrategy: 1 max unavailable, 1 max surge
+Pod Template:
+ Labels: app=hello
+ tier=backend
+ track=stable
+ Containers:
+ hello:
+ Image: "gcr.io/google-samples/hello-go-gke:1.0"
+ Port: 80/TCP
+ Environment:
+ Mounts:
+ Volumes:
+Conditions:
+ Type Status Reason
+ ---- ------ ------
+ Available True MinimumReplicasAvailable
+ Progressing True NewReplicaSetAvailable
+OldReplicaSets:
+NewReplicaSet: hello-3621623197 (7/7 replicas created)
+Events:
+...
+```
+
+## Creating the backend Service object
+
+The key to connecting a frontend to a backend is the backend
+Service. A Service creates a persistent IP address and DNS name entry
+so that the backend microservice can always be reached. A Service uses
+{{< glossary_tooltip text="selectors" term_id="selector" >}} to find
+the Pods that it routes traffic to.
+
+First, explore the Service configuration file:
+
+{{< codenew file="service/access/hello-service.yaml" >}}
+
+In the configuration file, you can see that the Service routes traffic to Pods
+that have the labels `app: hello` and `tier: backend`.
+
+Create the `hello` Service:
+
+```shell
+kubectl apply -f https://k8s.io/examples/service/access/hello-service.yaml
+```
+
+At this point, you have a backend Deployment that is running, and you
+have a Service that can route traffic to it.
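+
+To verify the wiring (an optional check, not part of the original flow), you can list the endpoints the Service selected:
+
+```shell
+kubectl get endpoints hello
+```
+
+Each address in the output corresponds to one of the backend Pods.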
+
+## Creating the frontend
+
+Now that you have your backend, you can create a frontend that connects to the backend.
+The frontend connects to the backend worker Pods by using the DNS name
+given to the backend Service. The DNS name is "hello", which is the value
+of the `name` field in the preceding Service configuration file.
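+
+If you want to see that name resolve inside the cluster (an optional check; the busybox image is just an example), you can run a temporary Pod:
+
+```shell
+kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup hello
+```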
+
+The Pods in the frontend Deployment run an nginx image that is
+configured to find the hello backend Service. Here is the nginx configuration file:
+
+{{< codenew file="service/access/frontend.conf" >}}
+
+Similar to the backend, the frontend has a Deployment and a Service. The
+configuration for the Service has `type: LoadBalancer`, which means that
+the Service uses the default load balancer of your cloud provider.
+
+{{< codenew file="service/access/frontend.yaml" >}}
+
+Create the frontend Deployment and Service:
+
+```shell
+kubectl apply -f https://k8s.io/examples/service/access/frontend.yaml
+```
+
+The output verifies that both resources were created:
+
+```
+deployment.apps/frontend created
+service/frontend created
+```
+
+{{< note >}}
+The nginx configuration is baked into the
+[container image](/examples/service/access/Dockerfile). A better way to
+do this would be to use a
+[ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/),
+so that you can change the configuration more easily.
+{{< /note >}}
+
+## Interact with the frontend Service
+
+Once you've created a Service of type LoadBalancer, you can use this
+command to find the external IP:
+
+```shell
+kubectl get service frontend --watch
+```
+
+This displays the configuration of the `frontend` Service and watches for
+changes. Initially, the external IP is listed as `<pending>`:
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+frontend     LoadBalancer   10.51.252.116   <pending>         80/TCP   10s
+```
+
+As soon as an external IP is provisioned, however, the configuration
+updates to include the new IP under the `EXTERNAL-IP` heading:
+
+```
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+frontend LoadBalancer 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m
+```
+
+That IP can now be used to interact with the `frontend` Service from
+outside the cluster.
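+
+For convenience in the next step, you could capture that address in a shell variable (a sketch, assuming a single ingress IP):
+
+```shell
+EXTERNAL_IP=$(kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+```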
+
+## Send traffic through the frontend
+
+The frontend and backend are now connected. You can hit the endpoint
+by using the curl command on the external IP of your frontend Service.
+
+```shell
+curl http://${EXTERNAL_IP} # replace this with the EXTERNAL-IP you saw earlier
+```
+
+The output shows the message generated by the backend:
+
+```json
+{"message":"Hello"}
+```
+
+## {{% heading "cleanup" %}}
+
+To delete the Services, enter this command:
+
+```shell
+kubectl delete services frontend hello
+```
+
+To delete the Deployments, the ReplicaSets, and the Pods that are running the backend and frontend applications, enter this command:
+
+```shell
+kubectl delete deployment frontend hello
+```
+
+## {{% heading "whatsnext" %}}
+
+* Learn more about [Services](/ko/docs/concepts/services-networking/service/).
+* Learn more about [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).
+
diff --git a/content/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
index 5a91ebd66e703..0856cc355d932 100644
--- a/content/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
+++ b/content/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
@@ -7,7 +7,7 @@ min-kubernetes-server-version: v1.10
-This page shows how to use `kubectl port-forward` to connect to a Redis
+This page shows how to use `kubectl port-forward` to connect to a Redis
server running in a Kubernetes cluster. This type of connection can be
useful for database debugging.
@@ -29,7 +29,7 @@ min-kubernetes-server-version: v1.10
## Creating a Redis Deployment and Service
1. Create a Deployment that runs Redis:
-
+
```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
```
@@ -151,7 +151,7 @@ min-kubernetes-server-version: v1.10
Or:
```shell
- kubectl port-forward svc/redis-master 7000:6379
+ kubectl port-forward service/redis-master 7000:redis
```
All of the above commands work identically. The output is similar to this:
@@ -203,7 +203,3 @@ Support for the UDP protocol is
## {{% heading "whatsnext" %}}
Learn more about [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward).
-
-
-
-
diff --git a/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md
index b352e6ffd07df..d0cfee30093af 100644
--- a/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md
+++ b/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md
@@ -146,7 +146,7 @@ The kubeconfig authentication method does not support external identity providers or x509
If you set the name to a number, such as 10, the Pod will be placed in the default namespace.
If creating the namespace is successful, it is selected by default.
- If the creation fails, the first namespace is selected.
+ If the creation fails, the first namespace is selected.
- **Image Pull Secret**:
If the specified Docker container image is private,
diff --git a/content/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
index 78c19ad39b7f4..eb953143bef8d 100644
--- a/content/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
+++ b/content/ko/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
@@ -63,7 +63,7 @@ deployment.apps/cilium-operator created
```
The remainder of the Getting Started Guide explains how to enforce
-both L3/L4 (i.e., IP address + port) security policies as well as L7 (e.g., HTTP) security policies
+both L3/L4 (i.e., IP address + port) security policies as well as L7 (e.g., HTTP) security policies
using an example application.
## Deploying Cilium for production use
diff --git a/content/ko/docs/tasks/configure-pod-container/configure-pod-initialization.md b/content/ko/docs/tasks/configure-pod-container/configure-pod-initialization.md
index ee7d5a9f82f2b..92abf6e4f66be 100644
--- a/content/ko/docs/tasks/configure-pod-container/configure-pod-initialization.md
+++ b/content/ko/docs/tasks/configure-pod-container/configure-pod-initialization.md
@@ -37,7 +37,7 @@ weight: 130
and mounts it at `/usr/share/nginx/html`. The init container runs the following
command and then terminates:
- wget -O /work-dir/index.html http://kubernetes.io
+ wget -O /work-dir/index.html http://info.cern.ch
The init container writes the `index.html` file in the root directory
of the nginx server.
@@ -67,16 +67,13 @@ Get a shell into the nginx container running in the init-demo Pod:
The output shows that nginx is serving the web page that was written by the init container:
-   ...
-   "url": "http://kubernetes.io/"}
+   http://info.cern.ch
+
 http://info.cern.ch - home of the first website
 ...
 Kubernetes is open source giving you the freedom to take advantage ...
...
diff --git a/content/ko/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/ko/docs/tasks/configure-pod-container/configure-volume-storage.md
index b202417ee1fc9..73c749390975e 100644
--- a/content/ko/docs/tasks/configure-pod-container/configure-volume-storage.md
+++ b/content/ko/docs/tasks/configure-pod-container/configure-volume-storage.md
@@ -136,7 +136,7 @@ of the Redis Pod
* See [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
-* In addition to the local disk storage provided by `emptyDir`,
+* In addition to the local disk storage provided by `emptyDir`,
Kubernetes supports network-attached storage (NAS) solutions, including PD on
GCE and EBS on EC2, which are preferred for critical data; Kubernetes handles
details such as mounting and unmounting the devices on the nodes.
diff --git a/content/ko/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/ko/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
index 758c782a449af..a9c9bf182dcb9 100644
--- a/content/ko/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
+++ b/content/ko/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
@@ -7,16 +7,16 @@ content_type: concept
Resource usage metrics, such as container CPU and memory usage,
are available in Kubernetes through the Metrics API. These metrics can be
-accessed by the user directly, for example with the `kubectl top` command, or
+accessed by the user directly using the `kubectl top` command, or
used by a controller in the cluster, such as the Horizontal Pod Autoscaler, to make decisions.
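
To see what these metrics look like (an illustrative check; it assumes Metrics Server is running in the cluster):

```shell
kubectl top node
kubectl top pod
```
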
## The Metrics API
-Through the Metrics API you can get the amount of resource currently used
+Through the Metrics API, you can get the amount of resource currently used
by a given node or a given pod. This API doesn't store the metric values,
-so it's not possible to get the amount of resources used by a
+so it's not possible, for example, to get the amount of resources used by a
given node 10 minutes ago.
The API is no different from other APIs:
@@ -52,14 +52,12 @@ the kubelet chooses the window for the rate calculation.
## Metrics Server
[Metrics Server](https://github.com/kubernetes-incubator/metrics-server) is a cluster-wide aggregator of resource usage data.
-In clusters created by the `kube-up.sh` script, Metrics Server is deployed
+By default, in clusters created by the `kube-up.sh` script, Metrics Server is deployed
as a Deployment object. If you use a different Kubernetes setup mechanism, you can deploy it using the provided
[deployment components.yaml](https://github.com/kubernetes-sigs/metrics-server/releases) file.
Metrics Server collects metrics from the Summary API, exposed by the [Kubelet](/docs/reference/command-line-tools-reference/kubelet/)
-on each node.
-
-Metrics Server is registered with the main API server through the
+on each node, and is registered with the main API server through the
[Kubernetes aggregator](/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
Learn more about Metrics Server in the
diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 73c031049340b..c9c694e2375e2 100644
--- a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -108,11 +108,7 @@ php-apache Deployment/php-apache/scale 0% / 50% 1 10 1
```shell
-kubectl run -it --rm load-generator --image=busybox /bin/sh
-
-Hit enter for command prompt
-
-while true; do wget -q -O- http://php-apache; done
+kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
```
Within a minute or so, you should see the higher CPU load.
@@ -386,7 +382,7 @@ object:
With external metrics, you can autoscale your cluster based on any metric available in your monitoring system.
Provide a `metric` block with a `name` and `selector`, as above,
and use the `External` metric type instead of `Object`.
-If multiple time series are matched by the `metricSelector`, the HorizontalPodAutoscaler uses the sum of their values.
+If multiple time series are matched by the `metricSelector`, the HorizontalPodAutoscaler uses the sum of their values.
External metrics support both the `Value` and `AverageValue` target types,
which behave exactly the same as when you use the `Object` type.
diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md
index 4afcb6927de81..7a7d129525850 100644
--- a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -406,6 +406,9 @@ behavior:
Finally, you can add another policy to drop 5 pods, and select the minimum
strategy.
+To ensure that no more than 5 Pods are removed per minute, you can add a second scale-down
+policy with a fixed size of 5, and set `selectPolicy` to minimum. Setting `selectPolicy` to `Min` means
+that the autoscaler chooses the policy that affects the smallest number of Pods:
```yaml
behavior:
@@ -417,7 +420,7 @@ behavior:
- type: Pods
value: 5
periodSeconds: 60
- selectPolicy: Max
+ selectPolicy: Min
```
### Example: disable scale down
diff --git a/content/ko/docs/tasks/tools/_index.md b/content/ko/docs/tasks/tools/_index.md
index 208a8ab33d19d..9753772c5f565 100755
--- a/content/ko/docs/tasks/tools/_index.md
+++ b/content/ko/docs/tasks/tools/_index.md
@@ -8,34 +8,41 @@ no_list: true
## kubectl
The Kubernetes command-line tool, `kubectl`, allows you to run commands
-against Kubernetes clusters. You can use kubectl to deploy applications, and to inspect and
+against Kubernetes clusters. You can use `kubectl` to deploy applications, and to inspect and
manage cluster resources and view logs.
For information on how to download and install `kubectl` and set it up for accessing your cluster, see
-[Install and Set Up kubectl](/ko/docs/tasks/tools/install-kubectl/).
+[Install and Set Up `kubectl`](/ko/docs/tasks/tools/install-kubectl/)
+for details.
-You can also read the `kubectl` reference documentation.
+View the kubectl Install and Set Up Guide
-## Minikube
+You can also read the [`kubectl` reference documentation](/ko/docs/reference/kubectl/).
-[Minikube](https://minikube.sigs.k8s.io/) is a tool that lets you run Kubernetes
-locally. Minikube runs a single-node Kubernetes cluster on your personal
+## minikube
+
+[`minikube`](https://minikube.sigs.k8s.io/) is a tool that lets you run Kubernetes
+locally. `minikube` runs a single-node Kubernetes cluster on your personal
computer (including Windows, macOS and Linux PCs) so that you can try out
Kubernetes, or for daily development work.
-You can follow the official [Get Started!](https://minikube.sigs.k8s.io/docs/start/)
-guide, or read [Install Minikube](/ko/docs/tasks/tools/install-minikube/) if
-your focus is on getting the tool installed.
+If your focus is on getting the tool installed, you can follow the official
+[Get Started!](https://minikube.sigs.k8s.io/docs/start/)
+guide.
+
+View the minikube Get Started! Guide
-Once Minikube is working, you can use it to
+Once `minikube` is working, you can use it to
[run a sample application](/ko/docs/tutorials/hello-minikube/).
## kind
-Like Minikube, [kind](https://kind.sigs.k8s.io/docs/) lets you run Kubernetes on
-your local computer. Unlike Minikube, kind only works with a single container runtime:
-kind requires that you have [Docker](https://docs.docker.com/get-docker/) installed
+Like `minikube`, [kind](https://kind.sigs.k8s.io/docs/) lets you run Kubernetes on
+your local computer. Unlike `minikube`, `kind` only works with a single container runtime:
+`kind` requires that you have [Docker](https://docs.docker.com/get-docker/) installed
and configured.
-The [Quick Start](https://kind.sigs.k8s.io/docs/user/quick-start/) shows you what you need to do to get kind
+The [Quick Start](https://kind.sigs.k8s.io/docs/user/quick-start/) shows you what you need to do to get `kind`
up and running.
+
+View the kind Quick Start Guide
diff --git a/content/ko/docs/tasks/tools/install-kubectl.md b/content/ko/docs/tasks/tools/install-kubectl.md
index 59b1a58b8aa1e..2f2e6c4fe01c5 100644
--- a/content/ko/docs/tasks/tools/install-kubectl.md
+++ b/content/ko/docs/tasks/tools/install-kubectl.md
@@ -62,7 +62,7 @@ You can use kubectl to deploy applications, and to inspect and manage cluster
{{< tabs name="kubectl_install" >}}
{{< tab name="Ubuntu, Debian 또는 HypriotOS" codelang="bash" >}}
-sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2
+sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
diff --git a/content/ko/docs/tasks/tools/install-minikube.md b/content/ko/docs/tasks/tools/install-minikube.md
deleted file mode 100644
index 04c4ca64a6968..0000000000000
--- a/content/ko/docs/tasks/tools/install-minikube.md
+++ /dev/null
@@ -1,262 +0,0 @@
----
-title: Install Minikube
-content_type: task
-weight: 20
-card:
- name: tasks
- weight: 10
----
-
-
-
-This page shows you how to install [Minikube](/ko/docs/tutorials/hello-minikube), a tool that runs a single-node Kubernetes cluster in a virtual machine on your laptop.
-
-
-
-## {{% heading "prerequisites" %}}
-
-
-{{< tabs name="minikube_before_you_begin" >}}
-{{% tab name="Linux" %}}
-To check if virtualization is supported on Linux, run the following command and verify that the output is non-empty:
-```
-grep -E --color 'vmx|svm' /proc/cpuinfo
-```
-{{% /tab %}}
-
-{{% tab name="macOS" %}}
-To check if virtualization is supported on macOS, run the following command on your terminal:
-```
-sysctl -a | grep -E --color 'machdep.cpu.features|VMX'
-```
-If you see `VMX` in the output (should be colored), the VT-x feature is enabled in your machine.
-{{% /tab %}}
-
-{{% tab name="Windows" %}}
-To check if virtualization is supported on Windows 8 and above, run the following command on your Windows terminal or command prompt:
-```
-systeminfo
-```
-If you see the following output, virtualization is supported on Windows:
-```
-Hyper-V Requirements: VM Monitor Mode Extensions: Yes
- Virtualization Enabled In Firmware: Yes
- Second Level Address Translation: Yes
- Data Execution Prevention Available: Yes
-```
-
-If you see the following output, your system already has a Hypervisor installed and you can skip the next step:
-```
-Hyper-V Requirements: A hypervisor has been detected. Features required for Hyper-V will not be displayed.
-```
-
-
-{{% /tab %}}
-{{< /tabs >}}
-
-
-
-
-
-## Installing minikube
-
-{{< tabs name="tab_with_md" >}}
-{{% tab name="Linux" %}}
-
-### Install kubectl
-
-Make sure you have kubectl installed. You can install kubectl according to the instructions in [Install and Set Up kubectl](/ko/docs/tasks/tools/install-kubectl/#리눅스에-kubectl-설치).
-
-## Install a Hypervisor
-
-If you do not already have a hypervisor installed, install one of these now, as appropriate for your operating system:
-
-• [KVM](https://www.linux-kvm.org/), which also uses QEMU
-
-• [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
-
-Minikube also supports a `--driver=none` option that runs the Kubernetes components on the host and not in a VM.
-Using this driver requires [Docker](https://www.docker.com/products/docker-desktop) and a Linux environment, but not a hypervisor.
-
-If you're using the `none` driver in Debian or a derivative,
-use the `.deb` packages for Docker rather than the snap package, which does not work with Minikube.
-You can download `.deb` packages from [Docker](https://www.docker.com/products/docker-desktop).
-
-{{< caution >}}
-The `none` VM driver can result in security and data loss issues.
-Before using `--driver=none`, consult [this documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) for more information.
-{{< /caution >}}
-
-Minikube also supports a `vm-driver=podman` option, similar to the Docker driver. Podman run with superuser privileges (root user) is the best way to ensure that your containers have full access to any feature available on your system.
-
-{{< caution >}}
-The `podman` driver requires running the containers as root, because regular user accounts don't have full access to all the operating system features that their containers might need to run.
-{{< /caution >}}
-
-### Install Minikube using a package
-
-There are *experimental* packages for Minikube available;
-you can find Linux (AMD64) packages from Minikube's [releases](https://github.com/kubernetes/minikube/releases) page on GitHub.
-
-Use your Linux distribution's package tool to install a suitable package.
-
-### Install Minikube via direct download
-
-If you're not installing via a package, you can download
-a stand-alone binary and use that:
-
-```shell
-curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
- && chmod +x minikube
-```
-
-Here's an easy way to add the Minikube executable to your path:
-
-```shell
-sudo mkdir -p /usr/local/bin/
-sudo install minikube /usr/local/bin/
-```
-
-### Install Minikube using Homebrew
-
-As yet another alternative, you can install Minikube using Linux [Homebrew](https://docs.brew.sh/Homebrew-on-Linux):
-
-```shell
-brew install minikube
-```
-
-{{% /tab %}}
-{{% tab name="macOS" %}}
-### Install kubectl
-
-Make sure you have kubectl installed. You can install kubectl according to the instructions in [Install and Set Up kubectl](/ko/docs/tasks/tools/install-kubectl/#macos에-kubectl-설치).
-
-### Install a Hypervisor
-
-If you do not already have a hypervisor installed, install one of these now:
-
-• [HyperKit](https://github.com/moby/hyperkit)
-
-• [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
-
-• [VMware Fusion](https://www.vmware.com/products/fusion)
-
-### Install Minikube
-The easiest way to install Minikube on macOS is using [Homebrew](https://brew.sh):
-
-```shell
-brew install minikube
-```
-
-You can also install it on macOS by downloading a stand-alone binary:
-
-```shell
-curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
- && chmod +x minikube
-```
-
-Here's an easy way to add the Minikube executable to your path:
-
-```shell
-sudo mv minikube /usr/local/bin
-```
-
-{{% /tab %}}
-{{% tab name="윈도우" %}}
-### Install kubectl
-
-Make sure you have kubectl installed. You can install kubectl according to the instructions in [Install and Set Up kubectl](/ko/docs/tasks/tools/install-kubectl/#windows에-kubectl-설치).
-
-### Install a Hypervisor
-
-If you do not already have a hypervisor installed, install one of these now:
-
-• [Hyper-V](https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/quick_start/walkthrough_install)
-
-• [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
-
-{{< note >}}
-Hyper-V can run on three versions of Windows 10: Windows 10 Enterprise, Windows 10 Professional, and Windows 10 Education.
-{{< /note >}}
-
-### Install Minikube using Chocolatey
-
-The easiest way to install Minikube on Windows is using [Chocolatey](https://chocolatey.org/) (run as an administrator):
-
-```shell
-choco install minikube
-```
-
-After Minikube has finished installing, close the current CLI session and restart. Minikube should have been added to your path automatically.
-
-### Install Minikube using an installer executable
-
-To install Minikube manually on Windows using [Windows Installer](https://docs.microsoft.com/en-us/windows/desktop/msi/windows-installer-portal), download [`minikube-installer.exe`](https://github.com/kubernetes/minikube/releases/latest/download/minikube-installer.exe) and execute the installer.
-
-### Install Minikube via direct download
-
-To install Minikube manually on Windows, download [`minikube-windows-amd64`](https://github.com/kubernetes/minikube/releases/latest), rename it to `minikube.exe`, and add it to your path.
-
-{{% /tab %}}
-{{< /tabs >}}
-
-## Confirm Installation
-
-To confirm successful installation of both a hypervisor and Minikube, you can run the following command to start up a local Kubernetes cluster:
-
-{{< note >}}
-
-For setting the `--driver` with `minikube start`, enter the name of the hypervisor you installed in lowercase letters where `<driver_name>` is mentioned below. A full list of `--driver` values is available in the [specifying the VM driver documentation](/ko/docs/setup/learning-environment/minikube/#vm-드라이버-지정하기).
-
-{{< /note >}}
-
-{{< caution >}}
-When using KVM, keep in mind that libvirt's default QEMU URI in Debian and some other systems is `qemu:///session`, while Minikube's default QEMU URI is `qemu:///system`. If this is the case for your system, you need to pass `--kvm-qemu-uri qemu:///session` to `minikube start`.
-{{< /caution >}}
-
-```shell
-minikube start --driver=<driver_name>
-```
-
-Once `minikube start` finishes, run the command below to check the status of the cluster:
-
-```shell
-minikube status
-```
-
-If your cluster is running, the output from `minikube status` should be similar to:
-
-```
-host: Running
-kubelet: Running
-apiserver: Running
-kubeconfig: Configured
-```
-
-After you have confirmed whether Minikube is working with your chosen hypervisor, you can continue to use Minikube or you can stop your cluster. To stop your cluster, run:
-
-```shell
-minikube stop
-```
-
-## Clean up local state {#cleanup-local-state}
-
-If you have previously installed Minikube, and run:
-```shell
-minikube start
-```
-
-and `minikube start` returns an error:
-```shell
-machine does not exist
-```
-
-then you need to clear minikube's local state:
-```shell
-minikube delete
-```
-
-## {{% heading "whatsnext" %}}
-
-
-* [Running Kubernetes Locally via Minikube](/ko/docs/setup/learning-environment/minikube/)
diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md
index 4a566e37768c4..699902295e17e 100644
--- a/content/ko/docs/tutorials/hello-minikube.md
+++ b/content/ko/docs/tutorials/hello-minikube.md
@@ -137,6 +137,9 @@ Katacoda provides a free, in-browser Kubernetes environment.
The `--type=LoadBalancer` flag indicates that you want to expose your
Service outside of the cluster.
+   The application code inside the image `k8s.gcr.io/echoserver` only listens on TCP port 8080. If you used
+   `kubectl expose` to expose a different port, clients could not connect to that other port.
+
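+   For reference, the expose command used earlier in this tutorial already targets that port:
+
+   ```shell
+   kubectl expose deployment hello-node --type=LoadBalancer --port=8080
+   ```
+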
2. View the Service you just created:
```shell
diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
index 6f0fb013d6eb8..e2296b84bff09 100644
--- a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
+++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
@@ -35,7 +35,7 @@
Kubernetes Clusters
need to be packaged in a way that decouples them from individual hosts: that is, they need to be containerized. Containerized applications
are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host.
Kubernetes automates the distribution and scheduling of application containers across a cluster
-in a more efficient way.
+in a more efficient way.
Kubernetes is an open-source platform and is production-ready.
A Kubernetes cluster consists of two types of resources:
@@ -56,7 +56,7 @@
Summary:
-Kubernetes is a production-grade, open-source platform that orchestrates the placement (scheduling) and execution
+Kubernetes is a production-grade, open-source platform that orchestrates the placement (scheduling) and execution
of application containers within and across computer clusters.
@@ -84,7 +84,7 @@
Cluster Diagram
coordinates all activities in your cluster.
A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster.
Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes master. The node
-should also have tools for handling container operations, such as Docker or rkt. A Kubernetes
+should also have tools for handling container operations, such as containerd or Docker. A Kubernetes
cluster that handles production traffic should have a minimum of three nodes.
@@ -103,10 +103,10 @@
Cluster Diagram
to interact with the cluster.
A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development,
-you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine
-and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows
-systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start,
-stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube
+you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine
+and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows
+systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start,
+stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube
pre-installed.
Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!
diff --git a/content/ko/docs/tutorials/kubernetes-basics/explore/explore-intro.html b/content/ko/docs/tutorials/kubernetes-basics/explore/explore-intro.html
index 6e6eabda96758..2e34002571470 100644
--- a/content/ko/docs/tutorials/kubernetes-basics/explore/explore-intro.html
+++ b/content/ko/docs/tutorials/kubernetes-basics/explore/explore-intro.html
@@ -123,7 +123,7 @@
Troubleshooting with kubectl
-A node is a worker machine in Kubernetes and may be either a VM or a physical machine, depending on the cluster. Multiple Pods can run on one node.
+A node is a worker machine in Kubernetes and may be either a VM or a physical machine, depending on the cluster. Multiple Pods can run on one node.
diff --git a/content/ko/examples/application/guestbook/redis-master-service.yaml b/content/ko/examples/application/guestbook/redis-master-service.yaml
index a484014f1fe3b..65cef2191c493 100644
--- a/content/ko/examples/application/guestbook/redis-master-service.yaml
+++ b/content/ko/examples/application/guestbook/redis-master-service.yaml
@@ -8,7 +8,8 @@ metadata:
tier: backend
spec:
ports:
- - port: 6379
+ - name: redis
+ port: 6379
targetPort: 6379
selector:
app: redis
diff --git a/content/ko/examples/pods/init-containers.yaml b/content/ko/examples/pods/init-containers.yaml
index ad96425dd5054..e83beec9afd6c 100644
--- a/content/ko/examples/pods/init-containers.yaml
+++ b/content/ko/examples/pods/init-containers.yaml
@@ -19,7 +19,7 @@ spec:
- wget
- "-O"
- "/work-dir/index.html"
- - http://kubernetes.io
+ - http://info.cern.ch
volumeMounts:
- name: workdir
mountPath: "/work-dir"
diff --git a/content/ko/examples/service/access/Dockerfile b/content/ko/examples/service/access/Dockerfile
new file mode 100644
index 0000000000000..b7b09d492a2be
--- /dev/null
+++ b/content/ko/examples/service/access/Dockerfile
@@ -0,0 +1,4 @@
+FROM nginx:1.17.3
+
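+# Replace the stock nginx site config with the frontend proxy configuration copied below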
+RUN rm /etc/nginx/conf.d/default.conf
+COPY frontend.conf /etc/nginx/conf.d
diff --git a/content/ko/examples/service/access/frontend.conf b/content/ko/examples/service/access/frontend.conf
new file mode 100644
index 0000000000000..9a1f5a0ed63a8
--- /dev/null
+++ b/content/ko/examples/service/access/frontend.conf
@@ -0,0 +1,11 @@
+upstream hello {
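+    # "hello" is the DNS name of the backend Service; cluster DNS resolves it to the Service IP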
+ server hello;
+}
+
+server {
+ listen 80;
+
+ location / {
+ proxy_pass http://hello;
+ }
+}
diff --git a/content/ko/examples/service/access/frontend.yaml b/content/ko/examples/service/access/frontend.yaml
new file mode 100644
index 0000000000000..9f5b6b757fe8c
--- /dev/null
+++ b/content/ko/examples/service/access/frontend.yaml
@@ -0,0 +1,39 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: frontend
+spec:
+ selector:
+ app: hello
+ tier: frontend
+ ports:
+ - protocol: "TCP"
+ port: 80
+ targetPort: 80
+ type: LoadBalancer
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: frontend
+spec:
+ selector:
+ matchLabels:
+ app: hello
+ tier: frontend
+ track: stable
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: hello
+ tier: frontend
+ track: stable
+ spec:
+ containers:
+ - name: nginx
+ image: "gcr.io/google-samples/hello-frontend:1.0"
+ lifecycle:
+ preStop:
+ exec:
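+            # Ask nginx to shut down gracefully before the container is stopped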
+ command: ["/usr/sbin/nginx","-s","quit"]
diff --git a/content/ko/examples/service/access/hello-service.yaml b/content/ko/examples/service/access/hello-service.yaml
new file mode 100644
index 0000000000000..71344ecb8be13
--- /dev/null
+++ b/content/ko/examples/service/access/hello-service.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: hello
+spec:
+ selector:
+ app: hello
+ tier: backend
+ ports:
+ - protocol: TCP
+ port: 80
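+    # targetPort "http" matches the containerPort named "http" in the hello Deployment's Pod spec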
+ targetPort: http
diff --git a/content/ko/examples/service/access/hello.yaml b/content/ko/examples/service/access/hello.yaml
new file mode 100644
index 0000000000000..85dff18ee1d80
--- /dev/null
+++ b/content/ko/examples/service/access/hello.yaml
@@ -0,0 +1,24 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: hello
+spec:
+ selector:
+ matchLabels:
+ app: hello
+ tier: backend
+ track: stable
+ replicas: 7
+ template:
+ metadata:
+ labels:
+ app: hello
+ tier: backend
+ track: stable
+ spec:
+ containers:
+ - name: hello
+ image: "gcr.io/google-samples/hello-go-gke:1.0"
+ ports:
+ - name: http
+ containerPort: 80
From 5cf1d6702d149d60b195c5c11180e01fea9c09b6 Mon Sep 17 00:00:00 2001
From: zhanwang
Date: Sun, 1 Nov 2020 16:39:42 +0000
Subject: [PATCH 36/50] Update zh translation in configure-pod-configmap.md
---
.../tasks/configure-pod-container/configure-pod-configmap.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/zh/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/zh/docs/tasks/configure-pod-container/configure-pod-configmap.md
index bdf8695bbc39d..4e8a7459087e6 100644
--- a/content/zh/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/content/zh/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -850,7 +850,7 @@ As explained in [Create ConfigMaps from files](#create-configmaps-from-files), w
## Add ConfigMap data to a Volume
-As explained in Create ConfigMaps from files](#create-configmaps-from-files), when you use
+As explained in [Create ConfigMaps from files](#create-configmaps-from-files), when you use
`--from-file` to create a ConfigMap, the filename becomes a key stored in the `data` section of the ConfigMap,
and the file contents become the key's value.
From dbb93e842eed247b97e33f352e9907bbfb5340bb Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Sun, 1 Nov 2020 23:17:49 +0000
Subject: [PATCH 37/50] Fix incorrect instructions for adding PGP keys to APT
---
.../setup/production-environment/container-runtimes.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index ffc5d122931c1..2ba9ca610dd58 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -257,8 +257,8 @@ cat <
Date: Mon, 2 Nov 2020 09:24:25 +0800
Subject: [PATCH 38/50] [zh] Resync
docs/setup/production-environment/container-runtimes.md
---
.../container-runtimes.md | 1173 ++++++++---------
1 file changed, 586 insertions(+), 587 deletions(-)
diff --git a/content/zh/docs/setup/production-environment/container-runtimes.md b/content/zh/docs/setup/production-environment/container-runtimes.md
index fc4494cf73461..3c9f12fce5acd 100644
--- a/content/zh/docs/setup/production-environment/container-runtimes.md
+++ b/content/zh/docs/setup/production-environment/container-runtimes.md
@@ -16,525 +16,545 @@ weight: 10
-{{< feature-state for_k8s_version="v1.6" state="stable" >}}
-
-Kubernetes uses container runtimes to run containers in Pods.
-Here are the installation instructions for the various runtimes.
+
+You need to install a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
+into each node in the cluster so that Pods can run there. This page outlines what is involved and describes related tasks for setting up nodes.
-
-{{< caution >}}
-A flaw was found in the way runc handled system file descriptors when running containers.
-A malicious container could use this flaw to overwrite the contents of the runc binary and consequently run arbitrary commands on the container host system.
-
-See [cve-2019-5736: runc vulnerability](https://access.redhat.com/security/cve/cve-2019-5736) for more information about this issue.
-{{< /caution >}}
+
+This page lists details for using several common container runtimes with Kubernetes, on Linux:
-
-### Applicability
+- [containerd](#containerd)
+- [CRI-O](#cri-o)
+- [Docker](#docker)
-
+
+Note: For other operating systems, look for documentation specific to your platform.
-You should execute all the commands in this guide as `root`.
-For example, prefix commands with `sudo`, or become `root` and run the commands as that user.
+## Cgroup drivers
-### Cgroup drivers
-
-When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (`cgroup`) and acts as a cgroup manager.
-systemd has a tight integration with cgroups and will allocate a cgroup per process.
+Control groups are used to constrain resources that are allocated to processes.
+
+When [systemd](https://www.freedesktop.org/wiki/Software/systemd/) is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (`cgroup`) and acts as a cgroup manager.
+systemd has a tight integration with cgroups and allocates a cgroup per systemd unit.
It's possible to configure the container runtime and the kubelet to use `cgroupfs`.
Using `cgroupfs` alongside systemd means that there will be two different cgroup managers.
-Control groups are used to constrain resources that are allocated to processes.
A single cgroup manager simplifies the view of what resources are being allocated and will by default have a more consistent view of the available and in-use resources.
-When there are two managers, you end up with two views of those resources.
-We have seen cases in the field where nodes that are configured to use `cgroupfs` for the kubelet and Docker, but `systemd` for the rest of the processes, become unstable under resource pressure.
+When two cgroup managers coexist on a system, you end up with two views of those resources.
+In the field, people have reported cases where nodes that are configured to use `cgroupfs` for the kubelet and Docker, but `systemd` for the rest of the processes, become unstable under resource pressure.
Changing the settings such that your container runtime and kubelet use `systemd` as the cgroup driver stabilized the system.
-Please note the `native.cgroupdriver=systemd` option in the Docker setup below.
+For Docker, set the `native.cgroupdriver=systemd` option.
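+
+For example (an illustrative check, assuming Docker is already installed), you can confirm which cgroup driver Docker is currently using:
+
+```shell
+# Prints the active cgroup driver, e.g. "systemd" or "cgroupfs"
+docker info --format '{{.CgroupDriver}}'
+```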
-{{< caution >}}
-It is highly recommended NOT to change the cgroup driver of a node that has joined a cluster.
-If the kubelet has created Pods using the semantics of one cgroup driver, changing the container runtime to another cgroup driver can cause errors when trying to re-create the PodSandbox for such existing Pods.
-Restarting the kubelet may not solve such errors.
-The recommendation is to drain the node of its workloads, remove it from the cluster and re-join it.
+{{< caution >}}
+Changing the cgroup driver of a node that has joined a cluster is strongly *not* recommended.
+If the kubelet has created Pods using the semantics of one cgroup driver, changing the container
+runtime to another cgroup driver can cause errors when trying to re-create the Pod sandbox
+for such existing Pods. Restarting the kubelet may not solve such errors.
+
+If you have automation that makes it feasible, replace the node with another using the updated
+configuration, or reinstall it using automation.
{{< /caution >}}
+
+## Container runtimes
+
+{{% thirdparty-content %}}
+
+### containerd
-## Docker
+This section contains the necessary steps to use containerd as a CRI runtime.
-On each of your machines, install Docker.
-Version 19.03.11 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well.
-Keep track of the latest verified Docker version in the Kubernetes release notes.
+Use the following commands to install containerd on your system:
-Use the following commands to install Docker on your system:
+Install and configure prerequisites:
-{{< tabs name="tab-cri-docker-installation" >}}
-{{% tab name="Ubuntu 16.04+" %}}
+```shell
+cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
+overlay
+br_netfilter
+EOF
+
+sudo modprobe overlay
+sudo modprobe br_netfilter
+
+# Set up required sysctl params; these persist across reboots.
+cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
+net.bridge.bridge-nf-call-iptables  = 1
+net.ipv4.ip_forward                 = 1
+net.bridge.bridge-nf-call-ip6tables = 1
+EOF
+
+# Apply sysctl params without reboot
+sudo sysctl --system
+```
+
+Install containerd:
+
+{{< tabs name="tab-cri-containerd-installation" >}}
+{{% tab name="Ubuntu 16.04" %}}
+
```shell
-### Add the Docker apt repository.
-add-apt-repository \
-  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
-  $(lsb_release -cs) \
-  stable"
+# (Install containerd)
+## (Set up the repository)
+### (Install packages to allow apt to use a repository over HTTPS)
+sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
```
```shell
-## Install Docker CE
-apt-get update && apt-get install -y \
-  containerd.io=1.2.13-2 \
-  docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
-  docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
+## Add Docker's official GPG key
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
```
```shell
-# Set up the daemon
-cat > /etc/docker/daemon.json <<EOF
-{
-  "exec-opts": ["native.cgroupdriver=systemd"],
-  "log-driver": "json-file",
-  "log-opts": {
-    "max-size": "100m"
-  },
-  "storage-driver": "overlay2"
-}
-EOF
+# Configure containerd
+sudo mkdir -p /etc/containerd
+sudo containerd config default > /etc/containerd/config.toml
```
-
+
```shell
-# Restart Docker.
-systemctl daemon-reload
-systemctl restart docker
+# Restart containerd
+sudo systemctl restart containerd
```
-{{% /tab %}}
+{{% /tab %}}
{{% tab name="CentOS/RHEL 7.4+" %}}
+
```shell
-# Install Docker CE
+# Install containerd
## Set up the repository
### Install required packages
-yum install yum-utils device-mapper-persistent-data lvm2
+sudo yum install -y yum-utils device-mapper-persistent-data lvm2
```
```shell
-### Add the Docker repository.
-yum-config-manager \
-  --add-repo \
-  https://download.docker.com/linux/centos/docker-ce.repo
+### Add the Docker repository
+sudo yum-config-manager \
+ --add-repo \
+ https://download.docker.com/linux/centos/docker-ce.repo
```
```shell
-## Install Docker CE.
-yum update -y && yum install -y \
-  containerd.io-1.2.13 \
-  docker-ce-19.03.11 \
-  docker-ce-cli-19.03.11
+## Install containerd
+sudo yum update -y && sudo yum install -y containerd.io
```
```shell
-## Create the /etc/docker directory.
-mkdir /etc/docker
+# Configure containerd
+sudo mkdir -p /etc/containerd
+sudo containerd config default > /etc/containerd/config.toml
```
```shell
-# Set up the daemon.
-cat > /etc/docker/daemon.json <<EOF
-{
-  "exec-opts": ["native.cgroupdriver=systemd"],
-  "log-driver": "json-file",
-  "log-opts": {
-    "max-size": "100m"
-  },
-  "storage-driver": "overlay2"
-}
-EOF
-```
-```shell
-# Restart Docker
-systemctl daemon-reload
-systemctl restart docker
-```
+
+{{% tab name="Windows (PowerShell)" %}}
+
+```powershell
+# (Install containerd)
+# Download containerd
+cmd /c curl -OL https://github.com/containerd/containerd/releases/download/v1.4.0-beta.2/containerd-1.4.0-beta.2-windows-amd64.tar.gz
+cmd /c tar xvf .\containerd-1.4.0-beta.2-windows-amd64.tar.gz
```
-{{% /tab %}}
-{{% /tabs %}}
-
-If you want the docker service to start on boot, run the following command:
+```powershell
+# Start containerd
+.\containerd.exe --register-service
+Start-Service containerd
+```
+{{% /tab %}}
+{{< /tabs >}}
-```shell
-sudo systemctl enable docker
-```
+#### systemd {#containerd-systemd}
+
+
+To use the `systemd` cgroup driver with `runc`, set the following in `/etc/containerd/config.toml`:
-Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/)
-for more information.
+```
+[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
+ ...
+ [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
+ SystemdCgroup = true
+```
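+
+After editing the config (an illustrative follow-up step; the service name assumes the systemd packaging used above), restart containerd so the change takes effect:
+
+```shell
+sudo systemctl restart containerd
+```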
-
-## CRI-O
-
-This section contains the necessary steps to install `CRI-O` as a CRI runtime.
+This section contains the necessary steps to install CRI-O as a container runtime.
Use the following commands to install CRI-O on your system:
-### Prepare the environment
+Note: The CRI-O major and minor versions must match the Kubernetes major and minor versions.
+For more information, see the [CRI-O compatibility matrix](https://github.com/cri-o/cri-o).
+
+Install and configure prerequisites:
```shell
-modprobe overlay
-modprobe br_netfilter
+sudo modprobe overlay
+sudo modprobe br_netfilter
-# Set up required sysctl params; these persist across reboots.
-cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
-net.bridge.bridge-nf-call-iptables  = 1
-net.ipv4.ip_forward                 = 1
-net.bridge.bridge-nf-call-ip6tables = 1
-EOF
-```
{{< tabs name="tab-cri-cri-o-installation" >}}
{{% tab name="Debian" %}}
-
-
-```shell
-# Debian Unstable/Sid
-echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Unstable/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
-wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Unstable/Release.key -O- | sudo apt-key add -
-```
-
-
-
-```shell
-# Debian Testing
-echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Testing/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
-wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Testing/Release.key -O- | sudo apt-key add -
-```
-
-
+
-
-```shell
-# Raspbian 10
-echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Raspbian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
-wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Raspbian_10/Release.key -O- | sudo apt-key add -
-```
-
-
-
-随后安装 CRI-O:
-
-```shell
-sudo apt-get update
-sudo apt-get install cri-o-1.17
-```
-
-{{% /tab %}}
-
-{{% tab name="Ubuntu 18.04, 19.04 and 19.10" %}}
+
+To install CRI-O on the following operating systems, set the environment variable `OS` to the appropriate value from the table below:
-```shell
-# Set up the repository
-. /etc/os-release
-sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
-wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add -
-sudo apt-get update
-```
+| Operating system | `$OS`             |
+|------------------|-------------------|
+| Debian Unstable  | `Debian_Unstable` |
+| Debian Testing   | `Debian_Testing`  |
-
+
+Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
+For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
+You can pin your installation to a specific release.
+To install version 1.18.3, set `VERSION=1.18:1.18.3`.
+
+Then run
```shell
-# Install CRI-O
+cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
+deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
+EOF
+cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
+deb http://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/ /
+EOF
+
+curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
+curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
+
+sudo apt-get update
+sudo apt-get install cri-o cri-o-runc
+```
+{{% /tab %}}
+
+{{% tab name="Ubuntu" %}}
+
+To install CRI-O on the following operating systems, set the environment variable `OS` to the appropriate value from the table below:
+
+| Operating system | `$OS`           |
+|------------------|-----------------|
+| Ubuntu 20.04     | `xUbuntu_20.04` |
+| Ubuntu 19.10     | `xUbuntu_19.10` |
+| Ubuntu 19.04     | `xUbuntu_19.04` |
+| Ubuntu 18.04     | `xUbuntu_18.04` |
+
+Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
+For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
+You can pin your installation to a specific release.
+To install version 1.18.3, set `VERSION=1.18:1.18.3`.
+
+Then run
```shell
-# Install prerequisite software
-apt-get update
-apt-get install software-properties-common
+cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
+deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
+EOF
+cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
+deb http://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/ /
+EOF
+
+curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
+curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
+
+sudo apt-get update
+sudo apt-get install cri-o cri-o-runc
+```
+{{% /tab %}}
+
+{{% tab name="CentOS" %}}
-```shell
-curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
-curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
-yum install cri-o
-```
+To install CRI-O on the following operating systems, set the environment variable `OS` to the appropriate value from the table below:
-To install on the following operating systems, set the environment variable `$OS` to the appropriate field in the following table:
-
-| Operating system | $OS               |
-| ---------------- | ----------------- |
-| Centos 8         | `CentOS_8`        |
-| Centos 8 Stream  | `CentOS_8_Stream` |
-| Centos 7         | `CentOS_7`        |
+| Operating system | `$OS`             |
+|------------------|-------------------|
+| Centos 8         | `CentOS_8`        |
+| Centos 8 Stream  | `CentOS_8_Stream` |
+| Centos 7         | `CentOS_7`        |
-Then set `$VERSION` to the CRI-O version that matches your Kubernetes version.
-For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
-You can also install a specific release; for example, to install version 1.18.3, set `VERSION=1.18:1.18.3`.
+Then, set `$VERSION` to the CRI-O version that matches your Kubernetes version.
+For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
+You can pin your installation to a specific release.
+To install version 1.18.3, set `VERSION=1.18:1.18.3`.
-After declaring the variables, install with the command below.
-
+Then run
```shell
-curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
-curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
-yum install cri-o
+sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
+sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
+sudo yum install cri-o
```
{{% /tab %}}
@@ -583,331 +597,316 @@ yum install cri-o
```shell
sudo zypper install cri-o
```
-
{{% /tab %}}
-
{{% tab name="Fedora" %}}
-
-
Set `$VERSION` to the CRI-O version that matches your Kubernetes version.
For instance, if you want to install CRI-O 1.18, set `VERSION=1.18`.
You can find available versions with:
```shell
-dnf module list cri-o
+sudo dnf module list cri-o
```
-
CRI-O does not support pinning to specific releases on Fedora.
-Run the following command to install:
+Then run
```shell
-dnf module enable cri-o:$VERSION
-dnf install cri-o
+sudo dnf module enable cri-o:$VERSION
+sudo dnf install cri-o
```
{{% /tab %}}
{{< /tabs >}}
-
-### Start CRI-O
-
-```
-systemctl start crio
-```
-
-Refer to the [CRI-O installation guide](https://github.com/kubernetes-sigs/cri-o#getting-started)
-for more information.
-
-
+Start CRI-O:
```shell
-cat > /etc/modules-load.d/containerd.conf <<EOF
-overlay
-br_netfilter
-EOF
-
-# Set up required sysctl params; these persist across reboots.
-cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
-net.bridge.bridge-nf-call-iptables  = 1
-net.ipv4.ip_forward                 = 1
-net.bridge.bridge-nf-call-ip6tables = 1
-EOF
+sudo systemctl daemon-reload
+sudo systemctl start crio
```
-## containerd
-This section contains the necessary steps to use `containerd` as a CRI runtime.
-Use the following commands to install containerd on your system:
+### Docker
-### Prepare the environment
+
+Install Docker CE on each of your nodes.
-# Set up required sysctl params; these persist across reboots.
-cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
-net.bridge.bridge-nf-call-iptables  = 1
-net.ipv4.ip_forward                 = 1
-net.bridge.bridge-nf-call-ip6tables = 1
-EOF
-### Install containerd
+{{< tabs name="tab-cri-docker-installation" >}}
+{{% tab name="Ubuntu 16.04+" %}}
-{{< tabs name="tab-cri-containerd-installation" >}}
-{{% tab name="Ubuntu 16.04" %}}
```shell
-### Add Docker apt repository.
-add-apt-repository \
- "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
- $(lsb_release -cs) \
- stable"
+# (Install Docker CE)
+## Set up the repository:
+### Install packages to allow apt to use a repository over HTTPS
+sudo apt-get update && sudo apt-get install -y \
+ apt-transport-https ca-certificates curl software-properties-common gnupg2
```
```shell
-## Install containerd
-apt-get update && apt-get install -y containerd.io
+### Add Docker's official GPG key:
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
```
+
```shell
-# Install containerd
-## Set up the repository
-### Install packages to allow apt to use a repository over HTTPS
-apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
+# Install Docker CE
+sudo apt-get update && sudo apt-get install -y \
+ containerd.io=1.2.13-2 \
+ docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
+ docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
```
```shell
-### Add Docker's official GPG key
-curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
```
```shell
+### Add the Docker apt repository:
+sudo add-apt-repository \
+ "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+ $(lsb_release -cs) \
+ stable"
```
```shell
-## Install containerd
-apt-get update && apt-get install -y containerd.io
+## Install Docker CE
+sudo apt-get update && sudo apt-get install -y \
+ containerd.io=1.2.13-2 \
+ docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
+ docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
```
```shell
-# Configure containerd
-mkdir -p /etc/containerd
-containerd config default > /etc/containerd/config.toml
+# Set up the Docker daemon
+cat <<EOF | sudo tee /etc/docker/daemon.json
+{
+  "exec-opts": ["native.cgroupdriver=systemd"],
+  "log-driver": "json-file",
+  "log-opts": {
+    "max-size": "100m"
+  },
+  "storage-driver": "overlay2"
+}
+EOF
+```
```shell
-# Restart containerd
-systemctl restart containerd
+# Restart Docker
+sudo systemctl daemon-reload
+sudo systemctl restart docker
```
-{{< /tab >}}
+{{% /tab %}}
{{% tab name="CentOS/RHEL 7.4+" %}}
+
```shell
-# Install containerd
-## Set up the repository
-### Install required packages
-yum install yum-utils device-mapper-persistent-data lvm2
```shell
-## Install containerd
-yum update && yum install containerd.io
+# (Install Docker CE)
+## Set up the repository
+### Install required packages
+sudo yum install -y yum-utils device-mapper-persistent-data lvm2
```
```shell
-# Configure containerd
-mkdir -p /etc/containerd
-containerd config default > /etc/containerd/config.toml
+### Add the Docker repository
+sudo yum-config-manager --add-repo \
+ https://download.docker.com/linux/centos/docker-ce.repo
```
-
```shell
-# Restart containerd
-systemctl restart containerd
+## Create the /etc/docker directory
+sudo mkdir /etc/docker
```
-{{% /tab %}}
-{{% tab name="Windows/(PowerShell)" %}}
-
-```powershell
-# (Install containerd)
-# Download containerd
-cmd /c curl -OL https://github.com/containerd/containerd/releases/download/v1.4.0-beta.2/containerd-1.4.0-beta.2-windows-amd64.tar.gz
-cmd /c tar xvf .\containerd-1.4.0-beta.2-windows-amd64.tar.gz
+```shell
+# Set up the Docker daemon
+cat <<EOF | sudo tee /etc/docker/daemon.json
+{
+  "exec-opts": ["native.cgroupdriver=systemd"],
+  "log-driver": "json-file",
+  "log-opts": {
+    "max-size": "100m"
+  },
+  "storage-driver": "overlay2"
+}
+EOF
+```
+```shell
+# Restart Docker
+sudo systemctl daemon-reload
+sudo systemctl restart docker
```
-
{{% /tab %}}
-{{< /tabs >}}
+{{% /tabs %}}
-### systemd {#containerd-systemd}
+If you want the docker service to start on boot, run the following command:
-To use the `systemd` cgroup driver, set the following in `/etc/containerd/config.toml`:
-```
- [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
- ...
- [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
- SystemdCgroup = true
+```shell
+sudo systemctl enable docker
```
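+
+As a quick check (illustrative; this assumes a systemd-based host), you can verify that the service is now enabled to start on boot:
+
+```shell
+systemctl is-enabled docker
+```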
-When using kubeadm, manually configure the
-[cgroup driver for the kubelet](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node).
-
-## Other CRI runtimes: frakti
-
-See the [Frakti QuickStart guide](https://github.com/kubernetes/frakti#quickstart) for more information.
+
+Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/)
+for more information.
\ No newline at end of file
From 40095e32db7c74a011fd18365a01a388e18247a9 Mon Sep 17 00:00:00 2001
From: Dominic Yin
Date: Mon, 2 Nov 2020 10:49:07 +0800
Subject: [PATCH 39/50] [zh] Resync docs/concepts/overview/components.md
---
.../zh/docs/concepts/overview/components.md | 28 +++++++++----------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/content/zh/docs/concepts/overview/components.md b/content/zh/docs/concepts/overview/components.md
index 7f9ef72b726cf..433ef8476a5c1 100644
--- a/content/zh/docs/concepts/overview/components.md
+++ b/content/zh/docs/concepts/overview/components.md
@@ -24,14 +24,14 @@ card:
When you deploy Kubernetes, you get a cluster.
@@ -41,22 +41,22 @@ Here's the diagram of a Kubernetes cluster with all the components tied together
Here's the diagram of a Kubernetes cluster with all the components tied together.
-![Components of Kubernetes](/images/docs/components-of-kubernetes.png)
+![Components of Kubernetes](/images/docs/components-of-kubernetes.svg)
## Control Plane Components {#control-plane-components}
The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new {{< glossary_tooltip text="pod" term_id="pod">}} when a Deployment's `replicas` field is unsatisfied).
@@ -84,11 +84,11 @@ the same machine, and do not run user containers on this machine. See
These controllers include:
@@ -118,9 +118,9 @@ The following controllers can have cloud provider dependencies:
{{< glossary_definition term_id="cloud-controller-manager" length="short" >}}
-The `cloud-controller-manager` only runs control loops that are specific to your cloud platform.
+The `cloud-controller-manager` only runs control loops that are specific to your cloud platform.
If you are running Kubernetes on your own premises, or in a learning environment inside your own PC,
-the cluster you deploy does not need a cloud controller manager.
+the cluster you deploy does not need a cloud controller manager.
As with the `kube-controller-manager`, the `cloud-controller-manager` combines several logically independent
control loops into a single binary that you run as a single process.
@@ -221,7 +221,7 @@ about containers in a central database, and provides a UI for browsing that data
### Cluster-level Logging
@@ -234,7 +234,7 @@ saving container logs to a central log store with search/browsing interface.
* Learn about [Nodes](/zh/docs/concepts/architecture/nodes/)
From 237298b5a1054ac0cf0d206492ec4c3c2c37099f Mon Sep 17 00:00:00 2001
From: TAKAHASHI Shuuji
Date: Mon, 2 Nov 2020 20:42:57 +0900
Subject: [PATCH 40/50] Apply suggestions from code review
Co-authored-by: bells17
---
content/ja/docs/concepts/storage/storage-capacity.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/content/ja/docs/concepts/storage/storage-capacity.md b/content/ja/docs/concepts/storage/storage-capacity.md
index 9611c59eacead..a764ba5d99fbd 100644
--- a/content/ja/docs/concepts/storage/storage-capacity.md
+++ b/content/ja/docs/concepts/storage/storage-capacity.md
@@ -6,7 +6,7 @@ weight: 45
-ストレージ容量は、Podが実行されるノードごとに制限があったり、大きさが異なる可能性があります。たとえば、NASがすべてのノードからはアクセスできなかったり、初めはストレージがノードローカルでしか利用できない可能性があります。
+ストレージ容量は、Podが実行されるノードごとに制限があったり、大きさが異なる可能性があります。たとえば、NASがすべてのノードからはアクセスできなかったり、初めからストレージがノードローカルでしか利用できない可能性があります。
{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
@@ -29,18 +29,18 @@ weight: 45
ストレージ容量の情報がKubernetesのスケジューラーで利用されるのは、以下のすべての条件を満たす場合です。
- `CSIStorageCapacity`フィーチャーゲートがtrueである
-- Podがまだ作成されていないボリュームを使用している
+- Podがまだ作成されていないボリュームを使用する時
- そのボリュームが、CSIドライバーを参照し、[volume binding mode](/docs/concepts/storage/storage-classes/#volume-binding-mode)に`WaitForFirstConsumer`を使う{{< glossary_tooltip text="StorageClass" term_id="storage-class" >}}を使用している
-その場合、スケジューラーはPodに対して、十分なストレージが利用できるノードだけを考慮するようになります。このチェックは非常に単純で、ボリュームのサイズと、`CSIStorageCapacity`オブジェクトに一覧された容量を、ノードを含むトポロジで比較するだけです。
+その場合、スケジューラーはPodに対して、十分なストレージ容量が利用できるノードだけを考慮するようになります。このチェックは非常に単純で、ボリュームのサイズと、`CSIStorageCapacity`オブジェクトに一覧された容量を、ノードを含むトポロジーで比較するだけです。
-volume binding modeが`Immediate`のボリュームの場合、ボリュームを使用するPodとは独立に、ストレージドライバーがボリュームの作成場所を決定します。次に、スケジューラーはボリュームが作成された後、Podをボリュームが利用できるノードにスケジューリングします。
+volume binding modeが`Immediate`のボリュームの場合、ストレージドライバーはボリュームを使用するPodとは関係なく、ボリュームを作成する場所を決定します。次に、スケジューラーはボリュームが作成された後、Podをボリュームが利用できるノードにスケジューリングします。
[CSI ephemeral volumes](/docs/concepts/storage/volumes/#csi)の場合、スケジューリングは常にストレージ容量を考慮せずに行われます。このような動作になっているのは、このボリュームタイプはノードローカルな特別なCSIドライバーでのみ使用され、そこでは特に大きなリソースが必要になることはない、という想定に基づいています。
## 再スケジューリング
-`WaitForFirstConsumer`ボリュームがあるPodに対してノードが選択された場合は、その決定はまだ一時的なものです。次のステップで、CSIストレージドライバーに対して、選択されたノード上でボリュームが利用可能になることが予定されているというヒントを付きでボリュームの作成を要求します。
+`WaitForFirstConsumer`ボリュームがあるPodに対してノードが選択された場合は、その決定はまだ一時的なものです。次のステップで、CSIストレージドライバーに対して、選択されたノード上でボリュームが利用可能になることが予定されているというヒントを使用してボリュームの作成を要求します。
Kubernetesは古い容量の情報をもとにノードを選択する場合があるため、実際にはボリュームが作成できないという可能性が存在します。その場合、ノードの選択がリセットされ、KubernetesスケジューラーはPodに割り当てるノードを再び探します。
@@ -48,7 +48,7 @@ Kubernetesは古い容量の情報をもとにノードを選択する場合が
ストレージ容量を追跡することで、1回目の試行でスケジューリングが成功する可能性が高くなります。しかし、スケジューラーは潜在的に古い情報に基づいて決定を行う可能性があるため、成功を保証することはできません。通常、ストレージ容量の情報が存在しないスケジューリングと同様のリトライの仕組みによって、スケジューリングの失敗に対処します。
-スケジューリングが永続的に失敗する状況の1つは、Podが複数のボリュームを使用する場合で、あるトポロジーのセグメントで1つのボリュームがすでに作成された後、もう1つのボリュームのために十分な容量が残っていないような場合です。この状況から回復するには、たとえば、容量を増加させたり、すでに作成されたボリュームを削除するなどの手動の仲介が必要です。この問題に自動的に対処するためには、まだ[追加の作業](https://github.com/kubernetes/enhancements/pull/1703)が必要となっています。
+スケジューリングが永続的に失敗する状況の1つは、Podが複数のボリュームを使用する場合で、あるトポロジーのセグメントで1つのボリュームがすでに作成された後、もう1つのボリュームのために十分な容量が残っていないような場合です。この状況から回復するには、たとえば、容量を増加させたり、すでに作成されたボリュームを削除するなどの手動での対応が必要です。この問題に自動的に対処するためには、まだ[追加の作業](https://github.com/kubernetes/enhancements/pull/1703)が必要となっています。
## ストレージ容量の追跡を有効にする {#enabling-storage-capacity-tracking}
From b5a66fe4db5f00bd3ceba278f98c886b052d8dde Mon Sep 17 00:00:00 2001
From: TAKAHASHI Shuuji
Date: Mon, 2 Nov 2020 20:47:03 +0900
Subject: [PATCH 41/50] Append a missing sentence
---
content/ja/docs/concepts/storage/storage-capacity.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/content/ja/docs/concepts/storage/storage-capacity.md b/content/ja/docs/concepts/storage/storage-capacity.md
index a764ba5d99fbd..7e2f6c34f79dd 100644
--- a/content/ja/docs/concepts/storage/storage-capacity.md
+++ b/content/ja/docs/concepts/storage/storage-capacity.md
@@ -31,6 +31,7 @@ weight: 45
- `CSIStorageCapacity`フィーチャーゲートがtrueである
- Podがまだ作成されていないボリュームを使用する時
- そのボリュームが、CSIドライバーを参照し、[volume binding mode](/docs/concepts/storage/storage-classes/#volume-binding-mode)に`WaitForFirstConsumer`を使う{{< glossary_tooltip text="StorageClass" term_id="storage-class" >}}を使用している
+- ドライバーに対する`CSIDriver`オブジェクトの`StorageCapacity`がtrueに設定されている
その場合、スケジューラーはPodに対して、十分なストレージ容量が利用できるノードだけを考慮するようになります。このチェックは非常に単純で、ボリュームのサイズと、`CSIStorageCapacity`オブジェクトに一覧された容量を、ノードを含むトポロジーで比較するだけです。
From dbdea6b35418dfbc929ed4f032f0db4f596e6339 Mon Sep 17 00:00:00 2001
From: Christoph Blecker
Date: Mon, 2 Nov 2020 15:44:33 -0800
Subject: [PATCH 42/50] new blog post: Remembering Dan Kohn
---
.../_posts/2020-11-02-remembering-dan-kohn.md | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
create mode 100644 content/en/blog/_posts/2020-11-02-remembering-dan-kohn.md
diff --git a/content/en/blog/_posts/2020-11-02-remembering-dan-kohn.md b/content/en/blog/_posts/2020-11-02-remembering-dan-kohn.md
new file mode 100644
index 0000000000000..b8ffb8686a844
--- /dev/null
+++ b/content/en/blog/_posts/2020-11-02-remembering-dan-kohn.md
@@ -0,0 +1,16 @@
+---
+layout: blog
+title: "Remembering Dan Kohn"
+date: 2020-11-02
+slug: remembering-dan-kohn
+---
+
+**Author**: The Kubernetes Steering Committee
+
Dan Kohn was instrumental in getting the Kubernetes and CNCF community to where it is today. He shared our values, motivations, enthusiasm, and community spirit, and helped the Kubernetes community to become the best that it could be. Dan loved getting people together to solve problems big and small. He enabled people to grow their individual scope in the community, which often helped launch their careers in open source software.
+
Dan built a coalition around the nascent Kubernetes project and turned that into a cornerstone to build the larger cloud native space. He loved challenges, especially ones where the payoff was great, like building worldwide communities, spreading the love of open source, and helping diverse, underprivileged communities and students get a head start in technology.
+
Our hearts go out to his family. Thank you, Dan, for bringing your boys to events in India and elsewhere, as we got to know how great you were as a father. Dan, your thoughts and ideas will help us make progress in our journey as a community. Thank you for your life's work!
+
+If Dan has made an impact on you in some way, please consider adding a memory of him in his [CNCF memorial](https://github.com/cncf/memorials/blob/master/dan-kohn.md).
From 3189bdf52a7659904092e742788c2ee3102f35ab Mon Sep 17 00:00:00 2001
From: povsister
Date: Tue, 3 Nov 2020 14:30:44 +0800
Subject: [PATCH 43/50] Fix experimental flag example
---
.../command-line-tools-reference/kubelet-tls-bootstrapping.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md
index 0daa490276308..f512dc991d43f 100644
--- a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md
+++ b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md
@@ -243,7 +243,7 @@ for example:
The validity duration of signed certificates can be configured with the flag:
```
---experimental-cluster-signing-duration
+--cluster-signing-duration
```
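
For example, to have new kubelet certificates signed with a one-year validity (the `8760h` value is purely illustrative, not a recommendation), you could pass:

```
--cluster-signing-duration=8760h
```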
### Approval
From eda7a79ea20ce3d5740ca6ef518795b13e5ee42f Mon Sep 17 00:00:00 2001
From: Aldo Culquicondor
Date: Tue, 3 Nov 2020 09:22:02 -0500
Subject: [PATCH 44/50] Fix feature state for cluster-level default topology
spread
The configuration is part of the PodTopologySpread feature
(previously known as EvenPodsSpread)
---
.../concepts/workloads/pods/pod-topology-spread-constraints.md | 2 --
1 file changed, 2 deletions(-)
diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
index 28b844d47436a..e3a30b3f8b3d8 100644
--- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
+++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
@@ -284,8 +284,6 @@ There are some implicit conventions worth noting here:
### Cluster-level default constraints
-{{< feature-state for_k8s_version="v1.19" state="beta" >}}
-
It is possible to set default topology spread constraints for a cluster. Default
topology spread constraints are applied to a Pod if, and only if:
From 300c2e8545c64b827fc9b260eb86cf3d2abbba94 Mon Sep 17 00:00:00 2001
From: Tim Hockin
Date: Wed, 15 Jul 2020 17:06:26 -0700
Subject: [PATCH 45/50] Better docs for standard topology labels
As per KEP 1659, topology labels are now more formalized. Move away
from the older `failure-domain.beta` names ands use `topology` names
instead.
---
.../configuration/pod-priority-preemption.md | 2 +-
.../scheduling-eviction/assign-pod-node.md | 4 +-
.../docs/concepts/storage/storage-classes.md | 2 +-
content/en/docs/concepts/storage/volumes.md | 2 +-
.../admission-controllers.md | 4 +-
.../command-line-tools-reference/kubelet.md | 2 +-
.../labels-annotations-taints.md | 61 +++++++------------
.../examples/pods/pod-with-pod-affinity.yaml | 4 +-
8 files changed, 32 insertions(+), 49 deletions(-)
diff --git a/content/en/docs/concepts/configuration/pod-priority-preemption.md b/content/en/docs/concepts/configuration/pod-priority-preemption.md
index 40c38ae21e23a..10a054cfa1053 100644
--- a/content/en/docs/concepts/configuration/pod-priority-preemption.md
+++ b/content/en/docs/concepts/configuration/pod-priority-preemption.md
@@ -271,7 +271,7 @@ preempted. Here's an example:
* Pod P is being considered for Node N.
* Pod Q is running on another Node in the same Zone as Node N.
* Pod P has Zone-wide anti-affinity with Pod Q (`topologyKey:
- failure-domain.beta.kubernetes.io/zone`).
+ topology.kubernetes.io/zone`).
* There are no other cases of anti-affinity between Pod P and other Pods in
the Zone.
* In order to schedule Pod P on Node N, Pod Q can be preempted, but scheduler
diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index c132d9affec31..5123f34ca3978 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -200,8 +200,8 @@ The affinity on this pod defines one pod affinity rule and one pod anti-affinity
while the `podAntiAffinity` is `preferredDuringSchedulingIgnoredDuringExecution`. The
pod affinity rule says that the pod can be scheduled onto a node only if that node is in the same zone
as at least one already-running pod that has a label with key "security" and value "S1". (More precisely, the pod is eligible to run
-on node N if node N has a label with key `failure-domain.beta.kubernetes.io/zone` and some value V
-such that there is at least one node in the cluster with key `failure-domain.beta.kubernetes.io/zone` and
+on node N if node N has a label with key `topology.kubernetes.io/zone` and some value V
+such that there is at least one node in the cluster with key `topology.kubernetes.io/zone` and
value V that is running a pod that has a label with key "security" and value "S1".) The pod anti-affinity
rule says that the pod cannot be scheduled onto a node if that node is in the same zone as a pod with
label having key "security" and value "S2". See the
diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md
index 587cc8a501443..9729913dde4f2 100644
--- a/content/en/docs/concepts/storage/storage-classes.md
+++ b/content/en/docs/concepts/storage/storage-classes.md
@@ -209,7 +209,7 @@ parameters:
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
- - key: failure-domain.beta.kubernetes.io/zone
+ - key: topology.kubernetes.io/zone
values:
- us-central1-a
- us-central1-b
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 39410ec2e5319..0a8a34bb400ff 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -449,7 +449,7 @@ spec:
required:
nodeSelectorTerms:
- matchExpressions:
- - key: failure-domain.beta.kubernetes.io/zone
+ - key: topology.kubernetes.io/zone
operator: In
values:
- us-central1-a
diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md
index b4823245d3dd6..8de8b0d361fa3 100644
--- a/content/en/docs/reference/access-authn-authz/admission-controllers.md
+++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md
@@ -534,8 +534,8 @@ and kubelets will not be allowed to modify labels with that prefix.
* `kubernetes.io/os`
* `beta.kubernetes.io/instance-type`
* `node.kubernetes.io/instance-type`
- * `failure-domain.beta.kubernetes.io/region`
- * `failure-domain.beta.kubernetes.io/zone`
+ * `failure-domain.beta.kubernetes.io/region` (deprecated)
+ * `failure-domain.beta.kubernetes.io/zone` (deprecated)
* `topology.kubernetes.io/region`
* `topology.kubernetes.io/zone`
* `kubelet.kubernetes.io/`-prefixed labels
diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md
index 2a123796a14ac..4bf6f9f7692c4 100644
--- a/content/en/docs/reference/command-line-tools-reference/kubelet.md
+++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md
@@ -967,7 +967,7 @@ WindowsEndpointSliceProxying=true|false (ALPHA - default=false)
--node-labels mapStringString
-<Warning: Alpha feature> Labels to add when registering the node in the cluster. Labels must be `key=value` pairs separated by `,`. Labels in the `kubernetes.io` namespace must begin with an allowed prefix (`kubelet.kubernetes.io`, `node.kubernetes.io`) or be in the specifically allowed set (`beta.kubernetes.io/arch`, `beta.kubernetes.io/instance-type`, `beta.kubernetes.io/os`, `failure-domain.beta.kubernetes.io/region`, `failure-domain.beta.kubernetes.io/zone`, `failure-domain.kubernetes.io/region`, `failure-domain.kubernetes.io/zone`, `kubernetes.io/arch`, `kubernetes.io/hostname`, `kubernetes.io/instance-type`, `kubernetes.io/os`)
+<Warning: Alpha feature> Labels to add when registering the node in the cluster. Labels must be `key=value` pairs separated by `,`. Labels in the `kubernetes.io` namespace must begin with an allowed prefix (`kubelet.kubernetes.io`, `node.kubernetes.io`) or be in the specifically allowed set (`beta.kubernetes.io/arch`, `beta.kubernetes.io/instance-type`, `beta.kubernetes.io/os`, `failure-domain.beta.kubernetes.io/region`, `failure-domain.beta.kubernetes.io/zone`, `kubernetes.io/arch`, `kubernetes.io/hostname`, `kubernetes.io/os`, `node.kubernetes.io/instance-type`, `topology.kubernetes.io/region`, `topology.kubernetes.io/zone`)
diff --git a/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md b/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md
index d1faa51a8876b..b0ef5d5a65265 100644
--- a/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md
+++ b/content/en/docs/reference/kubernetes-api/labels-annotations-taints.md
@@ -38,7 +38,7 @@ This label has been deprecated. Please use `kubernetes.io/arch` instead.
This label has been deprecated. Please use `kubernetes.io/os` instead.
-## kubernetes.io/hostname
+## kubernetes.io/hostname {#kubernetesiohostname}
Example: `kubernetes.io/hostname=ip-172-20-114-199.ec2.internal`
@@ -46,6 +46,8 @@ Used on: Node
The Kubelet populates this label with the hostname. Note that the hostname can be changed from the "actual" hostname by passing the `--hostname-override` flag to the `kubelet`.
+This label is also used as part of the topology hierarchy. See [topology.kubernetes.io/zone](#topologykubernetesiozone) for more information.
+
## beta.kubernetes.io/instance-type (deprecated)
{{< note >}} Starting in v1.17, this label is deprecated in favor of [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type). {{< /note >}}
@@ -63,71 +65,52 @@ to rely on the Kubernetes scheduler to perform resource-based scheduling. You sh
## failure-domain.beta.kubernetes.io/region (deprecated) {#failure-domainbetakubernetesioregion}
-See [failure-domain.beta.kubernetes.io/zone](#failure-domainbetakubernetesiozone).
+See [topology.kubernetes.io/region](#topologykubernetesioregion).
{{< note >}} Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/region](#topologykubernetesioregion). {{< /note >}}
## failure-domain.beta.kubernetes.io/zone (deprecated) {#failure-domainbetakubernetesiozone}
-Example:
-
-`failure-domain.beta.kubernetes.io/region=us-east-1`
-
-`failure-domain.beta.kubernetes.io/zone=us-east-1c`
-
-Used on: Node, PersistentVolume
-
-On the Node: The `kubelet` populates this with the zone information as defined by the `cloudprovider`.
-This will be set only if you are using a `cloudprovider`. However, you should consider setting this
-on the nodes if it makes sense in your topology.
-
-On the PersistentVolume: The `PersistentVolumeLabel` admission controller will automatically add zone labels to PersistentVolumes, on GCE and AWS.
-
-Kubernetes will automatically spread the Pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended across zones (to reduce the impact of zone failures). This is achieved via _SelectorSpreadPriority_.
-
-_SelectorSpreadPriority_ is a best effort placement. If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements), this placement might prevent equal spreading of your Pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
-
-The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure that Pods, that claim a given volume, are only placed into the same zone as that volume. Volumes cannot be attached across zones.
-
-The actual values of zone and region don't matter. Nor is the node hierarchy rigidly defined.
-The expectation is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example, zones should typically avoid sharing a single network switch. The exact mapping depends on your particular infrastructure - a three rack installation will choose a very different setup to a multi-datacenter configuration.
-
-If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
-adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
+See [topology.kubernetes.io/zone](#topologykubernetesiozone).
{{< note >}} Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/zone](#topologykubernetesiozone). {{< /note >}}
## topology.kubernetes.io/region {#topologykubernetesioregion}
+Example:
+
+`topology.kubernetes.io/region=us-east-1`
+
See [topology.kubernetes.io/zone](#topologykubernetesiozone).
## topology.kubernetes.io/zone {#topologykubernetesiozone}
Example:
-`topology.kubernetes.io/region=us-east-1`
-
`topology.kubernetes.io/zone=us-east-1c`
Used on: Node, PersistentVolume
-On the Node: The `kubelet` populates this with the zone information as defined by the `cloudprovider`.
-This will be set only if you are using a `cloudprovider`. However, you should consider setting this
-on the nodes if it makes sense in your topology.
+On Node: The `kubelet` or the external `cloud-controller-manager` populates this with the information as provided by the `cloudprovider`. This will be set only if you are using a `cloudprovider`. However, you should consider setting this on nodes if it makes sense in your topology.
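+
+For example (an illustrative check, assuming your cloud provider has populated these labels), you can list each node's zone and region:
+
+```shell
+kubectl get nodes -L topology.kubernetes.io/zone -L topology.kubernetes.io/region
+```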
-On the PersistentVolume: The `PersistentVolumeLabel` admission controller will automatically add zone labels to PersistentVolumes, on GCE and AWS.
+On PersistentVolume: topology-aware volume provisioners will automatically set node affinity constraints on `PersistentVolumes`.
-Kubernetes will automatically spread the Pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behaviour is extended across zones (to reduce the impact of zone failures). This is achieved via _SelectorSpreadPriority_.
+A zone represents a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. While the exact definition of a zone is left to infrastructure implementations, common properties of a zone include very low network latency within a zone, no-cost network traffic within a zone, and failure independence from other zones. For example, nodes within a zone might share a network switch, but nodes in different zones should not.
+
+A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between them than within them, non-zero cost for network traffic between them, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. a UPS or generator), but nodes in different regions typically would not.
+
+Kubernetes makes a few assumptions about the structure of zones and regions:
+1) regions and zones are hierarchical: zones are strict subsets of regions and no zone can be in 2 regions
+2) zone names are unique across regions; for example region "africa-east-1" might be composed of zones "africa-east-1a" and "africa-east-1b"
+
+It should be safe to assume that topology labels do not change. Even though labels are strictly mutable, consumers of them can assume that a given node is not going to be moved between zones without being destroyed and recreated.
+
+Kubernetes can use this information in various ways. For example, the scheduler automatically tries to spread the Pods in a ReplicaSet across nodes in a single-zone cluster (to reduce the impact of node failures, see [kubernetes.io/hostname](#kubernetesiohostname)). With multiple-zone clusters, this spreading behavior also applies to zones (to reduce the impact of zone failures). This is achieved via _SelectorSpreadPriority_.
_SelectorSpreadPriority_ is a best effort placement. If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements), this placement might prevent equal spreading of your Pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure that Pods, that claim a given volume, are only placed into the same zone as that volume. Volumes cannot be attached across zones.
-The actual values of zone and region don't matter. Nor is the node hierarchy rigidly defined.
-The expectation is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example, zones should typically avoid sharing a single network switch. The exact mapping depends on your particular infrastructure - a three rack installation will choose a very different setup to a multi-datacenter configuration.
-
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
-
-
diff --git a/content/en/examples/pods/pod-with-pod-affinity.yaml b/content/en/examples/pods/pod-with-pod-affinity.yaml
index 35e645ef1f376..6d075e086088b 100644
--- a/content/en/examples/pods/pod-with-pod-affinity.yaml
+++ b/content/en/examples/pods/pod-with-pod-affinity.yaml
@@ -12,7 +12,7 @@ spec:
operator: In
values:
- S1
- topologyKey: failure-domain.beta.kubernetes.io/zone
+ topologyKey: topology.kubernetes.io/zone
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
@@ -23,7 +23,7 @@ spec:
operator: In
values:
- S2
- topologyKey: failure-domain.beta.kubernetes.io/zone
+ topologyKey: topology.kubernetes.io/zone
containers:
- name: with-pod-affinity
image: k8s.gcr.io/pause:2.0
From dd618cff39c7cc145c0a50cd996c5110f814094c Mon Sep 17 00:00:00 2001
From: Karen Bradshaw
Date: Wed, 28 Oct 2020 11:19:25 -0400
Subject: [PATCH 46/50] clean up turnkey cloud solutions
---
.../container-runtimes.md | 2 +-
.../production-environment/turnkey/_index.md | 4 -
.../turnkey/alibaba-cloud.md | 20 --
.../production-environment/turnkey/aws.md | 88 -------
.../production-environment/turnkey/azure.md | 36 ---
.../production-environment/turnkey/gce.md | 223 ------------------
.../production-environment/turnkey/icp.md | 65 -----
.../production-environment/turnkey/tencent.md | 19 --
.../horizontal-pod-autoscale-walkthrough.md | 53 +++--
static/_redirects | 5 -
10 files changed, 28 insertions(+), 487 deletions(-)
delete mode 100644 content/en/docs/setup/production-environment/turnkey/_index.md
delete mode 100644 content/en/docs/setup/production-environment/turnkey/alibaba-cloud.md
delete mode 100644 content/en/docs/setup/production-environment/turnkey/aws.md
delete mode 100644 content/en/docs/setup/production-environment/turnkey/azure.md
delete mode 100644 content/en/docs/setup/production-environment/turnkey/gce.md
delete mode 100644 content/en/docs/setup/production-environment/turnkey/icp.md
delete mode 100644 content/en/docs/setup/production-environment/turnkey/tencent.md
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index cbe6cba37f605..ecaf53c03f3d8 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -4,7 +4,7 @@ reviewers:
- bart0sh
title: Container runtimes
content_type: concept
-weight: 10
+weight: 20
---
diff --git a/content/en/docs/setup/production-environment/turnkey/_index.md b/content/en/docs/setup/production-environment/turnkey/_index.md
deleted file mode 100644
index 1941966bb02ca..0000000000000
--- a/content/en/docs/setup/production-environment/turnkey/_index.md
+++ /dev/null
@@ -1,4 +0,0 @@
----
-title: Turnkey Cloud Solutions
-weight: 30
----
diff --git a/content/en/docs/setup/production-environment/turnkey/alibaba-cloud.md b/content/en/docs/setup/production-environment/turnkey/alibaba-cloud.md
deleted file mode 100644
index d83ecf18ac3cc..0000000000000
--- a/content/en/docs/setup/production-environment/turnkey/alibaba-cloud.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-reviewers:
-- colemickens
-- brendandburns
-title: Running Kubernetes on Alibaba Cloud
----
-
-## Alibaba Cloud Container Service
-
-The [Alibaba Cloud Container Service](https://www.alibabacloud.com/product/container-service) lets you run and manage Docker applications on a cluster of either Alibaba Cloud ECS instances or in a Serverless fashion. It supports the popular open source container orchestrators: Docker Swarm and Kubernetes.
-
-To simplify cluster deployment and management, use [Kubernetes Support for Alibaba Cloud Container Service](https://www.alibabacloud.com/product/kubernetes). You can get started quickly by following the [Kubernetes walk-through](https://www.alibabacloud.com/help/doc-detail/86737.htm), and there are some [tutorials for Kubernetes Support on Alibaba Cloud](https://yq.aliyun.com/teams/11/type_blog-cid_200-page_1) in Chinese.
-
-To use custom binaries or open source Kubernetes, follow the instructions below.
-
-## Custom Deployments
-
-The source code for [Kubernetes with Alibaba Cloud provider implementation](https://github.com/AliyunContainerService/kubernetes) is open source and available on GitHub.
-
-For more information, see "[Quick deployment of Kubernetes - VPC environment on Alibaba Cloud](https://www.alibabacloud.com/forum/read-830)" in English.
diff --git a/content/en/docs/setup/production-environment/turnkey/aws.md b/content/en/docs/setup/production-environment/turnkey/aws.md
deleted file mode 100644
index 7dd901aa0f1a9..0000000000000
--- a/content/en/docs/setup/production-environment/turnkey/aws.md
+++ /dev/null
@@ -1,88 +0,0 @@
----
-reviewers:
-- justinsb
-- clove
-title: Running Kubernetes on AWS EC2
-content_type: task
----
-
-
-
-This page describes how to install a Kubernetes cluster on AWS.
-
-
-
-## {{% heading "prerequisites" %}}
-
-
-To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
-
-### Supported Production Grade Tools
-
-* [conjure-up](https://docs.conjure-up.io/stable/en/cni/k8s-and-aws) is an open-source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu.
-
-* [Kubernetes Operations](https://github.com/kubernetes/kops) - Production Grade K8s Installation, Upgrades, and Management. Supports running Debian, Ubuntu, CentOS, and RHEL in AWS.
-
-* [kube-aws](https://github.com/kubernetes-retired/kube-aws), creates and manages Kubernetes clusters with [Flatcar Linux](https://www.flatcar-linux.org/) nodes, using AWS tools: EC2, CloudFormation and Autoscaling.
-
-* [KubeOne](https://github.com/kubermatic/kubeone) is an open source cluster lifecycle management tool that creates, upgrades and manages Kubernetes Highly-Available clusters.
-
-
-
-
-
-## Getting started with your cluster
-
-### Command line administration tool: kubectl
-
-The cluster startup script will leave you with a `kubernetes` directory on your workstation.
-Alternately, you can download the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases).
-
-Next, add the appropriate binary folder to your `PATH` to access kubectl:
-
-```shell
-# macOS
-export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
-
-# Linux
-export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
-```
-
-An up-to-date documentation page for this tool is available here: [kubectl manual](/docs/reference/kubectl/kubectl/)
-
-By default, `kubectl` will use the `kubeconfig` file generated during the cluster startup for authenticating against the API.
-For more information, please read [kubeconfig files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
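-
-For example, a quick check that `kubectl` can reach the new cluster using the generated `kubeconfig`:
-
-```shell
-# print the active context and confirm the API server is reachable
-kubectl config current-context
-kubectl cluster-info
-```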
-
-### Examples
-
-See [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
-
-The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
-
-For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/).
-
-## Scaling the cluster
-
-Adding and removing nodes through `kubectl` is not supported. You can still scale the number of nodes manually by adjusting the 'Desired' and 'Max' properties within the
-[Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html), which was created during the installation.
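-
-For example, a sketch using the AWS CLI (the Auto Scaling group name below is a placeholder; use the one created by your installer):
-
-```shell
-# raise the desired and maximum node counts of the cluster's Auto Scaling group
-aws autoscaling update-auto-scaling-group \
-  --auto-scaling-group-name kubernetes-minion-group \
-  --desired-capacity 5 \
-  --max-size 10
-```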
-
-## Tearing down the cluster
-
-Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
-`kubernetes` directory:
-
-```shell
-cluster/kube-down.sh
-```
-
-## Support Level
-
-
-IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
--------------------- | ------------ | ------------- | ---------- | --------------------------------------------- | ---------| ----------------------------
-AWS | kops | Debian | k8s (VPC) | [docs](https://github.com/kubernetes/kops) | | Community ([@justinsb](https://github.com/justinsb))
-AWS | CoreOS | CoreOS | flannel | - | | Community
-AWS | Juju | Ubuntu | flannel, calico, canal | - | 100% | Commercial, Community
-AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | [docs](https://github.com/kubermatic/kubeone) | 100% | Commercial, Community
-
-
diff --git a/content/en/docs/setup/production-environment/turnkey/azure.md b/content/en/docs/setup/production-environment/turnkey/azure.md
deleted file mode 100644
index eccbbca75bcb3..0000000000000
--- a/content/en/docs/setup/production-environment/turnkey/azure.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-reviewers:
-- colemickens
-- brendandburns
-title: Running Kubernetes on Azure
----
-
-## Azure Kubernetes Service (AKS)
-
-The [Azure Kubernetes Service](https://azure.microsoft.com/en-us/services/kubernetes-service/) offers simple
-deployments for Kubernetes clusters.
-
-For an example of deploying a Kubernetes cluster onto Azure via the Azure Kubernetes Service:
-
-**[Microsoft Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes)**
-
-## Custom Deployments: AKS-Engine
-
-The core of the Azure Kubernetes Service is **open source** and available on GitHub for the community
-to use and contribute to: **[AKS-Engine](https://github.com/Azure/aks-engine)**. The legacy [ACS-Engine](https://github.com/Azure/acs-engine) codebase has been deprecated in favor of AKS-Engine.
-
-AKS-Engine is a good choice if you need to make customizations to the deployment beyond what the Azure Kubernetes
-Service officially supports. These customizations include deploying into existing virtual networks, utilizing multiple
-agent pools, and more. Some community contributions to AKS-Engine may even become features of the Azure Kubernetes Service.
-
-The input to AKS-Engine is an apimodel JSON file describing the Kubernetes cluster. It is similar to the Azure Resource Manager (ARM) template syntax used to deploy a cluster directly with the Azure Kubernetes Service. The resulting output is an ARM template that can be checked into source control and used to deploy Kubernetes clusters to Azure.
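-
-For illustration, a minimal apimodel might look like the following sketch (field values are placeholders and the fields shown are assumptions; see the AKS-Engine examples for authoritative schemas):
-
-```json
-{
-  "apiVersion": "vlabs",
-  "properties": {
-    "orchestratorProfile": { "orchestratorType": "Kubernetes" },
-    "masterProfile": { "count": 1, "dnsPrefix": "my-cluster", "vmSize": "Standard_D2_v3" },
-    "agentPoolProfiles": [
-      { "name": "agentpool1", "count": 3, "vmSize": "Standard_D2_v3" }
-    ],
-    "linuxProfile": {
-      "adminUsername": "azureuser",
-      "ssh": { "publicKeys": [ { "keyData": "ssh-rsa AAAA..." } ] }
-    }
-  }
-}
-```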
-
-You can get started by following the **[AKS-Engine Kubernetes Tutorial](https://github.com/Azure/aks-engine/blob/master/docs/tutorials/README.md)**.
-
-## CoreOS Tectonic for Azure
-
-The CoreOS Tectonic Installer for Azure is **open source** and available on GitHub for the community to use and contribute to: **[Tectonic Installer](https://github.com/coreos/tectonic-installer)**.
-
-Tectonic Installer is a good choice when you need to make cluster customizations as it is built on [Hashicorp's Terraform](https://www.terraform.io/docs/providers/azurerm/) Azure Resource Manager (ARM) provider. This enables users to customize or integrate using familiar Terraform tooling.
-
-You can get started using the [Tectonic Installer for Azure Guide](https://coreos.com/tectonic/docs/latest/install/azure/azure-terraform.html).
diff --git a/content/en/docs/setup/production-environment/turnkey/gce.md b/content/en/docs/setup/production-environment/turnkey/gce.md
deleted file mode 100644
index 78386161a64cf..0000000000000
--- a/content/en/docs/setup/production-environment/turnkey/gce.md
+++ /dev/null
@@ -1,223 +0,0 @@
----
-reviewers:
-- brendandburns
-- jbeda
-- mikedanese
-- thockin
-title: Running Kubernetes on Google Compute Engine
-content_type: task
----
-
-
-
-The example below creates a Kubernetes cluster with 3 worker node Virtual Machines and a master Virtual Machine (i.e. 4 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
-
-
-
-## {{% heading "prerequisites" %}}
-
-
-If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) for hosted cluster installation and management.
-
-For an easy way to experiment with the Kubernetes development environment, click the button below
-to open a Google Cloud Shell with an auto-cloned copy of the Kubernetes source repo.
-
-[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/kubernetes/kubernetes&page=editor&open_in_editor=README.md)
-
-If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.
-
-### Prerequisites
-
-1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](https://console.cloud.google.com) for more details.
-1. Install `gcloud` as necessary. `gcloud` can be installed as a part of the [Google Cloud SDK](https://cloud.google.com/sdk/).
-1. Enable the [Compute Engine Instance Group Manager API](https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview) in the [Google Cloud developers console](https://console.developers.google.com/apis/library).
-1. Make sure that gcloud is set to use the Google Cloud Platform project you want. You can check the current project using `gcloud config list project` and change it via `gcloud config set project <project-id>`.
-1. Make sure you have credentials for GCloud by running `gcloud auth login`.
-1. (Optional) In order to make API calls against GCE, you must also run `gcloud auth application-default login`.
-1. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/instances/#startinstancegcloud) part of the GCE Quickstart.
-1. Make sure you can SSH into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/instances/#sshing) part of the GCE Quickstart. A sanity-check sketch for these prerequisites follows this list.
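-
-As a rough sanity check of the prerequisites above (a sketch; the instance name and zone below are placeholders):
-
-```shell
-# confirm the active project and credentials
-gcloud config list project
-gcloud auth list
-# confirm you can start, reach, and remove a VM non-interactively
-gcloud compute instances create prereq-test --zone=us-central1-b
-gcloud compute ssh prereq-test --zone=us-central1-b --command "echo ok"
-gcloud compute instances delete prereq-test --zone=us-central1-b --quiet
-```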
-
-
-
-
-
-## Starting a cluster
-
-You can install a client and start a cluster with either of these commands (both are listed in case only one is installed on your machine):
-
-
-```shell
-curl -sS https://get.k8s.io | bash
-```
-
-or
-
-```shell
-wget -q -O - https://get.k8s.io | bash
-```
-
-Once this command completes, you will have a master VM and three worker VMs, running as a Kubernetes cluster.
-
-By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services.
-
-The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
-
-Alternatively, you can download and install the latest Kubernetes release from [this page](https://github.com/kubernetes/kubernetes/releases), then run the `cluster/kube-up.sh` script to start the cluster:
-
-```shell
-cd kubernetes
-cluster/kube-up.sh
-```
-
-If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.
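-
-For example, a sketch of overriding defaults before startup (the variable names are assumed from `config-default.sh`; check that file for the authoritative list):
-
-```shell
-# run more worker nodes in a different zone (illustrative values)
-export NUM_NODES=5
-export KUBE_GCE_ZONE=us-central1-f
-cluster/kube-up.sh
-```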
-
-If you run into trouble, please see the section on [troubleshooting](/docs/setup/production-environment/turnkey/gce/#troubleshooting), post to the
-[Kubernetes Forum](https://discuss.kubernetes.io), or come ask questions on the `#gke` Slack channel.
-
-The next few steps will show you:
-
-1. How to set up the command line client on your workstation to manage the cluster
-1. Examples of how to use the cluster
-1. How to delete the cluster
-1. How to start clusters with non-default options (like larger clusters)
-
-## Installing the Kubernetes command line tools on your workstation
-
-The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
-
-The [kubectl](/docs/reference/kubectl/kubectl/) tool controls the Kubernetes cluster
-manager. It lets you inspect your cluster resources, create, delete, and update
-components, and much more. You will use it to look at your new cluster and bring
-up example apps.
-
-You can use `gcloud` to install the `kubectl` command-line tool on your workstation:
-
-```shell
-gcloud components install kubectl
-```
-
-{{< note >}}
-The kubectl version bundled with `gcloud` may be older than the one
-downloaded by the get.k8s.io install script. See the [Installing kubectl](/docs/tasks/tools/install-kubectl/)
-documentation for how to set up the latest `kubectl` on your workstation.
-{{< /note >}}
-
-## Getting started with your cluster
-
-### Inspect your cluster
-
-Once `kubectl` is in your path, you can use it to look at your cluster. For example, running:
-
-```shell
-kubectl get --all-namespaces services
-```
-
-should show a set of [services](/docs/concepts/services-networking/service/) that look something like this:
-
-```shell
-NAMESPACE NAME TYPE CLUSTER_IP EXTERNAL_IP PORT(S) AGE
-default       kubernetes    ClusterIP    10.0.0.1     <none>        443/TCP        1d
-kube-system   kube-dns      ClusterIP    10.0.0.2     <none>        53/TCP,53/UDP  1d
-kube-system   kube-ui       ClusterIP    10.0.0.3     <none>        80/TCP         1d
-...
-```
-
-Similarly, you can take a look at the set of [pods](/docs/concepts/workloads/pods/) that were created during cluster startup
-by running:
-
-```shell
-kubectl get --all-namespaces pods
-```
-
-You'll see a list of pods that looks something like this (the name specifics will be different):
-
-```shell
-NAMESPACE NAME READY STATUS RESTARTS AGE
-kube-system coredns-5f4fbb68df-mc8z8 1/1 Running 0 15m
-kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
-kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
-kube-system fluentd-cloud-logging-kubernetes-minion-c4og 1/1 Running 0 14m
-kube-system fluentd-cloud-logging-kubernetes-minion-ngua 1/1 Running 0 14m
-kube-system kube-ui-v1-curt1 1/1 Running 0 15m
-kube-system monitoring-heapster-v5-ex4u3 1/1 Running 1 15m
-kube-system monitoring-influx-grafana-v1-piled 2/2 Running 0 15m
-```
-
-Some of the pods may take a few seconds to start up (during this time they'll show `Pending`), but check that they all show as `Running` after a short period.
-
-### Run some examples
-
-Then, see [a simple nginx example](/docs/tasks/run-application/run-stateless-application-deployment/) to try out your new cluster.
-
-For more complete applications, please look in the [examples directory](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/). The [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) is a good "getting started" walkthrough.
-
-## Tearing down the cluster
-
-To tear down the cluster, use the `kube-down.sh` script.
-
-```shell
-cd kubernetes
-cluster/kube-down.sh
-```
-
-Likewise, the `kube-up.sh` script in the same directory will bring it back up. You do not need to rerun the `curl` or `wget` command: everything needed to set up the Kubernetes cluster is now on your workstation.
-
-## Customizing
-
-The script above relies on Google Storage to stage the Kubernetes release. It
-will then start (by default) a single master VM along with 3 worker VMs. You
-can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`.
-You can view a transcript of a successful cluster creation
-[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
-
-## Troubleshooting
-
-### Project settings
-
-You need to have the Google Cloud Storage API and the Google Cloud Storage
-JSON API enabled. These are activated by default for new projects. Otherwise, they
-can be enabled in the Google Cloud Console. See the [Google Cloud Storage JSON
-API Overview](https://cloud.google.com/storage/docs/json_api/) for more
-details.
-
-Also ensure that, as listed in the [Prerequisites section](#prerequisites), you've enabled the `Compute Engine Instance Group Manager API`, and can start up a GCE VM from the command line as in the [GCE Quickstart](https://cloud.google.com/compute/docs/quickstart) instructions.
-
-### Cluster initialization hang
-
-If the Kubernetes startup script hangs waiting for the API to be reachable, you can troubleshoot by SSHing into the master and node VMs and looking at logs such as `/var/log/startupscript.log`.
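-
-For example (a sketch; the master instance name and zone depend on your configuration):
-
-```shell
-# inspect the tail of the startup log on the master VM
-gcloud compute ssh kubernetes-master --zone=us-central1-b \
-  --command "tail -n 50 /var/log/startupscript.log"
-```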
-
-**Once you fix the issue, you should run `kube-down.sh` to clean up** after the partial cluster creation, before running `kube-up.sh` to try again.
-
-### SSH
-
-If you're having trouble SSHing into your instances, ensure the GCE firewall
-isn't blocking port 22 to your VMs. By default, this should work, but if you
-have edited firewall rules or created a new non-default network, you'll need to
-expose it: `gcloud compute firewall-rules create default-ssh --network=<network-name>
---description "SSH allowed from anywhere" --allow tcp:22`
-
-Additionally, your GCE SSH key must either have no passphrase or you need to be
-using `ssh-agent`.
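-
-For example, to load the default GCE key into `ssh-agent` (the key path shown is the usual default; yours may differ):
-
-```shell
-# start an agent and add the GCE key so SSH works non-interactively
-eval "$(ssh-agent -s)"
-ssh-add ~/.ssh/google_compute_engine
-```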
-
-### Networking
-
-The instances must be able to connect to each other using their private IPs. The
-script uses the "default" network, which should have a firewall rule called
-"default-allow-internal" that allows traffic on any port on the private IPs.
-If this rule is missing from the default network, or if you change the network
-being used in `cluster/config-default.sh`, create a new rule with the following
-field values (an example command follows the list):
-
-* Source Ranges: `10.0.0.0/8`
-* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
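-
-For example, a sketch using the field values above on the default network:
-
-```shell
-# recreate the internal-traffic rule with the values listed above
-gcloud compute firewall-rules create default-allow-internal \
-  --network=default \
-  --source-ranges=10.0.0.0/8 \
-  --allow=tcp:1-65535,udp:1-65535,icmp
-```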
-
-## Support Level
-
-
-IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
--------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
-GCE | Saltstack | Debian | GCE | [docs](/docs/setup/production-environment/turnkey/gce/) | | Project
-
-
diff --git a/content/en/docs/setup/production-environment/turnkey/icp.md b/content/en/docs/setup/production-environment/turnkey/icp.md
deleted file mode 100644
index 1ebb7a9267896..0000000000000
--- a/content/en/docs/setup/production-environment/turnkey/icp.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-reviewers:
-- bradtopol
-title: Running Kubernetes on Multiple Clouds with IBM Cloud Private
----
-
-IBM® Cloud Private is a turnkey, on-premises cloud solution. IBM Cloud Private delivers pure upstream Kubernetes with the typical management components that are required to run real enterprise workloads. These workloads include health management, log management, audit trails, and metering for tracking usage of workloads on the platform.
-
-IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from [Docker Hub](https://hub.docker.com/r/ibmcom/icp-inception/). The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform. If you want to try IBM Cloud Private, you can use either the hosted trial, the tutorial, or the self-guided demo. You can also try the free community edition. For details, see [Get started with IBM Cloud Private](https://www.ibm.com/cloud/private/get-started).
-
-For more information, explore the following resources:
-
-* [IBM Cloud Private](https://www.ibm.com/cloud/private)
-* [Reference architecture for IBM Cloud Private](https://github.com/ibm-cloud-architecture/refarch-privatecloud)
-* [IBM Cloud Private documentation](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html)
-
-## IBM Cloud Private and Terraform
-
-The following Terraform modules are available for deploying IBM Cloud Private:
-
-* AWS: [Deploy IBM Cloud Private to AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws)
-* Azure: [Deploy IBM Cloud Private to Azure](https://github.com/ibm-cloud-architecture/terraform-icp-azure)
-* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud)
-* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack)
-* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy)
-* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware)
-
-## IBM Cloud Private on AWS
-
-You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws).
-
-## IBM Cloud Private on Azure
-
-You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [IBM Cloud Private on Azure](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/azure_overview.html).
-
-## IBM Cloud Private with Red Hat OpenShift
-
-You can deploy IBM certified software containers that are running on IBM Cloud Private onto Red Hat OpenShift.
-
-Integration capabilities:
-
-* Supports Linux® 64-bit platform in offline-only installation mode
-* Single-master configuration
-* Integrated IBM Cloud Private cluster management console and catalog
-* Integrated core platform services, such as monitoring, metering, and logging
-* IBM Cloud Private uses the OpenShift image registry
-
-For more information, see [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/supported_environments/openshift/overview.html).
-
-## IBM Cloud Private on VirtualBox
-
-To install IBM Cloud Private to a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox).
-
-## IBM Cloud Private on VMware
-
-You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects:
-
-* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md)
-* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel)
-
-The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud.
-
-For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/vmwaresolutions?topic=vmwaresolutions-icp_overview).
diff --git a/content/en/docs/setup/production-environment/turnkey/tencent.md b/content/en/docs/setup/production-environment/turnkey/tencent.md
deleted file mode 100644
index fadef5d3dd8af..0000000000000
--- a/content/en/docs/setup/production-environment/turnkey/tencent.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Running Kubernetes on Tencent Kubernetes Engine
----
-
-## Tencent Kubernetes Engine
-
- [Tencent Cloud Tencent Kubernetes Engine (TKE)](https://intl.cloud.tencent.com/product/tke) provides native Kubernetes container management services. You can deploy and manage a Kubernetes cluster with TKE in just a few steps. For detailed directions, see [Deploy Tencent Kubernetes Engine](https://intl.cloud.tencent.com/document/product/457/11741).
-
- TKE is a [Certified Kubernetes product](https://www.cncf.io/certification/software-conformance/). It is fully compatible with the native Kubernetes API.
-
-## Custom Deployment
-
- The core of Tencent Kubernetes Engine is open source and available [on GitHub](https://github.com/TencentCloud/tencentcloud-cloud-controller-manager/).
-
- When using TKE to create a Kubernetes cluster, you can choose managed mode or independent deployment mode. In addition, you can customize the deployment as needed; for example, you can choose an existing Cloud Virtual Machine instance for cluster creation or enable kube-proxy in IPVS mode.
-
-## What's Next
-
- To learn more, see the [TKE documentation](https://intl.cloud.tencent.com/document/product/457).
\ No newline at end of file
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index c840034f0271f..49009e1268807 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -16,34 +16,32 @@ in a replication controller, deployment, replica set or stateful set based on ob
(or, with beta support, on some other, application-provided metrics).
This document walks you through an example of enabling Horizontal Pod Autoscaler for the php-apache server.
-For more information on how Horizontal Pod Autoscaler behaves, see the
+For more information on how Horizontal Pod Autoscaler behaves, see the
[Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/).
## {{% heading "prerequisites" %}}
-
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later.
-[metrics-server](https://github.com/kubernetes-sigs/metrics-server) monitoring needs to be deployed in the cluster
-to provide metrics via the resource metrics API, as Horizontal Pod Autoscaler uses this API to collect metrics. The instructions for deploying this are on the GitHub repository of [metrics-server](https://github.com/kubernetes-sigs/metrics-server), if you followed [getting started on GCE guide](/docs/setup/production-environment/turnkey/gce/),
-metrics-server monitoring will be turned-on by default.
-
-To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster
-and kubectl at version 1.6 or later. Furthermore, in order to make use of custom metrics, your cluster
-must be able to communicate with the API server providing the custom metrics API. Finally, to use metrics
-not related to any Kubernetes object you must have a Kubernetes cluster at version 1.10 or later, and
-you must be able to communicate with the API server that provides the external metrics API.
+[Metrics server](https://github.com/kubernetes-sigs/metrics-server) monitoring needs to be deployed
+in the cluster to provide metrics through the [Metrics API](https://github.com/kubernetes/metrics).
+Horizontal Pod Autoscaler uses this API to collect metrics. To learn how to deploy the metrics-server,
+see the [metrics-server documentation](https://github.com/kubernetes-sigs/metrics-server#deployment).
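+
+For example, one common way to deploy it is from the release manifest (the URL pattern below is assumed from the metrics-server releases page; check there for the current version):
+
+```shell
+# deploy metrics-server from its published release manifest
+kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
+```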
+
+To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a
+Kubernetes cluster and kubectl at version 1.6 or later. To make use of custom metrics, your cluster
+must be able to communicate with the API server providing the custom Metrics API.
+Finally, to use metrics not related to any Kubernetes object you must have a
+Kubernetes cluster at version 1.10 or later, and you must be able to communicate
+with the API server that provides the external Metrics API.
See the [Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics) for more details.
-
-
-## Run & expose php-apache server
+## Run and expose php-apache server
-To demonstrate Horizontal Pod Autoscaler we will use a custom docker image based on the php-apache image.
-The Dockerfile has the following content:
+To demonstrate Horizontal Pod Autoscaler, we will use a custom Docker image based on the php-apache image. The Dockerfile has the following content:
-```
+```dockerfile
FROM php:5-apache
COPY index.php /var/www/html/index.php
RUN chmod a+rx index.php
@@ -51,7 +49,7 @@ RUN chmod a+rx index.php
It defines an index.php page which performs some CPU intensive computations:
-```
+```php
<?php
  $x = 0.0001;
  for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
  }
  echo "OK!";
?>
```

First, we will start a deployment running the image and expose it as a service
using the following configuration:

{{< codenew file="application/php-apache.yaml" >}}
-
Run the following command:
+
```shell
kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
```
+
```
deployment.apps/php-apache created
service/php-apache created
@@ -90,6 +89,7 @@ See [here](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-detai
```shell
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```
+
```
horizontalpodautoscaler.autoscaling/php-apache autoscaled
```
@@ -99,10 +99,10 @@ We may check the current status of autoscaler by running:
```shell
kubectl get hpa
```
+
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s
-
```
Please note that the current CPU consumption is 0% as we are not sending any requests to the server
@@ -122,10 +122,10 @@ Within a minute or so, we should see the higher CPU load by executing:
```shell
kubectl get hpa
```
+
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 305% / 50% 1 10 1 3m
-
```
Here, CPU consumption has increased to 305% of the request.
@@ -134,6 +134,7 @@ As a result, the deployment was resized to 7 replicas:
```shell
kubectl get deployment php-apache
```
+
```
NAME READY UP-TO-DATE AVAILABLE AGE
php-apache 7/7 7 7 19m
@@ -157,6 +158,7 @@ Then we will verify the result state (after a minute or so):
```shell
kubectl get hpa
```
+
```
NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m
@@ -165,6 +167,7 @@ php-apache Deployment/php-apache/scale 0% / 50% 1 10 1
```shell
kubectl get deployment php-apache
```
+
```
NAME READY UP-TO-DATE AVAILABLE AGE
php-apache 1/1 1 1 27m
@@ -176,8 +179,6 @@ Here CPU utilization dropped to 0, and so HPA autoscaled the number of replicas
Autoscaling the replicas may take a few minutes.
{{< /note >}}
-
-
## Autoscaling on multiple metrics and custom metrics
@@ -419,7 +420,8 @@ we can use `kubectl describe hpa`:
```shell
kubectl describe hpa cm-test
```
-```shell
+
+```
Name: cm-test
Namespace: prom
Labels:                            <none>
@@ -474,8 +476,7 @@ We will create the autoscaler by executing the following command:
```shell
kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml
```
+
```
horizontalpodautoscaler.autoscaling/php-apache created
```
-
-
diff --git a/static/_redirects b/static/_redirects
index 1e1301da7b48b..3258f96ea1a78 100644
--- a/static/_redirects
+++ b/static/_redirects
@@ -487,11 +487,6 @@
/docs/setup/independent/kubelet-integration/ /docs/setup/production-environment/tools/kubeadm/kubelet-integration/ 301
/docs/setup/custom-cloud/kops/ /docs/setup/production-environment/tools/kops/ 301
/docs/setup/custom-cloud/kubespray/ /docs/setup/production-environment/tools/kubespray/ 301
-/docs/setup/turnkey/aws/ /docs/setup/production-environment/turnkey/aws/ 301
-/docs/setup/turnkey/alibaba-cloud/ /docs/setup/production-environment/turnkey/alibaba-cloud/ 301
-/docs/setup/turnkey/azure/ /docs/setup/production-environment/turnkey/azure/ 301
-/docs/setup/turnkey/gce/ /docs/setup/production-environment/turnkey/gce/ 301
-/docs/setup/turnkey/icp/ /docs/setup/production-environment/turnkey/icp/ 301
/docs/setup/on-premises-vm/cloudstack/ /docs/setup/production-environment/on-premises-vm/cloudstack/ 301
/docs/setup/on-premises-vm/dcos/ /docs/setup/production-environment/on-premises-vm/dcos/ 301
/docs/setup/on-premises-vm/ovirt/ /docs/setup/production-environment/on-premises-vm/ovirt/ 301
From 1e1e016d17c42e9381793c53f43a7b1aa4a259e1 Mon Sep 17 00:00:00 2001
From: Karen Bradshaw
Date: Tue, 3 Nov 2020 15:39:55 -0500
Subject: [PATCH 47/50] testing landscape shortcode
---
.../production-environment/turnkey-solutions.md | 12 ++++++++++++
1 file changed, 12 insertions(+)
create mode 100644 content/en/docs/setup/production-environment/turnkey-solutions.md
diff --git a/content/en/docs/setup/production-environment/turnkey-solutions.md b/content/en/docs/setup/production-environment/turnkey-solutions.md
new file mode 100644
index 0000000000000..1a15f47de6bf6
--- /dev/null
+++ b/content/en/docs/setup/production-environment/turnkey-solutions.md
@@ -0,0 +1,12 @@
+---
+title: Turnkey Solutions
+content_type: concept
+weight: 30
+---
+
+
+This page provides a list of Kubernetes certified solution providers.
+
+
+
+{{< cncf-landscape helpers=true category="certified-kubernetes-hosted" >}}
From a8f07b6a1bc07057292958295d11451309519687 Mon Sep 17 00:00:00 2001
From: Karen Bradshaw
Date: Tue, 3 Nov 2020 16:24:20 -0500
Subject: [PATCH 48/50] modify width of iframe
---
.../docs/setup/production-environment/turnkey-solutions.md | 6 ++++--
layouts/shortcodes/cncf-landscape.html | 2 +-
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/setup/production-environment/turnkey-solutions.md b/content/en/docs/setup/production-environment/turnkey-solutions.md
index 1a15f47de6bf6..c62b41932aae0 100644
--- a/content/en/docs/setup/production-environment/turnkey-solutions.md
+++ b/content/en/docs/setup/production-environment/turnkey-solutions.md
@@ -1,11 +1,13 @@
---
-title: Turnkey Solutions
+title: Turnkey Cloud Solutions
content_type: concept
weight: 30
---
-This page provides a list of Kubernetes certified solution providers.
+This page provides a list of Kubernetes certified solution providers. From each
+provider page, you can learn how to install and set up production-ready
+clusters.
diff --git a/layouts/shortcodes/cncf-landscape.html b/layouts/shortcodes/cncf-landscape.html
index 455f2b5658b66..ee37678624cee 100644
--- a/layouts/shortcodes/cncf-landscape.html
+++ b/layouts/shortcodes/cncf-landscape.html
@@ -58,7 +58,7 @@
{{- end -}}
{{ if ( .Get "category" ) }}
-
+
{{ else }}
{{ end }}
From e0bad3430ad538742dae5c189e91bc76308133fa Mon Sep 17 00:00:00 2001
From: Arhell
Date: Wed, 4 Nov 2020 01:36:12 +0200
Subject: [PATCH 49/50] add shortcode for training page
---
content/zh/training/_index.html | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/content/zh/training/_index.html b/content/zh/training/_index.html
index ba3a2df60f3ac..b70f1c9cb3260 100644
--- a/content/zh/training/_index.html
+++ b/content/zh/training/_index.html
@@ -145,7 +145,6 @@
From 8c68745c0fe7bacc66a1806651eebee372cb3d03 Mon Sep 17 00:00:00 2001
From: Jakob
Date: Wed, 4 Nov 2020 17:00:05 +0100
Subject: [PATCH 50/50] improve structure of finalizer section in CRD guide
(#24851)
- It's now more precise regarding the format of finalizers (which have to
consist of a namespace and a name, separated by a forward slash, or they
will get rejected by the apiserver, with the exception of built-in ones)
- It's less repetitive in general
---
.../custom-resource-definitions.md | 24 +++++++++----------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md
index 110613c200290..386bc7a08baaf 100644
--- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md
+++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md
@@ -518,27 +518,25 @@ apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
finalizers:
- - finalizer.stable.example.com
+ - stable.example.com/finalizer
```
-Finalizers are arbitrary string values, that when present ensure that a hard delete
-of a resource is not possible while they exist.
+Identifiers of custom finalizers consist of a domain name, a forward slash, and the name of
+the finalizer. Any controller can add a finalizer to any object's list of finalizers.
The first delete request on an object with finalizers sets a value for the
`metadata.deletionTimestamp` field but does not delete it. Once this value is set,
-entries in the `finalizers` list can only be removed.
+entries in the `finalizers` list can only be removed. While any finalizers remain, it is also
+impossible to force the deletion of an object.
-When the `metadata.deletionTimestamp` field is set, controllers watching the object
-execute any finalizers they handle, by polling update requests for that
-object. When all finalizers have been executed, the resource is deleted.
+When the `metadata.deletionTimestamp` field is set, controllers watching the object execute any
+finalizers they handle and remove the finalizer from the list after they are done. It is the
+responsibility of each controller to remove its finalizer from the list.
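+
+For example, a controller could remove its finalizer with a JSON patch once its cleanup
+work is done (the object name and list index below are illustrative):
+
+```shell
+# remove this controller's finalizer, assumed here to be the first list entry
+kubectl patch crontab my-new-cron-object --type=json \
+  -p='[{"op": "remove", "path": "/metadata/finalizers/0"}]'
+```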
-The value of `metadata.deletionGracePeriodSeconds` controls the interval between
-polling updates.
+The value of `metadata.deletionGracePeriodSeconds` controls the interval between polling updates.
-It is the responsibility of each controller to remove its finalizer from the list.
-
-Kubernetes only finally deletes the object if the list of finalizers is empty,
-meaning all finalizers have been executed.
+Once the list of finalizers is empty, meaning all finalizers have been executed, the resource is
+deleted by Kubernetes.
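+
+To check which finalizers are still blocking deletion of an object (name illustrative):
+
+```shell
+kubectl get crontab my-new-cron-object -o jsonpath='{.metadata.finalizers}'
+```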
### Validation