Official 1.14 Release Docs #13174

Merged: 50 commits, Mar 25, 2019
Changes from 18 commits

Commits:
4652684
Official documentation on Poseidon/Firmament, a new multi-scheduler s…
Dec 23, 2018
cefff92
Document timeout attribute for kms-plugin. (#12158)
immutableT Jan 23, 2019
de2e67e
Official documentation on Poseidon/Firmament, a new multi-scheduler …
Jan 29, 2019
b822184
Remove initializers from doc. It will be removed in 1.14 (#12331)
caesarxuchao Jan 29, 2019
e528300
kubeadm: Document CRI auto detection functionality (#12462)
rosti Feb 8, 2019
ce380cc
Resolved merge conflict removing initializers
jimangel Feb 11, 2019
df1b59b
Minor doc change for GAing Pod DNS Config (#12514)
MrHohn Feb 12, 2019
eb5aaa7
Graduate ExpandInUsePersistentVolumes feature to beta (#10574)
mlmhl Feb 13, 2019
1588645
Rename 2018-11-07-grpc-load-balancing-with-linkerd.md.md file (#12594)
makoscafee Feb 13, 2019
48fd1e5
Add dynamic percentage of node scoring to user docs (#12235)
bsalamat Feb 15, 2019
d22320f
delete special symbol (#12445)
hyponet Feb 17, 2019
582995a
Update documentation for VolumeSubpathEnvExpansion (#11843)
Feb 20, 2019
16b551c
Graduate Pod Priority and Preemption to GA (#12428)
bsalamat Feb 20, 2019
99d3d86
Added Instana links to the documentation (#12977)
noctarius Mar 7, 2019
9742867
Update kubectl plugins to stable (#12847)
soltysh Mar 11, 2019
5f049ec
documentation for CSI topology beta (#12889)
msau42 Mar 11, 2019
98b449d
Document changes to default RBAC discovery ClusterRole(Binding)s (#12…
dekkagaijin Mar 12, 2019
ead0a28
CSI raw block to beta (#12931)
bswartz Mar 12, 2019
b37e645
Change incorrect string raw to block (#12926)
bswartz Mar 15, 2019
ac99ed4
Update documentation on node OS/arch labels (#12976)
yujuhong Mar 15, 2019
f7aa166
local pv GA doc updates (#12915)
msau42 Mar 15, 2019
f18d212
Publish CRD OpenAPI Documentation (#12910)
roycaihw Mar 15, 2019
90d53c2
kubeadm: add document for upgrading from 1.13 to 1.14 (single CP and …
neolit123 Mar 15, 2019
ed5f459
fix bullet indentation (#13214)
roycaihw Mar 15, 2019
6e49749
mark PodReadinessGate GA (#12800)
freehan Mar 16, 2019
cc769cb
Update RuntimeClass documentation for beta (#13043)
tallclair Mar 16, 2019
ee19771
CSI ephemeral volume alpha documentation (#10934)
vladimirvivien Mar 16, 2019
092e288
update kubectl documentation (#12867)
Liujingfang1 Mar 16, 2019
07c4eb3
Documentation for Windows GMSA feature (#12936)
ddebroy Mar 16, 2019
21d60d1
HugePages graduated to GA (#13004)
derekwaynecarr Mar 16, 2019
b36d68a
Docs for node PID limiting (https://github.com/kubernetes/kubernetes/…
RobertKrawitz Mar 16, 2019
c037ab5
kubeadm: update the reference documentation for 1.14 (#12911)
neolit123 Mar 16, 2019
f50c664
kubeadm: update the 1.14 HA guide (#13191)
neolit123 Mar 16, 2019
61372fe
resolve conflicts for master
jimangel Mar 16, 2019
a0b5acd
fixed a few missed merge conflicts
jimangel Mar 16, 2019
92fd5d4
Admission Webhook new features doc (#12938)
mbohlool Mar 18, 2019
3bf2d15
Clarifications and fixes in GMSA doc (#13226)
ddebroy Mar 18, 2019
e15667a
RunAsGroup documentation for Progressing this to Beta (#12297)
krmayankk Mar 18, 2019
655aed9
start serverside-apply documentation (#13077)
kwiesmueller Mar 18, 2019
965a801
Document CSI update (#12928)
gnufied Mar 19, 2019
cb0b9d0
Overall docs for CSI Migration feature (#12935)
ddebroy Mar 19, 2019
f1ffe72
Windows documentation updates for 1.14 (#12929)
craiglpeters Mar 19, 2019
94c455a
add section on upgrading CoreDNS (#12909)
rajansandeep Mar 19, 2019
30915de
documentation for kubelet resource metrics endpoint (#12934)
dashpole Mar 20, 2019
8f68521
windows docs updates for 1.14 (#13279)
michmike Mar 20, 2019
ae5d409
update to windows docs for 1.14 (#13322)
michmike Mar 22, 2019
74319b6
Update intro-windows-in-kubernetes.md (#13344)
michmike Mar 23, 2019
f902f7d
server side apply followup (#13321)
kwiesmueller Mar 23, 2019
87c1d6a
resolving conflicts
jimangel Mar 23, 2019
3459d02
Update config.toml (#13365)
jimangel Mar 25, 2019
31 changes: 9 additions & 22 deletions content/en/docs/concepts/configuration/pod-priority-preemption.md
@@ -9,7 +9,7 @@ weight: 70

{{% capture overview %}}

{{< feature-state for_k8s_version="1.11" state="beta" >}}
{{< feature-state for_k8s_version="1.14" state="stable" >}}

[Pods](/docs/user-guide/pods) can have _priority_. Priority indicates the
importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the
@@ -19,8 +19,8 @@ pending Pod possible.
In Kubernetes 1.9 and later, Priority also affects scheduling order of Pods and
out-of-resource eviction ordering on the Node.

Pod priority and preemption are moved to beta since Kubernetes 1.11 and are
enabled by default in this release and later.
Pod priority and preemption graduated to beta in Kubernetes 1.11 and to GA in
Kubernetes 1.14. They have been enabled by default since 1.11.

In Kubernetes versions where Pod priority and preemption is still an alpha-level
feature, you need to explicitly enable it. To use these features in the older
@@ -34,6 +34,7 @@ Kubernetes Version | Priority and Preemption State | Enabled by default
1.9 | alpha | no
1.10 | alpha | no
1.11 | beta | yes
1.14 | GA | yes

{{< warning >}}In a cluster where not all users are trusted, a
malicious user could create pods at the highest possible priorities, causing
@@ -71,15 +72,15 @@ Pods.
## How to disable preemption

{{< note >}}
In Kubernetes 1.11, critical pods (except DaemonSet pods, which are
still scheduled by the DaemonSet controller) rely on scheduler preemption to be
scheduled when a cluster is under resource pressure. For this reason, you will
need to run an older version of Rescheduler if you decide to disable preemption.
More on this is provided below.
In Kubernetes 1.12+, critical pods rely on scheduler preemption to be scheduled
when a cluster is under resource pressure. For this reason, it is not
recommended to disable preemption.
{{< /note >}}

In Kubernetes 1.11 and later, preemption is controlled by a kube-scheduler flag
`disablePreemption`, which is set to `false` by default.
If you want to disable preemption despite the above note, you can set
`disablePreemption` to `true`.

This option is available in component configs only and is not available in
old-style command line options. Below is a sample component config to disable
@@ -96,20 +97,6 @@ algorithmSource:
disablePreemption: true
```
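
The opening lines of that sample are collapsed in this diff view. A complete version might look like the following sketch, assuming the same `componentconfig/v1alpha1` schema used in the scheduler-tuning example later in this PR; the `provider` value and the top-level placement of `disablePreemption` are assumptions, not confirmed by the diff:

```yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
algorithmSource:
  provider: DefaultProvider   # assumed default algorithm source
disablePreemption: true       # turn scheduler preemption off
```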

### Start an older version of Rescheduler in the cluster

When priority or preemption is disabled, we must run Rescheduler v0.3.1 (instead
of v0.4.0) to ensure that critical Pods are scheduled when nodes or cluster are
under resource pressure. Since critical Pod annotation is still supported in
this release, running Rescheduler should be enough and no other changes to the
configuration of Pods should be needed.

Rescheduler images can be found at:
[gcr.io/k8s-image-staging/rescheduler](http://gcr.io/k8s-image-staging/rescheduler).

In the code, changing the Rescheduler version back to v.0.3.1 is the reverse of
[this PR](https://github.com/kubernetes/kubernetes/pull/65454).

## PriorityClass

A PriorityClass is a non-namespaced object that defines a mapping from a
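
The rest of this section is collapsed in the diff. For orientation, a minimal PriorityClass might look like this sketch (name, value, and description are hypothetical; assumes the `scheduling.k8s.io/v1` API available as of 1.14):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority        # hypothetical class name
value: 1000000               # larger value = higher priority
globalDefault: false
description: "Use for workloads that may preempt lower-priority Pods."
```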
70 changes: 37 additions & 33 deletions content/en/docs/concepts/configuration/scheduler-perf-tuning.md
@@ -8,31 +8,39 @@ weight: 70

{{% capture overview %}}

{{< feature-state for_k8s_version="1.12" >}}
{{< feature-state for_k8s_version="1.14" state="beta" >}}

Kube-scheduler is the Kubernetes default scheduler. It is responsible for
placement of Pods on Nodes in a cluster. Nodes in a cluster that meet the
scheduling requirements of a Pod are called "feasible" Nodes for the Pod. The
scheduler finds feasible Nodes for a Pod and then runs a set of functions to
score the feasible Nodes and picks a Node with the highest score among the
feasible ones to run the Pod. The scheduler then notifies the API server about this
decision in a process called "Binding".
feasible ones to run the Pod. The scheduler then notifies the API server about
this decision in a process called "Binding".

{{% /capture %}}

{{% capture body %}}

## Percentage of Nodes to Score

Before Kubernetes 1.12, Kube-scheduler used to check the feasibility of all the
nodes in a cluster and then scored the feasible ones. Kubernetes 1.12 has a new
feature that allows the scheduler to stop looking for more feasible nodes once
it finds a certain number of them. This improves the scheduler's performance in
large clusters. The number is specified as a percentage of the cluster size and
is controlled by a configuration option called `percentageOfNodesToScore`. The
range should be between 1 and 100. Other values are considered as 100%. The
default value of this option is 50%. A cluster administrator can change this value by providing a
different value in the scheduler configuration. However, it may not be necessary to change this value.
Before Kubernetes 1.12, Kube-scheduler used to check the feasibility of all
nodes in a cluster and then scored the feasible ones. Kubernetes 1.12 added a
new feature that allows the scheduler to stop looking for more feasible nodes
once it finds a certain number of them. This improves the scheduler's
performance in large clusters. The number is specified as a percentage of the
cluster size. The percentage can be controlled by a configuration option called
`percentageOfNodesToScore`. The range should be between 1 and 100. Larger values
are considered as 100%. Zero is equivalent to not providing the config option.
Kubernetes 1.14 has logic to find the percentage of nodes to score based on the
size of the cluster if it is not specified in the configuration. It uses a
linear formula which yields 50% for a 100-node cluster. The formula yields 10%
for a 5000-node cluster. The lower bound for the automatic value is 5%. In other
words, the scheduler always scores at least 5% of the cluster no matter how
large the cluster is, unless the user provides the config option with a value
smaller than 5.

Review comment (Contributor): How about starting a new paragraph here?
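
The exact upstream implementation is not shown in this PR, but a Go sketch consistent with the numbers above (50% at 100 nodes, 10% at 5000 nodes, a 5% floor) could look like this; the integer-division form is an assumption:

```go
package scheduler

// adaptivePercentage sketches the heuristic described above: roughly
// 50% for small clusters, decreasing linearly to 10% at 5000 nodes,
// and never below the 5% floor.
func adaptivePercentage(numAllNodes int) int {
	p := 50 - numAllNodes/125 // integer division: 100 nodes -> 50, 5000 nodes -> 10
	if p < 5 {
		p = 5 // hard lower bound described in the text
	}
	return p
}
```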

Below is an example configuration that sets `percentageOfNodesToScore` to 50%.

```yaml
apiVersion: componentconfig/v1alpha1
@@ -45,41 +53,37 @@ algorithmSource:
percentageOfNodesToScore: 50
```

{{< note >}}
In clusters with zero or less than 50 feasible nodes, the
scheduler still checks all the nodes, simply because there are not enough
feasible nodes to stop the scheduler's search early.
{{< /note >}}
{{< note >}} In clusters with less than 50 feasible nodes, the scheduler still
checks all the nodes, simply because there are not enough feasible nodes to stop
the scheduler's search early. {{< /note >}}

**To disable this feature**, you can set `percentageOfNodesToScore` to 100.

### Tuning percentageOfNodesToScore

`percentageOfNodesToScore` must be a value between 1 and 100
with the default value of 50. There is also a hardcoded minimum value of 50
nodes which is applied internally. The scheduler tries to find at
least 50 nodes regardless of the value of `percentageOfNodesToScore`. This means
that changing this option to lower values in clusters with several hundred nodes
will not have much impact on the number of feasible nodes that the scheduler
tries to find. This is intentional as this option is unlikely to improve
performance noticeably in smaller clusters. In large clusters with over a 1000
nodes setting this value to lower numbers may show a noticeable performance
improvement.
`percentageOfNodesToScore` must be a value between 1 and 100 with the default
value being calculated based on the cluster size. There is also a hardcoded
minimum value of 50 nodes. This means that changing
this option to lower values in clusters with several hundred nodes will not have
much impact on the number of feasible nodes that the scheduler tries to find.
This is intentional as this option is unlikely to improve performance noticeably
in smaller clusters. In large clusters with over 1000 nodes, setting this value
to lower numbers may show a noticeable performance improvement.

An important note to consider when setting this value is that when a smaller
number of nodes in a cluster are checked for feasibility, some nodes are not
sent to be scored for a given Pod. As a result, a Node which could possibly
score a higher value for running the given Pod might not even be passed to the
scoring phase. This would result in a less than ideal placement of the Pod. For
this reason, the value should not be set to very low percentages. A general rule
of thumb is to never set the value to anything lower than 30. Lower values
of thumb is to never set the value to anything lower than 10. Lower values
should be used only when the scheduler's throughput is critical for your
application and the score of nodes is not important. In other words, you prefer
to run the Pod on any Node as long as it is feasible.

It is not recommended to lower this value from its default if your cluster has
only several hundred Nodes. It is unlikely to improve the scheduler's
performance significantly.
If your cluster has several hundred Nodes or fewer, we do not recommend lowering
the default value of this configuration option. It is unlikely to improve the
scheduler's performance significantly.

### How the scheduler iterates over Nodes

@@ -91,8 +95,8 @@ for running Pods, the scheduler iterates over the nodes in a round robin
fashion. You can imagine that Nodes are in an array. The scheduler starts from
the start of the array and checks feasibility of the nodes until it finds enough
Nodes as specified by `percentageOfNodesToScore`. For the next Pod, the
scheduler continues from the point in the Node array that it stopped at when checking
feasibility of Nodes for the previous Pod.
scheduler continues from the point in the Node array that it stopped at when
checking feasibility of Nodes for the previous Pod.
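
A Go sketch of that stateful traversal (hypothetical types and names; the real scheduler's bookkeeping is more involved):

```go
package scheduler

// nodeIterator sketches the round-robin traversal described above: each
// Pod's feasibility search resumes where the previous Pod's search
// stopped, wrapping around the node array.
type nodeIterator struct {
	nodes []string
	next  int // index where the next Pod's search begins
}

// findFeasible checks nodes starting at it.next until it has found
// `enough` feasible Nodes or has examined every node once.
func (it *nodeIterator) findFeasible(enough int, feasible func(string) bool) []string {
	var found []string
	for checked := 0; checked < len(it.nodes) && len(found) < enough; checked++ {
		node := it.nodes[it.next]
		it.next = (it.next + 1) % len(it.nodes) // persists across Pods
		if feasible(node) {
			found = append(found, node)
		}
	}
	return found
}
```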

If Nodes are in multiple zones, the scheduler iterates over Nodes in various
zones to ensure that Nodes from different zones are considered in the
26 changes: 15 additions & 11 deletions content/en/docs/concepts/services-networking/dns-pod-service.md
@@ -170,10 +170,10 @@ following pod-specific DNS policies. These policies are specified in the
for details on how DNS queries are handled in those cases.
- "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should
explicitly set its DNS policy "`ClusterFirstWithHostNet`".
- "`None`": A new option value introduced in Kubernetes v1.9 (Beta in v1.10). It
allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS
settings are supposed to be provided using the `dnsConfig` field in the Pod Spec.
See [DNS config](#dns-config) subsection below.
- "`None`": It allows a Pod to ignore DNS settings from the Kubernetes
environment. All DNS settings are supposed to be provided using the
`dnsConfig` field in the Pod Spec.
See [Pod's DNS config](#pod-s-dns-config) subsection below.

{{< note >}}
"Default" is not the default DNS policy. If `dnsPolicy` is not
@@ -205,13 +205,7 @@ spec:

### Pod's DNS Config

Kubernetes v1.9 introduces an Alpha feature (Beta in v1.10) that allows users more
control on the DNS settings for a Pod. This feature is enabled by default in v1.10.
To enable this feature in v1.9, the cluster administrator
needs to enable the `CustomPodDNS` feature gate on the apiserver and the kubelet,
for example, "`--feature-gates=CustomPodDNS=true,...`".
When the feature gate is enabled, users can set the `dnsPolicy` field of a Pod
to "`None`" and they can add a new field `dnsConfig` to a Pod Spec.
Pod's DNS Config allows users more control on the DNS settings for a Pod.

The `dnsConfig` field is optional and it can work with any `dnsPolicy` settings.
However, when a Pod's `dnsPolicy` is set to "`None`", the `dnsConfig` field has
@@ -257,6 +251,16 @@ search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
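
The Pod manifest this output belongs to is collapsed in the diff. A minimal sketch of a Pod that uses `dnsPolicy: "None"` with `dnsConfig` (all values hypothetical, not the exact manifest elided above) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example            # hypothetical Pod name
spec:
  containers:
    - name: test
      image: nginx
  dnsPolicy: "None"            # ignore cluster DNS settings entirely
  dnsConfig:
    nameservers:
      - 1.2.3.4                # hypothetical upstream resolver
    searches:
      - my.dns.search.suffix   # hypothetical search domain
    options:
      - name: ndots
        value: "2"
```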

### Feature availability

The availability of Pod DNS Config and DNS Policy "`None`" is shown below.

| k8s version | Feature support      |
| :---------: | :------------------: |
| 1.14        | Stable               |
| 1.10        | Beta (on by default) |
| 1.9         | Alpha                |

Review comment (Contributor): The feature states earlier in this PR were lower case, I wonder if we should copy that here?

{{% /capture %}}

{{% capture whatsnext %}}
Expand Down
5 changes: 5 additions & 0 deletions content/en/docs/concepts/storage/storage-classes.md
@@ -151,6 +151,11 @@ The following plugins support `WaitForFirstConsumer` with pre-created Persistent
* All of the above
* [Local](#local)

{{< feature-state state="beta" for_k8s_version="1.14" >}}
[CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning
and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver
to see its supported topology keys and examples. The `CSINodeInfo` feature gate must be enabled.
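
As an illustration, a StorageClass for a CSI driver using delayed binding might look like this sketch (the driver name and topology key are hypothetical; consult the specific driver's documentation for real values):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-wait-for-consumer
provisioner: csi.example.com             # hypothetical CSI driver name
volumeBindingMode: WaitForFirstConsumer  # delay binding until a Pod is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.example.com/zone   # driver-specific topology key
        values:
          - zone-a
```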

### Allowed Topologies

When a cluster operator specifies the `WaitForFirstConsumer` volume binding mode, it is no longer necessary
21 changes: 9 additions & 12 deletions content/en/docs/concepts/storage/volumes.md
@@ -1072,13 +1072,14 @@ spec:

### Using subPath with expanded environment variables

{{< feature-state for_k8s_version="v1.11" state="alpha" >}}
{{< feature-state for_k8s_version="v1.14" state="alpha" >}}


`subPath` directory names can also be constructed from Downward API environment variables.
Use the `subPathExpr` field to construct `subPath` directory names from Downward API environment variables.
Before you use this feature, you must enable the `VolumeSubpathEnvExpansion` feature gate.
The `subPath` and `subPathExpr` properties are mutually exclusive.

In this example, a Pod uses `subPath` to create a directory `pod1` within the hostPath volume `/var/log/pods`, using the pod name from the Downward API. The host directory `/var/log/pods/pod1` is mounted at `/logs` in the container.
In this example, a Pod uses `subPathExpr` to create a directory `pod1` within the hostPath volume `/var/log/pods`, using the pod name from the Downward API. The host directory `/var/log/pods/pod1` is mounted at `/logs` in the container.

```yaml
apiVersion: v1
@@ -1099,7 +1100,7 @@ spec:
volumeMounts:
- name: workdir1
mountPath: /logs
subPath: $(POD_NAME)
subPathExpr: $(POD_NAME)
restartPolicy: Never
volumes:
- name: workdir1
@@ -1216,20 +1217,16 @@ persistent volume:

#### CSI raw block volume support

{{< feature-state for_k8s_version="v1.11" state="alpha" >}}
{{< feature-state for_k8s_version="v1.14" state="beta" >}}

Starting with version 1.11, CSI introduced support for raw block volumes, which
relies on the raw block volume feature that was introduced in a previous version of
Kubernetes. This feature will make it possible for vendors with external CSI drivers to
implement raw block volumes support in Kubernetes workloads.

CSI block volume support is feature-gated and turned off by default. To run CSI with
block volume support enabled, a cluster administrator must enable the feature for each
Kubernetes component using the following feature gate flags:

```
--feature-gates=BlockVolume=true,CSIBlockVolume=true
```
CSI block volume support is feature-gated, but enabled by default. The two
feature gates which must be enabled for this feature are `BlockVolume` and
`CSIBlockVolume`.

Learn how to
[setup your PV/PVC with raw block volume support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support).
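
For orientation, a PVC requesting a raw block device from a CSI-backed StorageClass might look like this sketch (the class name is hypothetical; see the linked page for full PV/PVC examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block                  # request a raw block device instead of a filesystem
  storageClassName: csi-example-sc   # hypothetical CSI-backed StorageClass
  resources:
    requests:
      storage: 10Gi
```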
@@ -21,7 +21,7 @@ Some typical uses of a DaemonSet are:
- running a cluster storage daemon, such as `glusterd`, `ceph`, on each node.
- running a logs collection daemon on every node, such as `fluentd` or `logstash`.
- running a node monitoring daemon on every node, such as [Prometheus Node Exporter](
https://github.com/prometheus/node_exporter), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), Datadog agent, New Relic agent, Ganglia `gmond` or Instana agent.
https://github.com/prometheus/node_exporter), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), Datadog agent, New Relic agent, Ganglia `gmond`, or [Instana Agent](https://www.instana.com/supported-integrations/kubernetes-monitoring/).

In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon.
A more complex setup might use multiple DaemonSets for a single type of daemon, but with
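
The rest of that paragraph is collapsed in the diff. For context, a minimal DaemonSet running a node-monitoring agent on every node might look like this sketch (names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor
spec:
  selector:
    matchLabels:
      name: node-monitor
  template:
    metadata:
      labels:
        name: node-monitor
    spec:
      containers:
        - name: agent
          image: example.com/monitoring-agent:1.0   # hypothetical agent image
```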
@@ -335,13 +335,6 @@ Examples of information you might put here are:

In any case, the annotations are provided by the user and are not validated by Kubernetes in any way. In the future, if an annotation is determined to be widely useful, it may be promoted to a named field of ImageReviewSpec.

### Initializers (alpha) {#initializers}

The admission controller determines the initializers of a resource based on the existing
`InitializerConfiguration`s. It sets the pending initializers by modifying the
metadata of the resource to be created.
For more information, please check [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/).

### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology}

This admission controller denies any pod that defines `AntiAffinity` topology key other than