
Commit

Merge branch 'master' into logstash_fluentd
chenopis committed Jul 10, 2017
2 parents 2e1ef20 + 0667180 commit 026a2f4
Showing 101 changed files with 306 additions and 286 deletions.
1 change: 0 additions & 1 deletion _data/overrides.yml
@@ -7,4 +7,3 @@ overrides:
- path: docs/admin/kube-scheduler.md
- path: docs/admin/kubelet.md
- copypath: k8s/federation/docs/api-reference/ docs/federation/
- - copypath: k8s/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml docs/getting-started-guides/fluentd-gcp.yaml
1 change: 1 addition & 0 deletions _data/reference.yml
@@ -6,6 +6,7 @@ toc:
- title: Using the API
section:
- docs/reference/api-overview.md
+ - docs/reference/client-libraries.md
- title: Accessing the API
section:
- docs/admin/accessing-the-api.md
8 changes: 5 additions & 3 deletions _data/tasks.yml
@@ -127,9 +127,11 @@ toc:
- docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
- docs/tasks/administer-cluster/safely-drain-node.md
- docs/tasks/administer-cluster/declare-network-policy.md
- - docs/tasks/administer-cluster/calico-network-policy.md
- - docs/tasks/administer-cluster/romana-network-policy.md
- - docs/tasks/administer-cluster/weave-network-policy.md
+ - title: Install Network Policy Provider
+   section:
+   - docs/tasks/administer-cluster/calico-network-policy.md
+   - docs/tasks/administer-cluster/romana-network-policy.md
+   - docs/tasks/administer-cluster/weave-network-policy.md
- docs/tasks/administer-cluster/change-pv-reclaim-policy.md
- docs/tasks/administer-cluster/configure-pod-disruption-budget.md
- docs/tasks/administer-cluster/limit-storage-consumption.md
17 changes: 5 additions & 12 deletions _includes/partner-script.js
@@ -39,7 +39,7 @@
type: 0,
name: 'Citrix',
logo: 'citrix',
- link: 'http://wercker.com/workflows/partners/kubernetes/',
+ link: 'https://www.citrix.com/networking/microservices.html',
blurb: 'Netscaler CPX gives app developers all the features they need to load balance their microservices and containerized apps with Kubernetes.'
},
{
@@ -67,7 +67,7 @@
type: 0,
name: 'Wercker',
logo: 'wercker',
- link: 'http://wercker.com/workflows/partners/kubernetes/',
+ link: 'http://www.wercker.com/integrations/kubernetes',
blurb: 'Wercker automates your build, test and deploy pipelines for launching containers and triggering rolling updates on your Kubernetes cluster. '
},
{
@@ -183,7 +183,7 @@
blurb: 'Aporeto makes cloud-native applications secure by default without impacting developer velocity and works at any scale, on any cloud.'
},
{
- type: 0,
+ type: 0,
name: 'Giant Swarm',
logo: 'giant_swarm',
link: 'https://giantswarm.io',
@@ -413,13 +413,6 @@
link: 'http://www.stackoverdrive.net/kubernetes-consulting/',
blurb: 'We are a devops consulting firm and we do a lot of work with containers, and Kubernetes is one of our go-to tools.'
},
- {
- type: 0,
- name: 'F5 Networks',
- logo: 'f5networks',
- link: 'https://f5.com/about-us/news/press-kit',
- blurb: 'Integration of our ADC services with Kubernetes'
- },
{
type: 0,
name: 'StackIQ, Inc.',
@@ -459,8 +452,8 @@
type: 1,
name: 'Lovable Tech',
logo: 'lovable',
- link: 'https://drive.google.com/file/d/0BxCnAyMK1pgBTUFOdEZsUndLa01xMGJYZWtUVmVOdldadk80/view?usp=sharing',
- blurb: ''
+ link: 'http://lovable.tech/',
+ blurb: 'World class engineers, designers, and strategic consultants helping you ship Lovable web & mobile technology.'
},
{
type: 0,
1 change: 1 addition & 0 deletions case-studies/index.html
@@ -92,6 +92,7 @@ <h3>Kubernetes Users</h3>
<a href="/case-studies/pearson/"><img src="/images/case_studies/pearson_logo.png" alt="Pearson"></a>
<a target="_blank" href="#" onclick="event.preventDefault(); kub.showVideo()"><img src="/images/case_studies/zulily_logo.png" alt="zulily"></a>
<a target="_blank" href="http://www.nextplatform.com/2015/11/12/inside-ebays-shift-to-kubernetes-and-containers-atop-openstack/"><img src="/images/case_studies/ebay_logo.png" alt="Ebay"></a>
+ <a target="_blank" href="http://blog.kubernetes.io/2017/02/inside-jd-com-shift-to-kubernetes-from-openstack.html"><img src="/images/case_studies/jd.png" alt="JD.COM"></a>
<a target="_blank" href="https://docs.google.com/a/google.com/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform" class="tell-your-story"><img src="/images/case_studies/story.png" alt="Tell your story"></a>
</div>
</main>
2 changes: 1 addition & 1 deletion docs/admin/cluster-large.md
@@ -81,7 +81,7 @@ Note that these master node sizes are currently only set at cluster startup time

To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).

- For [example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml):
+ For example:

```yaml
containers:
31 changes: 15 additions & 16 deletions docs/admin/extensible-admission-controllers.md
@@ -94,7 +94,7 @@ you need to:
### Deploy an initializer controller

You should deploy an initializer controller via the [deployment
- API](/docs/api-reference/v1.6/#deployment-v1beta1-apps).
+ API](/docs/api-reference/{{page.version}}/#deployment-v1beta1-apps).
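
For instance, a minimal sketch of such a deployment (the controller name and image below are hypothetical, not part of the original docs):

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: podimage-initializer
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: podimage-initializer
    spec:
      containers:
      # Your initializer controller image goes here.
      - name: controller
        image: example.com/podimage-initializer:latest
EOF
```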

### Configure initializers on the fly

@@ -112,19 +112,18 @@ apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
name: example-config
- spec:
-   initializers:
-     # the name needs to be fully qualified, i.e., containing at least two "."
-     - name: podimage.example.com
-       rules:
-         # apiGroups, apiVersion, resources all support wildcard "*".
-         # "*" cannot be mixed with non-wildcard.
-         - apiGroups:
-             - ""
-           apiVersions:
-             - v1
-           resources:
-             - pods
+ initializers:
+   # the name needs to be fully qualified, i.e., containing at least two "."
+   - name: podimage.example.com
+     rules:
+       # apiGroups, apiVersion, resources all support wildcard "*".
+       # "*" cannot be mixed with non-wildcard.
+       - apiGroups:
+           - ""
+         apiVersions:
+           - v1
+         resources:
+           - pods
```
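
Once saved to a file, the configuration can be registered like any other API object (a sketch; the filename is illustrative):

```shell
# Registers the InitializerConfiguration shown above.
kubectl apply -f example-initializer-configuration.yaml
```
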
Make sure that all expansions of the `<apiGroup, apiVersions, resources>` tuple
@@ -217,9 +216,9 @@
for an example deployment.

The webhook admission controller should be deployed via the
- [deployment API](/docs/api-reference/v1.6/#deployment-v1beta1-apps).
+ [deployment API](/docs/api-reference/{{page.version}}/#deployment-v1beta1-apps).
You also need to create a
- [service](/docs/api-reference/v1.6/#service-v1-core) as the
+ [service](/docs/api-reference/{{page.version}}/#service-v1-core) as the
front-end of the deployment.
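
A sketch of creating both objects, assuming a manifest adapted from the example repository (the names here are hypothetical):

```shell
# Create the webhook admission controller deployment, then expose it
# with a service that the apiserver can reach.
kubectl apply -f webhook-admission-controller-deployment.yaml
kubectl expose deployment webhook-admission-controller --port=443
```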

### Configure webhook admission controller on the fly
10 changes: 5 additions & 5 deletions docs/admin/federation/index.md
@@ -131,12 +131,12 @@ $ federation/deploy/deploy.sh deploy_federation
This spins up the federation control components as pods managed by
[`Deployments`](/docs/concepts/workloads/controllers/deployment/) on your
existing Kubernetes cluster. It also starts a
- [`type: LoadBalancer`](http://kubernetes.io/docs/user-guide/services/#type-loadbalancer)
- [`Service`](http://kubernetes.io/docs/user-guide/services/) for the
+ [`type: LoadBalancer`](/docs/concepts/services-networking/service/#type-loadbalancer)
+ [`Service`](/docs/concepts/services-networking/service/) for the
`federation-apiserver` and a
- [`PVC`](http://kubernetes.io/docs/user-guide/persistent-volumes/) backed
+ [`PVC`](/docs/concepts/storage/persistent-volumes/) backed
by a dynamically provisioned
- [`PV`](http://kubernetes.io/docs/user-guide/persistent-volumes/) for
+ [`PV`](/docs/concepts/storage/persistent-volumes/) for
`etcd`. All these components are created in the `federation` namespace.

You can verify that the pods are available by running the following
@@ -247,7 +247,7 @@ federation, and
in your federation DNS.

You can find more details about config maps in general at
- [config map](http://kubernetes.io/docs/user-guide/configmap/).
+ [config map](/docs/tasks/configure-pod-container/configmap/).

### Kubernetes 1.4 and earlier: Setting federations flag on kube-dns-rc

42 changes: 0 additions & 42 deletions docs/admin/kubeadm.md
@@ -485,23 +485,6 @@ EOF

Now `kubelet` is ready to use the specified CRI runtime, and you can continue with the `kubeadm init` and `kubeadm join` workflow to deploy a Kubernetes cluster.
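
For example (a sketch; the token and master address are placeholders, with the token printed by `kubeadm init`):

```shell
# On the master:
kubeadm init
# ...then, on each node you want to join:
kubeadm join --token <token> <master-ip>:<master-port>
```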

- ## Using custom certificates
- 
- By default kubeadm will generate all the certificates needed for a cluster to run.
- You can override this behaviour by providing your own certificates.
- 
- To do so, you must place them in whatever directory is specified by the
- `--cert-dir` flag or `CertificatesDir` configuration file key. By default this
- is `/etc/kubernetes/pki`.
- 
- If a given certificate and private key pair both exist, kubeadm will skip the
- generation step and those files will be validated and used for the prescribed
- use-case.
- 
- This means you can, for example, prepopulate `/etc/kubernetes/pki/ca.crt`
- and `/etc/kubernetes/pki/ca.key` with an existing CA, which then will be used
- for signing the rest of the certs.

## Running kubeadm without an internet connection

All of the control plane components run in Pods started by the kubelet and
@@ -673,31 +656,6 @@ export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,example.com,1
Remember to change `proxy_ip` and add a kube master node IP address to
`no_proxy`.

- ## Use Kubeadm with other CRI runtimes
- 
- Since the [Kubernetes 1.6 release](https://git.k8s.io/kubernetes/CHANGELOG.md#node-components-1), Kubernetes container runtimes have moved to using CRI by default. Currently, the built-in container runtime is Docker, enabled by the built-in `dockershim` in `kubelet`.
- 
- Using other CRI-based runtimes with kubeadm is very simple; the currently supported runtimes are:
- 
- - [cri-o](https://github.com/kubernetes-incubator/cri-o)
- - [frakti](https://github.com/kubernetes/frakti)
- - [rkt](https://github.com/kubernetes-incubator/rktlet)
- 
- After you have successfully installed `kubeadm` and `kubelet`, follow these two steps:
- 
- 1. Install the runtime shim on every node, following the installation documentation of the runtime shim project listed above.
- 
- 2. Configure kubelet to use the remote CRI runtime. Remember to change `RUNTIME_ENDPOINT` to your own value, such as `/var/run/{your_runtime}.sock`:
- 
- ```shell
- $ cat > /etc/systemd/system/kubelet.service.d/20-cri.conf <<EOF
- Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=$RUNTIME_ENDPOINT --feature-gates=AllAlpha=true"
- EOF
- $ systemctl daemon-reload
- ```
- 
- Now `kubelet` is ready to use the specified CRI runtime, and you can continue with the `kubeadm init` and `kubeadm join` workflow to deploy a Kubernetes cluster.

## Using custom certificates

By default kubeadm will generate all the certificates needed for a cluster to run.
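As described above, you can instead prepopulate the certificate directory; for instance, to reuse an existing CA (a sketch, assuming the default `/etc/kubernetes/pki` and illustrative source file names):

```shell
# Place an existing CA where kubeadm looks for it, then initialize;
# kubeadm will skip generating the CA and sign the remaining certs with it.
cp my-ca.crt /etc/kubernetes/pki/ca.crt
cp my-ca.key /etc/kubernetes/pki/ca.key
kubeadm init
```
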
2 changes: 1 addition & 1 deletion docs/admin/kubelet-tls-bootstrapping.md
@@ -38,7 +38,7 @@ name should be as depicted:
```

Add the `--token-auth-file=FILENAME` flag to the kube-apiserver command (in your systemd unit file perhaps) to enable the token file.
- See docs [here](http://kubernetes.io/docs/admin/authentication/#static-token-file) for further details.
+ See docs [here](/docs/admin/authentication/#static-token-file) for further details.
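
A sketch of what that looks like on the command line (the file path is illustrative):

```shell
kube-apiserver --token-auth-file=/etc/kubernetes/known_tokens.csv
# ...plus the rest of your existing kube-apiserver flags
```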

### Client certificate CA bundle

4 changes: 2 additions & 2 deletions docs/api-reference/v1.7/index.html
@@ -187,7 +187,7 @@ <h1 id="container-v1-core">Container v1 core</h1>
</tr>
<tr>
<td>securityContext <br /> <em><a href="#securitycontext-v1-core">SecurityContext</a></em></td>
- <td>Security options the pod should run with. More info: <a href="https://kubernetes.io/docs/concepts/policy/security-context/">https://kubernetes.io/docs/concepts/policy/security-context/</a> More info: <a href="https://git.k8s.io/community/contributors/design-proposals/security_context.md">https://git.k8s.io/community/contributors/design-proposals/security_context.md</a></td>
+ <td>Security options the pod should run with. More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a> More info: <a href="https://git.k8s.io/community/contributors/design-proposals/security_context.md">https://git.k8s.io/community/contributors/design-proposals/security_context.md</a></td>
</tr>
<tr>
<td>stdin <br /> <em>boolean</em></td>
@@ -72131,4 +72131,4 @@ <h1 id="userinfo-v1beta1-authentication">UserInfo v1beta1 authentication</h1>
<!--<script src="actions.js"></script>-->
<script src="tabvisibility.js"></script>
</body>
- </html>
+ </html>
4 changes: 2 additions & 2 deletions docs/concepts/architecture/nodes.md
@@ -70,7 +70,7 @@ The node condition is represented as a JSON object. For example, the following r

If the Status of the Ready condition is "Unknown" or "False" for longer than the `pod-eviction-timeout`, an argument passed to the [kube-controller-manager](/docs/admin/kube-controller-manager/), all of the Pods on the node are scheduled for deletion by the Node Controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on it. The decision to delete the pods cannot be communicated to the kubelet until it re-establishes communication with the apiserver. In the meantime, the pods which are scheduled for deletion may continue to run on the partitioned node.
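
One way to inspect a node's Ready condition (a sketch; the node name is hypothetical):

```shell
# Prints "True", "False", or "Unknown" for the Ready condition.
kubectl get node my-node -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```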

- In versions of Kubernetes prior to 1.5, the node controller would [force delete](/docs/user-guide/pods/#force-deletion-of-pods) these unreachable pods from the apiserver. However, in 1.5 and higher, the node controller does not force delete pods until it is confirmed that they have stopped running in the cluster. One can see these pods which may be running on an unreachable node as being in the "Terminating" or "Unknown" states. In cases where Kubernetes cannot deduce from the underlying infrastructure if a node has permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from Kubernetes causes all the Pod objects running on it to be deleted from the apiserver, freeing up their names.
+ In versions of Kubernetes prior to 1.5, the node controller would [force delete](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods) these unreachable pods from the apiserver. However, in 1.5 and higher, the node controller does not force delete pods until it is confirmed that they have stopped running in the cluster. One can see these pods which may be running on an unreachable node as being in the "Terminating" or "Unknown" states. In cases where Kubernetes cannot deduce from the underlying infrastructure if a node has permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from Kubernetes causes all the Pod objects running on it to be deleted from the apiserver, freeing up their names.

### Capacity

@@ -263,4 +263,4 @@ on each kubelet where you want to reserve resources.

Node is a top-level resource in the Kubernetes REST API. More details about the
API object can be found at: [Node API
- object](/docs/api-reference/v1.6/#node-v1-core).
+ object](/docs/api-reference/{{page.version}}/#node-v1-core).
4 changes: 2 additions & 2 deletions docs/concepts/cluster-administration/logging.md
@@ -49,7 +49,7 @@ $ kubectl logs counter
...
```

- You can use `kubectl logs` to retrieve logs from a previous instantiation of a container with `--previous` flag, in case the container has crashed. If your pod has multiple containers, you should specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/user-guide/kubectl/v1.6/#logs) for more details.
+ You can use `kubectl logs` to retrieve logs from a previous instantiation of a container with `--previous` flag, in case the container has crashed. If your pod has multiple containers, you should specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/user-guide/kubectl/{{page.version}}/#logs) for more details.
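
For example (the pod and container names other than `counter` are illustrative):

```shell
# Logs from the previous, crashed instantiation of the counter pod's container:
kubectl logs counter --previous
# Logs from a specific container in a multi-container pod:
kubectl logs my-pod -c my-container
```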

## Logging at the node level

@@ -77,7 +77,7 @@ As an example, you can find detailed information about how `kube-up.sh` sets
up logging for COS image on GCP in the corresponding [script]
[cosConfigureHelper].

- When you run [`kubectl logs`](/docs/user-guide/kubectl/v1.6/#logs) as in
+ When you run [`kubectl logs`](/docs/user-guide/kubectl/{{page.version}}/#logs) as in
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file, returning the contents in the response.
**Note:** currently, if some external system has performed the rotation,
12 changes: 6 additions & 6 deletions docs/concepts/cluster-administration/manage-deployment.md
@@ -259,7 +259,7 @@ my-nginx-2035384211-u3t6x 1/1 Running 0 23m fe

This outputs all "app=nginx" pods, with an additional label column of pods' tier (specified with `-L` or `--label-columns`).
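
A sketch of a command that produces output like the above, using the labels from this example:

```shell
# Lists pods labeled app=nginx, with their "tier" label as an extra column.
kubectl get pods -l app=nginx -L tier
```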

- For more information, please see [labels](/docs/user-guide/labels/) and [kubectl label](/docs/user-guide/kubectl/v1.6/#label) document.
+ For more information, please see [labels](/docs/user-guide/labels/) and [kubectl label](/docs/user-guide/kubectl/{{page.version}}/#label) document.

## Updating annotations

@@ -276,7 +276,7 @@ metadata:
...
```
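
A sketch of attaching such an annotation (the pod name is illustrative):

```shell
kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
```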

- For more information, please see [annotations](/docs/concepts/overview/working-with-objects/annotations/) and [kubectl annotate](/docs/user-guide/kubectl/v1.6/#annotate) document.
+ For more information, please see [annotations](/docs/concepts/overview/working-with-objects/annotations/) and [kubectl annotate](/docs/user-guide/kubectl/{{page.version}}/#annotate) document.

## Scaling your application

@@ -304,7 +304,7 @@ deployment "my-nginx" autoscaled

Now your nginx replicas will be scaled up and down as needed, automatically.
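
You can check on the autoscaler that was created (a sketch; the name follows the deployment in this example):

```shell
# Shows current/target utilization and the min/max replica bounds.
kubectl get hpa my-nginx
```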

- For more information, please see [kubectl scale](/docs/user-guide/kubectl/v1.6/#scale), [kubectl autoscale](/docs/user-guide/kubectl/v1.6/#autoscale) and [horizontal pod autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) document.
+ For more information, please see [kubectl scale](/docs/user-guide/kubectl/{{page.version}}/#scale), [kubectl autoscale](/docs/user-guide/kubectl/v1.6/#autoscale) and [horizontal pod autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) document.


## In-place updates of resources
@@ -315,7 +315,7 @@ Sometimes it's necessary to make narrow, non-disruptive updates to resources you

It is suggested to maintain a set of configuration files in source control (see [configuration as code](http://martinfowler.com/bliki/InfrastructureAsCode.html)),
so that they can be maintained and versioned along with the code for the resources they configure.
- Then, you can use [`kubectl apply`](/docs/user-guide/kubectl/v1.6/#apply) to push your configuration changes to the cluster.
+ Then, you can use [`kubectl apply`](/docs/user-guide/kubectl/{{page.version}}/#apply) to push your configuration changes to the cluster.

This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified.
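
For instance (a sketch; the manifest path is hypothetical):

```shell
kubectl apply -f ./my-nginx.yaml
```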

@@ -353,7 +353,7 @@ $ rm /tmp/nginx.yaml

This allows you to do more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables.
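
For example (a sketch, reusing the deployment name from this page):

```shell
KUBE_EDITOR="vim" kubectl edit deployment/my-nginx
```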

- For more information, please see [kubectl edit](/docs/user-guide/kubectl/v1.6/#edit) document.
+ For more information, please see [kubectl edit](/docs/user-guide/kubectl/{{page.version}}/#edit) document.

### kubectl patch

@@ -401,7 +401,7 @@ The patch is specified using json.

The system ensures that you don't clobber changes made by other users or components by confirming that the `resourceVersion` doesn't differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don't use your original configuration file as the source since additional fields most likely were set in the live state.
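
A minimal sketch, assuming the `my-nginx` deployment from earlier; this strategic merge patch bumps the replica count:

```shell
kubectl patch deployment my-nginx -p '{"spec":{"replicas":3}}'
```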

- For more information, please see [kubectl patch](/docs/user-guide/kubectl/v1.6/#patch) document.
+ For more information, please see [kubectl patch](/docs/user-guide/kubectl/{{page.version}}/#patch) document.

## Disruptive updates
