From dd56387dd12e409cd0cb2f42e6ed0de7bdd36b21 Mon Sep 17 00:00:00 2001 From: DanyC97 Date: Thu, 28 Feb 2019 16:02:04 +0000 Subject: [PATCH] Replace $ symbol in all english docs files --- .../cluster-administration/logging.md | 8 +- .../manage-deployment.md | 56 ++++++------- .../concepts/configuration/assign-pod-node.md | 2 +- .../manage-compute-resources-container.md | 4 +- .../en/docs/concepts/configuration/secret.md | 18 ++--- .../working-with-objects/field-selectors.md | 14 ++-- .../kubernetes-objects.md | 2 +- .../overview/working-with-objects/labels.md | 8 +- .../working-with-objects/namespaces.md | 14 ++-- .../connect-applications-service.md | 54 ++++++------- .../services-networking/dns-pod-service.md | 2 +- .../workloads/controllers/deployment.md | 78 +++++++++---------- .../controllers/jobs-run-to-completion.md | 6 +- .../controllers/replicationcontroller.md | 4 +- .../workloads/pods/init-containers.md | 14 ++-- .../access-authn-authz/authorization.md | 8 +- .../kubectl/docker-cli-to-kubectl.md | 32 ++++---- content/en/docs/reference/kubectl/jsonpath.md | 12 +-- content/en/docs/reference/kubectl/overview.md | 60 +++++++------- .../setup-tools/kubeadm/kubeadm-join.md | 12 +-- .../independent/create-cluster-kubeadm.md | 2 +- content/en/docs/setup/minikube.md | 12 +-- .../access-cluster.md | 6 +- .../configure-multiple-schedulers.md | 4 +- .../kubeadm/kubeadm-certs.md | 2 +- .../namespaces-walkthrough.md | 38 ++++----- .../tasks/administer-cluster/namespaces.md | 46 +++++------ .../cilium-network-policy.md | 2 +- .../storage-object-in-use-protection.md | 4 +- .../translate-compose-kubernetes.md | 2 +- .../debug-application.md | 14 ++-- .../logging-elasticsearch-kibana.md | 2 +- .../logging-stackdriver.md | 10 +-- .../tasks/extend-kubectl/kubectl-plugins.md | 26 +++---- .../inject-data-application/podpreset.md | 12 +-- .../job/automated-tasks-with-cron-jobs.md | 14 ++-- .../coarse-parallel-processing-work-queue.md | 8 +- .../tasks/run-application/configure-pdb.md | 6 +- .../horizontal-pod-autoscale-walkthrough.md | 22 +++--- .../rolling-update-replication-controller.md | 8 +- .../en/docs/tutorials/clusters/apparmor.md | 18 ++--- .../en/docs/tutorials/services/source-ip.md | 28 +++---- 42 files changed, 347 insertions(+), 347 deletions(-) diff --git a/content/en/docs/concepts/cluster-administration/logging.md b/content/en/docs/concepts/cluster-administration/logging.md index 7b540f7baabc7..1ab198f0e8e74 100644 --- a/content/en/docs/concepts/cluster-administration/logging.md +++ b/content/en/docs/concepts/cluster-administration/logging.md @@ -35,14 +35,14 @@ a container that writes some text to standard output once per second. 
To run this pod, use the following command: ```shell -$ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml +kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml pod/counter created ``` To fetch the logs, use the `kubectl logs` command, as follows: ```shell -$ kubectl logs counter +kubectl logs counter 0: Mon Jan 1 00:00:00 UTC 2001 1: Mon Jan 1 00:00:01 UTC 2001 2: Mon Jan 1 00:00:02 UTC 2001 @@ -178,7 +178,7 @@ Now when you run this pod, you can access each log stream separately by running the following commands: ```shell -$ kubectl logs counter count-log-1 +kubectl logs counter count-log-1 0: Mon Jan 1 00:00:00 UTC 2001 1: Mon Jan 1 00:00:01 UTC 2001 2: Mon Jan 1 00:00:02 UTC 2001 @@ -186,7 +186,7 @@ $ kubectl logs counter count-log-1 ``` ```shell -$ kubectl logs counter count-log-2 +kubectl logs counter count-log-2 Mon Jan 1 00:00:00 UTC 2001 INFO 0 Mon Jan 1 00:00:01 UTC 2001 INFO 1 Mon Jan 1 00:00:02 UTC 2001 INFO 2 diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md index 0288c73efaac2..55c8fba85797e 100644 --- a/content/en/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md @@ -26,7 +26,7 @@ Many applications require multiple resources to be created, such as a Deployment Multiple resources can be created the same way as a single resource: ```shell -$ kubectl create -f https://k8s.io/examples/application/nginx-app.yaml +kubectl create -f https://k8s.io/examples/application/nginx-app.yaml service/my-nginx-svc created deployment.apps/my-nginx created ``` @@ -36,13 +36,13 @@ The resources will be created in the order they appear in the file. Therefore, i `kubectl create` also accepts multiple `-f` arguments: ```shell -$ kubectl create -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +kubectl create -f https://k8s.io/examples/application/nginx/nginx-svc.yaml -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml ``` And a directory can be specified rather than or in addition to individual files: ```shell -$ kubectl create -f https://k8s.io/examples/application/nginx/ +kubectl create -f https://k8s.io/examples/application/nginx/ ``` `kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`. @@ -52,7 +52,7 @@ It is a recommended practice to put resources related to the same microservice o A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github: ```shell -$ kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml +kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml deployment.apps/my-nginx created ``` @@ -61,7 +61,7 @@ deployment.apps/my-nginx created Resource creation isn't the only operation that `kubectl` can perform in bulk. 
It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created: ```shell -$ kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml +kubectl delete -f https://k8s.io/examples/application/nginx-app.yaml deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` @@ -69,13 +69,13 @@ service "my-nginx-svc" deleted In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax: ```shell -$ kubectl delete deployments/my-nginx services/my-nginx-svc +kubectl delete deployments/my-nginx services/my-nginx-svc ``` For larger numbers of resources, you'll find it easier to specify the selector (label query) specified using `-l` or `--selector`, to filter resources by their labels: ```shell -$ kubectl delete deployment,services -l app=nginx +kubectl delete deployment,services -l app=nginx deployment.apps "my-nginx" deleted service "my-nginx-svc" deleted ``` @@ -83,7 +83,7 @@ service "my-nginx-svc" deleted Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`: ```shell -$ kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service) +kubectl get $(kubectl create -f docs/concepts/cluster-administration/nginx/ -o name | grep service) NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx-svc LoadBalancer 10.0.0.208 80/TCP 0s ``` @@ -108,14 +108,14 @@ project/k8s/development By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. If we had tried to create the resources in this directory using the following command, we would have encountered an error: ```shell -$ kubectl create -f project/k8s/development +kubectl create -f project/k8s/development error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin) ``` Instead, specify the `--recursive` or `-R` flag with the `--filename,-f` flag as such: ```shell -$ kubectl create -f project/k8s/development --recursive +kubectl create -f project/k8s/development --recursive configmap/my-config created deployment.apps/my-deployment created persistentvolumeclaim/my-pvc created @@ -126,7 +126,7 @@ The `--recursive` flag works with any operation that accepts the `--filename,-f` The `--recursive` flag also works when multiple `-f` arguments are provided: ```shell -$ kubectl create -f project/k8s/namespaces -f project/k8s/development --recursive +kubectl create -f project/k8s/namespaces -f project/k8s/development --recursive namespace/development created namespace/staging created configmap/my-config created @@ -169,8 +169,8 @@ and The labels allow us to slice and dice our resources along any dimension specified by a label: ```shell -$ kubectl create -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml -$ kubectl get pods -Lapp -Ltier -Lrole +kubectl create -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml +kubectl get pods -Lapp -Ltier -Lrole NAME READY STATUS RESTARTS AGE APP TIER ROLE guestbook-fe-4nlpb 1/1 Running 0 1m guestbook frontend guestbook-fe-ght6d 1/1 Running 0 1m guestbook frontend @@ -180,7 +180,7 @@ guestbook-redis-slave-2q2yf 1/1 Running 0 1m guestboo guestbook-redis-slave-qgazl 1/1 Running 0 1m guestbook backend slave my-nginx-divi2 1/1 Running 0 29m nginx my-nginx-o0ef1 1/1 Running 0 29m nginx -$ kubectl get pods -lapp=guestbook,role=slave 
+kubectl get pods -lapp=guestbook,role=slave NAME READY STATUS RESTARTS AGE guestbook-redis-slave-2q2yf 1/1 Running 0 3m guestbook-redis-slave-qgazl 1/1 Running 0 3m @@ -240,7 +240,7 @@ Sometimes existing pods and other resources need to be relabeled before creating For example, if you want to label all your nginx pods as frontend tier, simply run: ```shell -$ kubectl label pods -l app=nginx tier=fe +kubectl label pods -l app=nginx tier=fe pod/my-nginx-2035384211-j5fhi labeled pod/my-nginx-2035384211-u2c7e labeled pod/my-nginx-2035384211-u3t6x labeled @@ -250,7 +250,7 @@ This first filters all pods with the label "app=nginx", and then labels them wit To see the pods you just labeled, run: ```shell -$ kubectl get pods -l app=nginx -L tier +kubectl get pods -l app=nginx -L tier NAME READY STATUS RESTARTS AGE TIER my-nginx-2035384211-j5fhi 1/1 Running 0 23m fe my-nginx-2035384211-u2c7e 1/1 Running 0 23m fe @@ -266,8 +266,8 @@ For more information, please see [labels](/docs/concepts/overview/working-with-o Sometimes you would want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools, libraries, etc. This can be done with `kubectl annotate`. For example: ```shell -$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx' -$ kubectl get pods my-nginx-v4-9gw19 -o yaml +kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx' +kubectl get pods my-nginx-v4-9gw19 -o yaml apiversion: v1 kind: pod metadata: @@ -283,14 +283,14 @@ For more information, please see [annotations](/docs/concepts/overview/working-w When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to decrease the number of nginx replicas from 3 to 1, do: ```shell -$ kubectl scale deployment/my-nginx --replicas=1 +kubectl scale deployment/my-nginx --replicas=1 deployment.extensions/my-nginx scaled ``` Now you only have one pod managed by the deployment. ```shell -$ kubectl get pods -l app=nginx +kubectl get pods -l app=nginx NAME READY STATUS RESTARTS AGE my-nginx-2035384211-j5fhi 1/1 Running 0 30m ``` @@ -298,7 +298,7 @@ my-nginx-2035384211-j5fhi 1/1 Running 0 30m To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do: ```shell -$ kubectl autoscale deployment/my-nginx --min=1 --max=3 +kubectl autoscale deployment/my-nginx --min=1 --max=3 horizontalpodautoscaler.autoscaling/my-nginx autoscaled ``` @@ -320,7 +320,7 @@ Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-co This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified. 
```shell -$ kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml +kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml deployment.apps/my-nginx configured ``` @@ -339,16 +339,16 @@ To use apply, always create resource initially with either `kubectl apply` or `k Alternatively, you may also update resources with `kubectl edit`: ```shell -$ kubectl edit deployment/my-nginx +kubectl edit deployment/my-nginx ``` This is equivalent to first `get` the resource, edit it in text editor, and then `apply` the resource with the updated version: ```shell -$ kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml +kubectl get deployment my-nginx -o yaml > /tmp/nginx.yaml $ vi /tmp/nginx.yaml # do some edit, and then save the file -$ kubectl apply -f /tmp/nginx.yaml +kubectl apply -f /tmp/nginx.yaml deployment.apps/my-nginx configured $ rm /tmp/nginx.yaml ``` @@ -370,7 +370,7 @@ and In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file: ```shell -$ kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force +kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force deployment.apps/my-nginx deleted deployment.apps/my-nginx replaced ``` @@ -385,14 +385,14 @@ you should read [how to use `kubectl rolling-update`](/docs/tasks/run-applicatio Let's say you were running version 1.7.9 of nginx: ```shell -$ kubectl run my-nginx --image=nginx:1.7.9 --replicas=3 +kubectl run my-nginx --image=nginx:1.7.9 --replicas=3 deployment.apps/my-nginx created ``` To update to version 1.9.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`, with the kubectl commands we learned above. ```shell -$ kubectl edit deployment/my-nginx +kubectl edit deployment/my-nginx ``` That's it! The Deployment will declaratively update the deployed nginx application progressively behind the scene. It ensures that only a certain number of old replicas may be down while they are being updated, and only a certain number of new replicas may be created above the desired number of pods. To learn more details about it, visit [Deployment page](/docs/concepts/workloads/controllers/deployment/). diff --git a/content/en/docs/concepts/configuration/assign-pod-node.md b/content/en/docs/concepts/configuration/assign-pod-node.md index 70ec7f2938ea5..9596438fbc949 100644 --- a/content/en/docs/concepts/configuration/assign-pod-node.md +++ b/content/en/docs/concepts/configuration/assign-pod-node.md @@ -339,7 +339,7 @@ If we create the above two deployments, our three node cluster should look like As you can see, all the 3 replicas of the `web-server` are automatically co-located with the cache as expected. 
``` -$ kubectl get pods -o wide +kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE redis-cache-1450370735-6dzlj 1/1 Running 0 8m 10.192.4.2 kube-node-3 redis-cache-1450370735-j2j96 1/1 Running 0 8m 10.192.2.2 kube-node-1 diff --git a/content/en/docs/concepts/configuration/manage-compute-resources-container.md b/content/en/docs/concepts/configuration/manage-compute-resources-container.md index b05b2e508d20a..db8590453e246 100644 --- a/content/en/docs/concepts/configuration/manage-compute-resources-container.md +++ b/content/en/docs/concepts/configuration/manage-compute-resources-container.md @@ -189,7 +189,7 @@ unscheduled until a place can be found. An event is produced each time the scheduler fails to find a place for the Pod, like this: ```shell -$ kubectl describe pod frontend | grep -A 3 Events +kubectl describe pod frontend | grep -A 3 Events Events: FirstSeen LastSeen Count From Subobject PathReason Message 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others @@ -210,7 +210,7 @@ You can check node capacities and amounts allocated with the `kubectl describe nodes` command. For example: ```shell -$ kubectl describe nodes e2e-test-minion-group-4lw4 +kubectl describe nodes e2e-test-minion-group-4lw4 Name: e2e-test-minion-group-4lw4 [ ... lines removed for clarity ...] Capacity: diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index fa654f776e66c..ede2112f08f54 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -70,18 +70,18 @@ packages these files into a Secret and creates the object on the Apiserver. ```shell -$ kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt +kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt secret "db-user-pass" created ``` You can check that the secret was created like this: ```shell -$ kubectl get secrets +kubectl get secrets NAME TYPE DATA AGE db-user-pass Opaque 2 51s -$ kubectl describe secrets/db-user-pass +kubectl describe secrets/db-user-pass Name: db-user-pass Namespace: default Labels: @@ -139,7 +139,7 @@ data: Now create the Secret using [`kubectl create`](/docs/reference/generated/kubectl/kubectl-commands#create): ```shell -$ kubectl create -f ./secret.yaml +kubectl create -f ./secret.yaml secret "mysecret" created ``` @@ -250,7 +250,7 @@ the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section: ```shell -$ kubectl get secret mysecret -o yaml +kubectl get secret mysecret -o yaml apiVersion: v1 data: username: YWRtaW4= @@ -569,7 +569,7 @@ invalid keys that were skipped. The example shows a pod which refers to the default/mysecret that contains 2 invalid keys, 1badkey and 2alsobad. ```shell -$ kubectl get events +kubectl get events LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON 0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. ``` @@ -592,7 +592,7 @@ start until all the pod's volumes are mounted. 
Create a secret containing some ssh keys: ```shell -$ kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub +kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub ``` {{< caution >}} @@ -642,9 +642,9 @@ credentials. Make the secrets: ```shell -$ kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11 +kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11 secret "prod-db-secret" created -$ kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests +kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests secret "test-db-secret" created ``` {{< note >}} diff --git a/content/en/docs/concepts/overview/working-with-objects/field-selectors.md b/content/en/docs/concepts/overview/working-with-objects/field-selectors.md index 243eecce24d78..c35750de11617 100644 --- a/content/en/docs/concepts/overview/working-with-objects/field-selectors.md +++ b/content/en/docs/concepts/overview/working-with-objects/field-selectors.md @@ -12,15 +12,15 @@ _Field selectors_ let you [select Kubernetes resources](/docs/concepts/overview/ This `kubectl` command selects all Pods for which the value of the [`status.phase`](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) field is `Running`: ```shell -$ kubectl get pods --field-selector status.phase=Running +kubectl get pods --field-selector status.phase=Running ``` {{< note >}} Field selectors are essentially resource *filters*. By default, no selectors/filters are applied, meaning that all resources of the specified type are selected. This makes the following `kubectl` queries equivalent: ```shell -$ kubectl get pods -$ kubectl get pods --field-selector "" +kubectl get pods +kubectl get pods --field-selector "" ``` {{< /note >}} @@ -29,7 +29,7 @@ $ kubectl get pods --field-selector "" Supported field selectors vary by Kubernetes resource type. All resource types support the `metadata.name` and `metadata.namespace` fields. Using unsupported field selectors produces an error. For example: ```shell -$ kubectl get ingress --field-selector foo.bar=baz +kubectl get ingress --field-selector foo.bar=baz Error from server (BadRequest): Unable to find "ingresses" that match label selector "", field selector "foo.bar=baz": "foo.bar" is not a known field selector: only "metadata.name", "metadata.namespace" ``` @@ -38,7 +38,7 @@ Error from server (BadRequest): Unable to find "ingresses" that match label sele You can use the `=`, `==`, and `!=` operators with field selectors (`=` and `==` mean the same thing). This `kubectl` command, for example, selects all Kubernetes Services that aren't in the `default` namespace: ```shell -$ kubectl get services --field-selector metadata.namespace!=default +kubectl get services --field-selector metadata.namespace!=default ``` ## Chained selectors @@ -46,7 +46,7 @@ $ kubectl get services --field-selector metadata.namespace!=default As with [label](/docs/concepts/overview/working-with-objects/labels) and other selectors, field selectors can be chained together as a comma-separated list. 
This `kubectl` command selects all Pods for which the `status.phase` does not equal `Running` and the `spec.restartPolicy` field equals `Always`: ```shell -$ kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always +kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always ``` ## Multiple resource types @@ -54,5 +54,5 @@ $ kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Alw You use field selectors across multiple resource types. This `kubectl` command selects all Statefulsets and Services that are not in the `default` namespace: ```shell -$ kubectl get statefulsets,services --field-selector metadata.namespace!=default +kubectl get statefulsets,services --field-selector metadata.namespace!=default ``` diff --git a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md index 00d1cc65f8171..57d65343d0e8a 100644 --- a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -46,7 +46,7 @@ One way to create a Deployment using a `.yaml` file like the one above is to use in the `kubectl` command-line interface, passing the `.yaml` file as an argument. Here's an example: ```shell -$ kubectl create -f https://k8s.io/examples/application/deployment.yaml --record +kubectl create -f https://k8s.io/examples/application/deployment.yaml --record ``` The output is similar to this: diff --git a/content/en/docs/concepts/overview/working-with-objects/labels.md b/content/en/docs/concepts/overview/working-with-objects/labels.md index d737ef6f64516..d2858af8dbf55 100644 --- a/content/en/docs/concepts/overview/working-with-objects/labels.md +++ b/content/en/docs/concepts/overview/working-with-objects/labels.md @@ -139,25 +139,25 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje Both label selector styles can be used to list or watch resources via a REST client. For example, targeting `apiserver` with `kubectl` and using _equality-based_ one may write: ```shell -$ kubectl get pods -l environment=production,tier=frontend +kubectl get pods -l environment=production,tier=frontend ``` or using _set-based_ requirements: ```shell -$ kubectl get pods -l 'environment in (production),tier in (frontend)' +kubectl get pods -l 'environment in (production),tier in (frontend)' ``` As already mentioned _set-based_ requirements are more expressive.  For instance, they can implement the _OR_ operator on values: ```shell -$ kubectl get pods -l 'environment in (production, qa)' +kubectl get pods -l 'environment in (production, qa)' ``` or restricting negative matching via _exists_ operator: ```shell -$ kubectl get pods -l 'environment,environment notin (frontend)' +kubectl get pods -l 'environment,environment notin (frontend)' ``` ### Set references in API objects diff --git a/content/en/docs/concepts/overview/working-with-objects/namespaces.md b/content/en/docs/concepts/overview/working-with-objects/namespaces.md index eb10f1067b35d..562f7ef7f9b19 100644 --- a/content/en/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/en/docs/concepts/overview/working-with-objects/namespaces.md @@ -46,7 +46,7 @@ for namespaces](/docs/admin/namespaces). 
You can list the current namespaces in a cluster using: ```shell -$ kubectl get namespaces +kubectl get namespaces NAME STATUS AGE default Active 1d kube-system Active 1d @@ -66,8 +66,8 @@ To temporarily set the namespace for a request, use the `--namespace` flag. For example: ```shell -$ kubectl --namespace= run nginx --image=nginx -$ kubectl --namespace= get pods +kubectl --namespace= run nginx --image=nginx +kubectl --namespace= get pods ``` ### Setting the namespace preference @@ -76,9 +76,9 @@ You can permanently save the namespace for all subsequent kubectl commands in th context. ```shell -$ kubectl config set-context $(kubectl config current-context) --namespace= +kubectl config set-context $(kubectl config current-context) --namespace= # Validate it -$ kubectl config view | grep namespace: +kubectl config view | grep namespace: ``` ## Namespaces and DNS @@ -101,10 +101,10 @@ To see which Kubernetes resources are and aren't in a namespace: ```shell # In a namespace -$ kubectl api-resources --namespaced=true +kubectl api-resources --namespaced=true # Not in a namespace -$ kubectl api-resources --namespaced=false +kubectl api-resources --namespaced=false ``` {{% /capture %}} diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md index 796e66d30612d..33ab5815656c6 100644 --- a/content/en/docs/concepts/services-networking/connect-applications-service.md +++ b/content/en/docs/concepts/services-networking/connect-applications-service.md @@ -35,8 +35,8 @@ Create an nginx Pod, and note that it has a container port specification: This makes it accessible from any node in your cluster. Check the nodes the Pod is running on: ```shell -$ kubectl create -f ./run-my-nginx.yaml -$ kubectl get pods -l run=my-nginx -o wide +kubectl create -f ./run-my-nginx.yaml +kubectl get pods -l run=my-nginx -o wide NAME READY STATUS RESTARTS AGE IP NODE my-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd @@ -45,7 +45,7 @@ my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 Check your pods' IPs: ```shell -$ kubectl get pods -l run=my-nginx -o yaml | grep podIP +kubectl get pods -l run=my-nginx -o yaml | grep podIP podIP: 10.244.3.4 podIP: 10.244.2.5 ``` @@ -63,7 +63,7 @@ A Kubernetes Service is an abstraction which defines a logical set of Pods runni You can create a Service for your 2 nginx replicas with `kubectl expose`: ```shell -$ kubectl expose deployment/my-nginx +kubectl expose deployment/my-nginx service/my-nginx exposed ``` @@ -81,7 +81,7 @@ API object to see the list of supported fields in service definition. Check your Service: ```shell -$ kubectl get svc my-nginx +kubectl get svc my-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx ClusterIP 10.0.162.149 80/TCP 21s ``` @@ -95,7 +95,7 @@ Check the endpoints, and note that the IPs are the same as the Pods created in the first step: ```shell -$ kubectl describe svc my-nginx +kubectl describe svc my-nginx Name: my-nginx Namespace: default Labels: run=my-nginx @@ -108,7 +108,7 @@ Endpoints: 10.244.2.5:80,10.244.3.4:80 Session Affinity: None Events: -$ kubectl get ep my-nginx +kubectl get ep my-nginx NAME ENDPOINTS AGE my-nginx 10.244.2.5:80,10.244.3.4:80 1m ``` @@ -131,7 +131,7 @@ each active Service. This introduces an ordering problem. 
To see why, inspect the environment of your running nginx Pods (your Pod name will be different): ```shell -$ kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE +kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE KUBERNETES_SERVICE_HOST=10.0.0.1 KUBERNETES_SERVICE_PORT=443 KUBERNETES_SERVICE_PORT_HTTPS=443 @@ -147,9 +147,9 @@ replicas. This will give you scheduler-level Service spreading of your Pods variables: ```shell -$ kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2; +kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2; -$ kubectl get pods -l run=my-nginx -o wide +kubectl get pods -l run=my-nginx -o wide NAME READY STATUS RESTARTS AGE IP NODE my-nginx-3800858182-e9ihh 1/1 Running 0 5s 10.244.2.7 kubernetes-minion-ljyd my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 kubernetes-minion-905m @@ -158,7 +158,7 @@ my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 You may notice that the pods have different names, since they are killed and recreated. ```shell -$ kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE +kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE KUBERNETES_SERVICE_PORT=443 MY_NGINX_SERVICE_HOST=10.0.162.149 KUBERNETES_SERVICE_HOST=10.0.0.1 @@ -171,7 +171,7 @@ KUBERNETES_SERVICE_PORT_HTTPS=443 Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster: ```shell -$ kubectl get services kube-dns --namespace=kube-system +kubectl get services kube-dns --namespace=kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 8m ``` @@ -183,7 +183,7 @@ cluster addon), so you can talk to the Service from any pod in your cluster usin standard methods (e.g. gethostbyname). Let's run another curl application to test this: ```shell -$ kubectl run curl --image=radial/busyboxplus:curl -i --tty +kubectl run curl --image=radial/busyboxplus:curl -i --tty Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false Hit enter for command prompt ``` @@ -211,9 +211,9 @@ You can acquire all these from the [nginx https example](https://github.com/kube ```shell $ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json -$ kubectl create -f /tmp/secret.json +kubectl create -f /tmp/secret.json secret/nginxsecret created -$ kubectl get secrets +kubectl get secrets NAME TYPE DATA AGE default-token-il9rc kubernetes.io/service-account-token 1 1d nginxsecret Opaque 2 1m @@ -242,8 +242,8 @@ data: Now create the secrets using the file: ```shell -$ kubectl create -f nginxsecrets.yaml -$ kubectl get secrets +kubectl create -f nginxsecrets.yaml +kubectl get secrets NAME TYPE DATA AGE default-token-il9rc kubernetes.io/service-account-token 1 1d nginxsecret Opaque 2 1m @@ -263,13 +263,13 @@ Noteworthy points about the nginx-secure-app manifest: This is setup *before* the nginx server is started. ```shell -$ kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml +kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml ``` At this point you can reach the nginx server from any node. ```shell -$ kubectl get pods -o yaml | grep -i podip +kubectl get pods -o yaml | grep -i podip podIP: 10.244.3.5 node $ curl -k https://10.244.3.5 ... 
@@ -283,11 +283,11 @@ Let's test this from a pod (the same secret is being reused for simplicity, the {{< codenew file="service/networking/curlpod.yaml" >}} ```shell -$ kubectl create -f ./curlpod.yaml -$ kubectl get pods -l app=curlpod +kubectl create -f ./curlpod.yaml +kubectl get pods -l app=curlpod NAME READY STATUS RESTARTS AGE curl-deployment-1515033274-1410r 1/1 Running 0 1m -$ kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt +kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt ... Welcome to nginx! ... @@ -302,7 +302,7 @@ so your nginx HTTPS replica is ready to serve traffic on the internet if your node has a public IP. ```shell -$ kubectl get svc my-nginx -o yaml | grep nodePort -C 5 +kubectl get svc my-nginx -o yaml | grep nodePort -C 5 uid: 07191fb3-f61a-11e5-8ae5-42010af00002 spec: clusterIP: 10.0.162.149 @@ -320,7 +320,7 @@ spec: selector: run: my-nginx -$ kubectl get nodes -o yaml | grep ExternalIP -C 1 +kubectl get nodes -o yaml | grep ExternalIP -C 1 - address: 104.197.41.11 type: ExternalIP allocatable: @@ -338,8 +338,8 @@ $ curl https://: -k Let's now recreate the Service to use a cloud load balancer, just change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: ```shell -$ kubectl edit svc my-nginx -$ kubectl get svc my-nginx +kubectl edit svc my-nginx +kubectl get svc my-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx ClusterIP 10.0.162.149 162.222.184.144 80/TCP,81/TCP,82/TCP 21s @@ -357,7 +357,7 @@ output, in fact, so you'll need to do `kubectl describe service my-nginx` to see it. You'll see something like this: ```shell -$ kubectl describe service my-nginx +kubectl describe service my-nginx ... LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com ... diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index 5c519d479a904..3bc594ad44da2 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -251,7 +251,7 @@ options ndots:2 edns0 For IPv6 setup, search path and name server should be setup like this: ``` -$ kubectl exec -it dns-example -- cat /etc/resolv.conf +kubectl exec -it dns-example -- cat /etc/resolv.conf nameserver fd00:79:30::a search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5 diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index 7ec2c97600a7e..ad2b097d08b75 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -172,21 +172,21 @@ Suppose that you now want to update the nginx Pods to use the `nginx:1.9.1` imag instead of the `nginx:1.7.9` image. 
```shell -$ kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 +kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 image updated ``` Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`: ```shell -$ kubectl edit deployment.v1.apps/nginx-deployment +kubectl edit deployment.v1.apps/nginx-deployment deployment.apps/nginx-deployment edited ``` To see the rollout status, run: ```shell -$ kubectl rollout status deployment.v1.apps/nginx-deployment +kubectl rollout status deployment.v1.apps/nginx-deployment Waiting for rollout to finish: 2 out of 3 new replicas have been updated... deployment.apps/nginx-deployment successfully rolled out ``` @@ -194,7 +194,7 @@ deployment.apps/nginx-deployment successfully rolled out After the rollout succeeds, you may want to `get` the Deployment: ```shell -$ kubectl get deployments +kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 3 3 3 3 36s ``` @@ -207,7 +207,7 @@ You can run `kubectl get rs` to see that the Deployment updated the Pods by crea up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. ```shell -$ kubectl get rs +kubectl get rs NAME DESIRED CURRENT READY AGE nginx-deployment-1564180365 3 3 3 6s nginx-deployment-2035384211 0 0 0 36s @@ -216,7 +216,7 @@ nginx-deployment-2035384211 0 0 0 36s Running `get pods` should now show only the new Pods: ```shell -$ kubectl get pods +kubectl get pods NAME READY STATUS RESTARTS AGE nginx-deployment-1564180365-khku8 1/1 Running 0 14s nginx-deployment-1564180365-nacti 1/1 Running 0 14s @@ -237,7 +237,7 @@ new Pods have come up, and does not create new Pods until a sufficient number of It makes sure that number of available Pods is at least 2 and the number of total Pods is at most 4. ```shell -$ kubectl describe deployments +kubectl describe deployments Name: nginx-deployment Namespace: default CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000 @@ -338,14 +338,14 @@ rolled back. Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`: ```shell -$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true +kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true deployment.apps/nginx-deployment image updated ``` The rollout will be stuck. ```shell -$ kubectl rollout status deployment.v1.apps/nginx-deployment +kubectl rollout status deployment.v1.apps/nginx-deployment Waiting for rollout to finish: 1 out of 3 new replicas have been updated... ``` @@ -355,7 +355,7 @@ Press Ctrl-C to stop the above rollout status watch. For more information on stu You will see that the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) is 2, and new replicas (nginx-deployment-3066724191) is 1. ```shell -$ kubectl get rs +kubectl get rs NAME DESIRED CURRENT READY AGE nginx-deployment-1564180365 3 3 3 25s nginx-deployment-2035384211 0 0 0 36s @@ -365,7 +365,7 @@ nginx-deployment-3066724191 1 1 0 6s Looking at the Pods created, you will see that 1 Pod created by new ReplicaSet is stuck in an image pull loop. 
```shell -$ kubectl get pods +kubectl get pods NAME READY STATUS RESTARTS AGE nginx-deployment-1564180365-70iae 1/1 Running 0 25s nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s @@ -380,7 +380,7 @@ Kubernetes by default sets the value to 25%. {{< /note >}} ```shell -$ kubectl describe deployment +kubectl describe deployment Name: nginx-deployment Namespace: default CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700 @@ -427,7 +427,7 @@ To fix this, you need to rollback to a previous revision of Deployment that is s First, check the revisions of this deployment: ```shell -$ kubectl rollout history deployment.v1.apps/nginx-deployment +kubectl rollout history deployment.v1.apps/nginx-deployment deployments "nginx-deployment" REVISION CHANGE-CAUSE 1 kubectl create --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true @@ -443,7 +443,7 @@ REVISION CHANGE-CAUSE To further see the details of each revision, run: ```shell -$ kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2 +kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2 deployments "nginx-deployment" revision 2 Labels: app=nginx pod-template-hash=1159050644 @@ -464,14 +464,14 @@ deployments "nginx-deployment" revision 2 Now you've decided to undo the current rollout and rollback to the previous revision: ```shell -$ kubectl rollout undo deployment.v1.apps/nginx-deployment +kubectl rollout undo deployment.v1.apps/nginx-deployment deployment.apps/nginx-deployment ``` Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`: ```shell -$ kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2 +kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2 deployment.apps/nginx-deployment ``` @@ -481,11 +481,11 @@ The Deployment is now rolled back to a previous stable revision. As you can see, for rolling back to revision 2 is generated from Deployment controller. ```shell -$ kubectl get deployment nginx-deployment +kubectl get deployment nginx-deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 3 3 3 3 30m -$ kubectl describe deployment nginx-deployment +kubectl describe deployment nginx-deployment Name: nginx-deployment Namespace: default CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 @@ -534,7 +534,7 @@ Events: You can scale a Deployment by using the following command: ```shell -$ kubectl scale deployment.v1.apps/nginx-deployment --replicas=10 +kubectl scale deployment.v1.apps/nginx-deployment --replicas=10 deployment.apps/nginx-deployment scaled ``` @@ -543,7 +543,7 @@ in your cluster, you can setup an autoscaler for your Deployment and choose the Pods you want to run based on the CPU utilization of your existing Pods. ```shell -$ kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80 +kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80 deployment.apps/nginx-deployment scaled ``` @@ -557,7 +557,7 @@ ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *p For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surge)=3, and [maxUnavailable](#max-unavailable)=2. ```shell -$ kubectl get deploy +kubectl get deploy NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 10 10 10 10 50s ``` @@ -565,7 +565,7 @@ nginx-deployment 10 10 10 10 50s You update to a new image which happens to be unresolvable from inside the cluster. 
```shell -$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag +kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag deployment.apps/nginx-deployment image updated ``` @@ -573,7 +573,7 @@ The image update starts a new rollout with ReplicaSet nginx-deployment-198919819 `maxUnavailable` requirement that you mentioned above. ```shell -$ kubectl get rs +kubectl get rs NAME DESIRED CURRENT READY AGE nginx-deployment-1989198191 5 5 0 9s nginx-deployment-618515232 8 8 8 1m @@ -591,10 +591,10 @@ new ReplicaSet. The rollout process should eventually move all replicas to the n the new replicas become healthy. ```shell -$ kubectl get deploy +kubectl get deploy NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 15 18 7 8 7m -$ kubectl get rs +kubectl get rs NAME DESIRED CURRENT READY AGE nginx-deployment-1989198191 7 7 0 7m nginx-deployment-618515232 11 11 11 7m @@ -608,10 +608,10 @@ apply multiple fixes in between pausing and resuming without triggering unnecess For example, with a Deployment that was just created: ```shell -$ kubectl get deploy +kubectl get deploy NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx 3 3 3 3 1m -$ kubectl get rs +kubectl get rs NAME DESIRED CURRENT READY AGE nginx-2142116321 3 3 3 1m ``` @@ -619,26 +619,26 @@ nginx-2142116321 3 3 3 1m Pause by running the following command: ```shell -$ kubectl rollout pause deployment.v1.apps/nginx-deployment +kubectl rollout pause deployment.v1.apps/nginx-deployment deployment.apps/nginx-deployment paused ``` Then update the image of the Deployment: ```shell -$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 +kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 deployment.apps/nginx-deployment image updated ``` Notice that no new rollout started: ```shell -$ kubectl rollout history deployment.v1.apps/nginx-deployment +kubectl rollout history deployment.v1.apps/nginx-deployment deployments "nginx" REVISION CHANGE-CAUSE 1 -$ kubectl get rs +kubectl get rs NAME DESIRED CURRENT READY AGE nginx-2142116321 3 3 3 2m ``` @@ -646,7 +646,7 @@ nginx-2142116321 3 3 3 2m You can make as many updates as you wish, for example, update the resources that will be used: ```shell -$ kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi +kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi deployment.apps/nginx-deployment resource requirements updated ``` @@ -656,9 +656,9 @@ the Deployment will not have any effect as long as the Deployment is paused. Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates: ```shell -$ kubectl rollout resume deployment.v1.apps/nginx-deployment +kubectl rollout resume deployment.v1.apps/nginx-deployment deployment.apps/nginx-deployment resumed -$ kubectl get rs -w +kubectl get rs -w NAME DESIRED CURRENT READY AGE nginx-2142116321 2 2 2 2m nginx-3926361531 2 2 0 6s @@ -675,7 +675,7 @@ nginx-2142116321 0 1 1 2m nginx-2142116321 0 0 0 2m nginx-3926361531 3 3 3 20s ^C -$ kubectl get rs +kubectl get rs NAME DESIRED CURRENT READY AGE nginx-2142116321 0 0 0 2m nginx-3926361531 3 3 3 28s @@ -714,7 +714,7 @@ You can check if a Deployment has completed by using `kubectl rollout status`. I successfully, `kubectl rollout status` returns a zero exit code. 
```shell -$ kubectl rollout status deployment.v1.apps/nginx-deployment +kubectl rollout status deployment.v1.apps/nginx-deployment Waiting for rollout to finish: 2 of 3 updated replicas are available... deployment.apps/nginx-deployment successfully rolled out $ echo $? @@ -742,7 +742,7 @@ The following `kubectl` command sets the spec with `progressDeadlineSeconds` to lack of progress for a Deployment after 10 minutes: ```shell -$ kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}' +kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}' deployment.apps/nginx-deployment patched ``` Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following @@ -771,7 +771,7 @@ due to any other kind of error that can be treated as transient. For example, le insufficient quota. If you describe the Deployment you will notice the following section: ```shell -$ kubectl describe deployment nginx-deployment +kubectl describe deployment nginx-deployment <...> Conditions: Type Status Reason @@ -847,7 +847,7 @@ You can check if a Deployment has failed to progress by using `kubectl rollout s returns a non-zero exit code if the Deployment has exceeded the progression deadline. ```shell -$ kubectl rollout status deployment.v1.apps/nginx-deployment +kubectl rollout status deployment.v1.apps/nginx-deployment Waiting for rollout to finish: 2 out of 3 new replicas have been updated... error: deployment "nginx" exceeded its progress deadline $ echo $? diff --git a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md index 6a6a2275011b6..a12b4583c5303 100644 --- a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -39,14 +39,14 @@ It takes around 10s to complete. You can run the example with this command: ```shell -$ kubectl create -f https://k8s.io/examples/controllers/job.yaml +kubectl create -f https://k8s.io/examples/controllers/job.yaml job "pi" created ``` Check on the status of the Job with `kubectl`: ```shell -$ kubectl describe jobs/pi +kubectl describe jobs/pi Name: pi Namespace: default Selector: controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495 @@ -94,7 +94,7 @@ that just gets the name from each Pod in the returned list. 
View the standard output of one of the pods: ```shell -$ kubectl logs $pods +kubectl logs $pods 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 ``` diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md index daf0dfd59a784..7bdd40103b2c3 100644 --- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md @@ -55,14 +55,14 @@ This example ReplicationController config runs three copies of the nginx web ser Run the example job by downloading the example file and then running this command: ```shell -$ kubectl create -f https://k8s.io/examples/controllers/replication.yaml +kubectl create -f https://k8s.io/examples/controllers/replication.yaml replicationcontroller/nginx created ``` Check on the status of the ReplicationController using this command: ```shell -$ kubectl describe replicationcontrollers/nginx +kubectl describe replicationcontrollers/nginx Name: nginx Namespace: default Selector: app=nginx diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md index a6c3e64b112d1..ebf9b7d5455ab 100644 --- a/content/en/docs/concepts/workloads/pods/init-containers.md +++ b/content/en/docs/concepts/workloads/pods/init-containers.md @@ -180,12 +180,12 @@ spec: This Pod can be started and debugged with the following commands: ```shell -$ kubectl create -f myapp.yaml +kubectl create -f myapp.yaml pod/myapp-pod created 
-$ kubectl get -f myapp.yaml +kubectl get -f myapp.yaml NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 6m -$ kubectl describe -f myapp.yaml +kubectl describe -f myapp.yaml Name: myapp-pod Namespace: default [...] @@ -218,18 +218,18 @@ Events: 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image "busybox" 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container with docker id 5ced34a04634; Security:[seccomp=unconfined] 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container with docker id 5ced34a04634 -$ kubectl logs myapp-pod -c init-myservice # Inspect the first init container -$ kubectl logs myapp-pod -c init-mydb # Inspect the second init container +kubectl logs myapp-pod -c init-myservice # Inspect the first init container +kubectl logs myapp-pod -c init-mydb # Inspect the second init container ``` Once we start the `mydb` and `myservice` services, we can see the Init Containers complete and the `myapp-pod` is created: ```shell -$ kubectl create -f services.yaml +kubectl create -f services.yaml service/myservice created service/mydb created -$ kubectl get -f myapp.yaml +kubectl get -f myapp.yaml NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 9m ``` diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md index e955053c63374..36707124056e7 100644 --- a/content/en/docs/reference/access-authn-authz/authorization.md +++ b/content/en/docs/reference/access-authn-authz/authorization.md @@ -90,9 +90,9 @@ a given action, and works regardless of the authorization mode used. ```bash -$ kubectl auth can-i create deployments --namespace dev +kubectl auth can-i create deployments --namespace dev yes -$ kubectl auth can-i create deployments --namespace prod +kubectl auth can-i create deployments --namespace prod no ``` @@ -100,7 +100,7 @@ Administrators can combine this with [user impersonation](/docs/reference/access to determine what action other users can perform. ```bash -$ kubectl auth can-i list secrets --namespace dev --as dave +kubectl auth can-i list secrets --namespace dev --as dave no ``` @@ -116,7 +116,7 @@ These APIs can be queried by creating normal Kubernetes resources, where the res field of the returned object is the result of the query. 
```bash -$ kubectl create -f - -o yaml << EOF +kubectl create -f - -o yaml << EOF apiVersion: authorization.k8s.io/v1 kind: SelfSubjectAccessReview spec: diff --git a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md index 0925992ef8364..59945ed5947f1 100644 --- a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md +++ b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md @@ -31,7 +31,7 @@ kubectl: ```shell # start the pod running nginx -$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster" +kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster" deployment "nginx-app" created ``` @@ -41,7 +41,7 @@ deployment "nginx-app" created ```shell # expose a port through with a service -$ kubectl expose deployment nginx-app --port=80 --name=nginx-http +kubectl expose deployment nginx-app --port=80 --name=nginx-http service "nginx-http" exposed ``` @@ -75,7 +75,7 @@ CONTAINER ID IMAGE COMMAND CREATED kubectl: ```shell -$ kubectl get po +kubectl get po NAME READY STATUS RESTARTS AGE nginx-app-8df569cb7-4gd89 1/1 Running 0 3m ubuntu 0/1 Completed 0 20s @@ -99,11 +99,11 @@ $ docker attach 55c103fa1296 kubectl: ```shell -$ kubectl get pods +kubectl get pods NAME READY STATUS RESTARTS AGE nginx-app-5jyvm 1/1 Running 0 10m -$ kubectl attach -it nginx-app-5jyvm +kubectl attach -it nginx-app-5jyvm ... ``` @@ -127,11 +127,11 @@ $ docker exec 55c103fa1296 cat /etc/hostname kubectl: ```shell -$ kubectl get po +kubectl get po NAME READY STATUS RESTARTS AGE nginx-app-5jyvm 1/1 Running 0 10m -$ kubectl exec nginx-app-5jyvm -- cat /etc/hostname +kubectl exec nginx-app-5jyvm -- cat /etc/hostname nginx-app-5jyvm ``` @@ -148,7 +148,7 @@ $ docker exec -ti 55c103fa1296 /bin/sh kubectl: ```shell -$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh +kubectl exec -ti nginx-app-5jyvm -- /bin/sh # exit ``` @@ -170,7 +170,7 @@ $ docker logs -f a9e kubectl: ```shell -$ kubectl logs -f nginx-app-zibvs +kubectl logs -f nginx-app-zibvs 10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-" 10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-" ``` @@ -178,7 +178,7 @@ $ kubectl logs -f nginx-app-zibvs There is a slight difference between pods and containers; by default pods do not terminate if their processes exit. Instead the pods restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. 
To see the output from a previous run in Kubernetes, do this: ```shell -$ kubectl logs --previous nginx-app-zibvs +kubectl logs --previous nginx-app-zibvs 10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-" 10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-" ``` @@ -206,18 +206,18 @@ a9ec34d98787 kubectl: ```shell -$ kubectl get deployment nginx-app +kubectl get deployment nginx-app NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-app 1 1 1 1 2m -$ kubectl get po -l run=nginx-app +kubectl get po -l run=nginx-app NAME READY STATUS RESTARTS AGE nginx-app-2883164633-aklf7 1/1 Running 0 2m -$ kubectl delete deployment nginx-app +kubectl delete deployment nginx-app deployment "nginx-app" deleted -$ kubectl get po -l run=nginx-app +kubectl get po -l run=nginx-app # Return nothing ``` @@ -252,7 +252,7 @@ OS/Arch (server): linux/amd64 kubectl: ```shell -$ kubectl version +kubectl version Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4335", GitCommit:"9b77fed11a9843ce3780f70dd251e92901c43072", GitTreeState:"dirty", BuildDate:"2017-08-29T20:32:58Z", OpenPaasKubernetesVersion:"v1.03.02", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4335", GitCommit:"9b77fed11a9843ce3780f70dd251e92901c43072", GitTreeState:"dirty", BuildDate:"2017-08-29T20:32:58Z", OpenPaasKubernetesVersion:"v1.03.02", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} ``` @@ -286,7 +286,7 @@ WARNING: No swap limit support kubectl: ```shell -$ kubectl cluster-info +kubectl cluster-info Kubernetes master is running at https://108.59.85.141 KubeDNS is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/kube-dns/proxy kubernetes-dashboard is running at https://108.59.85.141/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy diff --git a/content/en/docs/reference/kubectl/jsonpath.md b/content/en/docs/reference/kubectl/jsonpath.md index 74fcea92fbb09..931410f69ceda 100644 --- a/content/en/docs/reference/kubectl/jsonpath.md +++ b/content/en/docs/reference/kubectl/jsonpath.md @@ -81,11 +81,11 @@ range, end | iterate list | {range .items[*]}[{.metadata.nam Examples using `kubectl` and JSONPath expressions: ```shell -$ kubectl get pods -o json -$ kubectl get pods -o=jsonpath='{@}' -$ kubectl get pods -o=jsonpath='{.items[0]}' -$ kubectl get pods -o=jsonpath='{.items[0].metadata.name}' -$ kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' +kubectl get pods -o json +kubectl get pods -o=jsonpath='{@}' +kubectl get pods -o=jsonpath='{.items[0]}' +kubectl get pods -o=jsonpath='{.items[0].metadata.name}' +kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' ``` On Windows, you must _double_ quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. 
For example: @@ -95,4 +95,4 @@ C:\> kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.stat C:\> kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"}{.status.startTime}{\"\n\"}{end}" ``` -{{% /capture %}} \ No newline at end of file +{{% /capture %}} diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index 71befbd9664b8..7907d02d9b13f 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -176,7 +176,7 @@ Output format | Description In this example, the following command outputs the details for a single pod as a YAML formatted object: ```shell -$ kubectl get pod web-pod-13je7 -o=yaml +kubectl get pod web-pod-13je7 -o=yaml ``` Remember: See the [kubectl](/docs/user-guide/kubectl/) reference documentation for details about which output format is supported by each command. @@ -190,13 +190,13 @@ To define custom columns and output only the details that you want into a table, Inline: ```shell -$ kubectl get pods -o=custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion +kubectl get pods -o=custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion ``` Template file: ```shell -$ kubectl get pods -o=custom-columns-file=template.txt +kubectl get pods -o=custom-columns-file=template.txt ``` where the `template.txt` file contains: @@ -251,7 +251,7 @@ kubectl [command] [TYPE] [NAME] --sort-by= To print a list of pods sorted by name, you run: ```shell -$ kubectl get pods --sort-by=.metadata.name +kubectl get pods --sort-by=.metadata.name ``` ## Examples: Common operations @@ -262,52 +262,52 @@ Use the following set of examples to help you familiarize yourself with running ```shell # Create a service using the definition in example-service.yaml. -$ kubectl create -f example-service.yaml +kubectl create -f example-service.yaml # Create a replication controller using the definition in example-controller.yaml. -$ kubectl create -f example-controller.yaml +kubectl create -f example-controller.yaml # Create the objects that are defined in any .yaml, .yml, or .json file within the directory. -$ kubectl create -f +kubectl create -f ``` `kubectl get` - List one or more resources. ```shell # List all pods in plain-text output format. -$ kubectl get pods +kubectl get pods # List all pods in plain-text output format and include additional information (such as node name). -$ kubectl get pods -o wide +kubectl get pods -o wide # List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'. -$ kubectl get replicationcontroller +kubectl get replicationcontroller # List all replication controllers and services together in plain-text output format. -$ kubectl get rc,services +kubectl get rc,services # List all daemon sets, including uninitialized ones, in plain-text output format. -$ kubectl get ds --include-uninitialized +kubectl get ds --include-uninitialized # List all pods running on node server01 -$ kubectl get pods --field-selector=spec.nodeName=server01 +kubectl get pods --field-selector=spec.nodeName=server01 ``` `kubectl describe` - Display detailed state of one or more resources, including the uninitialized ones by default. ```shell # Display the details of the node with name . -$ kubectl describe nodes +kubectl describe nodes # Display the details of the pod with name . 
-$ kubectl describe pods/ +kubectl describe pods/ # Display the details of all the pods that are managed by the replication controller named . # Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller. -$ kubectl describe pods +kubectl describe pods # Describe all pods, not including uninitialized ones -$ kubectl describe pods --include-uninitialized=false +kubectl describe pods --include-uninitialized=false ``` {{< note >}} @@ -326,39 +326,39 @@ the pods running on it, the events generated for the node etc. ```shell # Delete a pod using the type and name specified in the pod.yaml file. -$ kubectl delete -f pod.yaml +kubectl delete -f pod.yaml # Delete all the pods and services that have the label name=. -$ kubectl delete pods,services -l name= +kubectl delete pods,services -l name= # Delete all the pods and services that have the label name=, including uninitialized ones. -$ kubectl delete pods,services -l name= --include-uninitialized +kubectl delete pods,services -l name= --include-uninitialized # Delete all pods, including uninitialized ones. -$ kubectl delete pods --all +kubectl delete pods --all ``` `kubectl exec` - Execute a command against a container in a pod. ```shell # Get output from running 'date' from pod . By default, output is from the first container. -$ kubectl exec date +kubectl exec date # Get output from running 'date' in container of pod . -$ kubectl exec -c date +kubectl exec -c date # Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container. -$ kubectl exec -ti /bin/bash +kubectl exec -ti /bin/bash ``` `kubectl logs` - Print the logs for a container in a pod. ```shell # Return a snapshot of the logs from pod . -$ kubectl logs +kubectl logs # Start streaming the logs from pod . This is similar to the 'tail -f' Linux command. -$ kubectl logs -f +kubectl logs -f ``` ## Examples: Creating and using plugins @@ -382,7 +382,7 @@ $ sudo mv ./kubectl-hello /usr/local/bin # we have now created and "installed" a kubectl plugin. # we can begin using our plugin by invoking it from kubectl as if it were a regular command -$ kubectl hello +kubectl hello hello world # we can "uninstall" a plugin, by simply removing it from our PATH @@ -393,7 +393,7 @@ In order to view all of the plugins that are available to `kubectl`, we can use the `kubectl plugin list` subcommand: ```shell -$ kubectl plugin list +kubectl plugin list The following kubectl-compatible plugins are available: /usr/local/bin/kubectl-hello @@ -404,7 +404,7 @@ The following kubectl-compatible plugins are available: # not executable, or that are overshadowed by other # plugins, for example $ sudo chmod -x /usr/local/bin/kubectl-foo -$ kubectl plugin list +kubectl plugin list The following kubectl-compatible plugins are available: /usr/local/bin/kubectl-hello @@ -437,7 +437,7 @@ $ sudo chmod +x ./kubectl-whoami # and move it into our PATH $ sudo mv ./kubectl-whoami /usr/local/bin -$ kubectl whoami +kubectl whoami Current user: plugins-user ``` diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md index 6c6de5a281606..03107748b4eca 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-join.md @@ -146,20 +146,20 @@ for a kubelet when a Bootstrap Token was used when authenticating. 
If you don't automatically approve kubelet client certs, you can turn it off by executing this command: ```console -$ kubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap +kubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap ``` After that, `kubeadm join` will block until the admin has manually approved the CSR in flight: ```console -$ kubectl get csr +kubectl get csr NAME AGE REQUESTOR CONDITION node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 18s system:bootstrap:878f07 Pending -$ kubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ +kubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ certificatesigningrequest "node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ" approved -$ kubectl get csr +kubectl get csr NAME AGE REQUESTOR CONDITION node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ 1m system:bootstrap:878f07 Approved,Issued ``` @@ -177,7 +177,7 @@ it off regardless. Doing so will disable the ability to use the `--discovery-tok * Fetch the `cluster-info` file from the API Server: ```console -$ kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml +kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/ //" | tee cluster-info.yaml apiVersion: v1 clusters: - cluster: @@ -196,7 +196,7 @@ users: [] * Turn off public access to the `cluster-info` ConfigMap: ```console -$ kubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo +kubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo ``` These commands should be run after `kubeadm init` but before `kubeadm join`. diff --git a/content/en/docs/setup/independent/create-cluster-kubeadm.md b/content/en/docs/setup/independent/create-cluster-kubeadm.md index f83d66fa0a9cd..908be57bd0062 100644 --- a/content/en/docs/setup/independent/create-cluster-kubeadm.md +++ b/content/en/docs/setup/independent/create-cluster-kubeadm.md @@ -322,7 +322,7 @@ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/ Once all Cilium pods are marked as `READY`, you start using your cluster. ```shell -$ kubectl get pods -n kube-system --selector=k8s-app=cilium +kubectl get pods -n kube-system --selector=k8s-app=cilium NAME READY STATUS RESTARTS AGE cilium-drxkl 1/1 Running 0 18m ``` diff --git a/content/en/docs/setup/minikube.md b/content/en/docs/setup/minikube.md index 1b148756c430f..336b5aace4ba4 100644 --- a/content/en/docs/setup/minikube.md +++ b/content/en/docs/setup/minikube.md @@ -53,19 +53,19 @@ Running pre-create checks... Creating machine... Starting local Kubernetes cluster... -$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080 +kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080 deployment.apps/hello-minikube created -$ kubectl expose deployment hello-minikube --type=NodePort +kubectl expose deployment hello-minikube --type=NodePort service/hello-minikube exposed # We have now launched an echoserver pod but we have to wait until the pod is up before curling/accessing it # via the exposed service. 
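# As an optional alternative to polling (a hedged sketch, assuming kubectl v1.11+ where
# `kubectl wait` is available), you can block until the pod reports Ready:
#   kubectl wait --for=condition=Ready pod -l run=hello-minikube --timeout=120s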
# To check whether the pod is up and running we can use the following: -$ kubectl get pod +kubectl get pod NAME READY STATUS RESTARTS AGE hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s # We can see that the pod is still being created from the ContainerCreating status -$ kubectl get pod +kubectl get pod NAME READY STATUS RESTARTS AGE hello-minikube-3383150820-vctvh 1/1 Running 0 13s # We can see that the pod is now Running and we will now be able to curl it: @@ -98,9 +98,9 @@ Request Body: -no body in request- -$ kubectl delete services hello-minikube +kubectl delete services hello-minikube service "hello-minikube" deleted -$ kubectl delete deployment hello-minikube +kubectl delete deployment hello-minikube deployment.extensions "hello-minikube" deleted $ minikube stop Stopping local Kubernetes cluster... diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md index 2dfc3767d7e90..397993887abe6 100644 --- a/content/en/docs/tasks/access-application-cluster/access-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md @@ -26,7 +26,7 @@ or someone else setup the cluster and provided you with credentials and a locati Check the location and credentials that kubectl knows about with this command: ```shell -$ kubectl config view +kubectl config view ``` Many of the [examples](/docs/user-guide/kubectl-cheatsheet) provide an introduction to using @@ -56,7 +56,7 @@ locating the apiserver and authenticating. Run it like this: ```shell -$ kubectl proxy --port=8080 +kubectl proxy --port=8080 ``` See [kubectl proxy](/docs/reference/generated/kubectl/kubectl-commands/#proxy) for more details. @@ -239,7 +239,7 @@ Typically, there are several services which are started on a cluster by kube-sys with the `kubectl cluster-info` command: ```shell -$ kubectl cluster-info +kubectl cluster-info Kubernetes master is running at https://104.197.5.247 elasticsearch-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy diff --git a/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md b/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md index b3a2c5d311acd..b8b1e6c81c67f 100644 --- a/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/administer-cluster/configure-multiple-schedulers.md @@ -96,7 +96,7 @@ kubectl create -f my-scheduler.yaml Verify that the scheduler pod is running: ```shell -$ kubectl get pods --namespace=kube-system +kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE .... my-scheduler-lnf4s-4744f 1/1 Running 0 2m @@ -116,7 +116,7 @@ First, update the following fields in your YAML file: If RBAC is enabled on your cluster, you must update the `system:kube-scheduler` cluster role. 
Add your scheduler name to the resourceNames of the rule applied for endpoints resources, as in the following example: ``` -$ kubectl edit clusterrole system:kube-scheduler +kubectl edit clusterrole system:kube-scheduler - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index c3a4444f35da5..7026a45df8526 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -74,7 +74,7 @@ The following kubeadm command outputs the name of the certificate to approve, th $ sudo kubeadm alpha certs renew apiserver --use-api & [1] 2890 [certs] certificate request "kubeadm-cert-kube-apiserver-ld526" created -$ kubectl certificate approve kubeadm-cert-kube-apiserver-ld526 +kubectl certificate approve kubeadm-cert-kube-apiserver-ld526 certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kube-apiserver-ld526 approved [1]+ Done sudo kubeadm alpha certs renew apiserver --use-api ``` diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index 5ae36d2a6d88c..ae838eee58a31 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -45,7 +45,7 @@ Services, and Deployments used by the cluster. Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following: ```shell -$ kubectl get namespaces +kubectl get namespaces NAME STATUS AGE default Active 13m ``` @@ -74,7 +74,7 @@ Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which de Create the `development` namespace using kubectl. ```shell -$ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json +kubectl create -f https://k8s.io/examples/admin/namespace-dev.json ``` Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a `production` namespace: @@ -84,13 +84,13 @@ Save the following contents into file [`namespace-prod.json`](/examples/admin/na And then let's create the `production` namespace using kubectl. ```shell -$ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json +kubectl create -f https://k8s.io/examples/admin/namespace-prod.json ``` To be sure things are right, let's list all of the namespaces in our cluster. ```shell -$ kubectl get namespaces --show-labels +kubectl get namespaces --show-labels NAME STATUS AGE LABELS default Active 32m development Active 29s name=development @@ -108,7 +108,7 @@ To demonstrate this, let's spin up a simple Deployment and Pods in the `developm We first check what is the current context: ```shell -$ kubectl config view +kubectl config view apiVersion: v1 clusters: - cluster: @@ -134,17 +134,17 @@ users: password: h5M0FtUUIflBSdI7 username: admin -$ kubectl config current-context +kubectl config current-context lithe-cocoa-92103_kubernetes ``` The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context. 
```shell -$ kubectl config set-context dev --namespace=development \ +kubectl config set-context dev --namespace=development \ --cluster=lithe-cocoa-92103_kubernetes \ --user=lithe-cocoa-92103_kubernetes -$ kubectl config set-context prod --namespace=production \ +kubectl config set-context prod --namespace=production \ --cluster=lithe-cocoa-92103_kubernetes \ --user=lithe-cocoa-92103_kubernetes ``` @@ -156,7 +156,7 @@ new request contexts depending on which namespace you wish to work against. To view the new contexts: ```shell -$ kubectl config view +kubectl config view apiVersion: v1 clusters: - cluster: @@ -196,13 +196,13 @@ users: Let's switch to operate in the `development` namespace. ```shell -$ kubectl config use-context dev +kubectl config use-context dev ``` You can verify your current context by doing the following: ```shell -$ kubectl config current-context +kubectl config current-context dev ``` @@ -211,18 +211,18 @@ At this point, all requests we make to the Kubernetes cluster from the command l Let's create some contents. ```shell -$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 +kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 ``` We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details. ```shell -$ kubectl get deployment +kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE snowflake 2 2 2 2 2m -$ kubectl get pods -l run=snowflake +kubectl get pods -l run=snowflake NAME READY STATUS RESTARTS AGE snowflake-3968820950-9dgr8 1/1 Running 0 2m snowflake-3968820950-vgc4n 1/1 Running 0 2m @@ -233,22 +233,22 @@ And this is great, developers are able to do what they want, and they do not hav Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other. ```shell -$ kubectl config use-context prod +kubectl config use-context prod ``` The `production` namespace should be empty, and the following commands should return nothing. ```shell -$ kubectl get deployment -$ kubectl get pods +kubectl get deployment +kubectl get pods ``` Production likes to run cattle, so let's create some cattle pods. ```shell -$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 +kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 -$ kubectl get deployment +kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE cattle 5 5 5 5 10s diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md index 6311ba8b64194..144ba9a900d69 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces.md +++ b/content/en/docs/tasks/administer-cluster/namespaces.md @@ -22,7 +22,7 @@ This page shows how to view, work in, and delete {{< glossary_tooltip text="name 1. 
List the current namespaces in a cluster using: ```shell -$ kubectl get namespaces +kubectl get namespaces NAME STATUS AGE default Active 11d kube-system Active 11d @@ -38,13 +38,13 @@ Kubernetes starts with three initial namespaces: You can also get the summary of a specific namespace using: ```shell -$ kubectl get namespaces +kubectl get namespaces ``` Or you can get detailed information with: ```shell -$ kubectl describe namespaces +kubectl describe namespaces Name: default Labels: Annotations: @@ -89,7 +89,7 @@ metadata: Then run: ```shell -$ kubectl create -f ./my-namespace.yaml +kubectl create -f ./my-namespace.yaml ``` Note that the name of your namespace must be a DNS compatible label. @@ -103,7 +103,7 @@ More information on `finalizers` can be found in the namespace [design doc](http 1. Delete a namespace with ```shell -$ kubectl delete namespaces +kubectl delete namespaces ``` {{< warning >}} @@ -122,7 +122,7 @@ Services, and Deployments used by the cluster. Assuming you have a fresh cluster, you can introspect the available namespace's by doing the following: ```shell -$ kubectl get namespaces +kubectl get namespaces NAME STATUS AGE default Active 13m ``` @@ -151,19 +151,19 @@ Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which de Create the `development` namespace using kubectl. ```shell -$ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json +kubectl create -f https://k8s.io/examples/admin/namespace-dev.json ``` And then let's create the `production` namespace using kubectl. ```shell -$ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json +kubectl create -f https://k8s.io/examples/admin/namespace-prod.json ``` To be sure things are right, list all of the namespaces in our cluster. ```shell -$ kubectl get namespaces --show-labels +kubectl get namespaces --show-labels NAME STATUS AGE LABELS default Active 32m development Active 29s name=development @@ -181,7 +181,7 @@ To demonstrate this, let's spin up a simple Deployment and Pods in the `developm We first check what is the current context: ```shell -$ kubectl config view +kubectl config view apiVersion: v1 clusters: - cluster: @@ -207,15 +207,15 @@ users: password: h5M0FtUUIflBSdI7 username: admin -$ kubectl config current-context +kubectl config current-context lithe-cocoa-92103_kubernetes ``` The next step is to define a context for the kubectl client to work in each namespace. The values of "cluster" and "user" fields are copied from the current context. ```shell -$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes -$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes +kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes +kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes ``` The above commands provided two request contexts you can alternate against depending on what namespace you @@ -224,13 +224,13 @@ wish to work against. Let's switch to operate in the `development` namespace. 
```shell -$ kubectl config use-context dev +kubectl config use-context dev ``` You can verify your current context by doing the following: ```shell -$ kubectl config current-context +kubectl config current-context dev ``` @@ -239,18 +239,18 @@ At this point, all requests we make to the Kubernetes cluster from the command l Let's create some contents. ```shell -$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 +kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 ``` We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details. ```shell -$ kubectl get deployment +kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE snowflake 2 2 2 2 2m -$ kubectl get pods -l run=snowflake +kubectl get pods -l run=snowflake NAME READY STATUS RESTARTS AGE snowflake-3968820950-9dgr8 1/1 Running 0 2m snowflake-3968820950-vgc4n 1/1 Running 0 2m @@ -261,22 +261,22 @@ And this is great, developers are able to do what they want, and they do not hav Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other. ```shell -$ kubectl config use-context prod +kubectl config use-context prod ``` The `production` namespace should be empty, and the following commands should return nothing. ```shell -$ kubectl get deployment -$ kubectl get pods +kubectl get deployment +kubectl get pods ``` Production likes to run cattle, so let's create some cattle pods. 
```shell -$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 +kubectl run cattle --image=kubernetes/serve_hostname --replicas=5 -$ kubectl get deployment +kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE cattle 5 5 5 5 10s diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md index 4c9f1b07df801..2a829a182ec27 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -41,7 +41,7 @@ DaemonSet configurations for Cilium, and the necessary configurations to connect to the etcd instance deployed in minikube as well as appropriate RBAC settings: ```shell -$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium-minikube.yaml +kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium-minikube.yaml configmap/cilium-config created daemonset.apps/cilium created clusterrolebinding.rbac.authorization.k8s.io/cilium created diff --git a/content/en/docs/tasks/administer-cluster/storage-object-in-use-protection.md b/content/en/docs/tasks/administer-cluster/storage-object-in-use-protection.md index d83510cfc9962..7d4c161c6aeaa 100644 --- a/content/en/docs/tasks/administer-cluster/storage-object-in-use-protection.md +++ b/content/en/docs/tasks/administer-cluster/storage-object-in-use-protection.md @@ -304,9 +304,9 @@ task-pv-volume 1Gi RWO Delete Terminating defa ```shell kubectl delete pvc task-pv-claim persistentvolumeclaim "task-pv-claim" deleted -$ kubectl get pvc +kubectl get pvc No resources found. -$ kubectl get pv +kubectl get pv No resources found. ``` diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index 867b4bcd0d920..6229ad6449afc 100644 --- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -320,7 +320,7 @@ INFO Successfully created deployment: frontend Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details. -$ kubectl get deployment,svc,pods +kubectl get deployment,svc,pods NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.extensions/frontend 1 1 1 1 4m deployment.extensions/redis-master 1 1 1 1 4m diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application.md b/content/en/docs/tasks/debug-application-cluster/debug-application.md index 94adc0578c8eb..eb69129790e7d 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-application.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-application.md @@ -31,7 +31,7 @@ your Service? The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with the following command: ```shell -$ kubectl describe pods ${POD_NAME} +kubectl describe pods ${POD_NAME} ``` Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts? 
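As a quick way to answer those questions without reading the full `describe` output, you can pull just the per-container readiness and restart counts. This is a sketch that reuses the `${POD_NAME}` variable from above; `describe` remains the richer source of events.

```shell
kubectl get pod ${POD_NAME} -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\t"}{.restartCount}{"\n"}{end}'
```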
@@ -68,19 +68,19 @@ First, take a look at the logs of the current container: ```shell -$ kubectl logs ${POD_NAME} ${CONTAINER_NAME} +kubectl logs ${POD_NAME} ${CONTAINER_NAME} ``` If your container has previously crashed, you can access the previous container's crash log with: ```shell -$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} +kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} ``` Alternately, you can run commands inside that container with `exec`: ```shell -$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN} +kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN} ``` {{< note >}} @@ -90,7 +90,7 @@ $ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ As an example, to look at the logs from a running Cassandra pod, you might run ```shell -$ kubectl exec cassandra -- cat /var/log/cassandra/system.log +kubectl exec cassandra -- cat /var/log/cassandra/system.log ``` If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host, @@ -145,7 +145,7 @@ First, verify that there are endpoints for the service. For every Service object You can view this resource with: ```shell -$ kubectl get endpoints ${SERVICE_NAME} +kubectl get endpoints ${SERVICE_NAME} ``` Make sure that the endpoints match up with the number of containers that you expect to be a member of your service. @@ -168,7 +168,7 @@ spec: You can use: ```shell -$ kubectl get pods --selector=name=nginx,type=frontend +kubectl get pods --selector=name=nginx,type=frontend ``` to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service. diff --git a/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md b/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md index 93ae3c38ddb32..44b4d84ddc664 100644 --- a/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md +++ b/content/en/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md @@ -63,7 +63,7 @@ all be running in the kube-system namespace soon after the cluster comes to life. ```shell -$ kubectl get pods --namespace=kube-system +kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE elasticsearch-logging-v1-78nog 1/1 Running 0 2h elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h diff --git a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md index 1a5bc4e8e6011..839120734cb1f 100644 --- a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md +++ b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md @@ -145,7 +145,7 @@ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml You can observe the running pod: ```shell -$ kubectl get pods +kubectl get pods NAME READY STATUS RESTARTS AGE counter 1/1 Running 0 5m ``` @@ -155,7 +155,7 @@ has to download the container image first. When the pod status changes to `Runni you can use the `kubectl logs` command to view the output of this counter pod. ```shell -$ kubectl logs counter +kubectl logs counter 0: Mon Jan 1 00:00:00 UTC 2001 1: Mon Jan 1 00:00:01 UTC 2001 2: Mon Jan 1 00:00:02 UTC 2001 @@ -169,21 +169,21 @@ if the pod is evicted from the node, log files are lost. 
Let's demonstrate this by deleting the currently running counter container: ```shell -$ kubectl delete pod counter +kubectl delete pod counter pod "counter" deleted ``` and then recreating it: ```shell -$ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml +kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml pod/counter created ``` After some time, you can access logs from the counter pod again: ```shell -$ kubectl logs counter +kubectl logs counter 0: Mon Jan 1 00:01:00 UTC 2001 1: Mon Jan 1 00:01:01 UTC 2001 2: Mon Jan 1 00:01:02 UTC 2001 diff --git a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md index 387b14c8022f4..e9577bcab8604 100644 --- a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md +++ b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md @@ -96,14 +96,14 @@ sudo mv ./kubectl-foo /usr/local/bin You may now invoke your plugin as a `kubectl` command: ``` -$ kubectl foo +kubectl foo I am a plugin named kubectl-foo ``` All args and flags are passed as-is to the executable: ``` -$ kubectl foo version +kubectl foo version 1.0.0 ``` @@ -111,7 +111,7 @@ All environment variables are also passed as-is to the executable: ```bash $ export KUBECONFIG=~/.kube/config -$ kubectl foo config +kubectl foo config /home//.kube/config $ KUBECONFIG=/etc/kube/config kubectl foo config @@ -149,7 +149,7 @@ $ sudo chmod +x ./kubectl-foo-bar-baz $ sudo mv ./kubectl-foo-bar-baz /usr/local/bin # ensure our plugin is recognized by kubectl -$ kubectl plugin list +kubectl plugin list The following kubectl-compatible plugins are available: /usr/local/bin/kubectl-foo-bar-baz @@ -157,7 +157,7 @@ The following kubectl-compatible plugins are available: # test that calling our plugin via a "kubectl" command works # even when additional arguments and flags are passed to our # plugin executable by the user. -$ kubectl foo bar baz arg1 --meaningless-flag=true +kubectl foo bar baz arg1 --meaningless-flag=true My first command-line argument was arg1 ``` @@ -179,7 +179,7 @@ $ sudo chmod +x ./kubectl-foo_bar $ sudo mv ./kubectl-foo_bar /usr/local/bin # our plugin can now be invoked from `kubectl` like so: -$ kubectl foo-bar +kubectl foo-bar I am a plugin with a dash in my name ``` @@ -188,11 +188,11 @@ The command from the above example, can be invoked using either a dash (`-`) or ```bash # our plugin can be invoked with a dash -$ kubectl foo-bar +kubectl foo-bar I am a plugin with a dash in my name # it can also be invoked using an underscore -$ kubectl foo_bar +kubectl foo_bar I am a plugin with a dash in my name ``` @@ -223,16 +223,16 @@ There is another kind of overshadowing that can occur with plugin filenames. Giv ```bash # for a given kubectl command, the plugin with the longest possible filename will always be preferred -$ kubectl foo bar baz +kubectl foo bar baz Plugin kubectl-foo-bar-baz is executed -$ kubectl foo bar +kubectl foo bar Plugin kubectl-foo-bar is executed -$ kubectl foo bar baz buz +kubectl foo bar baz buz Plugin kubectl-foo-bar-baz is executed, with "buz" as its first argument -$ kubectl foo bar buz +kubectl foo bar buz Plugin kubectl-foo-bar is executed, with "buz" as its first argument ``` @@ -250,7 +250,7 @@ kubectl-parent-subcommand-subsubcommand You can use the aforementioned `kubectl plugin list` command to ensure that your plugin is visible by `kubectl`, and verify that there are no warnings preventing it from being called as a `kubectl` command. 
```bash -$ kubectl plugin list +kubectl plugin list The following kubectl-compatible plugins are available: test/fixtures/pkg/kubectl/plugins/kubectl-foo diff --git a/content/en/docs/tasks/inject-data-application/podpreset.md b/content/en/docs/tasks/inject-data-application/podpreset.md index 10fff8cec432f..96beb6256e557 100644 --- a/content/en/docs/tasks/inject-data-application/podpreset.md +++ b/content/en/docs/tasks/inject-data-application/podpreset.md @@ -42,7 +42,7 @@ kubectl create -f https://k8s.io/examples/podpreset/preset.yaml Examine the created PodPreset: ```shell -$ kubectl get podpreset +kubectl get podpreset NAME AGE allow-database 1m ``` @@ -54,13 +54,13 @@ The new PodPreset will act upon any pod that has label `role: frontend`. Create a pod: ```shell -$ kubectl create -f https://k8s.io/examples/podpreset/pod.yaml +kubectl create -f https://k8s.io/examples/podpreset/pod.yaml ``` List the running Pods: ```shell -$ kubectl get pods +kubectl get pods NAME READY STATUS RESTARTS AGE website 1/1 Running 0 4m ``` @@ -72,7 +72,7 @@ website 1/1 Running 0 4m To see above output, run the following command: ```shell -$ kubectl get pod website -o yaml +kubectl get pod website -o yaml ``` ## Pod Spec with ConfigMap Example @@ -157,7 +157,7 @@ when there is a conflict. **If we run `kubectl describe...` we can see the event:** ```shell -$ kubectl describe ... +kubectl describe ... .... Events: FirstSeen LastSeen Count From SubobjectPath Reason Message @@ -169,7 +169,7 @@ Events: Once you don't need a pod preset anymore, you can delete it with `kubectl`: ```shell -$ kubectl delete podpreset allow-database +kubectl delete podpreset allow-database podpreset "allow-database" deleted ``` diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md index d74635e42e72d..837932450b975 100644 --- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -50,21 +50,21 @@ This example cron job config `.spec` file prints the current time and a hello me Run the example cron job by downloading the example file and then running this command: ```shell -$ kubectl create -f ./cronjob.yaml +kubectl create -f ./cronjob.yaml cronjob "hello" created ``` Alternatively, you can use `kubectl run` to create a cron job without writing a full config: ```shell -$ kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster" +kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster" cronjob "hello" created ``` After creating the cron job, get its status using this command: ```shell -$ kubectl get cronjob hello +kubectl get cronjob hello NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE hello */1 * * * * False 0 ``` @@ -73,7 +73,7 @@ As you can see from the results of the command, the cron job has not scheduled o Watch for the job to be created in around one minute: ```shell -$ kubectl get jobs --watch +kubectl get jobs --watch NAME DESIRED SUCCESSFUL AGE hello-4111706356 1 1 2s ``` @@ -82,7 +82,7 @@ Now you've seen one running job scheduled by the "hello" cron job. 
You can stop watching the job and view the cron job again to see that it scheduled the job: ```shell -$ kubectl get cronjob hello +kubectl get cronjob hello NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE hello */1 * * * * False 0 Mon, 29 Aug 2016 14:34:00 -0700 ``` @@ -100,7 +100,7 @@ $ pods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath $ echo $pods hello-4111706356-o9qcm -$ kubectl logs $pods +kubectl logs $pods Mon Aug 29 21:34:09 UTC 2016 Hello from the Kubernetes cluster ``` @@ -110,7 +110,7 @@ Hello from the Kubernetes cluster When you don't need a cron job any more, delete it with `kubectl delete cronjob`: ```shell -$ kubectl delete cronjob hello +kubectl delete cronjob hello cronjob "hello" deleted ``` diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md index ae9577a5a4add..7063d9a56ea33 100644 --- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md +++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md @@ -46,9 +46,9 @@ cluster and reuse it for many jobs, as well as for long-running services. Start RabbitMQ as follows: ```shell -$ kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml +kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml service "rabbitmq-service" created -$ kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml +kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml replicationcontroller "rabbitmq-controller" created ``` @@ -64,7 +64,7 @@ First create a temporary interactive Pod. ```shell # Create a temporary interactive container -$ kubectl run -i --tty temp --image ubuntu:18.04 +kubectl run -i --tty temp --image ubuntu:18.04 Waiting for pod default/temp-loe07 to be running, status is Pending, pod ready: false ... [ previous line repeats several times .. hit return when it stops ] ... ``` @@ -240,7 +240,7 @@ kubectl create -f ./job.yaml Now wait a bit, then check on the job. ```shell -$ kubectl describe jobs/job-wq-1 +kubectl describe jobs/job-wq-1 Name: job-wq-1 Namespace: default Selector: controller-uid=41d75705-92df-11e7-b85e-fa163ee3c11f diff --git a/content/en/docs/tasks/run-application/configure-pdb.md b/content/en/docs/tasks/run-application/configure-pdb.md index 36a06deddae90..d64ea8c8b3e33 100644 --- a/content/en/docs/tasks/run-application/configure-pdb.md +++ b/content/en/docs/tasks/run-application/configure-pdb.md @@ -179,7 +179,7 @@ Assuming you don't actually have pods matching `app: zookeeper` in your namespac then you'll see something like this: ```shell -$ kubectl get poddisruptionbudgets +kubectl get poddisruptionbudgets NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE zk-pdb 2 0 7s ``` @@ -187,7 +187,7 @@ zk-pdb 2 0 7s If there are matching pods (say, 3), then you would see something like this: ```shell -$ kubectl get poddisruptionbudgets +kubectl get poddisruptionbudgets NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE zk-pdb 2 1 7s ``` @@ -198,7 +198,7 @@ counted the matching pods, and updated the status of the PDB. 
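If you want just the number rather than the whole table, the allowed-disruptions count is also exposed as a status field. The following is a sketch assuming the `status.disruptionsAllowed` field of the `policy/v1beta1` API; `zk-pdb` is the example budget created above.

```shell
kubectl get poddisruptionbudgets zk-pdb -o jsonpath='{.status.disruptionsAllowed}'
```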
You can get more information about the status of a PDB with this command: ```shell -$ kubectl get poddisruptionbudgets zk-pdb -o yaml +kubectl get poddisruptionbudgets zk-pdb -o yaml apiVersion: policy/v1beta1 kind: PodDisruptionBudget metadata: diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 010511f8069f4..eb266aa73a80e 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -65,7 +65,7 @@ It defines an index.php page which performs some CPU intensive computations: First, we will start a deployment running the image and expose it as a service: ```shell -$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80 +kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80 service/php-apache created deployment.apps/php-apache created ``` @@ -82,14 +82,14 @@ Roughly speaking, HPA will increase and decrease the number of replicas See [here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#autoscaling-algorithm) for more details on the algorithm. ```shell -$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 +kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 horizontalpodautoscaler.autoscaling/php-apache autoscaled ``` We may check the current status of autoscaler by running: ```shell -$ kubectl get hpa +kubectl get hpa NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 18s @@ -104,7 +104,7 @@ Now, we will see how the autoscaler reacts to increased load. We will start a container, and send an infinite loop of queries to the php-apache service (please run it in a different terminal): ```shell -$ kubectl run -i --tty load-generator --image=busybox /bin/sh +kubectl run -i --tty load-generator --image=busybox /bin/sh Hit enter for command prompt @@ -114,7 +114,7 @@ $ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done Within a minute or so, we should see the higher CPU load by executing: ```shell -$ kubectl get hpa +kubectl get hpa NAME REFERENCE TARGET CURRENT MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 305% / 50% 305% 1 10 1 3m @@ -124,7 +124,7 @@ Here, CPU consumption has increased to 305% of the request. As a result, the deployment was resized to 7 replicas: ```shell -$ kubectl get deployment php-apache +kubectl get deployment php-apache NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE php-apache 7 7 7 7 19m ``` @@ -145,11 +145,11 @@ the load generation by typing ` + C`. Then we will verify the result state (after a minute or so): ```shell -$ kubectl get hpa +kubectl get hpa NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache/scale 0% / 50% 1 10 1 11m -$ kubectl get deployment php-apache +kubectl get deployment php-apache NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE php-apache 1 1 1 1 27m ``` @@ -172,7 +172,7 @@ by making use of the `autoscaling/v2beta2` API version. 
First, get the YAML of your HorizontalPodAutoscaler in the `autoscaling/v2beta2` form: ```shell -$ kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml +kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml ``` Open the `/tmp/hpa-v2.yaml` file in an editor, and you should see YAML which looks like this: @@ -401,7 +401,7 @@ The conditions appear in the `status.conditions` field. To see the conditions a we can use `kubectl describe hpa`: ```shell -$ kubectl describe hpa cm-test +kubectl describe hpa cm-test Name: cm-test Namespace: prom Labels: @@ -454,7 +454,7 @@ can use the following file to create it declaratively: We will create the autoscaler by executing the following command: ```shell -$ kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml +kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml horizontalpodautoscaler.autoscaling/php-apache created ``` diff --git a/content/en/docs/tasks/run-application/rolling-update-replication-controller.md b/content/en/docs/tasks/run-application/rolling-update-replication-controller.md index e0ace5c4c9794..35d7b27a4c502 100644 --- a/content/en/docs/tasks/run-application/rolling-update-replication-controller.md +++ b/content/en/docs/tasks/run-application/rolling-update-replication-controller.md @@ -165,14 +165,14 @@ spec: To update to version 1.9.1, you can use [`kubectl rolling-update --image`](https://git.k8s.io/community/contributors/design-proposals/cli/simple-rolling-update.md) to specify the new image: ```shell -$ kubectl rolling-update my-nginx --image=nginx:1.9.1 +kubectl rolling-update my-nginx --image=nginx:1.9.1 Created my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 ``` In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old: ```shell -$ kubectl get pods -l app=nginx -L deployment +kubectl get pods -l app=nginx -L deployment NAME READY STATUS RESTARTS AGE DEPLOYMENT my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-k156z 1/1 Running 0 1m ccba8fbd8cc8160970f63f9a2696fc46 my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh 1/1 Running 0 35s ccba8fbd8cc8160970f63f9a2696fc46 @@ -199,7 +199,7 @@ replicationcontroller "my-nginx" rolling updated If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`: ```shell -$ kubectl rolling-update my-nginx --rollback +kubectl rolling-update my-nginx --rollback Setting "my-nginx" replicas to 1 Continuing update with existing controller my-nginx. 
Scaling up nginx from 1 to 1, scaling down my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 1 to 0 (keep 1 pods available, don't exceed 2 pods) @@ -239,7 +239,7 @@ spec: and roll it out: ```shell -$ kubectl rolling-update my-nginx -f ./nginx-rc.yaml +kubectl rolling-update my-nginx -f ./nginx-rc.yaml Created my-nginx-v4 Scaling up my-nginx-v4 from 0 to 5, scaling down my-nginx from 4 to 0 (keep 4 pods available, don't exceed 5 pods) Scaling my-nginx-v4 up to 1 diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md index bfb453b226d99..a9cc17f30ca14 100644 --- a/content/en/docs/tutorials/clusters/apparmor.md +++ b/content/en/docs/tutorials/clusters/apparmor.md @@ -107,7 +107,7 @@ on nodes by checking the node ready condition message (though this is likely to later release): ```shell -$ kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}\n{end}' +kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}\n{end}' gke-test-default-pool-239f5d02-gyn2: kubelet is posting ready status. AppArmor enabled gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor enabled gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled @@ -148,14 +148,14 @@ prerequisites have not been met, the Pod will be rejected, and will not run. To verify that the profile was applied, you can look for the AppArmor security option listed in the container created event: ```shell -$ kubectl get events | grep Created +kubectl get events | grep Created 22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet e2e-test-stclair-minion-group-31nt} Created container with docker id 269a53b202d3; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write] ``` You can also verify directly that the container's root process is running with the correct profile by checking its proc attr: ```shell -$ kubectl exec cat /proc/1/attr/current +kubectl exec cat /proc/1/attr/current k8s-apparmor-example-deny-write (enforce) ``` @@ -198,14 +198,14 @@ Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile: {{< codenew file="pods/security/hello-apparmor.yaml" >}} ```shell -$ kubectl create -f ./hello-apparmor.yaml +kubectl create -f ./hello-apparmor.yaml ``` If we look at the pod events, we can see that the Pod container was created with the AppArmor profile "k8s-apparmor-example-deny-write": ```shell -$ kubectl get events | grep hello-apparmor +kubectl get events | grep hello-apparmor 14s 14s 1 hello-apparmor Pod Normal Scheduled {default-scheduler } Successfully assigned hello-apparmor to gke-test-default-pool-239f5d02-gyn2 14s 14s 1 hello-apparmor Pod spec.containers{hello} Normal Pulling {kubelet gke-test-default-pool-239f5d02-gyn2} pulling image "busybox" 13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Pulled {kubelet gke-test-default-pool-239f5d02-gyn2} Successfully pulled image "busybox" @@ -216,14 +216,14 @@ $ kubectl get events | grep hello-apparmor We can verify that the container is actually running with that profile by checking its proc attr: ```shell -$ kubectl exec hello-apparmor cat /proc/1/attr/current +kubectl exec hello-apparmor cat /proc/1/attr/current k8s-apparmor-example-deny-write (enforce) ``` Finally, we can see what happens if we try to violate the profile by writing to a file: ```shell -$ kubectl exec hello-apparmor touch 
/tmp/test +kubectl exec hello-apparmor touch /tmp/test touch: /tmp/test: Permission denied error: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1 ``` @@ -231,7 +231,7 @@ error: error executing remote command: command terminated with non-zero exit cod To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded: ```shell -$ kubectl create -f /dev/stdin < 2h v1.13.0 kubernetes-minion-group-cx31 Ready 2h v1.13.0 @@ -72,10 +72,10 @@ iptables You can test source IP preservation by creating a Service over the source IP app: ```console -$ kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080 +kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080 service/clusterip exposed -$ kubectl get svc clusterip +kubectl get svc clusterip NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE clusterip ClusterIP 10.0.170.92 80/TCP 51s ``` @@ -83,7 +83,7 @@ clusterip ClusterIP 10.0.170.92 80/TCP 51s And hitting the `ClusterIP` from a pod in the same cluster: ```console -$ kubectl run busybox -it --image=busybox --restart=Never --rm +kubectl run busybox -it --image=busybox --restart=Never --rm Waiting for pod default/busybox to be running, status is Pending, pod ready: false If you don't see a command prompt, try pressing enter. @@ -115,7 +115,7 @@ As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/concep are source NAT'd by default. You can test this by creating a `NodePort` Service: ```console -$ kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort +kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort service/nodeport exposed $ NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport) @@ -170,7 +170,7 @@ packet that make it through to the endpoint. Set the `service.spec.externalTrafficPolicy` field as follows: ```console -$ kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}' +kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}' service/nodeport patched ``` @@ -219,10 +219,10 @@ described in the previous section). You can test this by exposing the source-ip-app through a loadbalancer ```console -$ kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer +kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer service/loadbalancer exposed -$ kubectl get svc loadbalancer +kubectl get svc loadbalancer NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE loadbalancer LoadBalancer 10.0.65.118 104.198.149.140 80/TCP 5m @@ -254,14 +254,14 @@ health check ---> node 1 node 2 <--- health check You can test this by setting the annotation: ```console -$ kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}' +kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}' ``` You should immediately see the `service.spec.healthCheckNodePort` field allocated by Kubernetes: ```console -$ kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort +kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort healthCheckNodePort: 32122 ``` @@ -269,7 +269,7 @@ The `service.spec.healthCheckNodePort` field points to a port on every node serving the health check at `/healthz`. 
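If you would rather script against it than copy the value from the output above, the port can be read back with JSONPath. This is a convenience sketch; `loadbalancer` is the Service created earlier and `NODE_HEALTH_PORT` is just an illustrative variable name.

```console
NODE_HEALTH_PORT=$(kubectl get svc loadbalancer -o jsonpath='{.spec.healthCheckNodePort}')
echo $NODE_HEALTH_PORT
```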
You can test this: ``` -$ kubectl get pod -o wide -l run=source-ip-app +kubectl get pod -o wide -l run=source-ip-app NAME READY STATUS RESTARTS AGE IP NODE source-ip-app-826191075-qehz4 1/1 Running 0 20h 10.180.1.136 kubernetes-minion-group-6jst @@ -322,13 +322,13 @@ the `service.spec.healthCheckNodePort` field on the Service. Delete the Services: ```console -$ kubectl delete svc -l run=source-ip-app +kubectl delete svc -l run=source-ip-app ``` Delete the Deployment, ReplicaSet and Pod: ```console -$ kubectl delete deployment source-ip-app +kubectl delete deployment source-ip-app ``` {{% /capture %}}