Note: This repository contains the guide documentation source. To view the guide in published form, view it on the Open Liberty website.
Explore how to deploy microservices in Open Liberty Docker containers to Kubernetes and manage them with the kubectl Kubernetes CLI.
Kubernetes is an open source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications.
Over the years, Kubernetes has become a major tool in containerized environments as containers are increasingly used at every stage of a continuous delivery pipeline.
Managing containers on their own can be challenging. A few containers used for development by a small team might not pose a problem, but a large number of containers, all running various applications, sharing databases, communicating with each other, and needing to be run and monitored every day of the week, will give even a large team of experienced developers a headache. In containerized environments, Kubernetes is a developer's primary tool. It handles scheduling, deployment, and mass creation and deletion of containers, and it provides update rollout capabilities at a scale that would otherwise be extremely tedious to manage. Imagine that you updated a Docker image that now needs to propagate to a dozen containers. While you could destroy and then re-create these containers yourself, you could instead issue a single short command and have Kubernetes do all the updating for you. Of course, this is just a simple example; Kubernetes has far more to offer.
Deploying an application to Kubernetes means deploying it to a Kubernetes cluster. A typical Kubernetes cluster is a collection of physical or virtual machines called nodes that run containerized applications. A cluster is made up of one master node that manages the cluster and multiple worker nodes that run the actual application instances inside Kubernetes objects called Pods.
A Pod is a basic building block of a Kubernetes cluster. It represents a single running process that encapsulates a container or, in some scenarios, multiple closely coupled containers. Pods can be replicated to scale applications and handle more traffic. From the perspective of the cluster, a set of replicated Pods is still one application instance, although it might be made up of dozens of instances of itself. A single Pod or a group of replicated Pods is managed by Kubernetes objects called Controllers. A Controller handles replication, self-healing, rollout of updates, and general management of Pods. Some examples of Controllers include Deployments, StatefulSets, and DaemonSets. In this guide, you will work with Deployments.
A Pod or a group of replicated Pods is abstracted through Kubernetes objects called Services, which define a set of rules by which the Pods can be accessed. In a basic scenario, a Kubernetes Service exposes a node port that can be used together with the cluster IP address to access the Pods that the Service encapsulates. In this guide, however, you will set up an Ingress, which is a Kubernetes object that contains a set of rules for mapping external requests, over protocols such as HTTP, to the Services inside a cluster, and that also provides load balancing and other functionality. An Ingress requires an Ingress Controller to process requests accordingly.
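To make the idea concrete, here is a rough sketch of what a minimal Ingress resource of this era looks like. The names, path, and Service details are illustrative assumptions, not the exact resources that the Helm charts create later in this guide:

```yaml
# Hypothetical sketch of a minimal Ingress resource, using the
# extensions/v1beta1 API that the Kubernetes versions in this guide use.
# All names and values here are illustrative, not the chart-created ones.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"  # processed by the NGINX Ingress controller
spec:
  rules:
  - http:
      paths:
      - path: /example                    # external path exposed by the Ingress
        backend:
          serviceName: example-service    # internal Kubernetes Service to route to
          servicePort: 9080
```

The Ingress maps the external `/example` path to the internal `example-service` Service; the Ingress controller is what actually enforces these rules.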
To learn about the various Kubernetes resources that you can configure, see the official Kubernetes documentation at https://kubernetes.io/docs/concepts/.
You will learn how to deploy two microservices in Open Liberty Docker containers to a local Kubernetes cluster by using the Kubernetes package manager called Helm. Helm lets you install packages, or charts, which are sets of preconfigured Kubernetes resources. Installing charts is much more convenient than creating and configuring Kubernetes resources yourself. You will then manage your deployed microservices by using the kubectl command-line interface for Kubernetes. The kubectl CLI is your primary tool for communicating with and managing your Kubernetes cluster.
The two microservices you will deploy are called name and ping. The name microservice displays a brief greeting and the name of the container that it runs in, making it easy to distinguish it from its other replicas. The ping microservice pings the Kubernetes Service that encapsulates the Pods running the name microservice, demonstrating how communication can be established between Pods inside a cluster.
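As a rough illustration of the ping microservice, a JAX-RS resource along the following lines would behave as described. This is a hedged sketch, not the actual source in start/ping; the class name, endpoint layout, and the way the target Service is resolved are assumptions:

```java
// Hypothetical sketch of the ping resource, in the same JAX-RS style as the
// NameResource class shown later in this guide. The real implementation in
// start/ping may differ.
package io.openliberty.guides.ping;

import javax.enterprise.context.RequestScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

@RequestScoped
@Path("/")
public class PingResource {

    @GET
    @Path("{kubeService}")
    public String ping(@PathParam("kubeService") String kubeService) {
        // Inside the cluster, a Kubernetes Service is reachable by name
        // through cluster DNS, e.g. http://ol-name-ibm-open-liberty:9080
        Client client = ClientBuilder.newClient();
        try {
            Response response = client
                .target("http://" + kubeService + ":9080/api/name")
                .request()
                .get();
            return response.getStatus() == 200
                ? "pong"
                : "Bad response from " + kubeService;
        } catch (Exception e) {
            return "Bad response from " + kubeService;
        } finally {
            client.close();
        }
    }
}
```

The key point is that the ping Pod addresses the name Pods only through their Service name, never through individual Pod IP addresses.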
You will use a local single-node Kubernetes cluster and employ NGINX as your Ingress controller.
Before you begin, make sure to have the following tools installed:
- Docker: containerization software for building containers. Kubernetes supports a variety of container types, and while you're not limited to any one in particular, use Docker because it's what this guide focuses on. For installation instructions, see https://docs.docker.com/install/.
- Kubernetes: the container orchestration platform. If you have a Docker installation that provides a Kubernetes environment, just use that. For example, Docker for Windows includes a local Kubernetes environment that you can enable in the Docker for Windows settings. Otherwise, you can use Minikube, a single-node Kubernetes cluster that runs locally in a virtual machine. For Minikube installation instructions, see https://github.com/kubernetes/minikube, and make sure to read the "Requirements" section because different operating systems require different prerequisites to get Minikube running.
- kubectl: a command-line client for Kubernetes. If you have a Docker installation that provides kubectl, such as Docker for Windows, just use that. Otherwise, install the client as described at https://kubernetes.io/docs/tasks/tools/install-kubectl/.
- Helm: a package manager for Kubernetes. For installation instructions, see https://docs.helm.sh/using_helm/#installing-helm.
Start your Kubernetes cluster. If you are using Docker for Windows, just start your Docker environment. If you are using Minikube, run the following command from the command line:
minikube start
Later, when you no longer need your cluster, you can stop it with minikube stop and delete it completely with minikube delete.
For any environment, validate that you have a healthy Kubernetes environment by running the following command:
kubectl get nodes
This command should return a Ready status for the master node.
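For reference, on a Minikube single-node cluster the output looks something like the following. The node name, age, and version shown here are environment-dependent assumptions:

```
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    1h        v1.10.0
```

If the status is NotReady, wait a minute and retry before proceeding.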
Next, to set up an Ingress, you need to deploy the NGINX Ingress controller. If you are using Minikube, you can do this by simply enabling an optional Minikube add-on. To enable the add-on, run the following command:
minikube addons enable ingress
If you are not using Minikube, follow the platform-specific instructions at https://kubernetes.github.io/ingress-nginx/deploy/. The Docker for Mac instructions are equally appropriate for Docker for Windows, where you should run the following two commands:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
This results in the deployment of two Pods. The first Pod, called default-http-backend, is responsible for routing invalid requests to a default HTTP backend. The second Pod, called nginx-ingress-controller, runs the actual NGINX Ingress controller. Make sure that both of these Pods are in the ready state before you proceed:
$ kubectl get pods --all-namespaces
You'll see output that includes the two Pods and the namespace that they were created under, similar to the following:
NAMESPACE       NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx   default-http-backend-7c5bc89cc9-z5l29      1/1     Running   0          11m
ingress-nginx   nginx-ingress-controller-cf9ff8c96-mf94d   1/1     Running   0          11m
When both Pods are ready, the front-end load balancer is configured to use the rules defined in the ol-name-ibm-open-liberty and ol-ping-ibm-open-liberty Ingress resources to route external traffic to internal Kubernetes Services.
Next, initialize the Helm client and server by running the following command:
helm init
This command sets up helm, the Helm client, as well as Tiller, the Helm server. Tiller is installed directly into your cluster and manages your chart releases (installations).
Next, add the IBM Helm chart repository to gain access to various IBM charts, including the Open Liberty chart that you will use to deploy your two microservices:
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
A final step is required only if you are using Minikube. In this case, run the following command to configure the Docker CLI to use Minikube's Docker daemon. This configuration lets you interact with Minikube's Docker daemon and build new images directly into it from your host machine:
# From Bash if you're on Linux or MacOS
eval $(minikube docker-env)
# From PowerShell or CMD if you're on Windows
minikube docker-env > tmp.cmd && call tmp.cmd && DEL tmp.cmd
When you no longer want to use Minikube’s Docker daemon, run the following command to point back to your host:
# From Bash if you're on Linux or MacOS
eval $(minikube docker-env -u)
# From PowerShell or CMD if you're on Windows
minikube docker-env -u > tmp.cmd && call tmp.cmd && DEL tmp.cmd
This is not required if Kubernetes is provided as part of your Docker installation (such as Docker for Windows), where the Kubernetes server uses the local Docker daemon.
The first step of deploying to Kubernetes is to build your microservices and containerize them with Docker.
The starting Java project, which you can find in the start directory, is a multi-module Maven project made up of the name and ping microservices, each residing in its own directory, start/name and start/ping. Each of these directories also contains a Dockerfile, which is necessary for building Docker images. If you're unfamiliar with Dockerfiles, check out the Using Docker containers to develop microservices guide, which covers Dockerfiles in depth.
If you're familiar with Maven and Docker, you might be tempted to run a Maven build first and then use the .war file that the build produces to build a Docker image. While this is by no means a wrong approach, we've set up the projects so that you can build your microservices and Docker images simultaneously as part of a single Maven build. This is done by using the dockerfile-maven plugin, which automatically picks up the Dockerfile located in the same directory as its POM file and builds a Docker image from it.
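For reference, a dockerfile-maven configuration of this kind typically looks like the following pom.xml fragment. The repository name and plugin version shown here are assumptions based on the images this guide builds; check the actual pom.xml files in the start directory for the exact configuration:

```xml
<!-- Illustrative sketch of a dockerfile-maven configuration; the project's
     actual pom.xml may use a different version or extra properties. -->
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>1.4.3</version>
  <executions>
    <execution>
      <id>default</id>
      <goals>
        <!-- the build goal runs during the package phase by default -->
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <repository>name</repository>   <!-- Docker image name -->
    <tag>${project.version}</tag>   <!-- e.g. 1.0-SNAPSHOT -->
  </configuration>
</plugin>
```

Tying the image tag to ${project.version} is what makes a new image tag appear automatically when you later bump the Maven project version.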
If you are using Docker for Windows, ensure that, on the Docker for Windows General settings page, the Expose daemon on tcp://localhost:2375 without TLS option is enabled. This setting is required by the dockerfile-maven part of the build.
Navigate to the start directory and run the following command:
mvn package
The package goal automatically invokes the dockerfile-maven:build goal, which runs during the package phase. This goal builds a Docker image from the Dockerfile located in the same directory as the POM file.
During the build, you’ll see various Docker messages describing what images are being downloaded and built. If the build is successful, run the following command to list all local Docker images:
docker images
Verify that the name:1.0-SNAPSHOT and ping:1.0-SNAPSHOT images are listed among them, for example:
REPOSITORY TAG
ping 1.0-SNAPSHOT
name 1.0-SNAPSHOT
open-liberty latest
gcr.io/kubernetes-helm/tiller v2.9.0
k8s.gcr.io/kube-proxy-amd64 v1.10.0
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0
k8s.gcr.io/kube-apiserver-amd64 v1.10.0
k8s.gcr.io/kube-scheduler-amd64 v1.10.0
quay.io/kubernetes-ingress-controller/nginx-ingress-controller 0.12.0
k8s.gcr.io/etcd-amd64 3.1.12
k8s.gcr.io/kube-addon-manager v8.6
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8
k8s.gcr.io/pause-amd64 3.1
k8s.gcr.io/kubernetes-dashboard-amd64 v1.8.1
k8s.gcr.io/kube-addon-manager v6.5
gcr.io/k8s-minikube/storage-provisioner v1.8.0
gcr.io/k8s-minikube/storage-provisioner v1.8.1
k8s.gcr.io/defaultbackend 1.4
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.4
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.4
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.4
k8s.gcr.io/etcd-amd64 3.0.17
k8s.gcr.io/pause-amd64 3.0
If you don't see the name:1.0-SNAPSHOT and ping:1.0-SNAPSHOT images, check the Maven build log for any potential errors. In addition, if you are using Minikube, make sure that your Docker CLI is configured to use Minikube's Docker daemon rather than your host's, as described in the previous section.
Now that your Docker images are built, deploy them using the Open Liberty Helm chart.
As mentioned previously, charts are sets of Kubernetes resources, such as Deployments, Services, Ingresses, and so on, all configured conveniently for some purpose. In this case, that purpose is to run microservices in Open Liberty. All resources that are installed through a chart are configurable just like any other Kubernetes resources, allowing you to tweak them to your liking. All chart resources are also deleted whenever the chart release is purged, allowing you to easily deploy a set of resources to your cluster, configure them, and then tear them all down simultaneously when they are no longer needed.
To install a chart release, use the helm install --name [RELEASE-NAME] [CHART] [FLAGS] command. First, install the Open Liberty chart for the name microservice:
helm install --name ol-name --set image.pullPolicy=IfNotPresent --set image.repository=name --set image.tag=1.0-SNAPSHOT --set ssl.enabled=false --set service.port=9080 --set service.targetPort=9080 --set ingress.enabled=true --set ingress.rewriteTarget=/api/name --set ingress.path=/name ibm-charts/ibm-open-liberty
Then, install the chart for the ping microservice:
helm install --name ol-ping --set image.pullPolicy=IfNotPresent --set image.repository=ping --set image.tag=1.0-SNAPSHOT --set ssl.enabled=false --set service.port=9080 --set service.targetPort=9080 --set ingress.enabled=true --set ingress.rewriteTarget=/api/ping --set ingress.path=/ping ibm-charts/ibm-open-liberty
Both of these chart releases create three Kubernetes resources each: a Deployment for managing Pods, a Service for defining how Pods are accessed, and an Ingress for defining how external traffic is routed to the Service. All resources are prefixed with ol-name-ibm-open-liberty and ol-ping-ibm-open-liberty, respectively.
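If you prefer not to pass a long string of --set flags, the same configuration can be kept in a values file and passed with the -f flag. The following is a sketch of a hypothetical values file for the ol-name release, mirroring the flags used in the command above (the file name is an assumption):

```yaml
# values-name.yaml -- hypothetical values file equivalent to the --set
# flags used for the ol-name release in this guide.
image:
  pullPolicy: IfNotPresent
  repository: name
  tag: 1.0-SNAPSHOT
ssl:
  enabled: false
service:
  port: 9080
  targetPort: 9080
ingress:
  enabled: true
  rewriteTarget: /api/name
  path: /name
```

You would then install with helm install --name ol-name -f values-name.yaml ibm-charts/ibm-open-liberty, which is easier to review and keep in version control than a one-line command.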
Each command is long and has a lot of flags, so let's break them down:

Flag | Description |
---|---|
--name | Name for the chart release. |
--set | Overrides a configuration value in the chart. |
Next, let's break down the parameters:

Qualifier | Argument | Description |
---|---|---|
image | pullPolicy | Image pull policy. In this case, you're using IfNotPresent so that your locally built images are used. |
image | repository | Image name. |
image | tag | Image tag. In this case, you're using 1.0-SNAPSHOT. |
ssl | enabled | Specifies whether to use SSL. In this case, you're disabling it because neither microservice is secured. As a result, you are also using the HTTP port, 9080. |
service | port | The port exposed by the container. |
service | targetPort | The port that is exposed externally by the Pod. |
ingress | enabled | Specifies whether to create an Ingress. An Ingress is a collection of rules that enable inbound requests to reach the internal Kubernetes Services. |
ingress | rewriteTarget | The endpoint where the traffic will be redirected. In this case, you're using the endpoints where your microservices are served. |
ingress | path | A path to which the Ingress maps a particular backend Service. |
If you need to use additional parameters or if you would like more information on the existing parameters, visit the official IBM Helm chart repository.
When the charts are installed, run the following command to check the status of your Pods:
kubectl get pods
You’ll see an output similar to the following if all the Pods are healthy and running:
NAME READY STATUS RESTARTS AGE
ol-name-ibm-open-liberty-84fcb9475d-mgzjk 1/1 Running 0 55m
ol-ping-ibm-open-liberty-6cb6ffd7b6-5pp7w 1/1 Running 0 4m
You can also inspect individual Pods in more detail by running the following command:
kubectl describe pods
You can also issue the kubectl get and kubectl describe commands on other Kubernetes resources, so feel free to inspect all the other resources that the charts created.
Wait for the Pods to be in the ready state, then access them through the Ingress that you created earlier. You can get the Ingress hostname and port by running the following command:
kubectl get ingress
The default Ingress hostname for Minikube is 192.168.99.100. The default Ingress hostname for Docker for Windows is localhost.
Then use curl -k or visit the following URLs to access your microservices, substituting the Ingress hostname for [ingress-ip]:

- https://[ingress-ip]/name/
- https://[ingress-ip]/ping/ol-name-ibm-open-liberty
The first URL returns a brief greeting followed by the name of the Pod that the name microservice runs in. The second URL returns pong if it received a good response from the ol-name-ibm-open-liberty Kubernetes Service. In general, visiting https://[ingress-ip]/ping/{kube-service} returns either a good or a bad response depending on whether kube-service is a valid Kubernetes Service that can be accessed.
There is a lot going on when you send a request, so let's break it down. When you issue a request to either URL, the NGINX Ingress controller sees the request arrive at the apiserver's /ingresses endpoint and reroutes it by using the set of rules defined in the appropriate Ingress resource. This set of rules states that all requests made to the https://[ingress-ip]/name/ URL are mapped to the /api/name endpoint of the Kubernetes Service running the name Pods, and similarly for the https://[ingress-ip]/ping/ URL. When a request arrives at a Kubernetes Service, the Service uses its own set of rules to map the request to a Pod, which sends back a response, which the Service passes back to the client.
To make use of the load balancing and session persistence that come with your Ingress, you need to scale your Deployments. When you scale a Deployment, you replicate its Pods, creating more running instances of your applications. Scaling is one of the primary advantages of Kubernetes: you can scale up your applications to accommodate more traffic and scale them down to free up resources when traffic decreases.
As an example, scale the name Deployment to three Pods by running the following command:
kubectl scale deployment/ol-name-ibm-open-liberty --replicas=3
Wait for your two new Pods to be in the ready state, then use curl -k or visit the https://[ingress-ip]/name/ URL. Each unique session that you open to this URL displays a different Pod name, one for each of your three running Pods. Also notice that no matter how many unique sessions you open to this URL, your Ingress controller balances your traffic evenly among the three Pods.
Opening a non-unique session results in you connecting to the same Pod each time. This behavior is called session persistence: requests from the same HTTP/HTTPS session are routed to the same backend Pod each time in order to preserve any data that might have been created during the first request. Session persistence is provided by your Ingress controller and can be configured or disabled from your Ingress resource.
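With the NGINX Ingress controller, session persistence is driven by affinity annotations on the Ingress resource. The chart-created Ingress carries cookie-based affinity annotations along the following lines, which you can also see when you inspect the full resource later in this guide:

```yaml
# Cookie-based session affinity annotations on the ol-name-ibm-open-liberty
# Ingress, as created by the Open Liberty Helm chart.
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"           # enable sticky sessions
    nginx.ingress.kubernetes.io/session-cookie-name: "route" # cookie that pins a client to a Pod
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
```

Removing the affinity annotation would disable session persistence, and each request would then be load balanced independently.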
The Open Liberty Helm chart also supports automatic scaling, which you can enable by setting the autoscaling.enabled parameter to true when installing the chart. See the official IBM Helm chart repository for more information on this parameter.
Whenever you make code changes and rebuild your Docker images, you need to update your Kubernetes Deployments with the new image versions for the new code changes to be picked up.
As an example, make a small change to the name microservice and then update the ol-name-ibm-open-liberty Deployment that's already installed in your cluster.
First, navigate to the start directory if you haven't yet, and change the greeting message in the name/src/main/java/io/openliberty/guides/name/NameResource.java file from "Hello!" to "Greetings!":
package io.openliberty.guides.name;

import javax.enterprise.context.RequestScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@RequestScoped
@Path("/")
public class NameResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String getContainerName() {
        return "Greetings! I'm container " + System.getenv("HOSTNAME");
    }
}
Next, specify a new version for your Maven project. This way, when your microservice rebuilds, a Docker image with a new tag will be built alongside it. To change the Maven project version, run the following command:
mvn versions:set -DnewVersion=1.1-SNAPSHOT
This command upgrades the parent project version from 1.0-SNAPSHOT to 1.1-SNAPSHOT and automatically propagates the change to the child projects:
...
[INFO] --- versions-maven-plugin:2.5:set (default-cli) @ kube-demo ---
[INFO] Searching for local aggregator root...
[INFO] Local aggregation root: /Users/foo/Documents/repos/guides/wip/draft-guide-kubernetes/finish
[INFO] Processing change of io.openliberty.guides:kube-demo:1.0-SNAPSHOT -> 1.1-SNAPSHOT
[INFO] Processing io.openliberty.guides:kube-demo
[INFO] Updating project io.openliberty.guides:kube-demo
[INFO] from version 1.0-SNAPSHOT to 1.1-SNAPSHOT
[INFO]
[INFO] Processing io.openliberty.guides:name
[INFO] Updating parent io.openliberty.guides:kube-demo
[INFO] from version 1.0-SNAPSHOT to 1.1-SNAPSHOT
[INFO]
[INFO] Processing io.openliberty.guides:ping
[INFO] Updating parent io.openliberty.guides:kube-demo
[INFO] from version 1.0-SNAPSHOT to 1.1-SNAPSHOT
...
Next, navigate to the name directory within the start directory and run the mvn clean package command to rebuild the name microservice. Then, verify that a new name:1.1-SNAPSHOT image was created:
$ docker images
REPOSITORY TAG
name 1.1-SNAPSHOT
ping 1.0-SNAPSHOT
name 1.0-SNAPSHOT
To deploy this new image into your cluster, you can either install a new chart release, specifying the new image version in the image.tag parameter, or upgrade the existing ol-name-ibm-open-liberty Deployment that's part of your ol-name release.
Installing a new chart release is done with the same helm install command as before. To update an existing release, you need to update the image tag in the Deployment so that it points to your new image version. To do this, run the following command:
kubectl set image deployment/ol-name-ibm-open-liberty ibm-open-liberty=name:1.1-SNAPSHOT --record
When you change the image tag, Kubernetes automatically creates new Pods that run this new image. Kubernetes also keeps some of the old Pods alive until enough of the new Pods are running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ol-name-ibm-open-liberty-84fcb9475d-hgvg2 1/1 Terminating 1 19h
ol-name-ibm-open-liberty-84fcb9475d-rctgp 1/1 Running 1 19h
ol-name-ibm-open-liberty-9db5b8b65-5ncgt 1/1 Running 0 28s
ol-name-ibm-open-liberty-9db5b8b65-88psh 0/1 Running 0 28s
ol-name-ibm-open-liberty-9db5b8b65-cxn5q 0/1 Running 0 1s
ol-ping-ibm-open-liberty-6cb6ffd7b6-fhchz 1/1 Running 1 19h
When all of the new Pods are in the ready state and all of the old Pods have terminated, use curl -k or visit the https://[ingress-ip]/name/ URL and verify that the greeting has changed.
To make the rollout of updates easier, we've created an update-deployment profile in the pom.xml file of each microservice. This profile uses the Maven exec plugin to automatically run the kubectl set image command and update the Deployments.
To have Maven update the Deployments after it rebuilds your microservices, update the Maven project version to a new version and then run a Maven build, specifying the profile name after the -P flag:
mvn clean package -P update-deployment
If new updates were made to a Deployment, you will see a brief message in the Maven build log, like so:
...
[INFO] --- exec-maven-plugin:1.6.0:exec (update-kubernetes-deployment) @ name ---
deployment.apps "ol-name-ibm-open-liberty" image updated
...
If no updates were made to a Deployment, no special messages appear. If you made code changes but the Deployment didn't update, make sure that you updated your image tag, because Kubernetes will not update a Deployment that's already using the specified image version.
If you rolled out an unstable Deployment update, you can revert the Deployment to an older revision. First, view its revision history by running the following command:
kubectl rollout history deployment/ol-name-ibm-open-liberty
You see the following revision history for the ol-name-ibm-open-liberty Deployment:
deployments "ol-name-ibm-open-liberty"
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deployment/ol-name-ibm-open-liberty ibm-open-liberty=name:1.1-SNAPSHOT --record=true
To undo the greeting message changes that you made in the current rollout and revert the Deployment to its previous revision, run the following command:
kubectl rollout undo deployment/ol-name-ibm-open-liberty
Kubernetes terminates the existing Pods that run the current image version of the Deployment and creates new ones from the previous revision.
If you need to revert to a specific revision, use the --to-revision flag followed by the revision number:
kubectl rollout undo deployment/ol-name-ibm-open-liberty --to-revision=1
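To watch a rollout or rollback as it progresses, you can also use the standard kubectl rollout status subcommand, which blocks until the Deployment finishes rolling out:

```shell
# Waits until all Pods of the Deployment reach the new (or reverted) revision.
kubectl rollout status deployment/ol-name-ibm-open-liberty
```

This is a convenient alternative to repeatedly running kubectl get pods while Pods terminate and start.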
A few tests are included for you to test the basic functionality of the microservices. If a test failure occurs, you might have introduced a bug into the code. To run the tests, wait for all Pods to be in the ready state, then run the mvn verify command. The default properties defined in the pom.xml file are:
Property | Description |
---|---|
cluster.ip | IP address of your Ingress. The default is 192.168.99.100, the default Minikube IP address. |
name.ingress.path | Ingress path of the name microservice. |
ping.ingress.path | Ingress path of the ping microservice. |
name.kube.service | Name of the Kubernetes Service wrapping the name Pods. |
To run integration tests against a cluster running at the default Minikube IP address:
mvn verify -Ddockerfile.skip=true
To run the integration tests against a cluster with an Ingress IP address of 192.168.99.101:
mvn verify -Ddockerfile.skip=true -Dcluster.ip=192.168.99.101
To run the integration tests against a cluster with an Ingress IP address of localhost:
mvn verify -Ddockerfile.skip=true -Dcluster.ip=localhost
The dockerfile.skip parameter is set to true to skip building a new Docker image.
If the tests pass, you’ll see an output similar to the following for each service respectively:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.name.NameEndpointTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.name.NameEndpointTest
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.ping.PingEndpointTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.ping.PingEndpointTest
Results :
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
When you no longer need your deployed microservices, you can delete all Kubernetes resources associated with your chart releases by running the helm del command:
helm del --purge ol-name
helm del --purge ol-ping
By deleting a chart release, Helm deletes all Kubernetes resources created by that chart.
Finally, if you are using Minikube, you can perform the following two steps to return your environment to a clean state. First, stop your Minikube cluster:
minikube stop
Second, point the Docker daemon back to your local machine:
# From Bash if you're on Linux or MacOS
eval $(minikube docker-env -u)
# From PowerShell or CMD if you're on Windows
minikube docker-env -u > tmp.cmd && call tmp.cmd && DEL tmp.cmd
While you don’t need to edit any of the Kubernetes resources in this guide, it might be helpful for you to know how editing is done for any future projects that you have in mind.
To make changes to a Kubernetes resource, you can either edit that resource's whole YAML file in a text editor or update particular parts of the resource with the kubectl command.
Resource YAML files can be directly edited from the Kubernetes dashboard or a text editor of your choice.
To use the Kubernetes dashboard to edit a resource, you first need to install and start the dashboard. If you are using Minikube, simply run the minikube dashboard command to open the dashboard. Otherwise, install the Kubernetes dashboard as described at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. Once you have accessed the dashboard in a browser, use the left navigation panel to select which resources to edit.
To open and edit a resource in a text editor, run the kubectl edit (RESOURCE/NAME | -f FILENAME) [options] command, specifying the resource type and its name. When you save your changes, they are automatically picked up and applied to the resource.
If you didn't create your resources through charts, but instead implemented and created them yourself from YAML files by using the kubectl create command, you can edit those YAML files directly and reapply them by running the kubectl apply -f [FILENAME] [options] command.
To familiarize yourself with resource editing, edit the ol-name-ibm-open-liberty Ingress and change the path field in the spec object from /name to /myname. Use any editing method you prefer:
{
  "kind": "Ingress",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "ol-name-ibm-open-liberty",
    "namespace": "default",
    "selfLink": "/apis/extensions/v1beta1/namespaces/default/ingresses/ol-name-ibm-open-liberty",
    "uid": "18353cff-5fc5-11e8-af4d-08002784f87f",
    "resourceVersion": "64202",
    "generation": 1,
    "creationTimestamp": "2018-05-25T02:41:15Z",
    "labels": {
      "app": "ol-name-ibm-open-liberty",
      "chart": "ibm-open-liberty-1.2.0",
      "heritage": "Tiller",
      "release": "ol-name"
    },
    "annotations": {
      "kubernetes.io/ingress.class": "nginx",
      "nginx.ingress.kubernetes.io/affinity": "cookie",
      "nginx.ingress.kubernetes.io/rewrite-target": "/api/name",
      "nginx.ingress.kubernetes.io/session-cookie-hash": "sha1",
      "nginx.ingress.kubernetes.io/session-cookie-name": "route"
    }
  },
  "spec": {
    "rules": [
      {
        "http": {
          "paths": [
            {
              "path": "/myname",
              "backend": {
                "serviceName": "ol-name-ibm-open-liberty",
                "servicePort": 9080
              }
            }
          ]
        }
      }
    ]
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "ip": "10.0.2.15"
        }
      ]
    }
  }
}
When you’re done editing, visit the new Ingress endpoint to verify that your changes have been applied.
Indirect editing of resources can be done by using various kubectl commands. You've already done this when you ran kubectl set image to update the image used by your Deployment to a newer version. kubectl set is convenient, but it's limited to a small set of fields that you can change. Sometimes you might need to change parts of a resource that the kubectl set command simply doesn't cover. In those cases, as an alternative to the kubectl set command, you can use the kubectl patch (-f FILENAME | TYPE NAME) -p PATCH [options] command to make updates by using strategic merging. The kubectl patch command works by supplying a piece of configuration, in the form of JSON or YAML, that matches a similar piece of configuration in the resource and overrides the fields that don't match. For example, to change the Ingress path from /name to /myname as in the previous section, run the following command:
# From Bash if you're on Linux or MacOS
kubectl patch ingress/ol-name-ibm-open-liberty -p '{"spec": {"rules": [{"http": {"paths": [{"path": "/myname", "backend": {"serviceName": "ol-name-ibm-open-liberty", "servicePort": 9080}}]}}]}}'
# From PowerShell or CMD if you're on Windows
kubectl patch ingress/ol-name-ibm-open-liberty -p "{\"spec\": {\"rules\": [{\"http\": {\"paths\": [{\"path\": \"/myname\", \"backend\": {\"serviceName\": \"ol-name-ibm-open-liberty\", \"servicePort\": 9080}}]}}]}}"
Kubernetes matches your configuration pattern against the spec object defined in the ol-name-ibm-open-liberty Ingress and overrides the path field.
You have just deployed two microservices to Kubernetes by using Helm charts. You then scaled a microservice, rolled out Deployment updates, ran integration tests against microservices that are running in a Kubernetes cluster, and learned how to edit Kubernetes resources.