Kubernetes Cluster Logging using Elasticsearch, Fluentd, and Kibana

There are many options for aggregating and streaming Kubernetes logs. In this part of the workshop, we are going to focus on an AWS-based deployment that consists of the following:

  1. Fluentd is an open source data collector providing a unified logging layer, supported by 500+ plugins connecting to many types of systems.

  2. Elasticsearch is a distributed, RESTful search and analytics engine.

  3. Kibana lets you visualize your Elasticsearch data.

Together, Fluentd, Elasticsearch, and Kibana are also known as the “EFK stack”.

In terms of architecture, Fluentd is deployed as a DaemonSet with the CloudWatch Logs plugin. The default configuration will stream system logs, Kubernetes container logs and Kubernetes API server audit logs to a CloudWatch log group. An AWS Lambda function, which is provided for you, will stream the Log Group into an Amazon Elasticsearch cluster. The logs can then be viewed using a Kibana dashboard.

Prerequisites

This chapter uses a cluster with 3 master nodes and 5 worker nodes, as described earlier in the workshop: a multi-master, multi-node gossip-based cluster.

All configuration files for this chapter are in the cluster-logging directory. Make sure you change to that directory before running any commands in this chapter.
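
For example, from wherever you cloned the workshop repository (the exact path depends on your checkout location):

cd cluster-logging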

Provision an Amazon Elasticsearch cluster

This example creates a two-instance Amazon Elasticsearch cluster named kubernetes-logs. This cluster is created in the same region as the Kubernetes cluster and CloudWatch log group. Note that this cluster has an open access policy, which must be locked down in production environments:

aws es create-elasticsearch-domain \
  --domain-name kubernetes-logs \
  --elasticsearch-version 5.5 \
  --elasticsearch-cluster-config \
  InstanceType=m4.large.elasticsearch,InstanceCount=2 \
  --ebs-options EBSEnabled=true,VolumeType=standard,VolumeSize=100 \
  --access-policies '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["*"]},"Action":["es:*"],"Resource":"*"}]}'

It takes a little while for the cluster to be created and arrive at an active state. The AWS Console should show the following status when the cluster is ready:

logging cloudwatch es cluster

Look for the word Active in the right column.

You can also check this using the AWS CLI:

$ aws es describe-elasticsearch-domain --domain-name kubernetes-logs --query 'DomainStatus.Processing'
false

If the output value is false, the domain has finished processing and is available to use.
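
If you prefer to wait from the command line, a minimal polling loop such as the following can be used (a convenience sketch, not part of the workshop scripts; with --output text the boolean renders as "False"):

# Poll every 30 seconds until the domain finishes processing
until [ "$(aws es describe-elasticsearch-domain \
    --domain-name kubernetes-logs \
    --query 'DomainStatus.Processing' \
    --output text)" = "False" ]; do
  echo "Waiting for the Elasticsearch domain to become active..."
  sleep 30
done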

CloudWatch Log Group

A CloudWatch log group combines log streams that share the same retention, monitoring, and access control settings.

Create a CloudWatch log group:

aws logs create-log-group --log-group-name kubernetes-logs

Create the log group in the same region as your cluster.
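
Because the log group controls retention for all of its streams, you may also want to set a retention period so logs are not kept forever. For example (30 days is just an illustrative value):

aws logs put-retention-policy \
  --log-group-name kubernetes-logs \
  --retention-in-days 30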

Deploy Fluentd

Log group name and log stream name

Fluentd log group name and stream name are configured in the file templates/fluentd-configmap.yaml. An excerpt from the file is shown:

output.conf: |
  <match **>
    # Plugin specific settings
    type cloudwatch_logs
    log_group_name kubernetes-logs
    log_stream_name fluentd-cloudwatch
    auto_create_stream true
    # Buffer settings
    buffer_chunk_limit 2M
    buffer_queue_limit 32
    flush_interval 10s
    max_retry_wait 30
    disable_retry_limit
    num_threads 8
  </match>

It uses the log group name kubernetes-logs and the log stream name fluentd-cloudwatch. If a different log group name was used in the previous command, or a different log stream name is needed, update this configuration file accordingly (for example, as shown below).
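
For example, if you named your log group my-log-group instead (a hypothetical name), the ConfigMap could be updated in place with sed (on macOS, use sed -i '' instead of sed -i):

sed -i 's/log_group_name kubernetes-logs/log_group_name my-log-group/' \
  templates/fluentd-configmap.yaml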

IAM configuration

You will need to create an IAM user and set AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_REGION in the templates/fluentd-ds.yaml file. In a production environment, better options are Kubernetes Secrets or EC2 IAM roles.

env:
- name: FLUENTD_CONFIG
  value: fluentd-standalone.conf
- name: AWS_REGION
  value: $REGION
- name: AWS_ACCESS_KEY
  value: $ACCESS_KEY
- name: AWS_SECRET_KEY
  value: $SECRET_KEY
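
The $REGION, $ACCESS_KEY, and $SECRET_KEY placeholders must be replaced with real values before the manifest is applied. One way to do this is envsubst from GNU gettext; this is only a sketch, and the generated file name is arbitrary:

export REGION=us-east-1                 # your cluster's region
export ACCESS_KEY=<access-key-id>       # from the IAM user created below
export SECRET_KEY=<secret-access-key>

# Substitute only these three variables and write out a new manifest
envsubst '$REGION $ACCESS_KEY $SECRET_KEY' \
  < templates/fluentd-ds.yaml > templates/fluentd-ds.generated.yaml

If you generate a separate file like this, remember to pass it to kubectl create -f later instead of the original templates/fluentd-ds.yaml.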

If creating an IAM user, the associated policy needs the following permissions (a sketch of creating such a user with the AWS CLI follows the policy document):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "logs:DescribeLogGroups"
                ],
                "Resource": [
                    "arn:aws:logs:us-east-1:<account>:log-group::log-stream:*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "logs:DescribeLogStreams"
                ],
                "Resource": [
                    "arn:aws:logs:us-east-1:<account>:log-group:kubernetes-logs:log-stream:*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": [
                    "arn:aws:logs:us-east-1:<account>:log-group:kubernetes-logs:log-stream:fluentd-cloudwatch"
                ]
            }
        ]
    }
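
The user and policy can be created with the AWS CLI. The user name fluentd-cloudwatch and the file name fluentd-policy.json below are illustrative; save the policy document above to that file first:

# Create the user, attach the inline policy, and generate an access key
aws iam create-user --user-name fluentd-cloudwatch
aws iam put-user-policy \
  --user-name fluentd-cloudwatch \
  --policy-name fluentd-cloudwatch-logs \
  --policy-document file://fluentd-policy.json
aws iam create-access-key --user-name fluentd-cloudwatch

The create-access-key output contains the AccessKeyId and SecretAccessKey values to plug into templates/fluentd-ds.yaml.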

Create Kubernetes resources

First, create the logging namespace:

$ kubectl create ns logging
namespace "logging" created

Create all of the necessary service accounts and roles:

$ kubectl create -f ./templates/fluentd-service-account.yaml
serviceaccount "fluentd" created
$ kubectl create -f ./templates/fluentd-role.yaml
clusterrole "fluentd-read" created
$ kubectl create -f ./templates/fluentd-role-binding.yaml
clusterrolebinding "fluentd-read" created

Then deploy Fluentd:

$ kubectl create -f ./templates/fluentd-configmap.yaml
configmap "fluentd-config" created
$ kubectl create -f ./templates/fluentd-svc.yaml
service "fluentd" created
$ kubectl create -f ./templates/fluentd-ds.yaml
daemonset "fluentd" created

Watch for all of the pods to change to running status:

$ kubectl get pods -w --namespace=logging
NAME            READY     STATUS    RESTARTS   AGE
fluentd-00zxr   1/1       Running   0          58s
fluentd-05hc0   1/1       Running   0          58s
fluentd-089s4   1/1       Running   0          58s
fluentd-v4vjp   1/1       Running   0          58s
fluentd-zv6bv   1/1       Running   0          58s

Remember, Fluentd is deployed as a DaemonSet, i.e. one pod per worker node, so your output will vary with the size of your cluster. In our case, a 5-node cluster is used, so 5 pods are shown in the output. You can confirm this as shown below.
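
To confirm that the DaemonSet scheduled one pod per worker node, compare its desired pod count with the number of worker nodes in your cluster:

kubectl get daemonset fluentd --namespace=logging
kubectl get nodes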

We can now log in to the AWS Console → Management Tools → CloudWatch → Logs → kubernetes-logs → fluentd-cloudwatch.

We should start to see logs arriving in the service and can use the search feature to look for specific logs, as shown:

logging cloudwatch fluentd stream
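
The same log stream can also be queried from the CLI; for example, to search for events containing the term "error" (an illustrative filter pattern):

aws logs filter-log-events \
  --log-group-name kubernetes-logs \
  --log-stream-names fluentd-cloudwatch \
  --filter-pattern error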

Subscribe a CloudWatch Log Group to Amazon Elasticsearch

CloudWatch Logs can be delivered to other services such as Amazon Elasticsearch for custom processing. This is achieved by subscribing to a real-time feed of log events. A subscription filter defines the filter pattern used to select which log events get delivered to Elasticsearch, as well as where to send the matching log events.

In this section, we’ll subscribe to the CloudWatch log events from the fluentd-cloudwatch stream in the kubernetes-logs log group. This feed will be streamed to the Elasticsearch cluster.

Original instructions for this are available in the AWS documentation. The steps below show how this can be achieved for our setup:

  1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

  2. In the navigation pane, choose Logs.

  3. Select the log group to subscribe.

  4. Choose Actions, Stream to Amazon Elasticsearch Service.

    logging cloudwatch es subscribe
  5. Select the IAM role

    logging cloudwatch es subscribe iam
  6. Click on Next.

  7. Select a Log Format:

    logging cloudwatch es subscribe log format

    The fields that are sent to the Elasticsearch cluster can be selected. Optionally, you can select a log stream and then click on Test Pattern to verify that your search filter is returning the results you expect.

  8. Click on Next

  9. Review all the information:

    logging cloudwatch es subscribe confirmation
  10. Click on Next and then Start streaming:

    logging cloudwatch es subscribe start streaming
  11. The CloudWatch page refreshes to show that the filter was successfully created (you can also verify this from the CLI, as shown after this list):

    logging cloudwatch es subscribe filter created
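
To verify the subscription from the CLI:

aws logs describe-subscription-filters --log-group-name kubernetes-logs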

Kibana dashboard

In the Amazon Elasticsearch console, select the Elasticsearch cluster.

logging cloudwatch es overview

Open the Kibana dashboard from the link:

logging cloudwatch kibana default
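
The Kibana URL can also be derived from the CLI; for Amazon Elasticsearch Service domains, Kibana is typically served at https://<endpoint>/_plugin/kibana/:

aws es describe-elasticsearch-domain \
  --domain-name kubernetes-logs \
  --query 'DomainStatus.Endpoint' \
  --output text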

Begin using Kibana's capabilities to search and visualize your Kubernetes cluster logs.

Delete resources

Delete the resources created during this chapter:

kubectl delete ns logging
aws es delete-elasticsearch-domain --domain-name kubernetes-logs
aws logs delete-log-group --log-group-name kubernetes-logs
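
If you created the illustrative fluentd-cloudwatch IAM user from the earlier sketch, remove it as well (the access key ID comes from the create-access-key output):

aws iam delete-user-policy --user-name fluentd-cloudwatch --policy-name fluentd-cloudwatch-logs
aws iam delete-access-key --user-name fluentd-cloudwatch --access-key-id <access-key-id>
aws iam delete-user --user-name fluentd-cloudwatch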