# ELK 6.2 on CentOS 7
Create an ELK stack with a single bash command on VMware, Parallels, VirtualBox, or libvirt:

```bash
vagrant up --no-parallel --provider <virtualbox|parallels|vmware_fusion|vmware_workstation|libvirt>
```
## Software version information
Software | Version | Description |
---|---|---|
CentOS | 7.x | Guest OS. VMware & VirtualBox: bento/centos-7.4; Parallels: parallels/centos-7.3; libvirt: magneticone/centos-7 |
Java (Oracle) | 1.8.0_161 | Documentation |
Elasticsearch | 6.2.3 | Reference Guide / Definitive Guide |
Kibana | 6.2.3 | Reference Guide |
Logstash | 6.2.3 | Reference Guide |
## Cluster Details
- Default Cluster Name: es-dev-cluster
- Default Network Setup: Private Network 10.1.1.0/24
- Default CPU Cores Per Node: 1
- Default RAM Per Node: 1024MB
- ES Endpoint URL: http://localhost:9200/ (from host machine)
- Kibana Endpoint URL: http://localhost:5601/ (from host machine)
- Logstash Syslog Port: localhost:5514 (TCP and UDP, from host machine)
- Logstash Beats Port: localhost:5044 (TCP, from host machine)
### Cluster Nodes

VM Name | Node Name | Default IP | VM Port <=> Host Port | Description
---|---|---|---|---
vm1 | thor | 10.1.1.11 | 9200<=>9201, 9300<=>9301 | 1st Elasticsearch Node
vm2 | zeus | 10.1.1.12 | 9200<=>9202, 9300<=>9302 | 2nd Elasticsearch Node
vm3 | isis | 10.1.1.13 | 9200<=>9203, 9300<=>9303 | 3rd Elasticsearch Node
vm4 | baal | 10.1.1.14 | 9200<=>9204, 9300<=>9304 | 4th Elasticsearch Node (not started by default)
vm5 | shifu | 10.1.1.15 | 9200<=>9205, 9300<=>9305 | 5th Elasticsearch Node (not started by default)
vm250 | kibana | 10.1.1.250 | 9200<=>9200, 9300<=>9300, 5601<=>5601 | Kibana + ES Client Node
vm251 | logstash | 10.1.1.251 | 5514<=>5514 (TCP & UDP), 5044<=>5044 (TCP) | Logstash Node (syslog & beats)
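Once the cluster is up, you can sanity-check which nodes have joined it from the host machine (this uses the standard Elasticsearch cat API via the forwarded port):

```bash
# Lists every node that has joined the cluster, via the client node on the Kibana VM
curl 'http://localhost:9200/_cat/nodes?v'
```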
WARNING: You'll need enough RAM to run all the VMs in your cluster. Each VM is allocated 1024MB of RAM by default, so the default five-VM setup (three Elasticsearch nodes plus Kibana and Logstash) needs roughly 5GB free.
## Elasticsearch Plugins

(to be revisited; some of them are deprecated)
Plugin | Version | URL To Access |
---|---|---|
mapper-attachments | latest | N.A. |
analysis-icu | latest | N.A. |
lang-javascript | latest | N.A. |
elasticsearch-head | latest | http://localhost:9200/_plugin/head/ |
elasticsearch-kopf | 2.1.2 | http://localhost:9200/_plugin/kopf |
elasticsearch-paramedic | latest | http://localhost:9200/_plugin/paramedic/ |
elasticsearch-HQ | latest | http://localhost:9200/_plugin/HQ/ |
Sense | latest | http://localhost:5601/app/sense |
## Prerequisites & Setup
You must have the following on your host machine:

- VirtualBox, VMware (Workstation or Fusion), or Parallels
- Vagrant (>= 1.7)
- The respective Vagrant plugins for VMware or Parallels (see the example after this list)
- cURL (or another REST client to talk to Elasticsearch)
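For example, the provider plugins can be installed with `vagrant plugin install`; the plugin names below are the commonly used ones and may vary with your Vagrant version:

```bash
# Parallels provider plugin
vagrant plugin install vagrant-parallels
# VMware provider plugin (newer Vagrant versions; older ones used vagrant-vmware-fusion)
vagrant plugin install vagrant-vmware-desktop
```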
### Clone this repository
```bash
git clone https://github.com/rybskej/vagrant-elk-cluster.git
```
### Download Installation Files
This needs to be done just once; once the files are in place, the rest of the installation is done for you during provisioning.
- Download the JDK 8u161 64-bit RPM from Oracle
- Download Elasticsearch from Elastic
- Download Kibana from Elastic
- Download Logstash from Elastic
- Place all of the above files at the root of this repo.
If you need to upgrade any of the above, download the respective version, change the version number in lib/upgrade-es.sh, lib/upgrade-kibana.sh, or lib/upgrade-logstash.sh accordingly, and re-run provisioning.
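As an illustration only: assuming the upgrade script keeps the version in a plain shell variable (the name `ES_VERSION` here is hypothetical; check the actual script first), the bump could look like this:

```bash
# Hypothetical variable name; open lib/upgrade-es.sh to see how the version is really stored.
sed -i 's/^ES_VERSION=.*/ES_VERSION=6.2.4/' lib/upgrade-es.sh
# Then re-run provisioning
vagrant provision
```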
## How to run a new ELK Stack cluster
### Run the cluster
Go into the cloned directory (vagrant-elk-cluster by default) and execute this command:

```bash
vagrant up --no-parallel --provider <virtualbox|parallels|vmware_fusion|vmware_workstation|libvirt>
```

I recommend starting in no-parallel mode as it is the safest, but you can also try removing this argument.
By default, three Elasticsearch nodes are started (vm1, vm2, and vm3), along with one Kibana node (vm250) and one Logstash node (vm251). You can start a maximum of 5 Elasticsearch nodes. You could raise this limit by changing the code, but a bigger cluster is pointless for dev/QA purposes.
You can change the cluster size with the `CLUSTER_COUNT` variable (min 1 and max 5):

```bash
CLUSTER_COUNT=5 vagrant up
```

You can change the cluster name with the `CLUSTER_NAME` variable:

```bash
CLUSTER_NAME='es-qa-cluster' vagrant up
```

You can change the RAM used for each node with the `CLUSTER_RAM` variable:

```bash
CLUSTER_RAM=512 vagrant up
```

You can change the CPU count used for each node with the `CLUSTER_CPU` variable:

```bash
CLUSTER_CPU=2 vagrant up
```

You can change the cluster network IP address with the `CLUSTER_IP_PATTERN` variable:

```bash
CLUSTER_IP_PATTERN='172.16.15.%d' vagrant up
```
NOTE: Providing the `CLUSTER_NAME`, `CLUSTER_COUNT`, `CLUSTER_RAM`, `CLUSTER_CPU`, and `CLUSTER_IP_PATTERN` variables is only required when you first start the cluster. Vagrant will save/cache these values under the `.vagrant` directory, so you can run other commands without repeating yourself.
Of course you can use all these variables at the same time:

```bash
$ CLUSTER_NAME='es-qa-cluster' CLUSTER_IP_PATTERN='172.16.25.%d' CLUSTER_COUNT=5 \
  CLUSTER_RAM=512 CLUSTER_CPU=2 vagrant up
```
### Sample output
```
$ vagrant status
----------------------------------------------------------
Your ES cluster configurations
----------------------------------------------------------
Cluster Name: es-dev-cluster
Cluster size: 3
Cluster network IP: 10.1.1.0
Cluster RAM (for each node): 1024
Cluster CPU (for each node): 1
----------------------------------------------------------
----------------------------------------------------------
Current machine states:

vm1                       stopped (parallels)
vm2                       stopped (parallels)
vm3                       stopped (parallels)
kibana                    stopped (parallels)
logstash                  stopped (parallels)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`...
```
The names of the Elasticsearch VMs follow the pattern `vm[0-9]+`. The trailing number represents the index of the VM, starting at 1. The Kibana and Logstash instances are simply named kibana and logstash.
Elasticsearch, Kibana, and Logstash instances are started during provisioning of their respective VMs. Each one is launched in a new screen session as the root user inside the VM.
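You can see these sessions for yourself once a VM is up, for example:

```bash
vagrant ssh vm1
sudo screen -ls   # lists root's screen sessions; one of them hosts Elasticsearch
```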
### Stop the cluster

```bash
vagrant halt
```

This will stop the whole cluster. If you want to stop only one VM, you can use:

```bash
vagrant halt vm2
```

This will stop the `vm2` instance.
### Destroy the cluster

```bash
vagrant destroy
rm -rf .vagrant conf/*.yml conf/*.conf logs/* data/*
```

This will destroy the whole cluster and wipe its data. If you want to destroy only one VM, you can use:

```bash
vagrant destroy vm2
```
### Managing Elasticsearch instances

Each VM has its own Elasticsearch instance running in a `screen` session named `elastic`.
Once connected to the VM, you can manage this instance with the following commands:

- `sudo node-start`: starts the ES instance
- `sudo node-stop`: stops the ES instance
- `sudo node-restart`: restarts the ES instance
- `sudo node-status`: displays the ES instance's status
- `sudo node-attach`: brings you to the screen session hosting the ES instance (use `^A d` to detach)

After `sudo node-attach` you should be brought to the screen session hosting Elasticsearch and see its log.
For Kibana use `sudo kibana-<start|stop|restart|status|attach>`, and similarly for Logstash use `sudo logstash-<start|stop|restart|status|attach>`.
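For example, a minimal session to check and restart the Elasticsearch instance on vm2 using the commands above:

```bash
vagrant ssh vm2
sudo node-status    # is the ES instance running?
sudo node-restart   # restart it
sudo node-attach    # watch the log; press Ctrl-A then d to detach
```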
### Default Directories

By default the `data`, `logs`, and `conf` directories live outside of the VMs, on the host. This way you can destroy and rebuild VMs as much as you like without losing your data, and you can also upgrade Elasticsearch without losing data.
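Since everything lives at the repo root on the host (the same directories the destroy cleanup above removes), you can inspect it directly:

```bash
# Cluster state is kept on the host, so it survives VM rebuilds
ls conf/ data/ logs/
```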
## Access the Cluster
The host machine's ports 9200 and 9300 are forwarded to the respective ports of the ES client node running on the Kibana VM. To access the ES REST API from the host machine, use http://localhost:9200/, which routes API access via the client node running on the Kibana node.

To access the ES REST endpoint on a data node, use 9200 + <node number> on the host machine: for vm1 that is 9201 (http://localhost:9201/), 9202 for vm2, and so forth. But you will rarely need to access these endpoints from the host machine.

To access Kibana from the host machine, use http://localhost:5601/. The Logstash node is set up to receive syslog messages on port 5514 (TCP & UDP) and beats messages on port 5044 (TCP); the host machine forwards anything arriving on its own ports 5514 and 5044 to the Logstash node.
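For example, a few quick checks from the host machine, using the ports described above:

```bash
curl http://localhost:9200/                         # ES via the client node on the Kibana VM
curl http://localhost:9201/                         # vm1's data node directly
curl 'http://localhost:9200/_cluster/health?pretty' # overall cluster health
```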
## Configure your cluster
If you need or want to change the default working configuration of your cluster, you can do so by adding or editing the `elasticsearch-*.yml` files in the `conf` directory. Each node's configuration is shared with its VM through this `conf` directory.

By default, these configuration files are auto-generated by Vagrant when the cluster is run for the first time, using the default values listed at the top of this page.
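For example, to add a setting to one node's generated file (the file name `elasticsearch-vm1.yml` is an assumption here; list `conf/` to see the names Vagrant actually generated):

```bash
# Hypothetical file name; check conf/ for the real generated names.
cat >> conf/elasticsearch-vm1.yml <<'EOF'
# Example override: cap the fielddata cache (a standard Elasticsearch setting)
indices.fielddata.cache.size: 20%
EOF
```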
Similarly, for Logstash you need to change the `logstash-*.conf` file, again in the `conf` directory.
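A Logstash pipeline file follows the usual input/filter/output structure. Below is a minimal sketch of a syslog listener like this setup's, not the generated file itself; inspect `conf/` for the real one, and note that pointing the output at the client node (10.1.1.250) is an assumption based on the node table above:

```
input {
  tcp { port => 5514 type => "syslog" }
  udp { port => 5514 type => "syslog" }
}
output {
  elasticsearch { hosts => ["10.1.1.250:9200"] }
}
```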
## Working with your cluster
Here are a few sample calls to get you started.
Send some sample syslog messages to Logstash to be indexed in Elasticsearch:

```bash
# From the host machine
# I'm assuming the host is some sort of UNIX box
# If Windows, then you are on your own :)
echo "<133>$0[$$]: Test syslog message from Netcat" | nc -4 localhost 5514
echo "<133>$0[$$]: Second test syslog message from Netcat" | nc -4 localhost 5514
```
Next, go to the Kibana dashboard at http://localhost:5601/. Kibana will auto-discover the logstash index and ask you some basic questions; just select the defaults, then go to the Discover tab. You should see the two sample messages.
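If you prefer to verify from the command line instead, you can search the Logstash indices directly (the `logstash-*` index pattern is the Logstash default):

```bash
curl 'http://localhost:9200/logstash-*/_search?q=Netcat&pretty'
```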
## TODO

See issues.