Quick tutorial to install, run and monitor Logstash in an Elastic infrastructure

Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.”

For this tutorial, I use an Ubuntu server with the default packages and a default installation of the Elastic Stack.

Installation

Download the Logstash package; the latest version is available here.

$ wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.2.deb

Run the dpkg command to install it:

$ sudo dpkg -i logstash-5.1.2.deb
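
You can check that the package installed correctly by printing the version; on a Debian-based install the Logstash binary lives under /usr/share/logstash:

$ /usr/share/logstash/bin/logstash --version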

Logstash config

Each .conf file in the /usr/share/logstash folder contains the configuration for one « flow ». Just add another config file to create a new Logstash pipeline.

The example config below listens on a network port and parses incoming syslog events.

So, go to the default Logstash folder and create a new config file:

$ cd /usr/share/logstash
$ sudo nano logstash-simple.conf

Copy / paste this config:

input {
  tcp {
    port => 5000 # syslog port, can be changed
    type => syslog
  }
  udp { # optional, required if syslog events are sent using UDP
    port => 5000
    type => syslog
  }
}

# Do not change the contents of the filter section
filter {
  if [type] == "syslog" {
    grok {
      # parse the standard syslog line format
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"] # change host as required
    user => "elastic"
    password => "changeme"
  }
}

And save the file.
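
Before wiring rsyslog to it, you can optionally ask Logstash to validate the config syntax and exit, using the --config.test_and_exit flag (-t for short):

$ sudo bin/logstash -f logstash-simple.conf --config.test_and_exit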

Edit rsyslog.conf to forward syslog events to Logstash:

$ sudo nano /etc/rsyslog.conf

And add this:

*.* @@127.0.0.1:5000
*.* @127.0.0.1:5000
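
The @@ prefix forwards events over TCP and @ over UDP, matching the tcp and udp inputs declared in the Logstash config. For the change to take effect, restart rsyslog:

$ sudo service rsyslog restart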

Start Logstash

The commands below start a new Logstash agent based on the config file:

$ cd /usr/share/logstash
$ sudo bin/logstash -f logstash-simple.conf
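
Once syslog events start flowing, you can confirm that Elasticsearch is receiving them by listing its indices (assuming the elastic/changeme credentials used in the config above):

$ curl -u elastic:changeme 'http://127.0.0.1:9200/_cat/indices?v'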

Monitoring

Now, I create a new config file to monitor Logstash itself. The pipeline reads Logstash's own log file and imports the data into Elasticsearch; the final touch is that it will all be available in Kibana!

First, create a config file to specify the log and the Elasticsearch cluster:

input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    start_position => "beginning"
    type => "logs"
  }
}

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    user => "elastic"
    password => "changeme"
    index => "logstash-test-%{+YYYY.MM.dd}"
  }
}

Copy it to the default conf.d folder of Logstash:
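
The name of the file is up to you; assuming it was saved as logstash-monitor.conf in the current directory, the deb package reads its pipelines from /etc/logstash/conf.d:

$ sudo cp logstash-monitor.conf /etc/logstash/conf.d/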

Now restart the service with the service command:

$ sudo service logstash restart
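
Before heading to Kibana, you can check that the new index is receiving documents (again with the default credentials):

$ curl -u elastic:changeme 'http://127.0.0.1:9200/logstash-test-*/_count?pretty'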

Finally, go to Kibana and create a new index pattern named « logstash-test-* ».

All logs are now available via Elasticsearch and Kibana.

Monitor with Kibana

To see beautiful graphs about Logstash, you need the X-Pack features.

Install X-Pack

Run the command below to install X-Pack for Logstash:

$ sudo ./bin/logstash-plugin install x-pack
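
You can confirm the plugin is installed by listing the installed plugins:

$ sudo ./bin/logstash-plugin list | grep x-pack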

Configure X-Pack

Modify the logstash.yml file (/etc/logstash/logstash.yml) and add these parameters at the end of the file:

xpack.monitoring.enabled: "true"
xpack.monitoring.elasticsearch.url: "http://localhost:9200"
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "changeme"

Restart

Restart Logstash and Kibana with the service command.
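
Assuming both were installed from the deb packages, the service names are logstash and kibana:

$ sudo service logstash restart
$ sudo service kibana restart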

A monitoring dashboard about your Logstash agent is now available in Kibana.
