Quick tutorial to Install, Run and Monitor Logstash in an Elastic infrastructure

Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.”

For this tutorial, I use an Ubuntu server with the default packages and a default installation of the Elastic Stack.

Installation

Download the Logstash package; the latest version is available here.

$ wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.2.deb

Run the dpkg command to install:

$ sudo dpkg -i logstash-5.1.2.deb
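
To check that the installation went fine, you can print the version:

$ /usr/share/logstash/bin/logstash --version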

Logstash config

Each .conf file in the /usr/share/logstash folder contains the configuration for one « flow ». Just add another config file to create a new Logstash pipeline.

The example configuration below listens on a network port and analyses incoming syslog events.

So, go to the default Logstash folder and create a new config file:

$ cd /usr/share/logstash
$ sudo nano logstash-simple.conf

Copy/paste this configuration:

input {
  tcp {
    port => 5000 # syslog port, can be changed
    type => syslog
  }
  udp { # optional, required if syslog events are sent over UDP
    port => 5000
    type => syslog
  }
}

# Do not change the contents of the filter section
filter {
  if [type] == "syslog" {
    grok {
      # The original pattern was cut off; this is the standard syslog grok pattern from the Logstash documentation
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      # "MMM  d" (two spaces) matches single-digit days in syslog timestamps
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"] # change host as required
    user => "elastic"
    password => "changeme"
  }
}

And save the file.
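
Before starting Logstash, you can optionally validate the syntax of the file (the --config.test_and_exit flag asks Logstash to check the configuration and exit):

$ cd /usr/share/logstash
$ sudo bin/logstash -f logstash-simple.conf --config.test_and_exit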

Edit rsyslog.conf to forward the events to Logstash:

$ sudo nano /etc/rsyslog.conf

And add these lines (« @@ » forwards over TCP, a single « @ » over UDP):

*.* @@127.0.0.1:5000
*.* @127.0.0.1:5000
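
Then restart rsyslog so that the new forwarding rules take effect:

$ sudo service rsyslog restart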

Start Logstash

The commands below start a new Logstash instance based on the config file:

$ cd /usr/share/logstash
$ sudo bin/logstash -f logstash-simple.conf
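
To verify that events arrive, you can write a test message to syslog and query Elasticsearch (a quick check, assuming the default logstash-* index and the credentials used above):

$ logger "logstash test message"
$ curl -u elastic:changeme "127.0.0.1:9200/logstash-*/_search?q=test&pretty"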

Monitoring

Now I create a new config file to monitor Logstash itself. It reads the Logstash log file and imports the data into Elasticsearch; the final touch is that it will be available in Kibana!

First, create a config file to specify the log and the Elasticsearch cluster:

input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    start_position => "beginning"
    type => "logs"
  }
}

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    user => "elastic"
    password => "changeme"
    index => "logstash-test-%{+YYYY.MM.dd}"
  }
}

Copy it to the default conf.d folder of Logstash (assuming you saved it as logstash-monitor.conf):
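
$ sudo cp logstash-monitor.conf /etc/logstash/conf.d/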

Now restart the service with the service command:

$ sudo service logstash restart
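
You can check that the new index is being created (a quick look via the _cat API, using the default credentials):

$ curl -u elastic:changeme "127.0.0.1:9200/_cat/indices/logstash-test-*?v"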

Finally, go to Kibana and create a new index pattern named « logstash-test-* ».

All logs are now available via Elasticsearch and Kibana.

Monitor with Kibana

To see beautiful graphs about Logstash, you need the X-Pack features.

Install X-Pack

Run the command below to install X-Pack for Logstash:

$ sudo ./bin/logstash-plugin install x-pack

Configure X-Pack

Modify /etc/logstash/logstash.yml and add these parameters at the end of the file:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: "http://localhost:9200"
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "changeme"

Restart

Restart Logstash and Kibana with the service command:
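
$ sudo service logstash restart
$ sudo service kibana restart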

A monitoring dashboard about your Logstash agent is now available in Kibana.

How to Backup and Restore with Elasticsearch

The snapshot and restore module allows you to create snapshots of individual indices or of an entire cluster in a remote repository such as a shared file system, S3, or HDFS.

The full detailed documentation is here.

Requirements

Check that the Elasticsearch config file contains a « path.repo » setting:

$ nano /etc/elasticsearch/elasticsearch.yml
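
For example, with the backup location used in the snapshot command below, the setting looks like this:

path.repo: ["/usr/share/elasticsearch/backup"]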

SNAPSHOT

First, register a snapshot repository with this command:

PUT /_snapshot/backup
{
  "type": "fs",
  "settings": {
      "compress": true,
      "location": "/usr/share/elasticsearch/backup"
  }
}

This command shows the registered repositories:

GET /_snapshot/

BACKUP

Initiate the backup with this command, which creates a snapshot in the repository:

PUT /_snapshot/backup/snapshot_1
{
  "indices": "recipes",
  "ignore_unavailable": true,
  "include_global_state": false
}

To show the status of the snapshot:

GET /_snapshot/backup/snapshot_1

The snapshot files are created under the repository location (/usr/share/elasticsearch/backup).

RESTORE

On the target cluster, register the same snapshot repository first.

Then execute this type of command to restore:

POST /_snapshot/backup/snapshot_1/_restore
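
By default this restores all the indices of the snapshot. To restore only a specific index (for example the recipes index saved above), you can send a body with the request. Note that an index cannot be restored while it already exists open in the cluster; close or delete it first.

POST /_snapshot/backup/snapshot_1/_restore
{
  "indices": "recipes",
  "ignore_unavailable": true,
  "include_global_state": false
}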

DELETE

How to delete a snapshot:

DELETE /_snapshot/backup/snapshot_1

How to quickly install Elasticsearch on Ubuntu server

This guide contains a quick reference on how to install Elasticsearch on Ubuntu Server 16.04 via the deb package.

Download the package via the wget command (click here for the latest version):

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.deb

Then install the deb package:

$ sudo dpkg -i elasticsearch-5.2.2.deb

The command to set Elasticsearch to start at boot:

$ sudo update-rc.d elasticsearch defaults 95 10

And the useful start/stop commands:

$ sudo -i service elasticsearch start
$ sudo -i service elasticsearch stop

Now open your browser to check that Elasticsearch is running (default port is 9200):
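
You can also check from the command line; Elasticsearch answers with a small JSON document containing the node name, cluster name, and version:

$ curl http://127.0.0.1:9200/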

The Debian package places config files, logs, and the data directory in the appropriate locations for a Debian-based system:

Type    | Description                                                                                          | Default Location                 | Setting
home    | Elasticsearch home directory or $ES_HOME                                                             | /usr/share/elasticsearch         |
bin     | Binary scripts, including elasticsearch to start a node and elasticsearch-plugin to install plugins  | /usr/share/elasticsearch/bin     |
conf    | Configuration files, including elasticsearch.yml                                                     | /etc/elasticsearch               | path.conf
conf    | Environment variables, including heap size and file descriptors                                      | /etc/default/elasticsearch       |
data    | The location of the data files of each index / shard allocated on the node. Can hold multiple locations. | /var/lib/elasticsearch       | path.data
logs    | Log files location                                                                                   | /var/log/elasticsearch           | path.logs
plugins | Plugin files location. Each plugin is contained in a subdirectory.                                   | /usr/share/elasticsearch/plugins |
repo    | Shared file system repository locations. Can hold multiple locations. A file system repository can be placed into any subdirectory of any directory specified here. | Not configured | path.repo
script  | Location of script files                                                                             | /etc/elasticsearch/scripts       | path.scripts

How to install free SSL (Let’s encrypt) Certificate on Debian server

Let’s Encrypt is a free, automated, and open Certificate Authority. The goal is to implement the certificate on a web server. I used a standard Debian 7 server with Webmin installed. Webmin is a web interface to manage your server. You can find more info here.

First, connect to your Webmin interface (usually port 10000):

Click on the left menu and go to « Webmin Configuration »:

Click on « SSL Encryption »:

Select the last tab « Let’s encrypt » and enter your full hostname, like « server1.example.be » (without quotes). Select the options below and click on « Request Certificate »:

With the menu, expand the « Servers » item and click on « Apache Webserver »:

Select your default virtual server with encryption support (by default it is on port 443):

Select the « SSL Options » icon and enter the paths below:

Click « Save » and then « Apply Changes »; the Apache service will restart, and you now have your free certificate on your web server.

How to deploy your Git repository to an FTP server

In this article, I will explain how to deploy your code, hosted on GitHub, to an FTP server.

Here were my requirements for deploying my PHP code to a standard FTP server:

  • Compatible with Bitbucket
  • Fully automated
  • FTP deployment
  • Free (with a possibility for upgrading to a paid version)
  • Web hosted

Solution found:

After a bit of surfing, I found these two web services:

https://ftploy.com/

https://www.deployhq.com/

But after testing, I chose DeployHQ.

This is the list of compatible source code platforms:

You just need to create an account. If you have only one project hosted, the registration is free.

The dashboard view of your project, with the last deployment and the number of servers:

The definition of your FTP server (you can also use SSH, Amazon S3, …):

To publish to multiple servers at once, you can set up a server group:

When you commit and push to your Git repository, your files can be transferred automatically:

The deployment window shows the start and end commit. You also have the possibility to « Preview » your deployment.