Centralized logging system: rsyslog, Logstash, Elasticsearch & Kibana

So I have been a little bit busy with this voluntary work I do in my free time. I am associated with the Wikimedia Foundation as volunteer IT staff. People there are very open and helpful.

A few months back, I was browsing through their ongoing projects, hoping someone would need help with something I could contribute to. As I was new to their community, things were only half clear to me. I needed a project that would help me understand their infrastructure while still being simple enough for me. Of course, I did not want to dive into the most complicated project and then sit idle, looking at other people’s scribbles on IRC.

One day I stumbled upon an interesting project. Its objective was to build a centralized logging system with good search capability. Although they use Nagios for alerting, if someone needed to search through logs, they had to log into that particular server and run a little grep or egrep against the log files. I thought this would be a perfect project for me: I would get to know the environment, and it’s relatively simple to set up something like this.
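That grep-by-hand workflow looks something like this (a sketch; the log file path and its contents here are made up for illustration, standing in for a real /var/log/syslog):

```shell
# Stand-in for /var/log/syslog on the server being inspected
cat > /tmp/sample-syslog.log <<'EOF'
Oct 12 06:25:01 web01 CRON[1234]: (root) CMD (run-parts /etc/cron.hourly)
Oct 12 06:25:17 web01 sshd[5678]: Failed password for invalid user admin
Oct 12 06:26:02 web01 kernel: eth0: link up
EOF

# A little grep for a fixed string...
grep 'sshd' /tmp/sample-syslog.log

# ...or egrep with an extended regex to match several daemons at once
egrep 'sshd|CRON' /tmp/sample-syslog.log
```

Workable for one box, but clearly painful across a whole fleet, which is what motivates centralizing the logs.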

I was added to the project, and I was its only member. Sweet!!

So I first started experimenting with various open source products. A few did just fine; a few did not scale at all. At last I found a perfect combination: Logstash, Elasticsearch, and Kibana.

Logstash is very useful and versatile. It’s written in JRuby (Ruby on the JVM). You can specify inputs and outputs as well as filters. It supports various input types; one of them is Linux syslog, which means you do not have to install a logging agent on every server and increase its overall load. Your default rsyslog client will do just fine. Then comes the filtering part: after taking input, you can filter out logs within Logstash itself. It’s awesome, but it didn’t serve any purpose for me as I wanted to index every log. Next is the output part. Logstash can print logs on standard output (why would anyone want that?), but as with input, it supports multiple output types too. One of them is Elasticsearch.
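Pointing the stock rsyslog client at Logstash is a one-line change on each server. A sketch (the hostname and UDP port here are assumptions; they must match whatever syslog input Logstash is configured to listen on):

```
# /etc/rsyslog.d/50-logstash.conf on each client
# '@' forwards over UDP; '@@' would use TCP instead
*.* @logstash.example.org:5544
```

After dropping this in and restarting rsyslog, every facility and severity (`*.*`) is shipped to the central Logstash instance with no extra agent installed.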

Elasticsearch is a Java-based log indexer. You can search through Elasticsearch indices using Lucene search syntax for more complicated queries, but simple wildcard searches work too.
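For instance, querying the Elasticsearch HTTP API from the command line might look like this (a sketch, shown without output since it needs a live server; it assumes Elasticsearch listening on localhost:9200 and Logstash’s default daily logstash-YYYY.MM.dd indices):

```shell
# Simple wildcard search across everything:
curl 'http://localhost:9200/_search?q=message:fail*'

# Lucene query syntax for something more complicated
# (-G with --data-urlencode handles the spaces and quotes):
curl -G 'http://localhost:9200/logstash-*/_search' \
     --data-urlencode 'q=host:web01 AND message:"Failed password"'
```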

Next comes Kibana. It provides the web frontend for Elasticsearch. It’s written in JavaScript and PHP, and requires only one line to be edited for it to work out of the box.

As of now, I have configured all of them on one relatively large lab VM. There were several hitches in the beginning, but apart from that it all went pretty smoothly.

So here’s what is happening: every server’s stock rsyslog client ships its logs to Logstash, Logstash indexes them into Elasticsearch, and Kibana sits on top as the search frontend.

I also had to make a little init script for these services (Logstash and Elasticsearch). It’s written for Ubuntu 10.04.3 LTS, but should work on CentOS/Red Hat as well with a little modification.

Here’s the script:

#! /bin/sh

export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

. /lib/lsb/init-functions

name="logstash"
logstash_bin="/usr/bin/java -jar /logstash/logstash.jar"
# Adjust these paths to taste; they match the /logstash layout used below
logstash_conf="/logstash/logstash.conf"
logstash_log="/var/log/logstash.log"
ls_pid_file="/var/run/logstash.pid"
es_pid_file="/var/run/elasticsearch.pid"

NICE_LEVEL="-n 19"

# This is needed by ElasticSearch
export ES_HOME="/logstash/elasticsearch"

# sets max. open files limit to 65000
# otherwise, elasticsearch throws java.io.IOException: Too many open files
ulimit -n 65000

start () {
        command_es="/usr/bin/nice ${NICE_LEVEL} /logstash/elasticsearch/bin/elasticsearch"
        #command_ls="/usr/bin/nice ${NICE_LEVEL} ${logstash_bin} agent -f ${logstash_conf} -- web --backend elasticsearch:///?local --log ${logstash_log}"
        command_ls="/usr/bin/nice ${NICE_LEVEL} ${logstash_bin} agent -f ${logstash_conf} --log ${logstash_log}"

        log_daemon_msg "Starting" "elasticsearch"
        if start-stop-daemon --start -d "/logstash/elasticsearch" --quiet --oknodo -b --exec ${command_es}; then
                log_end_msg 0
                # I had to do this as the -p option with elasticsearch gives the wrong PID
                # The same with the --pidfile option of start-stop-daemon
                sleep 1 # takes a little bit of time before getting caught by below
                # don't know why I chose to grep for "sigar"; maybe it looks like cigar
                ps -elf | grep "[e]lasticsearch" | grep sigar | awk '{ print $4 }' >${es_pid_file}
        else
                log_end_msg 1
        fi

        log_daemon_msg "Starting" "logstash"
        if start-stop-daemon --start -d "/logstash/" --quiet --oknodo --pidfile "$ls_pid_file" -b -m --exec ${command_ls}; then
                log_end_msg 0
        else
                log_end_msg 1
        fi
}

stop () {
        start-stop-daemon --stop --quiet --oknodo --pidfile "$ls_pid_file"
        start-stop-daemon --stop --quiet --oknodo --pidfile "$es_pid_file"
}

status () {
        status_of_proc -p "$ls_pid_file" "" "$name"
        status_of_proc -p "$es_pid_file" "" "elasticsearch"
}

case "$1" in
        start)
                start
                ;;
        stop)
                stop
                ;;
        restart|reload)
                stop
                start
                ;;
        status)
                status && exit 0 || exit $?
                ;;
        *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
                ;;
esac

exit 0
The system is in the testing phase. We need to check how it scales to 2000+ servers; maybe we will have to think about load balancing too. But as of now, it really does a great job in terms of memory consumption, disk space, processing power, etc.

Once the whole system is ready to go live on production servers, I will definitely publish more detailed technical write-ups. Crossing my fingers!!



6 Responses to Centralized logging system: rsyslog, Logstash, Elasticsearch & Kibana

  1. Unknown says:

    Hi, curious what you found as far as scaling goes?

  2. Well, it's far better. After monitoring it for 5-6 months, I can say it provides fast search results and does not consume that much disk space (you may add a cron job to delete indices that are 2 or 3 months old, i.e. old logs that you no longer need).

    But it is a little memory hungry, especially Elasticsearch (you can control that with the -Xms/-Xmx Java heap options). Then again, free memory is nothing but a waste 🙂

    Overall, I am happy with the combination. However, Kibana needs to be worked on to add more features like saved searches etc.
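    The cron-job idea mentioned above could be sketched like this (assumptions: GNU date, Elasticsearch on localhost:9200, Logstash's default logstash-YYYY.MM.dd daily indices, and a 90-day retention):

```shell
# Build the name of the daily index from ~3 months ago (GNU date)
old_index="logstash-$(date -d '90 days ago' +%Y.%m.%d)"
echo "$old_index"

# A cron entry would then DELETE that index each night, e.g.
# (note the escaped % -- percent signs are special in crontab):
#   0 3 * * * curl -s -XDELETE "http://localhost:9200/logstash-$(date -d '90 days ago' +\%Y.\%m.\%d)"
```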

  3. Great article, Soumyadip. Would you be willing to share your logstash.conf file? I'm interested in seeing what input/filter/output settings worked best for your rsyslog setup.

  4. Thanks, Trevor.

    My logstash.conf is simple and stupid. I use a filter to discard messages from dhclient (in the lab, most of the hosts get their IPs from a DHCP server); the rest are indexed 🙂

    input {
      udp {
        type => "syslog"
        port => "5544"
      }
    }

    filter {
      grep {
        type => "syslog"
        match => [ "@message", "dhclient:" ]
        negate => true
      }
    }

    output {
      elasticsearch {
        embedded => false
        host => ""
      }
    }

  5. Can you tell why you need Logstash, when you can collect rsyslogd logs on a central server and put them all into Elasticsearch directly?

    Thank you

  6. someslowly says:

    Same question. I just set up rsyslog with omelasticsearch, sending directly to Elasticsearch, and all my other rsyslog instances point to that main one, on RHEL. Done, with no Logstash getting in the way. I could point each of them at Elasticsearch, but I like the hub-and-spoke model better.
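    For reference, the omelasticsearch setup described here might look roughly like this on the central rsyslog host (a sketch; the server name is a placeholder, and the module ships separately, e.g. as an rsyslog-elasticsearch package):

```
# /etc/rsyslog.conf additions on the hub
module(load="omelasticsearch")
*.* action(type="omelasticsearch" server="es.example.org" serverport="9200")
```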
