Installing Logstash on RHEL and CentOS 6

Logstash is an awesome open-source log management tool. These are my notes on installing it on a CentOS 6.5 64-bit machine.

Update 30/03/2014: updated the post for Logstash version 1.4.

My Logstash installation uses the following components:

  • Logstash
  • Elasticsearch
  • Redis
  • Nginx
  • Kibana


The host used in the examples:

  • fqdn: dev.kanbier.lan (should be resolvable!)
  • ip:

1. Install the required software

Whenever I can, I prefer installing my software from RPM files, and even more I prefer pulling those from a repository. Luckily, the only component I couldn't get an RPM for was Kibana.

Please mind the version numbers here; don't just grab the latest and greatest. The Elasticsearch version must match the one Logstash uses internally. At the time of writing the latest Logstash version is 1.4.x, which needs Elasticsearch 1.0.x.

Create the following files:

$ vi /etc/yum.repos.d/logstash.repo
name=logstash repository for 1.4.x packages
$ vi /etc/yum.repos.d/elasticsearch.repo
name=Elasticsearch repository for 1.0.x packages
$ vi /etc/yum.repos.d/nginx.repo
name=nginx repo
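The repo files above lost most of their body in formatting; only the name= lines survived. For reference, a complete yum repo definition generally has the shape sketched below. The section name, baseurl, and gpgkey here are placeholders, not the actual values from the original post — substitute the repository and GPG key URLs published by each project for the version you need:

```ini
# Structure sketch for e.g. /etc/yum.repos.d/logstash.repo —
# <repo-url> and <gpg-key-url> are placeholders, not real URLs
[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=<repo-url>
gpgcheck=1
gpgkey=<gpg-key-url>
enabled=1
```

The elasticsearch.repo and nginx.repo files follow the same pattern, each with its own section name, baseurl, and gpgkey.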

Enable the EPEL repository as well:

$ rpm -Uvh

Install the software:

$ yum -y install elasticsearch redis nginx logstash

2. Enable Kibana

Download the Kibana software:

$ wget
$ tar -xvzf kibana-3.0.0.tar.gz
$ mv kibana-3.0.0 /usr/share/kibana3

We need to tell Kibana where to find elasticsearch. Open the configuration file and modify the elasticsearch parameter:

$ vi /usr/share/kibana3/config.js

Search for the “elasticsearch:” parameter and modify it to suit your environment:

elasticsearch: "http://dev.kanbier.lan:9200",

You can also modify the default_route parameter so the logstash dashboard opens by default instead of the Kibana welcome page:

default_route     : '/dashboard/file/logstash.json',

Now we need to make the Kibana website available through the nginx webserver. Elasticsearch has a sample file you can use to enable Kibana:

$ wget
$ mv nginx.conf /etc/nginx/conf.d/

Open this configuration file and change the “server_name” parameter to your needs:

$ vi /etc/nginx/conf.d/nginx.conf
server_name           dev.kanbier.lan;
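In case the sample file is no longer available (a commenter below ran into this), a minimal server block that serves the static Kibana files might look like the sketch below. This is an assumption about the sample's essentials, not its full contents — the real sample config also adds proxy rules for certain elasticsearch endpoints, which this sketch omits; paths and names follow this post's layout:

```nginx
# Minimal sketch of /etc/nginx/conf.d/nginx.conf for Kibana 3
server {
  listen       80;
  server_name  dev.kanbier.lan;

  # Kibana 3 is just static files; point the root at the install dir
  root   /usr/share/kibana3;
  index  index.html;
}
```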

3. Configure redis

Configure redis to listen on the right interface:

$ vi /etc/redis.conf
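The change to make here is the bind directive, so redis listens on the host's primary interface rather than only on localhost. The address below is a placeholder for your own server's IP (the original post's IP is not shown):

```conf
# /etc/redis.conf — listen on the host's primary interface so other
# machines can reach the broker; 192.168.1.10 is a placeholder address
bind 192.168.1.10
```

If you leave the bind directive unset, redis listens on all interfaces, as the comment discussion below also notes.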

4. Configure Logstash 

After following the tutorial in the Logstash documentation I ended up with a logstash-complex.conf file I can use. It's not really that complex; it:

  • reads files in /var/log
  • opens up port 5544 to enable receiving of remote syslog messages directly
  • tells logstash to use our own elasticsearch installation as opposed to the embedded one

Create the following file:

$ vi /etc/logstash/conf.d/logstash-complex.conf
input {
  file {
    type => "syslog"

    # Wildcards work here 🙂
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    sincedb_path => "/opt/logstash/sincedb-access"
  }
  redis {
    host => ""
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
  syslog {
    type => "syslog"
    port => "5544"
  }
}

filter {
  grok {
    type => "syslog"
    match => [ "message", "%{SYSLOGBASE2}" ]
    add_tag => [ "syslog", "grokked" ]
  }
}

output {
  elasticsearch { host => "dev.kanbier.lan" }
}

5. Starting and testing

Start and enable the services:

$ service redis start; chkconfig redis on
$ service elasticsearch start; chkconfig --add elasticsearch; chkconfig elasticsearch on
$ service logstash start; chkconfig logstash on
$ service nginx start; chkconfig nginx on

Everything should start up nicely; if not, you can find log files for the services in /var/log/<SERVICE_NAME>.
Now point your browser at your host and observe the Kibana interface, which should already contain some data from the /var/log/* files:

(Screenshot, 13 March 2014: the Kibana dashboard showing data from the local log files.)


Using this setup you can receive syslog messages from remote servers as well. These messages are received by Logstash directly, bypassing the redis broker.

For rsyslog you can add these lines to /etc/rsyslog.conf:

# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
$WorkDirectory /var/lib/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g., port optional
*.* @@
# ### end of the forwarding rule ###

Mind that if you have a firewall running on your server, you need to open the appropriate ports. If you've followed this post, that means:

  • port 80 (for the web interface)
  • port 5544 (to receive remote syslog messages)
  • port 6379 (for the redis broker)
  • port 9200 (so the web interface can access elasticsearch)
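On CentOS 6 with the stock iptables firewall, one way to open these ports is to add ACCEPT rules to /etc/sysconfig/iptables and restart the iptables service. This is a sketch to illustrate the idea, not a copy of the post's actual ruleset — adjust it to whatever rules you already have:

```conf
# Fragment for /etc/sysconfig/iptables — open the ports used in this post.
# Place these lines before any catch-all REJECT rule in the INPUT chain.
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80   -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5544 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 6379 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9200 -j ACCEPT
```

Only open 6379 and 9200 to hosts that actually need them; redis and elasticsearch have no authentication in this setup.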

That's it for now. I'm still discovering Logstash, but I can see its potential for sure!

27 thoughts on "Installing Logstash on RHEL and CentOS 6"

  1. Dear sir,

    Thank you for your great article.
    But I have a problem with syslog. I just copied your logstash-complex.conf file, but port 5544 will not open.
    How can I resolve this problem?


  2. I’ve noticed your syslog events are nicely parsed, but when I use the “file” output plugin with the “syslog” type, all I get is everything inside the log event slapped into the message field, like priority and pid. Any idea why this is?

    Collecting from a CentOS 5 client, using Logstash 1.3.3. The remote Logstash client conf looks practically identical to yours, save for the host stuff of course.

    • It’s correct that all information is still present in the message field, I believe that is default behaviour even if the output is parsed.

      I do see the same behaviour in my dev environment, logs being received from remote hosts is being parsed correctly but the local logs are not. I’ll get back to you once I figure out what is going wrong.

      • It’s correct that all information is still present in the message field, I believe that is default behaviour even if the output is parsed.

        If it’s all consolidated into the message field, then why does the Kibana output in your screenshot show them as separate fields? Or is Kibana just showing the fields it sees, which are after all the parsing? That’s kind of what I’m trying to get, as all I’m getting is host, message, type and path fields showing on Kibana from the file plugin with syslog type.

        Anyways, sounds good. I’d definitely like to hear cuz I’ve been scratching my head on it and haven’t found any solutions so far.

        • I was under the impression that this was default behaviour, but manually adding a grok filter for syslog seems to work for me:

          filter {
            grok {
              type => "syslog"
              match => [ "message", "%{SYSLOGBASE2}" ]
              add_tag => [ "syslog", "grokked" ]
            }
          }

          I’ve updated my post with this information as well!

          • Excellent! I was figuring I’d have to resort to a grok filter, but I was hoping that there would already be preset filter for syslog type for file plugin. Evidently that is not so, or rather it’s just a very basic filter covering timestamp. Time to read up some more! Thanks a mil!

  3. Hi, I would like to share the updated and stable version of rsyslog, in case you are planning to use rsyslog as your shipper rather than installing logstash.

    Create rsyslog repo file.

    # vi /etc/yum.repos.d/rsyslog.repo

    name=Adiscon CentOS-$releasever – local packages for $basearch

    # yum update rsyslog

  4. After following each step-by-step (customizing to my server name/address), I get the following with accessing the dashboard. Any thoughts?

  5. Pingback: logstash-1.4.0-1 on CentOS 5.10

  6. I want to use the same for a server whose public domain and FQDN is {let's call this 'myvm'}. I want to set up log monitoring for that machine. I tried installing redis-server on 'myvm'. Now what are all the changes that I need to make? I think there is some problem with my redis-server on 'myvm'. What is the IP or hostname to which I should bind it? Sorry to trouble you, but the blog is awesome.

    • Thank you! I'm not totally sure what you are attempting to do. Are you trying to install the redis-server on a server other than the one running logstash-server?

      In any case binding should be fairly simple:

      For the redis-server you can optionally set the interface on which it should listen for incoming requests. In my post I set the parameter to the primary IP address of the host that is running redis-server. If you don't set this option in the redis configuration it will listen on all interfaces on that server.

      Now let's say you want to run the redis-server on "myvm" with ip "", then we need to tell logstash where it can find the redis server. You do this in the logstash.conf files, in the "input" section:

      input {
        redis {
          host => ""
          type => "redis-input"
          data_type => "list"
          key => "logstash"
        }
      }

      The relation between the components basically is:

      • redis can be used by logstash for input; tell logstash where to find it in logstash.conf
      • elasticsearch can be used by logstash for output; tell logstash where to find it in logstash.conf
      • elasticsearch is needed by kibana; tell Kibana where to find it in config.js

      You can run all components on different servers if you like, you just need to tell the components where to find each other using the configuration files. If you do this please do keep in mind that a firewall might need to allow the traffic etc..

      Let me know if this helps you out! Cheers!

  7. Excellent!! This is exactly what I was looking for. Thank you so much for your effort and for sharing it. Everything has worked perfectly for me so far. I am stuck on getting the Windows logs to this Logstash. Could you guide me on this please? I want all the logs from the Event Viewer to be sent and populated in Logstash.

  8. hi,

    I had followed all the instructions, but I'm not able to see any messages in redis.

    please help

  9. Hi Dennis,

    Thanks for your post. This post is great. I have completed the instructions above and I can access the Kibana3 – Logstash search web interface. All of the necessary ports are open. But I don't know how to add system logs from any device. And which kinds of devices are compatible with ELK?

    Thanks & Regards.

  10. The configuration file saved at “” is no longer available, could you please post the contents of this file in the comments section? Thanks!

  11. Pingback: Logstash setup - IT headaches

  12. # service nginx start; chkconfig nginx on
    Starting nginx: nginx: [emerg] open() "/etc/nginx/conf.d/nginx.conf" failed (13: Permission denied) in /etc/nginx/nginx.conf:31

    I'm getting this error when trying to start nginx. Any ideas? I've tried looking around; some sites said to make sure SELinux isn't interfering and is set to permissive.

  13. Pingback: Implementando a stack ELK (ElasticSearch, Logstash e Kibana) no CentOS - Ricardo Martins

  14. Dear Dennis,

    Thnx for the great blog entry!
    I finally found some time to play with LogStash myself on Fedora 21.
    I got ElasticSearch and Kibana working in no time, but LogStash takes me quite some effort.
    My configuration (input / filter / output) works fine.
    The problem I have is that Logstash works fine when I start it by hand ( /opt/logstash/bin/logstash -f /etc/logstash/conf.d/httpd_logs.conf ), but when I start it using "service logstash start" it does not work.
    -> Did you ever experience something like this?

    I see that the init script first does a chroot and sets ulimit, etc. So maybe that could be related?
    nice -n ${LS_NICE} chroot --userspec $LS_USER:$LS_GROUP / sh -c "
    cd $LS_HOME
    ulimit -n ${LS_OPEN_FILES}
    exec \"$program\" $args
    " > "${LS_LOG_DIR}/$name.stdout" 2> "${LS_LOG_DIR}/$name.err" &

    kind regards,

    Egon Kastelijn

  15. When I run the command "service nginx restart" the following error appears:

    nginx: [emerg] open() "/etc/nginx/conf.d/nginx.conf" failed (13: Permission denied) in /etc/nginx/nginx.conf:31
    nginx: configuration file /etc/nginx/nginx.conf test failed

    What can this be? I need help, please.

  16. Thanks for the complete guide.

    But when I tried to import the remote server logs it did not work; could you explain how to configure remote server logs?
