Logstash ELK Stack Install & Configuration


Part 1 - Tweak OS

  1. Run the following

    vi /etc/sysctl.conf
  2. Add the following to the end of the file and save (:wq!)

    # Elasticsearch uses a hybrid mmapfs / niofs directory by default to
    # store its indices. The default operating system limits on mmap counts
    # are likely to be too low, which may result in out of memory exceptions.
    # We can mitigate this by setting the below vm.max_map_count
    vm.max_map_count=262144
    # Make sure to increase the number of open file descriptors on the machine
    # (or for the user running elasticsearch). Setting it to 32k or even 64k is recommended.
    fs.file-max=64000
    # Redis needs the following to avoid low memory conditions
    vm.overcommit_memory = 1
  3.  Commit the change

    sysctl -p
  4. Run the following

    vi /etc/security/limits.conf
  5. Add the following and save (:wq!)

    elasticsearch soft nofile 32000
    elasticsearch hard nofile 32000
    elasticsearch - memlock unlimited
  6. Update sources.list for our installs

    wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
    vi /etc/apt/sources.list
  7. Add the following to the end of the file and save (:wq!)

    # ELK Stack
    deb http://packages.elasticsearch.org/elasticsearch/1.1/debian stable main
    deb http://packages.elasticsearch.org/logstash/1.4/debian stable main
  8. Install Oracle JDK

    sudo add-apt-repository ppa:webupd8team/java
    sudo apt-get update && sudo apt-get install oracle-java7-installer
  9. Confirm Java install

    java -version
  10. Run the following

    ulimit -l unlimited
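The kernel settings from the steps above can be sanity-checked with a small helper. This is a sketch: it only compares the running `sysctl` values against the targets set earlier and prints OK/WARN, and it assumes `sysctl` is on the PATH (it degrades to a WARN line if not).

```shell
# Compare a running sysctl value against the target set in /etc/sysctl.conf.
check_sysctl() {
  key="$1"; want="$2"
  have="$(sysctl -n "$key" 2>/dev/null || true)"
  if [ "$have" = "$want" ]; then
    echo "OK   $key = $have"
  else
    echo "WARN $key = $have (expected $want)"
  fi
}

# Values from the sysctl.conf changes in step 2
check_sysctl vm.max_map_count 262144
check_sysctl fs.file-max 64000
check_sysctl vm.overcommit_memory 1
```

Any WARN line usually means `sysctl -p` has not been run yet (or the edit did not save).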

Part 2 - Elasticsearch

  1. Install Elasticsearch

    sudo apt-get update && sudo apt-get install elasticsearch=1.1.1
  2. Run the following so Elasticsearch starts automatically on boot

    sudo update-rc.d elasticsearch defaults 95 10
  3. Configure elasticsearch

    vi /etc/init.d/elasticsearch
  4. Ensure the following are uncommented / set and save (:wq!)

    # Min/Max Memory
    ES_MIN_MEM=512m
    ES_MAX_MEM=512m
    # Heap Size (defaults to 256m min, 1g max)
    ES_HEAP_SIZE=512m
    # Maximum number of open files
    MAX_OPEN_FILES=65535
    # Maximum amount of locked memory
    MAX_LOCKED_MEMORY=unlimited
  5. Configure elasticsearch more

    cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak
    vi /etc/elasticsearch/elasticsearch.yml
  6. Ensure the following are uncommented / set and save (:wq!)

    bootstrap.mlockall: true       # don't allow memory swapping
    cluster.name: RestonES         # identifies our elasticsearch cluster; must be unique if multiple elasticsearch installs are on the same network
    node.name: "logstashsimsky"    # identifies our elasticsearch node in our cluster
    node.master: true              # indicates if node provides cluster management; ideally a dedicated server has this true and node.data false
    node.data: true                # indicates if node stores data (meaning, shards of indices can be stored here)
    path.conf: /etc/elasticsearch
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    network.host: 10.50.101.51     # your IP here
    # Search thread pool
    threadpool.search.type: fixed
    threadpool.search.size: 20
    threadpool.search.queue_size: 100
    # Index thread pool
    threadpool.index.type: fixed
    threadpool.index.size: 60
    threadpool.index.queue_size: 200
    indices.memory.index_buffer_size: 50%    # give the JVM equal parts indexing and search memory buffer
    # Set the number of shards (splits) of an index (5 by default):
    #index.number_of_shards: 1     # we have only one ES server, so only 1 shard with no replicas
    # Set the number of replicas (additional copies) of an index (1 by default):
    #index.number_of_replicas: 0   # we have only one ES server, so only 1 shard with no replicas
  7. Install Elasticsearch plugins (optional)

    sudo /usr/share/elasticsearch/bin/plugin -install lukas-vlcek/bigdesk/2.4.0
    sudo /usr/share/elasticsearch/bin/plugin -install mobz/elasticsearch-head
  8. Restart Elasticsearch and test with the following (should see "mlockall" : true for your ES instance)

    sudo service elasticsearch restart
    curl http://10.50.101.51:9200
    curl http://10.50.101.51:9200/_nodes/process?pretty
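Rather than eyeballing the second curl's output, the `mlockall` flag can be checked with a grep. The snippet below runs against a hypothetical saved copy of the `_nodes/process` response (the JSON shape is an assumption based on ES 1.x output); in practice you would pipe `curl -s http://10.50.101.51:9200/_nodes/process` straight into the grep.

```shell
# Hypothetical _nodes/process response saved from the curl above (assumed ES 1.x shape)
sample='{"nodes":{"abc123":{"process":{"refresh_interval":1000,"mlockall":true}}}}'

# Pass/fail check for memory locking; the pattern tolerates pretty-printed spacing
if echo "$sample" | grep -q '"mlockall" *: *true'; then
  echo "memory locking enabled"
else
  echo "memory locking DISABLED - check bootstrap.mlockall and MAX_LOCKED_MEMORY"
fi
```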

Part 3 - Nginx (webserver) and Kibana

  1. Install Nginx

    sudo apt-get update && sudo apt-get install nginx
  2. Install Kibana

    sudo mkdir -p /srv/www/kibana
    wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
    sudo tar xf kibana-3.1.0.tar.gz -C /srv/www/
    sudo chown -R www-data:www-data /srv/www/
  3. Configure Nginx for Kibana

    cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default.bak
    vi /etc/nginx/sites-available/default
  4. Ensure the following and save (:wq!)

    server {
        listen 80 default_server;
        root /srv/www;
        index index.html index.htm;

        # Make site accessible from http://localhost/
        server_name localhost;

        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to displaying a 404.
            try_files $uri $uri/ =404;
            # Uncomment to enable naxsi on this location
            # include /etc/nginx/naxsi.rules
        }

        location /kibana {
            alias /srv/www/kibana-3.1.0/;
            try_files $uri $uri/ =404;
        }
    }
  5. Configure Kibana to use Logstash Dashboard as Default

    cp /srv/www/kibana-3.1.0/app/dashboards/default.json /srv/www/kibana-3.1.0/app/dashboards/default.json.bak
    cp /srv/www/kibana-3.1.0/app/dashboards/logstash.json /srv/www/kibana-3.1.0/app/dashboards/logstash.json.bak
    mv /srv/www/kibana-3.1.0/app/dashboards/logstash.json /srv/www/kibana-3.1.0/app/dashboards/default.json
    mv /srv/www/kibana-3.1.0/app/dashboards/logstash.json.bak /srv/www/kibana-3.1.0/app/dashboards/logstash.json
  6. Reload Nginx so it picks up the new configuration

    sudo service nginx reload
  7. Confirm you can reach Kibana (which in turn queries Elasticsearch) by going to the following in your web browser

    http://10.50.101.51/kibana/


    • If successful, the Kibana dashboard will load in your browser
    • If unsuccessful, you will see an Nginx error page instead
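A scriptable version of this browser check just looks at the HTTP status code. This is a sketch using the example host from this guide; substitute your own IP.

```shell
# Fetch only the HTTP status code from the Kibana URL (example host from this guide).
# "|| true" keeps the script going when the host is unreachable (curl prints 000).
code=$(curl -s -o /dev/null --connect-timeout 3 --max-time 10 \
  -w '%{http_code}' http://10.50.101.51/kibana/ || true)
if [ "$code" = "200" ]; then
  echo "Kibana reachable"
else
  echo "Got HTTP $code - check the Nginx alias path and that Nginx is running"
fi
```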

Part 4 - Redis (Required only for multiple Logstash installs)

  1. Install Redis

    sudo apt-get update && sudo apt-get install redis-server
  2. Configure Redis

    cp /etc/redis/redis.conf /etc/redis/redis.conf.bak
    vi /etc/redis/redis.conf
  3. Ensure the following and save (:wq!)

    daemonize yes
    pidfile /var/run/redis/redis-server.pid
    port 6379
    bind 0.0.0.0
    loglevel notice
    logfile /var/log/redis/redis-server.log
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /var/lib/redis
    maxmemory 500mb
    maxmemory-policy allkeys-lru
  4. Restart Redis

    sudo service redis-server restart
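Once Redis is back up, a one-line smoke test confirms it answers. This sketch pings localhost; substitute the bind address used above (10.50.101.51 in this guide) when testing from another machine, and it degrades gracefully if `redis-cli` is absent.

```shell
# Ping the local Redis instance; expect "PONG" when the server is healthy
if command -v redis-cli >/dev/null 2>&1; then
  redis-cli -h 127.0.0.1 -p 6379 ping 2>/dev/null || echo "Redis not reachable"
else
  echo "redis-cli not installed"
fi
```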

Part 5 - Logstash Indexer Install

  1. Install Logstash

    sudo apt-get update && sudo apt-get install logstash
  2. Configure Logstash

    cp /etc/logstash/conf.d/logstash.conf /etc/logstash/conf.d/logstash.conf.bak
    vi /etc/logstash/conf.d/logstash.conf
  3. Ensure the following and save (:wq!)

    input {
      redis {
        host => "10.50.101.51"
        key => "logstash"
        data_type => "list"
        codec => json
      }
    }
    output {
      elasticsearch {
        cluster => "RestonES"
        node_name => "logstashsimsky"
      }
      if "alert" in [tags] {
        email {
          body => "Triggered in: %{message}"
          subject => "Logstash Alert"
          from => "no-reply.logstash@blackboardss.com"
          to => "lora.brock@blackboard.com"
          via => "sendmail"
        }
      }
    }
  4. Restart Logstash

    sudo service logstash restart
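To exercise the pipeline end to end, you can hand-craft an event and push it onto the Redis list the indexer reads. The JSON below is a minimal sketch (the field names follow the usual Logstash event layout, and the `alert` tag is what triggers the email output configured above).

```shell
# Build a minimal Logstash-style JSON event; the "alert" tag exercises the email output
ts="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
event="{\"@timestamp\":\"$ts\",\"message\":\"pipeline test\",\"tags\":[\"alert\"]}"
echo "$event"

# Push it onto the list the redis input reads (run where redis-cli is available):
#   redis-cli -h 10.50.101.51 rpush logstash "$event"
```

If everything is wired up, the event appears in Kibana within a few seconds and an alert email is sent.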
  5. Run the following so the OS does not auto-update Elasticsearch and Logstash to newer versions, which could cause compatibility issues

    sudo aptitude hold elasticsearch logstash

Part 6 - Sendmail Install

  1. Run the following

    sudo apt-get install sendmail
  2. Configure Sendmail

    cp /etc/mail/sendmail.cf /etc/mail/sendmail.cf.bak
    vi /etc/mail/sendmail.cf
  3. Ensure the following and save (:wq!)

    # "Smart" relay host (may be null)
    DSsmtp.inapps.presidium.inc
  4. Restart Sendmail

    sudo service sendmail restart