Setting up an ELK stack on CentOS 6


Overview

ELK is short for elasticsearch + logstash + kibana. The combination resembles the MVC pattern: logstash is the controller layer, receiving the data first and filtering and formatting it; elasticsearch is the model layer, storing the data and building the search index; kibana is the view layer, presenting the data.
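In pipeline terms, the data flows like this:

logs -> logstash (filter/format) -> elasticsearch (store/index) -> kibana (visualize)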
So along the way you'll touch on the following O(∩_∩)O~
elasticsearch
logstash
Of course, if you already know Ruby, even better :)
OK, enough talk, let's get to work :D

Environment

OS: CentOS 6.x
Software versions:
elasticsearch-5.6.0
kibana-5.6.0
logstash-5.6.0
Download them locally:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.0.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.6.0.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.6.0-x86_64.rpm
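Optionally, verify the downloads before installing. Elastic signs its rpm packages; the key URL below is the one Elastic publishes, and rpm -K checks each package against the imported key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
rpm -K elasticsearch-5.6.0.rpm logstash-5.6.0.rpm kibana-5.6.0-x86_64.rpm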

Install the software

Install them directly with rpm:

rpm -ivh elasticsearch-5.6.0.rpm
rpm -ivh logstash-5.6.0.rpm
rpm -ivh kibana-5.6.0-x86_64.rpm

Configure elasticsearch

The main configuration file is /etc/elasticsearch/elasticsearch.yml. Add the following:

cluster.name: erp            # give the cluster a name; we'll see it again shortly
node.name: node0             # give the node a name too
path.data: /opt/data         # where the data is stored
path.logs: /opt/data/logs    # where the logs go, handy for troubleshooting
network.host: 172.16.93.237  # listen address
http.port: 9200              # port

If the data directories don't exist yet, create them and hand ownership to the elasticsearch user:

mkdir -p /opt/data/logs
chown -R elasticsearch:elasticsearch /opt/data

Start the elasticsearch service:

service elasticsearch start
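Since CentOS 6 uses SysV init, you can also register the service to start on boot:

chkconfig --add elasticsearch
chkconfig elasticsearch on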

Check its status:

service elasticsearch status

Then curl http://ip:9200; output like the following means it started up correctly:

# curl http://172.16.93.237:9200
{
  "name" : "node0",
  "cluster_name" : "erp",
  "cluster_uuid" : "Vy0zvSCRQ-y_nAo9YHHRMQ",
  "version" : {
    "number" : "5.6.0",
    "build_hash" : "781a835",
    "build_date" : "2017-09-07T03:09:58.087Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
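For a one-line summary of cluster health, the _cat API that ships with elasticsearch 5.x works too:

curl 'http://172.16.93.237:9200/_cat/health?v'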

During startup you may run into some errors that prevent elasticsearch from starting.
Error 1

max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]

elasticsearch has requirements on the system's open-file and process limits, so raise them:

vi /etc/security/limits.conf and add or modify these lines:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

vi /etc/security/limits.d/90-nproc.conf and add or modify:

* soft nproc 2048
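These limits only apply to new sessions, so restart elasticsearch afterwards. To double-check that they took effect for the elasticsearch user (a quick probe; the rpm gives the account a nologin shell, hence the -s /bin/bash):

su - elasticsearch -s /bin/bash -c 'ulimit -Sn; ulimit -Hn'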

Error 2

system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

The message says it plainly: disable system_call_filter. This one is common on CentOS 6, whose old 2.6.32 kernel lacks the seccomp support the filter needs.

vi /etc/elasticsearch/elasticsearch.yml and add:

bootstrap.system_call_filter: false

Error 3

memory locking request for elasticsearch process but memory is not locked

Try disabling bootstrap.memory_lock:

vi /etc/elasticsearch/elasticsearch.yml and add:

bootstrap.memory_lock: false

Error 4

kernel: Out of memory: Kill process   # symptom: elasticsearch shuts itself down after running for a while

Memory ran out. If your system is short on RAM, you can cap elasticsearch's memory footprint (the default is 2g):

vim /etc/elasticsearch/jvm.options and find:

-Xms2g
-Xmx2g

Lower these two values, e.g.:

-Xms1g
-Xmx1g
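After restarting, you can confirm the heap elasticsearch actually picked up via the nodes info API; look for heap_init_in_bytes and heap_max_in_bytes in the output:

curl 'http://172.16.93.237:9200/_nodes/jvm?pretty'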

Configure logstash

By default there is no init.d script; generate one with the following command:

/usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv

A logstash config file has three sections: input, filter, and output, which do exactly what their names say. See the documentation for the full configuration syntax.
Config files go in the /etc/logstash/conf.d/ directory.
The config has to match your log format. For example, to analyze nginx logs with ELK, given an nginx log format like this:

log_format main "$http_x_forwarded_for | $time_local | $request | $status | $body_bytes_sent | "
                "$request_body | $content_length | $http_referer | $http_user_agent | nuid  | "
                "$http_cookie | $remote_addr | $hostname | $upstream_addr | $upstream_response_time | $request_time";

you can create a config file named nginx-access.conf under /etc/logstash/conf.d/ with the following content:

input {
    file {
        path => "/data/log/nginx/*"                    # path to the nginx logs
        start_position => "beginning"
        sincedb_path => "/data/log/el/nginx_progress"
    }
}
filter {
    ruby {
        init => "@kname = ['http_x_forwarded_for','time_local','request','status','body_bytes_sent','request_body','content_length','http_referer','http_user_agent','nuid','http_cookie','remote_addr','hostname','upstream_addr','upstream_response_time','request_time']"
        code => "
            new_event = LogStash::Event.new(Hash[@kname.zip(event.get('message').split('|'))])
            new_event.remove('@timestamp')
            event.append(new_event)
        "
    }
    if [request] {
        ruby {
            init => "@kname = ['method','uri','verb']"
            code => "
                new_event = LogStash::Event.new(Hash[@kname.zip(event.get('request').split(' '))])
                new_event.remove('@timestamp')
                event.append(new_event)
            "
        }
        if [uri] {
            ruby {
                init => "@kname = ['url_path','url_args']"
                code => "
                    new_event = LogStash::Event.new(Hash[@kname.zip(event.get('uri').split('?'))])
                    new_event.remove('@timestamp')
                    event.append(new_event)
                "
            }
            kv {
                prefix => "url_"
                source => "url_args"
                field_split => "& "
                remove_field => [ "url_args","uri","request" ]
            }
        }
    }
    mutate {
        convert => [
            "body_bytes_sent" , "integer",
            "content_length", "integer",
            "upstream_response_time", "float",
            "request_time", "float"
        ]
    }
    date {
        # HH is the 24-hour clock, matching nginx's $time_local (hh would fail after noon)
        match => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]
        locale => "en"
    }
}
output {
    elasticsearch {
        action => "index"
        hosts  => "172.16.93.237:9200"
        index  => "erp"
    }
}
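Before starting the service, it's worth validating the file; logstash 5.x has a test flag for exactly this (--path.settings points it at the service's settings directory):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/nginx-access.conf --config.test_and_exit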

logstash configuration is quite flexible; for example, we can also log in JSON format instead. An example:
The nginx config:

log_format  main  '{"data":"$time_local","ip":"$remote_addr","status":"$status","http_referer":"$http_referer","request_length":"$request_length","request":"$request","request_time":"$request_time","Authorization_User":"$http_authorization_user","http_user_agent":"$http_user_agent"}';

The logstash config:

input {
    file {
        path => "/data/log/nginx/*"
        start_position => "beginning"
        sincedb_path => "/data/log/el/nginx_progress"
        codec => "json"
    }
}
output {
    elasticsearch {
        action => "index"
        hosts  => "172.16.93.237:9200"
        index  => "erp"
    }
}
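To watch what a pipeline does to a single line without touching nginx at all, you can also run logstash with an inline config; this throwaway example (the JSON line is made up) prints the parsed event to the terminal:

echo '{"data":"09/Sep/2017:12:00:00 +0800","ip":"1.2.3.4","status":"200"}' | \
  /usr/share/logstash/bin/logstash -e 'input { stdin { codec => json } } output { stdout { codec => rubydebug } }'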

Start logstash:

service logstash start

Configure kibana

Mainly just the port and the addresses:

vim /etc/kibana/kibana.yml and set or add these parameters:

server.port: 5601
server.host: "172.16.93.237"
server.name: "172.16.93.237"
elasticsearch.url: "http://172.16.93.237:9200"

Start kibana:

service kibana start
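Before opening a browser, you can confirm kibana is answering; /api/status is a built-in endpoint in kibana 5.x:

curl http://172.16.93.237:5601/api/status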

If all went well, point a browser at http://172.16.93.237:5601 to reach kibana.
(PS: kibana ships with no username/password. If kibana is exposed to the public internet and that worries you, put an authentication layer in front of it with nginx or similar.)
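A minimal sketch of such an nginx front end using HTTP basic auth (server_name and the htpasswd path are placeholders; generate the htpasswd file with the htpasswd tool):

server {
    listen 80;
    server_name kibana.example.com;                     # placeholder

    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/kibana.htpasswd;    # placeholder path

    location / {
        proxy_pass http://172.16.93.237:5601;
        proxy_set_header Host $host;
    }
}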
On first entry you'll be asked to pick an es index pattern; enter erp* here to match the erp index configured in the logstash output, then click Create. After that, enjoy.

Conclusion

ELK solves log analysis and visualization. Its heart is logstash, and that is also where you'll spend the most time.
This walkthrough was built around a single log server; to collect logs from multiple servers, use a filebeat + logstash + elasticsearch + kibana architecture, sketched below.
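A minimal sketch of that multi-server setup, in filebeat 5.x syntax (the paths, hosts, and port 5044 are illustrative). Each application server runs filebeat with something like:

# filebeat.yml on each application server
filebeat.prospectors:
- input_type: log
  paths:
    - /data/log/nginx/*
output.logstash:
  hosts: ["172.16.93.237:5044"]

And on the logstash host, the file input is replaced with a beats input:

input {
  beats {
    port => 5044
  }
}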

References

https://kibana.logstash.es/content/logstash/examples/nginx-access.html
https://my.oschina.net/shawnplaying/blog/670217
https://my.oschina.net/itblog/blog/547250
https://es.xiaoleilu.com/010_Intro/05_What_is_it.html
https://www.gitbook.com/book/chenryn/elk-stack-guide-cn/details
