flume + elasticSearch + kibana: analyzing nginx logs


Analyzing nginx logs with EFK (Elasticsearch + Flume + Kibana)

nginx log format

2.81.22.87 - - [2016-01-08T15:33:55+08:00] "GET /do_not_delete/noc.gif HTTP/1.1" 200 3166 "-" "ChinaCache" "-" "0.000" "-" "-"
2.81.22.83 - - [2016-01-08T15:33:55+08:00] "GET /do_not_delete/noc.gif HTTP/1.1" 200 3166 "-" "ChinaCache" "-" "0.000" "-" "-"
2.68.137.194 - - [2016-01-08T15:33:55+08:00] "POST /api/user/addXingeDeviceInfo HTTP/1.1" 200 273 "-" "111111" "123.233.132.208" "0.054" "192.168.2.28:31086" "0.054"
2.202.80.20 - - [2016-01-08T15:33:56+08:00] "POST /api/user/getAmount HTTP/1.1" 200 453 "-" "111111" "124.94.99.36" "1.210" "192.168.2.28:31086" "1.210"
2.185.231.55 - - [2016-01-08T15:38:45+08:00] "GET /authCode?pageId=userregister HTTP/1.1" 301 184 "http://www.xxx.com/authCode?pageId=userregister" "Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 6.1)" "-" "0.000" "-" "

nginx log configuration (time in ISO 8601 format)

log_format  main  '$remote_addr - $remote_user [$time_iso8601] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" "$request_time" "$upstream_addr" "$upstream_response_time"';
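To make the field order concrete, here is a minimal Python sketch (my own illustration, not part of the original setup) that splits one of the sample lines above into the variables of this log_format. The variable names are taken straight from the format definition; using shlex is just a convenient way to keep the double-quoted fields together.

import shlex

# One of the sample access-log lines shown above.
line = ('2.68.137.194 - - [2016-01-08T15:33:55+08:00] '
        '"POST /api/user/addXingeDeviceInfo HTTP/1.1" 200 273 '
        '"-" "111111" "123.233.132.208" "0.054" "192.168.2.28:31086" "0.054"')

# Variable order follows the log_format definition above; "dash" is the
# literal "-" that sits between $remote_addr and $remote_user.
names = ['remote_addr', 'dash', 'remote_user', 'time_iso8601', 'request',
         'status', 'body_bytes_sent', 'http_referer', 'http_user_agent',
         'http_x_forwarded_for', 'request_time', 'upstream_addr',
         'upstream_response_time']

# shlex keeps each double-quoted field ("$request", "$http_user_agent", ...)
# together as a single token and strips the quotes.
tokens = shlex.split(line)
fields = dict(zip(names, tokens))
fields['time_iso8601'] = fields['time_iso8601'].strip('[]')
del fields['dash']

for name, value in fields.items():
    print(f'{name:24s} {value}')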

nginxFlume.conf

agent.sources = api
agent.channels = mch fch
agent.sinks = elasticSearch

agent.sources.api.type = exec
agent.sources.api.command = tail -F /data/nginx/logs/api.longdai.com.log
agent.sources.api.restart = true
agent.sources.api.logStdErr = true
agent.sources.api.batchSize = 100
agent.sources.api.channels = fch

agent.sources.api.interceptors = cdn sr api i1
agent.sources.api.interceptors.api.type = static
agent.sources.api.interceptors.api.key = app
agent.sources.api.interceptors.api.value = api
agent.sources.api.interceptors.cdn.type = regex_filter
agent.sources.api.interceptors.cdn.regex = .*\\s+\\"ChinaCache\\"\\s+.*
agent.sources.api.interceptors.cdn.excludeEvents = true
agent.sources.api.interceptors.sr.type = search_replace
agent.sources.api.interceptors.sr.searchPattern = \\{
agent.sources.api.interceptors.sr.replaceString = %7b
agent.sources.api.interceptors.sr.charset = UTF-8
agent.sources.api.interceptors.i1.type = regex_extractor
agent.sources.api.interceptors.i1.regex = ([^\\s]*)\\s-\\s([^\\s]*)\\s\\[(.*)\\]\\s+\\"([\\S]*)\\s+([\\S]*)\\s+[\\S]*\\"\\s+(\\d+)\\s+(\\d+)\\s+\\"([^\\"]*)\\"\\s+\\"([^\\"]*)\\"\\s+\\"([^\\"]*)\\"\\s+\\"([^\\"]*)\\"\\s+\\"([^\\"]*)\\"\\s+\\"([^\\"]*)\\"
agent.sources.api.interceptors.i1.serializers = s1 s2 s3 s4 s5 s6 s7 s8 s9 s10 s11 s12 s13
agent.sources.api.interceptors.i1.serializers.s1.name = remote_addr
agent.sources.api.interceptors.i1.serializers.s2.name = remote_user
agent.sources.api.interceptors.i1.serializers.s3.name = datetime
# The time here is already in ISO 8601 format, which Kibana recognizes as a date
# directly, so the following three lines are not needed:
#agent.sources.api.interceptors.i1.serializers.s3.type = org.apache.flume.interceptor.RegexExtractorInterceptorMillisSerializer
#agent.sources.api.interceptors.i1.serializers.s3.name = timestamp
#agent.sources.api.interceptors.i1.serializers.s3.pattern = yyyy-MM-dd'T'HH:mm:ssZ
agent.sources.api.interceptors.i1.serializers.s4.name = http_method
agent.sources.api.interceptors.i1.serializers.s5.name = uri
agent.sources.api.interceptors.i1.serializers.s6.name = status
agent.sources.api.interceptors.i1.serializers.s7.name = body_length
agent.sources.api.interceptors.i1.serializers.s8.name = http_referer
agent.sources.api.interceptors.i1.serializers.s9.name = user_agent
agent.sources.api.interceptors.i1.serializers.s10.name = http_x_forwarded_for
agent.sources.api.interceptors.i1.serializers.s11.name = request_time
agent.sources.api.interceptors.i1.serializers.s12.name = upstream_addr
agent.sources.api.interceptors.i1.serializers.s13.name = upstream_response_time

agent.sinks.elasticSearch.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
agent.sinks.elasticSearch.channel = fch
agent.sinks.elasticSearch.batchSize = 2000
agent.sinks.elasticSearch.hostNames = 192.168.2.25:9300
agent.sinks.elasticSearch.indexName = nginx
agent.sinks.elasticSearch.indexType = nginx
agent.sinks.elasticSearch.clusterName = myES
agent.sinks.elasticSearch.client = transport
agent.sinks.elasticSearch.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchLogStashEventSerializer

# Each sink's type must be defined
agent.sinks.loggerSink.type = logger
# Specify the channel the sink should use
agent.sinks.loggerSink.channel = mch

# Each channel's type is defined.
agent.channels.mch.type = memory
agent.channels.mch.capacity = 2000
agent.channels.mch.transactionCapacity = 2000
agent.channels.mch.byteCapacityBufferPercentage = 20
agent.channels.mch.keep-alive = 30

agent.channels.fch.type = file
agent.channels.fch.checkpointDir = /data/flume/data/checkpointDir
agent.channels.fch.dataDirs = /data/flume/data/dataDirs
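Since every downstream field depends on the i1 regex_extractor actually matching, it is worth checking that regex offline before starting the agent. The following is a minimal Python sketch of such a check (my own addition, not from the original article): the pattern is the i1 regex with the .properties double-backslash escaping collapsed, and the field names are the s1..s13 serializer names from the config above.

import re

# The i1 regex from nginxFlume.conf; in the .properties file every backslash is
# doubled, so "\\s" there is the regex \s here, and \\" becomes a plain quote.
NGINX_RE = re.compile(
    r'([^\s]*)\s-\s([^\s]*)\s\[(.*)\]\s+"([\S]*)\s+([\S]*)\s+[\S]*"\s+(\d+)\s+(\d+)'
    r'\s+"([^"]*)"\s+"([^"]*)"\s+"([^"]*)"\s+"([^"]*)"\s+"([^"]*)"\s+"([^"]*)"'
)

# Flume header names in the same order as serializers s1..s13.
FIELDS = ['remote_addr', 'remote_user', 'datetime', 'http_method', 'uri',
          'status', 'body_length', 'http_referer', 'user_agent',
          'http_x_forwarded_for', 'request_time', 'upstream_addr',
          'upstream_response_time']

# One of the sample lines from the top of the article.
line = ('2.202.80.20 - - [2016-01-08T15:33:56+08:00] '
        '"POST /api/user/getAmount HTTP/1.1" 200 453 '
        '"-" "111111" "124.94.99.36" "1.210" "192.168.2.28:31086" "1.210"')

match = NGINX_RE.match(line)
if match is None:
    print('no match: the event would be indexed without these headers')
else:
    for name, value in zip(FIELDS, match.groups()):
        print(f'{name:24s} {value}')

If the check passes, the agent can then be started with something along the lines of flume-ng agent --name agent --conf conf --conf-file nginxFlume.conf (the properties above use "agent" as the agent name).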

nginx log documents as stored in Elasticsearch (screenshot)
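To confirm that events are actually reaching the cluster, you can query Elasticsearch over its HTTP API. The sketch below is only an assumption-laden example, not part of the original setup: it assumes the HTTP API is on the default port 9200 of the node the sink points at (the sink itself uses the transport port 9300), and that the sink writes into date-stamped indices beginning with the configured indexName, hence the nginx-* wildcard.

import json
import urllib.request

# Assumed endpoint: HTTP API on port 9200 of the node from hostNames above;
# nginx-* covers the date-stamped indices the sink is expected to create.
ES_URL = 'http://192.168.2.25:9200/nginx-*/_search'

# Fetch a few documents just to confirm events are arriving; the exact field
# layout (e.g. headers nested under @fields) is up to the
# ElasticSearchLogStashEventSerializer configured above.
query = {'size': 3}

req = urllib.request.Request(
    ES_URL,
    data=json.dumps(query).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

for hit in result['hits']['hits']:
    print(hit['_index'], hit['_id'])
    print(json.dumps(hit['_source'], indent=2, ensure_ascii=False))

Once documents show up here, the same nginx-* index pattern can be added in Kibana to build the views below.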

Log analysis dashboards

(Kibana dashboard screenshots)
