ELK Environment Setup - Kibana Configuration

Source: Internet    Editor: 程序博客网    Date: 2024/06/03 18:10

1 Kibana Configuration

1.1 Loading Sample Data

1.        There are three sample data sets: shakespeare.json, accounts.zip, and logs.jsonl.gz, with the following schemas:

The Shakespeare data set is organized in the following schema:

{
    "line_id": INT,
    "play_name": "String",
    "speech_number": INT,
    "line_number": "String",
    "speaker": "String",
    "text_entry": "String"
}

The accounts data set is organized in the following schema:

{
    "account_number": INT,
    "balance": INT,
    "firstname": "String",
    "lastname": "String",
    "age": INT,
    "gender": "M or F",
    "address": "String",
    "employer": "String",
    "email": "String",
    "city": "String",
    "state": "String"
}

The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:

{
    "memory": INT,
    "geo.coordinates": "geo_point",
    "@timestamp": "date"
}

2.        Mapping fields

A mapping tells Elasticsearch the format of the log data and how each field should be processed. For example, the speaker field is a string that should not be analyzed: even though its value could be split into smaller tokens, it is treated as a single whole.

curl -XPUT http://192.168.2.11:9200/shakespeare -d '

{
 "mappings" : {
  "_default_" : {
   "properties" : {
    "speaker" : { "type" : "string", "index" : "not_analyzed" },
    "play_name" : { "type" : "string", "index" : "not_analyzed" },
    "line_id" : { "type" : "integer" },
    "speech_number" : { "type" : "integer" }
   }
  }
 }
}

';

{"acknowledged":true}
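As a quick verification sketch (not part of the original tutorial), the stored mapping can be read back with the `_mapping` API, assuming the same host and port as above:

```shell
# Read back the stored mapping to confirm the PUT above took effect.
# The URL assumes the same Elasticsearch host/port used throughout.
MAPPING_URL='http://192.168.2.11:9200/shakespeare/_mapping?pretty'
curl -XGET "${MAPPING_URL}" --connect-timeout 5 || echo "request failed (Elasticsearch not reachable)"
```

The response should echo the speaker and play_name fields back with "index": "not_analyzed".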

Log data:

curl -XPUT http://192.168.2.11:9200/logstash-2015.05.18 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';

curl -XPUT http://192.168.2.11:9200/logstash-2015.05.19 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';

curl -XPUT http://192.168.2.11:9200/logstash-2015.05.20 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}
';
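Since the three daily indices share an identical mapping, the same requests can be collapsed into one loop. This is just a sketch of the commands above, with a connect timeout added so a missing server fails fast:

```shell
# Shared geo_point mapping for the daily logstash-2015.05.* indices.
MAPPING='{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": { "type": "geo_point" }
          }
        }
      }
    }
  }
}'

# Create each daily index with the same mapping.
for day in 18 19 20; do
  index="logstash-2015.05.${day}"
  curl -XPUT "http://192.168.2.11:9200/${index}" --connect-timeout 5 \
    -d "${MAPPING}" || echo "request for ${index} failed"
done
```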

The accounts data set does not need a custom mapping.

3.        Loading the data with the Elasticsearch bulk API

The data files can be placed on either the Logstash server or the Elasticsearch server.

curl -XPOST '192.168.2.11:9200/bank/account/_bulk?pretty' --data-binary @accounts.json

curl -XPOST '192.168.2.11:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json

curl -XPOST '192.168.2.11:9200/_bulk?pretty' --data-binary @logs.jsonl

The load status can be checked with:

curl '192.168.2.11:9200/_cat/indices?v'

The output looks roughly like this:

[cendish@es1 logs]$ curl '192.168.2.11:9200/_cat/indices?v'
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   bank                  5   1       1000            0    475.7kb        475.7kb
yellow open   .kibana               1   1          2            0     11.6kb         11.6kb
yellow open   shakespeare           5   1     111396            0     18.4mb         18.4mb
yellow open   logstash-2016.10.09   5   1        100            0    241.8kb        241.8kb
yellow open   logstash-2015.05.20   5   1          0            0       795b           795b
yellow open   logstash-2015.05.18   5   1          0            0       795b           795b
yellow open   logstash-2015.05.19   5   1          0            0       795b           795b
[cendish@es1 logs]$

1.2 Defining Index Patterns

Each set of data loaded to Elasticsearch has an index pattern. In the previous section, the Shakespeare data set has an index named shakespeare, and the accounts data set has an index named bank. An index pattern is a string with optional wildcards that can match multiple indices. For example, in the common logging use case, a typical index name contains the date in MM-DD-YYYY format, and an index pattern for May would look something like logstash-2015.05*.

Open http://192.168.2.31:5601

Settings->Indices->Add New->Create [make sure 'Index contains time-based events' is unchecked]



1.3 Discovering Data

Discover->Choose a pattern->Input search expression


You can construct searches by using the field names and the values you're interested in. With numeric fields you can use comparison operators such as greater than (>), less than (<), or equals (=). You can link elements with the logical operators AND, OR, and NOT, all in uppercase.

For example: account_number:<100 AND balance:>47500
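The same expression can also be sent straight to Elasticsearch through the search API's q parameter. As a sketch (--data-urlencode handles the spaces and comparison characters):

```shell
# Run the Kibana search expression directly against the bank index.
QUERY='account_number:<100 AND balance:>47500'
curl -G '192.168.2.11:9200/bank/_search?pretty' --connect-timeout 5 \
  --data-urlencode "q=${QUERY}" || echo "request failed"
```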


To show only specific columns, add them from the field list.


1.4 数据可视化(Data Visualization)

Visualize->Create a New Visualization->Pie Chart->From a New Search->Choose a pattern [ban*]


Visualizations depend on Elasticsearch aggregations of two different types: bucket aggregations and metric aggregations. A bucket aggregation sorts your data according to criteria you specify. For example, in our accounts data set, we can establish a range of account balances, then display what proportions of the total fall into which range of balances.
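Behind such a pie chart is a bucket aggregation. As a sketch, the same grouping could be requested from Elasticsearch directly with a range aggregation over balance (the bucket boundaries below are invented):

```shell
# Range (bucket) aggregation over the balance field; "size": 0 suppresses
# the raw hits so only the bucket counts come back.
AGG_BODY='{
  "size": 0,
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [
          { "to": 10000 },
          { "from": 10000, "to": 30000 },
          { "from": 30000 }
        ]
      }
    }
  }
}'
curl -XPOST '192.168.2.11:9200/bank/_search?pretty' --connect-timeout 5 \
  -d "${AGG_BODY}" || echo "request failed"
```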




After setting the ranges, click the "Apply changes" button to generate the pie chart.

