Syncing Multiple Database Tables at Once with Logstash


Syncing several tables at once is a common development requirement. I worked out the approach a while ago but never wrote it up in detail.
A reader's question the other day showed that this topic still isn't fully understood, so I'm summarizing it here:
first, to resolve that reader's question quickly;
second, to reinforce my own memory so I can get up to speed quickly in future product development.

1. Sync Principle

The underlying principle is covered in detail in my Elasticsearch column, so I won't repeat it here; see my column:
深入详解Elasticsearch
The setup below was verified successfully against ES 5.4.0 and Logstash 5.4.1.
It has also been confirmed to work on the 2.x versions.
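In short: the logstash-input-jdbc plugin polls each table on a cron-like schedule and persists the last seen value of a tracking column, so every run pulls only rows newer than :sql_last_value. Distilled to just the incremental-tracking options, it looks like the sketch below (the table name and metadata path are placeholders; the full, real config follows in the next section):

jdbc {
  statement => "select * from some_table where id > :sql_last_value"
  use_column_value => "true"      # track a column value rather than the last run timestamp
  tracking_column => "id"         # the column whose highest seen value is remembered
  record_last_run => "true"       # persist that value across restarts
  last_run_metadata_path => "/opt/logstash/last_run/some_table"
  schedule => "* * * * *"         # cron syntax: poll every minute
}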

2. Core Configuration File

input {
  stdin { }
  jdbc {
    # the ES type these rows will be indexed as
    type => "cxx_article_info"
    # MySQL JDBC connection string; the trailing cxxwb is the MySQL database name
    jdbc_connection_string => "jdbc:mysql://110.10.15.37:3306/cxxwb"
    # the user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => "xxxxx"
    record_last_run => "true"
    use_column_value => "true"
    tracking_column => "id"
    last_run_metadata_path => "/opt/logstash/bin/logstash_xxy/cxx_info"
    clean_run => "false"
    # the path to our downloaded JDBC driver
    jdbc_driver_library => "/opt/elasticsearch/lib/mysql-connector-java-5.1.38.jar"
    # the name of the driver class for MySQL
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "500"
    statement => "select * from cxx_article_info where id > :sql_last_value"
    # cron fields (left to right): minute, hour, day of month, month, day of week;
    # all * means run every minute
    schedule => "* * * * *"
  }
  jdbc {
    # the ES type these rows will be indexed as
    type => "cxx_user"
    # MySQL JDBC connection string; the trailing cxxwb is the MySQL database name
    jdbc_connection_string => "jdbc:mysql://110.10.15.37:3306/cxxwb"
    # the user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => "xxxxxx"
    record_last_run => "true"
    use_column_value => "true"
    tracking_column => "id"
    last_run_metadata_path => "/opt/logstash/bin/logstash_xxy/cxx_user_info"
    clean_run => "false"
    # the path to our downloaded JDBC driver
    jdbc_driver_library => "/opt/elasticsearch/lib/mysql-connector-java-5.1.38.jar"
    # the name of the driver class for MySQL
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "500"
    statement => "select * from cxx_user_info where id > :sql_last_value"
    # alternatively, load the SQL from a file by its absolute path:
    # statement_filepath => "/opt/logstash/bin/logstash_mysql2es/department.sql"
    schedule => "* * * * *"
  }
}
filter {
  mutate {
    convert => [ "publish_time", "string" ]
  }
  date {
    timezone => "Europe/Berlin"
    match => [ "publish_time", "ISO8601", "yyyy-MM-dd HH:mm:ss" ]
  }
  # date {
  #   match => [ "publish_time", "yyyy-MM-dd HH:mm:ss,SSS" ]
  #   remove_field => [ "publish_time" ]
  # }
  json {
    source => "message"
    remove_field => [ "message" ]
  }
}
output {
  if [type] == "cxx_article_info" {
    elasticsearch {
      # ES host and port
      hosts => "10.100.11.231:9200"
      # ES index name (chosen by you)
      index => "cxx_info_index"
      # use the table's auto-increment id as the document id
      # document_id => "%{id}"
    }
  }
  if [type] == "cxx_user" {
    elasticsearch {
      # ES host and port
      hosts => "10.100.11.231:9200"
      # ES index name (chosen by you)
      index => "cxx_user_index"
      # use the table's auto-increment id as the document id
      # document_id => "%{id}"
    }
  }
}
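To run the pipeline, save the above to a .conf file and point Logstash at it; the file name below is arbitrary:

cd /opt/logstash
bin/logstash -f logstash_two_tables.conf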

3. Successful Sync Output

[2017-07-19T15:08:05,438][INFO ][logstash.pipeline ] Pipeline main started
The stdin plugin is now waiting for input:
[2017-07-19T15:08:05,491][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-07-19T15:09:00,721][INFO ][logstash.inputs.jdbc ] (0.007000s) SELECT count(*) AS `count` FROM (select * from cxx_article_info where id > 0) AS `t1` LIMIT 1
[2017-07-19T15:09:00,721][INFO ][logstash.inputs.jdbc ] (0.008000s) SELECT count(*) AS `count` FROM (select * from cxx_user_info where id > 0) AS `t1` LIMIT 1
[2017-07-19T15:09:00,730][INFO ][logstash.inputs.jdbc ] (0.004000s) SELECT * FROM (select * from cxx_user_info where id > 0) AS `t1` LIMIT 500 OFFSET 0
[2017-07-19T15:09:00,731][INFO ][logstash.inputs.jdbc ] (0.007000s) SELECT * FROM (select * from cxx_article_info where id > 0) AS `t1` LIMIT 500 OFFSET 0
[2017-07-19T15:10:00,173][INFO ][logstash.inputs.jdbc ] (0.002000s) SELECT count(*) AS `count` FROM (select * from cxx_article_info where id > 3) AS `t1` LIMIT 1
[2017-07-19T15:10:00,174][INFO ][logstash.inputs.jdbc ] (0.003000s) SELECT count(*) AS `count` FROM (select * from cxx_user_info where id > 2) AS `t1` LIMIT 1
[2017-07-19T15:11:00,225][INFO ][logstash.inputs.jdbc ] (0.001000s) SELECT count(*) AS `count` FROM (select * from cxx_article_info where id > 3) AS `t1` LIMIT 1
[2017-07-19T15:11:00,225][INFO ][logstash.inputs.jdbc ] (0.002000s) SELECT count(*) AS `count` FROM (select * from cxx_user_info where id > 2) AS `t1` LIMIT 1
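Note how the WHERE clause advances from id > 0 on the first run to id > 3 and id > 2 on later runs: the last tracked id has been persisted, so each polling cycle only fetches new rows. To spot-check that documents actually landed in the two indices, you can query ES directly (host and port as in the config above; size just limits the sample):

curl 'http://10.100.11.231:9200/cxx_info_index/_search?pretty&size=1'
curl 'http://10.100.11.231:9200/cxx_user_index/_search?pretty&size=1'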

4. Extensions

1) Adding more tables boils down to adding one more jdbc block (each with its own type) in the input section, plus the matching type conditional in the output section.
For example:

if [type]=="cxx_user"  

2) The type set in a jdbc input and the type tested in the output's if condition must **match**; that type also becomes the document type in ES.
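Putting 1) and 2) together, wiring in a third table would look like the sketch below; the table cxx_comment_info, the type cxx_comment, and the index cxx_comment_index are hypothetical names used only for illustration:

# added inside input { ... }: one more jdbc block
# (driver, user, password and paging settings same as the two blocks above)
jdbc {
  type => "cxx_comment"
  jdbc_connection_string => "jdbc:mysql://110.10.15.37:3306/cxxwb"
  statement => "select * from cxx_comment_info where id > :sql_last_value"
  use_column_value => "true"
  tracking_column => "id"
  record_last_run => "true"
  last_run_metadata_path => "/opt/logstash/bin/logstash_xxy/cxx_comment_info"
  schedule => "* * * * *"
}

# added inside output { ... }: the matching type conditional
if [type] == "cxx_comment" {
  elasticsearch {
    hosts => "10.100.11.231:9200"
    index => "cxx_comment_index"
  }
}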

Postscript

Keep grinding at ES! Questions and discussion are always welcome.

——————————————————————————————————
For more hands-on ES experience and insights, follow the 铭毅天下 WeChat public account.
(At least one new post every week!)

July 19, 2017, 23:32, written at home.

Author: 铭毅天下
Please credit the source when reposting. Original article:
http://blog.csdn.net/laoyang360/article/details/75452953
If this article helped you, please click 'like' to show your support — it is my biggest motivation to keep writing. Thank you!

