Spark Data Analysis: IP Geolocation (Home-Province) Lookup
A while back, a project requirement came up: management wanted to see, in real time, the details of visits from each province based on the client IP. To meet this, nginx logs were collected in real time with Flume/Logstash and produced to Kafka, Spark then consumed and analyzed them in real time and saved the results to Redis/MySQL, and finally the front end displayed them in real time with Baidu ECharts.
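This article walks through the batch/local part of that pipeline. For the streaming part, a minimal sketch of how Spark Streaming might consume the nginx log topic from Kafka could look like the following; the broker addresses, topic name, and batch interval are assumptions rather than values from the original project, and it uses the Kafka 0.8 direct-stream API that shipped with Spark 1.x:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NginxLogStreaming {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("NginxLogStreaming")
    // Assumption: a 10-second micro-batch interval
    val ssc = new StreamingContext(conf, Seconds(10))

    // Assumption: placeholder broker list and topic name
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
    val topics = Set("nginx_access_log")

    // Each record's value is one raw nginx log line produced by Flume/Logstash
    val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics).map(_._2)

    // Per batch: extract the client IP, resolve it to a province (see IpLocation below),
    // aggregate counts, and push the result to Redis/MySQL for the ECharts front end.
    lines.foreachRDD { rdd =>
      // ... same ip2Long / binarySearch / reduceByKey logic as in the batch job below
    }

    ssc.start()
    ssc.awaitTermination()
  }
}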
First, you need an IP geolocation rule table. It can be a local file, or it can be distributed across multiple machines (for example, on HDFS).
Part of the IP rule table looks like this:
1.0.1.0|1.0.3.255|16777472|16778239|亚洲|中国|福建|福州||电信|350100|China|CN|119.306239|26.075302
1.0.8.0|1.0.15.255|16779264|16781311|亚洲|中国|广东|广州||电信|440100|China|CN|113.280637|23.125178
1.0.32.0|1.0.63.255|16785408|16793599|亚洲|中国|广东|广州||电信|440100|China|CN|113.280637|23.125178
1.1.0.0|1.1.0.255|16842752|16843007|亚洲|中国|福建|福州||电信|350100|China|CN|119.306239|26.075302
1.1.2.0|1.1.7.255|16843264|16844799|亚洲|中国|福建|福州||电信|350100|China|CN|119.306239|26.075302
1.1.8.0|1.1.63.255|16844800|16859135|亚洲|中国|广东|广州||电信|440100|China|CN|113.280637|23.125178
1.2.0.0|1.2.1.255|16908288|16908799|亚洲|中国|福建|福州||电信|350100|China|CN|119.306239|26.075302
1.2.2.0|1.2.2.255|16908800|16909055|亚洲|中国|北京|北京|海淀|北龙中网|110108|China|CN|116.29812|39.95931
1.2.4.0|1.2.4.255|16909312|16909567|亚洲|中国|北京|北京||中国互联网信息中心|110100|China|CN|116.405285|39.904989
1.2.5.0|1.2.7.255|16909568|16910335|亚洲|中国|福建|福州||电信|350100|China|CN|119.306239|26.075302
1.2.8.0|1.2.8.255|16910336|16910591|亚洲|中国|北京|北京||中国互联网信息中心|110100|China|CN|116.405285|39.904989
1.2.9.0|1.2.127.255|16910592|16941055|亚洲|中国|广东|广州||电信|440100|China|CN|113.280637|23.125178
1.3.0.0|1.3.255.255|16973824|17039359|亚洲|中国|广东|广州||电信|440100|China|CN|113.280637|23.125178
1.4.1.0|1.4.3.255|17039616|17040383|亚洲|中国|福建|福州||电信|350100|China|CN|119.306239|26.075302
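Columns 3 and 4 are the numeric forms of the start and end IPs of each range, so a lookup becomes a numeric range query. As a quick worked example, equivalent to the ip2Long helper in the full code below:

// Convert a dotted IPv4 address to the numeric form used in columns 3 and 4:
// each octet shifts the accumulated value left by 8 bits.
def ip2Long(ip: String): Long =
  ip.split("[.]").foldLeft(0L)((acc, octet) => (acc << 8) | octet.toLong)

ip2Long("1.0.1.0")    // 16777472  -> start_num of the first rule above
ip2Long("1.0.3.255")  // 16778239  -> end_num of the first rule above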
Local mode
import java.sql.{Connection, Date, DriverManager, PreparedStatement}
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Resolve the home province of each client IP in an nginx log.
 * Created by tianjun on 2017/2/13.
 */
object IpLocation {

  /** Convert a dotted IPv4 address into its numeric form. */
  def ip2Long(ip: String): Long = {
    val fragments = ip.split("[.]")
    var ipNum = 0L
    for (i <- 0 until fragments.length) {
      ipNum = fragments(i).toLong | ipNum << 8L
    }
    ipNum
  }

  /** Binary-search the (start_num, end_num, province) rules; returns -1 if the IP matches no range. */
  def binarySearch(lines: Array[(String, String, String)], ip: Long): Int = {
    var low = 0
    var high = lines.length - 1
    while (low <= high) {
      val middle = (low + high) / 2
      if ((ip >= lines(middle)._1.toLong) && (ip <= lines(middle)._2.toLong)) {
        return middle
      }
      if (ip < lines(middle)._1.toLong) {
        high = middle - 1
      } else {
        low = middle + 1
      }
    }
    -1
  }

  /** Write one partition of (province, count) pairs to MySQL over a single connection. */
  val data2MySql = (iterator: Iterator[(String, Int)]) => {
    var conn: Connection = null
    var ps: PreparedStatement = null
    val sql = "INSERT INTO location_info(location, counts, access_date) VALUES (?, ?, ?)"
    try {
      conn = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/bigdata?useUnicode=true&characterEncoding=utf-8",
        "root", "123")
      ps = conn.prepareStatement(sql)
      iterator.foreach(line => {
        ps.setString(1, line._1)
        ps.setInt(2, line._2)
        ps.setDate(3, new Date(System.currentTimeMillis()))
        ps.executeUpdate()
      })
    } catch {
      case e: Exception => e.printStackTrace()
    } finally {
      if (ps != null) ps.close()
      if (conn != null) conn.close()
    }
  }

  def main(args: Array[String]) {
    System.setProperty("hadoop.home.dir", "C:\\tianjun\\winutil\\")

    val conf = new SparkConf().setMaster("local").setAppName("IpLocation")
    val sc = new SparkContext(conf)

    // Load the rule file and keep only (start_num, end_num, province)
    val ipRulesRdd = sc.textFile("c://ip.txt").map(line => {
      val fields = line.split("\\|")
      val start_num = fields(2)
      val end_num = fields(3)
      val province = fields(6)
      (start_num, end_num, province)
    })

    // The rule table is small, so collect it to the driver and broadcast it to every executor
    val ipRulesArray = ipRulesRdd.collect()
    val ipRulesBroadcast = sc.broadcast(ipRulesArray)

    // The client IP is the second pipe-delimited field of each nginx log line
    val ipsRDD = sc.textFile("c://log").map(line => {
      val fields = line.split("\\|")
      fields(1)
    })

    // Resolve each IP to its rule via binary search, drop IPs that match no range,
    // then count hits per province
    val result = ipsRDD.map(ip => binarySearch(ipRulesBroadcast.value, ip2Long(ip)))
      .filter(_ >= 0)
      .map(index => ipRulesBroadcast.value(index))
      .map(t => (t._3, 1))
      .reduceByKey(_ + _)

    // Write the per-province counts to MySQL, one connection per partition
    result.foreachPartition(data2MySql)

    sc.stop()
  }
}
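The job above reads the rule file and the logs from the local disk and hard-codes a local master, which is convenient for testing. As mentioned earlier, both files could just as well live on HDFS; a sketch of the only lines that would change (the namenode address and paths are placeholders, not from the original project):

// Let spark-submit supply the master instead of hard-coding "local"
val conf = new SparkConf().setAppName("IpLocation")

// Assumption: rule file and nginx logs uploaded to HDFS at these placeholder paths
val ipRulesRdd = sc.textFile("hdfs://namenode:9000/ipdata/ip.txt")
val ipsRDD = sc.textFile("hdfs://namenode:9000/logs/access.log")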
As you can see, doing this kind of data analysis with Spark's operators is quite straightforward.
The Spark website also shows that connecting Spark to Kafka, databases, and other systems is equally easy.
Now let's look at the results that this example wrote to the database:
+----+----------+--------+---------------------+
| id | location | counts | access_date         |
+----+----------+--------+---------------------+
|  7 | 陕西     |   1824 | 2017-02-13 00:00:00 |
|  8 | 河北     |    383 | 2017-02-13 00:00:00 |
|  9 | 云南     |    126 | 2017-02-13 00:00:00 |
| 10 | 重庆     |    868 | 2017-02-13 00:00:00 |
+----+----------+--------+---------------------+
In this test, only about 4,700 lines of the nginx log were used, a file of roughly 1.9 MB.