Fetching and Analyzing MySQL Data with Spark

Source: Internet · Editor: 程序博客网 · Date: 2024/06/05 19:53

Environment setup

Java environment

Install and start Spark

Download and extract Spark

wget https://d3kbcqa49mib13.cloudfront.net/spark-2.2.0-bin-hadoop2.7.tgz
tar xzvf spark-2.2.0-bin-hadoop2.7.tgz -C /usr/local
cd /usr/local
ln -s spark-2.2.0-bin-hadoop2.7 spark
cd spark

Run the master and a slave

./sbin/start-master.sh -h 192.168.0.166
./sbin/start-slave.sh spark://192.168.0.166:7077

where 192.168.0.166 is the local IP address.

Download the MySQL JDBC driver

Download the JDBC driver, extract it into the Spark directory, and then add the jar to the driver and executor classpaths in conf/spark-defaults.conf:

spark.driver.extraClassPath      /usr/local/spark/mysql-connector-java-5.1.39-bin.jar
spark.executor.extraClassPath    /usr/local/spark/mysql-connector-java-5.1.39-bin.jar

Run Scala commands (spark-shell)

./bin/spark-shell --master spark://192.168.0.166:7077
val jdbcDF = spark.read.format("jdbc").options(
    Map("url" -> "jdbc:mysql://localhost:3306/collection?user=root&password=pw",
    "dbtable" -> "collection.iqilu_news",
    "fetchSize" -> "100",
    "partitionColumn" -> "catid",
    "lowerBound" -> "1",
    "upperBound" -> "300",
    "numPartitions" -> "30"
    )).load()
// Spark uses partitionColumn to split the read into parallel queries; numPartitions is the degree of parallelism
// Create a temporary view named "collection" for the queries below
jdbcDF.createOrReplaceTempView("collection")
val sqlDF = sql("SELECT title FROM collection ORDER BY id DESC LIMIT 10")
// Inspect the data
sqlDF.show()
// Count the rows
sqlDF.count()
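The partitionColumn/lowerBound/upperBound/numPartitions options control how Spark parallelizes the JDBC read: the driver splits the [lowerBound, upperBound) range into numPartitions strides and issues one SELECT with a corresponding WHERE clause per partition. The following is a simplified Python sketch of that splitting, using the article's settings (catid in [1, 300], 30 partitions); it illustrates the idea but is not Spark's exact implementation (Spark's `JDBCRelation.columnPartition` differs in details such as stride rounding).

```python
def partition_predicates(column, lower, upper, num_partitions):
    """Sketch of how a JDBC source can derive per-partition WHERE clauses.

    Simplified illustration only, not Spark's exact clause generation.
    """
    stride = (upper - lower) // num_partitions
    preds = []
    current = lower
    for i in range(num_partitions):
        # No lower clause for the first partition, no upper clause for the last,
        # so the full range of the column is covered.
        lower_clause = f"{column} >= {current}" if i > 0 else None
        current += stride
        upper_clause = f"{column} < {current}" if i < num_partitions - 1 else None
        if lower_clause and upper_clause:
            preds.append(f"{lower_clause} AND {upper_clause}")
        elif lower_clause:
            preds.append(lower_clause)
        else:
            # First partition also picks up NULL partition-column values.
            preds.append(f"{upper_clause} OR {column} IS NULL")
    return preds

# With the article's settings: 30 parallel queries over catid in [1, 300]
preds = partition_predicates("catid", 1, 300, 30)
for p in preds[:3]:
    print(p)
```

Note that rows with catid outside [lowerBound, upperBound] are still read (by the unbounded first and last partitions); the bounds only shape the partitioning, they do not filter the data.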

References:

  • http://spark.apache.org/examples.html
  • https://www.percona.com/blog/2016/08/17/apache-spark-makes-slow-mysql-queries-10x-faster/