Spark reading MySQL with Python, and batching with chunked


1. Download the jar package mysql-connector-java-5.1.44.zip. After unpacking, the file you need is mysql-connector-java-5.1.44-bin.jar; put it in a directory of your choosing. I placed it at /app/hadoop/spark-2.0.1-bin-hadoop2.7/mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar

Then, when starting the Spark shell, add:

--driver-class-path /app/hadoop/spark-2.0.1-bin-hadoop2.7/mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar
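If you run a standalone Python script rather than the shell, the jar can also be attached when the SparkSession is built. This is a minimal sketch, assuming the jar path from step 1; note that spark.jars is only honored if it is set before the first SparkSession (and its JVM) is created in the process:

from pyspark.sql import SparkSession

# Path to the connector jar from step 1 (adjust to your layout).
JAR = "/app/hadoop/spark-2.0.1-bin-hadoop2.7/mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar"

# spark.jars puts the jar on both the driver and executor classpaths.
spark = (SparkSession.builder
         .appName("mysql-demo")
         .config("spark.jars", JAR)
         .getOrCreate())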

2. Alternatively, set the driver's classpath by configuring SPARK_CLASSPATH in conf/spark-env.sh:

export SPARK_CLASSPATH=$SPARK_CLASSPATH:/app/hadoop/spark-2.0.1-bin-hadoop2.7/mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar


3. Do not do steps 1 and 2 at the same time: configuring SPARK_CLASSPATH in conf/spark-env.sh while also passing --driver-class-path on job submission raises an exception.

To run on a cluster, the connector has to be installed on every machine:

scp -r mysql-connector-java-5.1.44 slave55:/app/hadoop/spark-2.0.1-bin-hadoop2.7/

and the spark-env.sh setting from step 2 must be applied on every machine as well.


4. Build a DataFrame from the raw text file and write it to MySQL:

from pyspark.sql import SparkSession, Row

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# Parse each line with map_lamost (defined elsewhere); Path is the input directory.
RawUserRDD = sc.textFile(Path + "dr4Pilot_0_10.txt").map(map_lamost)

# Turn each parsed record into a Row; the remaining columns were elided
# in the original ("....") and are omitted here too.
line_Rows = RawUserRDD.map(lambda p: Row(specid=p[0], target=p[1]))

user_df = spark.createDataFrame(line_Rows)

# Append the rows to the lamostDRsumstd table over JDBC.
mode = "append"
url = "jdbc:mysql://localhost:3306/hadoop?useUnicode=true&characterEncoding=utf-8&useSSL=false"
properties = {"user": "wkf", "password": "Lamost_wkf_2017"}
user_df.write.jdbc(url=url, table="lamostDRsumstd", mode=mode, properties=properties)
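Reading the table back into a DataFrame (the "Spark reading MySQL" half of the title) goes through the same JDBC path. A minimal sketch reusing the url and properties from above:

# Load the table into a DataFrame via the same JDBC connection settings.
df = spark.read.jdbc(url=url, table="lamostDRsumstd", properties=properties)
df.printSchema()
df.show(5)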

5. Install the more_itertools package:

pip install more_itertools

from more_itertools import chunked

for x in chunked(split_flux_all, 50):
    ...

This iterates over split_flux_all in consecutive chunks of 50 elements each (the last chunk may be shorter), rather than splitting it into 50 equal parts; see the sketch below.
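A small self-contained demonstration of chunked; the data here is illustrative, not from the original script:

from more_itertools import chunked

data = list(range(12))
for batch in chunked(data, 5):  # lists of at most 5 items
    print(batch)
# Output:
# [0, 1, 2, 3, 4]
# [5, 6, 7, 8, 9]
# [10, 11]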

Finally, submit the job: --driver-memory 90G gives the driver 90 GB of memory, --master local[12] runs locally on 12 cores, and --py-files ships the helper module splitfeature_db.py alongside the main script readtxt2db.py.

spark-submit --driver-memory 90G --master local[12] --py-files splitfeature_db.py readtxt2db.py
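For context, a hypothetical sketch of how the two files relate; the helper name split_features is invented for illustration, since the original does not show the module contents:

# readtxt2db.py (main script)
# splitfeature_db.py was distributed with --py-files, so it is importable
# on the driver and on every executor.
from splitfeature_db import split_features  # hypothetical helper function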
