A Thousand Spark Travel Diaries, No. 0006: DataFrame (Part 2)


Study notes on the book 《Spark核心技术与高级应用》 (Spark Core Technology and Advanced Applications).

I. Loading Data Programmatically
1. Task:
Use the implicit conversions in sqlContext.implicits._ to convert an RDD into a DataFrame, save the DataFrame as a Parquet file, then load the saved Parquet file to rebuild a DataFrame and register it as a temporary table for SQL queries.

2. Code:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._

case class Person(name: String, age: Int)

// Convert an RDD of Person objects to a DataFrame via the implicit conversions
val peopleDF = sc.textFile("E:/spark-2.1.1-bin-hadoop2.6/examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).trim.toInt))
  .toDF()

// Save as Parquet, load it back, and register the result as a temporary table
peopleDF.write.parquet("people.parquet")
val parquetFile = sqlContext.read.parquet("people.parquet")
parquetFile.registerTempTable("parquetTable")

val result = sqlContext.sql("SELECT name FROM parquetTable WHERE age >= 13 AND age <= 19")
result.map(t => "name:" + t(0)).collect().foreach(println)
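For reference, the people.txt that ships with Spark's examples contains one name,age pair per line:

Michael, 29
Andy, 30
Justin, 19

With this data only Justin falls in the 13-19 range, so the final println should output name:Justin.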

II. Schema Merging
Users can start with a simple schema and gradually add more columns as needed. In this way a user may end up with multiple Parquet files whose schemas are different but mutually compatible; the Parquet data source can detect this situation automatically and merge the schemas of these files.

import sqlContext.implicits._

// Create a simple DataFrame and store it in a partition directory
val df1 = sc.makeRDD(1 to 5).map(i => (i, i * 2)).toDF("single", "double")
df1.write.parquet("data/test_table/key=1")

// Create a second DataFrame in another partition directory,
// adding a new column and dropping an existing one
val df2 = sc.makeRDD(5 to 10).map(i => (i, i * 3)).toDF("single", "triple")
df2.write.parquet("data/test_table/key=2")

// Read the partitioned table with schema merging enabled
val df3 = sqlContext.read.option("mergeSchema", "true").parquet("data/test_table")
df3.printSchema()
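The merged schema should contain all columns from both files, plus the partition column key inferred from the directory names; printSchema should print something along these lines:

root
 |-- single: integer (nullable = true)
 |-- double: integer (nullable = true)
 |-- triple: integer (nullable = true)
 |-- key: integer (nullable = true)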

III. JSON Datasets

// Load a JSON file in which each line is a separate JSON object
val people = sqlContext.read.json("E:/spark-2.1.1-bin-hadoop2.6/examples/src/main/resources/people.json")
people.printSchema()
people.registerTempTable("jsonTable")
val teenagers = sqlContext.sql("SELECT name FROM jsonTable WHERE age >= 13 AND age <= 19")
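teenagers is still a lazy DataFrame; to materialize it, collect it as in the Parquet example. Assuming Spark's bundled people.json (Michael with no age, Andy aged 30, Justin aged 19), only Justin matches:

teenagers.map(t => "name:" + t(0)).collect().foreach(println)
// prints: name:Justin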

Creating a DataFrame by converting an RDD of JSON strings:

val anotherRDD = sc.parallelize(
  """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
val anotherPeople = sqlContext.read.json(anotherRDD)
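Calling printSchema on anotherPeople should show that the nested JSON object is inferred as a struct column (JSON schema inference sorts field names alphabetically), roughly:

anotherPeople.printSchema()
// root
//  |-- address: struct (nullable = true)
//  |    |-- city: string (nullable = true)
//  |    |-- state: string (nullable = true)
//  |-- name: string (nullable = true)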