Using Spark DataFrame with Hive


Background

Starting with Spark 1.3, Spark SQL's SchemaRDD was replaced by DataFrame. DataFrame is a significant change from SchemaRDD and provides many more convenient and useful APIs.

This article mainly demonstrates how to use a DataFrame in Spark 1.5.2 to write data into Hive, along with a few other DataFrame APIs, for reference.

DataFrame SaveAsTable

Example

Create a data file test.txt and upload it to the /test/zhangrenhua directory on HDFS:

```
zrh,19
z,20
r,21
h,90
```

Create the Hive table:

```sql
CREATE DATABASE test;

DROP TABLE test.test;

CREATE TABLE test.test
(
    t_string  string,
    t_integer INT,
    t_boolean BOOLEAN,
    t_double  DOUBLE,
    t_decimal DECIMAL(20,8)
)
stored AS orc;
```

Demo.java:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.hive.HiveContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Decimal;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

/**
 * Spark application entry point.
 */
public class Demo {

    public static void main(String[] args) {
        String input = "/test/zhangrenhua";

        SparkConf sparkConf = new SparkConf().setAppName("Demo");
        SparkContext sc = new SparkContext(sparkConf);
        HiveContext hiveContext = new HiveContext(sc);
        hiveContext.setConf("spark.sql.hive.metastore.version", "0.13.0.2.1");

        // Create the Java Spark context
        try (JavaSparkContext ctx = new JavaSparkContext(sc)) {
            // Read the test file
            JavaRDD<String> textFile = ctx.textFile(input);
            JavaRDD<Row> map = textFile.map(new Function<String, Row>() {

                private static final long serialVersionUID = 8745604304589989962L;

                @Override
                public Row call(String v1) throws Exception {
                    // Parse the test data and fill in default values
                    String[] array = v1.split(",");
                    TestBean result = new TestBean();
                    result.setT_string(array[0]);
                    result.setT_integer(Integer.parseInt(array[1]));
                    result.setT_boolean(true);
                    result.setT_double(12.12);
                    Decimal t_decimal = new Decimal();
                    t_decimal.set(new scala.math.BigDecimal(new BigDecimal("11111111.11111111")));
                    result.setT_decimal(t_decimal);

                    // Do not use hiveContext.createDataFrame(map, TestBean.class):
                    // the field order comes out wrong (bug or not, unclear), so return a Row instead.
                    return result.toRow();
                }
            });

            // Generate the schema based on the string of schema
            List<StructField> fields = new ArrayList<StructField>();
            fields.add(DataTypes.createStructField("t_string", DataTypes.StringType, true));
            fields.add(DataTypes.createStructField("t_integer", DataTypes.IntegerType, true));
            fields.add(DataTypes.createStructField("t_boolean", DataTypes.BooleanType, true));
            fields.add(DataTypes.createStructField("t_double", DataTypes.DoubleType, true));
            fields.add(DataTypes.createStructField("t_decimal", DataTypes.createDecimalType(20, 8), true));
            StructType schema = DataTypes.createStructType(fields);

            DataFrame df = hiveContext.createDataFrame(map, schema);
            df.write().format("orc").mode(SaveMode.Append).saveAsTable("test.test");
        }
    }
}
```

TestBean.java:

```java
import java.io.Serializable;

import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.types.Decimal;

public class TestBean implements Serializable {

    private static final long serialVersionUID = -5868257956951746438L;

    private String t_string;
    private Integer t_integer;
    private Boolean t_boolean;
    private Double t_double;
    private Decimal t_decimal;

    public String getT_string() {
        return t_string;
    }

    public void setT_string(String t_string) {
        this.t_string = t_string;
    }

    public Integer getT_integer() {
        return t_integer;
    }

    public void setT_integer(Integer t_integer) {
        this.t_integer = t_integer;
    }

    public Boolean getT_boolean() {
        return t_boolean;
    }

    public void setT_boolean(Boolean t_boolean) {
        this.t_boolean = t_boolean;
    }

    public Double getT_double() {
        return t_double;
    }

    public void setT_double(Double t_double) {
        this.t_double = t_double;
    }

    public Decimal getT_decimal() {
        return t_decimal;
    }

    public void setT_decimal(Decimal t_decimal) {
        this.t_decimal = t_decimal;
    }

    public Row toRow() {
        return RowFactory.create(t_string, t_integer, t_boolean, t_double, t_decimal);
    }
}
```

Compile and package the code above, then run it with the spark-submit command:

```bash
spark-submit --master yarn-client --class Demo xx.jar
```

Query the data to verify the result:

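For example, a minimal sketch that reads the table back through the same HiveContext used in Demo.java and prints a few rows:

```java
// Read the freshly written table back and inspect it
DataFrame written = hiveContext.table("test.test");
written.printSchema();
written.show();
```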

Key Notes

1. According to the official examples, hiveContext.createDataFrame(map, TestBean.class) can convert objects into a DataFrame, but in my tests the data written to the Hive table ended up in the wrong columns when the DataFrame was created this way. So instead, define the schema yourself and return Row objects:

```java
RowFactory.create(t_string, t_integer, t_boolean, t_double, t_decimal);

// Generate the schema based on the string of schema
List<StructField> fields = new ArrayList<StructField>();
fields.add(DataTypes.createStructField("t_string", DataTypes.StringType, true));
fields.add(DataTypes.createStructField("t_integer", DataTypes.IntegerType, true));
fields.add(DataTypes.createStructField("t_boolean", DataTypes.BooleanType, true));
fields.add(DataTypes.createStructField("t_double", DataTypes.DoubleType, true));
fields.add(DataTypes.createStructField("t_decimal", DataTypes.createDecimalType(20, 8), true));
```
When building the List of fields and the Row objects, the field order must match the column order of the table exactly.
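One hypothetical way to keep the Row layout and the schema from drifting apart (not from the original post) is to declare the columns once, in table order, and derive the schema from that single list:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class SchemaHelper {

    // Column names and types in the exact order of the Hive table test.test
    private static final String[] COLUMNS = {"t_string", "t_integer", "t_boolean", "t_double", "t_decimal"};
    private static final DataType[] TYPES = {DataTypes.StringType, DataTypes.IntegerType,
            DataTypes.BooleanType, DataTypes.DoubleType, DataTypes.createDecimalType(20, 8)};

    public static StructType schema() {
        List<StructField> fields = new ArrayList<StructField>();
        for (int i = 0; i < COLUMNS.length; i++) {
            fields.add(DataTypes.createStructField(COLUMNS[i], TYPES[i], true));
        }
        return DataTypes.createStructType(fields);
    }
}
```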

2. Saving the DataFrame into a table in a specified database

```java
df.saveAsTable("test.test");
```

Explicitly specifying the database name is supported starting with Spark 1.5.1; versions before 1.5.1 need the following approach instead:

```java
hiveContext.sql("use test");
```
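On those older versions the write could then look roughly like this (a sketch reusing `df` from Demo.java; the table name no longer carries the database prefix because the current database has been switched):

```java
// Pre-1.5.1 style: switch the current database first, then save by bare table name
hiveContext.sql("use test");
df.saveAsTable("test");
```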

DataFrame Save As Partitioned Table

The examples above show how to save a DataFrame with common data types (string, integer, boolean, double, decimal) into a Hive table, but they do not cover partitioned tables. Below is the specific logic for writing into a partitioned table.

1. Dynamic partitions

```java
// Enable dynamic partitioning
hiveContext.sql("set hive.exec.dynamic.partition=true");
hiveContext.sql("set hive.exec.dynamic.partition.mode=nonstrict");
```

When creating the Row objects, put the partition field last:

```java
RowFactory.create(t_string, t_integer, t_boolean, t_double, t_decimal, "partition field value");
```

```java
df.saveAsTable("test.test");
```
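Putting the steps above together, a dynamic-partition write could look roughly like the sketch below. It assumes the target table was created with `partitioned by (date string)`, and it reuses the `fields` list and the `map` RDD from Demo.java, with each Row now carrying the partition value as its last field:

```java
// Enable dynamic partitioning before the write
hiveContext.sql("set hive.exec.dynamic.partition=true");
hiveContext.sql("set hive.exec.dynamic.partition.mode=nonstrict");

// Append the partition column as the last field of the schema, matching the
// last position in each Row, e.g.
// RowFactory.create(t_string, t_integer, t_boolean, t_double, t_decimal, "20151205")
fields.add(DataTypes.createStructField("date", DataTypes.StringType, true));
StructType schemaWithPartition = DataTypes.createStructType(fields);

DataFrame partitioned = hiveContext.createDataFrame(map, schemaWithPartition);
partitioned.write().format("orc").mode(SaveMode.Append).saveAsTable("test.test");
```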

Dynamic partitioning hurts write performance; if the partition value can be fixed up front, the static-partition approach below is recommended.

2. Static (fixed) partitions

```java
df.registerTempTable("demo");
hiveContext.sql("insert into test.test partition(date='20151205') select * from demo");
```

Note: date is the partition column and 20151205 is the partition value; the value can be passed in as a parameter, as sketched below.
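For instance, the partition value could come from the program arguments and be spliced into the INSERT statement (a hypothetical sketch; taking the date from `args[0]` is an assumption, not part of the original post):

```java
// Hypothetical: the partition date is passed as the first program argument, e.g. "20151205"
String partitionDate = args[0];

df.registerTempTable("demo");
hiveContext.sql("insert into test.test partition(date='" + partitionDate + "') select * from demo");
```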

Metadata Refreshing

Spark SQL caches Parquet metadata for better performance. When Hive metastore Parquet table conversion is enabled, the metadata of those converted tables is also cached. If these tables are updated by Hive or other external tools, you need to refresh them manually to ensure consistent metadata.

```java
// sqlContext is an existing HiveContext
sqlContext.refreshTable("my_table");
```

DataFrame Operations

A DataFrame provides structured data; below are some basic examples of working with a DataFrame from Java:

```java
// sc is an existing JavaSparkContext
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

// Create the DataFrame
DataFrame df = sqlContext.read().json("examples/src/main/resources/people.json");

// Show the content of the DataFrame
df.show();
// age  name
// null Michael
// 30   Andy
// 19   Justin

// Print the schema in a tree format
df.printSchema();
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show();
// name
// Michael
// Andy
// Justin

// Select everybody, but increment the age by 1
df.select(df.col("name"), df.col("age").plus(1)).show();
// name    (age + 1)
// Michael null
// Andy    31
// Justin  20

// Select people older than 21
df.filter(df.col("age").gt(21)).show();
// age name
// 30  Andy

// Count people by age
df.groupBy("age").count().show();
// age  count
// null 1
// 19   1
// 30   1
```

Parquet Configuration

Configuration of Parquet can be done using the setConf method on SQLContext or by running SET key=value commands using SQL.
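For example, assuming `sqlContext` is an existing SQLContext (or HiveContext), the Parquet compression codec could be set either way (a sketch; `spark.sql.parquet.compression.codec` is a standard Spark SQL option):

```java
// Programmatically, via setConf
sqlContext.setConf("spark.sql.parquet.compression.codec", "snappy");

// Or through SQL
sqlContext.sql("SET spark.sql.parquet.compression.codec=snappy");
```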

Original post: http://www.zhangrenhua.com/2015/11/28/hadoop-spark-dataFrame%E6%93%8D%E4%BD%9Chive/
