Hands-on: Reading Hive Table Data with Spark


Environment: Spark 1.6, Hive 1.2.1, Hadoop 2.6.4
1. Add the following dependencies.
spark-hive_2.10 is needed so that a HiveContext object can be created:

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
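
If you build with sbt instead of Maven, an equivalent dependency declaration (a sketch, pinning the Scala 2.10 artifact explicitly) is:

    // build.sbt: pulls in the same spark-hive_2.10 artifact as the Maven snippet above
    libraryDependencies += "org.apache.spark" % "spark-hive_2.10" % "1.6.1"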

The MySQL driver is needed to connect to the metastore:

    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.38</version>
        <scope>compile</scope>
    </dependency>
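
To confirm the driver actually made it onto the classpath before wiring up the metastore, a minimal sanity check (a sketch; the object name is illustrative) is:

    object DriverCheck {
      def main(args: Array[String]): Unit = {
        // Throws ClassNotFoundException if mysql-connector-java
        // did not resolve onto the classpath
        Class.forName("com.mysql.jdbc.Driver")
        println("MySQL driver found on classpath")
      }
    }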

2. Add a hive-site.xml file with the content below.
Here, the hive database in MySQL is Hive's metastore database:

    <?xml version="1.0" encoding="UTF-8"?>
    <!--Autogenerated by Cloudera Manager-->
    <configuration>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>hive</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>hive</value>
        </property>
    </configuration>
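
HiveContext only reads this file if it is on the application classpath (for a Maven project, src/main/resources is a common place); otherwise Spark silently falls back to a local Derby metastore. A minimal check, with an illustrative object name:

    object HiveSiteCheck {
      def main(args: Array[String]): Unit = {
        // getResource returns null when hive-site.xml is not on the classpath,
        // in which case HiveContext would create a local Derby metastore instead
        val url = getClass.getClassLoader.getResource("hive-site.xml")
        println(s"hive-site.xml resolved to: $url")
      }
    }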

3. Now read data from the Hive table; the code is as follows:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object App {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("test").setMaster("local[2]")
        val sc = new SparkContext(conf)
        val sqlContext = new HiveContext(sc)
        sqlContext.table("test.person")        // "database.table" format
                  .registerTempTable("person") // register as a temporary table
        sqlContext.sql(
          """
            | select *
            |   from person
            |  limit 10
          """.stripMargin).show()
        sc.stop()
      }
    }
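
Registering a temp table is optional here: HiveContext can also query the Hive table directly by its qualified database.table name. A shorter equivalent of the same read (a sketch, assuming the same test.person table):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object DirectQuery {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("direct-query").setMaster("local[2]")
        val sc = new SparkContext(conf)
        val sqlContext = new HiveContext(sc)

        // Hive tables are visible to SQL directly as <database>.<table>,
        // so no registerTempTable call is needed for this query
        sqlContext.sql("select * from test.person limit 10").show()

        sc.stop()
      }
    }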