A first look at Spark SQL on Hive

   A while ago, after the Shark project stopped being updated, SQL on Spark split into two directions: Spark SQL on Hive and Hive on Spark. Since Hive on Spark will probably take quite some time to become usable, I decided to try out Spark SQL on Hive, with the goal of gradually replacing our current MapReduce-on-Hive workloads.
  The version under test is Spark 1.0.0. To get Hive support, Spark has to be rebuilt from source, and the build command has changed slightly:
  export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
  mvn -Pyarn -Phive -Dhadoop.version=2.3.0-cdh5.0.0 -DskipTests clean package
  I then wrote a fairly simple test program:
  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext

  val conf = new SparkConf().setAppName("SqlOnHive")
  val sc = new SparkContext(conf)
  val hiveContext = new HiveContext(sc)
  import hiveContext._
  hql("FROM tmp.test SELECT id limit 1").foreach(println)
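  As a side note, in the 1.0.x API hql returns a SchemaRDD rather than only printing rows, so the result can be handled like any other RDD. The lines below are just a rough sketch under that assumption, continuing from the snippet above and reusing the same placeholder table tmp.test and column id:

  // SchemaRDD is still an RDD[Row], so the result of hql can be collected
  // to the driver or transformed with ordinary RDD operations.
  val result = hiveContext.hql("FROM tmp.test SELECT id LIMIT 10")
  val ids = result.map(row => row(0)).collect()
  ids.foreach(println)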
After compiling, I exported the application as a jar. The cluster runs in standalone mode, so I first submitted the job with java -cp. Before submitting, hive-site.xml needs to be copied into $SPARK_HOME/conf:
  java -XX:PermSize=256M -cp /home/hadoop/hql.jar com.yintai.spark.sql.SqlOnHive spark://h031:7077
The submission failed with the following exception:
  java.lang.RuntimeException: Error in configuring object
      at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
      at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
      at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
      at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:155)
      at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:187)
      at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:181)
      at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:93)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
      at org.apache.spark.scheduler.Task.run(Task.scala:51)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
      at java.lang.Thread.run(Thread.java:662)
  Caused by: java.lang.reflect.InvocationTargetException
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
      ... 27 more
  Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
      at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135)
      at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)
      at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
      ... 32 more
  Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
      at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1801)
      at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
      ... 34 more



The fix is to set the relevant environment variables in spark-env.sh:
  SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/path/to/your/hadoop-lzo/libs/native
  SPARK_CLASSPATH=$SPARK_CLASSPATH:/path/to/your/hadoop-lzo/java/libs


After updating the environment variables and resubmitting, the job failed again:
  14/07/23 10:25:19 ERROR RetryingHMSHandler: NoSuchObjectException(message:There is no database named tmp)
          at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:431)
          at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:441)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:124)
          at com.sun.proxy.$Proxy9.getDatabase(Unknown Source)
          at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:628)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
          at com.sun.proxy.$Proxy10.get_database(Unknown Source)
          at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:810)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
          at com.sun.proxy.$Proxy11.getDatabase(Unknown Source)
          at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1139)
          at org.apache.hadoop.hive.ql.metadata.Hive.databaseExists(Hive.java:1128)
          at org.apache.hadoop.hive.ql.exec.DDLTask.switchDatabase(DDLTask.java:3479)
          at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:237)
          at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
          at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
          at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
          at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
          at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
          at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
          at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:185)
          at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:160)
          at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:249)
          at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:246)
          at org.apache.spark.sql.hive.HiveContext.hiveql(HiveContext.scala:85)
          at org.apache.spark.sql.hive.HiveContext.hql(HiveContext.scala:90)
          at com.yintai.spark.sql.SqlOnHive$.main(SqlOnHive.scala:20)
          at com.yintai.spark.sql.SqlOnHive.main(SqlOnHive.scala)

  14/07/23 10:25:19 ERROR DDLTask: org.apache.hadoop.hive.ql.metadata.HiveException: Database does not exist: tmp
          at org.apache.hadoop.hive.ql.exec.DDLTask.switchDatabase(DDLTask.java:3480)
          at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:237)
          at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
          at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
          at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
          at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
          at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
          at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
          at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:185)
          at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:160)
          at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:249)
          at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:246)
          at org.apache.spark.sql.hive.HiveContext.hiveql(HiveContext.scala:85)
          at org.apache.spark.sql.hive.HiveContext.hql(HiveContext.scala:90)
          at com.yintai.spark.sql.SqlOnHive$.main(SqlOnHive.scala:20)
          at com.yintai.spark.sql.SqlOnHive.main(SqlOnHive.scala)
The cause of this error is that the Spark program cannot load hive-site.xml, so it never learns the address of the remote metastore service and falls back to a local Derby database, where the metadata for the database and table naturally cannot be found. Spark SQL actually loads hive-site.xml by instantiating the HiveConf class, exactly the way the Hive CLI does, as shown in the following code:
  ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
  if (classLoader == null) {
    classLoader = HiveConf.class.getClassLoader();
  }
  hiveDefaultURL = classLoader.getResource("hive-default.xml");
  // Look for hive-site.xml on the CLASSPATH and log its location if found.
  hiveSiteURL = classLoader.getResource("hive-site.xml");

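Since the lookup goes through the context class loader, a quick sanity check from inside the driver is to resolve the resource the same way. This is only a small debugging sketch, not part of the original program:

  // If this prints null, hive-site.xml is not on the classpath and HiveConf
  // will fall back to its defaults (hence the local Derby metastore above).
  val hiveSite = Thread.currentThread().getContextClassLoader.getResource("hive-site.xml")
  println("hive-site.xml resolved to: " + hiveSite)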
Submitting with java -cp does not set up the environment correctly. Spark 1.0.0 introduced the spark-submit script as a new way to submit applications, so I switched to it:
  /usr/lib/spark/bin/spark-submit --class com.yintai.spark.sql.SqlOnHive \
    --master spark://h031:7077 \
    --executor-memory 1g \
    --total-executor-cores 1 \
    /home/hadoop/hql.jar

During submission this script sets the spark.executor.extraClassPath and spark.driver.extraClassPath properties in SparkConf, which ensures that the required configuration files are loaded correctly. With that, the test succeeded.
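For reference, roughly the same effect can be sketched by setting those properties on SparkConf by hand. This is only an illustration of the config keys involved, with placeholder paths; note that the driver-side classpath generally has to be in place before the driver JVM starts, which is precisely what spark-submit takes care of:

  import org.apache.spark.{SparkConf, SparkContext}

  // Approximation of what spark-submit arranges: extra classpath entries for
  // the executors (and, when set before launch, for the driver as well).
  val conf = new SparkConf()
    .setAppName("SqlOnHive")
    .setMaster("spark://h031:7077")
    .set("spark.executor.extraClassPath", "/usr/lib/spark/conf:/path/to/hadoop-lzo.jar")
    .set("spark.driver.extraClassPath", "/usr/lib/spark/conf:/path/to/hadoop-lzo.jar")
  val sc = new SparkContext(conf)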

    At this point Spark SQL on Hive is compatible with most of Hive's syntax and UDFs, uses the Catalyst framework for SQL parsing, and runs jobs considerably faster than Hive. However, the current version still has some bugs and stability issues, so further testing will have to wait for the next stable release.
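As a quick, hedged illustration of that compatibility claim, built-in Hive UDFs and aggregates can be exercised through hql against the same placeholder table used earlier (continuing from the hiveContext created above):

  // A Hive built-in UDF (upper) and an aggregate (count) issued through hql.
  hiveContext.hql("SELECT upper(cast(id AS string)) FROM tmp.test LIMIT 5").collect().foreach(println)
  hiveContext.hql("SELECT count(*) FROM tmp.test").collect().foreach(println)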


    References

   http://spark.apache.org/docs/1.0.0/sql-programming-guide.html

   http://hsiamin.com/posts/2014/05/03/enable-lzo-compression-on-hadoop-pig-and-spark/

This article originally appeared on the "17的博客" blog; please keep this attribution: http://xiaowuliao.blog.51cto.com/3681673/1441737
