Notes on Problems Encountered While Using Kylin


 

1.1. Kylin installation problems

1.1.1.  Problem 1: Please make sure the user has the privilege to run hive shell

[root@localhost65 apache-kylin-1.6.0-bin]# bin/kylin.sh start
KYLIN_HOME is set to /usr/local/apache-kylin-1.6.0-bin
Please make sure the user has the privilege to run hive shell
[root@localhost65 apache-kylin-1.6.0-bin]#

Cause:

Look at the check-env.sh source. The -z tests there check whether each required environment variable is set, since Kylin depends on all of them. The error comes from the hive command-line check, so the hive command itself probably cannot be run.
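A minimal sketch of what this kind of check does (illustrative only — the real check-env.sh is more elaborate):

    # check that each dependency's environment variable is set
    for var in KYLIN_HOME HIVE_HOME HBASE_HOME HADOOP_HOME; do
        if [ -z "${!var}" ]; then
            echo "$var is not set" && exit 1
        fi
    done
    # the hive check boils down to: can this user actually run the hive shell?
    hive -e "show databases;" > /dev/null 2>&1 || \
        echo "Please make sure the user has the privilege to run hive shell"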

 

Solution:

Check /etc/profile, add the Hive directory to it, and re-source the file so it takes effect. Then run the hive command to verify.
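For example, with Hive installed under /usr/local/apache-hive-2.1.1-bin as in the logs above, the /etc/profile entries would look roughly like this:

    export HIVE_HOME=/usr/local/apache-hive-2.1.1-bin
    export PATH=$PATH:$HIVE_HOME/bin

    # apply the changes and verify hive runs
    source /etc/profile
    hive -e "show databases;"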

 

 

1.1.2.  Problem 2: KYLIN_REST_ADDRESS not found, will use localhost65:7070

[root@localhost65 apache-kylin-1.6.0-bin]# bin/kylin.sh start
KYLIN_HOME is set to /usr/local/apache-kylin-1.6.0-bin
kylin.security.profile is set to testing
KYLIN_HOME is set to /usr/local/apache-kylin-1.6.0-bin
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/usr/local/apache-hive-2.1.1-bin/conf/hive-log4j2.properties Async: true
HCAT_HOME not found, try to find hcatalog path from hadoop home
hive dependency: /usr/local/apache-hive-2.1.1-bin/conf:/usr/local/apache-hive-2.1.1-bin/lib/jamon-runtime-2.3.1.jar:/usr/local/apache-hive-2.1.1-bin/lib/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:/usr/local/apache-hive-2.1.1-bin/lib/geronimo-jaspic_1.0_spec-1.0.jar:/usr/local/apache-hive-2.1.1-bin/lib/netty-all-
...
KYLIN_JVM_SETTINGS is -Xms1024M -Xmx4096M -Xss1024K -XX:MaxPermSize=128M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/usr/local/apache-kylin-1.6.0-bin/logs/kylin.gc.42255 -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=64M
KYLIN_DEBUG_SETTINGS is not set, will not enable remote debuging
KYLIN_LD_LIBRARY_SETTINGS is not set, Usually it's okay unless you want to specify your own native path
KYLIN_REST_ADDRESS not found, will use localhost65:7070
hbase classpath is:
/usr/local/apache-kylin-1.6.0-bin/bin/../tomcat/bin/bootstrap.jar:/usr/local/apache-kylin-1.6.0-bin/bin/../tomcat/bin/tomcat-juli.jar:/usr/local/apache-kylin-1.6.0-bin/bin/../tomcat/lib/*:/usr/local/apache-kylin-1.6.0-bin/conf:/usr/local/apache-kylin-1.6.0-bin/lib/*:/usr/local/apache-kylin-1.6.0-bin/tool/*:/usr/local/apache-kylin-
...
A new Kylin instance is started by root, stop it using "kylin.sh stop"
Please visit http://<ip>:7070/kylin
You can check the log at /usr/local/apache-kylin-1.6.0-bin/logs/kylin.log
[root@localhost65 apache-kylin-1.6.0-bin]#

 

 

Cause analysis:

From the kylin.sh startup script:

    # additionally add tomcat libs to HBASE_CLASSPATH_PREFIX
    export HBASE_CLASSPATH_PREFIX=${tomcat_root}/bin/bootstrap.jar:${tomcat_root}/bin/tomcat-juli.jar:${tomcat_root}/lib/*:${HBASE_CLASSPATH_PREFIX}

    if [ -z "$KYLIN_REST_ADDRESS" ]
    then
        kylin_rest_address=`hostname -f`":"`grep "<Connector port=" ${tomcat_root}/conf/server.xml | grep protocol=\"HTTP/1.1\" | cut -d '=' -f 2 | cut -d \" -f 2`
        echo "KYLIN_REST_ADDRESS not found, will use ${kylin_rest_address}"
    else
        echo "KYLIN_REST_ADDRESS is set to: $KYLIN_REST_ADDRESS"
        kylin_rest_address=$KYLIN_REST_ADDRESS
    fi
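As the else branch shows, this message is informational: when KYLIN_REST_ADDRESS is unset, kylin.sh derives the address from hostname -f plus the HTTP connector port in Tomcat's server.xml. To silence it, export the variable before starting (host and port here are placeholders for your deployment):

    export KYLIN_REST_ADDRESS=localhost65:7070
    bin/kylin.sh start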

 

Root cause:

The run above used the apache-kylin-1.6.0-bin.tar.gz package. Web searches and assorted tweaks did not fix it, and neither did ruling out version incompatibilities one by one. Suspecting that some bridging package was missing — Kylin has to work with Hive, HBase, and Hadoop — I downloaded the Kylin package built against a specific HBase release, apache-kylin-1.5.0-HBase1.1.3-bin.tar.gz, and configured it from scratch. This time it started successfully; the web UI was reachable and configuration through it worked fine.

 

 

 

 

1.1.3.  Failed to find metadata store by url: kylin_metadata@hbase

2017-08-28 14:24:19,487 INFO  [localhost-startStop-1] hbase.HBaseConnection:139 : connection is null or closed, creating a new one
2017-08-28 14:24:22,931 DEBUG [localhost-startStop-1] hbase.HBaseConnection:189 : Creating HTable 'kylin_metadata'
2017-08-28 14:24:27,326 ERROR [localhost-startStop-1] persistence.ResourceStore:88 : Create new store instance failed
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
...
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: File not exist by 'kylin_metadata@hbase': /usr/local/apache-kylin-1.5.0-HBase1.1.3-bin/kylin_metadata@hbase
        at org.apache.kylin.common.persistence.FileResourceStore.<init>(FileResourceStore.java:48)
        ... 58 more
2017-08-28 14:24:27,334 ERROR [localhost-startStop-1] persistence.ResourceStore:88 : Create new store instance failed
java.lang.reflect.InvocationTargetException
...
Caused by: org.apache.hadoop.hbase.TableExistsException: kylin_metadata
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
...
        ... 58 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException): kylin_metadata
        at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:300)
...
2017-08-28 14:24:27,370 ERROR [localhost-startStop-1] context.ContextLoader:307 : Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cacheController': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private org.apache.kylin.rest.service.CacheService org.apache.kylin.rest.controller.CacheController.cacheService; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cacheService': Invocation of init method failed; nested exception is java.lang.IllegalArgumentException: Failed to find metadata store by url: kylin_metadata@hbase
        at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:287)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1106)
...
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private org.apache.kylin.rest.service.CacheService org.apache.kylin.rest.controller.CacheController.cacheService; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cacheService': Invocation of init method failed; nested exception is java.lang.IllegalArgumentException: Failed to find metadata store by url: kylin_metadata@hbase
        at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:506)
        at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
        at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:284)
        ... 26 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cacheService': Invocation of init method failed; nested exception is java.lang.IllegalArgumentException: Failed to find metadata store by url: kylin_metadata@hbase
        at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:135)
...
        ... 28 more
Caused by: java.lang.IllegalArgumentException: Failed to find metadata store by url: kylin_metadata@hbase
        at org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:90)
        at org.apache.kylin.common.persistence.ResourceStore.getStore(ResourceStore.java:101)
        at org.apache.kylin.cube.CubeManager.getStore(CubeManager.java:868)
        at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(CubeManager.java:807)
        at org.apache.kylin.cube.CubeManager.<init>(CubeManager.java:126)
        at org.apache.kylin.cube.CubeManager.getInstance(CubeManager.java:95)
        at org.apache.kylin.rest.service.CacheService.initCubeChangeListener(CacheService.java:78)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:346)
        at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMethods(InitDestroyAnnotationBeanPostProcessor.java:299)
        at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:132)
        ... 40 more

 

Still unresolved.
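One direction worth checking (an assumption on my part, since this case was left unresolved): the trace shows a TableExistsException for kylin_metadata while creating the HBase resource store, after which Kylin falls back to, and also fails with, a file store. Verifying the metadata URL and the state of the HBase table would be a reasonable first step:

    # kylin.properties -- the default metadata store setting
    kylin.metadata.url=kylin_metadata@hbase

    # in the HBase shell, check whether the table exists and is enabled
    hbase shell
    exists 'kylin_metadata'
    is_enabled 'kylin_metadata'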

 

 

1.1.4.  Error when logging in to Kylin with ADMIN/KYLIN:

Unable to login, please check your username/password.

 

Still unresolved.

 

 

 

1.1.5.  org/apache/hadoop/hive/ql/session/SessionState

1. Clicking to load the Hive tables throws the following exceptions:

java.lang.NoClassDefFoundError: org/apache/hadoop/hive/cli/CliSessionState
java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState

 

Solution: copy the jars under Hive's lib directory into Kylin's lib directory.

Looking at find-hive-dependency.sh under bin, it already builds a hive_dependency variable covering all the jars in Hive's lib, but for some reason that did not take effect here, so copying the jars was the fallback.

A suggestion seen online: edit kylin.sh in the bin directory and add hive_dependency to HBASE_CLASSPATH_PREFIX. Tested and it works — a sketch follows.
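This reuses the HBASE_CLASSPATH_PREFIX line quoted in section 1.1.2; the exact line varies between Kylin versions, so treat it as a sketch:

    # kylin.sh -- splice ${hive_dependency} into the HBase classpath prefix
    export HBASE_CLASSPATH_PREFIX=${tomcat_root}/bin/bootstrap.jar:${tomcat_root}/bin/tomcat-juli.jar:${tomcat_root}/lib/*:${hive_dependency}:${HBASE_CLASSPATH_PREFIX}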

 

 

2. With the jar problem fixed, the Hive tables still would not load (clicking the load tree just spins forever).

Check the log, e.g.: tail -f -n 200 /opt/app/apache-kylin-2.0.0-bin/logs/kylin.log

No exception showed up, which was baffling. Clicking the Reload Table button finally surfaced an error (still nothing in the backend log) saying the metastore could not be reached.

Hive had died at some point. Running `hive --service metastore &` to start the metastore service brought the Hive tables back.
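To confirm the metastore really is up, check for the process and its default Thrift port, 9083 (the port may differ if hive.metastore.uris was customized):

    ps -ef | grep -i metastore | grep -v grep
    netstat -tlnp | grep 9083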

 

 

 

To configure Kylin, the following was added to /etc/profile:

export JAVA_HOME=/opt/app/jdk1.8.0_131
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
JRE_HOME=$JAVA_HOME/jre
export HADOOP_HOME=/opt/app/hadoop-2.8.0
PATH=$PATH:$HADOOP_HOME/bin
export HIVE_HOME=/opt/app/apache-hive-2.1.1-bin
export HCAT_HOME=$HIVE_HOME/hcatalog
export HIVE_CONF=$HIVE_HOME/conf
PATH=$PATH:$HIVE_HOME/bin
export HBASE_HOME=/opt/app/Hbase-1.3.1
PATH=$PATH:$HBASE_HOME/bin
#export HIVE_CONF=/opt/app/apache-hive-2.1.1-bin/conf
#PATH=$PATH:$HIVE_HOME/bin
export KYLIN_HOME=/opt/app/apache-kylin-2.0.0-bin
PATH=$PATH:$KYLIN_HOME/bin
#export KYLIN_HOME=/opt/app/kylin/
# zookeeper env -- I used the ZooKeeper bundled with HBase, so this is unused
export ZOOKEEPER_HOME=/opt/app/zookeeper-3.4.6
export PATH=$ZOOKEEPER_HOME/bin:$PATH

 

 

1.1.6.  org.apache.hadoop.hbase.TableExistsException: kylin_metadata_user

Problem: after reformatting Hadoop and rebuilding the environment, the following exception is thrown: org.apache.hadoop.hbase.TableExistsException: kylin_metadata_user

Solution: after the HDFS format, ZooKeeper still holds stale /hbase znodes, so HBase believes kylin_metadata_user already exists. Clear them:

1. If using a standalone ZooKeeper:

Run zkCli.sh under ZooKeeper's bin directory to enter the shell; run rmr /hbase, then quit to exit (see the session sketch below).

2. If using the ZooKeeper bundled with HBase:

Enter the shell with hbase zkcli; run rmr /hbase, then quit to exit.
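A sketch of the standalone-ZooKeeper session (stop HBase first — this wipes all of HBase's znodes; on ZooKeeper 3.5+ use deleteall instead of the deprecated rmr):

    $ZOOKEEPER_HOME/bin/zkCli.sh -server localhost:2181
    rmr /hbase
    quit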

 

 

1.1.7.  File does not exist: hdfs://localhost:9000/home/hadoop/data/mapred/staging/xxxxxx

Problem: File does not exist:

hdfs://localhost:9000/home/hadoop/data/mapred/staging/root1629090427/.staging/job_local1629090427_0006/libjars/hive-metastore-1.2.1.jar

Cause:

mapred-site.xml and YARN were not configured correctly.

Also remember to start the history server: /hadoop-2.8.0/sbin/mr-jobhistory-daemon.sh start historyserver
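Note the job_local* ID in the staging path above: the job ran in Hadoop's LocalJobRunner, which is what happens when mapreduce.framework.name is not set to yarn. A minimal mapred-site.xml sketch (host names are placeholders — adjust to your cluster):

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>localhost:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>localhost:19888</value>
</property>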

 

1.1.8.  Kylin: Extract Fact Table Distinct Columns fails with java.net.ConnectException: Connection refused

During the "Extract Fact Table Distinct Columns" step, Kylin reports java.net.ConnectException: Connection refused.

Solution:

Start the history server: sbin/mr-jobhistory-daemon.sh start historyserver
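This Kylin step asks the MapReduce job history server for job status, so the refused connection is typically its RPC port (10020 by default). A quick check after starting it, assuming default ports:

    jps | grep JobHistoryServer
    netstat -tlnp | grep 10020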

 

1.1.9.  Problem: the cube build failed, producing the following log

Counters: 41
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=310413
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1700
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=4
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
        Job Counters
                Failed map tasks=1
                Failed reduce tasks=6
                Killed reduce tasks=3
                Launched map tasks=2
                Launched reduce tasks=9
                Other local map tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=10602
                Total time spent by all reduces in occupied slots (ms)=76818
                Total time spent by all map tasks (ms)=10602
                Total time spent by all reduce tasks (ms)=76818
                Total vcore-seconds taken by all map tasks=10602
                Total vcore-seconds taken by all reduce tasks=76818
                Total megabyte-seconds taken by all map tasks=10856448
                Total megabyte-seconds taken by all reduce tasks=78661632
        Map-Reduce Framework
                Map input records=0
                Map output records=3
                Map output bytes=42
                Map output materialized bytes=104
                Input split bytes=1570
                Combine input records=3
                Combine output records=3
                Spilled Records=3
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=168
                CPU time spent (ms)=3790
                Physical memory (bytes) snapshot=356413440
                Virtual memory (bytes) snapshot=2139308032
                Total committed heap usage (bytes)=195035136
        File Input Format Counters
                Bytes Read=0

 

Solution: add the following configuration to mapred-site.xml:

<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xms2000m -Xmx4600m</value>
</property>
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>5120</value>
</property>
<property>
    <name>mapreduce.reduce.input.buffer.percent</name>
    <value>0.5</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
</property>
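Note: as written, the reducer heap (-Xmx4600m) is larger than mapreduce.reduce.memory.mb (2048), and YARN kills containers that exceed their memory limit; the JVM -Xmx is conventionally kept at roughly 80% of the container's memory.mb. Treat the values above as starting points to tune together for your cluster.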

 

 

 

1.1.10.  Problem: the cube build failed with java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.

...
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1503968550530_0005, Tracking URL = http://localhost65:8088/proxy/application_1503968550530_0005/
Kill Command = /usr/local/hadoop-2.6.2/bin/hadoop job -kill job_1503968550530_0005
Hadoop job information for Stage-11: number of mappers: 1; number of reducers: 0
2017-08-29 11:38:51,191 Stage-11 map = 0%,  reduce = 0%
2017-08-29 11:39:18,309 Stage-11 map = 100%,  reduce = 0%
2017-08-29 11:39:19,393 Stage-11 map = 0%,  reduce = 0%
2017-08-29 11:39:58,126 Stage-11 map = 100%,  reduce = 0%
Ended Job = job_1503968550530_0005 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1503968550530_0005_m_000000 (and more) from job job_1503968550530_0005

 

Task with the most failures(4):

-----

Task ID:

 task_1503968550530_0005_m_000000

 

URL:

 http://localhost65:8088/taskdetails.jsp?jobid=job_1503968550530_0005&tipid=task_1503968550530_0005_m_000000

-----

Diagnostic Messages for this Task:

Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"cal_dt":"2012-08-16","year_beg_dt":"2012-01-01","qtr_beg_dt":"2012-07-01","month_beg_dt":"2012-08-01","week_beg_dt":"2012-08-11","age_for_year_id":0,"age_for_qtr_id":-1,"age_for_month_id":-3,"age_for_week_id":-15,"age_for_dt_id":-103,"age_for_rtl_year_id":0,"age_for_rtl_qtr_id":-1,"age_for_rtl_month_id":-4,"age_for_rtl_week_id":-15,"age_for_cs_week_id":-15,"day_of_cal_id":41501,"day_of_year_id":228,"day_of_qtr_id":47,"day_of_month_id":16,"day_of_week_id":6,"week_of_year_id":33,"week_of_cal_id":5928,"month_of_qtr_id":2,"month_of_year_id":8,"month_of_cal_id":1364,"qtr_of_year_id":3,"qtr_of_cal_id":455,"year_of_cal_id":114,"year_end_dt":"2012-12-31","qtr_end_dt":"2012-09-30","month_end_dt":"2012-08-31","week_end_dt":"2012-08-17","cal_dt_name":"16-Aug-2012","cal_dt_desc":"Aug 16th 2012","cal_dt_short_name":"Fri 08-16-13","ytd_yn_id":1,"qtd_yn_id":0,"mtd_yn_id":0,"wtd_yn_id":0,"season_beg_dt":"2012-06-21","day_in_year_count":365,"day_in_qtr_count":92,"day_in_month_count":31,"day_in_week_count":7,"rtl_year_beg_dt":"2012-12-30","rtl_qtr_beg_dt":"2012-06-30","rtl_month_beg_dt":"2012-07-28","rtl_week_beg_dt":"2012-08-11","cs_week_beg_dt":"2012-08-12","cal_date":"2012-08-16","day_of_week":"Fri      ","month_id":"2012M08","prd_desc":"Aug-2012","prd_flag":"N","prd_id":"2012M08  ","prd_ind":"N","qtr_desc":"Year 2012 - Quarter 03","qtr_id":"2012Q03   ","qtr_ind":"N","retail_week":"33","retail_year":"2012","retail_start_date":"2012-08-11","retail_wk_end_date":"2012-08-17","week_ind":"N","week_num_desc":"Wk.33 - 13","week_beg_date":"2012-08-11 00:00:00","week_end_date":"2012-08-17 00:00:00","week_in_year_id":"2012W33  ","week_id":"2012W33  ","week_beg_end_desc_mdy":"08/11/13 - 08/17/13","week_beg_end_desc_md":"08/11-08/17","year_id":"2012","year_ind":"N","cal_dt_mns_1year_dt":"2012-08-16","cal_dt_mns_2year_dt":"2011-08-16","cal_dt_mns_1qtr_dt":"2012-05-16","cal_dt_mns_2qtr_dt":"2012-02-16","cal_dt_mns_1month_dt":"2012-07-16","cal_dt_mns_2month_dt":"2012-06-16","cal_dt_mns_1week_dt":"2012-08-09","cal_dt_mns_2week_dt":"2012-08-02","curr_cal_dt_mns_1year_yn_id":0,"curr_cal_dt_mns_2year_yn_id":0,"curr_cal_dt_mns_1qtr_yn_id":0,"curr_cal_dt_mns_2qtr_yn_id":0,"curr_cal_dt_mns_1month_yn_id":0,"curr_cal_dt_mns_2month_yn_id":0,"curr_cal_dt_mns_1week_yn_ind":0,"curr_cal_dt_mns_2week_yn_ind":0,"rtl_month_of_rtl_year_id":"8","rtl_qtr_of_rtl_year_id":3,"rtl_week_of_rtl_year_id":33,"season_of_year_id":3,"ytm_yn_id":1,"ytq_yn_id":1,"ytw_yn_id":1,"cre_date":"2005-09-07","cre_user":"USER_X  ","upd_date":"2012-11-27 00:16:56","upd_user":"USER_X"}

        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:169)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"cal_dt":"2012-08-16","year_beg_dt":"2012-01-01","qtr_beg_dt":"2012-07-01","month_beg_dt":"2012-08-01","week_beg_dt":"2012-08-21,
...
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:499)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:160)
        ... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected exception from MapJoinOperator : Unexpected exception from MapJoinOperator : org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:465)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
        at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:126)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
        at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
        at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:149)
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:489)
        ... 9 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected exception from MapJoinOperator : org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:465)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:647)
        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:660)
        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:663)
        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:759)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:452)
        ... 17 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:564)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:663)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:879)
        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:647)
        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:679)
        at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:757)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:452)
        ... 23 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
        at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:272)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:609)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:553)
        ... 31 more
Caused by: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:64)
        at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:133)
        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:148)
        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:163)
        at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1261)
        at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1154)
        at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1509)
        at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
        at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:528)
        at org.apache.hadoop.hive.ql.exec.Utilities.createSequenceWriter(Utilities.java:993)
        at org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat.getHiveRecordWriter(HiveSequenceFileOutputFormat.java:64)
        at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:284)
        at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:269)
        ... 33 more

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-11: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

 

Root cause:

The compression formats commonly used with Hadoop are LZO, gzip, Snappy, and bzip2, but this build of libhadoop was compiled without Snappy support, so the native Snappy library has to be brought in separately.

 

Solution:

Install the hadoop-native libraries with Snappy support, then add the following configuration to mapred-site.xml to enable Snappy compression:

<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
    <final>true</final>
</property>
<property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
    <final>true</final>
</property>

Note: restart the Hadoop services after the configuration is in place.
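Hadoop itself can also report whether it sees the native Snappy library (output varies slightly across versions):

    hadoop checknative -a
    # a healthy result includes a line like:
    # snappy:  true /usr/local/hadoop-2.6.2/lib/native/libsnappy.so.1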

Test in HBase whether Snappy compression is supported:

[root@localhost65 hbase-1.3.1]# hbase org.apache.hadoop.hbase.util.CompressionTest file:///PATH-TO-A-LOCAL-TMP-FILE snappy
2017-08-31 11:13:32,533 INFO  [main] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
2017-08-31 11:13:32,781 INFO  [main] compress.CodecPool: Got brand-new compressor [.snappy]
2017-08-31 11:13:32,793 INFO  [main] compress.CodecPool: Got brand-new compressor [.snappy]
2017-08-31 11:13:33,573 INFO  [main] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
2017-08-31 11:13:33,612 INFO  [main] compress.CodecPool: Got brand-new decompressor [.snappy]
SUCCESS   # SUCCESS here means HBase supports the Snappy compression format
[root@localhost65 hbase-1.3.1]#

Reference: https://my.oschina.net/muou/blog/414188