Hive reports "Execution failed with exit status: 3" when running a SQL job

Error message:
Ended Job = job_1512373388022_42906
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apps/apache-hive-2.0.0-bin/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/apps/apache-hive-2.0.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/apps/spark-1.6.1-bin-hadoop2.7/lib/spark-assembly-1.6.1-hadoop2.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/apps/hbase-1.1.1/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/apps/hadoop-2.7.2-1.2.11/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Execution log at: /tmp/hadoop/hadoop_20171211173457_c58e4059-5169-4f57-9c98-ebd62a4040cc.log
2017-12-12 10:04:32     Starting to launch local task to process map join;      maximum memory = 508559360
2017-12-12 10:04:34     Processing rows:        200000  Hashtable size: 199999  Memory usage:   109455336       percentage:     0.229
2017-12-12 10:04:34     Processing rows:        300000  Hashtable size: 299999  Memory usage:   136518976       percentage:     0.286
2017-12-12 10:04:34     Processing rows:        400000  Hashtable size: 399999  Memory usage:   167433248       percentage:     0.351
2017-12-12 10:04:35     Processing rows:        500000  Hashtable size: 499999  Memory usage:   196987280       percentage:     0.412
2017-12-12 10:04:35     Processing rows:        600000  Hashtable size: 599999  Memory usage:   223666240       percentage:     0.468
2017-12-12 10:04:36     Processing rows:        700000  Hashtable size: 699999  Memory usage:   236396720       percentage:     0.495
2017-12-12 10:04:36     Processing rows:        800000  Hashtable size: 799999  Memory usage:   272007016       percentage:     0.569
Execution failed with exit status: 3
Obtaining error information
Task failed!
Task ID:  Stage-12
Logs:
/tmp/hadoop/hive.log
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
MapReduce Jobs Launched:
Stage-Stage-11: Map: 14   Cumulative CPU: 48.83 sec   HDFS Read: 623307 HDFS Write: 47376 SUCCESS
Stage-Stage-2: Map: 17  Reduce: 2   Cumulative CPU: 189.43 sec   HDFS Read: 356438739 HDFS Write: 67873 SUCCESS
Total MapReduce CPU Time Spent: 3 minutes 58 seconds 260 msec
This problem is generally caused by insufficient memory: the local task that builds the map-join hash table runs out of heap.
Official FAQ:
Execution failed with exit status: 3
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

Hive converted a join into a locally running and faster 'mapjoin', but ran out of memory while doing so. There are two bugs responsible for this.

Bug 1) Hive's metric for converting joins miscalculates the required amount of memory. This is especially true for compressed files and ORC files, because Hive uses the file size as the metric, but compressed tables require more memory in their uncompressed in-memory representation. You can simply decrease 'hive.smalltable.filesize' to tune the metric, or increase 'hive.mapred.local.mem' to allow the allocation of more memory for map tasks. The latter option may trigger bug number two if you happen to have an affected Hadoop version.

Bug 2) Hive/Hadoop ignores 'hive.mapred.local.mem'! (More exactly: a bug in Hadoop 2.2 where hadoop-env.cmd sets the -Xmx parameter multiple times, effectively overriding the user-set hive.mapred.local.mem setting; see https://issues.apache.org/jira/browse/HADOOP-10245.)

There are 3 workarounds for this bug:
1) Assign more memory to the local(!) Hadoop JVM client (this is not(!) mapred.map.memory), because the map-join child JVM inherits the parent JVM's settings. In Cloudera Manager, click on the "hive" service, then on the hive service page click on "Configuration": Gateway base group --(expand)--> Resource Management -> Client Java Heap Size in Bytes -> 1 GB.
2) Reduce "hive.smalltable.filesize" to ~1 MB or below (depends on your cluster settings for the local JVM).
3) Turn off "hive.auto.convert.join" to prevent Hive from converting joins to map joins.

2) and 3) can be set in Big-Bench/engines/hive/conf/hiveSettings.sql.
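As a rough sketch, workarounds 2) and 3) translate into two session-level settings such as the ones below. The property names are taken verbatim from the FAQ, and the 1 MB value is only illustrative; on newer Hive releases the threshold is exposed as hive.mapjoin.smalltable.filesize, so check the configuration reference for your version.

-- Workaround 2): lower the small-table threshold so fewer joins are converted to map joins
set hive.smalltable.filesize=1048576;
-- Workaround 3): disable automatic map-join conversion entirely
set hive.auto.convert.join=false;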
Solution:
set hive.auto.convert.join=false; — disables automatic conversion to MapJoin; the default is true.
set hive.ignore.mapjoin.hint=false; — stops ignoring MAPJOIN hints (i.e. hints take effect); the default is true (hints are ignored).

hive> set hive.auto.convert.join=false;
Then re-run the SQL.
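A full session might look like the sketch below; the table and column names (orders, customers, and so on) are hypothetical and only stand in for whatever join originally failed:

hive> set hive.auto.convert.join=false;
hive> set hive.ignore.mapjoin.hint=false;
hive> SELECT o.order_id, c.customer_name
    > FROM orders o
    > JOIN customers c ON o.customer_id = c.customer_id;

With hive.auto.convert.join=false the join runs as an ordinary shuffle join in MapReduce instead of a local MapredLocalTask, which avoids the exit-status-3 failure at the cost of a usually slower join.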