Hive ERROR: Out of memory due to hash maps used in map-side aggregation


When executing the following HiveQL:

  select collect_set(messageDate)[0], count(*) from incidents_hive group by substr(messageDate,8,2);

the job failed with the following error:

URL:
  http://RDCMaster.cluster:50030/taskdetails.jsp?jobid=job_201403041024_0002&tipid=task_201403041024_0002_m_000197

Possible error:
  Out of memory due to hash maps used in map-side aggregation.

Solution:
  Currently hive.map.aggr.hash.percentmemory is set to 0.5. Try setting it to a lower value. i.e 'set hive.map.aggr.hash.percentmemory = 0.25;'
-----
Diagnostic Messages for this Task:
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 65.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 433  Reduce: 20   Cumulative CPU: 12732.44 sec   HDFS Read: 67006 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 days 3 hours 32 minutes 12 seconds 440 msec

Cause: during map-side aggregation, the in-memory hash table may by default use up to 50% of the map task's heap (hive.map.aggr.hash.percentmemory = 0.5). Here the hash table outgrew the memory actually available, so the task ran out of memory.

Solution:

Following the suggestion in the error message ('set hive.map.aggr.hash.percentmemory = 0.25;') did not solve the problem.

What that setting means: once the in-memory hash map used for aggregation grows to 25% of the map task's JVM heap (the default threshold is 50%), its contents are flushed to the reducers to free up memory.
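For reference, here is what applying that suggestion looks like in a Hive session before re-running the original query (a sketch; both parameters are standard Hive settings, and the 0.25 value is the one proposed by the error message). In this case it was not enough:

  set hive.map.aggr = true;                      -- map-side aggregation (on by default)
  set hive.map.aggr.hash.percentmemory = 0.25;   -- flush the hash table at 25% of map-task heap

  select collect_set(messageDate)[0], count(*)
  from incidents_hive
  group by substr(messageDate,8,2);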

I rewrote the HiveQL as:

select hourNum, count(1)
from (select substr(messageDate,9,2) as hourNum from incidents_hive) t
group by hourNum;

Problem solved!
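A likely reason the rewrite works (my reading, not stated in the error message): collect_set(messageDate) accumulates every distinct messageDate per group inside the map-side hash table, while count(1) only keeps a running counter. If a representative date per group is still needed, a streaming aggregate such as min() avoids materializing a per-group array; a sketch against the same table:

  -- min() is computed incrementally, so no per-group array is built
  -- the way collect_set() builds one.
  select hourNum, min(messageDate) as sampleDate, count(1)
  from (select messageDate, substr(messageDate,9,2) as hourNum
        from incidents_hive) t
  group by hourNum;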

Other solutions:

Alternatively, simply turn off map-side aggregation altogether (not recommended):

set hive.map.aggr=false;

Or let Hive disable map-side aggregation automatically when it is not actually reducing the data, i.e. when the hash table holds too many distinct keys relative to the rows processed:

set hive.map.aggr.hash.min.reduction=0.5;
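As a sketch of how this check behaves (hive.groupby.mapaggr.checkinterval is a standard Hive setting whose usual default is 100000; the description of the mechanics is my summary, not a quote from the post):

  set hive.groupby.mapaggr.checkinterval = 100000;  -- rows to sample before the check
  set hive.map.aggr.hash.min.reduction = 0.5;       -- if distinct keys / sampled rows exceeds 0.5,
                                                    -- hash aggregation is switched off for the task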


