Fixing Hadoop jobs that hang after submitting a jar


Please credit the source when reposting: http://blog.csdn.net/gamer_gyt
Author's Weibo: http://weibo.com/234654758
Github: https://github.com/thinkgamer


A few words up front

To be honest, this is a frustrating problem. In all my earlier work with Hadoop, whether pseudo-distributed or a full cluster, I had never paid attention to memory allocation, that is, how memory is assigned when a job runs. Today I finally ran into it and spent a long time sorting it out.

Error description

When running a jar, the job usually hangs at one of two points.
Case 1: the job is never submitted to the cluster

[breakpad@master hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/22 12:12:15 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.162.89:8032
16/09/22 12:12:16 INFO input.FileInputFormat: Total input paths to process : 1
16/09/22 12:12:16 INFO mapreduce.JobSubmitter: number of splits:1
16/09/22 12:12:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474517485267_0001
16/09/22 12:12:17 INFO impl.YarnClientImpl: Submitted application application_1474517485267_0001
16/09/22 12:12:17 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474517485267_0001/
16/09/22 12:12:17 INFO mapreduce.Job: Running job: job_1474517485267_0001
16/09/22 12:12:25 INFO mapreduce.Job: Job job_1474517485267_0001 running in uber mode : false

Case 2: the job is submitted to the cluster but never makes progress

[breakpad@master hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/22 12:12:15 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.162.89:8032
16/09/22 12:12:16 INFO input.FileInputFormat: Total input paths to process : 1
16/09/22 12:12:16 INFO mapreduce.JobSubmitter: number of splits:1
16/09/22 12:12:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474517485267_0001
16/09/22 12:12:17 INFO impl.YarnClientImpl: Submitted application application_1474517485267_0001
16/09/22 12:12:17 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474517485267_0001/
16/09/22 12:12:17 INFO mapreduce.Job: Running job: job_1474517485267_0001
16/09/22 12:12:25 INFO mapreduce.Job: Job job_1474517485267_0001 running in uber mode : false
16/09/22 12:12:25 INFO mapreduce.Job:  map 0% reduce 0%

Solution

Both failure modes have the same root cause: when the jar runs, the nodes do not have enough memory allocated for it, and no minimum or maximum allocation has been specified.
The official documentation lists three relevant properties for yarn-site.xml:

yarn.nodemanager.resource.memory-mb (default: 8192)
    Amount of physical memory, in MB, that can be allocated for containers.
yarn.scheduler.minimum-allocation-mb (default: 1024)
    The minimum allocation for every container request at the RM, in MBs. Memory requests lower than this will throw an InvalidResourceRequestException.
yarn.nodemanager.vmem-pmem-ratio (default: 2.1)
    Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio.
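To see how these three settings interact, here is a small Python sketch of the arithmetic (my own illustration, not a YARN API; the function names simply mirror the property names, and the numbers are the defaults listed above):

```python
# Hypothetical helpers illustrating the yarn-site.xml memory properties.

def max_containers(resource_memory_mb, min_allocation_mb):
    # A NodeManager can host at most this many minimum-sized containers
    # (yarn.nodemanager.resource.memory-mb / yarn.scheduler.minimum-allocation-mb).
    return resource_memory_mb // min_allocation_mb

def vmem_limit_mb(container_mb, vmem_pmem_ratio):
    # A container's virtual memory may exceed its physical allocation
    # by yarn.nodemanager.vmem-pmem-ratio before the NM kills it.
    return container_mb * vmem_pmem_ratio

# With the defaults (8192 MB per node, 1024 MB minimum, ratio 2.1):
print(max_containers(8192, 1024))   # 8 minimum-sized containers per node
print(vmem_limit_mb(1024, 2.1))     # 2150.4 MB virtual-memory cap
```

If a node physically has less memory than yarn.nodemanager.resource.memory-mb claims, or fewer than one minimum allocation fits, containers cannot be granted and the job sits waiting, which matches the hang described above.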

Here we add the following configuration to the cluster's yarn-site.xml:

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
</property>
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>
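One side effect of yarn.scheduler.minimum-allocation-mb worth knowing (my addition, not from the original post): with the default schedulers, the ResourceManager rounds every container request up to at least the minimum, and in effect to a multiple of it, so with 2048 MB even a small request occupies a full 2048 MB slot. A sketch of that rounding:

```python
import math

def normalize_request(requested_mb, min_allocation_mb):
    # Illustrative only: requests below the minimum are raised to the
    # minimum; larger requests are rounded up to a multiple of it.
    return max(min_allocation_mb,
               math.ceil(requested_mb / min_allocation_mb) * min_allocation_mb)

print(normalize_request(1000, 2048))  # 2048
print(normalize_request(2500, 2048))  # 4096
```

This is why the minimum should not be set larger than what your nodes can actually grant: with 4096 MB per node and a 2048 MB minimum, at most two containers fit on each node.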

Restart the cluster and re-run the jar; the job should now proceed.
