Handling some common Hadoop exceptions


2016-12-31 22:39:45,304 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: NameNode/192.168.174.128:9090. Already tried 9 time(s).
2016-12-31 22:39:46,314 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to NameNode/192.168.174.128:9090 failed on local exception: java.net.NoRouteToHostException: No route to host
at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at com.sun.proxy.$Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
Caused by: java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
at org.apache.hadoop.ipc.Client.call(Client.java:720)
... 13 more
2016-12-31 22:39:46,316 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/**************************************************
SHUTDOWN_MSG: Shutting down DataNode at DataNode_01/192.168.174.129
**************************************************/

Solution: the DataNode cannot reach the NameNode because Linux firewall rules on the nodes block master-slave communication. Simply stop the Linux firewall: sudo service iptables stop (run it once on every node); a short command sketch follows below.
Left unresolved, this problem causes the DataNode process to shut itself down shortly after starting.
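
A minimal command sketch, assuming CentOS/RHEL-style nodes where the firewall runs as the iptables service (on systemd-based systems the service is typically firewalld and the commands differ):

# Verify from the DataNode host that the NameNode RPC port is reachable
telnet 192.168.174.128 9090
# Stop the firewall for the current session (run on every node)
sudo service iptables stop
# Optionally keep it from starting again after a reboot
sudo chkconfig iptables off
# Rough equivalent on systemd-based distributions:
#   sudo systemctl stop firewalld
#   sudo systemctl disable firewalld

Disabling the firewall outright is the quickest fix on an isolated test cluster; on anything exposed, opening only the Hadoop ports between the nodes is the safer choice.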

After Hadoop starts, the DataNode on the slave nodes fails to come up properly

Solution: check the data storage directory configured in hdfs-site.xml; if the configured directory does not exist, the DataNode will not start (see the example below).
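
A hedged example of the relevant hdfs-site.xml entry; the property is dfs.data.dir on Hadoop 1.x and dfs.datanode.data.dir on 2.x and later, and the path shown is only a placeholder. The directory must exist and be writable by the user running the DataNode on every slave node:

<property>
  <!-- dfs.datanode.data.dir on Hadoop 2.x and later -->
  <name>dfs.data.dir</name>
  <!-- placeholder path; create it and check its permissions on each slave -->
  <value>/data/hadoop/hdfs/data</value>
</property>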

When running a MapReduce job on Hadoop, the map phase completes but the reduce phase inexplicably throws NullPointerException and similar errors, even though the code and configuration are fine

Solution: the hostnames of the cluster nodes may be badly formed; they must not contain an underscore ('_'). Keep this in mind (a quick check is sketched below).
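
A quick check, sketched with placeholder hostnames and addresses: confirm the hostname on each node and the mappings in /etc/hosts, and rename any host whose name contains an underscore.

# Show the current hostname on each node
hostname
# Inspect the name-to-address mappings the cluster uses
cat /etc/hosts
# Problematic entry (underscore in the hostname), shown here as an example:
#   192.168.174.129   data_node01
# Rename it to an underscore-free form, e.g.:
#   192.168.174.129   datanode01

After renaming, update /etc/hosts, the masters/slaves files, and any hostname-based entries in core-site.xml and mapred-site.xml on every node, then restart the cluster.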
