How I Fixed a DataNode That Wouldn't Start and a TaskTracker That Kept Disappearing


I have recently been teaching myself Hadoop and followed a tutorial to set up a small distributed cluster: one master node and two slave nodes. After starting Hadoop, the slaves had no DataNode running, and the TaskTracker vanished on its own shortly afterwards, as the two jps runs below show:

[root@slave1 logs]# jps
9927 TaskTracker
9979 Jps
[root@slave1 logs]# jps
9992 Jps
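
For comparison, it is worth running jps on the master as well. On a healthy Hadoop 1.x cluster the master should list NameNode, SecondaryNameNode, and JobTracker; the PIDs below are illustrative:

[root@master ~]# jps
2481 NameNode
2672 SecondaryNameNode
2757 JobTracker
2893 Jps

If all of the master daemons are up, the problem is confined to the slave side.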


The natural next step was to check the log files, in the logs directory under the Hadoop installation:

[root@slave1 logs]# ls
hadoop-root-datanode-slave1.log             hadoop-root-datanode-slave1.out.3              hadoop-root-tasktracker-slave1.out
hadoop-root-datanode-slave1.log.2017-10-30  hadoop-root-datanode-slave1.out.4              hadoop-root-tasktracker-slave1.out.1
hadoop-root-datanode-slave1.log.2017-10-31  hadoop-root-datanode-slave1.out.5              hadoop-root-tasktracker-slave1.out.2
hadoop-root-datanode-slave1.log.2017-11-01  hadoop-root-tasktracker-slave1.log             hadoop-root-tasktracker-slave1.out.3
hadoop-root-datanode-slave1.log.2017-11-02  hadoop-root-tasktracker-slave1.log.2017-10-30  hadoop-root-tasktracker-slave1.out.4
hadoop-root-datanode-slave1.out             hadoop-root-tasktracker-slave1.log.2017-10-31  hadoop-root-tasktracker-slave1.out.5
hadoop-root-datanode-slave1.out.1           hadoop-root-tasktracker-slave1.log.2017-11-01
hadoop-root-datanode-slave1.out.2           hadoop-root-tasktracker-slave1.log.2017-11-02
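
If it is ever unclear which file is current, sorting by modification time puts the newest log first:

[root@slave1 logs]# ls -lt | head -n 5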


The logs are rotated daily; the .log file without a date suffix is today's:

[root@slave1 logs]# cat hadoop-root-datanode-slave1.log
2017-11-03 01:08:07,846 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = slave1/192.168.132.11
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.6.0_45
************************************************************/
2017-11-03 01:08:08,326 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-11-03 01:08:08,361 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2017-11-03 01:08:08,362 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-11-03 01:08:08,362 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-11-03 01:08:08,612 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-11-03 01:08:10,218 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:11,219 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:12,223 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:14,038 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:15,086 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:16,101 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:17,128 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:18,135 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:19,137 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:20,140 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.132.10:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-11-03 01:08:20,147 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to master/192.168.132.10:9000 failed on local exception: java.net.NoRouteToHostException: No route to host
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
        at org.apache.hadoop.ipc.Client.call(Client.java:1118)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy5.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:374)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:453)
        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:335)
        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:300)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
Caused by: java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
        at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
        at org.apache.hadoop.ipc.Client.call(Client.java:1093)
        ... 16 more
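
The root of the trace is java.net.NoRouteToHostException, a network-level failure rather than a Hadoop misconfiguration. As a quick sanity check from the slave, you can probe the master's RPC port (a sketch: telnet may need to be installed first, and 9000 is the NameNode port configured in fs.default.name here):

[root@slave1 ~]# ping -c 3 master
[root@slave1 ~]# telnet master 9000

If the host responds but the port probe is rejected or times out, something between the machines is filtering traffic.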

The exception makes the cause plain: the DataNode cannot reach master/192.168.132.10:9000. Another round of searching pointed at the firewall: iptables was still running on the master and blocking the port. Stop it:

# service iptables stop
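
To confirm it actually stopped, check the service status; on CentOS/RHEL 6 this prints the current rule set, or a note that the firewall is not running:

[root@master ~]# service iptables status

If the slave nodes run iptables as well, stop it on each of them too.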

Stopping the service this way only lasts until the next reboot, though. Since this machine is purely for learning, I disabled the firewall permanently:

[root@master bin]# chkconfig iptables off

It can of course be re-enabled later with:

[root@master bin]# chkconfig iptables on
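
A quick way to verify the change took effect is to list the runlevels the service is registered for; after chkconfig iptables off, every runlevel should read "off":

[root@master bin]# chkconfig --list iptables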


With the firewall out of the way, I restarted Hadoop, the problem was gone, and the environment setup was complete.
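
For reference, a minimal restart-and-verify sequence, assuming the Hadoop 1.x start/stop scripts in bin/ and passwordless SSH between the nodes as in the tutorial setup:

[root@master bin]# ./stop-all.sh
[root@master bin]# ./start-all.sh
[root@slave1 ~]# jps

This time jps on the slave should list both DataNode and TaskTracker, and they should still be there a minute later.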

Whenever something goes wrong, check the log files first; that is how you pinpoint the cause and fix problems quickly.
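
On a busy log, filtering by level saves a lot of scrolling, for example:

[root@slave1 logs]# grep -E 'ERROR|FATAL' hadoop-root-datanode-slave1.log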
