Hadoop error: hdfs.server.datanode.DataNode: Problem connecting to server: localhost/127.0.0.1:8020

Source: Internet | Editor: 程序博客网 | Time: 2024/06/05 17:19

I have already set up a pseudo-distributed Hadoop installation on my Ubuntu 13.10 machine, and now I want to run a basic test program.

First, my computer's hostname is Tank, and my account on it is joe.

Hadoop is installed locally at /usr/local/hadoop-2.4.0.

My single-machine pseudo-distributed configuration follows steps 1, 2, and 3 of this article: http://www.csdn123.com/html/itweb/20130801/34361_34373_34414.htm. The only difference is in step 3: where that article uses dfs.name.dir I use dfs.namenode.name.dir, and where it uses dfs.data.dir I use dfs.datanode.data.dir (the properties were renamed in Hadoop 2.x).
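For reference, the renamed properties go in hdfs-site.xml and, in a layout like this one, might look as follows (the file: paths are just examples matching this install directory, not required values):

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <!-- Hadoop 2.x name; was dfs.name.dir in 1.x -->
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop-2.4.0/dfs/name</value>
  </property>
  <property>
    <!-- Hadoop 2.x name; was dfs.data.dir in 1.x -->
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop-2.4.0/dfs/data</value>
  </property>
</configuration>
```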

This turned into a chain of problems that took a whole evening to debug. To fix an earlier issue, I thought I was being clever and deleted /usr/local/hadoop-2.4.0/dfs. I assumed Hadoop would recreate it on the next start. It did recreate dfs with a name directory inside, and I then manually added a data directory myself. But the logs under /usr/local/hadoop-2.4.0/logs kept showing this error:

2014-07-13 22:15:52,014 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:15:53,015 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:15:54,016 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:15:55,016 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:15:56,017 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:15:57,018 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:15:58,019 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:15:59,020 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:16:00,020 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:16:01,021 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-07-13 22:16:01,023 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: localhost/127.0.0.1:8020
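Before re-formatting anything, it is worth confirming what these retries actually mean: the datanode cannot reach a namenode on port 8020, which in a pseudo-distributed setup almost always means the NameNode process is not running at all. A quick check, sketched under the assumptions of this post (fs.defaultFS on localhost:8020, Hadoop in /usr/local/hadoop-2.4.0):

```shell
#!/bin/sh
# Sketch of a quick health check; port and paths assume the setup in this post.
PORT=8020

# Is a NameNode JVM running at all? (jps ships with the JDK.)
if command -v jps >/dev/null 2>&1; then
  jps | grep -q NameNode && echo "NameNode process found" \
                         || echo "no NameNode process running"
else
  echo "jps not on PATH; is \$JAVA_HOME/bin on PATH?"
fi

# Is anything listening on the RPC port?
if netstat -tln 2>/dev/null | grep -q ":$PORT "; then
  echo "something is listening on port $PORT"
else
  echo "nothing listening on port $PORT -- clients will get connection refused"
fi

# If the NameNode is down, its own log usually says why it exited:
# tail -n 50 /usr/local/hadoop-2.4.0/logs/hadoop-*-namenode-*.log
```

If jps shows no NameNode and nothing is listening on 8020, the retries in the datanode log are only a symptom; the real error is in the namenode log.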

I tried many things and none of them worked. What finally fixed it: shut Hadoop down completely, then re-format. Since this is a pseudo-distributed setup and HDFS held nothing I needed, I could wipe both sides without losing anything:

$ hadoop namenode -format

(In Hadoop 2.x this form is deprecated; hdfs namenode -format is the current spelling.) Note that there is no "hadoop datanode -format" command; a datanode is not formatted. The equivalent step is to clear the datanode's storage directory so it re-registers with the freshly formatted namenode:

$ rm -rf /usr/local/hadoop-2.4.0/dfs/data/*

After that, restarting Hadoop worked normally.
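One more thing worth checking after any re-format: formatting writes a fresh clusterID into the namenode's VERSION file, and a datanode whose storage directory still carries the old clusterID will refuse to join. A sketch of the comparison, assuming the dfs paths used in this post (the VERSION files exist only after each daemon has run at least once):

```shell
#!/bin/sh
# Compare the clusterID recorded by the namenode and the datanode.
# Paths assume the layout used in this post.
NAME_VERSION=/usr/local/hadoop-2.4.0/dfs/name/current/VERSION
DATA_VERSION=/usr/local/hadoop-2.4.0/dfs/data/current/VERSION

name_id=$(grep '^clusterID=' "$NAME_VERSION" 2>/dev/null | cut -d= -f2)
data_id=$(grep '^clusterID=' "$DATA_VERSION" 2>/dev/null | cut -d= -f2)

if [ -n "$name_id" ] && [ "$name_id" = "$data_id" ]; then
  echo "clusterIDs match: $name_id"
else
  echo "clusterID mismatch or missing VERSION file:"
  echo "  namenode: ${name_id:-<none>}"
  echo "  datanode: ${data_id:-<none>}"
  echo "fix: stop hadoop, clear the data dir contents, start again"
fi
```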






