Error when connecting Eclipse to Hadoop 2.5.1: Call From roo/10.30.12.xxx to hostname1:9000 failed on connection exception


    Problem: when connecting to Hadoop 2.5.1 from Eclipse (Mars.2 Release (4.5.2)), the following error appears: Error: Call From roo/10.30.12.xxx to hostname1:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; ...


    Cause: the DFS Master port configured in hdfs-site.xml differs from the port Eclipse uses to connect to HDFS. The hdfs-site.xml configuration is as follows:

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>lida1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>lida2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>lida1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>lida2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://lida2:8485;lida3:8485;lida4:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
The NameNode RPC port configured in hdfs-site.xml is 8020, but I entered 9000 as the port when connecting Eclipse to HDFS, so the connection from the local Eclipse was naturally refused. A minimal client sketch illustrating the correct port follows below.
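
For reference, this is a minimal sketch of a Java HDFS client that points at the RPC address taken from the configuration above (lida1:8020 instead of 9000). The class name HdfsPortCheck and the listing of the root directory are my own illustration, not part of the original setup:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPortCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Use the NameNode RPC address from dfs.namenode.rpc-address.mycluster.nn1
        // (lida1:8020). Connecting to port 9000 fails because nothing listens there.
        FileSystem fs = FileSystem.get(URI.create("hdfs://lida1:8020"), conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}

In an HA setup like this one the client could also address the nameservice as hdfs://mycluster with the failover proxy provider configured, but connecting directly to the active NameNode's RPC port is enough to show the port mismatch.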

    Solution: change the port Eclipse uses to connect to HDFS (the DFS Master port in the Hadoop location settings) so that it matches 8020.




After changing the port, restart Eclipse and reopen DFS Locations; the error above no longer appears.
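
If the error persists, a quick way to tell a wrong port apart from a NameNode that is simply not running is a plain socket probe from the client machine. The sketch below assumes the lida1:8020 address from the configuration above; the class name is hypothetical:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        String host = "lida1";   // active NameNode host from hdfs-site.xml (assumption)
        int port = 8020;         // port from dfs.namenode.rpc-address.mycluster.nn1
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000);
            System.out.println("NameNode RPC port is reachable: " + host + ":" + port);
        } catch (IOException e) {
            // A "Connection refused" here matches the Eclipse error and means
            // either the port is wrong or the NameNode is not listening.
            System.out.println("Cannot reach " + host + ":" + port + " -> " + e.getMessage());
        }
    }
}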
