java.net.ConnectException: Call From localhost/127.0.0.1 to localhost:8020 failed on connection


The error occurred while running Example 3-2 from Hadoop: The Definitive Guide.
Command executed:

 export HADOOP_CLASSPATH=ch03-hdfs.jar
 hadoop FileSystemCat hdfs://localhost/user/liuzc/quangle.txt
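
For context, the FileSystemCat class being invoked is Example 3-2 from the book; a sketch of it (reproduced from memory, so details may differ slightly from the local copy) is shown below. The client only contacts the NameNode when fs.open() is called, which matches the FileSystemCat.main(FileSystemCat.java:19) frame at the bottom of the stack trace.

import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Example 3-2: read a file from HDFS and copy it to standard output.
public class FileSystemCat {
  public static void main(String[] args) throws Exception {
    String uri = args[0];                                  // e.g. hdfs://localhost/user/liuzc/quangle.txt
    Configuration conf = new Configuration();              // picks up core-site.xml from the classpath
    FileSystem fs = FileSystem.get(URI.create(uri), conf); // resolves the HDFS client for the URI
    InputStream in = null;
    try {
      in = fs.open(new Path(uri));                         // first RPC to the NameNode happens here
      IOUtils.copyBytes(in, System.out, 4096, false);      // stream the file contents to stdout
    } finally {
      IOUtils.closeStream(in);
    }
  }
}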

Error message:

Exception in thread "main" java.net.ConnectException: Call From localhost/127.0.0.1 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy10.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy11.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1226)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1213)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1201)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:306)
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:272)
    at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:264)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1526)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:303)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:299)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:299)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    at FileSystemCat.main(FileSystemCat.java:19)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 24 more
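
Note that the client tries localhost:8020 even though the command never mentions that port: the URI hdfs://localhost carries no port, and in that case the HDFS client falls back to the default NameNode RPC port, 8020. The NameNode, however, was configured to listen on 9000, so the connection was refused. A minimal diagnostic sketch to make the mismatch visible (the class name CheckUri is mine and purely illustrative):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Diagnostic sketch: compare the port in the URI being opened with fs.defaultFS.
public class CheckUri {
  public static void main(String[] args) {
    URI target = URI.create("hdfs://localhost/user/liuzc/quangle.txt");
    // getPort() is -1 when the URI has no explicit port; the HDFS client then
    // falls back to the default NameNode RPC port, 8020.
    System.out.println("target port       = " + target.getPort());

    Configuration conf = new Configuration();              // loads core-site.xml from the classpath
    URI defaultFs = FileSystem.getDefaultUri(conf);        // the effective fs.defaultFS value
    System.out.println("fs.defaultFS      = " + defaultFs);
    System.out.println("fs.defaultFS port = " + defaultFs.getPort());
  }
}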

Solution
Modify core-site.xml as follows:

<configuration>
  <!-- The default filesystem schema (URI) used by Hadoop, i.e. the address of the HDFS NameNode -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:8020</value>
  </property>
  <!-- The directory where files generated by Hadoop at runtime are stored -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.7.3/data</value>
  </property>
</configuration>

The port in the fs.defaultFS property was originally configured as 9000:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop01:9000</value>
</property>

Changing it to 8020 and restarting Hadoop fixed the problem:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop01:8020</value>
</property>
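
Alternatively, fs.defaultFS could be left on port 9000 and the port given explicitly in the command-line URI instead, e.g. hadoop FileSystemCat hdfs://localhost:9000/user/liuzc/quangle.txt; either way, the port the client connects to has to match the one the NameNode is actually listening on.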