Eclipse-to-Hadoop connection errors on Windows (Part 2)


  1. Error

15/05/10 14:12:11 INFO ipc.Client: Retrying connect to server: hadoop/192.168.110:9000. Already tried 0 time(s); maxRet

15/05/10 14:12:11 INFO ipc.Client: Retrying connect to server: hadoop/192.168.110:9000. Already tried 1 time(s); maxRet

15/05/10 14:12:11 INFO ipc.Client: Retrying connect to server: hadoop/192.168.110:9000. Already tried 2 time(s); maxRet



Possible cause 1) Network: the firewall on the Hadoop server is blocking the connection. Disable it (this is exactly where I tripped up):

Disable the firewall permanently (the service no longer starts on boot): chkconfig iptables off

Stop it immediately (reverts after a reboot): service iptables stop
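If the firewall really is the problem, a bare TCP connection to the NameNode port fails too, which rules Eclipse out entirely. A minimal Java sketch of that check, assuming the host name hadoop and port 9000 from the retry log above (both are assumptions; adjust to your cluster):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class CheckNameNodePort {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket()) {
                // Fails with a timeout or "connection refused" if the
                // firewall drops packets or nothing listens on the port.
                socket.connect(new InetSocketAddress("hadoop", 9000), 5000);
                System.out.println("NameNode port is reachable");
            }
        }
    }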

2) Configuration (an explanation reported online):

The reported cause: by default Hadoop keeps its working files under /tmp (the NameNode metadata, for instance, defaults to ${hadoop.tmp.dir}/dfs/name), and /tmp is wiped on a system reboot, hence the error. Fix: add the following to conf/core-site.xml (conf/hadoop-site.xml in release 0.19.2):

 <property>
   <name>hadoop.tmp.dir</name>
   <value>/usr/newdir/hadoop/tmp</value>
   <description>A base for other temporary directories</description>
 </property>

Re-format the NameNode and restart Hadoop, and the error goes away.
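Once the NameNode is back up, you can verify the RPC connection from the Windows side before involving the Eclipse plugin. A minimal sketch, assuming the address hdfs://hadoop:9000 from the log above; listing the root directory only succeeds if the client can actually talk to the NameNode:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListRoot {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Connect directly to the NameNode named in the retry log.
            FileSystem fs = FileSystem.get(URI.create("hdfs://hadoop:9000"), conf);
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
            fs.close();
        }
    }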

  2. Error

Exception in thread "main" java.io.FileNotFoundException: File does not exist: /hello
   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchLocatedBlocks(DFSClient.java:2006)
   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1975)
   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1967)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:735)
   at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:165)
   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:436)
   at org.apache.hadoop.fs.FsUrlConnection.connect(FsUrlConnection.java:46)
   at org.apache.hadoop.fs.FsUrlConnection.getInputStream(FsUrlConnection.java:56)
   at java.net.URL.openStream(URL.java:1037)
   at hdfs.App1.main(App1.java:20)

Cause: there is no hello file in the HDFS root directory.
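Judging from the stack trace (FsUrlConnection, URL.openStream), App1.java reads the file through java.net.URL. A rough reconstruction under that assumption, with hdfs://hadoop:9000/hello pieced together from the log and the trace; the static block registers the handler factory so the JVM can resolve the hdfs:// scheme:

    import java.io.InputStream;
    import java.net.URL;
    import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
    import org.apache.hadoop.io.IOUtils;

    public class App1 {
        static {
            // May be called at most once per JVM.
            URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
        }

        public static void main(String[] args) throws Exception {
            // Throws FileNotFoundException when /hello is missing from HDFS.
            InputStream in = new URL("hdfs://hadoop:9000/hello").openStream();
            IOUtils.copyBytes(in, System.out, 4096, true);
        }
    }

Once /hello exists (see the fix below), the same program prints the file's contents instead of throwing.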

Fix: create a hello file and upload it to HDFS:

        touch hello
        vi hello          # put the following two lines in the file:
            hello you
            hello me
        hadoop fs -put ./hello /

Reconnect; this time it succeeds and the problem is solved.

