Configuring a Hadoop 1.0 Application Development Environment Based on Eclipse


Good articles like this are rare, so I am sharing it here. Original link:

http://www.cnblogs.com/wly603/archive/2012/04/18/2454936.html


*******************************************The reposted article begins below**********************************************************


Summary: configuring a Hadoop development environment in Eclipse

Environment: Ubuntu 8.04.4

             Eclipse: Release 3.7.2

             Hadoop: hadoop-1.0.2

 

Reference (an earlier write-up):

        http://www.cnblogs.com/flyoung2008/archive/2011/12/09/2281400.html

I. Configuration Process

      1. First, start the Hadoop daemons

             root@localhost:/usr/local/hadoop-1.0.2# bin/hadoop namenode -format

             root@localhost:/usr/local/hadoop-1.0.2# bin/start-all.sh

             I did not understand why the file system had to be reformatted every time Hadoop was started. If I skipped the format, the namenode would not start. Why?

             Solved: when I first set up the pseudo-distributed cluster I did not specify dfs.data.dir and dfs.name.dir in conf/hadoop-site.xml, so both defaulted to locations under /tmp. After a reboot the files under /tmp are automatically deleted, which is why the file system had to be re-formatted every time. Editing the configuration file fixes this; see http://www.cnblogs.com/wly603/archive/2012/04/10/2441336.html (pseudo-distributed configuration) for details.

                     dfs.data.dir: the path on the local file system where the DataNode stores its data blocks

                     dfs.name.dir: the path on the local file system where the NameNode stores the name table (fsimage)
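
             A minimal sketch of those two properties (the paths below are placeholders; in Hadoop 1.0.x they conventionally go into conf/hdfs-site.xml, although a legacy conf/hadoop-site.xml is still read):

             <property>
                 <name>dfs.name.dir</name>
                 <value>/usr/local/hadoop-1.0.2/dfs/name</value>  <!-- placeholder: any persistent directory -->
             </property>
             <property>
                 <name>dfs.data.dir</name>
                 <value>/usr/local/hadoop-1.0.2/dfs/data</value>  <!-- placeholder: any persistent directory -->
             </property>

             After pointing these at persistent directories, format the namenode once more (bin/hadoop namenode -format); repeated formatting should no longer be necessary.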

           Note: be sure to start Hadoop from the terminal first. When I had not started it, Eclipse kept reporting errors.

 

2. Install the Hadoop plugin for Eclipse by copying the plugin jar into the plugins/ directory of your Eclipse installation.

     Some posts online say that in older Hadoop releases the plugin ships under /contrib/eclipse-plugin/.

    The release I used does not include a pre-built plugin, but the plugin source code is available under hadoop-1.0.2/src/contrib/eclipse-plugin, so you can compile it yourself. I simply downloaded a pre-built plugin instead:

    Download: http://download.csdn.net/download/shf0824/4094050 (plugin version: hadoop-eclipse-plugin-1.0.0.jar)

 

3. Restart Eclipse and configure the Hadoop installation directory

    If the plugin was installed successfully, open Window --> Preferences and you will see a Hadoop Map/Reduce entry. Set the Hadoop installation directory there, then close the dialog.

   

 

4. Configure Map/Reduce Locations

       Open the Map/Reduce Locations view via Show View:
                                 Window --> Show View --> Other, then under MapReduce Tools open Map/Reduce Locations.

        Create a new Hadoop location in the Map/Reduce Locations view: right-click inside the view --> New Hadoop Location. In the dialog, set a Location name (e.g. Hadoop) and fill in the Map/Reduce Master and DFS Master sections. The Host and Port there must match the address and port you configured in mapred-site.xml and core-site.xml, respectively. For example:
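
        As a sketch of where those values come from (assuming the pseudo-distributed setup used later in this article, where HDFS listens on localhost:9000; the JobTracker address localhost:9001 is only the commonly used value, so check your own mapred-site.xml):

        <!-- core-site.xml: supplies the DFS Master host and port -->
        <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:9000</value>
        </property>

        <!-- mapred-site.xml: supplies the Map/Reduce Master host and port -->
        <property>
            <name>mapred.job.tracker</name>
            <value>localhost:9001</value>  <!-- assumption: adjust to your own configuration -->
        </property>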

       

 

5. When the location is configured, close the dialog. Expand DFS Locations --> Hadoop; if the folder (1) is displayed, the configuration is correct. If it shows "Connection refused", re-check your configuration.

  Note: I once had the pseudo-distributed setup working correctly, yet Eclipse still showed "Connection refused".

        Fix: I deleted the /tmp/hadoop-root folder and restarted the Hadoop daemons, after which it worked:

                       gqy@localhost:/tmp$ sudo rm -rf  hadoop-root
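
        A more lasting fix for this class of problem (Hadoop state living under /tmp and being wiped) is to point hadoop.tmp.dir at a persistent directory in core-site.xml. This is only a sketch with a placeholder path, and the namenode needs to be formatted once after the change:

        <property>
            <name>hadoop.tmp.dir</name>
            <value>/usr/local/hadoop-1.0.2/tmp</value>  <!-- placeholder: any persistent, writable directory -->
        </property>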

 

II. Testing

1. Create a Map/Reduce project: File --> New --> Other --> Map/Reduce Project.
        The project name can be anything, e.g. WordCount.
        Copy WordCount.java from the Hadoop installation directory, hadoop-1.0.2/src/examples/org/apache/hadoop/examples/WordCount.java, into the newly created project.

2. Upload the sample data.
         To run the program we need an input folder and an output folder. Create word.txt under /home/gqy/workspace/WordCount/ with the following content:

          java c++ python c
         java c++ javascript 
         helloworld hadoop
         mapreduce java hadoop hbase

    Create the /tmp/wordcount directory on HDFS with the following Hadoop command: bin/hadoop fs -mkdir /tmp/wordcount

    Copy the local word.txt to HDFS with the copyFromLocal command: bin/hadoop fs -copyFromLocal /home/gqy/workspace/WordCount/word.txt  /tmp/wordcount/word.txt

3. Run the project

    In the new project, select WordCount.java, then right-click --> Run As --> Run Configurations.
    In the Run Configurations dialog, select Java Application, then WordCount, and set the run arguments: on the Arguments tab, enter in Program arguments the input path to pass to the program and the path where the results should be written, e.g.:

     hdfs://localhost:9000/tmp/wordcount/word.txt   hdfs://localhost:9000/tmp/wordcount/out

4. Click Run to run the program.

The console prints the run log:
12/04/18 09:46:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/04/18 09:46:21 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
****hdfs://localhost:9000/tmp/wordcount/word.txt
12/04/18 09:46:21 INFO input.FileInputFormat: Total input paths to process : 1
12/04/18 09:46:21 WARN snappy.LoadSnappy: Snappy native library not loaded
12/04/18 09:46:21 INFO mapred.JobClient: Running job: job_local_0001
12/04/18 09:46:21 INFO util.ProcessTree: setsid exited with exit code 0
12/04/18 09:46:21 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@155c37d
12/04/18 09:46:21 INFO mapred.MapTask: io.sort.mb = 100
12/04/18 09:46:22 INFO mapred.MapTask: data buffer = 79691776/99614720
12/04/18 09:46:22 INFO mapred.MapTask: record buffer = 262144/327680
12/04/18 09:46:22 INFO mapred.JobClient:  map 0% reduce 0%
12/04/18 09:46:22 INFO mapred.MapTask: Starting flush of map output
12/04/18 09:46:23 INFO mapred.MapTask: Finished spill 0
12/04/18 09:46:23 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/04/18 09:46:24 INFO mapred.LocalJobRunner: 
12/04/18 09:46:24 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
12/04/18 09:46:24 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1bc4c37
12/04/18 09:46:24 INFO mapred.LocalJobRunner: 
12/04/18 09:46:24 INFO mapred.Merger: Merging 1 sorted segments
12/04/18 09:46:24 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 119 bytes
12/04/18 09:46:24 INFO mapred.LocalJobRunner: 
12/04/18 09:46:24 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/04/18 09:46:24 INFO mapred.LocalJobRunner: 
12/04/18 09:46:24 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/04/18 09:46:24 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9000/tmp/wordcount/out
12/04/18 09:46:25 INFO mapred.JobClient:  map 100% reduce 0%
12/04/18 09:46:27 INFO mapred.LocalJobRunner: reduce > reduce
12/04/18 09:46:27 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
12/04/18 09:46:28 INFO mapred.JobClient:  map 100% reduce 100%
12/04/18 09:46:28 INFO mapred.JobClient: Job complete: job_local_0001
12/04/18 09:46:28 INFO mapred.JobClient: Counters: 22
12/04/18 09:46:28 INFO mapred.JobClient:   File Output Format Counters 
12/04/18 09:46:28 INFO mapred.JobClient:     Bytes Written=81
12/04/18 09:46:28 INFO mapred.JobClient:   FileSystemCounters
12/04/18 09:46:28 INFO mapred.JobClient:     FILE_BYTES_READ=449
12/04/18 09:46:28 INFO mapred.JobClient:     HDFS_BYTES_READ=172
12/04/18 09:46:28 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=81194
12/04/18 09:46:28 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=81
12/04/18 09:46:28 INFO mapred.JobClient:   File Input Format Counters 
12/04/18 09:46:28 INFO mapred.JobClient:     Bytes Read=86
12/04/18 09:46:28 INFO mapred.JobClient:   Map-Reduce Framework
12/04/18 09:46:28 INFO mapred.JobClient:     Map output materialized bytes=123
12/04/18 09:46:28 INFO mapred.JobClient:     Map input records=4
12/04/18 09:46:28 INFO mapred.JobClient:     Reduce shuffle bytes=0
12/04/18 09:46:28 INFO mapred.JobClient:     Spilled Records=18
12/04/18 09:46:28 INFO mapred.JobClient:     Map output bytes=136
12/04/18 09:46:28 INFO mapred.JobClient:     Total committed heap usage (bytes)=321003520
12/04/18 09:46:28 INFO mapred.JobClient:     CPU time spent (ms)=0
12/04/18 09:46:28 INFO mapred.JobClient:     SPLIT_RAW_BYTES=109
12/04/18 09:46:28 INFO mapred.JobClient:     Combine input records=13
12/04/18 09:46:28 INFO mapred.JobClient:     Reduce input records=9
12/04/18 09:46:28 INFO mapred.JobClient:     Reduce input groups=9
12/04/18 09:46:28 INFO mapred.JobClient:     Combine output records=9
12/04/18 09:46:28 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
12/04/18 09:46:28 INFO mapred.JobClient:     Reduce output records=9
12/04/18 09:46:28 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
12/04/18 09:46:28 INFO mapred.JobClient:     Map output records=13

     After the run finishes, check the example's output with: bin/hadoop fs -ls /tmp/wordcount/out

     The output is:

        Found 2 items
            -rw-r--r--   3 gqy supergroup          0 2012-04-18 09:46 /tmp/wordcount/out/_SUCCESS
            -rw-r--r--   3 gqy supergroup         81 2012-04-18 09:46 /tmp/wordcount/out/part-r-00000

     To inspect the result further:

        Command: gqy@localhost:/tmp$ hadoop fs -cat /tmp/wordcount/out/part-r-00000

        The output shows:

               c    1
              c++    2
               hadoop    2
              hbase    1
              helloworld    1
              java    3
             javascript    1
             mapreduce    1
             python    1

*******************************************End of the reposted article**********************************************************

While following the steps above, I ran into this exception:

org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="idsl":idsl:Supergroup:rwxr-xr-x

I have not found a better solution; the workaround I used:

Add the following to hdfs-site.xml:
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

Restart Hadoop and the error goes away. Note that dfs.permissions=false disables HDFS permission checking entirely, so it is only appropriate for a development setup.
