[Distributed Cluster] Running Hadoop 2.6.0 from Eclipse and from the Command Line


Environment:

jdk1.7.0_80

hadoop-2.6.0

eclipse-jee-mars-R-linux-gtk.tar.gz (the newest build on the official site as of this writing)

hadoop2x-eclipse-plugin-master (the Hadoop plugin for Eclipse; a quick web search turns up plenty of copies)


Installing Eclipse is just a matter of unpacking the archive:

tar -xvf eclipse-jee-mars-R-linux-gtk.tar.gz

After unpacking the plugin package, its release folder contains:

hadoop-eclipse-kepler-plugin-2.2.0.jar

hadoop-eclipse-kepler-plugin-2.4.1.jar

hadoop-eclipse-plugin-2.6.0.jar

Pick either of the first two and drop it into Eclipse's plugins directory, for example as shown below. The third one is not usable as shipped; every environment is different, so you should compile your own.
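For example, using the same Eclipse path as the build step below (a sketch; adjust to your own layout):

cp release/hadoop-eclipse-kepler-plugin-2.4.1.jar /home/shamrock/eclipse/plugins/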

Compile the plugin:

cd hadoop2x-eclipse-plugin-master/src/contrib/eclipse-plugin
ant jar -Dversion=2.6.0 -Declipse.home=/home/shamrock/eclipse -Dhadoop.home=/home/shamrock/hadoop-2.6.0

Here /home/shamrock/eclipse is the Eclipse install directory and /home/shamrock/hadoop-2.6.0 is the Hadoop install directory.

When the build finishes, it produces hadoop-eclipse-plugin-2.6.0.jar; drop that into Eclipse's plugins directory as well.
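With this plugin project, the freshly built jar should land under the project's build tree; the exact path below is an assumption, so check your ant output if it differs:

cp hadoop2x-eclipse-plugin-master/build/contrib/eclipse-plugin/hadoop-eclipse-plugin-2.6.0.jar /home/shamrock/eclipse/plugins/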

Start Eclipse.

Then just follow the guide "Setting up a Hadoop 2.4.0 development environment under Eclipse" and you're all set.

Once your program has finished running, to see the results under the hadoop entry in Eclipse's DFS Locations view, just click that entry and choose Reconnect.


Next up: how do we run the same job directly from the command line?

First, go into your Eclipse workspace and find the class from the new project, WordCount.java.

Edit it with vi and strip out the Eclipse-specific package declaration; in my file that just means deleting the line package myWordCount;.
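If you'd rather not open vi, a sed one-liner does the same job (a sketch; it assumes the package statement sits alone on its own line, the way Eclipse writes it):

sed -i '/^package myWordCount;/d' WordCount.java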

Compile the Java source:

 javac -classpath "$HADOOP_HOME/share/hadoop/common/hadoop-common-2.6.0.jar:$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.0.jar:$HADOOP_HOME/share/hadoop/common/lib/commons-cli-1.2.jar:$CLASSPATH" WordCount.java

Package the class files into a jar (javac emits one .class file per class, inner classes included, hence the wildcard):

jar -cvf wordcount.jar ./*.class

Now run it. If you ran the job from Eclipse earlier, the input files already exist on HDFS (mine are under /input); also delete the /output directory produced by that earlier run, as shown below.
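For example, with the standard HDFS shell, using the paths from this article:

hdfs dfs -ls /input
hdfs dfs -rm -r /output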

If you've added Hadoop's bin and sbin directories to your PATH, you can invoke hadoop directly:

 hadoop jar wordcount.jar WordCount /input /output

15/07/23 16:46:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/07/23 16:46:36 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/07/23 16:46:36 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/07/23 16:46:36 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
15/07/23 16:46:36 INFO mapred.FileInputFormat: Total input paths to process : 2
15/07/23 16:46:36 INFO mapreduce.JobSubmitter: number of splits:2
15/07/23 16:46:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1599291624_0001
15/07/23 16:46:37 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/07/23 16:46:37 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/07/23 16:46:37 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
15/07/23 16:46:37 INFO mapreduce.Job: Running job: job_local1599291624_0001
15/07/23 16:46:37 INFO mapred.LocalJobRunner: Waiting for map tasks
15/07/23 16:46:37 INFO mapred.LocalJobRunner: Starting task: attempt_local1599291624_0001_m_000000_0
15/07/23 16:46:37 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/07/23 16:46:37 INFO mapred.MapTask: Processing split: hdfs://master:9000/input/readme.txt:0+55
15/07/23 16:46:37 INFO mapred.MapTask: numReduceTasks: 1
15/07/23 16:46:38 INFO mapreduce.Job: Job job_local1599291624_0001 running in uber mode : false
15/07/23 16:46:38 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/07/23 16:46:38 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/07/23 16:46:38 INFO mapred.MapTask: soft limit at 83886080
15/07/23 16:46:38 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/07/23 16:46:38 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/07/23 16:46:38 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/07/23 16:46:38 INFO mapreduce.Job:  map 0% reduce 0%
I love hadoop.So I hope I can do my best to finish it!哈哈
15/07/23 16:46:38 INFO mapred.LocalJobRunner:
15/07/23 16:46:38 INFO mapred.MapTask: Starting flush of map output
15/07/23 16:46:38 INFO mapred.MapTask: Spilling map output
15/07/23 16:46:38 INFO mapred.MapTask: bufstart = 0; bufend = 107; bufvoid = 104857600
15/07/23 16:46:38 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214348(104857392); length = 49/6553600
15/07/23 16:46:38 INFO mapred.MapTask: Finished spill 0
15/07/23 16:46:38 INFO mapred.Task: Task:attempt_local1599291624_0001_m_000000_0 is done. And is in the process of committing
15/07/23 16:46:38 INFO mapred.LocalJobRunner: hdfs://master:9000/input/readme.txt:0+55
15/07/23 16:46:38 INFO mapred.Task: Task 'attempt_local1599291624_0001_m_000000_0' done.
15/07/23 16:46:38 INFO mapred.LocalJobRunner: Finishing task: attempt_local1599291624_0001_m_000000_0
15/07/23 16:46:38 INFO mapred.LocalJobRunner: Starting task: attempt_local1599291624_0001_m_000001_0
15/07/23 16:46:38 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/07/23 16:46:38 INFO mapred.MapTask: Processing split: hdfs://master:9000/input/readme2:0+55
15/07/23 16:46:38 INFO mapred.MapTask: numReduceTasks: 1
15/07/23 16:46:39 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/07/23 16:46:39 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/07/23 16:46:39 INFO mapred.MapTask: soft limit at 83886080
15/07/23 16:46:39 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/07/23 16:46:39 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/07/23 16:46:39 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
I love hadoop.So I hope I can do my best to finish it!哈哈
15/07/23 16:46:39 INFO mapred.LocalJobRunner:
15/07/23 16:46:39 INFO mapred.MapTask: Starting flush of map output
15/07/23 16:46:39 INFO mapred.MapTask: Spilling map output
15/07/23 16:46:39 INFO mapred.MapTask: bufstart = 0; bufend = 107; bufvoid = 104857600
15/07/23 16:46:39 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214348(104857392); length = 49/6553600
15/07/23 16:46:39 INFO mapred.MapTask: Finished spill 0
15/07/23 16:46:39 INFO mapred.Task: Task:attempt_local1599291624_0001_m_000001_0 is done. And is in the process of committing
15/07/23 16:46:39 INFO mapred.LocalJobRunner: hdfs://master:9000/input/readme2:0+55
15/07/23 16:46:39 INFO mapred.Task: Task 'attempt_local1599291624_0001_m_000001_0' done.
15/07/23 16:46:39 INFO mapred.LocalJobRunner: Finishing task: attempt_local1599291624_0001_m_000001_0
15/07/23 16:46:39 INFO mapred.LocalJobRunner: map task executor complete.
15/07/23 16:46:39 INFO mapred.LocalJobRunner: Waiting for reduce tasks
15/07/23 16:46:39 INFO mapred.LocalJobRunner: Starting task: attempt_local1599291624_0001_r_000000_0
15/07/23 16:46:39 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/07/23 16:46:39 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@d3ec2f
15/07/23 16:46:39 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334154944, maxSingleShuffleLimit=83538736, mergeThreshold=220542272, ioSortFactor=10, memToMemMergeOutputsThreshold=10
15/07/23 16:46:39 INFO reduce.EventFetcher: attempt_local1599291624_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
15/07/23 16:46:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1599291624_0001_m_000001_0 decomp: 119 len: 123 to MEMORY
15/07/23 16:46:39 INFO reduce.InMemoryMapOutput: Read 119 bytes from map-output for attempt_local1599291624_0001_m_000001_0
15/07/23 16:46:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 119, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->119
15/07/23 16:46:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1599291624_0001_m_000000_0 decomp: 119 len: 123 to MEMORY
15/07/23 16:46:39 INFO reduce.InMemoryMapOutput: Read 119 bytes from map-output for attempt_local1599291624_0001_m_000000_0
15/07/23 16:46:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 119, inMemoryMapOutputs.size() -> 2, commitMemory -> 119, usedMemory ->238
15/07/23 16:46:39 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
15/07/23 16:46:39 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/07/23 16:46:39 INFO reduce.MergeManagerImpl: finalMerge called with 2 in-memory map-outputs and 0 on-disk map-outputs
15/07/23 16:46:39 INFO mapred.Merger: Merging 2 sorted segments
15/07/23 16:46:39 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 230 bytes
15/07/23 16:46:39 INFO reduce.MergeManagerImpl: Merged 2 segments, 238 bytes to disk to satisfy reduce memory limit
15/07/23 16:46:39 INFO reduce.MergeManagerImpl: Merging 1 files, 240 bytes from disk
15/07/23 16:46:39 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
15/07/23 16:46:39 INFO mapred.Merger: Merging 1 sorted segments
15/07/23 16:46:39 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 232 bytes
15/07/23 16:46:39 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/07/23 16:46:39 INFO mapreduce.Job:  map 100% reduce 0%
15/07/23 16:46:40 INFO mapred.Task: Task:attempt_local1599291624_0001_r_000000_0 is done. And is in the process of committing
15/07/23 16:46:40 INFO mapred.LocalJobRunner: 2 / 2 copied.
15/07/23 16:46:40 INFO mapred.Task: Task attempt_local1599291624_0001_r_000000_0 is allowed to commit now
15/07/23 16:46:40 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1599291624_0001_r_000000_0' to hdfs://master:9000/output/_temporary/0/task_local1599291624_0001_r_000000
15/07/23 16:46:40 INFO mapred.LocalJobRunner: reduce > reduce
15/07/23 16:46:40 INFO mapred.Task: Task 'attempt_local1599291624_0001_r_000000_0' done.
15/07/23 16:46:40 INFO mapred.LocalJobRunner: Finishing task: attempt_local1599291624_0001_r_000000_0
15/07/23 16:46:40 INFO mapred.LocalJobRunner: reduce task executor complete.
15/07/23 16:46:40 INFO mapreduce.Job:  map 100% reduce 100%
15/07/23 16:46:40 INFO mapreduce.Job: Job job_local1599291624_0001 completed successfully
15/07/23 16:46:40 INFO mapreduce.Job: Counters: 38
        File System Counters
                FILE: Number of bytes read=18012
                FILE: Number of bytes written=751783
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=275
                HDFS: Number of bytes written=73
                HDFS: Number of read operations=25
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=5
        Map-Reduce Framework
                Map input records=2
                Map output records=26
                Map output bytes=214
                Map output materialized bytes=246
                Input split bytes=171
                Combine input records=26
                Combine output records=22
                Reduce input groups=11
                Reduce shuffle bytes=246
                Reduce input records=22
                Reduce output records=11
                Spilled Records=44
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=0
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
                Total committed heap usage (bytes)=806354944
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=110
        File Output Format Counters
                Bytes Written=73
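Once the job finishes, read the result back from HDFS. Listing the directory first is the safe way to find the output file name (with a single reducer it is typically part-r-00000, or part-00000 with the old API):

hdfs dfs -ls /output
hdfs dfs -cat /output/part-*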


And that's it!

