Hadoop Development and Debugging Environment: Eclipse Configuration Notes


I. Goal

    Building on the single-node pseudo-distributed Hadoop environment installed earlier, install and configure Eclipse and the Eclipse Hadoop plugin, then run the wordcount program to verify that everything works. This writeup draws on material found online.

II. Configuration Process

1. Prepare the software

eclipse-jee-juno-SR2-linux-gtk-x86_64.tar.gz
hadoop-eclipse-plugin-2.2.0.jar
2. Installation

   1) Run tar -xvf eclipse-jee-juno-SR2-linux-gtk-x86_64.tar.gz to extract the archive, then use the mv command to move it to /usr/local/eclipse, as shown below.
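
A minimal sketch of these two commands (moving into /usr/local typically needs root privileges, so the use of sudo here is an assumption about your setup):

tar -xvf eclipse-jee-juno-SR2-linux-gtk-x86_64.tar.gz    # unpacks into ./eclipse
sudo mv eclipse /usr/local/eclipse                       # move it into place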

   2) Copy hadoop-eclipse-plugin-2.2.0.jar into the plugins directory under the Eclipse installation:

oliver@bigdatadev:/usr/local/eclipse/plugins$ pwd
/usr/local/eclipse/plugins
oliver@bigdatadev:/usr/local/eclipse/plugins$ ls hadoop-eclipse*
hadoop-eclipse-plugin-2.2.0.jar
   Restart Eclipse. If DFS Locations appears under Project Explorer, the plugin loaded successfully.

 

3. Configuration

  1) Click Window -> Show View -> Other..., select MapReduce Tools -> Map/Reduce Locations in the dialog that pops up, then click OK.

  

 2) Click Window -> Open Perspective -> Other..., select Map/Reduce in the dialog that pops up, then click OK.


3) Right-click the blank area of the Map/Reduce Locations view; a context menu pops up.


4) Select New Hadoop location...; a configuration dialog appears.

  First page of the dialog


Second page: fill in fs.defaultFS so that it matches the value in core-site.xml (see the example below).
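
For reference, the relevant entry in core-site.xml looks like the following; both the Hadoop install path and the hdfs://localhost:9000 value are assumptions, so copy whatever your own file actually contains:

$ cat /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>    <!-- this value goes into the Eclipse dialog -->
  </property>
</configuration>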

  

5) Check that Eclipse can connect to HDFS and you are done. HDFS must be running first (start-all.sh).
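
A quick way to start the daemons and confirm they are up (the exact jps output varies with your configuration; start-dfs.sh alone is enough if you only need HDFS):

start-all.sh    # or start-dfs.sh for HDFS only
jps             # NameNode, DataNode and SecondaryNameNode should appear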

  

4. Run the wordcount example program

    Create a new project and choose Map/Reduce Project as the project type.

    

  

Example code

package demo.n02.hadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount
{
    // Mapper: splits each input line on spaces and emits a (word, 1) pair per word.
    public static class WordCountMapper extends Mapper<Object, Text, Text, IntWritable>
    {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException
        {
            String[] words = value.toString().split(" ");
            for (String str : words)
            {
                word.set(str);
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts for each word. Summing val.get() (rather than just
    // counting the values) keeps the result correct even if a combiner is added later.
    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable>
    {
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException
        {
            int total = 0;
            for (IntWritable val : values)
            {
                total += val.get();
            }
            context.write(key, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception
    {
        Configuration conf = new Configuration();
        // Job.getInstance replaces the Job constructor, which is deprecated in Hadoop 2.x.
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));     // input directory on HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // output directory on HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
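
Before running the job, the input data must already be on HDFS. A hedged sketch, where /user/oliver/input, /user/oliver/output and sample.txt are all hypothetical names of my own choosing:

hadoop fs -mkdir -p /user/oliver/input
hadoop fs -put sample.txt /user/oliver/input    # upload a local text file
hadoop fs -ls /user/oliver/input                # confirm the upload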
Define a run or debug configuration (Run -> Run Configurations...), passing the HDFS input and output paths as the two program arguments (args[0] and args[1] in the code above).




If the job runs without errors, the configuration is successful.
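
To inspect the result, read the reducer output back from HDFS; the paths below reuse the hypothetical ones from the sketch above, and part-r-00000 is the default name of the first reducer's output file. Note that the job fails if the output directory already exists, so remove it before re-running:

hadoop fs -cat /user/oliver/output/part-r-00000    # one "word<TAB>count" pair per line
hadoop fs -rm -r /user/oliver/output               # clear the output dir before a re-run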
