Accessing a Remote MapReduce Server from Eclipse on Windows

This article describes how to write a MapReduce program in Eclipse on Windows and run it against a remote Hadoop server. It assumes that a working Hadoop environment has already been deployed on the Linux server.

Installing Hadoop on Windows

  1. Unpack hadoop-2.7.1.tar.gz to a directory on the Windows drive (e.g., C:\Program Files\hadoop-2.7.1).

  2. Copy hadoop.dll and winutils.exe from the hadoop2.7.1_win_bin folder into the bin directory of the installation above.

    hadoop2.7.1_win_bin download: http://pan.baidu.com/s/1qYK6EZE (password: cmsd)

  3. Configure environment variables: add Hadoop's bin directory to the system environment variables (a quick verification sketch follows this list).
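To confirm the Windows setup before going further, here is a minimal sketch (the class name HadoopHomeCheck is illustrative) that checks whether winutils.exe is where Hadoop expects it. Setting hadoop.home.dir in code, as done here and in the WordCount main() below, is an alternative to setting the environment variable:

import java.io.File;

public class HadoopHomeCheck {
    public static void main(String[] args) {
        // hadoop.home.dir can be set in code instead of via an environment variable.
        // The path below assumes the install location used in step 1.
        System.setProperty("hadoop.home.dir", "C:\\Program Files\\hadoop-2.7.1");
        File winutils = new File(System.getProperty("hadoop.home.dir"), "bin\\winutils.exe");
        System.out.println("winutils.exe found: " + winutils.exists());
    }
}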

Creating a MapReduce Program in Eclipse

  1. Create a new Java Project (note: not a MapReduce Project) and add the jars from the Hadoop directory and its subdirectories to the build path.
  2. Create the MapReduce program (e.g., WordCount):
package com.mr.test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: splits each input line into tokens and emits (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Partitioner: routes words whose first character sorts after 'J'
    // to reducer 0 and all other words to reducer 1.
    public static class WordCountPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            char ch = key.toString().charAt(0);
            return (ch > 'J') ? 0 : 1;
        }
    }

    // Reducer: sums the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        // Point the Hadoop client at the Windows installation directory.
        System.setProperty("hadoop.home.dir", "C:\\Program Files\\hadoop-2.7.1");
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        // Job.getInstance replaces the deprecated new Job(conf, name) constructor.
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        // Two reducers to match the two partitions produced above.
        job.setNumReduceTasks(2);
        job.setPartitionerClass(WordCountPartitioner.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
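Before wiring up the run configuration, it can help to confirm that the Windows client can reach the remote HDFS at all. The sketch below is illustrative: the class name, namenode-host, and port 9000 are placeholders; use the fs.defaultFS value from your cluster's core-site.xml. It simply lists the HDFS root directory:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectivityCheck {
    public static void main(String[] args) throws Exception {
        System.setProperty("hadoop.home.dir", "C:\\Program Files\\hadoop-2.7.1");
        Configuration conf = new Configuration();
        // Connect to the remote HDFS; replace namenode-host:9000 with your
        // cluster's actual fs.defaultFS value.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000"), conf);
        // List the root directory to confirm the connection works.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}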

  3. Under Run As > Run Configurations, set the program arguments to the HDFS input and output paths on the remote server (an example follows).
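For example, the program arguments in the run configuration might look like the following, where namenode-host, the port, and the paths are placeholders for your cluster's actual values:

hdfs://namenode-host:9000/user/hadoop/input hdfs://namenode-host:9000/user/hadoop/output

Note that the output directory must not already exist; FileOutputFormat rejects an existing output path.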

  4. Run the program; the job should complete successfully.
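To verify the result from Windows, the sketch below (host, port, and output path are the same placeholders as above) prints the job output. Because the job configured two reducers, the output directory should contain part-r-00000 and part-r-00001:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrintWordCountOutput {
    public static void main(String[] args) throws Exception {
        System.setProperty("hadoop.home.dir", "C:\\Program Files\\hadoop-2.7.1");
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000"), conf);
        // Read every part-r-* file the reducers wrote and print its contents.
        for (FileStatus status : fs.listStatus(new Path("/user/hadoop/output"))) {
            if (!status.getPath().getName().startsWith("part-")) {
                continue;
            }
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(fs.open(status.getPath())))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
        fs.close();
    }
}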
