Data Cleaning with MapReduce


  • I've been working with the Hadoop platform for about half a year but had never written any MapReduce business code. A data-cleaning requirement finally came along, so I wrote a simple MapReduce class to clean the data, and took the chance to record the code skeleton of a basic MapReduce job.
  • So this first MapReduce program is not the usual WordCount.

The overall skeleton of the class looks like this:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class DataCleaner extends Configured implements Tool {

    public static class ReaderMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // cleaning logic goes here
        }
    }

    public static class WriterReducer extends Reducer<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void reduce(LongWritable key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // output logic goes here
        }
    }

    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        Job job = Job.getInstance(conf);
        // job configuration is filled in below
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int result = ToolRunner.run(new DataCleaner(), args);
        System.exit(result);
    }
}

Main method: launching the MR job via ToolRunner.run()

  • In main(), ToolRunner.run() invokes the DataCleaner class, which implements the Tool interface.
  • ToolRunner.run() also forwards the arguments the user typed on the command line (see the example invocation below).
  • main() then waits for the job result and exits with run()'s return code.
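A practical benefit of launching through ToolRunner.run() is that Hadoop's generic options are stripped from the command line and applied to the Configuration before the remaining arguments reach run(), so flags like -D property=value work without any extra parsing code. A hypothetical invocation (the jar name, property value, and paths are placeholders of mine, not from the original post):

hadoop jar data-cleaner.jar DataCleaner \
    -D mapreduce.job.reduces=4 \
    /input/raw /output/cleaned

After ToolRunner consumes the -D flag, run() receives only the two paths, as args[0] and args[1].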

The overridden run() method

  • Instantiate the Job object, passing in the Configuration object conf during construction.
  • Use the Job setters to set the name of the job being submitted, along with the user-written Mapper and Reducer classes.
  • Use the Job setters to declare the corresponding types, i.e. the input and output key/value types of the Map and Reduce phases.
  • Set the input and output paths for the data, which lives on HDFS.
public int run(String[] args) throws Exception {
    Configuration conf = getConf();
    Job job = Job.getInstance(conf);
    job.setJarByClass(DataCleaner.class);
    job.setJobName("DataClean");

    // user-written Mapper and Reducer classes
    job.setMapperClass(ReaderMapper.class);
    job.setReducerClass(WriterReducer.class);

    // map-phase output types (must match the reducer's input types)
    job.setMapOutputKeyClass(LongWritable.class);
    job.setMapOutputValueClass(Text.class);

    // final (reduce-phase) output types
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(NullWritable.class);

    // input and output paths on HDFS, taken from the command line
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    boolean success = job.waitForCompletion(true);
    return success ? 0 : 1;
}
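One thing to watch: FileOutputFormat refuses to run if the output path already exists, so the job fails fast rather than overwriting earlier results. Delete or rename the old output directory before rerunning.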

Map & Reduce

  • The concrete Map and Reduce logic is implemented by the user by extending the Mapper and Reducer classes respectively.
  • The input and output key/value types of each phase must be declared, and the Map output key/value types must match the Reduce input key/value types.

The static inner class extending Mapper

  • The static inner class ReaderMapper extends Mapper and overrides its map() method:
/* The Mapper's generic parameters constrain the input key/value and
 * output key/value types. */
public static class ReaderMapper extends Mapper<LongWritable, Text, LongWritable, Text> {

    /* The overridden map() method; the types of its first two parameters
     * correspond to the first two generic parameters of Mapper. */
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        Text outputValue = new Text();
        doSomeProcess();  // placeholder for the actual cleaning logic

        /* Emit to the Reducer; the argument types correspond to the last
         * two generic parameters of Mapper. */
        context.write(key, outputValue);
    }
}
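The post leaves doSomeProcess() as a placeholder. As one illustration of what the cleaning step might look like, the sketch below drops malformed records and trims each field; the tab delimiter, the five-field record layout, and the counter names are assumptions of mine, not part of the original job:

@Override
protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    // assumption: records are tab-separated and valid rows have exactly 5 fields
    String[] fields = value.toString().split("\t", -1);
    if (fields.length != 5) {
        // count dropped records in a job counter instead of emitting them
        context.getCounter("DataClean", "MALFORMED").increment(1);
        return;
    }
    // assumption: trimming surrounding whitespace is the only per-field cleanup
    StringBuilder cleaned = new StringBuilder();
    for (int i = 0; i < fields.length; i++) {
        if (i > 0) cleaned.append('\t');
        cleaned.append(fields[i].trim());
    }
    context.write(key, new Text(cleaned.toString()));
}

The counter shows up in the job's final report, which makes it easy to see how many records were rejected without digging through logs.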

The static inner class extending Reducer

  • The static inner class WriterReducer extends Reducer and overrides its reduce() method:
/* The Reducer's generic parameters constrain the input key/value and output
 * key/value types; the input key/value types must match the Map class's
 * output key/value types. */
public static class WriterReducer extends Reducer<LongWritable, Text, Text, NullWritable> {

    /* The overridden reduce() method; the types of its first two parameters
     * correspond to the first two generic parameters of Reducer.
     * The input values arrive as an Iterable over all values for this key. */
    @Override
    protected void reduce(LongWritable key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text line : values) {
            /* The output key/value types correspond to the last two
             * generic parameters of Reducer. */
            context.write(line, NullWritable.get());
        }
    }
}
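Since this reducer does nothing but re-emit each cleaned line, one variant worth knowing (my addition, not something the original post does) is to drop the reduce phase altogether: a map-only job skips the shuffle entirely and writes the mapper's output straight to HDFS. A minimal sketch of the run() changes, assuming the mapper is rewritten to emit NullWritable keys:

// declare a map-only job: with zero reduce tasks the map output
// is written directly to the output path, and the shuffle is skipped
job.setNumReduceTasks(0);
// the map output types then become the job's final output types;
// the mapper's generics would change to <LongWritable, Text, NullWritable, Text>
job.setOutputKeyClass(NullWritable.class);
job.setOutputValueClass(Text.class);

For large inputs this saves the sort-and-shuffle cost, at the price of getting one output file per map task instead of one per reducer.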