024_The Mapper and Reducer Base Classes in MapReduce

Outline

1) The Mapper base class in MapReduce, the parent class of user-defined Mapper classes.

2) The Reducer base class in MapReduce, the parent class of user-defined Reducer classes.

1. The Mapper Class

API Documentation

1) InputSplit (input splitting) and InputFormat (input formatting)

2) Sorting and grouping of the Mapper output

3) Partitioning the Mapper output according to the number of Reducers

4) Running a Combiner over the Mapper output (a driver sketch covering these four stages follows this list)
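
To make these four stages concrete, below is a minimal driver sketch for a word-count style job. It uses only classes shipped with Hadoop (TokenCounterMapper, HashPartitioner, IntSumReducer) rather than anything from the original article, and assumes the input and output paths come from the command line.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        // 1) InputFormat decides how the input is split (InputSplit) and parsed into key/value pairs
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));

        // Mapper: TokenCounterMapper emits (word, 1) for every token in a line
        job.setMapperClass(TokenCounterMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 2) Optional: override sorting/grouping of map output keys
        // job.setSortComparatorClass(...); job.setGroupingComparatorClass(...);

        // 3) Partitioner: decides which Reducer each intermediate key goes to
        job.setPartitionerClass(HashPartitioner.class);

        // 4) Combiner: local aggregation of map output before the shuffle
        job.setCombinerClass(IntSumReducer.class);

        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```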

  • The Mapper class description from the official Hadoop documentation:

  Maps input key/value pairs to a set of intermediate key/value pairs.

  Maps are the individual tasks which transform input records into intermediate records. The transformed intermediate records need not be of the same type as the input records. A given input pair may map to zero or many output pairs.

  The Hadoop Map-Reduce framework spawns one map task for each InputSplit generated by the InputFormat for the job. Mapper implementations can access the Configuration for the job via the JobContext.getConfiguration().

  The framework first calls setup(org.apache.hadoop.mapreduce.Mapper.Context), followed by map(Object, Object, Context) for each key/value pair in the InputSplit. Finally cleanup(Context) is called.

  All intermediate values associated with a given output key are subsequently grouped by the framework, and passed to a Reducer to determine the final output. Users can control the sorting and grouping by specifying two key RawComparator classes.

  The Mapper outputs are partitioned per Reducer. Users can control which keys (and hence records) go to which Reducer by implementing a custom Partitioner.

  Users can optionally specify a combiner, via Job.setCombinerClass(Class), to perform local aggregation of the intermediate outputs, which helps to cut down the amount of data transferred from the Mapper to the Reducer.

  Applications can specify if and how the intermediate outputs are to be compressed and which CompressionCodecs are to be used via the Configuration.

If the job has zero reduces then the output of the Mapper is directly written to the OutputFormat without sorting by keys.
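
A short configuration sketch for the last two points. The property names mapreduce.map.output.compress and mapreduce.map.output.compress.codec are the ones used by Hadoop 2.x/3.x (older releases used mapred.* equivalents), SnappyCodec is just one possible codec, and the two settings are independent of each other.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;

public class MapOutputTuning {

    public static void configure(Job job) {
        Configuration conf = job.getConfiguration();

        // Compress the intermediate map output to reduce shuffle traffic
        conf.setBoolean("mapreduce.map.output.compress", true);
        conf.setClass("mapreduce.map.output.compress.codec",
                SnappyCodec.class, CompressionCodec.class);

        // A map-only job: with zero reduce tasks, the Mapper output goes
        // straight to the OutputFormat without being sorted by key
        job.setNumReduceTasks(0);
    }
}
```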

  • The structure of the Mapper class (the original diagram is not reproduced; a simplified sketch follows):
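
The sketch below paraphrases the shape of org.apache.hadoop.mapreduce.Mapper as it appears in the Hadoop 2.x source, simplified and without annotations.

```java
import java.io.IOException;

import org.apache.hadoop.mapreduce.MapContext;

public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {

    // Context gives access to the input split, job configuration and output collection
    public abstract class Context
            implements MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
    }

    // Called once at the beginning of the task
    protected void setup(Context context) throws IOException, InterruptedException {
        // NOTHING by default
    }

    // Called once per key/value pair; the default implementation is an identity map
    @SuppressWarnings("unchecked")
    protected void map(KEYIN key, VALUEIN value, Context context)
            throws IOException, InterruptedException {
        context.write((KEYOUT) key, (VALUEOUT) value);
    }

    // Called once at the end of the task
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // NOTHING by default
    }

    // Entry point called by the framework: setup -> map loop -> cleanup
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);
        try {
            while (context.nextKeyValue()) {
                map(context.getCurrentKey(), context.getCurrentValue(), context);
            }
        } finally {
            cleanup(context);
        }
    }
}
```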

  

  • Its methods are as follows:

The first group: protected methods, which users override as needed.

1) setup: called once before each task starts.

2) map: called once for each key/value pair.

3) cleanup: called once when each task finishes.

The second group: the run method.

    The run() method is the entry point of the Mapper class; internally it calls the setup(), map(), and cleanup() methods. A minimal user-defined Mapper overriding the three protected methods is sketched below.
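
For comparison, here is a minimal user-defined Mapper that overrides all three protected methods. The class name, the wordcount.lowercase configuration key, and the tokenizing logic are illustrative assumptions, not taken from the original article.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    private boolean toLowerCase;

    // setup: called once before any map() call, e.g. to read job configuration
    @Override
    protected void setup(Context context) {
        toLowerCase = context.getConfiguration().getBoolean("wordcount.lowercase", false);
    }

    // map: called once per input key/value pair
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            String token = tokens.nextToken();
            word.set(toLowerCase ? token.toLowerCase() : token);
            context.write(word, ONE);
        }
    }

    // cleanup: called once after the last map() call
    @Override
    protected void cleanup(Context context) {
        // release any resources opened in setup()
    }
}
```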
