Hadoop Study Notes --- 1. An Anatomy of the WordCount Program


    I spent the past few days getting Hadoop installed, and today I finally ran my first Hadoop program. Here is the program, along with my understanding of it, to share with everyone.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  /**
   * TokenizerMapper extends Mapper<Object, Text, Text, IntWritable>.
   *
   * [One map per input file: one file gives one map, two files give two maps.]
   * map reads the input contents, splits them on " \t\n\r\f", and emits
   * a (word ==> one) key/value pair for each token.
   *
   * Type parameters:
   *   Object      : input key type
   *   Text        : input value type
   *   Text        : output key type
   *   IntWritable : output value type
   *
   * The point of Writable is that it lets the Hadoop framework know how to
   * serialize and deserialize an object of that type. WritableComparable adds
   * a compareTo method on top of Writable, so the framework also knows how to
   * sort objects of that type.
   *
   * @author yangchunlong.tw
   */
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  /**
   * IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable>.
   *
   * [No matter how many maps there are, there is only one reduce here; it aggregates them.]
   * reduce loops over all the values emitted by the maps and sums up the
   * (word ==> one) key/value pairs.
   *
   * The key here is the word set by the Mapper [reduce is called once per key];
   * when the loop finishes, what has been written to the context is the final result.
   *
   * @author yangchunlong.tw
   */
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);                   // main class
    job.setMapperClass(TokenizerMapper.class);            // mapper
    job.setCombinerClass(IntSumReducer.class);            // combiner
    job.setReducerClass(IntSumReducer.class);             // reducer
    job.setOutputKeyClass(Text.class);                    // output key class of the job
    job.setOutputValueClass(IntWritable.class);           // output value class of the job
    FileInputFormat.addInputPath(job, new Path("in"));    // input file
    FileInputFormat.addInputPath(job, new Path("in2"));   // second input file; produces 2 maps
    FileOutputFormat.setOutputPath(job, new Path("out")); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);     // wait for completion, then exit
  }
}
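One detail in the driver is worth noting: in Hadoop 2.x the Job constructor used above is deprecated. The equivalent modern idiom (a sketch; the behavior is the same) is:

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "word count");  // replaces new Job(conf, "word count")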

Let's now analyze what this program does:

         The above is a typical MapReduce program. The mapper, combiner, and reducer roles are filled by TokenizerMapper, IntSumReducer, and IntSumReducer respectively. MapReduce processes data by operating on key/value pairs, in the general form:

                                                 map:    (k1, v1)       -> list(k2, v2)

                                                 reduce: (k2, list(v2)) -> list(k3, v3)

In WordCount, map takes a (byte offset, line of text) pair and emits one (word, 1) pair per token, and reduce takes (word, list of 1s) and emits (word, total count).

           The key/value format of the map phase is determined by the input format. With the default TextInputFormat, each line is processed as one record: the key is the byte offset of the start of the line relative to the beginning of the file, and the value is the text of the line. (This example uses the default TextInputFormat.) Suppose we have the following weather data:


  • Stored in ASCII, one record per line
  • Counting characters from 0, characters 15 through 18 are the year
  • Characters 25 through 29 are the temperature, where position 25 is the sign (+/-)

                             0067011990999991950051507+0000+
                             0043011990999991950051512+0022+
                             0043011990999991950051518-0011+
                             0043012650999991949032412+0111+
                             0043012650999991949032418+0078+
                             0067011990999991937051507+0001+
                             0043011990999991937051512-0002+
                             0043011990999991945051518+0001+
                             0043012650999991945032412+0002+
                             0043012650999991945032418+0078+

                          We now want to find the maximum temperature recorded in each year.
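A mapper for this job only needs to slice the year and the temperature out of each fixed-width record. Here is a minimal sketch, assuming the offsets described above (the class name MaxTemperatureMapper is illustrative; this code is not part of the original post):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String line = value.toString();
    String year = line.substring(15, 19);  // characters 15-18 hold the year
    // Characters 25-29 hold the temperature; position 25 is the sign.
    int airTemperature;
    if (line.charAt(25) == '+') {
      airTemperature = Integer.parseInt(line.substring(26, 30));
    } else {
      airTemperature = Integer.parseInt(line.substring(25, 30));
    }
    context.write(new Text(year), new IntWritable(airTemperature));
  }
}

With TextInputFormat, the key passed to map is the line's byte offset, which this mapper simply ignores.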

                  For the example above, the input key-value pairs seen in the map phase are:


(0, 0067011990999991950051507+0000+)

(33, 0043011990999991950051512+0022+)

(66, 0043011990999991950051518-0011+)

(99, 0043012650999991949032412+0111+)

(132, 0043012650999991949032418+0078+)

(165, 0067011990999991937051507+0001+)

(198, 0043011990999991937051512-0002+)

(231, 0043011990999991945051518+0001+)

(264, 0043012650999991945032412+0002+)

(297, 0043012650999991945032418+0078+)

The map output is then:

(1950, 0)

(1950, 22)

(1950, -11)

(1949, 111)

(1949, 78)

(1937, 1)

(1937, -2)

(1945, 1)

(1945, 2)

(1945, 78)

Before reduce runs, the map outputs are grouped by key: all values with the same key are placed in a single list, which becomes the input to reduce:

(1950, [0, 22, -11])

(1949, [111, 78])

(1937, [1, -2])

(1945, [1, 2, 78])

In the reduce phase, the maximum temperature is selected from each list, and the (year, maximum temperature) pairs are written as the final output:

(1950, 22)

(1949, 111)

(1937, 1)

(1945, 78)
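A reducer implementing this selection is equally short. A minimal sketch, matching the mapper above (again illustrative, not from the original post):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int maxValue = Integer.MIN_VALUE;  // running maximum for this year
    for (IntWritable value : values) {
      maxValue = Math.max(maxValue, value.get());
    }
    context.write(key, new IntWritable(maxValue));
  }
}

Because taking a maximum is commutative and associative, this class could also serve as the job's combiner, just as WordCount reuses IntSumReducer for both roles.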

