WordCount in Hadoop MapReduce: A Detailed Look at How It Runs
Source code
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  // Mapper: for each input line, emit one <word, 1> pair per token.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer (also registered as the combiner): sum the counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    // Job.getInstance replaces the deprecated new Job(conf, ...) constructor.
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
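After compiling WordCount and packaging it into a jar, the job is submitted with the hadoop jar command. A minimal invocation, assuming a jar named wordcount.jar and hypothetical HDFS input/output paths:

    hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output

Note that the output directory must not already exist; the framework creates it and refuses to overwrite an existing one.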
This section explains WordCount in more detail. The execution proceeds as follows:

1) The input files are divided into splits. Because the test files are small, each file forms a single split, and each split is then broken into <key, value> pairs line by line. The MapReduce framework performs this step automatically; the key is the byte offset of the line within the file, and that offset counts the end-of-line characters.

2) The <key, value> pairs from the split are handed to the user-defined map method, which produces new <key, value> pairs.

3) Once the map method has emitted its <key, value> pairs, the Mapper sorts them by key and runs the Combine step, summing the values of all pairs that share the same key; this yields the Mapper's final output.

4) The Reducer first sorts the data it receives from the Mappers, then passes it to the user-defined reduce method, which produces the new <key, value> pairs that form WordCount's final output.
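To make the four steps concrete, here is a minimal local sketch of the same data flow, assuming a hypothetical two-line input ("Hello World" and "Hello Hadoop"); it replaces the Hadoop runtime with plain Java collections so the intermediate <key, value> pairs can be printed and inspected:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.StringTokenizer;
    import java.util.TreeMap;

    public class WordCountTrace {
        public static void main(String[] args) {
            // Step 1: the framework turns each line into an <offset, line> pair;
            // here we just keep the lines themselves.
            String[] lines = {"Hello World", "Hello Hadoop"};

            // Step 2: map each line to <word, 1> pairs.
            List<Map.Entry<String, Integer>> mapOutput = new ArrayList<>();
            for (String line : lines) {
                StringTokenizer itr = new StringTokenizer(line);
                while (itr.hasMoreTokens()) {
                    mapOutput.add(Map.entry(itr.nextToken(), 1));
                }
            }
            System.out.println("map output:     " + mapOutput);

            // Steps 3 and 4: sort by key and sum the values per key, which is
            // what the combiner does on the map side and what the reducer
            // repeats after the shuffle. A TreeMap keeps the keys sorted.
            Map<String, Integer> counts = new TreeMap<>();
            for (Map.Entry<String, Integer> e : mapOutput) {
                counts.merge(e.getKey(), e.getValue(), Integer::sum);
            }
            System.out.println("reduced output: " + counts);
        }
    }

Running it prints map output: [Hello=1, World=1, Hello=1, Hadoop=1] followed by reduced output: {Hadoop=1, Hello=2, World=1}, mirroring what the Mapper emits before the Combine step and what the Reducer finally writes.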