The WordCount Example --- A Short Summary of Learning MapReduce (1)
It has been a long time since I first encountered MapReduce. Looking back at the beginning, a single program had so much code that I had no idea where to start, and I took many detours. Later, learning step by step, I found it is not some insurmountable pit after all.
Practice more and summarize more, and you will discover the secrets inside.
The classic WordCount
Approach:
1. Map phase: read the input file line by line, split each line into words, and emit one pair per word in the format <word, 1>.
2. Shuffle phase: the framework groups the map output by key, producing <word, {1, 1, ...}>.
3. Reduce phase: sum the grouped ones for each word and emit <word, count>.
```java
package day1;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class wordCountDemo {

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("input err!");
            System.exit(-1);
        }
        Job job = new Job(new Configuration(), "WordCount");
        job.setJarByClass(wordCountDemo.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(wMap.class);
        job.setReducerClass(wReduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.waitForCompletion(true);
    }

    // Map phase: called once per input line; emits <word, 1> for every word
    public static class wMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            String[] lines = value.toString().split(" ");
            for (String words : lines) {
                context.write(new Text(words), new IntWritable(1));
            }
        }
    }

    // map phase emits:   <key, value>
    // shuffle groups to: <key, {value, value, ...}>
    // Reduce phase: sum the grouped counts for each word
    public static class wReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text k1, Iterable<IntWritable> v1,
                Reducer<Text, IntWritable, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : v1) {
                sum += count.get();
            }
            context.write(k1, new IntWritable(sum));
        }
    }
}
```
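The map → shuffle → reduce flow above can be sanity-checked without a Hadoop cluster. Here is a minimal plain-Java simulation of the same three phases (the class and method names are mine, purely for illustration); it mirrors the Mapper's <word, 1> emission, the framework's grouping by key, and the Reducer's summation:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WordCountFlow {

    // Simulates map -> shuffle -> reduce for WordCount in plain Java
    static Map<String, Integer> count(String[] lines) {
        // Map phase: emit a <word, 1> pair for every word on every line
        List<Map.Entry<String, Integer>> mapped = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {
                mapped.add(new AbstractMap.SimpleEntry<>(word, 1));
            }
        }

        // Shuffle phase: group values by key -> <word, {1, 1, ...}>
        // (TreeMap also sorts keys, like the framework does before reduce)
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> kv : mapped) {
            grouped.computeIfAbsent(kv.getKey(), k -> new ArrayList<>()).add(kv.getValue());
        }

        // Reduce phase: sum each word's list of ones
        Map<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) {
                sum += v;
            }
            result.put(e.getKey(), sum);
        }
        return result;
    }

    public static void main(String[] args) {
        String[] input = { "hello world", "hello mapreduce" };
        System.out.println(count(input)); // prints {hello=2, mapreduce=1, world=1}
    }
}
```

Walking a tiny input through each phase by hand like this is the quickest way to convince yourself why the Mapper emits a constant 1 and why the Reducer only needs to add.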
WordCount really isn't hard. It may be the entry-level program, but it is remarkably useful: many real jobs follow exactly the same pattern. While reviewing, I wrote another one myself.
```java
package day1;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Sum each student's total score.
 * Input format (one record per line):
 *   name:subject:score
 *   张三:语文:12
 *   李四:数学:23
 *   张三:英语:1000
 *   ...
 * @author YXY
 */
public class SumDemo {

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("input err!");
            System.exit(-1);
        }
        Job job = new Job(new Configuration(), "Sumdemo");
        // was wordCountDemo.class in my first draft -- a copy-paste slip;
        // it must point at this job's own class
        job.setJarByClass(SumDemo.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(SumMap.class);
        job.setReducerClass(SumReduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.waitForCompletion(true);
    }

    // Map phase: parse "name:subject:score" and emit <name, score>
    public static class SumMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            String[] lines = value.toString().split(":");
            String name = lines[0].trim();
            int score = Integer.parseInt(lines[2].trim());
            context.write(new Text(name), new IntWritable(score));
        }
    }

    // Reduce phase: sum all scores grouped under the same name
    public static class SumReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                Reducer<Text, IntWritable, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable score : values) {
                sum += score.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
```
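As with WordCount, the SumDemo logic can be checked locally before packaging a jar. This is a hedged plain-Java sketch of the same parse → group-by-name → sum flow (class and method names are my own, not part of the job):

```java
import java.util.Map;
import java.util.TreeMap;

public class ScoreSumFlow {

    // Simulates SumDemo: parse "name:subject:score" records,
    // group them by name, and sum each student's scores
    static Map<String, Integer> totalScores(String[] lines) {
        Map<String, Integer> totals = new TreeMap<>();
        for (String line : lines) {
            String[] parts = line.split(":");         // name : subject : score
            String name = parts[0].trim();
            int score = Integer.parseInt(parts[2].trim());
            // merge() plays the Reducer's role: add to the running sum per key
            totals.merge(name, score, Integer::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        String[] data = { "张三:语文:12", "李四:数学:23", "张三:英语:1000" };
        // totals: 张三 -> 1012, 李四 -> 23
        System.out.println(totalScores(data));
    }
}
```

Note that the local version can fold map and reduce into one loop because everything fits in one process; the Hadoop job splits them so that the grouping (shuffle) can happen across machines.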
Notes
1. When using the IDE's auto-import suggestions, be careful to import the correct package: many classes share the same name (pick the org.apache.hadoop ones).
2. Understand this pattern, and review how MapReduce works under the hood.
3. Ctrl+Alt+L is the reformat-code shortcut; use it to keep the code tidy.
4. Don't be lazy. At the beginning, when the code doesn't flow, it is tempting to copy and paste instead of typing it out, but that is the biggest disservice you can do to yourself.