Hadoop: finding the maximum key
Before going further, consider a question: when is it safe to print the maximum value? Only after every map() call in a task has finished, which is exactly when the framework invokes cleanup(...) once for that task.
A brief note on how values flow from map to reduce: each line of the source file is turned into a <key, value> pair and processed by map(), which emits a new <key, value> pair. The pairs are then partitioned, sorted within each partition, and handed to reduce.
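The mapper-side trick of folding every record into a running maximum can be sketched in plain Java, with no Hadoop dependency; the class and method names here are illustrative, not from the original post:

```java
// Minimal sketch of the streaming-max pattern the mapper uses:
// start from Long.MIN_VALUE (the identity for max over longs)
// and fold each parsed record into the running maximum.
public class StreamingMax {
    public static long maxOf(String[] lines) {
        long max = Long.MIN_VALUE;        // identity element for max
        for (String line : lines) {
            long value = Long.parseLong(line.trim());
            if (value > max) {
                max = value;              // keep the largest value seen so far
            }
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(maxOf(new String[]{"3", "97", "12"}));
    }
}
```

Because each mapper only ever keeps one long in memory, the approach scales to arbitrarily large inputs; the per-record work is constant.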
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

static class MyMapper extends Mapper<LongWritable, Text, LongWritable, NullWritable> {
    private long max = Long.MIN_VALUE;

    @Override
    protected void map(LongWritable k1, Text v1, Context context)
            throws IOException, InterruptedException {
        // Track the mapper-local maximum; nothing is emitted per record.
        final long temp = Long.parseLong(v1.toString());
        if (temp > max) {
            max = temp;
        }
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        // Runs once, after all map() calls in this task: emit the local maximum.
        context.write(new LongWritable(max), NullWritable.get());
    }
}

static class MyReducer extends Reducer<LongWritable, NullWritable, LongWritable, NullWritable> {
    private long max = Long.MIN_VALUE;

    @Override
    protected void reduce(LongWritable k2, Iterable<NullWritable> v2s, Context context)
            throws IOException, InterruptedException {
        // Each incoming key is one mapper's local maximum; keep the global maximum.
        final long temp = k2.get();
        if (temp > max) {
            max = temp;
        }
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        // Emit the global maximum once all keys have been seen.
        context.write(new LongWritable(max), NullWritable.get());
    }
}
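For completeness, a driver that wires the two classes into a job might look like the following; this is job configuration only, and the class name MaxValue and the argument-based input/output paths are assumptions, not part of the original post:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver class; wires MyMapper and MyReducer into one job.
public class MaxValue {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "max key");
        job.setJarByClass(MaxValue.class);
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);
        job.setNumReduceTasks(1);  // one reducer must see every mapper's local max
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that a single reduce task is essential here: with more than one reducer, each would write its own partial maximum and the output would contain several values instead of one.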