Selection Operations with Map-Reduce
Selections really do not need the full power of MapReduce. They can be done most conveniently in the map portion alone, although they could also be done in the reduce portion alone. Here is a MapReduce implementation of the selection σC(R).

The Map Function: For each tuple t in R, test if it satisfies C. If so, produce the key-value pair (t, t). That is, both the key and value are t.

The Reduce Function: The Reduce function is the identity. It simply passes each key-value pair to the output.

Note that the output is not exactly a relation, because it has key-value pairs. However, a relation can be obtained by using only the value components (or only the key components) of the output.
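Before the full Hadoop job below, the pattern described above (emit (t, t) for each tuple satisfying C; identity reduce) can be sketched in plain Java without Hadoop. The relation, tuples, and predicate here are made-up illustrations, not part of the original example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class SelectionSketch {

    // Map phase: for each tuple t in R that satisfies condition C,
    // emit the key-value pair (t, t).
    static List<Map.Entry<String, String>> map(List<String> tuples, Predicate<String> c) {
        List<Map.Entry<String, String>> pairs = new ArrayList<>();
        for (String t : tuples) {
            if (c.test(t)) {
                pairs.add(Map.entry(t, t));
            }
        }
        return pairs;
    }

    // Reduce phase: the identity -- pass every key-value pair through unchanged.
    static List<Map.Entry<String, String>> reduce(List<Map.Entry<String, String>> pairs) {
        return pairs;
    }

    public static void main(String[] args) {
        // Hypothetical relation R: each tuple is "name gender".
        List<String> r = List.of("Alice F", "Bob M", "Carol F");
        // Condition C: the gender field is "F".
        List<Map.Entry<String, String>> out = reduce(map(r, t -> t.endsWith("F")));
        // Taking only the key components recovers the selected relation.
        for (Map.Entry<String, String> kv : out) {
            System.out.println(kv.getKey());
        }
    }
}
```

Printing only the keys (or only the values) of the output pairs yields the selected tuples, matching the remark above that the relation is recovered from either component.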
The code below performs such a selection: it keeps the records whose gender field is 女 (female). Each line of the input file is one record, and every record contains a gender field.

The code is as follows:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Selection {

  public static class TokenizerMapper extends Mapper<Object, Text, Text, Text> {

    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      // Split the input text into lines; each line is one record.
      StringTokenizer tokenizerArticle = new StringTokenizer(value.toString(), "\n");
      while (tokenizerArticle.hasMoreTokens()) {
        String line = tokenizerArticle.nextToken();                 // one record
        StringTokenizer tokenizerLine = new StringTokenizer(line);  // split into fields
        while (tokenizerLine.hasMoreTokens()) {
          if (tokenizerLine.nextToken().equals("女")) {             // condition C: gender is female
            word.set(line);              // the record satisfies C,
            context.write(word, word);   // so emit it as both key and value
          }
        }
      }
    }
  }

  public static class SelectionReducer extends Reducer<Text, Text, Text, Text> {

    public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      // Identity reduce: write each key straight to the output; values are ignored.
      context.write(key, new Text(""));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length < 2) {
      System.err.println("Usage: selection <in> [<in>...] <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "Selection");
    job.setJarByClass(Selection.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(SelectionReducer.class);
    job.setReducerClass(SelectionReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    for (int i = 0; i < otherArgs.length - 1; ++i) {
      FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
    }
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[otherArgs.length - 1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}