Running the WordCount Program


I. Assignment

Write a MapReduce program, WordCount.

II. Objective

The program should count the total number of words in a document as well as the frequency of each individual word.

III. Task Analysis

Counting is straightforward: the mapper splits the document into individual words and emits each one in the form <word, 1>; the reducer then adds up the values that share the same key, which yields each word's frequency. Summing all of the per-word counts gives the total number of words.
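For example, given the single input line "hello world hello", the mapper emits <hello,1>, <world,1>, <hello,1>; after the shuffle the reducer receives <hello,[1,1]> and <world,[1]> and outputs <hello,2> and <world,1>, for a total of 3 words.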

IV. Procedure

1. Take HDFS out of safe mode:
hadoop dfsadmin -safemode leave
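(On newer Hadoop releases the hadoop dfsadmin form is deprecated; the equivalent command is hdfs dfsadmin -safemode leave.)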
2. Copy the file to be processed into HDFS:
bin/hadoop dfs -copyFromLocal <local source path> hdfs:/
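For example, assuming the input file sits at /home/training/input.txt on the local disk (a path made up here for illustration):

bin/hadoop dfs -copyFromLocal /home/training/input.txt hdfs:/input.txt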
3. Start Eclipse, create a Java project, import the required Hadoop jar files, and begin coding.
Mapper code:
package com.bigdata;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * @author training
 * Class: WordCountMapper
 */
public class WordCountMapper extends
        Mapper<LongWritable, Text, Text, IntWritable> {

    // Possible optimization: reuse Text/IntWritable instances as fields
    // instead of allocating new objects for every output record.
    @Override
    public void map(LongWritable inputKey, Text inputVal, Context context)
            throws IOException, InterruptedException {
        String line = inputVal.toString();
        // Split the line on runs of non-word characters.
        String[] splits = line.trim().split("\\W+");
        for (String outputKey : splits) {
            if (outputKey.isEmpty()) {
                continue; // split("\\W+") can yield a leading empty token
            }
            context.write(new Text(outputKey), new IntWritable(1));
        }
    }
}
The mapper first strips leading and trailing whitespace with trim(), then splits the line on runs of non-word characters using the regex \\W+. Each element of splits is then emitted in the form <word, 1>.
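As a quick sanity check of the splitting logic, the following standalone sketch (not part of the job; the class name and sample line are invented for illustration) prints exactly what the mapper would emit for one line of input:

public class SplitDemo {
    public static void main(String[] args) {
        String line = "  Hello, world! Hello.  ";
        // Same logic as in WordCountMapper.map()
        for (String word : line.trim().split("\\W+")) {
            System.out.println("<" + word + ",1>");
        }
        // Prints: <Hello,1>  <world,1>  <Hello,1>
    }
}

Note that Hello and hello are counted as different words here; lowercasing the line before splitting would merge them.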
Next comes the reducer:
package com.bigdata;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends
        Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> listOfValues,
            Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : listOfValues) {
            sum = sum + val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
The reducer adds up all values that share the same key and emits <word, sum>. With that, the WordCount logic itself is complete. Finally, the driver:
package com.bigdata;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new WordCountDriver(), args);
        System.exit(exitCode);
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf(), "Basic Word Count Job");
        job.setJarByClass(WordCountDriver.class);

        // Map and Reduce classes
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setNumReduceTasks(1);

        job.setInputFormatClass(TextInputFormat.class);

        // the map output types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // the reduce output types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // input and output paths come from the command line
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Return a non-zero exit code if the job fails.
        return job.waitForCompletion(true) ? 0 : 1;
    }
}
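Because setNumReduceTasks(1) sends every key to a single reducer, the job produces exactly one output file. The input and output paths are taken from the first two command-line arguments, which is why the commands in the steps below pass them explicitly.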
4. When coding is finished, export the project as a jar file.
5. Run the MapReduce job.
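Assuming the jar was exported as wordcount.jar and the input was copied to hdfs:/input.txt as above (both names chosen for illustration), the job can be launched with:

bin/hadoop jar wordcount.jar com.bigdata.WordCountDriver hdfs:/input.txt hdfs:/output

The output directory must not already exist; Hadoop refuses to overwrite an existing output path.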

6. View the results.
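With a single reducer, the counts end up in one file named part-r-00000 under the output directory, which can be printed with:

bin/hadoop dfs -cat hdfs:/output/part-r-00000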
