Hands-On Hadoop (1): WordCount


Environment: Hadoop 2.7.3, JDK 1.8.0_111, Ubuntu 16.04


Prepare any plain-text file containing some words, and name it file.txt.
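
For example, file.txt might contain something like the following (the words themselves are arbitrary; this sample is only used to illustrate the expected output later):

hello world
hello hadoop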


Then upload it to HDFS: hdfs dfs -put ~/file.txt input
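
To confirm the upload, you can list the directory with the standard HDFS shell (an optional check):

hdfs dfs -ls input

It should show file.txt under the input directory.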


Here input is a directory that must be created beforehand; you can create it with hadoop fs -mkdir input. Note that a relative path like input resolves to the current user's HDFS home directory, /user/<username>/.


Also note that the output directory must not exist before the job runs; if it does, Hadoop reports an error and refuses to run the job.
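
If an earlier run left an output directory behind, remove it first with the standard HDFS shell before resubmitting the job:

hdfs dfs -rm -r output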


The complete code is as follows:

package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      // Split each input line into tokens and emit (word, 1) for each token.
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      // Sum the counts for each word and emit (word, total).
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = Job.getInstance(conf, "word count"); // new Job(conf, ...) is deprecated in Hadoop 2.x
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);     // set the Mapper class
    job.setCombinerClass(IntSumReducer.class);     // combiner does a local reduce on each map node
    job.setReducerClass(IntSumReducer.class);      // set the Reducer class
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));   // may be called multiple times to add several input paths
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1])); // only one output path is allowed
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

After compiling, package the program into a jar named MyWordCount.jar and place it in your home directory.
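
One way to compile and package it is sketched below. This assumes the hadoop command is on your PATH (hadoop classpath prints the dependencies needed for compilation); the classes directory is just a scratch name used here so that the jar preserves the org/apache/hadoop/examples package layout:

mkdir classes
javac -classpath "$(hadoop classpath)" -d classes WordCount.java
jar cf MyWordCount.jar -C classes .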


From your home directory, start the job by running: hadoop jar MyWordCount.jar org.apache.hadoop.examples.WordCount input output


When the job finishes, view the results with: hadoop fs -cat output/*
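
With the sample file.txt shown earlier, the result would look something like this (one word and its count per line, tab-separated and sorted by key):

hadoop	1
hello	2
world	1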


