Hands-on Hadoop (1): WordCount
Environment: Hadoop 2.7.3, JDK 1.8.0_111, Ubuntu 16.04.
Prepare any text file containing some words and name it file.txt.
Then upload it to HDFS: hdfs dfs -put ~/file.txt input.
Here input is a directory created beforehand; you can create it with hadoop fs -mkdir input.
Also note that the output directory must not exist before the job runs; if it does, Hadoop reports an error and refuses to run.
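The preparation steps above can be sketched as a short shell session (the paths file.txt, input, and output follow this article; adjust them to your own setup):

```shell
# Create the HDFS input directory (-p makes this a no-op if it already exists)
hdfs dfs -mkdir -p input

# Upload the local text file into it
hdfs dfs -put ~/file.txt input

# Remove any stale output directory first, since Hadoop refuses to
# overwrite an existing output path
hdfs dfs -rm -r -f output
```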
The full code follows:
package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      // Split the line on whitespace and emit (word, 1) for each token
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      // Sum all the 1s emitted for this word
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    // Job.getInstance replaces the Job constructor, which is deprecated in Hadoop 2.x
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);   // set the map class
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);    // set the reduce class
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // addInputPath may be called repeatedly to add multiple input paths
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    // only one output path is allowed
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
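To see what the job above computes without a cluster, here is a minimal single-process sketch in plain Java (no Hadoop dependencies; class and method names are illustrative only): the "map" phase tokenizes each line with StringTokenizer exactly as TokenizerMapper does, and the "reduce" phase sums the per-word counts as IntSumReducer does.

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

// Local simulation of the WordCount MapReduce job: tokenize every line
// (the map phase), then sum counts per word (the shuffle + reduce phases).
public class LocalWordCount {

    public static Map<String, Integer> count(String[] lines) {
        // TreeMap keeps keys sorted, mirroring the sorted reducer output
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            // "map": split the record on whitespace
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                // "reduce": accumulate 1 per occurrence of the word
                counts.merge(itr.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] file = { "hello hadoop", "hello world" };
        // Print in the same "word<TAB>count" format as the HDFS output
        for (Map.Entry<String, Integer> e : count(file).entrySet()) {
            System.out.println(e.getKey() + "\t" + e.getValue());
        }
    }
}
```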
After compiling, package the program into a jar named MyWordCount.jar and place it in your home directory.
From the home directory, run hadoop jar MyWordCount.jar org.apache.hadoop.examples.WordCount input output to start the job.
When the job finishes, run hadoop fs -cat output/* to view the results.