Developing MapReduce for Hadoop 2 with Eclipse

Development

Development is done on Windows: Eclipse connects to the Hadoop cluster and the job runs remotely on it.

The reference code is the classic WordCount example.

package mapreduce;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountApp {
    static final String INPUT_PATH = "hdfs://192.168.7.12:9000/hello";
    static final String OUT_PATH = "hdfs://192.168.7.12:9000/helloout";

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "192.168.7.12:9001");

        // Delete the output path if it already exists, otherwise the job fails.
        final FileSystem fileSystem = FileSystem.get(new URI(INPUT_PATH), conf);
        final Path outPath = new Path(OUT_PATH);
        if (fileSystem.exists(outPath)) {
            fileSystem.delete(outPath, true);
        }

        final Job job = new Job(conf, WordCountApp.class.getSimpleName());

        // 1.1 Specify where the input files are read from.
        FileInputFormat.setInputPaths(job, INPUT_PATH);
        // Specify how the input files are parsed: each line becomes a key-value pair.
        //job.setInputFormatClass(TextInputFormat.class);

        // 1.2 Specify the custom Mapper class.
        job.setMapperClass(MyMapper.class);
        // Map output <k,v> types; may be omitted when <k3,v3> matches <k2,v2>.
        //job.setMapOutputKeyClass(Text.class);
        //job.setMapOutputValueClass(LongWritable.class);

        // 1.3 Partitioning.
        //job.setPartitionerClass(HashPartitioner.class);
        // Run with a single reduce task.
        //job.setNumReduceTasks(1);

        // 1.4 TODO sorting and grouping
        // 1.5 TODO combiner

        // 2.2 Specify the custom Reducer class.
        job.setReducerClass(MyReducer.class);
        // Specify the reduce output types.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // 2.3 Specify where the output is written.
        FileOutputFormat.setOutputPath(job, outPath);
        // Specify the output format class.
        //job.setOutputFormatClass(TextOutputFormat.class);

        // Submit the job to the cluster and wait for completion.
        job.waitForCompletion(true);
    }

    /**
     * KEYIN    (k1): byte offset of the line
     * VALUEIN  (v1): text content of the line
     * KEYOUT   (k2): a word occurring in the line
     * VALUEOUT (v2): occurrence count for the word, fixed at 1
     */
    static class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        protected void map(LongWritable k1, Text v1, Context context)
                throws java.io.IOException, InterruptedException {
            final String[] splited = v1.toString().split("\t");
            for (String word : splited) {
                context.write(new Text(word), new LongWritable(1));
            }
        }
    }

    /**
     * KEYIN    (k2): a word occurring in the text
     * VALUEIN  (v2): occurrence counts for the word
     * KEYOUT   (k3): a distinct word in the text
     * VALUEOUT (v3): total number of occurrences of that word
     */
    static class MyReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        protected void reduce(Text k2, java.lang.Iterable<LongWritable> v2s, Context ctx)
                throws java.io.IOException, InterruptedException {
            long times = 0L;
            for (LongWritable count : v2s) {
                times += count.get();
            }
            ctx.write(k2, new LongWritable(times));
        }
    }
}
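
Before running, the input file must already exist on HDFS, and the result can be checked from the command line afterwards. A minimal sketch using the standard hadoop fs commands, assuming a local file hello.txt whose words are separated by tabs (matching the split("\t") in MyMapper):

hadoop fs -put hello.txt hdfs://192.168.7.12:9000/hello

After the job completes, print the result:

hadoop fs -cat hdfs://192.168.7.12:9000/helloout/part-r-00000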


Debugging

Run the main class directly in Eclipse as a Java application.
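
If no client-side log output appears in the Eclipse console, log4j is usually not configured. A minimal log4j.properties on the classpath (for example in the src directory) makes the job client logs visible; this is plain log4j 1.x configuration, not anything specific to this article:

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n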


Testing

Unit tests for the Mapper and Reducer can be developed with MRUnit, as sketched below.
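
A minimal sketch of such tests, assuming the MRUnit dependency built for Hadoop 2 (for example org.apache.mrunit:mrunit:1.1.0 with the hadoop2 classifier) and JUnit 4 are on the classpath; the test class name is illustrative:

package mapreduce;

import java.util.Arrays;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.junit.Test;

public class WordCountTest {

    @Test
    public void testMapper() throws Exception {
        // Feed one input line to MyMapper and assert the emitted <k2,v2> pairs.
        MapDriver.newMapDriver(new WordCountApp.MyMapper())
                .withInput(new LongWritable(0), new Text("hello\tworld"))
                .withOutput(new Text("hello"), new LongWritable(1))
                .withOutput(new Text("world"), new LongWritable(1))
                .runTest();
    }

    @Test
    public void testReducer() throws Exception {
        // Feed grouped counts to MyReducer and assert the summed total.
        ReduceDriver.newReduceDriver(new WordCountApp.MyReducer())
                .withInput(new Text("hello"),
                        Arrays.asList(new LongWritable(1), new LongWritable(1)))
                .withOutput(new Text("hello"), new LongWritable(2))
                .runTest();
    }
}

MapDriver and ReduceDriver run the map and reduce functions in-process, so these tests need no cluster or HDFS connection.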

Troubleshooting

1. Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

This is the usual problem when configuring Eclipse to run the WordCount program against Hadoop 2.2.0.

Solution:

Download the bin package from https://github.com/srccodes/hadoop-common-2.2.0-bin, then either copy winutils.exe from its bin directory into hadoop/bin, or replace the entire hadoop/bin directory with the downloaded bin package. Also set the HADOOP_HOME environment variable to the Hadoop installation directory and add %HADOOP_HOME%\bin to PATH.
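
If editing system environment variables is inconvenient, a commonly used alternative (an addition here, not a step from the original article) is to set the hadoop.home.dir system property at the very start of main(), pointing at the directory that contains bin\winutils.exe:

// Assumed local path; adjust to wherever the bin package was unpacked.
System.setProperty("hadoop.home.dir", "D:\\hadoop-2.2.0");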


Problem 2: file I/O exception

After fixing the first problem and running the program again, the following error may still appear:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

Cause: the hadoop.dll file is missing.

Solution:

Copy hadoop.dll from the downloaded bin package into C:\Windows\System32.

That resolves the problems above. Happy coding!

