Map/Reduce API Example 1

Environment:
CentOS 6.5, Eclipse 4.4.2, Hadoop 1.1.2


Goal: extract the specified fields from each line of the data source, and count the lines that fail to parse.


1. Preparing the Data Source

Two data files, test1.txt and test2.txt, were uploaded to the hdfs://vm1:9000/user/hadoop/in directory.

Their contents are as follows:

test1.txt

MAY 12:10:12 192.158.202 calvin
THR 11:22:23 192.168.22.3 james
THR 22:33:22 192.155.23.22 john
FRI 23:22:12 158.129.234.23 kate
LL
DDI 23:11:33 192.168.11.10 frame

test2.txt

EIG 12:10:12 192.158.202 calvin
OCT 11:22:23 192.168.22.3 james
NUM 22:33:22 192.155.23.22 john
SEC 23:22:12 158.129.234.23 kate
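Each well-formed line is "month time ip name", and the job in the next section keeps only the first three fields, skipping anything shorter (such as the lone "LL" line in test1.txt). That per-line logic can be sketched in plain Java before involving Hadoop at all; ExtractSketch and extract are hypothetical names used here for illustration only:

```java
public class ExtractSketch {

    // Returns "month time ip" for a well-formed line, or null when the line
    // has fewer than three space-separated fields -- the case the MapReduce
    // job below counts via its LINE_SKIP counter.
    static String extract(String line) {
        String[] parts = line.split(" ");
        if (parts.length < 3) {
            return null;
        }
        return parts[0] + " " + parts[1] + " " + parts[2];
    }

    public static void main(String[] args) {
        String[] lines = {
            "MAY 12:10:12 192.158.202 calvin",
            "THR 11:22:23 192.168.22.3 james",
            "LL",                                 // malformed: too few fields
            "DDI 23:11:33 192.168.11.10 frame"
        };
        int skipped = 0;
        for (String line : lines) {
            String fields = extract(line);
            if (fields == null) {
                skipped++;                        // mirrors Counter.LINE_SKIP
            } else {
                System.out.println(fields);
            }
        }
        System.out.println("skipped=" + skipped);
    }
}
```

Running this over the four sample lines prints three "month time ip" records and reports one skipped line, matching what the job's counter will show.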


2. Writing the Program

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class Test_1 extends Configured implements Tool {

    // Counter for input lines that could not be parsed
    enum Counter {
        LINE_SKIP
    }

    public static class Map_1 extends Mapper<LongWritable, Text, NullWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            try {
                // Each well-formed line is "month time ip name";
                // keep only the first three fields
                String[] lineSplits = line.split(" ");
                String month = lineSplits[0];
                String time = lineSplits[1];
                String ip = lineSplits[2];
                Text out = new Text(month + " " + time + " " + ip);
                context.write(NullWritable.get(), out);
            } catch (Exception e) {
                // Malformed line: count it and move on
                context.getCounter(Counter.LINE_SKIP).increment(1);
            }
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = this.getConf();
        Job job = new Job(conf, "Test_1");
        job.setJarByClass(Test_1.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(Map_1.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);

        job.waitForCompletion(true);
        // By convention an exit code of 0 signals success
        return job.isSuccessful() ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new Test_1(), args);
        System.exit(res);
    }
}


3. Running the Program in Eclipse

Run As -> Run Configurations -> Arguments tab -> enter hdfs://vm1:9000/user/hadoop/in hdfs://vm1:9000/user/hadoop/out under Program Arguments -> Run


4. Results

The results can be viewed in the hdfs://vm1:9000/user/hadoop/out/part-r-00000 file:

EIG 12:10:12 192.158.202
OCT 11:22:23 192.168.22.3
NUM 22:33:22 192.155.23.22
SEC 23:22:12 158.129.234.23
MAY 12:10:12 192.158.202
THR 11:22:23 192.168.22.3
THR 22:33:22 192.155.23.22
FRI 23:22:12 158.129.234.23
DDI 23:11:33 192.168.11.10
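As a sanity check, the map-only pass can be simulated in plain Java over the ten input lines from section 1: it emits the nine records above and drops the malformed "LL" line. ResultCheck and run are hypothetical names for this illustration, not part of the job:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ResultCheck {

    // Simulates the map-only pass: keeps "month time ip" for every line
    // with at least three space-separated fields, drops the rest.
    static List<String> run(List<String> lines) {
        List<String> out = new ArrayList<>();
        for (String line : lines) {
            String[] parts = line.split(" ");
            if (parts.length >= 3) {
                out.add(parts[0] + " " + parts[1] + " " + parts[2]);
            }
            // shorter lines are skipped, as the job's LINE_SKIP counter records
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList(
            // test2.txt
            "EIG 12:10:12 192.158.202 calvin",
            "OCT 11:22:23 192.168.22.3 james",
            "NUM 22:33:22 192.155.23.22 john",
            "SEC 23:22:12 158.129.234.23 kate",
            // test1.txt
            "MAY 12:10:12 192.158.202 calvin",
            "THR 11:22:23 192.168.22.3 james",
            "THR 22:33:22 192.155.23.22 john",
            "FRI 23:22:12 158.129.234.23 kate",
            "LL",
            "DDI 23:11:33 192.168.11.10 frame");
        List<String> out = run(input);
        out.forEach(System.out::println);   // nine records; "LL" is dropped
    }
}
```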

