Basic MapReduce Program Skeleton

/***************************************************
 * MapReduce Basic Template
 * Author: jokes000
 * Date: 2011-12-14
 * Version: 1.0.0
 **************************************************/
import java.io.IOException;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MapReduceTemplate extends Configured implements Tool {

    public static class MapClass extends Mapper<Object, Object, Text, Text> {
        // Map method: fill in per-record logic here
        @Override
        public void map(Object key, Object value, Context context)
                throws IOException, InterruptedException {
        }
    }

    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        // Reduce method: fill in per-key aggregation logic here
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
        }
    }

    // run method: the driver that configures and launches the job
    @Override
    public int run(String[] args) throws Exception {
        // Create and run the job; getConf() carries options parsed from the command line
        Job job = new Job(getConf());
        job.setJarByClass(MapReduceTemplate.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MapClass.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);   // or your own InputFormat
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // Delegate to ToolRunner so generic command-line options are parsed
        int exitCode = ToolRunner.run(new MapReduceTemplate(), args);
        System.exit(exitCode);
    }
}

The Mapper and Reducer are cloned to and executed on many nodes of the cluster while the job runs, whereas the rest of the job class executes on the client.
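For reference, here is a minimal sketch of what concrete Mapper and Reducer classes might look like, using a word-count style example. The class names WordCountMap and WordCountReduce are illustrative and not part of the original template; the driver would additionally need org.apache.hadoop.io.IntWritable imported and the output key/value classes set to match.

    public static class WordCountMap extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every token in the input line
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class WordCountReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts emitted by the mappers for this word
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }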


With the Mapper and Reducer stripped out, the program skeleton is:

/***************************************************
 * MapReduce Basic Template
 * Author: jokes000
 * Date: 2011-12-14
 * Version: 1.0.0
 **************************************************/
public class MapReduceTemplate extends Configured implements Tool {

    // run method: the driver that configures and launches the job
    @Override
    public int run(String[] args) throws Exception {
        // Create and run the job
        Job job = new Job(getConf());
        job.setJarByClass(MapReduceTemplate.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MapClass.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);   // or your own InputFormat
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // Delegate to ToolRunner so generic command-line options are parsed
        System.exit(ToolRunner.run(new MapReduceTemplate(), args));
    }
}
The core of this skeleton is the run() method, known as the program's driver. The driver instantiates a Job object, configures it, and launches it.

The Job class communicates with the JobTracker so that the MapReduce job is executed on the cluster.


The basic steps in run() are:

1. Create a new Job instance;

2. Set the main class for the job (setJarByClass);

3. Set the program's input and output paths;

4. Set the Mapper and Reducer classes;

5. Set the InputFormat and OutputFormat, and the output key and value classes;

6. Run the job. (A sketch of these steps using the newer Job.getInstance API follows this list.)
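As an illustration of how the steps map to code, the same driver can be written with the Job.getInstance factory method available in newer Hadoop releases. This is an alternative sketch, not the original template, and the job name string is arbitrary:

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "MapReduceTemplate"); // 1. create the job
        job.setJarByClass(MapReduceTemplate.class);                // 2. main class for jar lookup
        FileInputFormat.addInputPath(job, new Path(args[0]));      // 3. input path
        FileOutputFormat.setOutputPath(job, new Path(args[1]));    //    output path
        job.setMapperClass(MapClass.class);                        // 4. Mapper and Reducer
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);            // 5. Input/OutputFormat and
        job.setOutputFormatClass(TextOutputFormat.class);          //    output key/value classes
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        return job.waitForCompletion(true) ? 0 : 1;                // 6. run the job
    }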


A job has many configurable parameters. When a user launches a Hadoop job from the command line, they may want to pass extra arguments that change the job's configuration.

Hadoop therefore provides the Configured class and the Tool interface to support recognizing these arguments (the class that actually parses them is GenericOptionsParser).
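To make the mechanism concrete, here is a rough sketch of using GenericOptionsParser directly; this is what ToolRunner does for you when the driver implements Tool. It assumes org.apache.hadoop.conf.Configuration and org.apache.hadoop.util.GenericOptionsParser are imported, and the variable names are illustrative:

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // GenericOptionsParser pulls out the generic options (-conf, -D, -fs, -jt,
        // -files, -libjars, -archives), applies them to conf, and returns the rest.
        String[] remainingArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        // remainingArgs now holds only the application arguments,
        // e.g. the input and output paths.
    }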

For example:

Standard run command: bin/hadoop jar MyJob.jar MyJob input/cite75_99.txt output

But suppose we only want to see the Mapper's output (for debugging); in that case we can add an extra argument:

bin/hadoop jar MyJob.jar MyJob -D mapred.reduce.tasks=0 input/cite75_99.txt output

GenericOptionsParser supports the following options (a sketch showing how a -D property can be read from inside a job follows the list):

-conf <configuration file>         specify an application configuration file

-D <property=value>                use value for given property

-fs <local|namenode:port>      specify a namenode

-jt <local|jobtracker:port>         specify a job tracker

-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster

-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.

-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.
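As an example of how such an option reaches the job, a property set with -D can be read from the Configuration inside the Mapper (or the driver). This is only a sketch, and the property name my.sample.property is hypothetical:

    @Override
    public void map(Object key, Object value, Context context)
            throws IOException, InterruptedException {
        // Read a property set on the command line, e.g. -D my.sample.property=foo
        String prop = context.getConfiguration().get("my.sample.property", "default-value");
        // ... use prop in the map logic ...
    }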


