Writing a Makefile for Hadoop MapReduce Programs


I recently needed to integrate a Hadoop-based MapReduce program into a large framework written in C/C++, which meant the MapReduce application had to be compiled and packaged automatically as part of the make build. This post uses the simple WordCount1 program as an example to walk through the details. Note that the Hadoop version here is 2.4.0.

The source consists of two files: WordCount1.java implements the word-counting logic itself, and CounterThread.java simply tallies and prints the number of input lines processed so far. Both are listed in Appendix 1. The key to writing the Makefile is getting all of the jars that Hadoop ships onto the classpath. Many articles online roll their own script that gathers every .jar file under the Hadoop directory into a single location before compiling, which is far more work than necessary. There are simpler approaches as well, but they target fairly old Hadoop releases such as 0.20.


In fact, Hadoop provides a command, hadoop classpath, that prints a classpath containing all of its jars. So compiling is just javac -classpath "`hadoop classpath`" *.java, and jar -cvf then packages the resulting class files.
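As a minimal sketch, the two manual steps look like this (assuming the same layout the Makefile below uses: sources under src/mypackage/ and compiled classes in bin/):

# compile against every jar Hadoop ships; `hadoop classpath` expands to the full jar list
mkdir -p bin
javac -classpath "`hadoop classpath`" -d bin src/mypackage/*.java
# package the compiled classes into a jar that `hadoop jar` can run
jar -cvf WordCount.jar -C bin ./

The Makefile that automates these steps: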

SRC_DIR = src/mypackage/*.java
CLASS_DIR = bin
TARGET_JAR = WordCount

all: $(TARGET_JAR)

$(TARGET_JAR): $(SRC_DIR)
	mkdir -p $(CLASS_DIR)
	#javac -classpath `$(HADOOP) classpath` -d $(CLASS_DIR) $(SRC_DIR)
	javac -classpath "`hadoop classpath`" src/mypackage/*.java -d $(CLASS_DIR) -Xlint
	jar -cvf $(TARGET_JAR).jar -C $(CLASS_DIR) ./

clean:
	rm -rf $(CLASS_DIR) *.jar

Run make:

lichao@ubuntu:WordCount1$ make
mkdir -p bin
javac -classpath "`hadoop classpath`" src/mypackage/*.java -d bin -Xlint
warning: [path] bad path element "/home/lichao/Software/hadoop/hadoop-src/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/share/hadoop/common/lib/jaxb-api.jar": no such file or directory
warning: [path] bad path element "/home/lichao/Software/hadoop/hadoop-src/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/share/hadoop/common/lib/activation.jar": no such file or directory
warning: [path] bad path element "/home/lichao/Software/hadoop/hadoop-src/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/share/hadoop/common/lib/jsr173_1.0_api.jar": no such file or directory
warning: [path] bad path element "/home/lichao/Software/hadoop/hadoop-src/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/share/hadoop/common/lib/jaxb1-impl.jar": no such file or directory
warning: [path] bad path element "/home/lichao/Software/hadoop/hadoop-src/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb-api.jar": no such file or directory
warning: [path] bad path element "/home/lichao/Software/hadoop/hadoop-src/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/share/hadoop/yarn/lib/activation.jar": no such file or directory
warning: [path] bad path element "/home/lichao/Software/hadoop/hadoop-src/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/share/hadoop/yarn/lib/jsr173_1.0_api.jar": no such file or directory
warning: [path] bad path element "/home/lichao/Software/hadoop/hadoop-src/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb1-impl.jar": no such file or directory
warning: [path] bad path element "/home/lichao/Software/hadoop/hadoop-src/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/contrib/capacity-scheduler/*.jar": no such file or directory
src/mypackage/WordCount1.java:61: warning: [deprecation] Job(Configuration,String) in Job has been deprecated
Job job = new Job(conf, "WordCount1");                  // create a new job
          ^
10 warnings
jar -cvf WordCount.jar -C bin ./
added manifest
adding: mypackage/(in = 0) (out= 0)(stored 0%)
adding: mypackage/WordCount1.class(in = 1970) (out= 1037)(deflated 47%)
adding: mypackage/CounterThread.class(in = 1760) (out= 914)(deflated 48%)
adding: mypackage/WordCount1$IntSumReducer.class(in = 1762) (out= 749)(deflated 57%)
adding: mypackage/WordCount1$TokenizerMapper.class(in = 1759) (out= 762)(deflated 56%)
adding: log4j.properties(in = 476) (out= 172)(deflated 63%)
There are warnings, but they do not affect the result. With the build done, let's run a quick test.

First, generate some test data: while true; do seq 1 100000 >> tmpfile; done; press Ctrl+C once the file is big enough.
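If you would rather not kill the loop by hand, a bounded version produces a similar file (a sketch; the 200 repetitions, roughly 20 million lines, are an arbitrary choice):

# write 200 copies of the numbers 1..100000 into tmpfile
for i in `seq 1 200`; do seq 1 100000; done > tmpfile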

Then copy the data into HDFS: hadoop fs -put tmpfile /data/

Next, run the MapReduce program: hadoop jar WordCount.jar mypackage/WordCount1 /data/tmpfile /output2
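When the job completes, the word counts can be inspected straight from HDFS (a quick check, assuming the default output file name part-r-00000 for the single reducer):

hadoop fs -cat /output2/part-r-00000 | head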

The run looks like this:

14/07/15 13:26:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/07/15 13:26:03 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
14/07/15 13:26:05 INFO input.FileInputFormat: Total input paths to process : 1
14/07/15 13:26:05 INFO mapreduce.JobSubmitter: number of splits:6
14/07/15 13:26:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1405397597558_0003
14/07/15 13:26:06 INFO impl.YarnClientImpl: Submitted application application_1405397597558_0003
14/07/15 13:26:06 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1405397597558_0003/
14/07/15 13:26:06 INFO mapreduce.Job: Running job: job_1405397597558_0003
14/07/15 13:26:20 INFO mapreduce.Job: Job job_1405397597558_0003 running in uber mode : false
14/07/15 13:26:20 INFO mapreduce.Job:  map 0% reduce 0%
14/07/15 13:26:34 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
Input lines: 0
14/07/15 13:26:48 INFO mapreduce.Job:  map 2% reduce 0%
Input lines: 3138474
14/07/15 13:26:51 INFO mapreduce.Job:  map 5% reduce 0%
14/07/15 13:26:54 INFO mapreduce.Job:  map 6% reduce 0%
14/07/15 13:26:55 INFO mapreduce.Job:  map 8% reduce 0%
14/07/15 13:26:57 INFO mapreduce.Job:  map 9% reduce 0%
14/07/15 13:26:58 INFO mapreduce.Job:  map 11% reduce 0%
14/07/15 13:27:00 INFO mapreduce.Job:  map 12% reduce 0%
14/07/15 13:27:01 INFO mapreduce.Job:  map 13% reduce 0%
Input lines: 23383595
14/07/15 13:27:05 INFO mapreduce.Job:  map 14% reduce 0%
Input lines: 23383595
14/07/15 13:27:23 INFO mapreduce.Job:  map 15% reduce 0%
14/07/15 13:27:27 INFO mapreduce.Job:  map 16% reduce 0%
14/07/15 13:27:28 INFO mapreduce.Job:  map 18% reduce 0%
14/07/15 13:27:30 INFO mapreduce.Job:  map 19% reduce 0%
14/07/15 13:27:31 INFO mapreduce.Job:  map 21% reduce 0%
14/07/15 13:27:34 INFO mapreduce.Job:  map 24% reduce 0%
Input lines: 38430301
14/07/15 13:27:37 INFO mapreduce.Job:  map 25% reduce 0%
14/07/15 13:27:40 INFO mapreduce.Job:  map 26% reduce 0%
Input lines: 42826322
14/07/15 13:27:57 INFO mapreduce.Job:  map 27% reduce 0%
14/07/15 13:28:00 INFO mapreduce.Job:  map 29% reduce 0%
14/07/15 13:28:02 INFO mapreduce.Job:  map 30% reduce 0%
14/07/15 13:28:03 INFO mapreduce.Job:  map 32% reduce 0%
Input lines: 54513531
14/07/15 13:28:05 INFO mapreduce.Job:  map 33% reduce 0%
14/07/15 13:28:06 INFO mapreduce.Job:  map 34% reduce 0%
14/07/15 13:28:08 INFO mapreduce.Job:  map 35% reduce 0%
14/07/15 13:28:09 INFO mapreduce.Job:  map 36% reduce 0%
Input lines: 60959081
14/07/15 13:28:22 INFO mapreduce.Job:  map 42% reduce 0%
14/07/15 13:28:30 INFO mapreduce.Job:  map 43% reduce 0%
14/07/15 13:28:31 INFO mapreduce.Job:  map 44% reduce 0%
14/07/15 13:28:34 INFO mapreduce.Job:  map 45% reduce 0%
14/07/15 13:28:35 INFO mapreduce.Job:  map 46% reduce 0%
Input lines: 69936159
14/07/15 13:28:37 INFO mapreduce.Job:  map 47% reduce 0%
14/07/15 13:28:38 INFO mapreduce.Job:  map 48% reduce 0%
14/07/15 13:28:41 INFO mapreduce.Job:  map 49% reduce 0%
14/07/15 13:28:44 INFO mapreduce.Job:  map 50% reduce 0%
Input lines: 77160461
14/07/15 13:29:01 INFO mapreduce.Job:  map 51% reduce 0%
14/07/15 13:29:04 INFO mapreduce.Job:  map 52% reduce 0%
14/07/15 13:29:05 INFO mapreduce.Job:  map 53% reduce 0%
Input lines: 83000373
14/07/15 13:29:07 INFO mapreduce.Job:  map 54% reduce 0%
14/07/15 13:29:09 INFO mapreduce.Job:  map 55% reduce 0%
14/07/15 13:29:10 INFO mapreduce.Job:  map 56% reduce 0%
14/07/15 13:29:13 INFO mapreduce.Job:  map 57% reduce 0%
14/07/15 13:29:16 INFO mapreduce.Job:  map 58% reduce 0%
Input lines: 93361766
14/07/15 13:29:32 INFO mapreduce.Job:  map 59% reduce 0%
Input lines: 98194696
14/07/15 13:29:35 INFO mapreduce.Job:  map 60% reduce 0%
14/07/15 13:29:37 INFO mapreduce.Job:  map 61% reduce 0%
14/07/15 13:29:38 INFO mapreduce.Job:  map 62% reduce 0%
14/07/15 13:29:40 INFO mapreduce.Job:  map 63% reduce 0%
14/07/15 13:29:41 INFO mapreduce.Job:  map 64% reduce 0%
14/07/15 13:29:44 INFO mapreduce.Job:  map 65% reduce 0%
14/07/15 13:29:48 INFO mapreduce.Job:  map 66% reduce 0%
Input lines: 109562184
14/07/15 13:30:04 INFO mapreduce.Job:  map 67% reduce 0%
Input lines: 113362818
14/07/15 13:30:06 INFO mapreduce.Job:  map 68% reduce 0%
14/07/15 13:30:08 INFO mapreduce.Job:  map 69% reduce 0%
14/07/15 13:30:10 INFO mapreduce.Job:  map 70% reduce 0%
14/07/15 13:30:12 INFO mapreduce.Job:  map 71% reduce 0%
14/07/15 13:30:15 INFO mapreduce.Job:  map 72% reduce 0%
Input lines: 123074119
14/07/15 13:30:32 INFO mapreduce.Job:  map 76% reduce 0%
14/07/15 13:30:33 INFO mapreduce.Job:  map 80% reduce 0%
14/07/15 13:30:34 INFO mapreduce.Job:  map 83% reduce 0%
14/07/15 13:30:35 INFO mapreduce.Job:  map 84% reduce 0%
Input lines: 123074119
14/07/15 13:30:37 INFO mapreduce.Job:  map 89% reduce 0%
14/07/15 13:30:38 INFO mapreduce.Job:  map 92% reduce 0%
14/07/15 13:30:39 INFO mapreduce.Job:  map 95% reduce 0%
14/07/15 13:30:40 INFO mapreduce.Job:  map 100% reduce 0%
Input lines: 123074119
14/07/15 13:30:53 INFO mapreduce.Job:  map 100% reduce 100%
14/07/15 13:30:53 INFO mapreduce.Job: Job job_1405397597558_0003 completed successfully
14/07/15 13:30:53 INFO mapreduce.Job: Counters: 50
	File System Counters
		FILE: Number of bytes read=58256119
		FILE: Number of bytes written=66039749
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=724520133
		HDFS: Number of bytes written=1088895
		HDFS: Number of read operations=21
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Killed map tasks=2
		Launched map tasks=8
		Launched reduce tasks=1
		Data-local map tasks=8
		Total time spent by all maps in occupied slots (ms)=1528715
		Total time spent by all reduces in occupied slots (ms)=17508
		Total time spent by all map tasks (ms)=1528715
		Total time spent by all reduce tasks (ms)=17508
		Total vcore-seconds taken by all map tasks=1528715
		Total vcore-seconds taken by all reduce tasks=17508
		Total megabyte-seconds taken by all map tasks=1565404160
		Total megabyte-seconds taken by all reduce tasks=17928192
	Map-Reduce Framework
		Map input records=123074119
		Map output records=123074119
		Map output bytes=1216795535
		Map output materialized bytes=7133406
		Input split bytes=594
		Combine input records=127374119
		Combine output records=4900000
		Reduce input groups=100000
		Reduce shuffle bytes=7133406
		Reduce input records=600000
		Reduce output records=100000
		Spilled Records=5500000
		Shuffled Maps =6
		Failed Shuffles=0
		Merged Map outputs=6
		GC time elapsed (ms)=39761
		CPU time spent (ms)=1397060
		Physical memory (bytes) snapshot=1797943296
		Virtual memory (bytes) snapshot=5082316800
		Total committed heap usage (bytes)=1398800384
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=724519539
	File Output Format Counters
		Bytes Written=1088895


Appendix 1: source code for WordCount1.java and CounterThread.java

// WordCount1.java
package mypackage;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount1 {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1); // constant 1, the value emitted for every word
        private Text word = new Text();                            // holds the current word

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString()); // split the input line into whitespace-separated tokens
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken()); // store the next token in word
                context.write(word, one);  // emit the (word, 1) pair
            }
            //System.out.println("read lines:" + context.getCounter("org.apache.hadoop.mapred.Task$Counter", "MAP_INPUT_RECORDS").getValue());
            //System.out.println("Input lines: " + context.getCounters().findCounter("org.apache.hadoop.mapred.Task$Counter", "MAP_INPUT_RECORDS").getValue());
            //System.out.println("Input lines: " + context.getCounters().findCounter("", "MAP_INPUT_RECORDS").getValue());
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable(); // holds the total count for each word

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0; // accumulator, starts at 0
            for (IntWritable val : values) {
                sum += val.get(); // add up all the values for this key
            }
            result.set(sum);            // store the sum in result
            context.write(key, result); // emit the (word, count) pair
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        //String[] newArgs = new String[]{"hdfs://localhost:9000/data/tmpfile", "hdfs://localhost:9000/data/wc_output"};
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "WordCount1");      // create a new job
        job.setJarByClass(WordCount1.class);
        job.setMapperClass(TokenizerMapper.class);  // set the mapper class
        job.setCombinerClass(IntSumReducer.class);  // set the combiner class
        job.setReducerClass(IntSumReducer.class);   // set the reducer class
        job.setOutputKeyClass(Text.class);          // output key type
        job.setOutputValueClass(IntWritable.class); // output value type
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));   // input path (from the command line)
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1])); // output path (from the command line)
        CounterThread ct = new CounterThread(job); // background thread that prints the input-line counter
        ct.start();
        job.waitForCompletion(true);
        System.exit(0);
        //System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
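Incidentally, the deprecation warning emitted during compilation comes from the Job(Configuration, String) constructor; in Hadoop 2.x it can be replaced with the factory method Job.getInstance(conf, "WordCount1"), which I believe behaves identically here.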

// CounterThread.java
package mypackage;

import java.io.IOException;

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobStatus;

public class CounterThread extends Thread {

    public CounterThread(Job job) {
        _job = job;
    }

    public void run() {
        while (true) {
            try {
                Thread.sleep(1000 * 5); // poll every 5 seconds
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }
            try {
                if (_job.getStatus().getState() == JobStatus.State.RUNNING) { // only report while the job is running
                    System.out.println("Input lines: " + _job.getCounters().findCounter("org.apache.hadoop.mapred.Task$Counter", "MAP_INPUT_RECORDS").getValue());
                }
            } catch (IOException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Job _job;
}
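Likewise, the run log warns that the counter group org.apache.hadoop.mapred.Task$Counter is deprecated. Following the warning's own suggestion, the lookup above could instead use the enum form, e.g. _job.getCounters().findCounter(org.apache.hadoop.mapreduce.TaskCounter.MAP_INPUT_RECORDS).getValue(); I have not re-run the job with that variant, so treat it as a sketch.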

