Custom OutputFormat -- Hadoop
Source: Internet | Editor: 程序博客网 | Date: 2024/05/16 10:53
1. What the OutputFormat class does:
1) It validates the job's output specification (for example, checkOutputSpecs() typically fails the job if the output directory already exists).
2) It provides an instance of a RecordWriter implementation that writes records to the output files.
The RecordWriter converts Java instances of the output key and value types into raw bytes.
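Before looking at the Hadoop classes, here is a minimal, Hadoop-free sketch of that second responsibility: a record writer is essentially a function from typed (key, value) pairs to bytes on an output stream. The tab-separated layout and the sample record are made up for illustration; they are not part of the article's job.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Minimal sketch (no Hadoop dependency): a "record writer" turns a typed
// (key, value) pair into raw bytes on an output stream.
public class RecordWriterSketch {
    public static byte[] writeRecord(long key, String value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        // writeBytes emits the low byte of each char (raw ASCII bytes),
        // the same call the XMLRecordWriter below relies on.
        out.writeBytes(key + "\t" + value + "\n");
        out.close();
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = writeRecord(0L, "1988,record");
        System.out.print(new String(bytes));
    }
}
```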
2. This article shows how to convert a text file into an XML file.
Sample output:
<recs>
  <rec>
    <key>0</key>
    <year>1988</year>
  </rec>
  <rec>
    <key>98</key>
    <year>1998</year>
  </rec>
</recs>
3. Driver (main program) code
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class OutputFormatTest extends Configured implements Tool {

    public static class TextToXMLConversionMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // The key is the byte offset of the line; only non-trivial lines
            // are forwarded to the output.
            if (value.toString().contains(" "))
                context.write(key, value);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf());
        job.setJarByClass(OutputFormatTest.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(XMLOutputFormat.class); // the custom OutputFormat below
//      job.setMapOutputKeyClass(LongWritable.class);
//      job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(TextToXMLConversionMapper.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
        return job.isSuccessful() ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new OutputFormatTest(), args);
        System.exit(res);
    }
}
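The driver is submitted like any other MapReduce job. A sketch of the invocation follows; the jar name and the HDFS paths are assumptions, not taken from the article, and since the driver uses ToolRunner, standard -D options may also be passed before the positional arguments.

```shell
# Build the job jar first (e.g. with Maven), then submit it.
# args[0] = input directory on HDFS, args[1] = output directory (must not exist yet).
hadoop jar outputformat-test.jar OutputFormatTest /user/hadoop/input /user/hadoop/xml-out

# Inspect the generated XML part file afterwards (file name depends on the task id):
hdfs dfs -cat /user/hadoop/xml-out/part-m-00000data.xml
```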
4. The custom OutputFormat class: XMLOutputFormat
package com.input.output;

import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class XMLOutputFormat extends FileOutputFormat<LongWritable, Text> {

    protected static class XMLRecordWriter extends RecordWriter<LongWritable, Text> {
        private final DataOutputStream out;

        public XMLRecordWriter(DataOutputStream out) throws IOException {
            this.out = out;
            out.writeBytes("<recs>\n"); // opening root element, written once per file
        }

        private void writeTag(String tag, String value) throws IOException {
            out.writeBytes("<" + tag + ">" + value + "</" + tag + ">\n");
        }

        @Override
        public void write(LongWritable key, Text value)
                throws IOException, InterruptedException {
            out.writeBytes("<rec>");
            this.writeTag("key", Long.toString(key.get()));
            // The value is expected to be comma-separated; the first field is the year.
            String[] contents = value.toString().split(",");
            this.writeTag("year", contents[0]);
            out.writeBytes("</rec>");
        }

        @Override
        public void close(TaskAttemptContext context)
                throws IOException, InterruptedException {
            try {
                out.writeBytes("</recs>\n"); // closing root element
            } finally {
                out.close();
            }
        }
    }

    @Override
    public RecordWriter<LongWritable, Text> getRecordWriter(TaskAttemptContext job)
            throws IOException, InterruptedException {
        String extension = "data.xml";
        Path file = getDefaultWorkFile(job, extension);
        FileSystem fs = file.getFileSystem(job.getConfiguration());
        FSDataOutputStream fileOut = fs.create(file, false);
        return new XMLRecordWriter(fileOut);
    }
}
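The writer's XML framing can be sanity-checked without a cluster by replaying the same sequence of writeBytes calls against an in-memory stream. This is a standalone sketch, not part of the job; the record contents are made up to match the sample output above.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Replays XMLRecordWriter's writeBytes sequence against an in-memory stream,
// so the produced XML framing can be inspected without HDFS.
public class XMLRecordWriterCheck {
    public static String render(long[] keys, String[] csvValues) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeBytes("<recs>\n");                              // constructor
        for (int i = 0; i < keys.length; i++) {                  // one write() per record
            out.writeBytes("<rec>");
            out.writeBytes("<key>" + keys[i] + "</key>\n");
            out.writeBytes("<year>" + csvValues[i].split(",")[0] + "</year>\n");
            out.writeBytes("</rec>");
        }
        out.writeBytes("</recs>\n");                             // close()
        out.close();
        return buf.toString();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical records matching the sample output above.
        System.out.print(render(new long[]{0L, 98L},
                                new String[]{"1988,x", "1998,y"}));
    }
}
```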
5. The output is produced as follows:
1) The Hadoop framework calls the getRecordWriter() method on the XMLOutputFormat instance (an implementation of OutputFormat) to obtain a reference to an XMLRecordWriter instance.
2) The RecordWriter configures the output parameters, including the file name prefix and extension. The extension is supplied in the code listing ("data.xml"). The default file name prefix is part; this value, combined with the task type and task number, yields base names such as part-m-00000, to which the extension is appended.
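As a small illustration of the naming step (assuming the stock FileOutputFormat naming scheme, where the extension string is appended verbatim to the unique task name), the extension chosen in getRecordWriter() ends up directly concatenated:

```java
// Hypothetical reconstruction of the file name produced for map task 0,
// assuming FileOutputFormat's default "part-<type>-<number>" base name.
public class NameDemo {
    public static void main(String[] args) {
        String baseName = "part-m-00000";      // default prefix + task type + task number
        String extension = "data.xml";         // extension chosen in getRecordWriter()
        String fileName = baseName + extension; // appended verbatim, no separator added
        System.out.println(fileName);
    }
}
```

Note that because no separator is inserted, an extension like "-data.xml" would be needed to get a dash between the task number and the suffix.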