Adapting MapReduce to Read the ORC File Format


This article is reposted from my original blog: http://www.javali.org/document/mapreduce_read_orcfile_solution.html

The Optimized Row Columnar (ORC) file format provides a highly efficient way to store Hive data. It was designed to overcome limitations of the other Hive file formats. Using ORC files improves performance when Hive is reading, writing, and processing data. Compared with the RCFile format, for example, the ORC file format has many advantages.

Introduction to OrcFile

As the official documentation says, ORC overcomes the drawbacks of the other file formats when reading and writing data in Hive. However, so far OrcFile does not expose a public read/write API, so there is no direct way to read it from a MapReduce job. See https://issues.apache.org/jira/browse/HIVE-5728

The link above shows that other people have hit the same problem: if Hive can read the format, why can't MapReduce? Download the Hive 0.13 source; after unpacking it, the ORC-related source code is under ql/src/java/org/apache/hadoop/hive/ql/io/orc.

To understand the structure of OrcFile, just read the source of OrcStruct; the catch is that its constructor is not public. To be precise, Hive does provide the API, it simply is not exposed to the outside, as shown below:

OrcStruct(int children) {
    fields = new Object[children];
}

Object getFieldValue(int fieldIndex) {
    return fields[fieldIndex];
}

void setFieldValue(int fieldIndex, Object value) {
    fields[fieldIndex] = value;
}

This means that any code that manipulates OrcStruct objects must live in the same package, org.apache.hadoop.hive.ql.io.orc, in order to have access to them.
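One way to work around this without patching Hive is to drop a small bridge class into that package and re-export the package-private calls from there, so the rest of your code can stay in its own package. A minimal sketch, assuming the Hive 0.13 OrcStruct shown above; the class name OrcStructBridge and its method names are my own invention, not part of Hive:

package org.apache.hadoop.hive.ql.io.orc;

// Hypothetical bridge class: it lives in Hive's ORC package, so it can reach the
// package-private OrcStruct constructor and accessors and expose them publicly.
public class OrcStructBridge {

    public static OrcStruct newStruct(int numFields) {
        return new OrcStruct(numFields);
    }

    public static Object getField(OrcStruct struct, int fieldIndex) {
        return struct.getFieldValue(fieldIndex);
    }

    public static void setField(OrcStruct struct, int fieldIndex, Object value) {
        struct.setFieldValue(fieldIndex, value);
    }
}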

Create an OrcStructTest class under that package, copy in the following code, and run it; it gives a rough feel for the structure of an OrcStruct.

// assertEquals comes from JUnit (org.junit.Assert)
public void testStruct() throws Exception {
    OrcStruct st1 = new OrcStruct(4);
    OrcStruct st2 = new OrcStruct(4);
    OrcStruct st3 = new OrcStruct(3);
    st1.setFieldValue(0, "hop");
    st1.setFieldValue(1, "on");
    st1.setFieldValue(2, "pop");
    st1.setFieldValue(3, 42);
    assertEquals(false, st1.equals(null));
    st2.setFieldValue(0, "hop");
    st2.setFieldValue(1, "on");
    st2.setFieldValue(2, "pop");
    st2.setFieldValue(3, 42);
    assertEquals(st1, st2);
    st3.setFieldValue(0, "hop");
    st3.setFieldValue(1, "on");
    st3.setFieldValue(2, "pop");
    assertEquals(false, st1.equals(st3));
    assertEquals(11241, st1.hashCode());
    assertEquals(st1.hashCode(), st2.hashCode());
    assertEquals(11204, st3.hashCode());
    assertEquals("{hop, on, pop, 42}", st1.toString());
    st1.setFieldValue(3, null);
    assertEquals(false, st1.equals(st2));
    assertEquals(false, st2.equals(st1));
    st2.setFieldValue(3, null);
    assertEquals(st1, st2);
}

Testing reading an OrcFile from HDFS

From the OrcInputFormat source you can see that it provides an OrcInputFormat.createReaderFromFile(file, conf, offset, length) interface. Alternatively, you can pull the OrcRecordReader class out of OrcInputFormat, make it public, and construct it directly: OrcRecordReader reader = new OrcRecordReader(reader, conf, offset, length);

OrcFile itself provides the OrcFile.createReader(fs, path) interface.

Putting the two together, you can read an OrcFile directly in an application, as follows:

public static void main(String[] args) throws IOException, InterruptedException {
    String INPUT = "/user/hive/orc_test.orc";
    Configuration conf = new Configuration();
    Path file_in = new Path(INPUT);
    Reader r = OrcFile.createReader(FileSystem.get(URI.create(INPUT), conf), file_in);
    // OrcRecordReader reader = new OrcRecordReader(r, conf, 0, 747);
    // my test.orc file happens to be 747 bytes long; you could write a separate getFileSize method to look it up
    OrcRecordReader reader = (OrcRecordReader) OrcInputFormat.createReaderFromFile(r, conf, 0, 747);
    if (reader != null) {
        System.out.println("========record counts : " + reader.numColumns);
        while (reader.nextKeyValue()) {
            OrcStruct data = reader.getCurrentValue();
            System.out.println("fields: " + data.getNumFields());
            for (int i = 0; i < data.getNumFields(); i++) {
                System.out.println("============" + data.getFieldValue(i));
            }
        }
    }
}
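In the snippet above the file length 747 is hard-coded; as the comment says, you could write a getFileSize method instead and ask HDFS for the length. A minimal sketch of such a helper, intended to sit in the same class as the main method above (the method name is just the one suggested in the comment, not a Hive API):

// Hypothetical helper: look up the length of a file on HDFS so it can be passed
// to createReaderFromFile instead of a hard-coded value.
public static long getFileSize(Configuration conf, String path) throws IOException {
    FileSystem fs = FileSystem.get(URI.create(path), conf);
    return fs.getFileStatus(new Path(path)).getLen();
}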

Reading OrcFile in MapReduce

The source provides OrcInputFormat, but it implements the old-API InputFormat interface rather than extending the new-API InputFormat class. It does, however, provide another class, OrcNewInputFormat, which can be set directly as the job's InputFormat.

public static class OrcReaderMap extends Mapper<Writable, OrcStruct, Writable, OrcStruct> {

    public void setup(Context context) throws IOException, InterruptedException {
        System.err.println("++++++++++++++++++++++set up+++++++++++++++++++++++");
        super.setup(context);
    }

    public void map(Writable key, OrcStruct value, Context ctx)
            throws IOException, InterruptedException {
        System.err.println("++++++++++++++++++++++mapper...+++++++++++++++++++++++");
        try {
            ctx.write(NullWritable.get(), value);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

/**
 * Entry point for the Map/Reduce job: configures the job parameters, including
 * the map/reduce classes, the input and output paths, reduce tasks, and so on.
 */
public static void main(String[] args) throws IOException,
        InterruptedException, ClassNotFoundException {
    Configuration conf = new Configuration();
    conf.setBoolean("mapreduce.job.user.classpath.first", true);
    Job job = new Job(conf, "OrcReader");
    job.setJarByClass(OrcReader.class);
    job.setInputFormatClass(OrcNewInputFormat.class);
    job.setOutputFormatClass(OrcNewOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(INPUT));
    FileSystem fs = FileSystem.get(conf);
    if (fs.exists(new Path(OUTPUT))) {
        fs.delete(new Path(OUTPUT), true);
    }
    FileOutputFormat.setOutputPath(job, new Path(OUTPUT));
    job.setMapOutputKeyClass(NullWritable.class);
    job.setMapOutputValueClass(OrcStruct.class);
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(OrcStruct.class);
    job.setMapperClass(OrcReaderMap.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
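The mapper above forwards each OrcStruct row to the ORC output unchanged. If you would rather work with individual columns, they can be pulled out with getFieldValue, just as in the standalone reader, but such a mapper must then live under org.apache.hadoop.hive.ql.io.orc (or go through a bridge like the one sketched earlier), since getFieldValue is package-private. A minimal sketch, assuming column 0 of my test table is a string; the class name, column index, and output types (org.apache.hadoop.io.Text / IntWritable) are illustrative, and the job's map output key/value classes would have to be changed to match:

public static class OrcColumnMap extends Mapper<Writable, OrcStruct, Text, IntWritable> {

    private final IntWritable one = new IntWritable(1);

    @Override
    public void map(Writable key, OrcStruct value, Context ctx)
            throws IOException, InterruptedException {
        // Assumption: column 0 of the table is a string-like field.
        Object col0 = value.getFieldValue(0);
        if (col0 != null) {
            ctx.write(new Text(col0.toString()), one);
        }
    }
}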
