Hadoop Source Code Analysis 3: Serialization
1. Java default serialization
import java.io.Serializable;

public class Block1 implements Serializable {

    private static final long serialVersionUID = 1276464248616673062L;

    private long blockId;
    private long numBytes;
    private long generationStamp;

    public Block1() {
    }

    public Block1(final long blkid, final long len, final long genStamp) {
        this.blockId = blkid;
        this.numBytes = len;
        this.generationStamp = genStamp;
    }

    @Override
    public String toString() {
        return "Block1 [blockId=" + blockId + ", numBytes=" + numBytes
                + ", generationStamp=" + generationStamp + "]";
    }

    // setters and getters ...
}
2. Hadoop serialization
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableFactories;
import org.apache.hadoop.io.WritableFactory;

public class Block2 implements Writable {

    static { // register a ctor
        WritableFactories.setFactory(Block2.class, new WritableFactory() {
            public Writable newInstance() {
                return new Block2();
            }
        });
    }

    private long blockId;
    private long numBytes;
    private long generationStamp;

    public Block2() {
    }

    public Block2(final long blkid, final long len, final long genStamp) {
        this.blockId = blkid;
        this.numBytes = len;
        this.generationStamp = genStamp;
    }

    public void set(long blkid, long len, long genStamp) {
        this.blockId = blkid;
        this.numBytes = len;
        this.generationStamp = genStamp;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(blockId);
        out.writeLong(numBytes);
        out.writeLong(generationStamp);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.blockId = in.readLong();
        this.numBytes = in.readLong();
        this.generationStamp = in.readLong();
        if (numBytes < 0) {
            throw new IOException("Unexpected block size: " + numBytes);
        }
    }

    // setters and getters ...

    @Override
    public String toString() {
        return "Block2 [blockId=" + blockId + ", numBytes=" + numBytes
                + ", generationStamp=" + generationStamp + "]";
    }
}
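The static block above registers a factory with WritableFactories, so Hadoop code (for example during deserialization of a generic Writable) can obtain a fresh, empty Block2 from the class object alone. A minimal sketch of that use; the class name FactoryExample is my own and not part of the original:

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableFactories;

public class FactoryExample {
    public static void main(String[] args) {
        // WritableFactories hands back a new, empty Block2; the factory
        // registered in Block2's static block is used when available,
        // otherwise it falls back to reflective construction.
        Writable w = WritableFactories.newInstance(Block2.class);
        System.out.println(w);  // Block2 [blockId=0, numBytes=0, generationStamp=0]
    }
}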
3. Comparison
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class BlockMain {

    public static void main(String[] args) throws IOException {
        // Java default serialization: write Block1 with ObjectOutputStream.
        Block1 block1 = new Block1(7806259420524417791L, 39447755L, 56736651L);
        ByteArrayOutputStream out1 = new ByteArrayOutputStream();
        ObjectOutputStream objOut1 = new ObjectOutputStream(out1);
        objOut1.writeObject(block1);
        objOut1.close();
        System.out.println("writeObject (block1) with " + out1.size() + " bytes:");
        SerializationExample.print16(out1.toByteArray(), out1.size());

        System.out.println();

        // Hadoop serialization: write Block2 through its Writable.write() method.
        Block2 block2 = new Block2(7806259420524417791L, 39447755L, 56736651L);
        ByteArrayOutputStream out2 = new ByteArrayOutputStream();
        ObjectOutputStream objOut2 = new ObjectOutputStream(out2);
        block2.write(objOut2);
        objOut2.close();
        System.out.println("writeObject (block2) with " + out2.size() + " bytes:");
        SerializationExample.print16(out2.toByteArray(), out2.size());
    }
}
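The helper SerializationExample.print16 is referenced but not shown in this excerpt. A minimal hexdump helper along the following lines (my sketch, not the original implementation) would produce the 16-bytes-per-line format shown in the output below:

public class SerializationExample {

    // Hypothetical hexdump helper: 16 bytes per line, hex values on the left,
    // printable ASCII (in two 8-character groups) on the right.
    public static void print16(byte[] bytes, int len) {
        for (int i = 0; i < len; i += 16) {
            StringBuilder hex = new StringBuilder();
            StringBuilder ascii = new StringBuilder();
            for (int j = i; j < i + 16 && j < len; j++) {
                int c = bytes[j] & 0xFF;
                hex.append(String.format("%02X ", c));
                ascii.append(c >= 0x20 && c < 0x7F ? (char) c : '.');
                if (j - i == 7) {
                    ascii.append(' ');
                }
            }
            System.out.printf("%-48s %s%n", hex.toString().trim(), ascii);
        }
    }
}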
Output:
writeObject (block1) with 113 bytes:
AC ED 00 05 73 72 00 1D 6F 72 67 2E 68 61 64 6F    ....sr.. org.hado
6F 70 69 6E 74 65 72 6E 61 6C 2E 73 65 72 2E 42    opintern al.ser.B
6C 6F 63 6B 31 11 B6 E9 78 9B 48 5F 26 02 00 03    lock1... x.H_&...
4A 00 07 62 6C 6F 63 6B 49 64 4A 00 0F 67 65 6E    J..block IdJ..gen
65 72 61 74 69 6F 6E 53 74 61 6D 70 4A 00 08 6E    erationS tampJ..n
75 6D 42 79 74 65 73 78 70 6C 55 67 95 68 E7 92    umBytesx plUg.h..
FF 00 00 00 00 03 61 BB 8B 00 00 00 00 02 59 EC    ......a. ......Y.
CB                                                  .

writeObject (block2) with 30 bytes:
AC ED 00 05 77 18 6C 55 67 95 68 E7 92 FF 00 00    ....w.lU g.h.....
00 00 02 59 EC CB 00 00 00 00 03 61 BB 8B          ...Y.... ...a..
As the dumps show, the Writable encoding is much smaller than the Serializable one: 30 bytes versus 113 bytes, about 26.6%. The Serializable stream must embed a full class descriptor (the class name org.hadoopinternal.ser.Block1, the serialVersionUID, and every field name and type) before the field values, whereas the Writable path writes only the three long values (24 bytes) plus the 4-byte ObjectOutputStream header and a 2-byte block-data marker.
4. Writing and reading objects to and from a file
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class BlockMain {

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        // Java default serialization: write Block1 to d:\1.txt, then read it back.
        File file = new File("d:\\1.txt");
        FileOutputStream fos = new FileOutputStream(file);
        ObjectOutputStream objOut = new ObjectOutputStream(fos);
        Block1 block1 = new Block1(7806259420524417791L, 39447755L, 56736651L);
        objOut.writeObject(block1);
        objOut.close();

        // Open the input stream only after the writer is closed, so the
        // stream header and data have already been flushed to disk.
        FileInputStream fis = new FileInputStream(file);
        ObjectInputStream objIn = new ObjectInputStream(fis);
        Block1 block1Retrive = (Block1) objIn.readObject();
        System.out.println("writeObject (block1) : " + block1Retrive);
        objIn.close();

        System.out.println();

        // Hadoop serialization: write Block2 to d:\2.txt via write()/readFields().
        file = new File("d:\\2.txt");
        fos = new FileOutputStream(file);
        objOut = new ObjectOutputStream(fos);
        Block2 block2 = new Block2(7806259420524417791L, 39447755L, 56736651L);
        block2.write(objOut);
        objOut.close();

        fis = new FileInputStream(file);
        objIn = new ObjectInputStream(fis);
        Block2 block2Retrive = new Block2();
        block2Retrive.readFields(objIn);
        objIn.close();
        System.out.println("writeObject (block2) : " + block2Retrive);
    }
}
Output:
writeObject (block1) : Block1 [blockId=7806259420524417791, numBytes=39447755, generationStamp=56736651]
writeObject (block2) : Block2 [blockId=7806259420524417791, numBytes=39447755, generationStamp=56736651]
File contents:
d:\1.txt  113 bytes
d:\2.txt  30 bytes
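For comparison, inside Hadoop itself a Writable is normally written with a plain DataOutputStream rather than an ObjectOutputStream, which also drops the 6 bytes of stream header and block marker, leaving only the 24 bytes of field data. A minimal sketch; the file name d:\3.txt and the class name WritableFileExample are my own, not from the original:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class WritableFileExample {
    public static void main(String[] args) throws IOException {
        Block2 block2 = new Block2(7806259420524417791L, 39447755L, 56736651L);

        // write() sees a plain DataOutputStream, so only the three long
        // fields are written: the file ends up exactly 24 bytes.
        DataOutputStream out = new DataOutputStream(new FileOutputStream("d:\\3.txt"));
        block2.write(out);
        out.close();

        // Read the fields back with readFields() over a DataInputStream.
        Block2 copy = new Block2();
        DataInputStream in = new DataInputStream(new FileInputStream("d:\\3.txt"));
        copy.readFields(in);
        in.close();

        System.out.println(copy);
    }
}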