Difference between MappedByteBuffer and ByteBuffer.allocateDirect()

Riyad Kalla

2012-3-5 (edited) · Public

Java supports both memory-mapped files (a MappedByteBuffer created via FileChannel.map) and "direct" ByteBuffers created via ByteBuffer.allocateDirect (a block of native memory in the host OS).

Both of these buffer types are basically "direct" buffers into native OS memory space, so what is the difference between the two in the context of file operations?

The difference is that MappedByteBuffers are allocated in the operating system's virtual-memory space: reads and writes through a MappedByteBuffer are serviced at the OS level by the virtual-memory paging logic. A direct ByteBuffer, by contrast, is just a solid slab of native memory (think malloc) that you can use from within Java; the OS treats it as a standard memory allocation.
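In code, the two come from different entry points; as a minimal sketch (file path and buffer sizes are illustrative), note that both report themselves as "direct":

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TwoKindsOfDirect {
    public static void main(String[] args) throws IOException {
        // A plain direct buffer: one malloc-like native allocation, no file behind it.
        ByteBuffer direct = ByteBuffer.allocateDirect(1024 * 1024);
        System.out.println(direct.isDirect()); // true

        // A mapped buffer: backed by a file through the OS virtual-memory system.
        Path path = Files.createTempFile("two-kinds", ".bin");
        Files.write(path, new byte[1024]);
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            System.out.println(mapped.isDirect()); // true -- also a direct buffer
        } finally {
            Files.deleteIfExists(path);
        }
    }
}
```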

If you are running on a machine with a lot of RAM, MappedByteBuffers make it easy to work with files up to 2GB (the largest single mapping Java allows); just memory-map the file and start working on it. You push the responsibility of memory management onto the OS, allowing it to page out portions of the file you don't currently need in order to free up RAM for other processes on the host. All you need to focus on is operating on the file at the offsets you want.
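As a sketch of that memory-mapped workflow (the scratch file, sizes, and offsets here are made up for illustration):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedFileDemo {
    public static void main(String[] args) throws IOException {
        // Create a small scratch file for the demo (stand-in for a real data file).
        Path path = Files.createTempFile("mapped-demo", ".bin");
        Files.write(path, new byte[4096]);

        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map the whole file; the OS pages regions in and out as needed.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, ch.size());

            // Operate directly at arbitrary offsets -- no explicit read() calls.
            map.putInt(1024, 42);
            System.out.println(map.getInt(1024)); // 42
        } finally {
            Files.deleteIfExists(path);
        }
    }
}
```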

If your files grow larger than 2GB, or you need a much smaller and well-defined RAM footprint for file I/O without risking the host OS thrashing as it pages data in and out, you can instead allocate a direct ByteBuffer (e.g. 1MB in size) and fill it with the segments of the larger file you want via calls to FileChannel.read(ByteBuffer, position).

The benefit here is that the OS can perform the file-read operation directly into native memory space without copying the data across the native-to-JVM barrier, allowing the OS to complete that operation as quickly as possible.

Once the buffer is filled, you can pull the data you want from the direct ByteBuffer into the JVM to operate on it easily.
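A minimal sketch of that windowed-read approach; the file contents, the 1MB window size, and the offset are all illustrative, and note that FileChannel.read is not guaranteed to fill the buffer in one call:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class WindowedRead {
    public static void main(String[] args) throws IOException {
        // Scratch file standing in for a much larger data file.
        Path path = Files.createTempFile("window-demo", ".bin");
        byte[] data = new byte[8 * 1024 * 1024];
        data[5 * 1024 * 1024] = 7; // a marker byte deep in the file
        Files.write(path, data);

        // One fixed-size native buffer, reused for every window of the file.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024); // 1MB

        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            long position = 5L * 1024 * 1024; // absolute offset of the segment we want
            buf.clear();
            // The OS reads straight into native memory -- no JVM-heap copy.
            int n = ch.read(buf, position);
            buf.flip();
            System.out.println(n);          // how many bytes landed in this window
            System.out.println(buf.get(0)); // 7 -- first byte of that segment
        } finally {
            Files.deleteIfExists(path);
        }
    }
}
```

In a real loop you would advance `position` by the number of bytes consumed and call read again for the next window.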

BONUS

If the idea of direct memory via ByteBuffers sounds super slick to you, you might look at a strategy of allocating a big slab of direct memory at app startup and then using ByteBuffer.slice() to hand out slices of it to subcomponents for processing.

This will bring you right back to the days of malloc/free inside your Java apps, but if it makes sense for you, it can be a very flexible way to get some big performance wins.
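The slab idea can be sketched in a few lines; the slab and slice sizes here are made up, and a real allocator would also need to track which regions are free:

```java
import java.nio.ByteBuffer;

public class SlabSlices {
    public static void main(String[] args) {
        // One big slab of native memory, allocated once at startup.
        ByteBuffer slab = ByteBuffer.allocateDirect(4 * 1024 * 1024);

        // Hand out a 64KB "allocation" by positioning, limiting, and slicing.
        slab.position(0).limit(64 * 1024);
        ByteBuffer chunkA = slab.slice(); // shares the slab's memory

        slab.position(64 * 1024).limit(128 * 1024);
        ByteBuffer chunkB = slab.slice(); // the next 64KB region

        // Each slice has its own independent position, limit, and capacity...
        System.out.println(chunkA.capacity()); // 65536
        System.out.println(chunkB.capacity()); // 65536

        // ...but writes through a slice are visible in the backing slab.
        chunkA.putInt(0, 123);
        System.out.println(slab.getInt(0)); // 123
    }
}
```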

Happy Coding!