The Read Buffer (Buffer) in Java
Source: Internet · Editor: 程序博客网 · Date: 2024/05/17 22:42
Optimum buffer size is related to a number of things: file system block size, CPU cache size and cache latency.
Most file systems are configured to use block sizes of 4096 or 8192. In theory, if you configure your buffer size so you are reading a few bytes more than the disk block, the operations with the file system can be extremely inefficient (e.g., if you configured your buffer to read 4100 bytes at a time, each read would require 2 block reads by the file system). If the blocks are already in cache, then you wind up paying the price of RAM -> L3/L2 cache latency. If you are unlucky and the blocks are not in cache yet, then you pay the price of the disk -> RAM latency as well.
This is why you see most buffers sized as a power of 2, and generally larger than (or equal to) the disk block size. This means that one of your stream reads could result in multiple disk block reads - but those reads will always use a full block - no wasted reads.
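A minimal sketch of this advice: a sequential read loop that uses a buffer sized as a power of 2 (8192, a multiple of the common 4096-byte block size). The class and helper names here are illustrative, not from any particular library.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BlockAlignedRead {
    // 8192 bytes: a power of 2, and a whole multiple of a 4096-byte FS block,
    // so each stream read maps onto full disk blocks with no wasted partial reads.
    static final int BUF_SIZE = 8192;

    // Reads the stream to the end with a block-aligned buffer and
    // returns the total number of bytes consumed.
    static long countBytes(InputStream in) throws IOException {
        byte[] buf = new byte[BUF_SIZE];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // An in-memory stream stands in for a file here.
        byte[] data = new byte[10000];
        System.out.println(countBytes(new ByteArrayInputStream(data)));
    }
}
```

The same loop works unchanged against a `FileInputStream`; only the buffer size matters for the block-alignment argument above.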
Now, this is offset quite a bit in a typical streaming scenario because the block that is read from disk is going to still be in memory when you hit the next read (we are doing sequential reads here, after all) - so you wind up paying the RAM -> L3/L2 cache latency price on the next read, but not the disk->RAM latency. In terms of order of magnitude, disk->RAM latency is so slow that it pretty much swamps any other latency you might be dealing with.
So, I suspect that if you ran a test with different buffer sizes (I haven't done this myself), you would probably find a big impact of buffer size up to the size of the file system block. Above that, I suspect that things would level out pretty quickly.
There are a ton of conditions and exceptions here - the complexities of the system are actually quite staggering (just getting a handle on L3 -> L2 cache transfers is mind-bogglingly complex, and it changes with every CPU type).
This leads to the 'real world' answer: If your app is like 99% of the apps out there, set the buffer size to 8192 and move on (even better, choose encapsulation over performance and use BufferedInputStream to hide the details). If you are in the 1% of apps that are highly dependent on disk throughput, craft your implementation so you can swap out different disk interaction strategies, and provide the knobs and dials to allow your users to test and optimize (or come up with some self-optimizing system).
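The "encapsulation" option above looks like this: wrap the raw stream in a `BufferedInputStream`, which handles block-sized reads internally (8192 bytes also happens to be the JDK's default buffer size). The helper name is illustrative.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedRead {
    // Wraps any raw stream with an explicit 8192-byte buffer; callers then
    // issue small reads freely while the wrapper does block-sized I/O.
    static BufferedInputStream buffered(InputStream raw) {
        return new BufferedInputStream(raw, 8192);
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello".getBytes();
        try (InputStream in = buffered(new ByteArrayInputStream(data))) {
            // Single-byte reads are now cheap: they hit the wrapper's buffer,
            // not the underlying stream.
            System.out.println((char) in.read());
        }
    }
}
```

In production the `ByteArrayInputStream` would be a `FileInputStream`; the wrapping is identical.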