HDFS storage strategy for nodes with multiple disks


http://hi.baidu.com/thinkdifferent/blog/item/95de0e2416c4da3fc89559b8.html

 

I had never found a definitive reference for how HDFS handles storage on nodes with multiple disks; the only hint was a line on the Hadoop website to the effect that nodes with multiple disks should be managed internally. Today I came across a blog post that pastes the relevant code snippet directly. Reposted below:

from: kzk's blog

To use multiple disks in a Hadoop DataNode, add the directories to dfs.data.dir in hdfs-site.xml as a comma-separated list. The following is an example using four disks.

<property>
  <name>dfs.data.dir</name>
  <value>/disk1,/disk2,/disk3,/disk4</value>
</property>

But how does Hadoop actually use these disks? I found the following code snippet in ./hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java in hadoop-0.20.1.

synchronized FSVolume getNextVolume(long blockSize) throws IOException {
  int startVolume = curVolume;
  while (true) {
    // Try the volumes in round-robin order, starting from the last position used.
    FSVolume volume = volumes[curVolume];
    curVolume = (curVolume + 1) % volumes.length;
    // Use the first volume that has enough free space for the new block.
    if (volume.getAvailable() > blockSize) { return volume; }
    // If we have cycled through every volume without finding space, give up.
    if (curVolume == startVolume) {
      throw new DiskOutOfSpaceException("Insufficient space for an additional block");
    }
  }
}

Each FSVolume represents a single directory listed in dfs.data.dir. This code places blocks across the disks in round-robin fashion, skipping any volume that does not have enough free space for the block.
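To make the behavior concrete, here is a small standalone sketch. It is not Hadoop code: the Volume class, the directory names, and the free-space figures are made up for illustration, but the selection loop mirrors getNextVolume() above and shows how a nearly full disk is skipped.

import java.util.Arrays;
import java.util.List;

// Toy stand-in for FSVolume: just a directory name and an amount of free space.
class Volume {
  final String dir;
  long available;
  Volume(String dir, long available) { this.dir = dir; this.available = available; }
}

public class RoundRobinDemo {
  static int curVolume = 0;

  // Simplified version of the round-robin selection with a capacity check.
  static Volume getNextVolume(List<Volume> volumes, long blockSize) {
    int startVolume = curVolume;
    while (true) {
      Volume v = volumes.get(curVolume);
      curVolume = (curVolume + 1) % volumes.size();
      if (v.available > blockSize) { return v; }
      if (curVolume == startVolume) {
        throw new RuntimeException("Insufficient space for an additional block");
      }
    }
  }

  public static void main(String[] args) {
    long blockSize = 64L * 1024 * 1024; // 64 MB, the classic HDFS block size
    List<Volume> volumes = Arrays.asList(
        new Volume("/disk1", 500L * 1024 * 1024),
        new Volume("/disk2", 10L * 1024 * 1024),   // almost full: will always be skipped
        new Volume("/disk3", 500L * 1024 * 1024),
        new Volume("/disk4", 500L * 1024 * 1024));

    // Place six blocks; /disk2 never receives one because it lacks space.
    for (int i = 0; i < 6; i++) {
      Volume v = getNextVolume(volumes, blockSize);
      v.available -= blockSize;
      System.out.println("block " + i + " -> " + v.dir);
    }
  }
}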

One more thing. If disk utilization reaches 100%, other important data (e.g. error logs) can no longer be written. To prevent this, Hadoop provides the "dfs.datanode.du.reserved" setting. When Hadoop calculates disk capacity, this value is always subtracted from the real capacity. Setting it to several hundred megabytes should be safe.
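For example, to keep roughly 512 MB of headroom per disk (the exact amount is your choice; the value is given in bytes), the property could be set in hdfs-site.xml like this:

<property>
  <name>dfs.datanode.du.reserved</name>
  <value>536870912</value>
</property>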

This is Hadoop's default strategy, but I think taking the disk load average into account would be better: if one disk is busy, Hadoop should avoid using it. However, with this method the block distribution would no longer be even across the disks, so read performance would drop. This is a very difficult problem. Can you come up with a better strategy?
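As a thought experiment, a load-aware variant might look something like the sketch below. This is not Hadoop code: the LoadAwareVolume class and its busy flag are hypothetical (in practice the flag could be derived from per-disk I/O statistics). It prefers idle volumes with enough space and falls back to a busy one only when nothing idle is available.

import java.util.List;

// Hypothetical volume with a free-space figure and a crude "busy" flag.
class LoadAwareVolume {
  final String dir;
  long available;
  boolean busy;
  LoadAwareVolume(String dir, long available, boolean busy) {
    this.dir = dir; this.available = available; this.busy = busy;
  }
}

class LoadAwareChooser {
  private int curVolume = 0;

  // One round-robin pass preferring idle volumes with enough space,
  // then fall back to any volume with enough space.
  synchronized LoadAwareVolume getNextVolume(List<LoadAwareVolume> volumes, long blockSize) {
    int n = volumes.size();
    LoadAwareVolume fallback = null;
    for (int i = 0; i < n; i++) {
      LoadAwareVolume v = volumes.get(curVolume);
      curVolume = (curVolume + 1) % n;
      if (v.available <= blockSize) continue;   // not enough space
      if (!v.busy) return v;                    // idle and big enough: take it
      if (fallback == null) fallback = v;       // remember a busy-but-usable volume
    }
    if (fallback != null) return fallback;      // everything usable is busy
    throw new RuntimeException("Insufficient space for an additional block");
  }
}

The fallback keeps blocks flowing when every disk is busy, but as noted above, any load-based bias makes the block distribution uneven across disks, which can hurt read parallelism later.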
