deadline-iosched.txt


If you have any comment or update to the content, please contact the
original document maintainer directly. However, if you have a problem
communicating in English you can also ask the Chinese maintainer for
help. Contact the Chinese maintainer if this translation is outdated
or if there is a problem with the translation.


Chinese maintainer: 曾亮亮 <lianglaingzeng93@gmail.com>
---------------------------------------------------------------------
Chinese translation of Documentation/block/deadline-iosched.txt




Chinese maintainer: 曾亮亮 <lianglaingzeng93@gmail.com>
Chinese translator: 曾亮亮 <lianglaingzeng93@gmail.com>
Chinese proofreader: 曾亮亮 <lianglaingzeng93@gmail.com>


The original document follows:

Deadline IO scheduler tunables
==============================
  
This little file attempts to document how the deadline io scheduler works.
In particular, it will clarify the meaning of the exposed tunables that may be
of interest to power users.
Selecting IO schedulers
-----------------------

Refer to Documentation/block/switching-sched.txt for information on
selecting an io scheduler on a per-device basis.
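On a running system the scheduler can also be inspected and switched per
device through sysfs. A brief sketch, where the device name sda is only an
assumed example:

```shell
# Show the schedulers available for this device; the active one is in brackets.
cat /sys/block/sda/queue/scheduler

# Switch this device to the deadline scheduler (requires root).
echo deadline > /sys/block/sda/queue/scheduler
```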
 
 ********************************************************************************
 
 
read_expire     (in ms)
-----------

The goal of the deadline io scheduler is to attempt to guarantee a start
service time for a request. As we focus mainly on read latencies, this is
tunable. When a read request first enters the io scheduler, it is assigned
a deadline that is the current time + the read_expire value in units of
milliseconds.
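When deadline is the active scheduler, its tunables appear under the queue's
iosched directory. A sketch of reading and setting read_expire, again
assuming a device named sda:

```shell
# Read the current value (the default is typically 500 ms for reads).
cat /sys/block/sda/queue/iosched/read_expire

# Tighten the read deadline to 250 ms (requires root).
echo 250 > /sys/block/sda/queue/iosched/read_expire
```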
 
write_expire    (in ms)
-----------

Similar to read_expire mentioned above, but for writes.
 
fifo_batch      (number of requests)
----------

Requests are grouped into ``batches'' of a particular data direction (read or
write) which are serviced in increasing sector order.  To limit extra seeking,
deadline expiries are only checked between batches.  fifo_batch controls the
maximum number of requests per batch.

This parameter tunes the balance between per-request latency and aggregate
throughput.  When low latency is the primary concern, smaller is better (where
a value of 1 yields first-come first-served behaviour).  Increasing fifo_batch
generally improves throughput, at the cost of latency variation.
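The two ends of that trade-off can be sketched through the same assumed
sysfs path (device name sda and the value 32 are illustrative only):

```shell
# Favour per-request latency: one request per batch, i.e. first-come
# first-served behaviour.
echo 1 > /sys/block/sda/queue/iosched/fifo_batch

# Favour aggregate throughput: larger batches, with deadline expiry
# checked less often.
echo 32 > /sys/block/sda/queue/iosched/fifo_batch
```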
 
writes_starved  (number of dispatches)
--------------

When we have to move requests from the io scheduler queue to the block
device dispatch queue, we always give a preference to reads. However, we
don't want to starve writes indefinitely either. So writes_starved controls
how many times we give preference to reads over writes. When that has been
done writes_starved number of times, we dispatch some writes based on the
same criteria as reads.
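The bookkeeping described above can be sketched in shell. This is an
illustrative model, not kernel code; the function name pick_direction and
the setting writes_starved=2 are assumptions made for the example:

```shell
#!/bin/sh
# Model of the writes_starved accounting: reads are preferred until they
# have passed over pending writes writes_starved times, then a write is
# dispatched and the starvation counter resets.
writes_starved=2   # assumed tunable value
starved=0          # how many times writes have been passed over

pick_direction() {  # $1 = pending reads, $2 = pending writes; result in $DIR
    if [ "$1" -gt 0 ] && { [ "$2" -eq 0 ] || [ "$starved" -lt "$writes_starved" ]; }; then
        # Dispatch a read; count it as starving writes only if writes wait.
        if [ "$2" -gt 0 ]; then starved=$((starved + 1)); fi
        DIR=read
    else
        # Writes have waited long enough (or no reads pending): dispatch one.
        starved=0
        DIR=write
    fi
}

# With both directions pending: two reads go first, then one write.
pick_direction 4 4; echo "$DIR"   # read
pick_direction 4 4; echo "$DIR"   # read
pick_direction 4 4; echo "$DIR"   # write
```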
 
front_merges    (bool)
------------

Sometimes it happens that a request enters the io scheduler that is contiguous
with a request that is already on the queue. Either it fits in the back of that
request, or it fits at the front. That is called either a back merge candidate
or a front merge candidate. Due to the way files are typically laid out,
back merges are much more common than front merges. For some work loads, you
may even know that it is a waste of time to spend any time attempting to
front merge requests. Setting front_merges to 0 disables this functionality.
Front merges may still occur due to the cached last_merge hint, but since
that comes at basically 0 cost we leave that on. We simply disable the
rbtree front sector lookup when the io scheduler merge function is called.
 
  Nov 11 2002, Jens Axboe <jens.axboe@oracle.com>
 
 
