Using a cache tier
Source: Internet · Editor: 程序博客网 · 2024/06/09 19:14
Cache tier modes:
Writeback Mode: When admins configure tiers with writeback mode, Ceph clients write data to the cache tier and receive an ACK from the cache tier. In time, the data written to the cache tier migrates to the storage tier and gets flushed from the cache tier. Conceptually, the cache tier is overlaid “in front” of the backing storage tier. When a Ceph client needs data that resides in the storage tier, the cache tiering agent migrates the data to the cache tier on read, then it is sent to the Ceph client. Thereafter, the Ceph client can perform I/O using the cache tier, until the data becomes inactive. This is ideal for mutable data (e.g., photo/video editing, transactional data, etc.).
Read-only Mode: When admins configure tiers with readonly mode, Ceph clients write data to the backing tier. On read, Ceph copies the requested object(s) from the backing tier to the cache tier. Stale objects get removed from the cache tier based on the defined policy. This approach is ideal for immutable data (e.g., presenting pictures/videos on a social network, DNA data, X-Ray imaging, etc.), because reading data from a cache pool that might contain out-of-date data provides weak consistency. Do not use readonly mode for mutable data.
The modes above are further adapted to different configurations:
Read-forward Mode: this mode is the same as writeback mode when serving write requests. But when a Ceph client tries to read an object not yet copied to the cache tier, Ceph forwards the client to the backing tier by replying with a “redirect” message, and the client then turns to the backing tier for the data. If the read performance of the backing tier is on a par with that of the cache tier, while its write performance or endurance falls far behind, this mode might be a better choice.
Read-proxy Mode: this mode is similar to read-forward mode: neither promotes/copies the data when the requested object does not exist in the cache tier. But instead of redirecting the client to the backing tier on a cache miss, the cache tier reads from the backing tier on behalf of the client. Under some circumstances, this mode can help reduce latency.
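All four modes are selected with the same `ceph osd tier cache-mode` command. As a sketch (assuming a cache pool named `cache`, as in the steps below; pick exactly one mode):

```shell
# Set the cache pool's mode; only one mode is active at a time.
ceph osd tier cache-mode cache writeback    # mutable data
ceph osd tier cache-mode cache readonly     # immutable data only
ceph osd tier cache-mode cache readforward  # redirect reads to backing tier on miss
ceph osd tier cache-mode cache readproxy    # proxy reads from backing tier on miss
```

Note that depending on the Ceph release, some modes (e.g. `readonly`) may require an extra `--yes-i-really-mean-it` flag because of their weak-consistency caveats.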
Usage:
1. Add cache CRUSH buckets:
- Create a CRUSH bucket of type root:
root@ceph:~# ceph osd crush add-bucket cache root
added bucket cache type root to crush map
- Create a CRUSH bucket of type host:
root@ceph:~# ceph osd crush add-bucket host-cache host
added bucket host-cache type host to crush map
- Move the new host under the new root in the CRUSH map:
root@ceph:~# ceph osd crush move host-cache root=cache
moved item id -6 name 'host-cache' to location {root=cache} in crush map
- Add an OSD to host-cache, or move an existing OSD there:
root@ceph:~# ceph osd crush create-or-move osd.2 0.03899 host=host-cache
create-or-move updating item name 'osd.2' weight 0.03899 at location {host=host-cache} to crush map
- Check the CRUSH map:
root@ceph:~# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-5 0.03899 root cache
-6 0.03899     host host-cache
 2 0.03899         osd.2            up  1.00000          1.00000
-1 0.08780 root default
-2 0.08780     host ceph
 1 0.08780         osd.1            up  1.00000          1.00000
2. Add a CRUSH rule:
- Create a simple CRUSH rule based on the cache bucket:
root@ceph:~# ceph osd crush rule create-simple cache cache host
- View the current rules:
root@ceph:~# ceph osd crush rule dump
{
    "rule_id": 2,
    "rule_name": "cache",
    "ruleset": 2,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        { "op": "take", "item": -5, "item_name": "cache" },
        { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
        { "op": "emit" }
    ]
}
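If the JSON dump is hard to read, the full CRUSH map can also be exported and decompiled with `crushtool` for inspection (standard Ceph tooling; the file names here are arbitrary):

```shell
# Export the compiled CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin
# Decompile it into a readable text form
crushtool -d crushmap.bin -o crushmap.txt
# Inspect the buckets and rules, including the new 'cache' root and rule
less crushmap.txt
```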
3. Create the cache pool:
- Create the cache pool:
root@ceph:~# ceph osd pool create cache 64 64
pool 'cache' created
- Set the pool's CRUSH rule:
root@ceph:~# ceph osd pool set cache crush_ruleset 2
set pool 3 crush_ruleset to 2
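A quick sanity check that the rule took effect (pool id 3 comes from the output above; on newer Ceph releases the setting is named `crush_rule` rather than `crush_ruleset`):

```shell
# Read back the pool's rule setting; expect it to report ruleset 2
ceph osd pool get cache crush_ruleset
# The pool's line in the OSD dump also shows the ruleset in use
ceph osd dump | grep "pool 3"
```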
4. Create the cache tier:
- Bind the cache pool to the data pool:
root@ceph:~# ceph osd tier add data cache
pool 'cache' is now (or already was) a tier of 'data'
- Set the cache tier mode:
root@ceph:~# ceph osd tier cache-mode cache writeback
set cache-mode for pool 'cache' to writeback
- Set the overlay:
root@ceph:~# ceph osd tier set-overlay data cache
overlay for 'data' is now (or already was) 'cache' (WARNING: overlay pool cache_mode is still NONE)
- Set the cache pool parameters:
root@ceph:~# ceph osd pool set cache hit_set_type bloom
root@ceph:~# ceph osd pool set cache hit_set_count 1
root@ceph:~# ceph osd pool set cache hit_set_period 600
root@ceph:~# ceph osd pool set cache target_max_bytes 10000000000
root@ceph:~# ceph osd pool set cache target_max_objects 300000
root@ceph:~# ceph osd pool set cache cache_min_flush_age 600
root@ceph:~# ceph osd pool set cache cache_min_evict_age 600
root@ceph:~# ceph osd pool set cache cache_target_dirty_ratio 0.4
root@ceph:~# ceph osd pool set cache cache_target_dirty_high_ratio 0.6
root@ceph:~# ceph osd pool set cache cache_target_full_ratio 0.8
root@ceph:~# ceph osd pool set cache min_read_recency_for_promote 1
root@ceph:~# ceph osd pool set cache min_write_recency_for_promote 1
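Once the tier is configured, a write through the `data` pool should land in the `cache` pool first; and when the tier is no longer needed it must be flushed before being unbound. A sketch of both (the object name and test file are arbitrary; the flush/remove commands are the standard Ceph teardown sequence):

```shell
# Verify write-back behavior: the object should appear in the cache pool
rados -p data put test-obj /etc/hosts
rados -p cache ls

# Tear down the tier later: stop caching new writes, flush dirty objects,
# then remove the overlay and unbind the tier
ceph osd tier cache-mode cache forward --yes-i-really-mean-it
rados -p cache cache-flush-evict-all
ceph osd tier remove-overlay data
ceph osd tier remove data cache
```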