GlusterFS Command Reference

Source: Internet · Editor: 程序博客网 · Date: 2024/06/03
1. Add a server to the storage pool. The hostnames used to create the storage pool must be resolvable by DNS, and the firewall must not block the probe requests/replies (e.g. iptables -F):
gluster peer probe server
2. The reverse operation, removing a server from the storage pool:
gluster peer detach server4
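Putting the two commands together, a typical pool-building session might look like the sketch below (hostnames are placeholders; gluster peer status is the standard way to verify the pool):

```shell
# Run from server1: probe the other nodes into the trusted pool.
gluster peer probe server2
gluster peer probe server3
# Verify that the peers are in the pool and connected.
gluster peer status
# Later, remove a node that is no longer needed.
gluster peer detach server3
```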
3. Create a volume. General syntax:
gluster volume create NEW-VOLNAME [stripe COUNT | replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK1 NEW-BRICK2 NEW-BRICK3...
Distributed:
gluster volume create NEW-VOLNAME [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
For example, to create a distributed volume with four storage servers using tcp:
# gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Replicated:
gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
For example, to create a replicated volume with two storage servers:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
Striped:
gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
For example, to create a striped volume across two storage servers:
# gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
Distributed striped:
gluster volume create NEW-VOLNAME [stripe COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
For example, to create a distributed striped volume across eight storage servers:
# gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
Distributed replicated:
gluster volume create NEW-VOLNAME [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
For example, to create a four node distributed (replicated) volume with a two-way mirror:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
For example, to create a six node distributed (replicated) volume with a two-way mirror:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
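Brick order matters here: with replica 2, adjacent bricks on the command line form one mirror pair, and files are then distributed across those pairs. A minimal sketch of that grouping (brick names are illustrative):

```shell
# Group a brick list into replica sets the way "replica 2" does:
# consecutive bricks on the command line form one mirror pair.
replica=2
set -- server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
sets=""
while [ $# -ge $replica ]; do
  sets="$sets[$1 $2] "
  shift $replica
done
echo "replica sets: $sets"
```

Because adjacent bricks mirror each other, bricks belonging to the same replica set should be placed on different servers.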
 
Three-in-one (distributed striped replicated):
Create a distributed striped replicated volume using the following command:
# gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
For example, to create a distributed replicated striped volume across eight storage servers:
# gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
 
Striped replicated:
gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport [tcp | rdma | tcp,rdma]] NEW-BRICK...
For example, to create a striped replicated volume across four storage servers:
# gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Or across six storage servers:
# gluster volume create test-volume stripe 3 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
 
Start a volume:
gluster volume start VOLNAME
For example, to start test-volume:
# gluster volume start test-volume
4. Client management. Mount the distributed volume on a local directory:
mount -t glusterfs server1:/test-volume /mnt/glusterfs
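To make the mount persistent across reboots, an /etc/fstab entry can be used (a sketch; _netdev delays the mount until the network is up, and backupvolfile-server is an optional mount option naming a fallback server in case server1 is down):

```
server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev,backupvolfile-server=server2 0 0
```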
5. Manage a volume:
Tune volume options using the following command:
# gluster volume set VOLNAME OPTION PARAMETER
For example, to specify the performance cache size for test-volume:
# gluster volume set test-volume performance.cache-size 256MB
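A changed option can be inspected and reverted: gluster volume info lists the reconfigured options, and gluster volume reset restores an option to its default. A sketch:

```shell
# Set, inspect, and revert a tunable on test-volume.
gluster volume set test-volume performance.cache-size 256MB
gluster volume info test-volume      # "Options Reconfigured" shows the change
gluster volume reset test-volume performance.cache-size
```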
The following options were originally presented as a table:

auth.allow — IP addresses allowed to access the volume. Default: all. Valid values: IP addresses, including wildcard patterns such as 192.168.1.*
auth.reject — IP addresses denied access to the volume. Default: none. Valid values: IP addresses, including wildcard patterns such as 192.168.2.*
client.grace-timeout — how long the session is kept alive when a client loses its network connection. Default: 10. Range: 10-1800 secs.
cluster.self-heal-window-size — maximum number of blocks per file on which self-heal happens simultaneously. Default: 16. Range: 0-1025 blocks.
cluster.data-self-heal-algorithm — selects the self-heal algorithm: diff repairs only the differing blocks, full copies the entire file, reset reverts to the default behavior. Valid values: diff | full | reset.
cluster.min-free-disk — percentage of disk space that must be kept free. Default: 10%.
cluster.stripe-block-size — size of the stripe unit that will be read from or written to. Default: 128KB. Valid values: any size value.
cluster.self-heal-daemon — enables or disables the self-heal daemon. Default: on. Can be set to off.
diagnostics.brick-log-level — log level of the bricks. Default: INFO. Valid values: DEBUG | WARNING | ERROR | CRITICAL | NONE | TRACE.
diagnostics.client-log-level — log level of the clients. Default: INFO. Valid values: DEBUG | WARNING | ERROR | CRITICAL | NONE | TRACE.
diagnostics.latency-measurement — statistics related to the latency of each operation are tracked. Default: off. Can be set to on.
diagnostics.dump-fd-stats — statistics related to file operations are tracked. Default: off. Can be set to on.
features.read-only — all client mounts see the volume as read-only. Default: off. Can be set to on.
features.lock-heal — heal locks when the network disconnects. Default: on. Can be set to off.
features.quota-timeout — for performance reasons, quota caches directory sizes on the client. This timeout sets the maximum duration for which cached directory sizes are considered valid, counted from the time they are populated. Default: 0. Range: 0-3600 secs.
geo-replication.indexing — selects whether this server syncs as the master or the slave. Default: on. Can be set to off.
network.frame-timeout — time after which an operation is declared dead if the server does not respond. Default: 1800 secs (30 min).
network.ping-timeout — how long the client waits to check whether the server is responsive. When a ping timeout occurs, the network connection between client and server is considered broken, and all resources held by the server on behalf of that client are cleaned up. On reconnection, all those resources must be re-acquired before the client can resume its operations, and locks must be re-acquired and the lock tables updated. This reconnect is a very expensive operation and should be avoided. Default: 42 secs.
(NFS-related options to be added later.)
performance.write-behind-window-size — size of the per-file write-behind buffer. Default: 1MB. Valid values: any size value.
performance.io-thread-count — number of IO threads. Default: 16. Can be raised to 65.
performance.flush-behind — if set to on, instructs the write-behind translator to perform flush in the background, returning success (or any error of a previous write) to the application even before the flush reaches the backend filesystem. Default: on. Can be set to off.
performance.cache-max-file-size — maximum file size cached by the io-cache translator. Normal size descriptors (KB, MB, GB, TB or PB, e.g. 6GB) can be used; the maximum is uint64. Default: 2^64 - 1 bytes.
performance.cache-min-file-size — the opposite of the above: the minimum file size cached by the io-cache translator.
performance.cache-refresh-timeout — cached file data is retained for this many seconds before being re-validated. Default: 1. Range: 1-61 secs.
performance.cache-size — size of the read cache. Default: 32MB.
server.allow-insecure — allow clients to connect from unprivileged ports. Default: on.
server.grace-timeout — server-side counterpart of client.grace-timeout.
server.statedump-path — directory into which state dump files are written. Default: the /tmp directory of the brick. Valid values: a new directory path.
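Several of the options above accept human-readable sizes such as 256MB or 6GB. A minimal sketch of how such a string maps to a byte count (an illustrative helper, not GlusterFS's own parser):

```shell
# Convert a human-readable size (KB/MB/GB suffix) to bytes.
to_bytes() {
  case "$1" in
    *KB) echo $(( ${1%KB} * 1024 )) ;;
    *MB) echo $(( ${1%MB} * 1024 * 1024 )) ;;
    *GB) echo $(( ${1%GB} * 1024 * 1024 * 1024 )) ;;
    *)   echo "$1" ;;   # plain byte count, passed through
  esac
}
cache=$(to_bytes 256MB)   # the performance.cache-size example above
echo "$cache"
```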