Ceph RGW Remote Sync (Multisite) Configuration

0. Preview

The example in this article is based on Ceph 10.2.3 (Jewel).
This article walks through an example of configuring RGW remote sync (multisite); for a study of the underlying mechanism, see http://blog.csdn.net/for_tech/article/details/68928185

1. Overview

                                      +--- master-zone      : 1 or more rgw backed by ceph cluster1
                                      +--- secondary-zone-1 : 1 or more rgw backed by ceph cluster2
        +-master-zone-group ----------+ ......
        |                             +--- secondary-zone-N : 1 or more rgw backed by ceph clusterN
        |
        |                             +--- master-zone
        |                             +--- secondary-zone-1
realm---+-secondary-zone-group-1------+ ......
        |                             +--- secondary-zone-N
        +-secondary-zone-group-2
        |
        + ......
        |
        +-secondary-zone-group-N
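
Once a realm has been configured, this hierarchy can be inspected directly on any node. A quick sketch using standard radosgw-admin subcommands:

  radosgw-admin realm list
  radosgw-admin zonegroup list
  radosgw-admin zone list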

2. Example

In this example, I will build the simplest possible configuration:


                              +--- zone_master_hyg    : client.radosgw.gateway0 on ceph-cluster-1
  realm_hyg - zonegroup_hyg --+
                              +--- zone_secondary_hyg : client.radosgw.gateway1 on ceph-cluster-2

2.1 Configure ceph clusters

  On node1.localdomain, set up a Ceph cluster (ceph-cluster-1) without any radosgw instance;
  On node2.localdomain, set up a Ceph cluster (ceph-cluster-2) without any radosgw instance;

2.2 Generate an access key and a secret key

  access-key=1WYCCJZ9JRLWZU8JTDQJ
  secret=PXhbQDJVeF1PsXw5tsCuIaKY0N8s1BP2J3yCn9K3
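
These are arbitrary random strings; any values of the right shape work, as long as the same pair is used on both sites. A minimal sketch for generating a fresh pair, assuming /dev/urandom and coreutils:

  # 20-character uppercase alphanumeric access key
  ACCESS_KEY=$(tr -dc 'A-Z0-9' < /dev/urandom | head -c 20)
  # 40-character alphanumeric secret key
  SECRET_KEY=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 40)
  echo "access-key=$ACCESS_KEY secret=$SECRET_KEY"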

2.3 Configure Master Zone

2.3.1 Create pools for master zone and delete the rbd pool

 ceph osd pool create zone_master_hyg.rgw.control 16 16
 ceph osd pool create zone_master_hyg.rgw.data.root 16 16
 ceph osd pool create zone_master_hyg.rgw.gc 16 16
 ceph osd pool create zone_master_hyg.rgw.log 16 16
 ceph osd pool create zone_master_hyg.rgw.intent-log 16 16
 ceph osd pool create zone_master_hyg.rgw.usage 16 16
 ceph osd pool create zone_master_hyg.rgw.users.keys 16 16
 ceph osd pool create zone_master_hyg.rgw.users.email 16 16
 ceph osd pool create zone_master_hyg.rgw.users.swift 16 16
 ceph osd pool create zone_master_hyg.rgw.users.uid 16 16
 ceph osd pool create zone_master_hyg.rgw.buckets.index 32 32
 ceph osd pool create zone_master_hyg.rgw.buckets.data 32 32
 ceph osd pool create zone_master_hyg.rgw.meta 16 16
 rados rmpool rbd rbd --yes-i-really-really-mean-it
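
To confirm that all the pools exist before continuing, you can list them (standard ceph CLI):

  ceph osd pool ls | grep '^zone_master_hyg'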

2.3.2 Create the realm

  radosgw-admin realm create --rgw-realm=realm_hyg --default
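
To verify that the realm was created and is the default, dump it:

  radosgw-admin realm get --rgw-realm=realm_hyg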

2.3.3 Create the master zone group

  radosgw-admin zonegroup create --rgw-zonegroup=zonegroup_hyg --endpoints=http://node1.localdomain:8000 --master --default
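
The resulting zonegroup can be inspected; the output should show it as the master zonegroup with the endpoint given above:

  radosgw-admin zonegroup get --rgw-zonegroup=zonegroup_hyg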

2.3.4 Create the master zone

  radosgw-admin zone create --rgw-zonegroup=zonegroup_hyg --rgw-zone=zone_master_hyg --endpoints=http://node1.localdomain:8000 --access-key=1WYCCJZ9JRLWZU8JTDQJ --secret=PXhbQDJVeF1PsXw5tsCuIaKY0N8s1BP2J3yCn9K3 --default --master
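
Likewise, the zone definition, including the pool names it maps to (which should match the pools created in 2.3.1), can be checked with:

  radosgw-admin zone get --rgw-zone=zone_master_hyg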

2.3.5 Create the user with the generated access and secret key

  radosgw-admin user create --uid=yuanguo --display-name="Yuanguo Huo"  --access-key=1WYCCJZ9JRLWZU8JTDQJ --secret=PXhbQDJVeF1PsXw5tsCuIaKY0N8s1BP2J3yCn9K3 --system
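
The --system flag matters here: this system user's keys are what the secondary site will use to pull the realm and period in 2.4. To confirm the keys were stored as given:

  radosgw-admin user info --uid=yuanguo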

2.3.6 Update the period

  radosgw-admin period update --commit
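
Committing the period publishes the realm, zonegroup and zone changes made above. The current period and its epoch can be displayed with:

  radosgw-admin period get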

2.3.7 Edit the ceph.conf by adding these lines

  [client.radosgw.gateway0]
    rgw_frontends = "civetweb port=8000"
    rgw_zone=zone_master_hyg
    admin_socket = /var/ceph/sock/gateway0.asock
    rgw socket path = /var/ceph/sock/gateway0.sock
    keyring = /var/ceph/rgwkeyr/gateway0.keyring
    log file = /var/ceph/logs/gateway0.log
    debug_rgw = 100
    debug_objecter = 100
    rgw_override_bucket_index_max_shards = 6
    rgw_md_log_max_shards = 4
    rgw_num_zone_opstate_shards = 8
    rgw_data_log_num_shards = 8
    rgw_objexp_hints_num_shards = 7

2.3.8 Start the rgw

  The stock ceph-radosgw@.service unit starts radosgw with --name client.%i, so the systemd instance name must match the [client.radosgw.gateway0] config section:

  systemctl start ceph-radosgw@radosgw.gateway0
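
As a quick smoke test, an anonymous S3 request to the endpoint should return a ListAllMyBuckets XML document rather than a connection error:

  curl http://node1.localdomain:8000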

2.4 Configure Secondary Zone

2.4.1 Create pools for secondary zone and delete the rbd pool

 ceph osd pool create zone_secondary_hyg.rgw.control 16 16
 ceph osd pool create zone_secondary_hyg.rgw.data.root 16 16
 ceph osd pool create zone_secondary_hyg.rgw.gc 16 16
 ceph osd pool create zone_secondary_hyg.rgw.log 16 16
 ceph osd pool create zone_secondary_hyg.rgw.intent-log 16 16
 ceph osd pool create zone_secondary_hyg.rgw.usage 16 16
 ceph osd pool create zone_secondary_hyg.rgw.users.keys 16 16
 ceph osd pool create zone_secondary_hyg.rgw.users.email 16 16
 ceph osd pool create zone_secondary_hyg.rgw.users.swift 16 16
 ceph osd pool create zone_secondary_hyg.rgw.users.uid 16 16
 ceph osd pool create zone_secondary_hyg.rgw.buckets.index 32 32
 ceph osd pool create zone_secondary_hyg.rgw.buckets.data 32 32
 ceph osd pool create zone_secondary_hyg.rgw.meta 16 16
 rados rmpool rbd rbd --yes-i-really-really-mean-it

2.4.2 Pull the realm

  radosgw-admin realm pull --url=http://node1.localdomain:8000 --access-key=1WYCCJZ9JRLWZU8JTDQJ --secret=PXhbQDJVeF1PsXw5tsCuIaKY0N8s1BP2J3yCn9K3

2.4.3 Pull the period

  radosgw-admin period pull --url=http://node1.localdomain:8000 --access-key=1WYCCJZ9JRLWZU8JTDQJ --secret=PXhbQDJVeF1PsXw5tsCuIaKY0N8s1BP2J3yCn9K3

2.4.4 Set the realm and zonegroup just pulled as the default

  radosgw-admin realm default --rgw-realm=realm_hyg
  radosgw-admin zonegroup default --rgw-zonegroup=zonegroup_hyg
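
With the defaults set, node2's view of the realm and of the current period should match the master's:

  radosgw-admin realm get
  radosgw-admin period get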

2.4.5 Create the secondary zone

  Note: the master site and the secondary site share the same realm and zonegroup, so the realm and zonegroup are pulled from the master site; the zones differ, however, so the secondary zone must be created here:
  radosgw-admin zone create --rgw-zonegroup=zonegroup_hyg --rgw-zone=zone_secondary_hyg --endpoints=http://node2.localdomain:8000 --default --access-key=1WYCCJZ9JRLWZU8JTDQJ --secret=PXhbQDJVeF1PsXw5tsCuIaKY0N8s1BP2J3yCn9K3
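
Note that no --master flag is given this time. You can verify the new zone, and that it landed in the shared zonegroup, with:

  radosgw-admin zone get --rgw-zone=zone_secondary_hyg
  radosgw-admin zonegroup get --rgw-zonegroup=zonegroup_hyg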

2.4.6 Update the period

  radosgw-admin period update --commit --rgw-zone=zone_secondary_hyg

2.4.7 Edit the ceph.conf by adding these lines

  [client.radosgw.gateway1]
    rgw_frontends = "civetweb port=8000"
    rgw_zone=zone_secondary_hyg
    admin_socket = /var/ceph/sock/gateway1.asock
    rgw socket path = /var/ceph/sock/gateway1.sock
    keyring = /var/ceph/rgwkeyr/gateway1.keyring
    log file = /var/ceph/logs/gateway1.log
    debug_rgw = 100
    debug_objecter = 100
    rgw_override_bucket_index_max_shards = 6
    rgw_md_log_max_shards = 4
    rgw_num_zone_opstate_shards = 8
    rgw_data_log_num_shards = 8
    rgw_objexp_hints_num_shards = 7

2.4.8 Start the rgw

  As on the master site, the systemd instance name must match the config section:

  systemctl start ceph-radosgw@radosgw.gateway1

2.5 Check sync status

  radosgw-admin sync status
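
Run on either site, this reports the metadata and data sync state relative to the peer zone. For an end-to-end check, write an object through the master endpoint and read it back through the secondary one; a sketch using s3cmd (assuming a version that accepts --host/--host-bucket on the command line) with the keys from 2.2:

  S3OPTS="--access_key=1WYCCJZ9JRLWZU8JTDQJ --secret_key=PXhbQDJVeF1PsXw5tsCuIaKY0N8s1BP2J3yCn9K3 --no-ssl"
  # write through the master zone
  s3cmd $S3OPTS --host=node1.localdomain:8000 --host-bucket=node1.localdomain:8000 mb s3://sync-test
  echo hello > /tmp/obj.txt
  s3cmd $S3OPTS --host=node1.localdomain:8000 --host-bucket=node1.localdomain:8000 put /tmp/obj.txt s3://sync-test/obj.txt
  # give sync a few seconds, then read through the secondary zone
  s3cmd $S3OPTS --host=node2.localdomain:8000 --host-bucket=node2.localdomain:8000 ls s3://sync-test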

3. A Study of the RGW Remote Sync Mechanism

    See the next article: http://blog.csdn.net/for_tech/article/details/68928185
