Ceph RadosGW - Placement Targets



Reposted from: https://blog-fromsomedude.rhcloud.com/2015/11/06/Ceph-RadosGW-Placement-Targets/


Placement targets in the RGW are not well known, nor do they have the documentation they deserve. Let's have a stab at it.

What are placement targets?

Placement targets allow an administrator to expose multiple pools of storage (e.g. SSD and spinning drives) through the RadosGW.
The administrator defines a default placement target and can explicitly allow some users to write data to other placement targets. The default placement target can also be changed on a per-user basis.
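
Under the hood, an S3 client selects a placement target at bucket-creation time: RGW reads the LocationConstraint element of the CreateBucket request and treats anything after a colon as the placement target name. As a minimal sketch (the target names match the ones set up below), the request body looks like

<CreateBucketConfiguration>
    <LocationConstraint>default:ssd</LocationConstraint>
</CreateBucketConfiguration>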

Setting up placement targets

This article assumes you already have a Ceph cluster deployed, CRUSH rules defined, and a working RadosGW.

Here is what my current deployment looks like.

# ceph osd tree
ID WEIGHT  TYPE NAME              UP/DOWN REWEIGHT PRIMARY-AFFINITY
-5 0.26999 root ssd
-6 0.09000     host host01-ssd
 9 0.09000         osd.9               up  1.00000          1.00000
-7 0.09000     host host02-ssd
10 0.09000         osd.10              up  1.00000          1.00000
-8 0.09000     host host03-ssd
11 0.09000         osd.11              up  1.00000          1.00000
-1 8.09996 root default
-4 2.70000     host host03
 6 0.89999         osd.6               up  1.00000          1.00000
 7 0.89999         osd.7               up  1.00000          1.00000
 8 0.89999         osd.8               up  1.00000          1.00000
-9 2.69997     host host01
 0 0.89998         osd.0               up  1.00000          1.00000
 1 0.89999         osd.1               up  1.00000          1.00000
 2 0.89999         osd.2               up  1.00000          1.00000
-3 2.70000     host host02
 4 0.89999         osd.4               up  1.00000          1.00000
 3 0.89999         osd.3               up  1.00000          1.00000
 5 0.89999         osd.5               up  1.00000          1.00000
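
If your SSD drives are not yet under a separate CRUSH root, a hierarchy like the one above can be built with commands along these lines (a sketch for host01 only; the weight is illustrative):

# ceph osd crush add-bucket ssd root
# ceph osd crush add-bucket host01-ssd host
# ceph osd crush move host01-ssd root=ssd
# ceph osd crush create-or-move osd.9 0.09 host=host01-ssd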

And the CRUSH rules.

rule data {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule ssd-rule {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}
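
If you keep your rules in a decompiled CRUSH map, one way to inject them is to round-trip the map through crushtool (file names are arbitrary):

# ceph osd getcrushmap -o crushmap.bin
# crushtool -d crushmap.bin -o crushmap.txt
  ... add the rules above to crushmap.txt ...
# crushtool -c crushmap.txt -o crushmap.new
# ceph osd setcrushmap -i crushmap.new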

We need to define our placement targets for both the region and the zone maps.

# radosgw-admin region get > /tmp/region

Edit /tmp/region so it looks like

{    "name": "default",    "api_name": "",    "is_master": "true",    "endpoints": [],    "hostnames": [],    "master_zone": "",    "zones": [        {            "name": "default",            "endpoints": [],            "log_meta": "false",            "log_data": "false",            "bucket_index_max_shards": 0        }    ],    "placement_targets": [        {            "name": "default-placement",            "tags": ["default-placement"]        },        {            "name": "ssd",            "tags": ["ssd"]        }    ],    "default_placement": "default-placement"}

And make the changes to the region map

# radosgw-admin region set < /tmp/region
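
To confirm the new target was registered, you can simply re-read the region map; the placement_targets array should now list both entries (a quick sanity check, not strictly required):

# radosgw-admin region get | grep -A 3 placement_targets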

For the zone map

# radosgw-admin zone get > /tmp/zone

Edit /tmp/zone

{    "domain_root": ".rgw",    "control_pool": ".rgw.control",    "gc_pool": ".rgw.gc",    "log_pool": ".log",    "intent_log_pool": ".intent-log",    "usage_log_pool": ".usage",    "user_keys_pool": ".users",    "user_email_pool": ".users.email",    "user_swift_pool": ".users.swift",    "user_uid_pool": ".users.uid",    "system_key": {        "access_key": "",        "secret_key": ""    },    "placement_pools": [        {            "key": "default-placement",            "val": {                "index_pool": ".rgw.buckets.index",                "data_pool": ".rgw.buckets",                "data_extra_pool": ".rgw.buckets.extra"            }        },        {            "key": "ssd",            "val": {                "index_pool": ".rgw.buckets-ssd.index",                "data_pool": ".rgw.buckets-ssd",                "data_extra_pool": ".rgw.buckets-ssd.extra"            }        },    ]}

And apply the changes to the zone map

# radosgw-admin zone set < /tmp/zone

Create the 3 pools defined in the zone and assign them the SSD ruleset

# ceph osd pool create .rgw.buckets-ssd.index 8
# ceph osd pool create .rgw.buckets-ssd 32
# ceph osd pool create .rgw.buckets-ssd.extra 8
# ceph osd pool set .rgw.buckets-ssd.index crush_ruleset 1
# ceph osd pool set .rgw.buckets-ssd crush_ruleset 1
# ceph osd pool set .rgw.buckets-ssd.extra crush_ruleset 1
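
It is worth verifying that the pools actually picked up the SSD ruleset (shown for one pool; the same check applies to the other two):

# ceph osd pool get .rgw.buckets-ssd crush_ruleset
crush_ruleset: 1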

Update the region map and restart the RadosGW to apply the region and zone changes

# radosgw-admin regionmap update

(EL)
# service ceph-radosgw restart

(Ubuntu)
# restart radosgw-all

Trying it out

Create a user

# radosgw-admin user create --uid=ptuser --display-name "Placement Target test user"

By default, this user should only be able to create buckets in the default placement target.
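
You can see why by inspecting the freshly created user: its own placement fields are empty, so the region default applies (abridged output):

# radosgw-admin user info --uid=ptuser
{
    [...]
    "default_placement": "",
    "placement_tags": [],
    [...]
}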

# ./s3curl.pl --id ptuser --createBucket -- http://localhost/ptuserBucket1
# radosgw-admin bucket stats --bucket ptuserBucket1 | grep pool
    "pool": ".rgw.buckets",
    "index_pool": ".rgw.buckets.index",

Trying to create a bucket in the SSD placement target should fail. Note the syntax <region>:<placement-target>.

# ./s3curl.pl --id ptuser --createBucket default:ssd -- -i http://localhost:8080/ptuserBucket2
HTTP/1.1 403 Forbidden
x-amz-request-id: tx000000000000000000001-00563d3a85-d46928-default
Content-Length: 150
Accept-Ranges: bytes
Content-Type: application/xml
Date: Fri, 06 Nov 2015 23:40:53 GMT

<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><RequestId>tx000000000000000000001-00563d3a85-d46928-default</RequestId></Error>

If we have a look at the RadosGW logs, we can see

2015-11-06 15:40:53.887068 7f11337fe700 0 user not permitted to use placement rule

Let’s authorize this user to create a bucket in the SSD tier

# radosgw-admin metadata get user:ptuser > /tmp/user

Edit /tmp/user to set default-placement and the allowed placement tags

{    "key": "user:ptuser",    [..]        "default_placement": "default-placement",        "placement_tags": ["default-placement", "ssd"],    [...]}

And apply the changes

# radosgw-admin metadata put user:ptuser < /tmp/user
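
To double-check that the edit was persisted, re-read the metadata and look for the placement fields:

# radosgw-admin metadata get user:ptuser | grep placement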

ptuser should now be able to create buckets on the SSD tier

# ./s3curl.pl --id ptuser --createBucket default:ssd -- -i http://localhost:8080/ptuserBucket2
HTTP/1.1 200 OK
x-amz-request-id: tx000000000000000000002-00563d3b03-d46928-default
Content-Length: 0
Date: Fri, 06 Nov 2015 23:42:59 GMT

We can check that the bucket has been created in the SSD tier with

# radosgw-admin bucket stats --bucket ptuserBucket2 | grep pool
    "pool": ".rgw.buckets-ssd",
    "index_pool": ".rgw.buckets-ssd.index"

Hurray.

You could easily have multiple tiers, for instance erasure-coded, replicated, and SSD pools for cold, warm, and hot data.
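
As a rough sketch of what a cold tier could look like, assuming an erasure-code profile and pool names that are purely illustrative (the bucket index pool must stay replicated, since erasure-coded pools do not support the omap operations it relies on):

# ceph osd erasure-code-profile set cold-profile k=4 m=2
# ceph osd pool create .rgw.buckets-cold.index 8
# ceph osd pool create .rgw.buckets-cold 64 64 erasure cold-profile

You would then register a "cold" placement target in the region and zone maps exactly as done for "ssd" above.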
