MongoDB Learning Journey, Part 30: Replica Sets + Sharding


    MongoDB auto-sharding solves the problems of massive storage and dynamic scale-out, but on its own it falls short of the high reliability and high availability a production environment needs. That is where the "Replica Sets + Sharding" architecture comes in:

    shard:

    Each shard is a Replica Set, so every data node has a replica, with automatic failover and automatic recovery.

    config:

    Three config servers keep the cluster metadata redundant and intact.

    route:

    Three routing processes (mongos) spread client load and improve client connection performance.
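To summarize the topology before diving into the commands: each of the three servers runs one member of each shard's replica set, one config server, and one mongos. A small sketch of that layout (the IPs and ports are the ones used in the commands later in this article):

```python
# Sketch of the Replica Sets + Sharding topology described above.
# Every one of the three servers runs the same four mongo processes.
SERVERS = ["192.168.3.231", "192.168.3.232", "192.168.3.233"]

# (process role, port) pairs present on every server
PROCESSES = [
    ("shard1 mongod (replica set member)", 27017),
    ("shard2 mongod (replica set member)", 27018),
    ("config server mongod", 20000),
    ("mongos router", 30000),
]

topology = {ip: dict(PROCESSES) for ip in SERVERS}

for ip, procs in topology.items():
    for role, port in procs.items():
        print(f"{ip}:{port}  {role}")
```

Because every shard member, config server, and router exists on all three machines, any single server can fail without losing data or metadata.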

     Architecture diagram for Replica Sets + Sharding:

     (architecture diagram image not reproduced here)

     Configuring Replica Sets + Sharding

    (1) Configure the Replica Set used by shard1

    On server A:

[root@localhost bin]# /Apps/mongo/bin/mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/shard1_1 --logpath /data/shard1_1/shard1_1.log --logappend --fork
all output going to: /data/shard1_1/shard1_1.log
forked process: 18923
    On server B:
[root@localhost bin]# /Apps/mongo/bin/mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/shard1_2 --logpath /data/shard1_2/shard1_2.log --logappend --fork
all output going to: /data/shard1_2/shard1_2.log
forked process: 18859
    On server C:
[root@localhost bin]# /Apps/mongo/bin/mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/shard1_3 --logpath /data/shard1_3/shard1_3.log --logappend --fork
all output going to: /data/shard1_3/shard1_3.log
forked process: 18768
    Connect with the mongo shell to the mongod listening on port 27017 on any one of the machines and initialize the Replica Set "shard1":
[root@localhost bin]# ./mongo --port 27017
MongoDB shell version: 1.8.1
connecting to: 127.0.0.1:27017/test
> config = {_id: 'shard1', members: [
...           {_id: 0, host: '192.168.3.231:27017'},
...           {_id: 1, host: '192.168.3.232:27017'},
...           {_id: 2, host: '192.168.3.233:27017'}]
... }
……
> rs.initiate(config)
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
    (2) Configure the Replica Set used by shard2

       On server A:

[root@localhost bin]# /Apps/mongo/bin/mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/shard2_1 --logpath /data/shard2_1/shard2_1.log --logappend --fork
all output going to: /data/shard2_1/shard2_1.log
forked process: 18993
       On server B:
[root@localhost bin]# /Apps/mongo/bin/mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/shard2_2 --logpath /data/shard2_2/shard2_2.log --logappend --fork
all output going to: /data/shard2_2/shard2_2.log
forked process: 18923
       On server C:
[root@localhost bin]# /Apps/mongo/bin/mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/shard2_3 --logpath /data/shard2_3/shard2_3.log --logappend --fork
all output going to: /data/shard2_3/shard2_3.log
forked process: 18824

       Connect with the mongo shell to the mongod listening on port 27018 on any one of the machines and initialize the Replica Set "shard2":

[root@localhost bin]# ./mongo --port 27018
MongoDB shell version: 1.8.1
connecting to: 127.0.0.1:27018/test
> config = {_id: 'shard2', members: [
...           {_id: 0, host: '192.168.3.231:27018'},
...           {_id: 1, host: '192.168.3.232:27018'},
...           {_id: 2, host: '192.168.3.233:27018'}]
... }
……
> rs.initiate(config)
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
  (3) Configure the 3 Config Servers
    Run on servers A, B, and C:
/Apps/mongo/bin/mongod --configsvr --dbpath /data/config --port 20000 --logpath /data/config/config.log --logappend --fork
   (4) Configure the 3 Route Processes
    Run on servers A, B, and C:
/Apps/mongo/bin/mongos --configdb 192.168.3.231:20000,192.168.3.232:20000,192.168.3.233:20000 --port 30000 --chunkSize 1 --logpath /data/mongos.log --logappend --fork
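Every mongos is stateless, so a client can spread its connections across the three routers. A minimal round-robin chooser, purely as an illustration (in practice this balancing would be done by the driver's seed list or an external TCP load balancer):

```python
import itertools

# The three mongos endpoints started above
MONGOS = ["192.168.3.231:30000", "192.168.3.232:30000", "192.168.3.233:30000"]

# Round-robin iterator: each new connection goes to the next router in turn
_next_router = itertools.cycle(MONGOS).__next__

# Six consecutive "connections" cycle through all three routers twice
picks = [_next_router() for _ in range(6)]
print(picks)
```

If one router dies, the client simply skips to the next endpoint; no cluster state is lost, because all routing metadata lives on the config servers.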
    (5) Configure the Shard Cluster
     Connect to the mongos process listening on port 30000 on any one of the machines, switch to the admin database, and run the following:
[root@localhost bin]# ./mongo --port 30000
MongoDB shell version: 1.8.1
connecting to: 127.0.0.1:30000/test
> use admin
switched to db admin
> db.runCommand({ addshard: "shard1/192.168.3.231:27017,192.168.3.232:27017,192.168.3.233:27017" });
{ "shardAdded" : "shard1", "ok" : 1 }
> db.runCommand({ addshard: "shard2/192.168.3.231:27018,192.168.3.232:27018,192.168.3.233:27018" });
{ "shardAdded" : "shard2", "ok" : 1 }
    Enable sharding for the database and the collection:
db.runCommand({ enablesharding: "test" })
db.runCommand({ shardcollection: "test.users", key: { _id: 1 } })
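The shard key { _id: 1 } gives range-based sharding: MongoDB splits the _id space into chunks at split points and the balancer moves whole chunks between shards. A tiny simulation of that idea (not MongoDB code; the split points and placement are hypothetical):

```python
import bisect

# Hypothetical chunk split points on _id; MinKey/MaxKey bound the two ends,
# so three split points give four chunks in total.
split_points = [50000, 100000, 150000]

def chunk_for(_id):
    """Index of the chunk a given _id falls into (range sharding)."""
    return bisect.bisect_right(split_points, _id)

# Balancer-style placement: alternate chunks between the two shards
shard_of_chunk = {i: f"shard{i % 2:04d}" for i in range(len(split_points) + 1)}

for _id in (1, 50000, 99999, 200000):
    c = chunk_for(_id)
    print(_id, "-> chunk", c, "on", shard_of_chunk[c])
```

Every query that includes _id can be routed to exactly one chunk, which is why mongos can target a single shard instead of broadcasting.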
    (6) Verify that Sharding is working
    Connect to the mongos process on port 30000 on any one of the machines and switch to the test database to insert some test data:
use test
for(var i = 1; i <= 200000; i++) db.users.insert({id: i, addr_1: "Beijing", addr_2: "Shanghai"});
db.users.stats()
{
    "sharded" : true,
    "ns" : "test.users",
    "count" : 200000,
    "size" : 25600384,
    "avgObjSize" : 128,
    "storageSize" : 44509696,
    "nindexes" : 2,
    "nchunks" : 15,
    "shards" : {
        "shard0000" : {……},
        "shard0001" : {……}
    },
    "ok" : 1
}
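The stats above are internally consistent: avgObjSize is simply size divided by count. A quick check of the reported numbers:

```python
# Figures reported by db.users.stats() above
count = 200000
size = 25600384        # total document bytes across both shards
avg_obj_size = 128     # reported average object size
nchunks = 15

# avgObjSize is size / count, rounded to the reported integer
assert round(size / count) == avg_obj_size   # 25600384 / 200000 = 128.00192

# With --chunkSize 1 (1 MB chunks), ~24 MB of data ending up in 15 chunks
# is plausible, since splits happen as data grows rather than all at once.
print(size / count)
```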
    The stats show "sharded" : true with the data spread across both shards, exactly the result we expected. Sharding is up and running, and with that we have finished learning the combined Replica Sets + Sharding architecture!
