MongoDB Replica Set + Sharding

Source: Internet · 程序博客网 · 2024/04/28 11:37


 

1. When local disk space runs out, distribute the data across additional machines to store large data sets and spread the processing load -> sharding

2. Failure recovery, redundancy, and read/write splitting -> replica set

This article tries out the following architecture:

1.      Replica set + sharding

1)     The architecture diagram is:



3 servers: Server 1 acts as the primary node for the 3 shards, while Server 2 and Server 3 form the replica sets that back up Server 1's data.

2)     Configuration

Server A

The shard1 configuration is:

port=28010
replSet=rs1
fork=true
dbpath=/root/data/shard/s0
logpath=/root/data/shard/log/s0.log
shardsvr=true
directoryperdb=true

For shard2 and the config server, replSet is rs2 and rsconf respectively.

Server B

The shard1 configuration is:

port=28010
replSet=rs1
fork=true
dbpath=/root/data/shard/s0
logpath=/root/data/shard/log/s0.log
logappend=true
directoryperdb=true

For shard2 and the config server, replSet is rs2 and rsconf respectively.
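The per-shard config files above differ only in a few option values, so generating them from one template avoids copy-paste drift. A minimal sketch in plain Python (the helper `write_shard_conf` is hypothetical; paths and ports are the ones used in this article):

```python
import os
import tempfile

def write_shard_conf(basedir, name, port, repl_set, shardsvr=True):
    """Write a mongod config file like the ones shown above and return its path."""
    options = {
        "port": port,
        "replSet": repl_set,
        "fork": "true",
        "dbpath": f"/root/data/shard/{name}",
        "logpath": f"/root/data/shard/log/{name}.log",
        "directoryperdb": "true",
    }
    if shardsvr:
        options["shardsvr"] = "true"
    path = os.path.join(basedir, f"{name}.conf")
    with open(path, "w") as f:
        for key, value in options.items():
            f.write(f"{key}={value}\n")
    return path

# Generate the two shard configs for one server (s0 -> rs1, s1 -> rs2).
basedir = tempfile.mkdtemp()
s0 = write_shard_conf(basedir, "s0", 28010, "rs1")
s1 = write_shard_conf(basedir, "s1", 28011, "rs2")
```

Each generated file can then be passed to `mongod -f` as shown in the startup step.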

3)     Startup

Start the shard processes on each of the three servers:

mongod -f s0.conf

mongod -f s1.conf

Connect to any one of the servers:

mongo --port 28010

a)       Replica set rs1 configuration:

config={_id:"rs1",members:[
    {_id:0,host:"192.168.182.210:28010"},
    {_id:1,host:"192.168.182.211:28010"},
    {_id:2,host:"192.168.182.212:28010",arbiterOnly:true}]}

rs.initiate(config)

The rs1 replica set is now up; rs2 is built the same way.
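The member list passed to rs.initiate() must use unique _id values and host:port addresses; a repeated _id is an easy copy-paste mistake when cloning member entries. A small local sanity check, sketched in plain Python (not part of MongoDB; the helper is hypothetical):

```python
def validate_rs_config(config):
    """Return a list of problems found in a replica set config document."""
    errors = []
    ids = [m["_id"] for m in config["members"]]
    if len(ids) != len(set(ids)):
        errors.append("member _id values are not unique: %s" % ids)
    for m in config["members"]:
        if ":" not in m.get("host", ""):
            errors.append("host is missing a port: %r" % m.get("host"))
    return errors

# A well-formed config like the one above passes the check.
good = {"_id": "rs1", "members": [
    {"_id": 0, "host": "192.168.182.210:28010"},
    {"_id": 1, "host": "192.168.182.211:28010"},
    {"_id": 2, "host": "192.168.182.212:28010", "arbiterOnly": True},
]}
# Repeating an _id (a common slip when copying member lines) is flagged.
bad = {"_id": "rs1", "members": [
    {"_id": 0, "host": "192.168.182.210:28010"},
    {"_id": 1, "host": "192.168.182.211:28010"},
    {"_id": 1, "host": "192.168.182.212:28010"},
]}
```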

b)      Set up the config servers and start the router (mongos)

This reported an error:

BadValue: Invalid configdb connection string: FailedToParse: invalid url [192.168.182.210:30000,192.168.182.211:30000,192.168.182.212:30000]

All of the steps had been carried out correctly; it turned out that as of version 3.2, the configdb option format changed to: configReplSet/<cfgsvr1:port1>,<cfgsvr2:port2>,<cfgsvr3:port3>
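The 3.2-style configdb value is simply the config server replica set name, a slash, and a comma-separated host list. Building and checking it can be sketched in a few lines of plain Python (helper names are mine, not a MongoDB API):

```python
def build_configdb(repl_set, hosts):
    """Assemble a 3.2-style configdb string: <replSet>/<host1>,<host2>,..."""
    return repl_set + "/" + ",".join(hosts)

def parse_configdb(value):
    """Split a configdb string back into (replica set name, host list).

    Raises ValueError for the old pre-3.2 format that has no replica
    set prefix, which is exactly what triggers the BadValue error above.
    """
    repl_set, _, host_part = value.partition("/")
    if not host_part:
        raise ValueError("missing config replica set name: " + value)
    return repl_set, host_part.split(",")

hosts = ["192.168.182.210:30000", "192.168.182.211:30000", "192.168.182.212:30000"]
value = build_configdb("rsconf", hosts)
```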

c)       Configure the config server replica set as:

config={_id:"rsconf",members:[
    {_id:0,host:"192.168.182.210:30000"},
    {_id:1,host:"192.168.182.211:30000"},
    {_id:2,host:"192.168.182.212:30000"}]}

d)      Configure the router config file, rout.conf:

port=40000
fork=true
configdb=rsconf/192.168.182.210:30000,192.168.182.211:30000,192.168.182.212:30000
logpath=/root/data/shard/log/rout.log
chunkSize=1
logappend=true

e)       Add the shards:

db.runCommand({addshard:"rs1/192.168.182.210:28010",name:"shard1"})
db.runCommand({addshard:"rs2/192.168.182.210:28011",name:"shard2"})

mongos> db.proj.ensureIndex({name:1})
mongos> db.runCommand({enablesharding:"test"})
{ "ok" : 1 }
mongos> db.runCommand({shardcollection:"test.proj",key:{"name":1}})
{ "collectionsharded" : "test.proj", "ok" : 1 }
mongos> for(var i=0;i<10000;i++){
... db.proj.insert({name:i+"de"})
... }

Check the sharding status:

mongos> printShardingStatus()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("57dcb44297abbc578ff6f300")
}
  shards:
        { "_id" : "shard1", "host" : "rs1/192.168.182.210:28010,192.168.182.210:28010" }
        { "_id" : "shard2", "host" : "rs2/192.168.182.211:28011,192.168.182.211:28011" }
        { "_id" : "shard3", "host" : "rs3/192.168.182.212:28012,192.168.182.212:28012" }
  active mongoses:
        "3.2.5" : 2
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                2 : Success
                1 : Failed with error 'aborted', from shard1 to shard2
  databases:
        { "_id" : "test", "primary" : "shard1", "partitioned" : true }
                test.proj
                        shard key: { "name" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1      1
                                shard2      1
                                shard3      1
                        { "name" : { "$minKey" : 1 } } -->> { "name" : "10de" } on : shard2 Timestamp(2, 0)
                        { "name" : "10de" } -->> { "name" : "6de" } on : shard3 Timestamp(3, 0)
                        { "name" : "6de" } -->> { "name" : { "$maxKey" : 1 } } on : shard1 Timestamp(3, 1)
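The chunk boundaries in the status output ("10de" and "6de") look odd for numeric-looking names, but string shard keys are compared lexicographically, so "10de" sorts before "6de". A toy routing sketch in plain Python (the chunk table is copied from the printShardingStatus() output above; this is an illustration, not mongos's actual routing code):

```python
# Half-open chunk ranges from the status output, as (upper_bound, shard):
#   ($minKey, "10de") -> shard2
#   ["10de", "6de")   -> shard3
#   ["6de", $maxKey)  -> shard1
# None stands in for $maxKey (no upper bound).
CHUNKS = [
    ("10de", "shard2"),
    ("6de", "shard3"),
    (None, "shard1"),
]

def route(name):
    """Return the shard owning the chunk whose range contains `name`."""
    for upper, shard in CHUNKS:
        if upper is None or name < upper:  # lexicographic comparison
            return shard
```

For example, "100de" lands on shard2 because character-by-character it sorts below "10de", even though 100 > 10 numerically.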

4)     Testing

a)       Test sharding

Add a new shard:

db.runCommand({addshard:"rs3/192.168.182.210:28012",name:"shard3"})

Connect to any one of the instances: mongo --port 28012

rs3:SECONDARY> rs.slaveOk()
rs3:SECONDARY> db.proj.count()
6555

b)      Test failover

Shut down shard1 on Server A.

Server B's shard1 is automatically elected as the new primary.
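The election succeeds because a majority of rs1's voting members (Server B's data node plus the arbiter, 2 of 3) can still reach each other. A toy illustration of the majority rule in plain Python (a simplification, not MongoDB's actual election protocol):

```python
def can_elect_primary(total_voting, reachable_voting):
    """A primary can be elected only while a strict majority of the
    voting members remains reachable."""
    return reachable_voting > total_voting // 2

# Three-member set (two data nodes + one arbiter), as built above:
# - all up:       3 of 3 reachable, primary available
# - one failure:  2 of 3 reachable, a new primary is elected
# - two failures: 1 of 3 reachable, no primary (reads only)
```

This is why the arbiter is worth running: it adds a vote for the majority without storing data.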

rs1:PRIMARY> rs.status()
{
        …
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.182.210:28010",
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
…

After restarting Server A's shard, its state changes to SECONDARY.


Shut down Server A's config server.

The config servers on the other machines are used instead. Connect to Server B's mongos:

mongos> db.proj.count()
10001

Next, shut down Server B's config server as well; Server C's config server is then used.


Shut down Server A's mongos.

Connect to B's mongos and query:

mongos> db.proj.count()

10001

Shut down B's mongos.

Connect to C's mongos and query:

mongos> db.proj.count()
10001
mongos> db.proj.find()
{ "_id" : ObjectId("57dcbadde0ec017f1c47136d"), "name" : "test" }
{ "_id" : ObjectId("57dcbc1ae0ec017f1c471376"), "name" : "8de" }
{ "_id" : ObjectId("57dcbc1ae0ec017f1c471374"), "name" : "6de" }


This demonstrates that replica sets provide solid redundancy, and that running multiple config servers and mongos instances is indeed necessary.
