MongoDB Cluster Deployment: Replica Set + Sharding



I. Solution: Replica Set + Sharding


1) Virtual machine environment:

Three CentOS 6.6 virtual machines (see http://blog.csdn.net/chxr1620/article/details/76695682 for the base setup); the scripts below refer to them as centos1, centos2 and centos3 (see the /etc/hosts sketch after this list).


2) MongoDB version: mongodb-linux-x86_64-2.6.12.tgz
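The deployment uses the hostnames centos1, centos2 and centos3 throughout (including the config server string), so every node must be able to resolve those names. A minimal /etc/hosts sketch; the IP addresses below are placeholders and must be replaced with your own:

# /etc/hosts on each of the three nodes (placeholder addresses)
192.168.1.101   centos1
192.168.1.102   centos2
192.168.1.103   centos3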

 

II. Installation and Deployment


1.   tar zxvf mongodb-linux-x86_64-2.x.x.tgz

2.     rm -rf /opt/mongodb && mv mongodb-linux-x86_64-2.x.x/ /opt/mongodb
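As a quick sanity check (not part of the original steps), you can confirm the binaries were unpacked correctly by printing the server version:

/opt/mongodb/bin/mongod --version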

3.     Automated deployment script, to be run on each of centos1, centos2 and centos3.

The script (mongodb_deploy.sh) is as follows:

#!/bin/sh

echo"Plear input the Centos's index(1 or 2 or 3)?"

read cindex

 

rm -rf /work/mongodb

 

mkdir -p /work/mongodb/shard1$cindex

 

cat> /work/mongodb/shard1$cindex.conf << EOF

shardsvr=true

replSet=shard1

port=28017

dbpath=/work/mongodb/shard1$cindex

oplogSize=500

logpath=/work/mongodb/shard1$cindex.log

logappend=true

fork=true

rest=true

nojournal=true

 

EOF

 

 

mkdir -p /work/mongodb/shard2$cindex

 

cat> /work/mongodb/shard2$cindex.conf << EOF

shardsvr=true

replSet=shard2

port=28018

dbpath=/work/mongodb/shard2$cindex

oplogSize=500

logpath=/work/mongodb/shard2$cindex.log

logappend=true

fork=true

rest=true

nojournal=true

 

 

EOF

 

mkdir -p /work/mongodb/config

 

cat > /work/mongodb/config$cindex.conf << EOF

configsvr=true

dbpath=/work/mongodb/config/

port=20000

logpath=/work/mongodb/config$cindex.log

logappend=true

fork=true

nojournal=true

 

 

EOF

 

 

mkdir -p /work/mongodb/arbiter1

cat > /work/mongodb/arbiter1.conf << EOF

shardsvr=true

replSet=shard1

port=28031

dbpath=/work/mongodb/arbiter1

oplogSize=100

logpath=/work/mongodb/arbiter1.log

logappend=true

fork=true

rest=true

nojournal=true

EOF

 

echo"finished arbiter1...."

 

mkdir -p /work/mongodb/arbiter2

cat > /work/mongodb/arbiter2.conf << EOF

shardsvr=true

replSet=shard2

port=28032

dbpath=/work/mongodb/arbiter2

oplogSize=100

logpath=/work/mongodb/arbiter2.log

logappend=true

fork=true

rest=true

nojournal=true

 

EOF

 

 

echo"finished arbiter2....."

 

mkdir -p /work/mongodb/mongos$cindex

cat > /work/mongodb/mongos$cindex.conf << EOF

configdb=centos1:20000,centos2:20000,centos3:20000

port=28885

chunkSize=100

logpath=/work/mongodb/mongos$cindex.log

logappend=true

fork=true

EOF

 

echo"finished mongos........"

 

echo"begin shard1"

/opt/mongodb/bin/mongod--config /work/mongodb/shard1$cindex.conf

echo"begin shard2"

/opt/mongodb/bin/mongod--config /work/mongodb/shard2$cindex.conf

echo"begin arbiter1"

/opt/mongodb/bin/mongod--config /work/mongodb/arbiter1.conf

echo"begin arbiter2"

/opt/mongodb/bin/mongod--config /work/mongodb/arbiter2.conf

echo"begin config"

/opt/mongodb/bin/mongod--config /work/mongodb/config$cindex.conf
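After the script finishes on a node, it is worth confirming that all five mongod processes forked successfully and are listening on the expected ports (28017, 28018, 28031, 28032 and 20000). A quick check, assuming the net-tools package is installed:

# list the mongod processes started by the script
ps -ef | grep mongod | grep -v grep
# confirm each one is listening on its configured port
netstat -lntp | grep -E '28017|28018|28031|28032|20000'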

4.   Run the following script (mongos_deploy.sh) on each virtual machine to start the mongos service:

#!/bin/bash

echo"the current Centos' index (1 or 2 or 3)?"

read cindex

/opt/mongodb/bin/mongos --config/work/mongodb/mongos$cindex.conf
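To verify that the router came up, check that mongos is listening on port 28885 and that the shell can reach it (an added sanity check; adjust the hostname to the node you are on):

netstat -lntp | grep 28885
/opt/mongodb/bin/mongo centos1:28885/admin --eval "printjson(db.runCommand({ping:1}))"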

5.   Configure the sharded replica sets

Log in to any machine, e.g. centos1, and set up the first shard's replica set:

/opt/mongodb/bin/mongo centos1:28017/admin

Define the replica set configuration:

config = { _id:"shard1", members:[ { _id:0, host:"centos1:28017" }, { _id:1, host:"centos2:28017" }, { _id:2, host:"centos3:28017", slaveDelay:7200, priority:0 }, { _id:3, host:"centos1:28031", arbiterOnly:true }, { _id:4, host:"centos2:28031", arbiterOnly:true }, { _id:5, host:"centos3:28031", arbiterOnly:true } ] };

Initialize the replica set configuration:

rs.initiate(config)

Note: errors encountered while configuring the shards (the following kinds may appear):

{

      "errmsg" : "exception:Can't take a write lock while out of disk space",

      "code" : 14031,

      "ok" : 0

}

 

{

      "ok" : 0,

      "errmsg" : "couldn'tinitiate : cmdline oplogsize (2048) different than existing (0) see:http://dochub.mongodb.org/core/increase-oplog"

}

For either of the two errors above, rerun everything from step 3 onwards.

 

{"ok" : 0, "errmsg" : "couldn't initiate : new fileallocation failure" }

This error is mainly caused by insufficient disk space; reduce the oplogSize value used in step 3 and then rerun everything from step 3 onwards.
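Before retrying, it can help to confirm how much space is actually free on the partition that holds /work/mongodb (a simple check; the mount point on your system may differ):

df -h /work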

 

Set up the second shard's replica set in the same way:

/opt/mongodb/bin/mongo centos1:28018/admin

config = { _id:"shard2", members:[ { _id:0, host:"centos1:28018" }, { _id:1, host:"centos2:28018" }, { _id:2, host:"centos3:28018", slaveDelay:7200, priority:0 }, { _id:3, host:"centos1:28032", arbiterOnly:true }, { _id:4, host:"centos2:28032", arbiterOnly:true }, { _id:5, host:"centos3:28032", arbiterOnly:true } ] };

rs.initiate(config)
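Once rs.initiate() returns { "ok" : 1 }, the members take a short while to elect a PRIMARY; each set should settle into one PRIMARY, the SECONDARY/delayed members and the ARBITER nodes. A non-interactive check from any node (added here for convenience):

/opt/mongodb/bin/mongo centos1:28017/admin --eval "printjson(rs.status())"
/opt/mongodb/bin/mongo centos1:28018/admin --eval "printjson(rs.status())"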

 

6.   Add the shards

Log in to any one of the virtual machines:

/opt/mongodb/bin/mongo centos1:28885/admin

Register shard replica sets 1 and 2 with the router:

db.runCommand({"addshard":"shard1/centos1:28017,centos2:28017"})

db.runCommand({"addshard":"shard2/centos1:28018,centos2:28018"})

Because centos3:28017 and centos3:28018 are delayed members, they are not included in the seed lists above.

Check the shard configuration:

db.runCommand({listshards:1});

 

Enable sharding on the NCDB database:

db.runCommand({"enablesharding":"NCDB"})

Specify the sharded collection and its shard key:

db.runCommand({shardcollection:"NCDB.test1", key:{id:1}})
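At this point sh.status(), run against mongos, should list both shards and show NCDB with "partitioned" : true. A non-interactive version of the check (added here; not in the original write-up):

/opt/mongodb/bin/mongo centos1:28885/admin --eval "sh.status()"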

7.   Test the sharding

Next we verify that sharding is working.

# Connect to the mongos server and shard a database and collection

[mongod@racdb ~]$ mongo racdb:28885/admin

MongoDB shell version: 3.2.3

connecting to: racdb:28885/admin

 

mongos> db.runCommand( {enablesharding:"testdb"});

{ "ok" : 1 }

mongos> db.runCommand( {shardcollection: "testdb.table1",key : {id: 1} } )

{ "collectionsharded":"testdb.table1", "ok" : 1 }

 

# Insert test data

mongos> use testdb

mongos> for (var i = 1; i <= 10000; i++) db.table1.save({ id: i, "test1": "licz" });
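After the inserts finish, you can see how the documents are spread across shard1 and shard2 with the standard getShardDistribution() helper. Note that with only 10000 small documents the data may still sit in a single chunk until the balancer splits and migrates it:

mongos> db.table1.getShardDistribution()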

The same can also be done from any of the virtual machines:

/opt/mongodb/bin/mongo centos1:28885/admin

db.runCommand({enablesharding:"testdb"})

 

 

Reference for errors when enabling sharding on a database:

https://stackoverflow.com/questions/19918956/can-not-create-shards-sharding-not-enabled-for-db



Reference: http://blog.csdn.net/lichangzai/article/details/50927588



 
