MongoDB 3.0.3 Sharded Cluster Setup



Let's start building the MongoDB sharded cluster:
1. Start a config server on each of master, slave1, and slave2
mkdir -p /data/mongodb/config /data/mongodb/logs
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongod --configsvr --dbpath /data/mongodb/config --port 20000 --logpath /data/mongodb/logs/configsvr_20000.log --logappend --fork
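A quick sanity check, not part of the original walkthrough: once started, each config server should answer a ping.
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongo 10.254.3.62:20000/admin --eval "printjson(db.runCommand({ ping: 1 }))"   # expect { "ok" : 1 }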




2. Start a mongos router on each of master, slave1, and slave2
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongos --configdb 10.254.3.62:20000,10.254.3.63:20000,10.254.3.72:20000 --port 30000 --chunkSize 64 --logpath /data/mongodb/logs/mongos.log --logappend --fork
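Note that with the mirrored config servers used in 3.0, every mongos should list the same three config servers in the same order in --configdb; --chunkSize 64 sets the chunk size to 64 MB, which is also the default. As a hedged check, the isdbgrid command only succeeds on a mongos, so it confirms you are talking to a router:
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongo localhost:30000/admin --eval "printjson(db.runCommand({ isdbgrid: 1 }))"   # { "isdbgrid" : 1, ..., "ok" : 1 }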




3. Start the shard replica-set members on master, slave1, and slave2. Each server runs one member of shard1 (port 27017) and one member of shard2 (port 27018); create the data directories first:
mkdir -p /data/mongodb/shard11 /data/mongodb/shard21

# On master:
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/mongodb/shard11 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m1s1_11.log --logappend --fork
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/mongodb/shard21 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m1s1_21.log --logappend --fork

# On slave1:
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/mongodb/shard11 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m1s2_12.log --logappend --fork
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/mongodb/shard21 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m1s2_22.log --logappend --fork

# On slave2:
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongod --shardsvr --replSet shard1 --port 27017 --dbpath /data/mongodb/shard11 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2s1_12.log --logappend --fork
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongod --shardsvr --replSet shard2 --port 27018 --dbpath /data/mongodb/shard21 --oplogSize 2048 --logpath /data/mongodb/logs/shard_m2s1_22.log --logappend --fork
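Before initiating the replica sets, it can be worth confirming that each mongod is up but not yet part of an initialized set (a hedged check, not from the original walkthrough):
/usr/local/mongodb-linux-x86_64-3.0.3/bin/mongo localhost:27017 --eval "printjson(db.isMaster())"   # before rs.initiate() this reports ismaster: false and no setName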


Original blog post: http://blog.csdn.net/mchdba/article/details/51087622, by mchdba (黄杉). Please do not repost.


4. Log in to any one of the MongoDB servers, for example:


### Set up the first shard replica set; this must be done from the admin database


mongo localhost:27017/admin
# Define the replica set


> config = { _id:"shard1", members:[
      {_id:0, host:"10.254.3.62:27017", priority:1},
      {_id:1, host:"10.254.3.63:27017", priority:2},
      {_id:2, host:"10.254.3.72:27017", arbiterOnly:true}
  ]};


# Initialize the replica set
> rs.initiate(config);
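rs.initiate() is run once, on a single member; the others pick the configuration up from it. The member with the highest priority is preferred as primary, and the arbiterOnly member votes in elections but stores no data. Two quick follow-up checks (standard shell helpers, shown here as a sketch):
> rs.conf()     // the replica set configuration now in effect
> rs.status()   // member states: PRIMARY, SECONDARY, ARBITER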


The execution looks like this:
[mongodb@db_m1_slave_1 logs]$ /usr/local/mongodb-linux-x86_64-3.0.3/bin/mongo localhost:27017
MongoDB shell version: 3.0.3
connecting to: localhost:27017/test
Server has startup warnings: 
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] 
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] 
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] 
> config = { _id:"shard1", members:[
... {_id:0,host:"10.254.3.62:27017",priority:1},
... {_id:1,host:"10.254.3.63:27017",priority:2},
... {_id:2,host:"10.254.3.72:27017",arbiterOnly:true}
... ]
... };
{
    "_id" : "shard1",
    "members" : [
        {
            "_id" : 0,
            "host" : "10.254.3.62:27017",
            "priority" : 1
        },
        {
            "_id" : 1,
            "host" : "10.254.3.63:27017",
            "priority" : 2
        },
        {
            "_id" : 2,
            "host" : "10.254.3.72:27017",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(config);
{ "ok" : 1 }
shard1:OTHER> 




### Set up the second shard replica set


config = { _id:"shard2", members:[
      {_id:0, host:"10.254.3.62:27018", priority:2},
      {_id:1, host:"10.254.3.63:27018", priority:1},
      {_id:2, host:"10.254.3.72:27018", arbiterOnly:true}
  ]};
rs.initiate(config);


The execution looks like this:
[mongodb@db_m1_slave_1 logs]$ /usr/local/mongodb-linux-x86_64-3.0.3/bin/mongo localhost:27018
MongoDB shell version: 3.0.3
connecting to: localhost:27018/test
Server has startup warnings: 
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] 
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] 
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-03-15T16:30:20.071+0800 I CONTROL  [initandlisten] 
> config = { _id:"shard2", members:[
... {_id:0,host:"10.254.3.62:27018",priority:2},
... {_id:1,host:"10.254.3.63:27018",priority:1},
... {_id:2,host:"10.254.3.72:27018",arbiterOnly:true}
... ]};
{
    "_id" : "shard2",
    "members" : [
        {
            "_id" : 0,
            "host" : "10.254.3.62:27018",
            "priority" : 2
        },
        {
            "_id" : 1,
            "host" : "10.254.3.63:27018",
            "priority" : 1
        },
        {
            "_id" : 2,
            "host" : "10.254.3.72:27018",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(config);
{ "ok" : 1 }
shard2:OTHER> 


Check the replica set status:
shard2:OTHER> rs.status();
{
    "set" : "shard2",
    "date" : ISODate("2016-03-15T09:03:16.372Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "10.254.3.62:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2010,
            "optime" : Timestamp(1458032561, 1),
            "optimeDate" : ISODate("2016-03-15T09:02:41Z"),
            "electionTime" : Timestamp(1458032565, 1),
            "electionDate" : ISODate("2016-03-15T09:02:45Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "10.254.3.63:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 34,
            "optime" : Timestamp(1458032561, 1),
            "optimeDate" : ISODate("2016-03-15T09:02:41Z"),
            "lastHeartbeat" : ISODate("2016-03-15T09:03:15.498Z"),
            "lastHeartbeatRecv" : ISODate("2016-03-15T09:03:15.498Z"),
            "pingMs" : 0,
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "10.254.3.72:27018",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 34,
            "lastHeartbeat" : ISODate("2016-03-15T09:03:15.499Z"),
            "lastHeartbeatRecv" : ISODate("2016-03-15T09:03:15.497Z"),
            "pingMs" : 1,
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
shard2:PRIMARY> 


5. The config servers, mongos routers, and shard servers are now all running, but an application connecting to a mongos router cannot use sharding yet; the shards still have to be registered with the cluster for sharding to take effect.
mongo localhost:30000/admin


# Register shard replica set 1 with the router
db.runCommand( { addshard : "shard1/10.254.3.62:27017,10.254.3.63:27017,10.254.3.72:27017"});


# Register shard replica set 2 with the router
db.runCommand( { addshard : "shard2/10.254.3.62:27018,10.254.3.63:27018,10.254.3.72:27018"});


[mongodb@db_m1_slave_1 logs]$ /usr/local/mongodb-linux-x86_64-3.0.3/bin/mongo localhost:30000/admin
MongoDB shell version: 3.0.3
connecting to: localhost:30000/admin
mongos> db.runCommand( { addshard : "shard1/10.254.3.62:27017,10.254.3.63:27017,10.254.3.72:27017"});
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand( { addshard : "shard2/10.254.3.62:27018,10.254.3.63:27018,10.254.3.72:27018"});
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> 
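Besides the listshards command used in the next step, the sh.status() shell helper prints a fuller overview (a standard helper, shown here as a sketch):
sh.status()   // sharding version, registered shards, databases, and chunk distribution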


6. Check the shard configuration
db.runCommand({listshards : 1 });
The command output:
mongos> db.runCommand({listshards : 1 });
{
    "shards" : [
        {
            "_id" : "shard1",
            "host" : "shard1/10.254.3.62:27017,10.254.3.63:27017"
        },
        {
            "_id" : "shard2",
            "host" : "shard2/10.254.3.62:27018,10.254.3.63:27018"
        }
    ],
    "ok" : 1
}
mongos> 
Because 10.254.3.72 is the arbiter in each shard replica set, it is not listed in the output above.
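The same information is stored in the config database, which can also be queried directly from mongos (a sketch):
use config
db.shards.find()   // one document per shard; arbiter hosts are likewise absent here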


7. The config service, routing service, shard servers, and replica sets are all wired together now, but our goal is for inserted data to be sharded automatically, and we are just one small step away from that.


Connect to mongos and enable sharding for the databases and collections that need it.
# Enable sharding on each target database
db.runCommand( { enablesharding :"app"});
db.runCommand( { enablesharding :"im"});
db.runCommand( { enablesharding :"parking"});
db.runCommand( { enablesharding :"pv"});
db.runCommand( { enablesharding :"report"});
db.runCommand( { enablesharding :"screen"});
db.runCommand( { enablesharding :"search"});
db.runCommand( { enablesharding :"traffice"});
db.runCommand( { enablesharding :"wifi"});
The execution looks like this:
mongos> db.runCommand( { enablesharding :"app"});
{ "ok" : 1 }
mongos> db.runCommand( { enablesharding :"im"});
{ "ok" : 1 }
mongos> db.runCommand( { enablesharding :"parking"});
{ "ok" : 1 }
mongos> db.runCommand( { enablesharding :"pv"});
{ "ok" : 1 }
mongos> db.runCommand( { enablesharding :"report"});
{ "ok" : 1 }
mongos> db.runCommand( { enablesharding :"screen"});
{ "ok" : 1 }
mongos> db.runCommand( { enablesharding :"search"});
{ "ok" : 1 }
mongos> db.runCommand( { enablesharding :"traffice"});
{ "ok" : 1 }
mongos> db.runCommand( { enablesharding :"wifi"});
{ "ok" : 1 }
mongos> 
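The sh.enableSharding() shell helper wraps the same command, so the list above could equally be written as, for example:
sh.enableSharding("app")   // same as db.runCommand({ enablesharding: "app" })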




# Specify the collection to shard and its shard key
db.runCommand( { shardcollection : "uba.table1",key : {id: 1} } )
Here we declare that the table1 collection in the uba database is to be sharded, with documents distributed across shard1 and shard2 by the id field. This explicit step is needed because not every MongoDB database or collection has to be sharded!
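One caveat worth adding: with a monotonically increasing key such as a plain ObjectId _id or an auto-increment id, all new inserts land in the highest chunk, i.e. on a single shard, until the balancer catches up. Since MongoDB 2.4 a hashed shard key spreads inserts evenly; a hashed variant of the command above would look like:
sh.shardCollection("uba.table1", { id: "hashed" })   // alternative, not what this walkthrough uses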


-- 1. app database
db.runCommand({enablesharding:"app"});
db.runCommand( { shardcollection : "app.download",key : {_id: 1} } );
The execution looks like this:
mongos> db.runCommand({enablesharding:"app"});
{ "ok" : 0, "errmsg" : "already enabled" }
mongos> use admin
switched to db admin
mongos> db.runCommand({enablesharding:"app"});
{ "ok" : 0, "errmsg" : "already enabled" }
mongos> db.runCommand( { shardcollection : "app.download",key : {_id: 1} } );
{ "collectionsharded" : "app.download", "ok" : 1 }
mongos> 
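To verify sharding end to end, one can insert test data through mongos and inspect the per-shard distribution (a hedged sketch; note that with a range key like { _id : 1 } a small data set may sit in a single chunk on one shard until it grows enough to be split and balanced):
use app
for (var i = 0; i < 100000; i++) { db.download.insert({ x: i }) }
db.download.getShardDistribution()   // documents and chunks per shard
db.download.stats().sharded          // true once the collection is sharded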