Troubleshoot Replica Sets
On this page
- Check Replica Set Status
- Check the Replication Lag
- Test Connections Between all Members
- Socket Exceptions when Rebooting More than One Secondary
- Check the Size of the Oplog
- Oplog Entry Timestamp Error
- Duplicate Key Error on local.slaves
This section describes common strategies for troubleshooting replica set deployments.
Check Replica Set Status
To display the current state of the replica set and current state of each member, run the rs.status() method in a mongo shell connected to the replica set’s primary. For descriptions of the information displayed by rs.status(), see replSetGetStatus.
NOTE
The rs.status() method is a wrapper that runs the replSetGetStatus database command.
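For example, a minimal sketch of summarizing each member’s state from the rs.status() output in the mongo shell (the name, stateStr, and health fields used here are part of the replSetGetStatus output):

    // Print a one-line summary per member from rs.status()
    rs.status().members.forEach(function (m) {
        print(m.name + "  state: " + m.stateStr + "  health: " + m.health);
    });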
Check the Replication Lag
Replication lag is a delay between an operation on the primary and the application of that operation from the oplog to the secondary. Replication lag can be a significant issue and can seriously affect MongoDB replica set deployments. Excessive replication lag makes “lagged” members ineligible to quickly become primary and increases the possibility that distributed read operations will be inconsistent.
To check the current length of replication lag:
In a mongo shell connected to the primary, call the rs.printSlaveReplicationInfo() method.
This returns the syncedTo value for each member, which shows the time when the last oplog entry was written to the secondary, as shown in the following example:

    source: m1.example.net:27017
        syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
        0 secs (0 hrs) behind the primary
    source: m2.example.net:27017
        syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
        0 secs (0 hrs) behind the primary

A delayed member may show as 0 seconds behind the primary when the inactivity period on the primary is greater than the members[n].slaveDelay value.
Monitor the rate of replication by watching the oplog time in the “replica” graph in MongoDB Cloud Manager and in Ops Manager, an on-premise solution available in MongoDB Enterprise Advanced. For more information see the MongoDB Cloud Manager documentation and Ops Manager documentation.
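If Cloud Manager or Ops Manager is not available, a rough per-member lag figure can also be derived from the rs.status() output; the following is a minimal sketch that compares each member’s optimeDate to the primary’s:

    // Approximate replication lag per member, in seconds, from rs.status()
    var status = rs.status();
    var primary = status.members.filter(function (m) { return m.stateStr === "PRIMARY"; })[0];
    status.members.forEach(function (m) {
        print(m.name + "  lag: " + (primary.optimeDate - m.optimeDate) / 1000 + " secs");
    });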
Possible causes of replication lag include:
Network Latency
Check the network routes between the members of your set to ensure that there is no packet loss or network routing issue.
Use tools such as ping to test latency between set members and traceroute to expose the routing of packets between network endpoints.
Disk Throughput
If the file system and disk device on the secondary is unable to flush data to disk as quickly as the primary, then the secondary will have difficulty keeping state. Disk-related issues are incredibly prevalent on multi-tenant systems, including virtualized instances, and can be transient if the system accesses disk devices over an IP network (as is the case with Amazon’s EBS system).
Use system-level tools, such as iostat or vmstat, to assess disk status.
Concurrency
In some cases, long-running operations on the primary can block replication on secondaries. For best results, configure write concern to require confirmation of replication to secondaries. This prevents write operations from returning if replication cannot keep up with the write load.
Use the database profiler to see if there are slow queries or long-running operations that correspond to the incidences of lag.
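As a starting point, the profiler can be enabled on the affected database from the mongo shell; a minimal sketch (the 100 ms threshold is an arbitrary example, and profiling applies only to the current database):

    // Record operations slower than 100 ms in the system.profile collection
    db.setProfilingLevel(1, 100)
    // Later, inspect the slowest recorded operations
    db.system.profile.find().sort({ millis: -1 }).limit(5).pretty()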
Appropriate Write Concern
If you are performing a large data ingestion or bulk load operation that requires a large number of writes to the primary, particularly with unacknowledged write concern, the secondaries will not be able to read the oplog fast enough to keep up with changes.
To prevent this, request write acknowledgment after every 100, 1,000, or another interval to provide an opportunity for secondaries to catch up with the primary. A sketch of this pattern follows.
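The following is a minimal sketch of that pattern in the mongo shell; the docs array and the bulkload collection are hypothetical placeholders, and the 1,000-document interval is only an example:

    // docs: hypothetical array of documents to bulk load
    for (var i = 0; i < docs.length; i++) {
        if ((i + 1) % 1000 === 0 || i === docs.length - 1) {
            // Every 1,000 documents, wait for the primary and one secondary to acknowledge
            db.bulkload.insert(docs[i], { writeConcern: { w: 2 } });
        } else {
            // Unacknowledged writes in between
            db.bulkload.insert(docs[i], { writeConcern: { w: 0 } });
        }
    }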
For more information see:
- Write Concern
- Replica Set Write Concern
- Oplog Size
Test Connections Between all Members
All members of a replica set must be able to connect to every other member of the set to support replication. Always verify connections in both “directions.” Networking topologies and firewall configurations can prevent normal and required connectivity, which can block replication.
Consider the following example of a bidirectional test of networking:
EXAMPLE
Given a replica set with three members running on three separate hosts:
m1.example.net
m2.example.net
m3.example.net
Test the connection from m1.example.net to the other hosts with the following operation set from m1.example.net:

    mongo --host m2.example.net --port 27017
    mongo --host m3.example.net --port 27017

Test the connection from m2.example.net to the other two hosts with the following operation set from m2.example.net, as in:

    mongo --host m1.example.net --port 27017
    mongo --host m3.example.net --port 27017

You have now tested the connection between m2.example.net and m1.example.net in both directions.
Test the connection from m3.example.net to the other two hosts with the following operation set from the m3.example.net host, as in:

    mongo --host m1.example.net --port 27017
    mongo --host m2.example.net --port 27017
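Equivalently, connectivity to the other members can be probed from within a mongo shell on one host; a minimal sketch, run here from m1.example.net against the other two members:

    // Attempt a connection and a ping against each remote member
    ["m2.example.net:27017", "m3.example.net:27017"].forEach(function (host) {
        try {
            var conn = new Mongo(host);  // throws if the host is unreachable
            print(host + "  ok: " + tojson(conn.getDB("admin").runCommand({ ping: 1 })));
        } catch (e) {
            print(host + "  NOT reachable: " + e);
        }
    });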
If any connection, in any direction fails, check your networking and firewall configuration and reconfigure your environment to allow these connections.
Socket Exceptions when Rebooting More than One Secondary
When you reboot members of a replica set, ensure that the set is able to elect a primary during the maintenance. This means ensuring that a majority of the set’s members[n].votes
are available.
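Before the maintenance, one way to confirm how many voting members will remain is to list each member’s votes from the replica set configuration; a minimal sketch in the mongo shell:

    // Show the vote count carried by each member of the current configuration
    rs.conf().members.forEach(function (m) {
        print(m.host + "  votes: " + m.votes);
    });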
When a set’s active members can no longer form a majority, the set’s primary steps down and becomes a secondary. The former primary closes all open connections to client applications. Clients attempting to write to the former primary receive socket exceptions and Connection reset errors until the set can elect a primary.
EXAMPLE
Given a three-member replica set where every member has one vote, the set can elect a primary if at least two members can connect to each other. If you reboot the two secondaries at once, the primary steps down and becomes a secondary. Until at least another secondary becomes available, i.e. at least one of the rebooted secondaries also becomes available, the set has no primary and cannot elect a new primary.
For more information on votes, see Replica Set Elections. For related information on connection errors, see Does TCP keepalive time affect MongoDB Deployments?.
Check the Size of the Oplog
A larger oplog can give a replica set a greater tolerance for lag, and make the set more resilient.
To check the size of the oplog for a given replica set member, connect to the member in a mongo shell and run the rs.printReplicationInfo() method.
The output displays the size of the oplog and the date ranges of the operations contained in the oplog. In the following example, the oplog is about 10 MB and is able to fit about 26 hours (94400 seconds) of operations:
    configured oplog size:   10.10546875MB
    log length start to end: 94400 (26.22hrs)
    oplog first event time:  Mon Mar 19 2012 13:50:38 GMT-0400 (EDT)
    oplog last event time:   Wed Oct 03 2012 14:59:10 GMT-0400 (EDT)
    now:                     Wed Oct 03 2012 15:00:21 GMT-0400 (EDT)
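The same window can also be computed directly from the local.oplog.rs collection; the following is a minimal sketch, assuming the mongo shell exposes a Timestamp’s seconds component as the .t property:

    // Compute the oplog window in hours from the first and last oplog entries
    var local = db.getSiblingDB("local");
    var first = local.oplog.rs.find().sort({ $natural: 1 }).limit(1).next().ts;
    var last  = local.oplog.rs.find().sort({ $natural: -1 }).limit(1).next().ts;
    print("oplog window: " + ((last.t - first.t) / 3600) + " hours");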
The oplog should be long enough to hold all transactions for the longest downtime you expect on a secondary. At a minimum, an oplog should be able to hold 24 hours of operations; however, many users prefer to have 72 hours or even a week’s worth of operations.
For more information on how oplog size affects operations, see:
- Oplog Size,
- Delayed Replica Set Members, and
- Check the Replication Lag.
NOTE
You normally want the oplog to be the same size on all members. If you resize the oplog, resize it on all members.
To change oplog size, see the Change the Size of the Oplog tutorial.
Oplog Entry Timestamp Error
Consider the following error in mongod output and logs:

    replSet error fatal couldn't query the local local.oplog.rs collection. Terminating mongod after 30 seconds.
    <timestamp> [rsStart] bad replSet oplog entry?
Often, an incorrectly typed value in the ts
field in the last oplog entry causes this error. The correct data type is Timestamp.
Check the type of the ts value using the following two queries against the oplog collection:

    db = db.getSiblingDB("local")
    db.oplog.rs.find().sort({$natural:-1}).limit(1)
    db.oplog.rs.find({ts:{$type:17}}).sort({$natural:-1}).limit(1)
The first query returns the last document in the oplog, while the second returns the last document in the oplog where the ts value is a Timestamp. The $type operator allows you to select BSON type 17, which is the Timestamp data type.
If the queries don’t return the same document, then the last document in the oplog has the wrong data type in the ts field.
EXAMPLE
If the first query returns this as the last oplog entry:
{ "ts" : {t: 1347982456000, i: 1}, "h" : NumberLong("8191276672478122996"), "op" : "n", "ns" : "", "o" : { "msg" : "Reconfig set", "version" : 4 } }
And the second query returns this as the last entry where ts has the Timestamp type:
{ "ts" : Timestamp(1347982454000, 1), "h" : NumberLong("6188469075153256465"), "op" : "n", "ns" : "", "o" : { "msg" : "Reconfig set", "version" : 3 } }
Then the value for the ts field in the last oplog entry is of the wrong data type.
To set the proper type for this value and resolve this issue, use an update operation that resembles the following:
db.oplog.rs.update( { ts: { t:1347982456000, i:1 } }, { $set: { ts: new Timestamp(1347982456000, 1)}})
Modify the timestamp values as needed based on your oplog entry. This operation may take some period to complete because the update must scan and pull the entire oplog into memory.
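After running the update, the check from above can be repeated to confirm the fix; a minimal sketch that compares the results of the two queries programmatically:

    // The two queries should now return the same last oplog entry
    var local = db.getSiblingDB("local");
    var a = local.oplog.rs.find().sort({ $natural: -1 }).limit(1).next();
    var b = local.oplog.rs.find({ ts: { $type: 17 } }).sort({ $natural: -1 }).limit(1).next();
    print("last entry has a Timestamp ts: " + (tojson(a) === tojson(b)));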
Duplicate Key Error on local.slaves
Changed in version 3.0.0.
MongoDB 3.0.0 removes the local.slaves collection. For local.slaves errors in earlier versions of MongoDB, refer to the appropriate version of the MongoDB Manual.