Change the Size of the Oplog
New in version 3.6.
This procedure changes the size of the oplog on each member of a replica set using the replSetResizeOplog command, starting with the secondary members before proceeding to the primary.
Important
You can only run replSetResizeOplog on replica set members running with the WiredTiger storage engine.
Perform these steps on each secondary replica set member first. Once you have changed the oplog size for all secondary members, perform these steps on the primary.
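To check which member is currently the primary before you begin, you can inspect rs.status() from a mongo shell connected to any member; a minimal sketch:
// Print each member's name and replication state; the member whose
// stateStr is "PRIMARY" should be resized last.
rs.status().members.forEach(function (m) { print(m.name + " : " + m.stateStr) })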
A. Connect to the replica set member
Connect to the replica set member using the mongo shell:
mongo --host <hostname>:<port>
Note
If the replica set enforces authentication, you must authenticate as a user with privileges to modify the local database, such as the clusterManager or clusterAdmin role.
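For example, assuming a user named clusterManagerUser defined in the admin database (a hypothetical name), you could connect and authenticate in one step:
mongo --host <hostname>:<port> -u "clusterManagerUser" -p --authenticationDatabase "admin"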
B. (Optional) Verify the current size of the oplog
To view the current size of the oplog, switch to the local database and run db.collection.stats() against the oplog.rs collection. stats() displays the oplog size as maxSize.
use local
db.oplog.rs.stats().maxSize
The maxSize field displays the collection size in bytes.
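Since maxSize is reported in bytes, it can be convenient to convert it to megabytes when comparing against the value you plan to pass to replSetResizeOplog; a quick sketch:
// Convert the oplog's maximum size from bytes to megabytes.
db.oplog.rs.stats().maxSize / (1024 * 1024)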
C. Change the oplog size of the replica set member
To change the size, run the replSetResizeOplog command, passing the desired size in megabytes as the size parameter. The specified size must be greater than 990 megabytes.
The following operation changes the oplog size of the replica set member to 16 gigabytes, or 16000 megabytes.
db.adminCommand({replSetResizeOplog: 1, size: 16000})
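To confirm the change took effect, you can repeat the check from step B; a sketch:
use local
db.oplog.rs.stats().maxSize / (1024 * 1024)  // expect roughly 16000 after the resize above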
D. (Optional) Compact oplog.rs to reclaim disk space
Reducing the size of the oplog does not automatically reclaim the disk space allocated to the original oplog size. You must run compact against the oplog.rs collection in the local database to reclaim disk space. There are no benefits to running compact on the oplog.rs collection after increasing the oplog size.
Important
The replica set member cannot replicate oplog entries while the compact operation is ongoing. While compact runs, the member may fall so far behind the primary that it cannot resume replication. The likelihood of a member becoming “stale” during the compact procedure increases with cluster write throughput, and may be further exacerbated by the reduced oplog size.
Consider scheduling a maintenance window during which writes are throttled or stopped to mitigate the risk of the member becoming “stale” and requiring a full resync.
Do not run compact against the primary replica set member. Connect a mongo shell to the primary and run rs.stepDown(). If successful, the primary steps down and closes all open connections. Reconnect the mongo shell to the member and run the compact command on the member.
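A sketch of that step-down, run from a mongo shell connected to the primary (the 60-second value is an assumption; choose a duration that fits your maintenance window):
// Ask the primary to step down and avoid seeking re-election for 60 seconds.
rs.stepDown(60)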
The following operation runs the compact command against the oplog.rs collection:
use local
db.runCommand({ "compact": "oplog.rs" })
For clusters enforcing authentication, authenticate as a user with the compact privilege action on the local database and the oplog.rs collection. For complete documentation on compact authentication requirements, see compact Required Privileges.
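As an illustration, a custom role granting just that privilege might look like the following (the role name oplogCompactor is hypothetical):
use admin
db.createRole({
  role: "oplogCompactor",  // hypothetical role name
  privileges: [
    // Grant the compact action on the oplog.rs collection in local only.
    { resource: { db: "local", collection: "oplog.rs" }, actions: [ "compact" ] }
  ],
  roles: []
})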