Capped collections are fixed-size collections that support high-throughput operations that insert and retrieve documents based on insertion order. Capped collections work in a way similar to circular buffers: once a collection fills its allocated space, it makes room for new documents by overwriting the oldest documents in the collection.
As an alternative to capped collections, consider using MongoDB’s TTL (Time To Live) indexes. As described in Expire Data from Collections by Setting TTL, these indexes allow you to expire and remove data from normal collections based on the value of a date-typed field and a TTL value for the index.
TTL indexes are not compatible with capped collections.
Capped collections guarantee preservation of the insertion order. As a result, queries do not need an index to return documents in insertion order. Without this indexing overhead, capped collections can support higher insertion throughput.
To make room for new documents, capped collections automatically remove the oldest documents in the collection without requiring scripts or explicit remove operations. Capped collections are well suited to the following use cases:
- Store log information generated by high-volume systems. Inserting documents in a capped collection without an index is close to the speed of writing log information directly to a file system. Furthermore, the built-in first-in-first-out property maintains the order of events, while managing storage use.
- Cache small amounts of data in a capped collection. Since caches are read- rather than write-heavy, you would either need to ensure that this collection always remains in the working set (i.e. in RAM) or accept some write penalty for the required index or indexes.
If you plan to update documents in a capped collection, create an index so that these update operations do not require a collection scan.
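For instance, if updates locate documents by a field such as `userid` (a hypothetical field name used here for illustration), an index on that field lets the update avoid a collection scan — a sketch in the mongo shell:

```javascript
// Hypothetical example: "log" is a capped collection whose documents
// contain a "userid" field. The index lets update operations locate
// documents without scanning the whole collection.
db.log.createIndex( { userid: 1 } )

// The update can now use the index to find the target document.
// Note: the new value must not change the document's size, so a
// numeric field is replaced with another number of the same width.
db.log.updateOne( { userid: 123 }, { $set: { level: 2 } } )
```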
Changed in version 3.2: If an update or a replacement operation changes the document size, the operation will fail.
You cannot delete documents from a capped collection. To remove all documents from a collection, use the drop() method to drop the collection, then recreate the capped collection.
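Since documents cannot be removed individually, clearing a capped collection means dropping and recreating it — a sketch, assuming a capped collection named `log` and an illustrative size:

```javascript
// Drop the existing capped collection; this removes all documents
// and the collection itself.
db.log.drop()

// Recreate it with the same capped options as before.
db.createCollection( "log", { capped: true, size: 100000 } )
```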
Use natural ordering to retrieve the most recently inserted elements from the collection efficiently. This is (somewhat) analogous to tail on a log file.
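For example, sorting on $natural in reverse walks the collection from the newest document backwards — a sketch against a hypothetical capped collection named `log`:

```javascript
// $natural: -1 traverses the collection in reverse insertion order,
// so the most recently inserted documents come back first --
// roughly like tail on a log file.
db.log.find().sort( { $natural: -1 } ).limit( 10 )
```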
The aggregation pipeline operator $out cannot write results to a capped collection.
You must create capped collections explicitly using the db.createCollection() method, which is a helper in the mongo shell for the create command. When creating a capped collection you must specify the maximum size of the collection in bytes, which MongoDB will pre-allocate for the collection. The size of the capped collection includes a small amount of space for internal overhead.
If the size field is less than or equal to 4096, then the collection will have a cap of 4096 bytes. Otherwise, MongoDB will raise the provided size to make it an integer multiple of 256.
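A minimal creation sketch; the collection name and size are illustrative:

```javascript
// Create a capped collection of roughly 100 KB. MongoDB rounds the
// requested size up to an integer multiple of 256 bytes
// (with a minimum cap of 4096 bytes).
db.createCollection( "log", { capped: true, size: 100000 } )
```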
Additionally, you can specify a maximum number of documents for the collection using the max field as in the following document:
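A sketch of such a document, with illustrative name and values:

```javascript
// Cap the collection at 5 MB or 5000 documents, whichever limit is
// reached first.
db.createCollection( "log", { capped: true, size: 5242880, max: 5000 } )
```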
The size argument is always required, even when you specify the max number of documents. MongoDB will remove older documents if a collection reaches the maximum size limit before it reaches the maximum document count.
If you perform a find() on a capped collection with no ordering specified, MongoDB guarantees that the ordering of results is the same as the insertion order.
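A sketch, assuming a capped collection named `log`:

```javascript
// With no sort specified, documents come back in insertion order.
db.log.find()

// Equivalent explicit form, sorting by the natural (insertion) order.
db.log.find().sort( { $natural: 1 } )
```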
Use the isCapped() method to determine whether a collection is capped, as follows:
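A sketch, assuming a collection named `log`:

```javascript
// Returns true if "log" is a capped collection, false otherwise.
db.log.isCapped()
```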
You can convert a non-capped collection to a capped collection with the convertToCapped command. The size parameter specifies the size of the capped collection in bytes.
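The conversion is performed with the convertToCapped command — a sketch, with an illustrative collection name and size:

```javascript
// Convert the existing collection "events" into a capped collection
// with a maximum size of about 100 KB. Documents that do not fit
// within the new cap may be removed during the conversion.
db.runCommand( { convertToCapped: "events", size: 100000 } )
```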
This holds a database exclusive lock for the duration of the operation. Other operations which lock the same database will be blocked until the operation completes. See What locks are taken by some common client operations? for operations that lock the database.
You can use a tailable cursor with capped collections. Similar to the Unix tail -f command, the tailable cursor "tails" the end of a capped collection. As new documents are inserted into the capped collection, you can use the tailable cursor to continue retrieving documents.
See Tailable Cursors for information on creating a tailable cursor.
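A minimal sketch in the mongo shell, assuming a capped collection named `log` (the cursor tailable() method is a shell helper; driver APIs differ):

```javascript
// Open a tailable cursor on the capped collection. The cursor remains
// open after returning the last document, so later iteration can pick
// up newly inserted documents. awaitData makes the server wait briefly
// for new data instead of returning immediately.
var cursor = db.log.find().tailable( { awaitData: true } )

while (cursor.hasNext()) {
   printjson(cursor.next())
}
```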