Deploy Sharded Cluster with Keyfile Access Control
Overview
Enforcing access control on a sharded cluster requires configuring:
- Security between components of the cluster using Internal Authentication.
- Security between connecting clients and the cluster using User Access Controls.
For this tutorial, each member of the sharded cluster must use the same internal authentication mechanism and settings. This means enforcing internal authentication on each mongos and mongod in the cluster.
The following tutorial uses a keyfile to enable internal authentication.
Enforcing internal authentication also enforces user access control. To connect to the replica set, clients like the mongo shell need to use a user account. See Access Control.
Cloud Manager and Ops Manager
If you are using Cloud Manager or Ops Manager to manage your deployment, see the respective Cloud Manager manual or the Ops Manager manual to enforce authentication.
Considerations
IP Binding
Changed in version 3.6.
Starting with MongoDB 3.6, MongoDB binaries, mongod and mongos, bind to localhost by default. From MongoDB versions 2.6 to 3.4, only the binaries from the official MongoDB RPM (Red Hat, CentOS, Fedora Linux, and derivatives) and DEB (Debian, Ubuntu, and derivatives) packages would bind to localhost by default. To learn more about this change, see Localhost Binding Compatibility Changes.
Keyfile Security
Keyfiles are bare-minimum forms of security and are best suited for testing or development environments. For production environments we recommend using x.509 certificates.
Access Control
This tutorial covers creating the minimum number of administrative users on the admin database only. For user authentication, the tutorial uses the default SCRAM authentication mechanism. Challenge-response security mechanisms are best suited for testing or development environments. For production environments, we recommend using x.509 certificates, LDAP Proxy Authentication (available for MongoDB Enterprise only), or Kerberos Authentication (available for MongoDB Enterprise only).
For details on creating users for a specific authentication mechanism, refer to that authentication mechanism's page.
See ➤ Configure Role-Based Access Control for best practices for user creation and management.
Users
In general, to create users for a sharded cluster, connect to the mongos and add the sharded cluster users.
However, some maintenance operations require direct connections to specific shards in a sharded cluster. To perform these operations, you must connect directly to the shard and authenticate as a shard-local administrative user.
Shard-local users exist only in the specific shard and should only be used for shard-specific maintenance and configuration. You cannot connect to the mongos with shard-local users.
This tutorial requires creating sharded cluster users, but includes optional steps for adding shard-local users.
See the Users security documentation for more information.
Operating System
This tutorial uses the mongod and mongos programs. Windows users should use the mongod.exe and mongos.exe programs instead.
Deploy Sharded Cluster with Keyfile Access Control
The following procedures involve creating a new sharded cluster that consists of a mongos, the config servers, and two shards.
Create the Keyfile
With keyfile authentication, each mongod or mongos instance in the sharded cluster uses the contents of the keyfile as the shared password for authenticating the other members of the deployment. Only mongod or mongos instances with the correct keyfile can join the sharded cluster.
The content of the keyfile must be between 6 and 1024 characters long and must be the same for all members of the sharded cluster.
Note
On UNIX systems, the keyfile must not have group or world permissions. On Windows systems, keyfile permissions are not checked.
You can generate a keyfile using any method you choose. For example, the following operation uses openssl to generate a complex pseudo-random 1024-character string to use for a keyfile. It then uses chmod to change file permissions to provide read permissions for the file owner only:
openssl rand -base64 756 > <path-to-keyfile>
chmod 400 <path-to-keyfile>
See Keyfiles for additional details and requirements for using keyfiles.
Distribute the Keyfile
Copy the keyfile to each server hosting the sharded cluster members. Ensure that the user running the mongod or mongos instances is the owner of the file and can access the keyfile.
Avoid storing the keyfile on storage media that can be easily disconnected from the hardware hosting the mongod or mongos instances, such as a USB drive or a network-attached storage device.
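The generate-and-distribute step can be sketched as follows. The keyfile path is an example, and the scp destination is a hypothetical host shown only as a comment; substitute your own paths and hosts:

```shell
# Generate the keyfile once on one host (example path).
openssl rand -base64 756 > /tmp/mongodb-keyfile
chmod 400 /tmp/mongodb-keyfile

# Copy it to every server hosting a mongod or mongos member, e.g.:
#   scp /tmp/mongodb-keyfile admin@shard1.example.net:/srv/mongodb/keyfile
# On each host, make the user running mongod/mongos the file's owner
# and keep mode 400.

# Verify owner-only read permissions (no group or world access):
stat -c '%a' /tmp/mongodb-keyfile
```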
Create the Config Server Replica Set
The following steps deploy a config server replica set.
For a production deployment, deploy a config server replica set with at least three members. For testing purposes, you can create a single-member replica set.
Start each mongod in the config server replica set. Include the keyFile setting. The keyFile setting enforces both Internal Authentication and Role-Based Access Control.
You can specify the mongod settings either via a configuration file or the command line.
Configuration File
If using a configuration file, set security.keyFile to the keyfile's path, sharding.clusterRole to configsvr, and replication.replSetName to the desired name of the config server replica set.
security:
keyFile: <path-to-keyfile>
sharding:
clusterRole: configsvr
replication:
replSetName: <setname>
Include additional options as required for your configuration. For instance, if you wish remote clients to connect to your deployment or your deployment members are run on different hosts, specify the net.bindIp setting. For more information, see Localhost Binding Compatibility Changes.
Start the mongod, specifying the --config option and the path to the configuration file.
mongod --config <path-to-config-file>
Command Line
If using command line parameters, start the mongod with the --keyFile, --configsvr, and --replSet parameters.
mongod --keyFile <path-to-keyfile> --configsvr --replSet <setname> --dbpath <path>
Include additional options as required for your configuration. For instance, if you wish remote clients to connect to your deployment or your deployment members are run on different hosts, specify the --bind_ip option. For more information, see Localhost Binding Compatibility Changes.
Connect to a member of the replica set over the localhost interface.
Connect a mongo shell to one of the mongod instances over the localhost interface. You must run the mongo shell on the same physical machine as the mongod instance.
The localhost interface is only available since no users have been created for the deployment. The localhost interface closes after the creation of the first user.
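As a sketch, assuming a config server member listening on 27019 (a conventional config server port, not required by this tutorial), the connection from the same machine might look like:

```shell
# Open a mongo shell against the local mongod over the localhost interface.
# Adjust --port to the port your config server mongod actually uses.
mongo --port 27019
```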
The rs.initiate()
method initiates the replica set and can take an optional replica set configuration document. In the replica set configuration document, include:
- The _id field. The _id must match the --replSet parameter passed to the mongod.
- The members field. The members field is an array and requires a document for each member of the replica set.
- The configsvr field. The configsvr field must be set to true for the config server replica set.
See Replica Set Configuration for more information on replica set configuration documents.
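Putting the fields above together, an initiation call might look like the following sketch; the set name, port, and host names are placeholders for your own values:

```shell
# Sketch: initiate the config server replica set from a mongo shell
# connected over localhost (set name and hosts are placeholders).
mongo --port 27019 --eval '
  rs.initiate({
    _id: "configReplSet",
    configsvr: true,
    members: [
      { _id: 0, host: "cfg1.example.net:27019" },
      { _id: 1, host: "cfg2.example.net:27019" },
      { _id: 2, host: "cfg3.example.net:27019" }
    ]
  })
'
```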
Initiate the replica set using the