Enforce Authentication in an Existing Sharded Cluster Without Downtime

Overview

Important

The following procedure applies to sharded clusters using MongoDB 3.4 or later.

Earlier versions of MongoDB do not support no-downtime upgrade. For sharded clusters using earlier versions of MongoDB, see Enforce Keyfile Access Control in Sharded Cluster.

A MongoDB sharded cluster can enforce user authentication as well as internal authentication of its components to secure against unauthorized access.

The following tutorial describes a procedure using security.transitionToAuth to transition an existing sharded cluster to enforce authentication without incurring downtime.

Before you attempt this tutorial, please familiarize yourself with the contents of this document.

Considerations

Cloud Manager and Ops Manager

If you are using Cloud Manager or Ops Manager to manage your deployment, refer to Configure Access Control for MongoDB Deployments in the Cloud Manager manual or Ops Manager manual to enforce authentication.

IP Binding

Changed in version 3.6.

Starting with MongoDB 3.6, MongoDB binaries, mongod and mongos, bind to localhost by default. From MongoDB versions 2.6 to 3.4, only the binaries from the official MongoDB RPM (Red Hat, CentOS, Fedora Linux, and derivatives) and DEB (Debian, Ubuntu, and derivatives) packages would bind to localhost by default. To learn more about this change, see Localhost Binding Compatibility Changes.

Internal and Client Authentication Mechanisms

This tutorial configures authentication using SCRAM for client authentication and a keyfile for internal authentication.

Refer to the Authentication documentation for a complete list of available client and internal authentication mechanisms.

Architecture

This tutorial assumes that each shard replica set, as well as the config server replica set, can elect a new primary after stepping down its existing primary.

A replica set can elect a primary only if a majority of the replica set's voting members are available.
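Before proceeding, you may want to confirm that each replica set currently has a primary. The following mongo shell sketch, run while connected to a member of each shard replica set and of the config server replica set, inspects rs.status(); adapt it to your deployment:

```javascript
// Check the replica set for a PRIMARY member.
// Run against a member of each shard replica set
// and the config server replica set.
var status = rs.status();
var primaries = status.members.filter(function (m) {
  return m.stateStr === "PRIMARY";
});
if (primaries.length === 1) {
  print("Primary is " + primaries[0].name);
} else {
  print("No primary elected; resolve this before continuing.");
}
```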

Minimum number of mongos instances

Ensure your sharded cluster has at least two mongos instances available. This tutorial requires restarting each mongos in the cluster. If your sharded cluster has only one mongos instance, restarting that mongos results in downtime while it is offline.

Enforce Keyfile Access Control on an Existing Sharded Cluster

Create and Distribute the Keyfile

With keyfile authentication, each mongod or mongos instance in the sharded cluster uses the contents of the keyfile as the shared password for authenticating other members in the deployment. Only mongod or mongos instances with the correct keyfile can join the sharded cluster.

The content of the keyfile must be between 6 and 1024 characters long and must be the same for all members of the sharded cluster.

Note

On UNIX systems, the keyfile must not have group or world permissions. On Windows systems, keyfile permissions are not checked.

You can generate a keyfile using any method you choose. For example, the following operation uses openssl to generate a complex pseudo-random 1024 character string to use for a keyfile. It then uses chmod to change file permissions to provide read permissions for the file owner only:

openssl rand -base64 755 > <path-to-keyfile>
chmod 400 <path-to-keyfile>

Copy the keyfile to each server hosting the sharded cluster members. Ensure that the user running the mongod or mongos instances is the owner of the file and can access the keyfile.
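One way to distribute the keyfile is with scp and then adjust ownership on each host. A sketch, assuming the keyfile lives at /opt/mongodb/keyfile, the mongod or mongos runs as the mongodb user, and mongodb1.example.net is one of your hosts (all of these names and paths are placeholders):

```shell
# Copy the keyfile to a cluster member (hypothetical host and paths)
scp /opt/mongodb/keyfile admin@mongodb1.example.net:/opt/mongodb/keyfile

# On the target host: make the mongod/mongos user the owner,
# and restrict the file to owner read-only
ssh admin@mongodb1.example.net \
  'sudo chown mongodb:mongodb /opt/mongodb/keyfile && sudo chmod 400 /opt/mongodb/keyfile'
```

Repeat for every server hosting a member of the sharded cluster.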

Avoid storing the keyfile on storage media that can be easily disconnected from the hardware hosting the mongod or mongos instances, such as a USB drive or a network attached storage device.

For more information on using keyfiles for internal authentication, refer to Keyfiles.

Configure Sharded Cluster Admin User and Client Users

You must connect to a mongos to complete the steps in this section. The users created in these steps are cluster-level users and cannot be used for accessing individual shard replica sets.

1

Create the administrator user.

Use the db.createUser() method to create an administrator user and assign it the clusterAdmin and userAdmin roles on the admin database.

Clients performing maintenance operations or user administrative operations on the sharded cluster must authenticate as this user at the completion of this tutorial. Create this user now to ensure that you have access to the cluster after enforcing authentication.

admin = db.getSiblingDB("admin");
admin.createUser(
  {
    user: "admin",
    pwd: "<password>",
    roles: [
      { role: "clusterAdmin", db: "admin" },
      { role: "userAdmin", db: "admin" }
    ]
  }
);

Important

Passwords should be random, long, and complex to prevent or hinder malicious access.

2

Optional: Create additional users for client applications.

In addition to the administrator user, you can create users for client applications before enforcing authentication. This ensures access to the sharded cluster once you fully enforce authentication.

Example

The following operation creates the user joe on the marketing database, assigning to this user the readWrite role on the marketing database.

db.getSiblingDB("marketing").createUser(
  {
    "user": "joe",
    "pwd": "<password>",
    "roles": [ { "role" : "readWrite", "db" : "marketing" } ]
  }
)

Clients authenticating as "joe" can perform read and write operations on the marketing database.

See Database User Roles for roles provided by MongoDB.

See the Add Users tutorial for more information on adding users. Consider security best practices when adding new users.

3

Optional: Update client applications to specify authentication credentials.

While the sharded cluster does not currently enforce authentication, you can still update client applications to specify authentication credentials when connecting to the sharded cluster. This may prevent loss of connectivity at the completion of this tutorial.

Example

The following operation connects to the sharded cluster using the mongo shell, authenticating as the user joe on the marketing database.

mongo --username "joe" --password "<password>" \
  --authenticationDatabase "marketing" --host mongos1.example.net:27017

If your application uses a MongoDB driver, see the associated driver documentation for instructions on creating an authenticated connection.
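Most drivers also accept credentials in a connection string URI. A sketch of the equivalent connection, using the same hypothetical user, host, and authentication database as above:

```shell
# Connect using a connection string URI instead of separate flags;
# authSource names the database holding the user's credentials
mongo "mongodb://joe:<password>@mongos1.example.net:27017/?authSource=marketing"
```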

Transition Each mongos Instance to Enforce Authentication

1

Create a new mongos configuration file.

For each mongos:

  1. Copy the existing mongos configuration file, giving it a distinct name such as <filename>-secure.conf (or .cfg if using Windows). You will use this new configuration file to transition the mongos to enforce authentication in the sharded cluster. Retain the original configuration file for backup purposes.

  2. To the new configuration file, add the following settings:
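Based on the transition described in the Overview, the new configuration file needs the keyfile plus the transitional authentication setting. A sketch, assuming a YAML-format configuration file (the keyfile path is a placeholder):

```yaml
security:
  transitionToAuth: true
  keyFile: /opt/mongodb/keyfile
```

While security.transitionToAuth is set, the mongos accepts both authenticated and unauthenticated connections, which is what allows the cluster to keep serving clients during the transition; a later restart without this setting fully enforces authentication.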