20.2.5 Production Deployment of InnoDB Cluster

When working in a production environment, the MySQL server instances which make up an InnoDB cluster run on multiple host machines as part of a network, rather than on a single machine as described in Section 20.2.4, "Sandbox Deployment of InnoDB Cluster". Before proceeding with these instructions, you must install the required software on each machine that you intend to add to your cluster as a server instance; see Section 20.2.3, "Methods of Installing".

The following diagram illustrates the scenario you work with in this section:

Figure 20.2 Production Deployment

Three MySQL servers are grouped together as a production InnoDB cluster. One of the servers is the primary instance, and the other two are secondary instances. The IP address for the primary server is 139.59.177.10, and the IP addresses for the two secondary instances are 139.59.177.11 and 139.59.177.12. MySQL Router connects a client application to the primary instance. The admin capability in MySQL Shell interacts directly with the production InnoDB cluster.

Important

Unlike a sandbox deployment, where all instances are deployed locally to one machine, for a production deployment you must connect to each machine and run MySQL Shell locally before issuing dba.configureLocalInstance() on the instance. This ensures that any configuration changes are persisted into the option file on the instance. This also requires that you have access to the server and the required permissions to execute MySQL Shell.

To pass a server's connection information to AdminAPI, use URI-like connection strings. See Connecting Using URI-Like Connection Strings for more information.
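
For example, the string used throughout this section to identify the administrative user ic connecting to port 3306 on the host ic-1 is:

ic@ic-1:3306

The general form is user@host:port; when no password is embedded in the string, MySQL Shell prompts for it interactively.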

User Privileges

The user account used to administer an instance does not have to be the root account; however, the user needs to be assigned full read and write privileges on the InnoDB cluster metadata tables in addition to full MySQL administrator privileges (SUPER, GRANT OPTION, CREATE, DROP, and so on). The preferred method to create users to administer the cluster is using the clusterAdmin option with the dba.configureLocalInstance() and Cluster.addInstance() operations. In this procedure the user ic is used in examples.

If only read operations are needed (such as for monitoring purposes), an account with more restricted privileges can be used. To give the user your_user the privileges needed to monitor InnoDB cluster, issue:

GRANT SELECT ON mysql_innodb_cluster_metadata.* TO your_user@'%';
GRANT SELECT ON performance_schema.global_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_configuration TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status_by_coordinator TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status_by_worker TO your_user@'%';
GRANT SELECT ON performance_schema.replication_connection_configuration TO your_user@'%';
GRANT SELECT ON performance_schema.replication_connection_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_group_member_stats TO your_user@'%';
GRANT SELECT ON performance_schema.replication_group_members TO your_user@'%';
GRANT SELECT ON performance_schema.threads TO your_user@'%';
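
The account must exist before these privileges can be granted to it. A minimal sketch, assuming the placeholder name your_user and a password of your choosing:

CREATE USER your_user@'%' IDENTIFIED BY 'password';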


Configuring Hostname

The production instances which make up a cluster run on separate machines; therefore, each machine must have a unique host name and must be able to resolve the host names of the other machines which run server instances in the cluster. If this is not the case, you can:

  • configure each machine to map the IP of each other machine to a hostname. See your operating system documentation for details. This is the recommended solution.

  • set up a DNS service

  • configure the report_host variable in the MySQL configuration of each instance to a suitable externally reachable address (a minimal sketch follows this list)
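
For the report_host approach, a minimal option-file sketch, using the ic-number host naming from this section's examples:

[mysqld]
report_host = ic-1

Because report_host is not a dynamic variable, the server must be restarted for the change to take effect.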

In this procedure the host name ic-number is used in examples.

To verify whether the hostname of a MySQL server is correctly configured, execute the following query to see how the instance reports its own address to other servers and try to connect to that MySQL server from other hosts using the returned address:

SELECT coalesce(@@report_host, @@hostname);

Verbose Logging

When working with a production deployment, it can be useful to configure verbose logging for MySQL Shell; the information in the log can help you to find and resolve any issues that might occur when you are preparing server instances to work as part of InnoDB cluster. To start MySQL Shell with a verbose logging level, use the --log-level option:

shell> mysqlsh --log-level=DEBUG3

The DEBUG3 level is recommended; see --log-level for more information. When DEBUG3 is set, the MySQL Shell log file contains lines such as Debug: execute_sql( ... ), which contain the SQL queries that are executed as part of each AdminAPI call. On Unix-based systems, the log file generated by MySQL Shell is located at ~/.mysqlsh/mysqlsh.log; on Microsoft Windows systems it is located at %APPDATA%\MySQL\mysqlsh\mysqlsh.log. See MySQL Shell Logging and Debug for more information.
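
To watch the log while AdminAPI operations run, you can, for example, tail the file on a Unix-based system:

shell> tail -f ~/.mysqlsh/mysqlsh.log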

In addition to setting the MySQL Shell log level, you can configure the amount of output AdminAPI provides in MySQL Shell after each call to the API. To increase the amount of AdminAPI output, in MySQL Shell issue:

mysql-js> dba.verbose=2

This enables the maximum output from AdminAPI calls. The available levels of output are:

  • 0 or OFF is the default. This provides minimal output and is the recommended level when not troubleshooting.

  • 1 or ON adds verbose output from each call to the AdminAPI.

  • 2 adds debug output to the verbose output providing full information about what each call to AdminAPI executes.
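
When you have finished troubleshooting, you can return MySQL Shell to the default level:

mysql-js> dba.verbose=0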

Checking Instance Configuration

Before creating a production deployment from server instances, you need to check that MySQL on each instance is correctly configured by using the dba.checkInstanceConfiguration() function. This ensures that the instance satisfies the requirements listed in Section 20.2.2, "InnoDB Cluster Requirements". It does not check any data that is on the instance; see Checking Instance State for more information. The following demonstrates issuing this in a running MySQL Shell:

mysql-js> dba.checkInstanceConfiguration('ic@ic-1:3306')

Please provide the password for 'ic@ic-1:3306':
Validating instance...

The instance 'ic-1:3306' is not valid for Cluster usage.

The following issues were encountered:

- Some configuration options need to be fixed.

+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                         | Current Value | Required Value | Note                                             |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum                  | CRC32         | NONE           | Update the server variable or restart the server |
| enforce_gtid_consistency         | OFF           | ON             | Restart the server                               |
| gtid_mode                        | OFF           | ON             | Restart the server                               |
| log_bin                          | 0             | 1              | Restart the server                               |
| log_slave_updates                | 0             | ON             | Restart the server                               |
| master_info_repository           | FILE          | TABLE          | Restart the server                               |
| relay_log_info_repository        | FILE          | TABLE          | Restart the server                               |
| transaction_write_set_extraction | OFF           | XXHASH64       | Restart the server                               |
+----------------------------------+---------------+----------------+--------------------------------------------------+

Please fix these issues, restart the server and try again.

{
  "config_errors": [
    {
      "action": "server_update",
      "current": "CRC32",
      "option": "binlog_checksum",
      "required": "NONE"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "enforce_gtid_consistency",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "gtid_mode",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_bin",
      "required": "1"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_slave_updates",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "master_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "relay_log_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "transaction_write_set_extraction",
      "required": "XXHASH64"
    }
  ],
  "errors": [],
  "restart_required": true,
  "status": "error"
}
mysql-js>

Repeat this process for each server instance that you plan to use as part of your cluster. The reports generated after running dba.checkInstanceConfiguration() provide information about any configuration changes required before you can proceed. The restart_required field in the final part of the report tells you whether MySQL on the instance requires a restart to detect any change made to the configuration file.
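
Once the reported issues are fixed and the server has been restarted where required, running the check again should produce a much shorter report, roughly along these lines (abbreviated):

mysql-js> dba.checkInstanceConfiguration('ic@ic-1:3306')

Please provide the password for 'ic@ic-1:3306':
Validating instance...

The instance 'ic-1:3306' is valid for Cluster usage.
{
  "status": "ok"
}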

Configuring the Instance

If configuration issues have been identified in the report generated by running dba.checkInstanceConfiguration() against the instance, the instance does not satisfy the requirements in Section 20.2.2, "InnoDB Cluster Requirements", and you need to connect to the machine and reconfigure the server instance. AdminAPI provides the dba.configureLocalInstance() function, which finds the MySQL server's option file and modifies it to ensure that the instance is correctly configured for InnoDB cluster. Alternatively, make the changes to the instance's option file manually based on the information in the report; see Section 4.2.2.2, "Using Option Files" for more information. Regardless of the way you make the configuration changes, you might have to restart MySQL to ensure the configuration changes are detected.
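
If you choose to edit the option file manually, the settings that correspond to the issues reported above look roughly like the following sketch; the exact file location and section layout vary by platform and installation:

[mysqld]
binlog_checksum = NONE
enforce_gtid_consistency = ON
gtid_mode = ON
log_bin
log_slave_updates = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
transaction_write_set_extraction = XXHASH64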

The recommended method is to log in to the remote machine, run MySQL Shell as the root user and then connect to the local MySQL server:

shell> sudo -i mysqlsh --log-level=DEBUG3

The dba.configureLocalInstance() method verifies that a suitable user is available for cluster usage, which is used for connections between members of the cluster. The root user cannot log in remotely by default, so you have three options to continue with the configuration: enable remote connections for the root user, create a new user, or proceed with neither of these options. The following example demonstrates the second option, creating a new user for cluster usage. The format of the user names accepted follows the standard MySQL account name format; see Section 6.2.4, "Specifying Account Names".

mysql-js> dba.configureLocalInstance('root@localhost:3306')

Please provide the password for 'root@localhost:3306':

Please specify the path to the MySQL configuration file: /etc/mysql/mysql.conf.d/mysqld.cnf
Validating instance...

The configuration has been updated but it is required to restart the server.
{
  "config_errors": [
    {
      "action": "restart",
      "current": "OFF",
      "option": "enforce_gtid_consistency",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "gtid_mode",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_bin",
      "required": "1"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_slave_updates",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "master_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "relay_log_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "transaction_write_set_extraction",
      "required": "XXHASH64"
    }
  ],
  "errors": [],
  "restart_required": true,
  "status": "error"
}
mysql-js>
Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

As with dba.checkInstanceConfiguration(), the configuration requirements are identified, but this time the chosen configuration file is modified. For the configuration changes to take effect you might need to restart the MySQL Server.

The dba.configureLocalInstance() function also accepts the clusterAdmin and clusterAdminPassword options, which enable you to configure the cluster user and password when calling the function. clusterAdmin supports identifiers or strings for the user name and host name; by default, unquoted input is treated as a string. For example:

mysql-js> dba.configureLocalInstance('ic@ic-1:3306', \
	      {clusterAdmin: 'icadmin@ic-1%',clusterAdminPassword: 'password'});

This user is granted the privileges for an administrative user described at User Privileges.

Creating the Cluster

Once you have prepared your instances, use the dba.createCluster() function to create the cluster. The machine which you are running MySQL Shell on is used as the seed instance for the cluster. The seed instance is replicated to the other instances which you add to the cluster, making them replicas of the seed instance. Log in to the instance and run MySQL Shell locally.

shell> mysqlsh --uri ic@ic-1:3306

Creating a Session to 'ic@ic-1:3306'
Enter password: *********
Classic Session successfully established. No default schema selected.

MySQL Shell must be connected to an instance before you can create a cluster, because when you issue dba.createCluster(name), MySQL Shell creates a classic MySQL protocol session to the server instance that its current global session is connected to. Use the dba.createCluster(name) function to create the cluster and assign the returned cluster object to a variable called cluster:

mysql-js> var cluster = dba.createCluster('prodCluster')

      A new InnoDB cluster will be created on instance 'ic@ic-1:3306'.

      Creating InnoDB cluster 'prodCluster' on 'ic@ic-1:3306'...
      Adding Seed Instance...

      Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
      At least 3 instances are needed for the cluster to be able to withstand up to
      one server failure.
Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

Note

If you encounter an error related to metadata being inaccessible you might have the loopback network interface configured. For correct InnoDB cluster usage disable the loopback interface.

To check the cluster has been created, use the cluster instance's status() function. See Checking the InnoDB Cluster Status.
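
For example, using the cluster variable assigned when the cluster was created:

mysql-js> cluster.status()

If you start a new MySQL Shell session later, you can retrieve the cluster object again with dba.getCluster('prodCluster') before calling status().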

Tip

Once server instances belong to a cluster, it is important to administer them only using MySQL Shell and AdminAPI. Attempting to manually change the configuration of Group Replication on an instance once it has been added to a cluster is not supported. Similarly, modifying server variables critical to InnoDB cluster, such as server_uuid, after an instance is configured using AdminAPI is not supported.

Use the cluster.addInstance(instance) function to add more instances to the cluster, where instance is a URI-like string used to connect to the instance. The instances must have been configured for cluster usage. You need a minimum of three instances in the cluster to make it tolerant to the failure of one instance. Adding further instances increases the tolerance to failure of an instance. To add an instance to the cluster, issue:

mysql-js> cluster.addInstance('ic@ic-2:3306');
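
A third instance, following the ic-number host naming used throughout this section, is added in the same way:

mysql-js> cluster.addInstance('ic@ic-3:3306');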

To verify the instance has been added, use the cluster instance's status() function.

Important

At this stage, the server instances have been added to the cluster, but the changes to the InnoDB cluster metadata have only been made on the instance which you are currently connected to. To make the configuration changes persistent for all instances in the cluster, you must connect to each instance and issue dba.configureLocalInstance() locally on each instance you have added. This is essential to ensure that instances can rejoin the cluster if they leave it, for example after a restart.

To persist the InnoDB cluster metadata for all instances, log in to each instance that you added to the cluster and run MySQL Shell locally.

shell> mysqlsh

Use the \connect command to log in to the MySQL server. Then execute the dba.configureLocalInstance('instance') function, where instance is a URI-like string used to connect to the local instance. For example:

mysql-js> dba.configureLocalInstance('ic@ic-2:3306')
Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

Repeat this process for each server instance that you added to the cluster. Similarly, if you modify the cluster structure, for example by changing the number of instances, you need to repeat this process for each server instance to update the InnoDB cluster metadata accordingly.

Once you have your cluster deployed, you can configure MySQL Router to provide high availability; see Section 20.3, "Using MySQL Router with InnoDB Cluster".
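
As a preview, bootstrapping MySQL Router against the cluster typically looks roughly like the following sketch, where the --user option names the system user that Router runs as:

shell> mysqlrouter --bootstrap ic@ic-1:3306 --user=mysqlrouter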