20.2.4 Sandbox Deployment of InnoDB Cluster
This section explains how to set up a sandbox InnoDB cluster deployment. You create and administer your InnoDB clusters using MySQL Shell with the included AdminAPI. This section assumes familiarity with MySQL Shell; see MySQL Shell 8.0 (part of MySQL 8.0) for further information.
Initially deploying and using local sandbox instances of MySQL is a good way to start your exploration of InnoDB cluster. You can fully test out InnoDB cluster locally, prior to deployment on your production servers. MySQL Shell has built-in functionality for creating sandbox instances that are correctly configured to work with Group Replication in a locally deployed scenario.
Sandbox instances are only suitable for deploying and running on your local machine for testing purposes. In a production environment the MySQL Server instances are deployed to various host machines on the network. See Section 20.2.5, “Production Deployment of InnoDB Cluster” for more information.
This tutorial shows how to use MySQL Shell to create an InnoDB cluster consisting of three MySQL server instances.
MySQL Shell includes the AdminAPI, which adds the dba global variable. This variable provides functions for the administration of sandbox instances. In this example setup, you create three sandbox instances using dba.deploySandboxInstance().
Start MySQL Shell from a command prompt by issuing the command:
shell> mysqlsh
MySQL Shell provides two scripting language modes in addition to SQL: issue \py for Python mode, and \js for JavaScript mode, which is the default. This tutorial uses JavaScript mode. Ensure you are in JavaScript mode by issuing the \js command, then execute:
mysql-js> dba.deploySandboxInstance(3310)
The argument passed to deploySandboxInstance() is the TCP port number where the MySQL Server instance listens for connections. By default the sandbox is created in a directory named $HOME/mysql-sandboxes/ on Unix systems. For Microsoft Windows systems the directory is %userprofile%\MySQL\mysql-sandboxes\.
You are prompted for a root password for the instance. Each instance has its own password. Using the same password for all sandboxes in this tutorial makes it easier to follow, but remember to use a different password for each instance in production deployments.
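If you prefer not to be prompted, deploySandboxInstance() also accepts an options dictionary as a second argument; in this sketch the password value and the sandboxDir path are illustrative assumptions, not defaults:
mysql-js> dba.deploySandboxInstance(3310, {password: 'sandbox_pw', sandboxDir: '/home/user/my-sandboxes'})
The sandboxDir option overrides the default sandbox directory described above.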
To deploy further sandbox server instances, repeat the steps followed for the sandbox instance at port 3310, choosing a different port number each time. For each additional sandbox instance issue:
mysql-js> dba.deploySandboxInstance(port_number)
To follow this tutorial, use port numbers 3310, 3320 and 3330 for the three sandbox server instances. Issue:
mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)
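The dba global variable also provides functions for managing a sandbox once it is deployed; for example, assuming the sandbox at port 3310, the following stops it cleanly, starts it again, and finally deletes it from the file system:
mysql-js> dba.stopSandboxInstance(3310)
mysql-js> dba.startSandboxInstance(3310)
mysql-js> dba.deleteSandboxInstance(3310)
Only use deleteSandboxInstance() on a stopped instance, as it removes the sandbox directory and its data.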
The next step is to create the InnoDB cluster while connected to the seed MySQL Server instance. The seed instance contains the data that you want to replicate to the other instances. In this example the sandbox instances are blank, therefore we can choose any instance.
Connect MySQL Shell to the seed instance, in this case the one at port 3310:
mysql-js> \connect [email protected]:3310
The \connect MySQL Shell command is a shortcut for the shell.connect() method:
mysql-js> shell.connect('[email protected]:3310')
Once you have connected, AdminAPI can write to the local instance's option file. This is different to working with a production deployment, where you would need to connect to the remote instance and run the MySQL Shell application locally on the instance before AdminAPI can write to the instance's option file.
Use the dba.createCluster() method to create the InnoDB cluster with the currently connected instance as the seed:
mysql-js> var cluster = dba.createCluster('testCluster')
The createCluster() method deploys the InnoDB cluster metadata to the selected instance, and adds the instance you are currently connected to as the seed instance. The method returns the created cluster; in the example above this is assigned to the cluster variable. The parameter passed to createCluster() is a symbolic name given to this InnoDB cluster, in this case testCluster.
The next step is to add more instances to the InnoDB cluster. Any transactions that were executed by the seed instance are re-executed by each secondary instance as it is added. This tutorial uses the sandbox instances that were created earlier at ports 3320 and 3330.
The seed instance in this example was recently created, so it is nearly empty. Therefore, there is little data that needs to be replicated from the seed instance to the secondary instances. In a production environment, where you have an existing database on the seed instance, you could use a tool such as MySQL Enterprise Backup to ensure that the secondaries have matching data before replication starts. This avoids the possibility of lengthy delays while data replicates from the primary to the secondaries. See Section 17.4.4, “Using MySQL Enterprise Backup with Group Replication”.
Add the second instance to the InnoDB cluster:
mysql-js> cluster.addInstance('[email protected]:3320')
The root user's password is prompted for.
Add the third instance:
mysql-js> cluster.addInstance('[email protected]:3330')
The root user's password is prompted for.
At this point you have created a cluster with three instances: a primary, and two secondaries.
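You can inspect the resulting topology from the cluster variable returned by createCluster(); for example, the describe() function reports the instances that belong to the cluster and their addresses (output not shown here, as it depends on your setup):
mysql-js> cluster.describe()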
You can only specify localhost in addInstance() if the instance is a sandbox instance. This also applies to the implicit addInstance() after issuing createCluster().
Once the sandbox instances have been added to the cluster, the configuration required for InnoDB cluster must be persisted to each instance's option file. Connect to each instance in turn:
mysql-js> \connect instance
then issue:
mysql-js> dba.configureLocalInstance('instance')
You are prompted for the instance's password, and the configuration changes are persisted to the instance's option file. If dba.configureLocalInstance() is not issued while connected to the instance, the configuration is not persisted to the option file. This does not stop the instance from initially joining a cluster, but it does mean that the instance cannot rejoin the cluster automatically, for example after being stopped.
Repeat the process of connecting to each sandbox instance you added to the cluster and persisting the configuration. For this example we added sandbox instances at ports 3310, 3320 and 3330. Therefore issue this for ports 3320 and 3330:
mysql-js> \connect [email protected]:port_number
mysql-js> dba.configureLocalInstance('[email protected]:port_number')
To check that the cluster has been created, use the cluster instance's status() function. See Checking the InnoDB Cluster Status.
Once you have your cluster deployed you can configure MySQL Router to provide high availability, see Section 20.3, “Using MySQL Router with InnoDB Cluster”.
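As a sketch of that step, MySQL Router can be bootstrapped against the seed instance so that it reads the cluster metadata and generates its own configuration; the --directory path below is an illustrative assumption:
shell> mysqlrouter --bootstrap [email protected]:3310 --directory /tmp/myrouter
You are prompted for the root password, and the generated configuration in the given directory contains the ports Router uses for read-write and read-only client connections.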