21.3.4 Using High-Speed Interconnects with NDB Cluster
Even before design of NDBCLUSTER began in 1996, it was evident that one of the major problems to be encountered in building parallel databases would be communication between the nodes in the network. For this reason, NDBCLUSTER was designed from the very beginning to permit the use of a number of different data transport mechanisms. In this Manual, we use the term transporter for these.
The NDB Cluster codebase provides for four different transporters:
TCP/IP using 100 Mbps or gigabit Ethernet, as discussed in Section 21.3.3.10, “NDB Cluster TCP/IP Connections”.
Direct (machine-to-machine) TCP/IP; although this transporter uses the same TCP/IP protocol as mentioned in the previous item, it requires setting up the hardware differently and is configured differently as well. For this reason, it is considered a separate transport mechanism for NDB Cluster. See Section 21.3.3.11, “NDB Cluster TCP/IP Connections Using Direct Connections”, for details.
Shared memory (SHM). Supported in production beginning with NDB 7.6.6. For more information about SHM, see Section 21.3.3.12, “NDB Cluster Shared Memory Connections”.
Scalable Coherent Interface (SCI). Using SCI transporters in NDB Cluster requires specialized hardware, software, and MySQL binaries that are not available in an NDB 7.5 or 7.6 distribution.
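The transporter used between a given pair of nodes is ultimately declared in the cluster's global configuration file (config.ini). The fragment below is an illustrative sketch only; the node IDs, host addresses, and sizes shown are hypothetical examples, and the full parameter lists and defaults should be checked against the configuration sections referenced in the list above.

```ini
# Illustrative config.ini fragment (node IDs, addresses, and sizes
# are hypothetical, not recommendations).

# Direct machine-to-machine TCP/IP link between data nodes 2 and 3,
# over a dedicated connection separate from the ordinary LAN.
[tcp]
NodeId1=2
NodeId2=3
HostName1=198.51.100.10
HostName2=198.51.100.20

# Shared memory transporter between data node 2 and an SQL node
# (node 4) running on the same host (NDB 7.6.6 or later).
[shm]
NodeId1=2
NodeId2=4
ShmSize=4M
```

Any node pair without an explicit transporter section of this kind simply communicates using ordinary TCP/IP with the default settings.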
Most users today employ TCP/IP over Ethernet because it is ubiquitous. TCP/IP is also by far the best-tested transporter for use with NDB Cluster.
Regardless of the transporter used, NDB attempts to ensure that communication with data node processes is performed in chunks that are as large as possible, since this benefits all types of data transmission.
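The benefit of large chunks comes from amortizing per-send overhead (system calls, packet headers, interrupts) over many small signals. The following minimal sketch, which is an illustration of the general batching principle and not NDB source code, shows how accumulating small messages in a buffer and flushing them as one large write reduces the number of wire-level sends (the class name and threshold are hypothetical):

```python
# Illustrative sketch of send batching (not NDB source code): many
# small signals are coalesced into a few large chunks before being
# handed to the transport layer.

class BatchingSender:
    """Accumulates small messages and flushes them as one large chunk."""

    def __init__(self, flush_threshold=32768):
        self.flush_threshold = flush_threshold  # bytes; hypothetical value
        self.buffer = bytearray()
        self.sends = []  # records each simulated wire-level send

    def send(self, payload: bytes):
        # Buffer the message; only flush once enough data has accumulated.
        self.buffer.extend(payload)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One large write in place of many small ones.
        if self.buffer:
            self.sends.append(bytes(self.buffer))
            self.buffer.clear()

sender = BatchingSender(flush_threshold=1024)
for _ in range(100):
    sender.send(b"x" * 100)   # 100 small 100-byte signals
sender.flush()                # flush whatever remains

print(len(sender.sends))      # → 10 large chunks, not 100 small sends
```

Here 100 signals of 100 bytes each result in only 10 wire-level sends: nine full chunks of 1100 bytes (the buffer is flushed once it reaches the 1024-byte threshold) plus one final 100-byte remainder.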