MySQL Glossary
These terms are commonly used in information about the MySQL database server. This glossary originated as a reference for terminology about the InnoDB storage engine, and the majority of definitions are InnoDB-related.
A
- .ARM file
-
Metadata for ARCHIVE tables. Contrast with .ARZ file. Files with this extension are always included in backups produced by the mysqlbackup command of the MySQL Enterprise Backup product.
See Also .ARZ file, MySQL Enterprise Backup, mysqlbackup command.
- .ARZ file
-
Data for ARCHIVE tables. Contrast with .ARM file. Files with this extension are always included in backups produced by the mysqlbackup command of the MySQL Enterprise Backup product.
See Also .ARM file, MySQL Enterprise Backup, mysqlbackup command.
- ACID
-
An acronym standing for atomicity, consistency, isolation, and durability. These properties are all desirable in a database system, and are all closely tied to the notion of a transaction. The transactional features of InnoDB adhere to the ACID principles.
Transactions are atomic units of work that can be committed or rolled back. When a transaction makes multiple changes to the database, either all the changes succeed when the transaction is committed, or all the changes are undone when the transaction is rolled back.
The database remains in a consistent state at all times — after each commit or rollback, and while transactions are in progress. If related data is being updated across multiple tables, queries see either all old values or all new values, not a mix of old and new values.
Transactions are protected (isolated) from each other while they are in progress; they cannot interfere with each other or see each other's uncommitted data. This isolation is achieved through the locking mechanism. Experienced users can adjust the isolation level, trading off less protection in favor of increased performance and concurrency, when they can be sure that the transactions really do not interfere with each other.
The results of transactions are durable: once a commit operation succeeds, the changes made by that transaction are safe from power failures, system crashes, race conditions, or other potential dangers that many non-database applications are vulnerable to. Durability typically involves writing to disk storage, with a certain amount of redundancy to protect against power failures or software crashes during write operations. (In InnoDB, the doublewrite buffer assists with durability.)
See Also atomic, commit, concurrency, doublewrite buffer, isolation level, locking, rollback, transaction.
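The atomicity property can be sketched with a short transaction; the accounts table and its columns here are hypothetical:

```sql
-- Move funds between two rows; either both updates persist, or neither does.
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;     -- make both changes permanent and durable
-- On error, issue ROLLBACK instead to undo both changes.
```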
- adaptive flushing
-
An algorithm for InnoDB tables that smooths out the I/O overhead introduced by checkpoints. Instead of flushing all modified pages from the buffer pool to the data files at once, MySQL periodically flushes small sets of modified pages. The adaptive flushing algorithm extends this process by estimating the optimal rate to perform these periodic flushes, based on the rate of flushing and how fast redo information is generated.
See Also buffer pool, checkpoint, data files, flush, InnoDB, page, redo log.
- adaptive hash index
-
An optimization for InnoDB tables that can speed up lookups using = and IN operators, by constructing a hash index in memory. MySQL monitors index searches for InnoDB tables, and if queries could benefit from a hash index, it builds one automatically for index pages that are frequently accessed. In a sense, the adaptive hash index configures MySQL at runtime to take advantage of ample main memory, coming closer to the architecture of main-memory databases. This feature is controlled by the innodb_adaptive_hash_index configuration option. Because this feature benefits some workloads and not others, and the memory used for the hash index is reserved in the buffer pool, typically you should benchmark with this feature both enabled and disabled.
The hash index is always built based on an existing B-tree index on the table. MySQL can build a hash index on a prefix of any length of the key defined for the B-tree, depending on the pattern of searches against the index. A hash index can be partial; the whole B-tree index does not need to be cached in the buffer pool.
In MySQL 5.6 and higher, another way to take advantage of fast single-value lookups with InnoDB tables is to use the InnoDB memcached plugin. See Section 14.21, “InnoDB memcached Plugin” for details.
See Also B-tree, buffer pool, hash index, memcached, page, secondary index.
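As a quick sketch of how you might benchmark with the feature enabled and disabled (the configuration option is real; the workload is up to you):

```sql
-- Check the current setting (ON by default).
SHOW VARIABLES LIKE 'innodb_adaptive_hash_index';

-- Disable it at runtime, run the benchmark, then re-enable.
SET GLOBAL innodb_adaptive_hash_index = OFF;
SET GLOBAL innodb_adaptive_hash_index = ON;
```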
- AIO
-
Acronym for asynchronous I/O. You might see this acronym in InnoDB messages or keywords.
See Also asynchronous I/O.
- Antelope
-
The code name for the original InnoDB file format. It supports the REDUNDANT and COMPACT row formats, but not the newer DYNAMIC and COMPRESSED row formats available in the Barracuda file format.
See Also Barracuda, compact row format, compressed row format, dynamic row format, file format, innodb_file_format, redundant row format.
- application programming interface (API)
-
A set of functions or procedures. An API provides a stable set of names and types for functions, procedures, parameters, and return values.
- apply
-
When a backup produced by the MySQL Enterprise Backup product does not include the most recent changes that occurred while the backup was underway, the process of updating the backup files to include those changes is known as the apply step. It is specified by the apply-log option of the mysqlbackup command.
Before the changes are applied, we refer to the files as a raw backup. After the changes are applied, we refer to the files as a prepared backup. The changes are recorded in the ibbackup_logfile file; once the apply step is finished, this file is no longer necessary.
See Also hot backup, ibbackup_logfile, MySQL Enterprise Backup, prepared backup, raw backup.
- asynchronous I/O
-
A type of I/O operation that allows other processing to proceed before the I/O is completed. Also known as nonblocking I/O and abbreviated as AIO.
InnoDB uses this type of I/O for certain operations that can run in parallel without affecting the reliability of the database, such as reading pages into the buffer pool that have not actually been requested, but might be needed soon.
Historically, InnoDB used asynchronous I/O on Windows systems only. Starting with the InnoDB Plugin 1.1 and MySQL 5.5, InnoDB uses asynchronous I/O on Linux systems. This change introduces a dependency on libaio. Asynchronous I/O on Linux systems is configured using the innodb_use_native_aio option, which is enabled by default. On other Unix-like systems, InnoDB uses synchronous I/O only.
See Also buffer pool, nonblocking I/O.
- atomic
-
In the SQL context, transactions are units of work that either succeed entirely (when committed) or have no effect at all (when rolled back). The indivisible ("atomic") property of transactions is the “A” in the acronym ACID.
See Also ACID, commit, rollback, transaction.
- atomic DDL
-
An atomic DDL statement is one that combines the data dictionary updates, storage engine operations, and binary log writes associated with a DDL operation into a single, atomic transaction. The transaction is either fully committed or rolled back, even if the server halts during the operation. Atomic DDL support was added in MySQL 8.0. For more information, see Atomic Data Definition Statement Support.
See Also binary log, data dictionary, DDL, storage engine.
- atomic instruction
-
Special instructions provided by the CPU, to ensure that critical low-level operations cannot be interrupted.
- auto-increment
-
A property of a table column (specified by the AUTO_INCREMENT keyword) that automatically adds an ascending sequence of values in the column.
It saves work for the developer not to have to produce new unique values when inserting new rows. It provides useful information for the query optimizer, because the column is known to be not null and with unique values. The values from such a column can be used as lookup keys in various contexts, and because they are auto-generated there is no reason to ever change them; for this reason, primary key columns are often specified as auto-incrementing.
Auto-increment columns can be problematic with statement-based replication, because replaying the statements on a replica might not produce the same set of column values as on the source, due to timing issues. When you have an auto-incrementing primary key, you can use statement-based replication only with the setting innodb_autoinc_lock_mode=1. If you have innodb_autoinc_lock_mode=2, which allows higher concurrency for insert operations, use row-based replication rather than statement-based replication. The setting innodb_autoinc_lock_mode=0 should not be used except for compatibility purposes.
Consecutive lock mode (innodb_autoinc_lock_mode=1) is the default setting prior to MySQL 8.0.3. As of MySQL 8.0.3, interleaved lock mode (innodb_autoinc_lock_mode=2) is the default, which reflects the change from statement-based to row-based replication as the default replication type.
See Also auto-increment locking, innodb_autoinc_lock_mode, primary key, row-based replication, statement-based replication.
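A minimal sketch of an auto-increment column (the table and column names are hypothetical):

```sql
CREATE TABLE orders (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  customer VARCHAR(100) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- No id values are supplied; MySQL assigns an ascending sequence.
INSERT INTO orders (customer) VALUES ('alice'), ('bob');
```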
- auto-increment locking
-
The convenience of an auto-increment primary key involves some tradeoff with concurrency. In the simplest case, if one transaction is inserting values into the table, any other transactions must wait to do their own inserts into that table, so that rows inserted by the first transaction receive consecutive primary key values.
InnoDB includes optimizations and the innodb_autoinc_lock_mode option so that you can configure an optimal balance between predictable sequences of auto-increment values and maximum concurrency for insert operations.
See Also auto-increment, concurrency, innodb_autoinc_lock_mode.
- autocommit
-
A setting that causes a commit operation after each SQL statement. This mode is not recommended for working with InnoDB tables with transactions that span several statements. It can help performance for read-only transactions on InnoDB tables, where it minimizes overhead from locking and generation of undo data, especially in MySQL 5.6.4 and up. It is also appropriate for working with MyISAM tables, where transactions are not applicable.
See Also commit, locking, read-only transaction, SQL, transaction, undo.
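For example, to group several statements into one transaction by turning the setting off for a session (t1 and t2 are hypothetical InnoDB tables):

```sql
SET autocommit = 0;
UPDATE t1 SET c1 = c1 + 1;
UPDATE t2 SET c2 = c2 - 1;
COMMIT;            -- neither change is permanent until this point
SET autocommit = 1;
```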
- availability
-
The ability to cope with, and if necessary recover from, failures on the host, including failures of MySQL, the operating system, or the hardware, as well as maintenance activity that may otherwise cause downtime. Often paired with scalability as critical aspects of a large-scale deployment.
See Also scalability.
B
- B-tree
-
A tree data structure that is popular for use in database indexes. The structure is kept sorted at all times, enabling fast lookup for exact matches (equals operator) and ranges (for example, greater than, less than, and BETWEEN operators). This type of index is available for most storage engines, such as InnoDB and MyISAM.
Because B-tree nodes can have many children, a B-tree is not the same as a binary tree, which is limited to 2 children per node.
Contrast with hash index, which is only available in the MEMORY storage engine. The MEMORY storage engine can also use B-tree indexes, and you should choose B-tree indexes for MEMORY tables if some queries use range operators.
The use of the term B-tree is intended as a reference to the general class of index design. B-tree structures used by MySQL storage engines may be regarded as variants due to sophistications not present in a classic B-tree design. For related information, refer to the InnoDB Page Structure Fil Header section of the MySQL Internals Manual.
See Also hash index.
- backticks
-
Identifiers within MySQL SQL statements must be quoted using the backtick character (`) if they contain special characters or reserved words. For example, to refer to a table named FOO#BAR or a column named SELECT, you would specify the identifiers as `FOO#BAR` and `SELECT`. Since the backticks provide an extra level of safety, they are used extensively in program-generated SQL statements, where the identifier names might not be known in advance.
Many other database systems use double quotation marks (") around such special names. For portability, you can enable ANSI_QUOTES mode in MySQL and use double quotation marks instead of backticks to qualify identifier names.
See Also SQL.
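A brief illustration, using the same hypothetical identifiers as the definition:

```sql
CREATE TABLE `FOO#BAR` (`SELECT` INT);
SELECT `SELECT` FROM `FOO#BAR`;

-- With ANSI_QUOTES enabled, double-quoted identifiers also work:
SET sql_mode = 'ANSI_QUOTES';
SELECT "SELECT" FROM "FOO#BAR";
```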
- backup
-
The process of copying some or all table data and metadata from a MySQL instance, for safekeeping. Can also refer to the set of copied files. This is a crucial task for DBAs. The reverse of this process is the restore operation.
With MySQL, physical backups are performed by the MySQL Enterprise Backup product, and logical backups are performed by the mysqldump command. These techniques have different characteristics in terms of size and representation of the backup data, and speed (especially speed of the restore operation).
Backups are further classified as hot, warm, or cold depending on how much they interfere with normal database operation. (Hot backups have the least interference, cold backups the most.)
See Also cold backup, hot backup, logical backup, MySQL Enterprise Backup, mysqldump, physical backup, warm backup.
- Barracuda
-
The code name for an InnoDB file format that supports the COMPRESSED row format that enables InnoDB table compression, and the DYNAMIC row format that improves the storage layout for long variable-length columns.
The MySQL Enterprise Backup product version 3.5 and above supports backing up tablespaces that use the Barracuda file format.
See Also Antelope, compact row format, compressed row format, dynamic row format, file format, file-per-table, general tablespace, innodb_file_format, MySQL Enterprise Backup, row format, system tablespace.
- base column
-
A non-generated table column upon which a stored generated column or virtual generated column is based. In other words, a base column is a non-generated table column that is part of a generated column definition.
See Also generated column, stored generated column, virtual generated column.
- beta
-
An early stage in the life of a software product, when it is available only for evaluation, typically without a definite release number or a number less than 1. InnoDB does not use the beta designation, preferring an early adopter phase that can extend over several point releases, leading to a GA release.
See Also early adopter, GA.
- binary log
-
A file containing a record of all statements or row changes that attempt to change table data. The contents of the binary log can be replayed to bring replicas up to date in a replication scenario, or to bring a database up to date after restoring table data from a backup. The binary logging feature can be turned on and off, although Oracle recommends always enabling it if you use replication or perform backups.
You can examine the contents of the binary log, or replay it during replication or recovery, by using the mysqlbinlog command. For full information about the binary log, see Section 5.4.4, “The Binary Log”. For MySQL configuration options related to the binary log, see Section 16.1.6.4, “Binary Logging Options and Variables”.
For the MySQL Enterprise Backup product, the file name of the binary log and the current position within the file are important details. To record this information for the source when taking a backup in a replication context, you can specify the --slave-info option.
Prior to MySQL 5.0, a similar capability was available, known as the update log. In MySQL 5.0 and higher, the binary log replaces the update log.
See Also binlog, MySQL Enterprise Backup, replication.
- binlog
-
An informal name for the binary log file. For example, you might see this abbreviation used in e-mail messages or forum discussions.
See Also binary log.
- blind query expansion
-
A special mode of full-text search enabled by the WITH QUERY EXPANSION clause. It performs the search twice, where the search phrase for the second search is the original search phrase concatenated with the few most highly relevant documents from the first search. This technique is mainly applicable for short search phrases, perhaps only a single word. It can uncover relevant matches where the precise search term does not occur in the document.
See Also full-text search.
- bottleneck
-
A portion of a system that is constrained in size or capacity, that has the effect of limiting overall throughput. For example, a memory area might be smaller than necessary; access to a single required resource might prevent multiple CPU cores from running simultaneously; or waiting for disk I/O to complete might prevent the CPU from running at full capacity. Removing bottlenecks tends to improve concurrency. For example, the ability to have multiple InnoDB buffer pool instances reduces contention when multiple sessions read from and write to the buffer pool simultaneously.
See Also buffer pool, concurrency.
- bounce
-
A shutdown operation immediately followed by a restart, ideally with a relatively short warmup period so that performance and throughput quickly return to a high level.
See Also shutdown.
- buddy allocator
-
A mechanism for managing different-sized pages in the InnoDB buffer pool.
See Also buffer pool, page, page size.
- buffer
-
A memory or disk area used for temporary storage. Data is buffered in memory so that it can be written to disk efficiently, with a few large I/O operations rather than many small ones. Data is buffered on disk for greater reliability, so that it can be recovered even when a crash or other failure occurs at the worst possible time. The main types of buffers used by InnoDB are the buffer pool, the doublewrite buffer, and the change buffer.
See Also buffer pool, change buffer, crash, doublewrite buffer.
- buffer pool
-
The memory area that holds cached InnoDB data for both tables and indexes. For efficiency of high-volume read operations, the buffer pool is divided into pages that can potentially hold multiple rows. For efficiency of cache management, the buffer pool is implemented as a linked list of pages; data that is rarely used is aged out of the cache, using a variation of the LRU algorithm. On systems with large memory, you can improve concurrency by dividing the buffer pool into multiple buffer pool instances.
Several InnoDB status variables, INFORMATION_SCHEMA tables, and performance_schema tables help to monitor the internal workings of the buffer pool. Starting in MySQL 5.6, you can avoid a lengthy warmup period after restarting the server, particularly for instances with large buffer pools, by saving the buffer pool state at server shutdown and restoring the buffer pool to the same state at server startup. See Section 14.8.3.6, “Saving and Restoring the Buffer Pool State”.
See Also buffer pool instance, LRU, page, warm up.
- buffer pool instance
-
Any of the multiple regions into which the buffer pool can be divided, controlled by the innodb_buffer_pool_instances configuration option. The total memory size specified by innodb_buffer_pool_size is divided among all buffer pool instances. Typically, having multiple buffer pool instances is appropriate for systems that allocate multiple gigabytes to the InnoDB buffer pool, with each instance being one gigabyte or larger. On systems loading or looking up large amounts of data in the buffer pool from many concurrent sessions, having multiple buffer pool instances reduces contention for exclusive access to data structures that manage the buffer pool.
See Also buffer pool.
- built-in
-
The built-in InnoDB storage engine within MySQL is the original form of distribution for the storage engine. Contrast with the InnoDB Plugin. Starting with MySQL 5.5, the InnoDB Plugin is merged back into the MySQL code base as the built-in InnoDB storage engine (known as InnoDB 1.1).
This distinction is important mainly in MySQL 5.1, where a feature or bug fix might apply to the InnoDB Plugin but not the built-in InnoDB, or vice versa.
See Also InnoDB.
- business rules
-
The relationships and sequences of actions that form the basis of business software, used to run a commercial company. Sometimes these rules are dictated by law, other times by company policy. Careful planning ensures that the relationships encoded and enforced by the database, and the actions performed through application logic, accurately reflect the real policies of the company and can handle real-life situations.
For example, an employee leaving a company might trigger a sequence of actions from the human resources department. The human resources database might also need the flexibility to represent data about a person who has been hired, but not yet started work. Closing an account at an online service might result in data being removed from a database, or the data might be moved or flagged so that it could be recovered if the account is re-opened. A company might establish policies regarding salary maximums, minimums, and adjustments, in addition to basic sanity checks such as the salary not being a negative number. A retail database might not allow a purchase with the same serial number to be returned more than once, or might not allow credit card purchases above a certain value, while a database used to detect fraud might allow these kinds of things.
See Also relational.
C
- .cfg file
-
A metadata file used with the InnoDB transportable tablespace feature. It is produced by the command FLUSH TABLES ... FOR EXPORT, which puts one or more tables in a consistent state that can be copied to another server. The .cfg file is copied along with the corresponding .ibd file, and used to adjust the internal values of the .ibd file, such as the space ID, during the ALTER TABLE ... IMPORT TABLESPACE step.
See Also .ibd file, space ID, transportable tablespace.
- cache
-
The general term for any memory area that stores copies of data for frequent or high-speed retrieval. In InnoDB, the primary kind of cache structure is the buffer pool.
See Also buffer, buffer pool.
- cardinality
-
The number of different values in a table column. When queries refer to columns that have an associated index, the cardinality of each column influences which access method is most efficient. For example, for a column with a unique constraint, the number of different values is equal to the number of rows in the table. If a table has a million rows but only 10 different values for a particular column, each value occurs (on average) 100,000 times. A query such as SELECT c1 FROM t1 WHERE c1 = 50; thus might return 1 row or a huge number of rows, and the database server might process the query differently depending on the cardinality of c1.
If the values in a column have a very uneven distribution, the cardinality might not be a good way to determine the best query plan. For example, SELECT c1 FROM t1 WHERE c1 = x; might return 1 row when x=50 and a million rows when x=30. In such a case, you might need to use index hints to pass along advice about which lookup method is more efficient for a particular query.
Cardinality can also apply to the number of distinct values present in multiple columns, as in a composite index.
See Also column, composite index, index, index hint, persistent statistics, random dive, selectivity, unique constraint.
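To inspect the cardinality estimates the optimizer uses, and to override them when they mislead it (t1 and idx_c1 are hypothetical names):

```sql
ANALYZE TABLE t1;        -- refresh the index statistics
SHOW INDEX FROM t1;      -- the Cardinality column shows the estimates

-- An index hint advising the optimizer to use a particular index:
SELECT c1 FROM t1 FORCE INDEX (idx_c1) WHERE c1 = 50;
```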
- change buffer
-
A special data structure that records changes to pages in secondary indexes. These values could result from SQL INSERT, UPDATE, or DELETE statements (DML). The set of features involving the change buffer is known collectively as change buffering, consisting of insert buffering, delete buffering, and purge buffering.
Changes are only recorded in the change buffer when the relevant page from the secondary index is not in the buffer pool. When the relevant index page is brought into the buffer pool while associated changes are still in the change buffer, the changes for that page are applied in the buffer pool (merged) using the data from the change buffer. Periodically, the purge operation that runs during times when the system is mostly idle, or during a slow shutdown, writes the new index pages to disk. The purge operation can write the disk blocks for a series of index values more efficiently than if each value were written to disk immediately.
Physically, the change buffer is part of the system tablespace, so that the index changes remain buffered across database restarts. The changes are only applied (merged) when the pages are brought into the buffer pool due to some other read operation.
The kinds and amount of data stored in the change buffer are governed by the innodb_change_buffering and innodb_change_buffer_max_size configuration options. To see information about the current data in the change buffer, issue the SHOW ENGINE INNODB STATUS command.
Formerly known as the insert buffer.
See Also buffer pool, change buffering, delete buffering, DML, insert buffer, insert buffering, merge, page, purge, purge buffering, secondary index, system tablespace.
- change buffering
-
The general term for the features involving the change buffer, consisting of insert buffering, delete buffering, and purge buffering. Index changes resulting from SQL statements, which could normally involve random I/O operations, are held back and performed periodically by a background thread. This sequence of operations can write the disk blocks for a series of index values more efficiently than if each value were written to disk immediately. Controlled by the innodb_change_buffering and innodb_change_buffer_max_size configuration options.
See Also change buffer, delete buffering, insert buffering, purge buffering.
- checkpoint
-
As changes are made to data pages that are cached in the buffer pool, those changes are written to the data files sometime later, a process known as flushing. The checkpoint is a record of the latest changes (represented by an LSN value) that have been successfully written to the data files.
See Also buffer pool, data files, flush, fuzzy checkpointing, LSN.
- checksum
-
In InnoDB, a validation mechanism to detect corruption when a page in a tablespace is read from disk into the InnoDB buffer pool. This feature is controlled by the innodb_checksums configuration option in MySQL 5.5. innodb_checksums is deprecated in MySQL 5.6.3, replaced by innodb_checksum_algorithm.
.The innochecksum command helps diagnose corruption problems by testing the checksum values for a specified tablespace file while the MySQL server is shut down.
MySQL also uses checksums for replication purposes. For details, see the configuration options binlog_checksum, master_verify_checksum, and slave_sql_verify_checksum.
See Also buffer pool, page, tablespace.
- child table
-
In a foreign key relationship, a child table is one whose rows refer (or point) to rows in another table with an identical value for a specific column. This is the table that contains the FOREIGN KEY ... REFERENCES clause and optionally ON UPDATE and ON DELETE clauses. The corresponding row in the parent table must exist before the row can be created in the child table. The values in the child table can prevent delete or update operations on the parent table, or can cause automatic deletion or updates in the child table, based on the ON DELETE CASCADE and ON UPDATE CASCADE options used when creating the foreign key.
See Also foreign key, parent table.
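A minimal parent/child sketch (the table names are hypothetical):

```sql
CREATE TABLE parent (
  id INT NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE child (
  id INT NOT NULL PRIMARY KEY,
  parent_id INT NOT NULL,
  FOREIGN KEY (parent_id) REFERENCES parent (id)
    ON UPDATE CASCADE
    ON DELETE CASCADE
) ENGINE=InnoDB;

-- Deleting a parent row now automatically deletes its child rows.
```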
- clean page
-
A page in the InnoDB buffer pool where all changes made in memory have also been written (flushed) to the data files. The opposite of a dirty page.
See Also buffer pool, data files, dirty page, flush, page.
- clean shutdown
-
A shutdown that completes without errors and applies all changes to InnoDB tables before finishing, as opposed to a crash or a fast shutdown. Synonym for slow shutdown.
See Also crash, fast shutdown, shutdown, slow shutdown.
- client
-
A type of program that sends requests to a server, and interprets or processes the results. The client software might run only some of the time (such as a mail or chat program), and might run interactively (such as the mysql command processor).
- clustered index
-
The InnoDB term for a primary key index. InnoDB table storage is organized based on the values of the primary key columns, to speed up queries and sorts involving the primary key columns. For best performance, choose the primary key columns carefully based on the most performance-critical queries. Because modifying the columns of the clustered index is an expensive operation, choose primary key columns that are rarely or never updated.
In the Oracle Database product, this type of table is known as an index-organized table.
See Also index, primary key, secondary index.
- cold backup
-
A backup taken while the database is shut down. For busy applications and websites, this might not be practical, and you might prefer a warm backup or a hot backup.
See Also backup, hot backup, warm backup.
- column
-
A data item within a row, whose storage and semantics are defined by a data type. Each table and index is largely defined by the set of columns it contains.
Each column has a cardinality value. A column can be the primary key for its table, or part of the primary key. A column can be subject to a unique constraint, a NOT NULL constraint, or both. Values in different columns, even across different tables, can be linked by a foreign key relationship.
In discussions of MySQL internal operations, sometimes field is used as a synonym.
See Also cardinality, foreign key, index, NOT NULL constraint, primary key, row, table, unique constraint.
- column index
-
An index on a single column.
See Also composite index, index.
- column prefix
-
When an index is created with a length specification, such as CREATE INDEX idx ON t1 (c1(N)), only the first N characters of the column value are stored in the index. Keeping the index prefix small makes the index compact, and the memory and disk I/O savings help performance. (Although making the index prefix too small can hinder query optimization by making rows with different values appear to the query optimizer to be duplicates.)
For columns containing binary values or long text strings, where sorting is not a major consideration and storing the entire value in the index would waste space, the index automatically uses the first N (typically 768) characters of the value to do lookups and sorts.
See Also index.
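For example, indexing only the first 10 characters of a long string column (t1 and c1 are hypothetical names):

```sql
CREATE INDEX idx ON t1 (c1(10));
```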
- commit
-
A SQL statement that ends a transaction, making permanent any changes made by the transaction. It is the opposite of rollback, which undoes any changes made in the transaction.
InnoDB uses an optimistic mechanism for commits, so that changes can be written to the data files before the commit actually occurs. This technique makes the commit itself faster, with the tradeoff that more work is required in case of a rollback.
By default, MySQL uses the autocommit setting, which automatically issues a commit following each SQL statement.
See Also autocommit, optimistic, rollback, SQL, transaction.
- compact row format
-
The default InnoDB row format for InnoDB tables from MySQL 5.0.3 to MySQL 5.7.8. As of MySQL 5.7.9, the default row format is defined by the innodb_default_row_format configuration option, which has a default setting of DYNAMIC. The COMPACT row format provides a more compact representation for nulls and variable-length columns than the prior default (REDUNDANT row format).
For additional information about InnoDB COMPACT row format, see Section 14.11, “InnoDB Row Formats”.
See Also Antelope, dynamic row format, file format, redundant row format, row format.
- composite index
-
An index that includes multiple columns.
See Also index.
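A sketch of a composite index over two hypothetical columns; it can serve lookups on the leftmost column alone or on both columns together, but not on the second column alone:

```sql
CREATE INDEX idx_name ON employees (last_name, first_name);
```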
- compressed backup
-
The compression feature of the MySQL Enterprise Backup product makes a compressed copy of each tablespace, changing the extension from .ibd to .ibz. Compressing backup data allows you to keep more backups on hand, and reduces the time to transfer backups to a different server. The data is uncompressed during the restore operation. When a compressed backup operation processes a table that is already compressed, it skips the compression step for that table, because compressing again would result in little or no space savings.
A set of files produced by the MySQL Enterprise Backup product, where each tablespace is compressed. The compressed files are renamed with a .ibz file extension.
Applying compression at the start of the backup process helps to avoid storage overhead during the compression process, and to avoid network overhead when transferring the backup files to another server. The process of applying the binary log takes longer, and requires uncompressing the backup files.
See Also apply, binary log, compression, hot backup, MySQL Enterprise Backup, tablespace.
- compressed row format
-
A row format that enables data and index compression for InnoDB tables. It was introduced in the InnoDB Plugin, available as part of the Barracuda file format. Large fields are stored away from the page that holds the rest of the row data, as in dynamic row format. Both index pages and large fields are compressed, yielding memory and disk savings. Depending on the structure of the data, the decrease in memory and disk usage might or might not outweigh the performance overhead of uncompressing the data as it is used. See Section 14.9, “InnoDB Table and Page Compression” for usage details.
For additional information about InnoDB COMPRESSED row format, see COMPRESSED Row Format.
See Also Barracuda, compression, dynamic row format, row format.
- compressed table
-
A table for which the data is stored in compressed form. For InnoDB, it is a table created with ROW_FORMAT=COMPRESSED. See Section 14.9, “InnoDB Table and Page Compression” for more information.
See Also compressed row format, compression.
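A minimal sketch of creating a compressed InnoDB table (table and column names are invented; assumes innodb_file_per_table is enabled and a Barracuda-capable file format):

```sql
CREATE TABLE t_compressed (
  id  INT PRIMARY KEY,
  doc TEXT
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;  -- 8KB compressed page size
```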
- compression
-
A feature with wide-ranging benefits: it uses less disk space, performs less I/O, and uses less memory for caching. InnoDB supports both table-level and page-level compression. InnoDB page compression is also referred to as transparent page compression. For more information about InnoDB compression, see Section 14.9, “InnoDB Table and Page Compression”.
Another type of compression is the compressed backup feature of the MySQL Enterprise Backup product.
See Also Barracuda, buffer pool, compressed backup, compressed row format, DML, transparent page compression.
- compression failure
-
Not actually an error, rather an expensive operation that can occur when using compression in combination with DML operations. It occurs when: updates to a compressed page overflow the area on the page reserved for recording modifications; the page is compressed again, with all changes applied to the table data; the re-compressed data does not fit on the original page, requiring MySQL to split the data into two new pages and compress each one separately. To check the frequency of this condition, query the INFORMATION_SCHEMA.INNODB_CMP table and check how much the value of the COMPRESS_OPS column exceeds the value of the COMPRESS_OPS_OK column. Ideally, compression failures do not occur often; when they do, you can adjust the innodb_compression_level, innodb_compression_failure_threshold_pct, and innodb_compression_pad_pct_max configuration options.
See Also compression, DML, page.
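The monitoring query described above can be sketched as follows (a diagnostic example against the standard INFORMATION_SCHEMA.INNODB_CMP table):

```sql
-- Compare total compression operations with those that succeeded on the
-- first attempt; a large gap suggests frequent compression failures.
SELECT page_size,
       compress_ops,
       compress_ops_ok,
       compress_ops - compress_ops_ok AS compress_failures
FROM INFORMATION_SCHEMA.INNODB_CMP;
```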
- concatenated index
-
See composite index.
- concurrency
-
The ability of multiple operations (in database terminology, transactions) to run simultaneously, without interfering with each other. Concurrency also affects performance: ideally, the protection for multiple simultaneous transactions works with a minimum of performance overhead, using efficient mechanisms for locking.
See Also ACID, locking, transaction.
- configuration file
-
The file that holds the option values used by MySQL at startup. Traditionally, on Linux and Unix this file is named my.cnf, and on Windows it is named my.ini. You can set a number of options related to InnoDB under the [mysqld] section of the file.
See Section 4.2.2.2, “Using Option Files” for information about where MySQL searches for configuration files.
When you use the MySQL Enterprise Backup product, you typically use two configuration files: one that specifies where the data comes from and how it is structured (which could be the original configuration file for your server), and a stripped-down one containing only a small set of options that specify where the backup data goes and how it is structured. The configuration files used with the MySQL Enterprise Backup product must contain certain options that are typically left out of regular configuration files, so you might need to add options to your existing configuration file for use with MySQL Enterprise Backup.
See Also my.cnf, MySQL Enterprise Backup, option, option file.
- consistent read
-
A read operation that uses snapshot information to present query results based on a point in time, regardless of changes performed by other transactions running at the same time. If queried data has been changed by another transaction, the original data is reconstructed based on the contents of the undo log. This technique avoids some of the locking issues that can reduce concurrency by forcing transactions to wait for other transactions to finish.
With REPEATABLE READ isolation level, the snapshot is based on the time when the first read operation is performed. With READ COMMITTED isolation level, the snapshot is reset to the time of each consistent read operation.
Consistent read is the default mode in which InnoDB processes SELECT statements in READ COMMITTED and REPEATABLE READ isolation levels. Because a consistent read does not set any locks on the tables it accesses, other sessions are free to modify those tables while a consistent read is being performed on the table.
For technical details about the applicable isolation levels, see Section 14.7.2.3, “Consistent Nonlocking Reads”.
See Also concurrency, isolation level, locking, READ COMMITTED, REPEATABLE READ, snapshot, transaction, undo log.
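The snapshot behavior under REPEATABLE READ can be sketched with two sessions (table t is hypothetical; the interleaving comments mark which session runs each statement):

```sql
-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT COUNT(*) FROM t;        -- snapshot is established by this first read

-- Session B (runs concurrently)
INSERT INTO t (c) VALUES (1);
COMMIT;

-- Session A, again
SELECT COUNT(*) FROM t;        -- same count as before: the snapshot is reused
COMMIT;                        -- under READ COMMITTED, the second read would
                               -- have taken a fresh snapshot and seen the row
```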
- constraint
-
An automatic test that can block database changes to prevent data from becoming inconsistent. (In computer science terms, a kind of assertion related to an invariant condition.) Constraints are a crucial component of the ACID philosophy, to maintain data consistency. Constraints supported by MySQL include FOREIGN KEY constraints and unique constraints.
See Also ACID, foreign key, unique constraint.
- counter
-
A value that is incremented by a particular kind of InnoDB operation. Useful for measuring how busy a server is, troubleshooting the sources of performance issues, and testing whether changes (for example, to configuration settings or indexes used by queries) have the desired low-level effects. Different kinds of counters are available through Performance Schema tables and INFORMATION_SCHEMA tables, particularly INFORMATION_SCHEMA.INNODB_METRICS.
See Also INFORMATION_SCHEMA, metrics counter, Performance Schema.
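For example, the currently enabled counters can be inspected through the standard INNODB_METRICS table:

```sql
-- List enabled InnoDB counters and their current values.
SELECT NAME, SUBSYSTEM, COUNT
FROM INFORMATION_SCHEMA.INNODB_METRICS
WHERE STATUS = 'enabled'
ORDER BY NAME;
```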
- covering index
-
An index that includes all the columns retrieved by a query. Instead of using the index values as pointers to find the full table rows, the query returns values from the index structure, saving disk I/O.
InnoDB can apply this optimization technique to more indexes than MyISAM can, because InnoDB secondary indexes also include the primary key columns. InnoDB cannot apply this technique for queries against tables modified by a transaction, until that transaction ends.
Any column index or composite index could act as a covering index, given the right query. Design your indexes and queries to take advantage of this optimization technique wherever possible.
See Also column index, composite index, index, primary key, secondary index.
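A sketch of a covering index in action (the products table is invented for illustration):

```sql
CREATE TABLE products (
  id    INT PRIMARY KEY,
  name  VARCHAR(50),
  price DECIMAL(8,2),
  INDEX idx_name_price (name, price)
);

-- Both selected columns live in idx_name_price, so the query can be
-- answered from the index alone; EXPLAIN reports "Using index" in the
-- Extra column when this optimization applies.
EXPLAIN SELECT name, price FROM products WHERE name = 'widget';
```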
- CPU-bound
-
A type of workload where the primary bottleneck is CPU operations in memory. Typically involves read-intensive operations where the results can all be cached in the buffer pool.
See Also bottleneck, buffer pool, workload.
- crash
-
MySQL uses the term “crash” to refer generally to any unexpected shutdown operation where the server cannot do its normal cleanup. For example, a crash could happen due to a hardware fault on the database server machine or storage device; a power failure; a potential data mismatch that causes the MySQL server to halt; a fast shutdown initiated by the DBA; or many other reasons. The robust, automatic crash recovery for InnoDB tables ensures that data is made consistent when the server is restarted, without any extra work for the DBA.
See Also crash recovery, fast shutdown, InnoDB, shutdown.
- crash recovery
-
The cleanup activities that occur when MySQL is started again after a crash. For InnoDB tables, changes from incomplete transactions are replayed using data from the redo log. Changes that were committed before the crash, but not yet written into the data files, are reconstructed from the doublewrite buffer. When the database is shut down normally, this type of activity is performed during shutdown by the purge operation.
During normal operation, committed data can be stored in the change buffer for a period of time before being written to the data files. There is always a tradeoff between keeping the data files up-to-date, which introduces performance overhead during normal operation, and buffering the data, which can make shutdown and crash recovery take longer.
See Also change buffer, commit, crash, data files, doublewrite buffer, InnoDB, purge, redo log.
- CRUD
-
Acronym for “create, read, update, delete”, a common sequence of operations in database applications. Often denotes a class of applications with relatively simple database usage (basic DDL, DML and query statements in SQL) that can be implemented quickly in any language.
- cursor
-
An internal data structure that is used to represent the result set of a query, or other operation that performs a search using an SQL WHERE clause. It works like an iterator in other high-level languages, producing each value from the result set as requested.
Although SQL usually handles the processing of cursors for you, you might delve into the inner workings when dealing with performance-critical code.
See Also query.
D
- data definition language
-
See DDL.
- data dictionary
-
Metadata that keeps track of InnoDB-related objects such as tables, indexes, and table columns. This metadata is physically located in the InnoDB system tablespace. For historical reasons, it overlaps to some degree with information stored in the .frm files.
Because the MySQL Enterprise Backup product always backs up the system tablespace, all backups include the contents of the data dictionary.
See Also column, file-per-table, .frm file, index, MySQL Enterprise Backup, system tablespace, table.
- data directory
-
The directory under which each MySQL instance keeps the data files for InnoDB and the directories representing individual databases. Controlled by the datadir configuration option.
See Also data files, instance.
- data files
-
The files that physically contain table and index data.
The InnoDB system tablespace, which holds the InnoDB data dictionary and is capable of holding data for multiple InnoDB tables, is represented by one or more ibdata data files.
File-per-table tablespaces, which hold data for a single InnoDB table, are represented by a .ibd data file.
General tablespaces (introduced in MySQL 5.7.6), which can hold data for multiple InnoDB tables, are also represented by a .ibd data file.
See Also data dictionary, file-per-table, general tablespace, .ibd file, ibdata file, index, system tablespace, table, tablespace.
- data manipulation language
-
See DML.
- data warehouse
-
A database system or application that primarily runs large queries. The read-only or read-mostly data might be organized in denormalized form for query efficiency. Can benefit from the optimizations for read-only transactions in MySQL 5.6 and higher. Contrast with OLTP.
See Also denormalized, OLTP, query, read-only transaction.
- database
-
Within the MySQL data directory, each database is represented by a separate directory. The InnoDB system tablespace, which can hold table data from multiple databases within a MySQL instance, is kept in data files that reside outside of individual database directories. When file-per-table mode is enabled, the .ibd files representing individual InnoDB tables are stored inside the database directories unless created elsewhere using the
DATA DIRECTORY
clause. General tablespaces, introduced in MySQL 5.7.6, also hold table data in .ibd files. Unlike file-per-table .ibd files, general tablespace .ibd files can hold table data from multiple databases within a MySQL instance, and can be assigned to directories relative to or independent of the MySQL data directory.
For long-time MySQL users, a database is a familiar notion. Users coming from an Oracle Database background will find that the MySQL meaning of a database is closer to what Oracle Database calls a schema.
See Also data files, file-per-table, .ibd file, instance, schema, system tablespace.
- DCL
-
Data control language, a set of SQL statements for managing privileges. In MySQL, consists of the GRANT and REVOKE statements. Contrast with DDL and DML.
- DDL
-
Data definition language, a set of SQL statements for manipulating the database itself rather than individual table rows. Includes all forms of the CREATE, ALTER, and DROP statements. Also includes the TRUNCATE statement, because it works differently than a DELETE FROM table_name statement, even though the ultimate effect is similar.
DDL statements automatically commit the current transaction; they cannot be rolled back.
The InnoDB online DDL feature enhances performance for CREATE INDEX, DROP INDEX, and many types of ALTER TABLE operations. See Section 14.13, “InnoDB and Online DDL” for more information. Also, the InnoDB file-per-table setting can affect the behavior of DROP TABLE and TRUNCATE TABLE operations.
Contrast with DML and DCL.
See Also commit, DCL, DML, file-per-table, rollback, SQL, transaction.
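As a sketch of the online DDL behavior mentioned above (the table and index names are hypothetical; ALGORITHM and LOCK clauses apply in MySQL 5.6 and higher):

```sql
-- Build a secondary index in place, without copying the table and
-- without blocking concurrent DML on it.
ALTER TABLE orders
  ADD INDEX idx_status (status),
  ALGORITHM=INPLACE, LOCK=NONE;
```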
- deadlock
-
A situation where different transactions are unable to proceed, because each holds a lock that the other needs. Because both transactions are waiting for a resource to become available, neither will ever release the locks it holds.
A deadlock can occur when the transactions lock rows in multiple tables (through statements such as UPDATE or SELECT ... FOR UPDATE), but in the opposite order. A deadlock can also occur when such statements lock ranges of index records and gaps, with each transaction acquiring some locks but not others due to a timing issue.
For background information on how deadlocks are automatically detected and handled, see Section 14.7.5.2, “Deadlock Detection and Rollback”. For tips on avoiding and recovering from deadlock conditions, see Section 14.7.5.3, “How to Minimize and Handle Deadlocks”.
See Also gap, lock, transaction.
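The opposite-order pattern can be sketched with two sessions (tables t1 and t2 are invented; comments mark which session runs each statement):

```sql
-- Session A                           -- Session B
START TRANSACTION;                     START TRANSACTION;
UPDATE t1 SET c = 1 WHERE id = 1;
                                       UPDATE t2 SET c = 1 WHERE id = 1;
UPDATE t2 SET c = 2 WHERE id = 1;      -- Session A now blocks on B's lock
                                       UPDATE t1 SET c = 2 WHERE id = 1;
                                       -- Deadlock: InnoDB rolls back one
                                       -- transaction with ERROR 1213
```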
- deadlock detection
-
A mechanism that automatically detects when a deadlock occurs, and automatically rolls back one of the transactions involved (the victim). Deadlock detection can be disabled using the innodb_deadlock_detect configuration option.
See Also deadlock, rollback, transaction, victim.
- delete
-
When InnoDB processes a DELETE statement, the rows are immediately marked for deletion and no longer are returned by queries. The storage is reclaimed sometime later, during the periodic garbage collection known as the purge operation. For removing large quantities of data, related operations with their own performance characteristics are TRUNCATE and DROP.
- delete buffering
-
The technique of storing changes to secondary index pages, resulting from DELETE operations, in the change buffer rather than writing the changes immediately, so that the physical writes can be performed to minimize random I/O. (Because delete operations are a two-step process, this operation buffers the write that normally marks an index record for deletion.) It is one of the types of change buffering; the others are insert buffering and purge buffering.
See Also change buffer, change buffering, insert buffer, insert buffering, purge buffering.
- denormalized
-
A data storage strategy that duplicates data across different tables, rather than linking the tables with foreign keys and join queries. Typically used in data warehouse applications, where the data is not updated after loading. In such applications, query performance is more important than making it simple to maintain consistent data during updates. Contrast with normalized.
See Also data warehouse, foreign key, join, normalized.
- descending index
-
A type of index available with some database systems, where index storage is optimized to process ORDER BY column DESC clauses. Currently, although MySQL allows the DESC keyword in the CREATE TABLE statement, it does not use any special storage layout for the resulting index.
See Also index.
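To illustrate (the events table is invented; in the MySQL versions covered here, DESC in an index definition is parsed but does not change the stored layout):

```sql
CREATE TABLE events (
  id         INT PRIMARY KEY,
  created_at DATETIME,
  INDEX idx_created (created_at DESC)  -- DESC accepted, but ignored for storage
);

-- The ascending index can still be scanned in reverse to satisfy this query.
SELECT * FROM events ORDER BY created_at DESC LIMIT 10;
```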
- dirty page
-
A page in the InnoDB buffer pool that has been updated in memory, where the changes are not yet written (flushed) to the data files. The opposite of a clean page.
See Also buffer pool, clean page, data files, flush, page.
- dirty read
-
An operation that retrieves unreliable data, data that was updated by another transaction but not yet committed. It is only possible with the isolation level known as read uncommitted.
This kind of operation does not adhere to the ACID principle of database design. It is considered very risky, because the data could be rolled back, or updated further before being committed; then, the transaction doing the dirty read would be using data that was never confirmed as accurate.
Its opposite is consistent read, where InnoDB ensures that a transaction does not read information updated by another transaction, even if the other transaction commits in the meantime.
See Also ACID, commit, consistent read, isolation level, READ UNCOMMITTED, rollback.
- disk-based
-
A kind of database that primarily organizes data on disk storage (hard drives or equivalent). Data is brought back and forth between disk and memory to be operated upon. It is the opposite of an in-memory database. Although InnoDB is disk-based, it also contains features such as the buffer pool, multiple buffer pool instances, and the adaptive hash index that allow certain kinds of workloads to work primarily from memory.
See Also adaptive hash index, buffer pool, in-memory database.
- disk-bound
-
A type of workload where the primary bottleneck is disk I/O. (Also known as I/O-bound.) Typically involves frequent writes to disk, or random reads of more data than can fit into the buffer pool.
See Also bottleneck, buffer pool, workload.
- DML
-
Data manipulation language, a set of SQL statements for performing INSERT, UPDATE, and DELETE operations. The SELECT statement is sometimes considered as a DML statement, because the SELECT ... FOR UPDATE form is subject to the same considerations for locking as INSERT, UPDATE, and DELETE.
DML statements for an InnoDB table operate in the context of a transaction, so their effects can be committed or rolled back as a single unit.
Contrast with DDL and DCL.
See Also commit, DCL, DDL, locking, rollback, SQL, transaction.
- document id
-
In the InnoDB full-text search feature, a special column in the table containing the FULLTEXT index, to uniquely identify the document associated with each ilist value. Its name is FTS_DOC_ID (uppercase required). The column itself must be of BIGINT UNSIGNED NOT NULL type, with a unique index named FTS_DOC_ID_INDEX. Preferably, you define this column when creating the table. If InnoDB must add the column to the table while creating a FULLTEXT index, the indexing operation is considerably more expensive.
See Also full-text search, FULLTEXT index, ilist.
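A sketch of defining the column up front (the articles table and its body column are invented; the FTS_DOC_ID name and type requirements are as stated above):

```sql
-- Defining FTS_DOC_ID at CREATE TABLE time avoids the more expensive
-- table rebuild when the FULLTEXT index is added later.
CREATE TABLE articles (
  FTS_DOC_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  body TEXT,
  UNIQUE KEY FTS_DOC_ID_INDEX (FTS_DOC_ID)
) ENGINE=InnoDB;

CREATE FULLTEXT INDEX ft_body ON articles (body);
```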
- doublewrite buffer
-
InnoDB uses a file flush technique called doublewrite. Before writing pages to the data files, InnoDB first writes them to a storage area called the doublewrite buffer. Only after the write and the flush to the doublewrite buffer have completed does InnoDB write the pages to their proper positions in the data file. If there is an operating system, storage subsystem, or mysqld process crash in the middle of a page write, InnoDB can later find a good copy of the page from the doublewrite buffer during crash recovery.
Although data is always written twice, the doublewrite buffer does not require twice as much I/O overhead or twice as many I/O operations. Data is written to the buffer itself as a large sequential chunk, with a single fsync() call to the operating system.
To turn off the doublewrite buffer, specify the option innodb_doublewrite=0.
See Also crash recovery, data files, page, purge.
- drop
-
A kind of DDL operation that removes a schema object, through a statement such as DROP TABLE or DROP INDEX. It maps internally to an ALTER TABLE statement. From an InnoDB perspective, the performance considerations of such operations involve the time that the data dictionary is locked to ensure that interrelated objects are all updated, and the time to update memory structures such as the buffer pool. For a table, the drop operation has somewhat different characteristics than a truncate operation (TRUNCATE TABLE statement).
See Also buffer pool, data dictionary, DDL, table, truncate.
- dynamic row format
-
A row format introduced in the InnoDB Plugin, available as part of the Barracuda file format. Because long variable-length column values are stored outside of the page that holds the row data, it is very efficient for rows that include large objects. Since the large fields are typically not accessed to evaluate query conditions, they are not brought into the buffer pool as often, resulting in fewer I/O operations and better utilization of cache memory.
As of MySQL 5.7.9, the default row format is defined by innodb_default_row_format, which has a default value of DYNAMIC.
For additional information about InnoDB DYNAMIC row format, see DYNAMIC Row Format.
See Also Barracuda, buffer pool, file format, row format.
E
- early adopter
-
A stage similar to beta, when a software product is typically evaluated for performance, functionality, and compatibility in a non-mission-critical setting.
See Also beta.
- error log
-
A type of log showing information about MySQL startup and critical runtime errors and crash information. For details, see Section 5.4.2, “The Error Log”.
- eviction
-
The process of removing an item from a cache or other temporary storage area, such as the InnoDB buffer pool. Often, but not always, uses the LRU algorithm to determine which item to remove. When a dirty page is evicted, its contents are flushed to disk, and any dirty neighbor pages might be flushed also.
See Also buffer pool, dirty page, flush, LRU, neighbor page.
- exclusive lock
-
A kind of lock that prevents any other transaction from locking the same row. Depending on the transaction isolation level, this kind of lock might block other transactions from writing to the same row, or might also block other transactions from reading the same row. The default InnoDB isolation level, REPEATABLE READ, enables higher concurrency by allowing transactions to read rows that have exclusive locks, a technique known as consistent read.
See Also concurrency, consistent read, isolation level, lock, REPEATABLE READ, shared lock, transaction.
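A minimal sketch of taking an exclusive row lock with SELECT ... FOR UPDATE (the accounts table is hypothetical):

```sql
START TRANSACTION;
-- Acquires an exclusive lock on the matching row; other transactions
-- cannot update, delete, or lock it FOR UPDATE until COMMIT or ROLLBACK,
-- though consistent reads of the row can still proceed.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 50 WHERE id = 1;
COMMIT;
```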
- extent
-
A group of pages within a tablespace. For the default page size of 16KB, an extent contains 64 pages. In MySQL 5.6, the page size for an InnoDB instance can be 4KB, 8KB, or 16KB, controlled by the innodb_page_size configuration option. For 4KB, 8KB, and 16KB page sizes, the extent size is always 1MB (1048576 bytes).
Support for 32KB and 64KB InnoDB page sizes was added in MySQL 5.7.6. For a 32KB page size, the extent size is 2MB. For a 64KB page size, the extent size is 4MB.
InnoDB features such as segments, read-ahead requests and the doublewrite buffer use I/O operations that read, write, allocate, or free data one extent at a time.
See Also doublewrite buffer, page, page size, read-ahead, segment, tablespace.
F
- .frm file
-
A file containing the metadata, such as the table definition, of a MySQL table.
For backups, you must always keep the full set of .frm files along with the backup data to be able to restore tables that are altered or dropped after the backup.
Although each InnoDB table has a .frm file, InnoDB maintains its own table metadata in the system tablespace. The .frm files are backed up by the MySQL Enterprise Backup product. These files must not be modified by an ALTER TABLE operation while the backup is taking place, which is why backups that include non-InnoDB tables perform a FLUSH TABLES WITH READ LOCK operation to freeze such activity while backing up .frm files. Restoring a backup can result in .frm files being created, changed, or removed to match the state of the database at the time of the backup.
See Also data dictionary, MySQL Enterprise Backup, system tablespace.
- Fast Index Creation
-
A capability first introduced in the InnoDB Plugin, now part of MySQL in 5.5 and higher, that speeds up creation of InnoDB secondary indexes by avoiding the need to completely rewrite the associated table. The speedup applies to dropping secondary indexes also.
Because index maintenance can add performance overhead to many data transfer operations, consider doing operations such as ALTER TABLE ... ENGINE=INNODB or INSERT INTO ... SELECT * FROM ... without any secondary indexes in place, and creating the indexes afterward.
In MySQL 5.6, this feature becomes more general. You can read and write to tables while an index is being created, and many more kinds of ALTER TABLE operations can be performed without copying the table, without blocking DML operations, or both. Thus in MySQL 5.6 and higher, this set of features is referred to as online DDL rather than Fast Index Creation.
For related information, see Section 14.13, “InnoDB and Online DDL”.
See Also DML, index, online DDL, secondary index.
- fast shutdown
-
The default shutdown procedure for InnoDB, based on the configuration setting innodb_fast_shutdown=1. To save time, certain flush operations are skipped. This type of shutdown is safe during normal usage, because the flush operations are performed during the next startup, using the same mechanism as in crash recovery. In cases where the database is being shut down for an upgrade or downgrade, do a slow shutdown instead to ensure that all relevant changes are applied to the data files during the shutdown.
See Also crash recovery, data files, flush, shutdown, slow shutdown.
- file format
-
The file format for InnoDB tables, enabled using the innodb_file_format configuration option. Supported file formats are Antelope and Barracuda. Antelope is the original InnoDB file format and supports the REDUNDANT and COMPACT row formats. Barracuda is the newer InnoDB file format and supports the COMPRESSED and DYNAMIC row formats.
See Also Antelope, Barracuda, file-per-table, .ibd file, ibdata file, row format.
- file-per-table
-
A general name for the setting controlled by the innodb_file_per_table option, which is an important configuration option that affects aspects of InnoDB file storage, availability of features, and I/O characteristics. As of MySQL 5.6.7, innodb_file_per_table is enabled by default.
With the innodb_file_per_table option enabled, you can create a table in its own .ibd file rather than in the shared ibdata files of the system tablespace. When table data is stored in an individual .ibd file, you have more flexibility to choose row formats required for features such as data compression. The TRUNCATE TABLE operation is also faster, and reclaimed space can be used by the operating system rather than remaining reserved for InnoDB.
The MySQL Enterprise Backup product is more flexible for tables that are in their own files. For example, tables can be excluded from a backup, but only if they are in separate files. Thus, this setting is suitable for tables that are backed up less frequently or on a different schedule.
See Also compressed row format, compression, file format, .ibd file, ibdata file, innodb_file_per_table, MySQL Enterprise Backup, row format, system tablespace.
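A brief sketch of checking and enabling the setting at runtime (the table name is invented; the change affects tables created afterward, not existing ones):

```sql
SHOW VARIABLES LIKE 'innodb_file_per_table';
SET GLOBAL innodb_file_per_table = ON;

-- This table gets its own .ibd file in the database directory.
CREATE TABLE t_own_file (id INT PRIMARY KEY) ENGINE=InnoDB;
```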
- fill factor
-
In an InnoDB index, the proportion of a page that is taken up by index data before the page is split. The unused space when index data is first divided between pages allows for rows to be updated with longer string values without requiring expensive index maintenance operations. If the fill factor is too low, the index consumes more space than needed, causing extra I/O overhead when reading the index. If the fill factor is too high, any update that increases the length of column values can cause extra I/O overhead for index maintenance. See Section 14.6.2.2, “The Physical Structure of an InnoDB Index” for more information.
- fixed row format
-
This row format is used by the MyISAM storage engine, not by InnoDB. If you create an InnoDB table with the option ROW_FORMAT=FIXED in MySQL 5.7.6 or earlier, InnoDB uses the compact row format instead, although the FIXED value might still show up in output such as SHOW TABLE STATUS reports. As of MySQL 5.7.7, InnoDB returns an error if ROW_FORMAT=FIXED is specified.
See Also compact row format, row format.
- flush
-
To write changes to the database files that had been buffered in a memory area or a temporary disk storage area. The InnoDB storage structures that are periodically flushed include the redo log, the undo log, and the buffer pool.
Flushing can happen because a memory area becomes full and the system needs to free some space, because a commit operation means the changes from a transaction can be finalized, or because a slow shutdown operation means that all outstanding work should be finalized. When it is not critical to flush all the buffered data at once, InnoDB can use a technique called fuzzy checkpointing to flush small batches of pages to spread out the I/O overhead.
See Also buffer pool, commit, fuzzy checkpointing, redo log, slow shutdown, undo log.
- flush list
-
An internal InnoDB data structure that tracks dirty pages in the buffer pool: that is, pages that have been changed and need to be written back out to disk. This data structure is updated frequently by InnoDB internal mini-transactions, and so is protected by its own mutex to allow concurrent access to the buffer pool.
See Also buffer pool, dirty page, LRU, mini-transaction, mutex, page, page cleaner.
- foreign key
-
A type of pointer relationship between rows in separate InnoDB tables. The foreign key relationship is defined on one column in both the parent table and the child table.
In addition to enabling fast lookup of related information, foreign keys help to enforce referential integrity, by preventing any of these pointers from becoming invalid as data is inserted, updated, and deleted. This enforcement mechanism is a type of constraint. A row that points to another table cannot be inserted if the associated foreign key value does not exist in the other table. If a row is deleted or its foreign key value changed, and rows in another table point to that foreign key value, the foreign key can be set up to prevent the deletion, cause the corresponding column values in the other table to become null, or automatically delete the corresponding rows in the other table.
One of the stages in designing a normalized database is to identify data that is duplicated, separate that data into a new table, and set up a foreign key relationship so that the multiple tables can be queried like a single table, using a join operation.
See Also child table, FOREIGN KEY constraint, join, normalized, NULL, parent table, referential integrity, relational.
- FOREIGN KEY constraint
-
The type of constraint that maintains database consistency through a foreign key relationship. Like other kinds of constraints, it can prevent data from being inserted or updated if data would become inconsistent; in this case, the inconsistency being prevented is between data in multiple tables. Alternatively, when a DML operation is performed, FOREIGN KEY constraints can cause data in child rows to be deleted, changed to different values, or set to null, based on the ON UPDATE and ON DELETE clauses specified when creating the foreign key.
See Also child table, constraint, DML, foreign key, NULL.
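A sketch of the referential actions described above (the parent and child tables are invented for illustration):

```sql
CREATE TABLE parent (
  id INT PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE child (
  id        INT PRIMARY KEY,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parent (id)
    ON DELETE CASCADE     -- deleting a parent row deletes its child rows
    ON UPDATE RESTRICT    -- block updates to a referenced parent id
) ENGINE=InnoDB;
```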
- FTS
-
In most contexts, an acronym for full-text search. Sometimes in performance discussions, an acronym for full table scan.
See Also full table scan, full-text search.
- full backup
-
A backup that includes all the tables in each MySQL database, and all the databases in a MySQL instance. Contrast with partial backup.
See Also backup, database, instance, partial backup, table.
- full table scan
-
An operation that requires reading the entire contents of a table, rather than just selected portions using an index. Typically performed either with small lookup tables, or in data warehousing situations with large tables where all available data is aggregated and analyzed. How frequently these operations occur, and the sizes of the tables relative to available memory, have implications for the algorithms used in query optimization and managing the buffer pool.
The purpose of indexes is to allow lookups for specific values or ranges of values within a large table, thus avoiding full table scans when practical.
See Also buffer pool, index.
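Whether a query performs a full table scan can be checked with `EXPLAIN`; a sketch, assuming a hypothetical table `t` with no index on the filtered column:

```sql
-- With no usable index on the filter column, EXPLAIN reports
-- access type ALL, indicating a full table scan.
EXPLAIN SELECT * FROM t WHERE non_indexed_col = 5;
```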
- full-text search
-
The MySQL feature for finding words, phrases, Boolean combinations of words, and so on within table data, in a faster, more convenient, and more flexible way than using the SQL
LIKE
operator or writing your own application-level search algorithm. It uses the SQL function
MATCH()
and FULLTEXT indexes.
See Also FULLTEXT index.
- FULLTEXT index
-
The special kind of index that holds the search index in the MySQL full-text search mechanism. Represents the words from values of a column, omitting any that are specified as stopwords. Originally only available for
MyISAM
tables. Starting in MySQL 5.6.4, it is also available for InnoDB tables.
See Also full-text search, index, InnoDB, search index, stopword.
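A minimal sketch of both terms together, using a hypothetical `articles` table:

```sql
-- Hypothetical table with a FULLTEXT index spanning two columns.
CREATE TABLE articles (
    id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(200),
    body TEXT,
    FULLTEXT (title, body)
) ENGINE=InnoDB;

-- Natural-language search; matching rows are ranked by relevance.
SELECT id, title
    FROM articles
    WHERE MATCH (title, body) AGAINST ('database tutorial');
```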
- fuzzy checkpointing
-
A technique that flushes small batches of dirty pages from the buffer pool, rather than flushing all dirty pages at once which would disrupt database processing.
See Also buffer pool, dirty page, flush.
G
- GA
-
“Generally available”, the stage when a software product leaves beta and is available for sale, official support, and production use.
See Also beta.
- gap
-
A place in an
InnoDB
index data structure where new values could be inserted. When you lock a set of rows with a statement such as
SELECT ... FOR UPDATE
,
InnoDB
can create locks that apply to the gaps as well as the actual values in the index. For example, if you select all values greater than 10 for update, a gap lock prevents another transaction from inserting a new value that is greater than 10. The supremum record and infimum record represent the gaps containing all values greater than or less than all the current index values.
See Also concurrency, gap lock, index, infimum record, isolation level, supremum record.
- gap lock
-
A lock on a gap between index records, or a lock on the gap before the first or after the last index record. For example,
SELECT c1 FROM t WHERE c1 BETWEEN 10 and 20 FOR UPDATE;
prevents other transactions from inserting a value of 15 into the column
t.c1
, whether or not there was already any such value in the column, because the gaps between all existing values in the range are locked. Contrast with record lock and next-key lock.
Gap locks are part of the tradeoff between performance and concurrency, and are used in some transaction isolation levels and not others.
See Also gap, infimum record, lock, next-key lock, record lock, supremum record.
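A sketch of the blocking behavior across two sessions, assuming a table `t` with an indexed column `c1` (under the default REPEATABLE READ isolation level):

```sql
-- Session 1: lock the range, including the gaps between existing values.
START TRANSACTION;
SELECT c1 FROM t WHERE c1 BETWEEN 10 AND 20 FOR UPDATE;

-- Session 2: blocks until session 1 commits or rolls back,
-- even if no row with c1 = 15 exists yet, because the gap is locked.
INSERT INTO t (c1) VALUES (15);
```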
- general log
-
See general query log.
- general query log
-
A type of log used for diagnosis and troubleshooting of SQL statements processed by the MySQL server. Can be stored in a file or in a database table. You must enable this feature through the
general_log
configuration option to use it. You can disable it for a specific connection through the
sql_log_off
configuration option.
Records a broader range of queries than the slow query log. Unlike the binary log, which is used for replication, the general query log contains
SELECT
statements and does not maintain strict ordering. For more information, see Section 5.4.3, “The General Query Log”.
See Also binary log, log, slow query log.
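Both options described above are dynamic, so a sketch of enabling the log at runtime might look like this:

```sql
-- Log to a table rather than a file, then switch the log on.
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Recent statements are then visible in the mysql.general_log table.
SELECT event_time, argument
    FROM mysql.general_log
    ORDER BY event_time DESC LIMIT 10;

-- Suppress logging for the current connection only.
SET SESSION sql_log_off = 'ON';
```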
- general tablespace
-
A shared
InnoDB
tablespace created using
CREATE TABLESPACE
syntax. General tablespaces can be created outside of the MySQL data directory, are capable of holding multiple tables, and support tables of all row formats. General tablespaces were introduced in MySQL 5.7.6.
Tables are added to a general tablespace using
CREATE TABLE tbl_name ... TABLESPACE [=] tablespace_name
or
ALTER TABLE tbl_name TABLESPACE [=] tablespace_name
syntax.
Contrast with system tablespace and file-per-table tablespace.
For more information, see Section 14.6.3.3, “General Tablespaces”.
See Also file-per-table, system tablespace, table, tablespace.
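A sketch of the statements above in sequence (the tablespace and table names are hypothetical):

```sql
-- Create a general tablespace backed by a single data file,
-- then place a new table in it and move an existing one there.
CREATE TABLESPACE ts1 ADD DATAFILE 'ts1.ibd' ENGINE=InnoDB;

CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1;

ALTER TABLE t2 TABLESPACE ts1;
```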
- generated column
-
A column whose values are computed from an expression included in the column definition. A generated column can be virtual or stored.
See Also base column, stored generated column, virtual generated column.
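A sketch showing both kinds of generated column in one hypothetical table definition:

```sql
CREATE TABLE triangles (
    sidea DOUBLE,
    sideb DOUBLE,
    -- VIRTUAL: computed on the fly when the column is read.
    sidec DOUBLE AS (SQRT(sidea * sidea + sideb * sideb)) VIRTUAL,
    -- STORED: computed on insert/update and stored like a regular column.
    area DOUBLE AS (sidea * sideb / 2) STORED
);
```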
- generated stored column
-
See stored generated column.
- generated virtual column
-
See virtual generated column.
- global transaction
-
A type of transaction involved in XA operations. It consists of several actions that are transactional in themselves, but that all must either complete successfully as a group, or all be rolled back as a group. In essence, this extends ACID properties “up a level” so that multiple ACID transactions can be executed in concert as components of a global operation that also has ACID properties.
See Also ACID, transaction, XA.
- group commit
-
An
InnoDB
optimization that performs some low-level I/O operations (log write) once for a set of commit operations, rather than flushing and syncing separately for each commit.
See Also binary log, commit.
H
- hash index
-
A type of index intended for queries that use equality operators, rather than range operators such as greater-than or
BETWEEN
. It is available for
MEMORY
tables. Although hash indexes are the default for
MEMORY
tables for historic reasons, that storage engine also supports B-tree indexes, which are often a better choice for general-purpose queries.
MySQL includes a variant of this index type, the adaptive hash index, that is constructed automatically for
InnoDB
tables if needed based on runtime conditions.
See Also adaptive hash index, B-tree, index, InnoDB.
- HDD