Error message:
InnoDB: An optimized (without redo logging) DDL operation has been performed. All modified pages may not have been flushed to the disk yet.
To address this error, Percona added three new xtrabackup parameters: --lock-ddl, --lock-ddl-timeout, and --lock-ddl-per-table.
MySQL 5.7 skips redo logging for some DDL operations. In MySQL 5.7, index creation is divided into three stages:
1: Scan the clustered index and generate index entries into a sort buffer; when the sort buffer is full, it is written out to a temporary file.
2: Multiple threads sort the index entries in the temporary file or sort buffer.
3: After sorting, insert the entries into the B-tree in bulk.
Before that, MySQL created an index by calling the insert API to add entries into the B-tree one by one. This method opens a B-tree cursor, finds the insert position, and inserts optimistically; if the target B-tree page is full, the page is split or merged, which is called a pessimistic insertion. Creating an index this way causes continuous splitting and merging, plus a position lookup for every entry, which is much more expensive.
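For illustration, a statement like the one below takes the sorted-build path in MySQL 5.7. This is only a sketch: the table name is borrowed from the log output later in this article, and the column and index names are assumptions.

# A secondary index built in place uses the sorted (bulk-load) build
# described above, and its page writes skip redo logging.
mysql -e "ALTER TABLE db1.sbtest1 ADD INDEX idx_c (c), ALGORITHM=INPLACE, LOCK=NONE;"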
Sorted index builds and redo logging:
Redo logging is turned off during a sorted index build. Crash safety of the build is instead ensured by checkpointing: a checkpoint forces all dirty pages to be written to disk. During index creation, the page cleaner periodically flushes dirty pages so that the checkpoint can advance quickly. Normally, the page cleaner flushes dirty pages only when the number of clean pages falls below a configured threshold; during an index build, however, it flushes aggressively to reduce checkpoint overhead.
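As a rough way to observe this behavior, one can watch the dirty-page count and checkpoint progress while an index is being built. A minimal sketch, with connection options omitted as assumptions:

# Dirty pages rise during the build and are flushed aggressively.
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';"
# The LOG section shows the log sequence number and the last checkpoint.
mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A 4 '^LOG$'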
Because redo is not recorded for these operations, the following error will occur during a backup:
[FATAL] InnoDB: An optimized (without redo logging) DDL operation has been performed. All modified pages may not have been flushed to the disk yet. Percona XtraBackup will not be able to take a consistent backup. Retry the backup operation
XtraBackup added the --lock-ddl option to solve this problem, but it executes LOCK TABLES FOR BACKUP at the start of the backup, which blocks DDL on all tables for the duration of the backup.
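A minimal invocation sketch, assuming a Percona Server target; the credentials and target directory are placeholders:

# Issues LOCK TABLES FOR BACKUP up front and aborts if the lock is not
# granted within 60 seconds.
xtrabackup --backup --lock-ddl --lock-ddl-timeout=60 \
    --user=backup --password=... --target-dir=/data/backups/full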
Another option is --lock-ddl-per-table, which produces output like the following:
170726 11:32:33 [01] Copying ./ibdata1 to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/ibdata1
170726 11:32:33 Locking MDL for db1.sb1
170726 11:32:33 [02] Copying ./db1/sb1.ibd to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/db1/sb1.ibd
170726 11:32:33 Locking MDL for db1.sbtest1
170726 11:32:33 Locking MDL for db2.sb1
170726 11:32:33 [03] Copying ./db1/sbtest1.ibd to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/db1/sbtest1.ibd
170726 11:32:33 [04] Copying ./db2/sb1.ibd to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/db2/sb1.ibd
170726 11:32:33 [04] ...done
170726 11:32:33 >> log scanned up to (2892754398)
170726 11:32:34 Locking MDL for db2.sbtest1
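A command of roughly this shape would produce output like the above; the target directory is taken from the log, everything else is a placeholder:

# Takes an MDL lock on each table just before copying its .ibd file.
xtrabackup --backup --lock-ddl-per-table \
    --target-dir=/home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56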
Note that the --lock-ddl parameter may not be supported on non-Percona servers such as MariaDB, because it relies on the LOCK TABLES FOR BACKUP statement; --lock-ddl-per-table, however, is supported by all servers.
However, if the backup process holds an MDL lock on a table and another session performs an ALTER TABLE on it (index creation is an ALTER TABLE operation), that ALTER waits for the MDL, and subsequent update operations on the table are blocked behind it, as sketched below.
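A hypothetical illustration of that blocking chain; the table name is borrowed from the log output above, and the column names are assumptions:

# Session 1: the backup holds the MDL on db1.sbtest1 (--lock-ddl-per-table).
# Session 2: this ALTER waits on the MDL held by the backup.
mysql -e "ALTER TABLE db1.sbtest1 ADD INDEX idx_pad (pad);"
# Session 3: this UPDATE queues behind the pending exclusive MDL request.
mysql -e "UPDATE db1.sbtest1 SET k = k + 1 WHERE id = 1;"
# Both sessions show 'Waiting for table metadata lock' here.
mysql -e "SHOW PROCESSLIST;"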
If you don't want any blocking during the backup, the only choice is to enable the old_alter_table parameter, which means giving up the new online features of ALTER TABLE.
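A minimal sketch of that trade-off: with old_alter_table enabled, ALTER TABLE falls back to the table-copy algorithm, whose row-by-row inserts go through the ordinary, redo-logged path. The table and column names are assumptions:

mysql -e "SET GLOBAL old_alter_table = ON;"
# Equivalent per-statement form, forcing the copy algorithm for one DDL:
mysql -e "ALTER TABLE db1.sbtest1 ADD INDEX idx_c (c), ALGORITHM=COPY;"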
Parameter descriptions:
--lock-ddl
    Issue LOCK TABLES FOR BACKUP, if it is supported by the server, at the beginning of the backup to block all DDL operations.
--lock-ddl-timeout=#
    If LOCK TABLES FOR BACKUP does not return within the given timeout, abort the backup.
--lock-ddl-per-table
    Lock DDL for each table before xtrabackup starts to copy it, and until the backup is completed.
--safe-slave-backup
    Stop the slave SQL thread and wait to start the backup until Slave_open_temp_tables in "SHOW STATUS" is zero. If there are no open temporary tables, the backup will take place; otherwise the SQL thread will be started and stopped until there are no open temporary tables. The backup will fail if Slave_open_temp_tables does not become zero after --safe-slave-backup-timeout seconds. The slave SQL thread will be restarted when the backup finishes.
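Putting these together for a backup taken from a replica; this is a sketch assuming a Percona Server replica, with placeholder credentials and paths:

# Blocks DDL for the whole backup and waits until the replica has no
# open temporary tables before starting.
xtrabackup --backup --lock-ddl --lock-ddl-timeout=120 \
    --safe-slave-backup --safe-slave-backup-timeout=300 \
    --user=backup --password=... --target-dir=/data/backups/full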