Vitess 12.0 does not support backups of MySQL 8.0.30 and later.
Backup and Restore are integrated features provided by tablets managed by Vitess. In addition to using backups for data integrity, Vitess also creates and restores backups when provisioning new tablets in an existing shard.
Before backing up or restoring a tablet, you need to ensure that the tablet is aware of the Backup Storage system and Backup engine that you are using. To do so, use the following command-line flags when starting a vttablet or vtctld that has access to the location where you are storing backups. (An example invocation follows the flag reference below.)
* `-backup_storage_hook`: If set, the content of every file to back up is sent to a hook. The hook receives the data for each file on stdin and should echo the transformed data to stdout. Anything the hook prints to stderr is printed in the vttablet logs. Hooks should be located in the `vthook` subdirectory of the `VTROOT` directory. The hook receives a `-operation write` or a `-operation read` parameter depending on the direction of the data processing; for instance, `write` would be for encryption and `read` would be for decryption. (A sketch of such a hook appears after this flag reference.)
* `-backup_storage_compress`: Controls whether backups are compressed by the Vitess code. By default it is set to true; use `-backup_storage_compress=false` to disable. This is meant to be used with a `-backup_storage_hook` that already compresses the data, to avoid compressing the data twice.
* `-file_backup_storage_root`: For the file plugin, this identifies the root directory for backups. This path must exist on shared storage to provide a global backup view for all vtctlds and vttablets.
* `-gcs_backup_storage_bucket`: For the gcs plugin, this identifies the bucket to use.
* `-s3_backup_aws_region`: For the s3 plugin, this identifies the AWS region.
* `-s3_backup_storage_bucket`: For the s3 plugin, this identifies the AWS S3 bucket.
* `-ceph_backup_storage_config`: For the ceph plugin, this identifies the path to a text file containing a JSON object as configuration. The JSON object requires the following keys: `accessKey`, `secretKey`, `endPoint` and `useSSL`. The bucket name is computed from the keyspace name and shard name, and is separate for different keyspaces / shards. (See the example configuration after this flag reference.)
* `-restore_from_backup`: Indicates that, when started with an empty MySQL instance, the tablet should restore the most recent backup from the specified storage plugin.
* `-restore_from_backup_ts`: If set, restore the latest backup taken at or before this timestamp rather than using the most recent one. Example: `2021-04-29.133050`.
* `-xbstream_restore_flags`: The flags to pass to the xbstream command during restore. These should be space separated and will be added to the end of the command. They need to match the flags used during backup, e.g. `--compress` / `--decompress`, `--encrypt` / `--decrypt`.
* `-xtrabackup_root_path`: For the xtrabackup backup engine, the directory location of the xtrabackup executable, e.g. `/usr/bin`.
* `-xtrabackup_backup_flags`: For the xtrabackup backup engine, flags to pass to the backup command. These should be space separated and will be added to the end of the command.
* `-xtrabackup_stream_mode`: For the xtrabackup backup engine, which mode to use if streaming. Valid values are `tar` and `xbstream`. Defaults to `tar`.
* `-xtrabackup_user`: For the xtrabackup backup engine, the user that xtrabackup will use to connect to the database server. This user must have all necessary privileges; for details, please refer to the xtrabackup documentation.
* `-xtrabackup_stripes`: For the xtrabackup backup engine, if greater than 0, use data striping across this many destination files to parallelize data transfer and decompression.
* `-xtrabackup_stripe_block_size`: For the xtrabackup backup engine, the size in bytes of each block sent to a given stripe before rotating to the next stripe. Defaults to 102400.
* `-xtrabackup_prepare_flags`: Flags to pass to the prepare command. These should be space separated and will be added to the end of the command.
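As an illustration of the flags above, here is a minimal sketch of starting a vttablet with backups enabled, assuming the file storage plugin and the xtrabackup engine. The storage-root path, the database user, and the `-backup_storage_implementation` / `-backup_engine_implementation` selector flags are assumptions for this example and may differ in your environment and Vitess version; all unrelated vttablet flags are omitted.

```sh
# Hedged sketch: a vttablet configured for backups with the file plugin and
# the xtrabackup engine. Paths and the database user are placeholders, and
# every flag unrelated to backups is omitted.
vttablet \
  -backup_storage_implementation=file \
  -file_backup_storage_root=/mnt/vt_backups \
  -backup_engine_implementation=xtrabackup \
  -xtrabackup_root_path=/usr/bin \
  -xtrabackup_user=vt_dba \
  -xtrabackup_stream_mode=xbstream \
  -restore_from_backup
```

As noted above, a vtctld that needs to list or manage these backups would be started with the same storage flags so that it has access to the backup location.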
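To make the `-backup_storage_hook` protocol concrete, here is a minimal sketch of a hook that encrypts backup data with GnuPG. The file name `encrypt_backup`, the recipient address, and the choice of gpg are illustrative assumptions; the parts dictated by the protocol are reading the file data from stdin, writing the transformed data to stdout, and honoring the `-operation write` / `-operation read` parameter.

```sh
#!/bin/sh
# Hypothetical hook stored at $VTROOT/vthook/encrypt_backup.
# vttablet passes "-operation write" when storing a backup file and
# "-operation read" when restoring one ($1 is "-operation", $2 is the mode).
# File data arrives on stdin; the transformed data must go to stdout.
case "$2" in
  write) exec gpg --batch --yes --encrypt --recipient backups@example.com ;;
  read)  exec gpg --batch --yes --decrypt ;;
  *)     echo "unknown -operation value: $2" >&2; exit 1 ;;
esac
```

Such a hook would be enabled with `-backup_storage_hook=encrypt_backup`, and since gpg already compresses its output, it pairs naturally with `-backup_storage_compress=false`.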
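For the ceph plugin, the configuration file pointed to by `-ceph_backup_storage_config` is a JSON object with the four keys listed above; all values below are placeholders.

```json
{
  "accessKey": "EXAMPLEACCESSKEY",
  "secretKey": "examplesecretkey",
  "endPoint": "ceph-gateway.example.com:9000",
  "useSSL": true
}
```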
Note that for the Google Cloud Storage plugin, we currently only support Application Default Credentials. This means that access to Google Cloud Storage (GCS) is automatically granted because you are already running within Google Compute Engine (GCE) or Google Kubernetes Engine (GKE).
For this to work, the GCE instances must have been created with the scope that grants read-write access to GCS. When using GKE, you can do this for all the instances it creates by adding `--scopes storage-rw` to the `gcloud container clusters create` command.
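For example, a GKE cluster with the required scope might be created like this (the cluster name and zone are placeholders):

```sh
# Hypothetical cluster name and zone; the relevant part is --scopes storage-rw,
# which grants the nodes read-write access to GCS.
gcloud container clusters create example-vitess-cluster \
  --zone us-central1-b \
  --scopes storage-rw
```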
We recommend taking backups regularly, for example by setting up a cron job for it.
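As a sketch, such a cron job could call vtctlclient's Backup command against one replica tablet per shard; the vtctld address, client path, and tablet alias below are placeholders.

```sh
# Hypothetical crontab entry: back up tablet zone1-0000000101 every night at
# 02:00 by asking vtctld to run the Backup command.
0 2 * * * /usr/local/bin/vtctlclient -server vtctld.example.com:15999 Backup zone1-0000000101
```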
To determine the proper frequency for creating backups, consider the amount of time that you keep replication logs (see the binlog_expire_logs variables) and allow enough time to investigate and fix problems in the event that a backup operation fails.
For example, suppose you typically keep four days of replication logs and you create daily backups. In that case, even if a backup fails, you have at least a couple of days from the time of the failure to investigate and fix the problem.