Backup and Restore are integrated features provided by tablets managed by Vitess. In addition to using backups for data integrity, Vitess also creates and restores backups to provision new tablets in an existing shard.
Before backing up or restoring a tablet, you need to ensure that the tablet is aware of the Backup Storage system and Backup engine that you are using. To do so, use the following command-line flags when starting a vttablet that has access to the location where you are storing backups.
The following options can be used to configure VTTablet for backups:
-backup_storage_implementation: Specifies the implementation of the Backup Storage interface to use. Current plugin options are:
  - file: NFS or any other filesystem-mounted network drive.
  - gcs: Google Cloud Storage.
  - s3: Amazon S3.
  - ceph: Ceph Object Gateway S3 API.

-backup_engine_implementation: Specifies the implementation of the Backup Engine to use. Current options are:
  - builtin: Copies all the database files into the specified storage. This is the default.
  - xtrabackup: Percona XtraBackup.
-backup_storage_hook: If set, the content of every file to back up is sent through a hook. The hook receives the data for each file on stdin and should echo the transformed data to stdout. Anything the hook prints to stderr will be printed in the vttablet logs. Hooks must be located in the vthook subdirectory of the VTROOT directory. The hook receives a -operation write or a -operation read parameter depending on the direction of the data processing; for instance, write would be for encryption, and read would be for decryption.

-backup_storage_compress: Controls whether backups are compressed by the Vitess code. Defaults to true; use -backup_storage_compress=false to disable. This is meant to be used with a -backup_storage_hook that already compresses the data, to avoid compressing it twice.
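As a sketch of the hook contract described above, the following hypothetical hook compresses on write and decompresses on read (the function name and the use of gzip are illustrative; a real hook would be an executable placed in $VTROOT/vthook and could instead encrypt/decrypt):

```shell
# Hypothetical hook sketch. It reads file data on stdin, writes the
# transformed data to stdout, and switches direction based on the
# -operation parameter that vttablet passes to it.
transform_hook() {
  case "$2" in                 # $1 is "-operation", $2 is "write" or "read"
    write) gzip -c ;;          # backup direction: compress (or encrypt)
    read)  gzip -dc ;;         # restore direction: decompress (or decrypt)
    *)     echo "unknown -operation: $2" >&2; return 1 ;;
  esac
}

# Round trip: data sent through the hook on write restores unchanged on read.
echo "tablet data" | transform_hook -operation write | transform_hook -operation read
```

Because a hook like this already compresses the data, it would be paired with -backup_storage_compress=false.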
-file_backup_storage_root: For the file plugin, this identifies the root directory for backups.

-gcs_backup_storage_bucket: For the gcs plugin, this identifies the bucket to use.

-s3_backup_aws_region: For the s3 plugin, this identifies the AWS region.

-s3_backup_storage_bucket: For the s3 plugin, this identifies the AWS S3 bucket.

-ceph_backup_storage_config: For the ceph plugin, this identifies the path to a text file containing a JSON object as configuration. The JSON object requires the following keys: accessKey, secretKey, endPoint, and useSSL. The bucket name is computed from the keyspace name and shard name, so backups are kept separate for different keyspaces / shards.
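For illustration, such a configuration file could be created as follows; every value here is a placeholder, not a real endpoint or credential:

```shell
# Hypothetical example: write the JSON config file that the ceph plugin
# is pointed at. All values below are placeholders.
cat > ceph_backup_config.json <<'EOF'
{
  "accessKey": "AKIAEXAMPLEKEY",
  "secretKey": "examplesecret",
  "endPoint": "ceph-gateway.example.com:9000",
  "useSSL": true
}
EOF
```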
-restore_from_backup: Indicates that, when started with an empty MySQL instance, the tablet should restore the most recent backup from the specified storage plugin.

-xbstream_restore_flags: Flags to pass to the xbstream command during restore. These should be space separated and will be appended to the end of the command. They need to match the flags used for the backup, e.g. --compress / --decompress, --encrypt / --decrypt.

-xtrabackup_root_path: For the xtrabackup backup engine, the directory location of the xtrabackup executable, e.g. /usr/bin.

-xtrabackup_backup_flags: For the xtrabackup backup engine, flags to pass to the backup command. These should be space separated and will be appended to the end of the command.

-xtrabackup_stream_mode: For the xtrabackup backup engine, which mode to use when streaming; valid values are tar and xbstream. Defaults to tar.

-xtrabackup_user: For the xtrabackup backup engine, the user that xtrabackup will use to connect to the database server. This user must have all necessary privileges; for details, please refer to the xtrabackup documentation.

-xtrabackup_stripes: For the xtrabackup backup engine, if greater than 0, use data striping across this many destination files to parallelize data transfer and decompression.

-xtrabackup_stripe_block_size: For the xtrabackup backup engine, the size in bytes of each block that gets sent to a given stripe before rotating to the next stripe. Defaults to 102400.

-xtrabackup_prepare_flags: Flags to pass to the prepare command. These should be space separated and will be appended to the end of the command.
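Putting the flags together, vttablet invocations for backups might look like the following. This is only a sketch: the paths, region, bucket, and user names are placeholders, and the many other flags a real vttablet needs (topology, ports, mycnf, and so on) are elided:

```shell
# Sketch only: the usual vttablet flags (topology, ports, etc.) are elided.

# Using the file storage plugin with the default builtin engine,
# restoring automatically when started with an empty MySQL instance:
vttablet \
  -backup_storage_implementation file \
  -file_backup_storage_root /mnt/backups \
  -restore_from_backup \
  ...

# Using the s3 plugin with the xtrabackup engine:
vttablet \
  -backup_storage_implementation s3 \
  -s3_backup_aws_region us-east-1 \
  -s3_backup_storage_bucket my-vitess-backups \
  -backup_engine_implementation xtrabackup \
  -xtrabackup_user vt_backup \
  -xtrabackup_stream_mode xbstream \
  ...
```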
Note that for the Google Cloud Storage plugin, we currently only support Application Default Credentials. This means that access to Cloud Storage is automatically granted by virtue of the fact that you're already running within Google Compute Engine or Container Engine. For this to work, the GCE instances must have been created with the scope that grants read-write access to Cloud Storage. When using Container Engine, you can do this for all the instances it creates by adding --scopes storage-rw to the gcloud container clusters create command.
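For example (the cluster name and zone here are placeholders):

```shell
# Hypothetical cluster; the flag that matters for backups is --scopes storage-rw,
# which grants the created instances read-write access to Cloud Storage.
gcloud container clusters create my-vitess-cluster \
  --zone us-central1-b \
  --scopes storage-rw
```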
We recommend taking backups regularly, e.g. by setting up a cron job for it.
To determine the proper frequency for creating backups, consider the amount of time that you keep replication logs and allow enough time to investigate and fix problems in the event that a backup operation fails.
For example, suppose you typically keep four days of replication logs and you create daily backups. In that case, even if a backup fails, you have at least a couple of days from the time of the failure to investigate and fix the problem.
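A scheduled backup of this kind could be driven by the vtctlclient Backup command; a daily crontab entry might look like the following (the vtctld address and tablet alias are placeholders):

```shell
# Hypothetical crontab entry: back up tablet zone1-0000000101 every day
# at 03:00 via the vtctld server at vtctld.example.com:15999.
0 3 * * * vtctlclient -server vtctld.example.com:15999 Backup zone1-0000000101
```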