Applies to: DB2, DB2 MultiNode, Informix, Microsoft SQL Server on Linux, Microsoft SQL Server on Windows, MySQL, PostgreSQL, Oracle, Oracle RAC, SAP HANA
You can optimize database log backup recovery point objectives (RPOs) and run log backups independently of Control Plane or CommServe computer maintenance windows. You can run database log backups more frequently, in a scalable manner, without adding to the workload of the CommServe computer. Disk caching of log backups uses compression to save space and encryption for greater security.
Log backups continue to run, even when the CommServe computer is down for maintenance or when connectivity to the CommServe computer is disrupted. With disk caching enabled, you can set the log RPO to as low as 5 minutes.
Without disk caching for log backups, if you run transaction log jobs frequently, you must manage those scheduled jobs and store the job data on the CommServe computer. With disk caching for log backups, the CommServe computer does not have to run frequent backup jobs throughout the day. Instead, log files are backed up at the desired frequency by the native backup utilities, are cached to a mount point on the MediaAgent, and are available for database restores. The Scheduler runs only at the interval defined by the Use disk cache for log backups and commit every setting, to ensure that the cached data is committed to the CommServe computer.
For the Oracle and SQL Server agents, how often the software checks whether a log backup should run is governed by parameters defined for the database type. If the defined maximum time (the Force a backup every setting in the GUI) elapses without a log backup, a log backup is run. For the SAP HANA agent, when HANA initiates a log backup through the Commvault backint interface, the logs are cached to a mount point on the MediaAgent. At intervals defined by the Use disk cache for log backups and commit every setting, a scheduled job runs to commit to the CommServe computer all cached log backups made since the most recent scheduled job.
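For context, SAP HANA decides whether log backups go to a third-party backup tool through the backint interface based on parameters in the [backup] section of the global.ini file. The fragment below is an illustrative sketch only; the parameter file path is a placeholder, and the actual values are configured when the HANA instance is set up for Commvault, so do not copy these values verbatim.

    # global.ini, [backup] section (illustrative sketch; actual values are set during instance configuration)
    [backup]
    log_backup_using_backint = true
    catalog_backup_using_backint = true
    # placeholder path; the real parameter file location depends on your installation
    log_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/param

With parameters of this kind in effect, every log backup that HANA initiates is handed to the backint interface, which is what allows the logs to be written to the disk cache on the MediaAgent.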
For the DB2, DB2 MultiNode, MySQL, and Informix agents, when a log backup operation is initiated through the native backup utility, the logs are cached to a mount point on the MediaAgent. For PostgreSQL, whenever a transaction log is archived, it is copied to the MediaAgent. This is achieved by modifying archive_command in postgresql.auto.conf to copy the logs to the backup storage location, and then reloading the configuration files. At intervals defined by the Use disk cache for log backups and commit every setting, a scheduled job runs to commit to the CommServe computer all cached log backups made since the last scheduled job. Application command line log backups do not initiate a job in the Job Controller for writing logs to the disk cache. Commit jobs do initiate a job and appear in the backup history.
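As a point of reference, the snippet below shows what such an archive_command entry in postgresql.auto.conf might look like. The copy command and the mount point path are assumptions for illustration only; the actual command is written by the Commvault software when disk caching is enabled, and you do not edit it yourself.

    # postgresql.auto.conf (normally managed through ALTER SYSTEM; values shown are illustrative only)
    # The copy command and destination path below are assumptions, not the actual Commvault command.
    archive_command = 'cp "%p" /mnt/mediaagent_log_cache/"%f"'
    # After the entry is updated, the configuration is reloaded so the change takes effect,
    # for example with: SELECT pg_reload_conf();

PostgreSQL then runs this command for every completed write-ahead log (WAL) segment, substituting %p with the path of the segment and %f with its file name, which is how each archived transaction log ends up in the cache on the MediaAgent.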
Any interactively submitted job that results in a log backup uses disk caching when the feature is enabled. The job history of these jobs displays an application size of 0 to prevent the size from being counted twice if capacity licensing is enabled.
If you enable the Use disk cache for log backups and commit every setting, backup windows, which define when backup operations run, do not apply to database log backups that use disk caching. This is because one goal of disk caching is to run log backups as defined in the schedule, regardless of CommServe computer maintenance or downtime.
If you are already running automatic scheduled log backups and want to run disk caching of log backups instead, enable disk caching by selecting the Use disk cache for log backups and commit every checkbox, and enter the interval of time between each log commit operation to the CommServe computer.
Disk caching of log backups is also supported on HyperScale X configurations. The following CVFS versions are required on HyperScale X to support disk caching:
- For SQL Unicode database support, the CVFS version must be 4.7.6 or greater.
- For other databases, the CVFS version must be 4.7.4 or greater.
Additional configuration to open ports for NFS connectivity or to use a cloud library for disk caching is required for some workloads, as follows:
- Additional configuration is not required: DB2, DB2 MultiNode, Microsoft SQL Server on Linux, Microsoft SQL Server on Windows, MySQL, Oracle, Oracle RAC, PostgreSQL and SAP HANA
- Additional configuration is required: Informix
For Informix backups, you must open ports from the source client to the MediaAgent. For Informix restores, you must open ports from the destination client to the MediaAgent. NFS connectivity is made using the Commvault DataServer-IP capability. For a list of the ports to open, see Firewall Ports Required to Configure DataServer-IP.
Note
- Caching is not supported on tape libraries. You can cache database logs on a disk library or on a cloud library. You can use one of the following cloud libraries for disk caching: Amazon S3, Google Cloud Storage, or Microsoft Azure Storage.
- To allow disk caching, the MediaAgent computer and the client computer must be different. Do not install the agent that has disk caching enabled on the MediaAgent that hosts the disk library.
- If you want to enable disk caching of log backups, all clients, except the clients that have the SQL Server agent, must run the latest version of indexing. For more information, see Indexing.
- For MySQL, disk caching of database log backups is supported only for MySQL traditional backups. To use this feature, you must enable the EnableDumpSweepFeatureForMySQL additional setting. Disk caching of log backups is not available for MySQL Cluster and Proxy Backups configurations.
- Disk caching of database log backups does not support the Media Parameter feature.
- Disk caching of database log backups does not support Oracle Data Guard.
Related Topics
For instructions about creating a log backup for a subclient, see Scheduling Database Log Backups Using Disk Caching.