Recommendations for Improving the Performance of Azure SQL Backups and Restores

Consider the following items if you need to improve the performance of Azure SQL database backup and restore operations.

  • Running a database export places additional load on the Azure SQL Database service. You can view the database transaction unit (DTU) statistics for the database in the Azure portal. For more information, go to "Azure SQL Database and Azure SQL Managed Instance resource monitoring" on the Azure SQL website.

    If the database has reached its resource limits, you can upgrade the service tier to add more resources. For more information, go to "Dynamically scale database resources with minimal downtime - Azure SQL Database & Azure SQL Managed Instance" on the Azure SQL website.
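
    To see whether the database is near its resource limits before or during an export window, you can also query the sys.dm_db_resource_stats view, which keeps roughly the last hour of usage samples. The following Python sketch is one way to do this; it assumes the pyodbc package, the ODBC Driver 18 for SQL Server, and placeholder server, database, and credential values.

      import pyodbc

      CONN_STR = (
          "Driver={ODBC Driver 18 for SQL Server};"
          "Server=tcp:myserver.database.windows.net,1433;"
          "Database=mydatabase;Uid=myuser;Pwd=mypassword;Encrypt=yes;"
      )

      # sys.dm_db_resource_stats holds about one hour of usage data, sampled every 15 seconds.
      QUERY = """
      SELECT TOP (20) end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
      FROM sys.dm_db_resource_stats
      ORDER BY end_time DESC;
      """

      with pyodbc.connect(CONN_STR) as conn:
          # Sustained values near 100 percent suggest the tier is undersized for the export load.
          for row in conn.cursor().execute(QUERY):
              print(row.end_time, row.avg_cpu_percent, row.avg_data_io_percent, row.avg_log_write_percent)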

  • The Azure SQL Database service should be provisioned with a minimum of four vCores for optimal backup performance.

  • Use a Business Critical or Premium tier database for best performance.

    When you restore a SQL database to Azure, you can choose the database pricing tier that provides the performance you need. In the Advanced options section of the Restore window, select the Database tier for the database. If you do not select a database pricing tier, the Standard S0 pricing tier is applied to the restored database by default.

  • Run backups by using an access node in the same region as your database. Otherwise, you might experience performance problems related to network latency.

  • Use staging disk space on SSD disks on the access node to back up large databases. When staging disk space is configured, the Microsoft SQL Server Data-Tier Application Framework (DacFx) is used to back up the databases. This has the following advantages:

    • Allows backups of larger databases, with .bacpac file sizes greater than 200 GB.
    • The Allow Azure services and resources to access this server option does not need to be turned on for the SQL Server. This results in a more secure environment.
    • Blob storage does not need to be associated with each Azure SQL instance, since blob storage is no longer used for staging.

    Microsoft does not provide a specific formula for calculating the disk space needed to stage the .bacpac and temp files based on the database size. For example, two databases of the same size but with different types of data can have different disk space requirements, and one database might require staging disk space that is three times the size of the database.
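
    If you want a rough sanity check before configuring staging space, you can compare the allocated size of the database with the free space on the staging volume. The following Python sketch assumes the pyodbc package, placeholder connection details, a hypothetical staging path, and a conservative 3x multiplier taken from the example above; it is a heuristic, not a Microsoft-provided formula.

      import shutil
      import pyodbc

      CONN_STR = (
          "Driver={ODBC Driver 18 for SQL Server};"
          "Server=tcp:myserver.database.windows.net,1433;"
          "Database=mydatabase;Uid=myuser;Pwd=mypassword;Encrypt=yes;"
      )
      STAGING_PATH = r"E:\commvault_staging"  # hypothetical SSD staging folder on the access node
      MULTIPLIER = 3                          # worst-case factor from the example above, not a formula

      # Allocated size of all database files, in bytes (size is reported in 8 KB pages).
      SIZE_QUERY = "SELECT SUM(CAST(size AS bigint)) * 8 * 1024 FROM sys.database_files;"

      with pyodbc.connect(CONN_STR) as conn:
          db_bytes = conn.cursor().execute(SIZE_QUERY).fetchone()[0]

      free_bytes = shutil.disk_usage(STAGING_PATH).free
      needed_bytes = db_bytes * MULTIPLIER
      print(f"Database ~{db_bytes / 1e9:.1f} GB, staging free {free_bytes / 1e9:.1f} GB, "
            f"suggested minimum {needed_bytes / 1e9:.1f} GB")
      if free_bytes < needed_bytes:
          print("WARNING: the staging volume may be too small for this export.")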

  • Exporting large tables without clustered indexes can be very slow and can even cause the export operation to fail. This behavior occurs because a table without a clustered index can't be split up and exported in parallel; it must be exported in a single transaction, which slows the export and increases the risk of failure for large tables.
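
    One way to identify these tables ahead of time is to list the heaps (tables without a clustered index) in the database. The following Python sketch assumes the pyodbc package and placeholder connection details; the query itself uses the standard sys.indexes catalog view.

      import pyodbc

      CONN_STR = (
          "Driver={ODBC Driver 18 for SQL Server};"
          "Server=tcp:myserver.database.windows.net,1433;"
          "Database=mydatabase;Uid=myuser;Pwd=mypassword;Encrypt=yes;"
      )

      # A heap (table without a clustered index) appears in sys.indexes with index_id = 0.
      HEAP_QUERY = """
      SELECT s.name AS schema_name, t.name AS table_name
      FROM sys.tables AS t
      JOIN sys.schemas AS s ON s.schema_id = t.schema_id
      JOIN sys.indexes AS i ON i.object_id = t.object_id AND i.index_id = 0
      ORDER BY s.name, t.name;
      """

      with pyodbc.connect(CONN_STR) as conn:
          for row in conn.cursor().execute(HEAP_QUERY):
              print(f"{row.schema_name}.{row.table_name} has no clustered index")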

  • If possible, use “Redirect” as the server connection policy. For more information, go to "Connection policy" on the Azure SQL website.

  • Enable accelerated networking in the VM. For more information, go to "Accelerated Networking overview" on the Azure SQL website.

  • Unresolved object references can slow down export operations. For example, this occurs if a view references a table that no longer exists in the database. If unresolved references appear in the export log, consider correcting the database schema to improve export performance.
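
    You can also look for unresolved references ahead of time by querying sys.sql_expression_dependencies for references that cannot be bound to an existing object. The following Python sketch assumes the pyodbc package and placeholder connection details.

      import pyodbc

      CONN_STR = (
          "Driver={ODBC Driver 18 for SQL Server};"
          "Server=tcp:myserver.database.windows.net,1433;"
          "Database=mydatabase;Uid=myuser;Pwd=mypassword;Encrypt=yes;"
      )

      # referenced_id is NULL when the referenced object can't be resolved, for example
      # a view that still references a table that was dropped.
      UNRESOLVED_QUERY = """
      SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
             OBJECT_NAME(d.referencing_id) AS referencing_object,
             d.referenced_entity_name AS missing_object
      FROM sys.sql_expression_dependencies AS d
      WHERE d.referenced_id IS NULL AND d.is_ambiguous = 0;
      """

      with pyodbc.connect(CONN_STR) as conn:
          for row in conn.cursor().execute(UNRESOLVED_QUERY):
              print(f"{row.referencing_schema}.{row.referencing_object} references missing object {row.missing_object}")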

  • If possible, use a Linux-based access node. Commvault has measured better backup performance with Linux access nodes than with Windows access nodes.

  • The Azure SQL Database Import/Export service offers a limited number of compute VMs in each region for handling import and export operations. One of these VMs manages the export request when the software submits a backup task for an Azure SQL database. The compute VMs are hosted per region to make sure that the import or export avoids cross-region bandwidth delays and charges. These VMs serve not only your subscription but any export request from any subscription in that region, acting as a shared resource pool. If too many requests are submitted concurrently in the same region, significant delays in processing can occur. The time to complete these requests can range from a few seconds to several hours. Moreover, the Import/Export service creates a logical backup of the database, so the completion time is influenced more by the number of objects in the database than by its size, unlike a traditional physical database backup.

    If you think your backups are running slowly because the number of compute VMs in your region is a limiting factor, you can run a test backup with the SqlPackage utility from the access node to back up the databases directly, bypassing the Import/Export service. Comparing the results may help you determine whether the shared compute VMs are the bottleneck, and you can also use this test method to troubleshoot general performance problems.
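
    As a sketch of such a test, the following Python snippet invokes SqlPackage directly from the access node and times the export to a local .bacpac file. It assumes that SqlPackage is installed and on the PATH; the server, database, credentials, and output path are placeholders.

      import subprocess
      import time

      # The server, database, credentials, and output path below are placeholders.
      cmd = [
          "sqlpackage",
          "/Action:Export",
          "/SourceServerName:myserver.database.windows.net",
          "/SourceDatabaseName:mydatabase",
          "/SourceUser:myuser",
          "/SourcePassword:mypassword",
          "/TargetFile:/staging/mydatabase_test.bacpac",
      ]

      # Time the direct export so you can compare it with your regular backup duration.
      start = time.time()
      subprocess.run(cmd, check=True)
      print(f"Direct export completed in {time.time() - start:.0f} seconds")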

  • For more information about troubleshooting import and export operations that take a long time, go to "Azure SQL Database and SQL Managed Instance Import/Export service takes a long time to import or export a database" on the Azure SQL website.
