Throughput during an Azure Cosmos DB backup is determined by several variables, including provisioned throughput. The provisioned throughput of each container or database, as configured in the Azure portal, determines the maximum rate at which Commvault software can read data from that container. For more information, see Introduction to provisioned throughput in Azure Cosmos DB on Microsoft's website.
Provisioned throughput also indirectly influences the number of physical partitions for a container. For more information, see Partitioning and horizontal scaling in Azure Cosmos DB on Microsoft's website.
The parallelism obtained during backup operations for a container is the minimum of the following values:
The number of streams available to back up the container. Note that the number of streams configured for the storage policy is shared across the backups of all containers configured as subclient content.
The number of physical partitions for the container.
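The parallelism rule above can be sketched as a minimal Python helper (the function name and sample values are illustrative, not part of the Commvault product):

```python
def effective_parallelism(streams_for_container: int, physical_partitions: int) -> int:
    """Backup parallelism for one container: the smaller of the number of
    streams available to it and its physical partition count."""
    return min(streams_for_container, physical_partitions)

# Example: 8 streams are available, but the container has only 3 physical
# partitions, so at most 3 streams can read from it in parallel.
print(effective_parallelism(8, 3))  # prints 3
```

This makes the bottleneck explicit: adding streams beyond the container's physical partition count does not increase backup throughput for that container.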
To increase backup throughput for containers with an inadequate number of physical partitions, you can temporarily increase provisioned throughput for those containers to a high value, such as at least 50,000 RU/s. Do not lower the elevated provisioned throughput value until automatic re-partitioning has finished. The container then retains the increased number of physical partitions, even after you return provisioned throughput to a much lower value.
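To see why a high RU/s setting forces re-partitioning: Azure Cosmos DB serves at most 10,000 RU/s from a single physical partition, so the provisioned value sets a floor on the partition count. The helper below is an illustrative sketch of that arithmetic, not a Commvault or Azure API:

```python
import math

# Azure Cosmos DB serves at most 10,000 RU/s per physical partition,
# so provisioned throughput implies a minimum number of partitions.
MAX_RUS_PER_PHYSICAL_PARTITION = 10_000

def min_physical_partitions(provisioned_rus: int) -> int:
    """Lower bound on the physical partition count for a given RU/s setting."""
    return math.ceil(provisioned_rus / MAX_RUS_PER_PHYSICAL_PARTITION)

# Provisioning 50,000 RU/s forces at least 5 physical partitions,
# which in turn allows up to 5 backup streams to read in parallel.
print(min_physical_partitions(50_000))  # prints 5
```

Because splitting is one-way, those partitions persist after you lower the RU/s value, which is what makes the temporary increase effective.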
For large containers that require high backup throughput, use an access node that is not heavily loaded with other activity and that has sufficient RAM and CPU cores. Using the CommServe computer as an access node can negatively affect backup performance.