Best Practices - Linux File System

Table of Contents

Eliminating Backup Failures

Avoiding Duplicate Content Backups on Clustered File Systems

Reconfiguring Default Subclient Content

Restore by Job

Resource Control Groups for Commvault

Optimizing the CPU Usage on Production Servers

Optimizing Collect File Creation

Eliminating Backup Failures

You can use filters to exclude items that consistently fail and are not integral to the operation of the system or applications. Some items fail because they are locked by the operating system or an application and cannot be opened at the time of the data protection operation; this often occurs with certain system-related files and database application files. Keep in mind that after adding failed files to the filter, you must run a full backup to remove them from subsequent backups.

Avoiding Duplicate Content Backups on Clustered File Systems

Note: When backups are run from multiple nodes of a clustered file system, the same content is backed up multiple times. To avoid such duplicate content backups, run backups from only one node. If the other nodes are also configured to run backups, make sure that the clustered file system mount point is added to the file system exclusion list of each physical machine's subclients.
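As a quick way to identify candidate mount points on each node, you can list mounts of common clustered file system types. This is a generic sketch, not a Commvault command; gfs2 and ocfs2 are example types, so substitute the file system your cluster actually uses:

```shell
# List mount points of common clustered file system types so they can be
# reviewed for the physical nodes' subclient exclusion lists.
# gfs2 and ocfs2 are example types; substitute your cluster file system.
awk '$3 == "gfs2" || $3 == "ocfs2" { print $2 }' /proc/mounts
```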

Reconfiguring Default Subclient Content

We recommend that you do not reconfigure the content of the default subclient, because doing so disables its ability to serve as a catch-all entity for client data. As a result, some data might not get backed up or scanned.

Restore by Job

Avoid running restores by job for jobs associated with the default backup set if you do not want to restore operating system files or directories. The entire contents of the backed-up client are restored, and the client where you are restoring might run out of space.

Resource Control Groups for Commvault

Resource Control Groups for Commvault provide a mechanism to control CPU and other resources for Commvault processes so that they operate within set constraints. For details, see Resource Control Groups for Commvault.
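Commvault applies these limits through its own configuration (see the linked topic). The underlying Linux mechanism is control groups; purely as an illustration of that mechanism, a systemd slice can cap the CPU available to a group of processes. The slice name and quota below are hypothetical, not Commvault's actual configuration:

```ini
# /etc/systemd/system/backup.slice -- hypothetical unit, for illustration only
[Unit]
Description=CPU cap for backup workloads

[Slice]
# Limit all processes placed in this slice to 50% of one CPU.
CPUQuota=50%
```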

Optimizing the CPU Usage on Production Servers

In virtualized environments (for example, LPAR, WPAR, or Solaris Zones) where dedicated CPUs are not allocated, backup jobs may cause high CPU usage on production servers. The following measures can be taken to optimize CPU usage:

  • Set an appropriate priority for backup jobs by using the dNICEVALUE registry key, which restricts a specific backup job from consuming all available CPU resources. By default, Commvault processes run at the default priority on client computers: if CPU cycles are available, Commvault processes use them for backup and restore operations, and if the CPU is in use by other application or system processes, Commvault processes do not preempt them. If you want to give higher priority to other application or system processes that are running at the default priority, modify the priority of the Commvault processes by using the following steps:
  1. From the CommCell Browser, navigate to Client Computers.
  2. Right-click the <Client> and click Properties.
  3. Click Advanced, and then click the Additional Settings tab.
  4. Click Add.
  5. In the Name field, type dNICEVALUE.

    The Category and Type fields are populated automatically.

  6. In the Value field, type the appropriate value.

    For example, 15.

  7. Click OK.

    Note: Restart the services on the client after setting this key.

  • Client side compression, encryption, and deduplication operations also consume considerable CPU resources. Moving these operations from the client to the MediaAgent will help reduce the additional CPU load.
  • Using a proxy server for IntelliSnap operations moves the CPU load to the proxy, further decreasing the overhead on production servers.
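The dNICEVALUE setting maps onto the standard Linux nice value, which you can observe with ordinary tools. The sketch below is not a Commvault command; it starts a throwaway process at nice value 15 (the same value used in the example above) and reads the priority back to confirm it took effect:

```shell
# Launch a short-lived process at nice value 15 -- the same value used
# in the dNICEVALUE example above -- then read back its priority.
nice -n 15 sleep 30 &
pid=$!

# Print the process's nice value; expect 15.
ps -o ni= -p "$pid"

kill "$pid"
```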

Optimizing Collect File Creation

The collect file records the path and name of each scanned file that is included in the backup. When you back up a large amount of data, large collect files are generated. You can split the content across a greater number of collect files to reduce the time a data reader takes to read each collect file.

By default, the number of collect files is twice the number of data readers (2 x number of data readers), where 2 is the multiplication factor that determines how many collect files are created for the job. You can change this multiplication factor.
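The arithmetic is simple to check. For instance, with 4 data readers, the default factor of 2 yields 8 collect files, while a factor raised to 4 (a hypothetical value for illustration) yields 16:

```shell
# Collect files created = multiplication factor x number of data readers.
readers=4
echo $(( 2 * readers ))   # default factor of 2 -> 8 collect files
echo $(( 4 * readers ))   # factor raised to 4  -> 16 collect files
```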

Note: Select the Allow multiple data readers within a drive or mount point option on the Performance tab of the Advanced Subclient Properties dialog box.

Use the following steps to modify the multiplication factor:

  1. From the CommCell Browser, navigate to Client Computers.
  2. Right-click the <Client> and click Properties.
  3. Click Advanced and then click the Additional Settings tab.
  4. Click Add.
  5. In the Name field, type For_Multiple_Reads_On_Disk_Collect_Split_Multiplication_Factor.

    The Category and Type fields are populated automatically.

  6. In the Value field, type an integer value greater than 2.

    For example, 4.

  7. Click OK.

Last modified: 12/8/2017 7:05:30 AM