Online Help

Deduplication Database Properties - General

Use this dialog box to view the deduplication database (DDB) information.

Deduplication Database Name

The name of the deduplication database.

Use this space to modify the name of the DDB.

Creation Time

The date and time that the active DDB was created.

Version

Displays the software version of the DDB.

Estimated baseline size for new DDB

The amount of disk space required for new backup data when the existing DDB is sealed.

DDB Access Path

The following information is displayed for each partition configured for the storage policy copy.

Partition

The number of partitions configured for the DDB.

MediaAgent And Partition

The name of the MediaAgent hosting the DDB and the DDB path.

Minimum Free Space (MB)

The total amount of free space that must be available at all times on the disk that is hosting the DDB.

By default, the minimum free space required on the volume hosting the DDB is set to 5120 MB (5 GB). Use this space to modify the amount of minimum free space.

Note: You cannot specify a value lower than 5120 MB.

If the free disk space falls below the specified size, then:

  • The DDB is marked as offline.

  • Backup jobs do not continue.

  • Pruning of backed-up data from the disk is prevented until the DDB is available.

Therefore, maintain the minimum free space on the volume at all times to avoid backup failures and to allow disk space to be reclaimed.

Free Space Warning (MB)

Generates an event message or the MediaAgents - Disk Space Low alert (if configured) when the amount of free space on the disk that is hosting the DDB falls below the specified amount.

By default, the minimum free space at which an event message is generated is set to 10240 MB (10 GB). Use this space to modify the value.

Notes

  • You cannot specify a value lower than 10240 MB (10 GB).

  • If free space on the DDB disk falls below the 10240 MB threshold, the SIDBEngine.log file reports that the zeroref journal is deleted. When free space on the DDB disk rises above the high threshold of 20480 MB, the SIDBEngine.log file reports that the zeroref journal is recreated.

Deduplication Database Properties - Settings

Use this dialog box to configure settings for the deduplication database (DDB).

Enable Software compression with Deduplication

By default, when a storage policy is configured to use deduplication, compression is automatically enabled for the storage policy copy. All subclients associated with this storage policy use the storage policy compression settings. That is, the Use Storage Policy option is enabled on the Subclient Properties dialog box of all subclients associated with the storage policy.

Do not Deduplicate against objects older than n day(s)

The number of days after which a unique data block cannot be used for deduplication during new data protection jobs. Setting this value ensures that very old data blocks are not allowed as the 'origin' data for newer data protection jobs that are deduplicated.

Important: If you set a value of less than 30 days, the window displays that value but it internally defaults to 365 days. For example, if you set the value to 29 days, the window displays 29 days, but data blocks as old as 365 days are still used for deduplication during new data protection jobs.

Enable garbage collection

Use this option to optimize DDB pruning by reducing disk I/O.

Enable pruning logs for reconstruction

Use this option to optimize DDB reconstruction performance. A pruning journal is maintained that logs all the records removed during pruning operations.

Enable Physical Pruning

This option is selected by default. You can clear the check box to disable the physical pruning on demand for the DDB.

Notes:

  • To perform this action, you must have either the Storage Policy Management capability or the MediaAgent Management capability. If you have the MediaAgent Management capability, the configuration value Provide user with MediaAgent management rights additional capabilities for libraries, data paths, and storage policies must be set to 1. For more information, see Media Management Configuration: Service Configuration.

  • You can disable ongoing pruning operations only on version 11 MediaAgents.

  • The Enable Physical Pruning option is unavailable for migrated deduplication databases.

Deduplication Database Properties - Audit

Use the Audit tab to create notes about the deduplication database.

Audit History

Lists all of the notes previously entered about the deduplication database, including the user who entered the note, the severity, and the date that the note was entered.

New Audit

This button opens the New Audit dialog box.

Save As Script

Click to open the Save As Script dialog box, which allows you to save this operation and the selected options as a script file (in XML format). The script can later be executed from the command line interface by using the qoperation execute command.

When you save an operation as a script, each option in the dialog box has a corresponding XML parameter in the script file. When executing the script, you can modify the value of any of these XML parameters as needed.
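For example, assuming the operation was saved as createtask.xml (the host name, user name, and file name here are illustrative placeholders, not values from this document), the saved script might be run as:

```shell
# Log in to the CommServe, then execute the saved XML script.
# Host, user, and file name below are placeholders.
qlogin -cs commserve01 -u admin
qoperation execute -af createtask.xml
```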

Configure Additional Partitions

Use this dialog box to add more partitions to an existing deduplication database (DDB) that is used by a storage policy enabled with deduplication.

Number of Partitions

Select the number of partitions that you want to add to an existing DDB.

Partition

Displays the number of partitions together with their respective paths and MediaAgents.

MediaAgent and Partition Path

Displays the selected MediaAgent and the path location of the DDB partition.

DDB Network Interface

Specify the network interface name or IP address to configure a dedicated Network Interface Card (NIC) on each partition MediaAgent and to set up a data interface pair (DIP) between the data path and partition MediaAgents.

Note

When the data path MediaAgent and the partition MediaAgent are the same, a configured DIP resolves to the loopback IP (127.0.0.1). In that configuration, the DDB storage policy and storage policy copy properties display the DDB Network Interface as Default.

Choose Path

Double-click to enter the MediaAgent and partition path details.

MediaAgent

Select the existing MediaAgent from the list.

Partition Path

Browse and select the location of the DDB partition.

Advanced

DDB Network Interface

Specify the network interface name or IP address to configure a dedicated Network Interface Card (NIC) on each partition MediaAgent and to set up a data interface pair (DIP) between the data path and partition MediaAgents.

Note

When the data path MediaAgent and the partition MediaAgent are the same, a configured DIP resolves to the loopback IP (127.0.0.1). In that configuration, the DDB storage policy and storage policy copy properties display the DDB Network Interface as Default.

VSS COW Volume

Browse to the location that hosts the snapshot of the drive where the DDB is hosted.

Deduplication - Advanced

Use this dialog box to modify the advanced deduplication options.

Temporarily disable deduplication

Use this check box to temporarily suspend deduplication during backups for diagnostics and maintenance. When you clear the check box, signature generation and data deduplication resume.

If the copy is dependent on a network storage pool, then deduplication is disabled for the copy when the deduplication is disabled for the storage pool. For more information, see Disabling the Deduplication for a Network Storage Pool.

To continue client backups without deduplication during DDB recovery, use Allow backup jobs to run to deduplication storage policy copy when DDB is in an unusable state option on the Media Management Configuration dialog box. For more information, see How do I continue my client backups during DDB recovery?.

Deduplication Options

Select options to perform deduplication operations in DASH (Deduplication Accelerated by Streaming Hash) mode. In this mode, hash signatures generated for data segments are effectively used to accelerate data transfer.

Applies To: Primary Copy.

Enable DASH Full (Read Optimized Synthetic Full)

DASH Full is a read-optimized synthetic full operation. After the first full backup is complete, changed data blocks are protected during incremental or differential backups. A DASH Full operation reads the signatures from the metadata and updates the DDB and index files for existing data rather than physically copying the data, significantly reducing the time it takes to perform synthetic full backups.

Enable Deduplication on Clients

Select this option to enable source-side deduplication on the storage policy copy.

When this option is selected, the Use Storage Policy Settings option is enabled by default on the associated subclient properties, and all clients associated with this storage policy honor source-side deduplication.

Enable DASH Copy (Transfer only unique data segments to target)

This option is available only on secondary copies with deduplication.

DASH (Deduplication Accelerated by Streaming Hash mode) Copy is enabled by default and Disk Read Optimization is the default method used.

Note

If the source copy is without deduplication and the Network Optimized Copy option is not selected, or if DASH Copy is disabled, then all data blocks are transmitted, and signature generation and comparison are done on the destination copy's MediaAgent.

Disk Read Optimized Copy

Optimizes data transfer by using existing data signatures.

For disk read optimization, the source copy must be deduplication-enabled. Disk Read Optimized Copy uses existing deduplicated block signatures on the source copy for comparison against existing signatures in the destination copy's DDB. Only unique blocks are transmitted. During DASH Copy with disk read optimization:

  • The existing signatures are read from the data chunk's metadata (which contains the data block signatures) on the source copy.

  • The signature is compared against the destination copy's deduplication database (DDB).

    • If the signature already exists, the destination copy's DDB is updated to reflect that another copy of the data exists on the destination storage.

    • If the signature does not exist (unique data block), the destination copy's DDB is updated with the new signature, and the data block is copied to the destination copy.

Network Optimized Copy

Optimizes data transfer by performing data deduplication on the source.

For network read optimization, the source storage policy copy can be with or without deduplication. Network Optimized Copy reads each data block to create a signature for comparison against existing signatures in the destination copy's DDB. Only unique blocks are sent over the network. During DASH Copy with network read optimization:

  • Each data block is read from the source copy and a signature is generated, which also validates the integrity of the data on the source copy.

  • The signature is compared against the destination copy's DDB.

    • If the signature already exists, the destination copy's DDB is updated to reflect that another copy of the data exists on the destination storage.

    • If the signature does not exist (unique data block), the destination copy's DDB is updated with the new signature, and the data block is copied to the destination copy.
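Both DASH Copy modes above converge on the same destination-side bookkeeping and differ only in where the signature comes from: read from the source chunk metadata, or computed from the block itself. A minimal sketch with in-memory stand-ins for the DDB and storage (all names are illustrative, not product APIs):

```python
import hashlib

def update_destination(dest_ddb, signature, block, dest_storage):
    """Destination-side bookkeeping shared by both DASH Copy modes."""
    if signature in dest_ddb:
        dest_ddb[signature] += 1          # record another reference; no data moved
        return False                      # block was not transmitted
    dest_ddb[signature] = 1               # new unique block: register its signature
    dest_storage.append(block)            # and copy the block to the destination
    return True

def disk_read_optimized_copy(source_metadata, dest_ddb, dest_storage):
    """Signatures are read from the source copy's chunk metadata."""
    for signature, block in source_metadata:
        update_destination(dest_ddb, signature, block, dest_storage)

def network_optimized_copy(source_blocks, dest_ddb, dest_storage):
    """Each block is read and hashed, which also validates it on the source."""
    for block in source_blocks:
        signature = hashlib.sha512(block).hexdigest()
        update_destination(dest_ddb, signature, block, dest_storage)
```

In both modes, a duplicate signature costs only a DDB reference update; only unique blocks are actually written to destination storage.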

Enable source side disk cache

You can optimize the signature lookup process by setting up a local source-side cache on the client or on the source MediaAgent (for DASH Copy). After you set up the local source-side cache, signatures are looked up in the local cache first; a remote lookup is initiated only when a signature is not found there. Resolving most lookups locally reduces the response time for signature comparisons in a network with high latency.

  • On the primary copy, when this option is enabled, the source-side cache is configured on all clients associated with this storage policy.

  • On secondary copies, use the source-side cache option when both the source and destination MediaAgents are in a WAN environment. If the source and destination MediaAgents are in a network environment other than a WAN, enabling the source-side cache on those MediaAgents might degrade DASH Copy performance.

You can tune the source-side cache for DASH Copy by using the following options.

Tip: For faster DASH copy performance, move the Job Results directory to a faster disk on the source MediaAgent. For instructions, see Changing the Path of the Job Results Directory.

Limit the Max cache size to n MB

Use this option to set the maximum size of the source-side cache. The valid range:

  • For backup jobs: 1 GB to 128 GB.

  • For auxiliary copy jobs: 8 GB to 128 GB.

Optimize for High latency networks by avoiding remote lookups

Use this option to increase data protection performance when clients or the source MediaAgent (for DASH Copy) are in a high-latency network environment such as a WAN, while the Data Mover and DDB MediaAgents are in a fast network environment such as a LAN.

On the primary copy, enable this option to configure high latency optimization on all clients associated with this storage policy.

When high latency optimization is enabled, the client compares the signature against the local cache only. The DDB is not checked by the client or the source MediaAgent (for DASH Copy).

  • If the signature exists in the local cache, the data block is discarded.

  • If the signature does not exist in the local cache, the signature is added to the local cache (where it can be referenced by subsequent backup jobs), and the data block along with its signature is transmitted to the data mover MediaAgent.

    The data mover MediaAgent, with a local or remotely hosted DDB, compares signatures against the DDB.

    • If the signatures exist in the DDB, the data mover MediaAgent discards the data blocks and adds additional reference entries in the DDB.

    • If the signatures are not available in the DDB, the DDB is updated with the new signatures and the data is written to the disk.

Restriction: This option is not supported for a storage policy copy configured with the DDB Priming option with Source-Side Deduplication. High latency optimization is otherwise supported with or without source-side deduplication.
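Under high latency optimization, the decision flow above splits into a client side that consults only its local cache and a data mover side that performs the authoritative DDB comparison. A rough sketch (function and variable names are illustrative, not product APIs):

```python
def high_latency_client_lookup(signature, block, local_cache, outbound):
    """Client side: consult only the local cache; never query the remote DDB."""
    if signature in local_cache:
        return "discarded"                  # duplicate known locally: skip transfer
    local_cache.add(signature)              # remember for subsequent backup jobs
    outbound.append((signature, block))     # ship block + signature to the data mover
    return "transmitted"

def data_mover_receive(signature, block, ddb, disk):
    """Data mover side: the DDB comparison happens here, not on the client."""
    if signature in ddb:
        ddb[signature] += 1                 # add a reference entry; discard the block
    else:
        ddb[signature] = 1                  # new signature: write the data to disk
        disk.append(block)
```

The trade-off this sketches: the client avoids a high-latency round trip per signature, at the cost of sometimes transmitting blocks the DDB already knows about.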

Deduplication - Advanced Client Properties

Use this dialog box to configure deduplication options for the client.

Use Storage Policy Settings

Select this option to use the source-side deduplication setting enabled on the Storage Policy.

Perform Client Side Deduplication

Select this option to deduplicate the backup data on the source side before transferring the data to the MediaAgent. This setting applies to all deduplication-enabled jobs on this client.

Enable Client Side Disk Cache

Select this option to maintain a local cache of signatures for deduplicated data. Signatures are compared against the local cache first. When this option is enabled, each subclient maintains its own cache under the Job Results directory. However, you can set the CacheDBRootFolder additional setting to configure a different location for the local cache. For more information, see Configuring Client Side Disk Cache Location.

  • If the signature exists in the local cache, the block is discarded.

  • If the signature does not exist in the local cache, the signature is sent to the MediaAgent.

    • If the signature does not exist in the DDB, the MediaAgent requests that the data block be sent to the MediaAgent. Both the local cache and the DDB are updated with the new signature.

    • If the signature does exist in the DDB, the MediaAgent requests that the block be discarded.
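The two-tier lookup order above (local cache first, then the MediaAgent's DDB) can be sketched as follows; the in-memory sets and names are illustrative stand-ins, not product APIs:

```python
def client_side_lookup(signature, block, local_cache, ddb, transmitted):
    """Sketch of the lookup order: local cache first, then the MediaAgent's DDB."""
    if signature in local_cache:
        return "discarded"                # duplicate already known on the client
    # Not in the local cache: the signature is sent to the MediaAgent.
    if signature not in ddb:
        local_cache.add(signature)        # both the cache and the DDB learn it
        ddb.add(signature)
        transmitted.append(block)         # MediaAgent requests the data block
        return "transmitted"
    return "discarded"                    # MediaAgent asks for the block to be dropped
```

Note that a block is transmitted only when the signature is unknown to both tiers.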

When you configure the client-side disk cache location on a client computer or a MediaAgent, entries for deleted subclients or copies are pruned from the client-side database after 40 days. You can use additional settings to modify this default value of 40 days.

For instructions about adding an additional setting from the CommCell Console, see Adding an Additional Setting from the CommCell Console.

Exception: Signature caching is not supported on the Linux s390 and FreeBSD platforms.

Limit the Max Cache size to <n> MB

Use this option to set the maximum size of the source-side disk cache (CV_CLDB). The default value is 4096 MB (4 GB). The valid range of the signature cache size:

  • For backup jobs: 1 GB to 128 GB.

  • For auxiliary copy jobs: 8 GB to 128 GB.

The following calculation can be used to determine the approximate amount of space required for the signature cache:

(Size of the Application data in bytes / Deduplication block size in bytes) * 200 bytes

For example, if the application data size is 10 GB, and the deduplication block size is 128 KB, then the cache size (in bytes) can be calculated as:

[(10 * 1024 * 1024 * 1024) / (128 * 1024)] * 200 bytes
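The calculation above can be expressed directly; the 200 bytes per signature comes from the formula in this section, and the function name is illustrative:

```python
def estimated_cache_bytes(app_data_bytes, dedup_block_bytes, bytes_per_signature=200):
    """Approximate signature cache size: ~200 bytes per deduplication block."""
    return (app_data_bytes // dedup_block_bytes) * bytes_per_signature

# 10 GB of application data with a 128 KB deduplication block size:
cache_size = estimated_cache_bytes(10 * 1024**3, 128 * 1024)
# 81,920 blocks at 200 bytes each, i.e. roughly 16 MB of cache
```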

If the free space on the client computer where the local cache is located falls below 500 MB, the signature lookup on the local cache is skipped and the lookup happens in the DDB on the MediaAgent.

Signatures that are not referenced for 40 days after their insertion are pruned from the local cache. However, if the cache size exceeds the limit, signatures that have not been referenced in the last 14 days are pruned.

Enable High Latency Optimization

Use this option to increase backup performance when clients are in a high-latency network environment such as a WAN, while the Data Mover and DDB MediaAgents are in a fast, high-bandwidth environment such as a LAN. This option is not supported for a storage policy copy configured with a cloud storage library. When high latency optimization is enabled, the client compares signatures against the local cache only; the client does not look up the DDB.

  • If the signature exists in the local cache, the data block is discarded.

  • If the signature does not exist in the local cache, the signature is added to the local cache (where it can be referenced by subsequent backup jobs), and the data block along with its signature is transmitted to the data mover MediaAgent.

    The data mover MediaAgent, with a local or remotely hosted DDB, compares signatures against the DDB.

    • If the signatures exist in the DDB, the data mover MediaAgent discards the data blocks and adds additional reference entries in the DDB.

    • If the signatures are not available in the DDB, the DDB is updated with the new signatures and the data is written to the disk.

Disable Client Side Deduplication

Select this option to disable source-side deduplication on the client computer. This is useful when the Client Side Deduplication option is enabled on the storage policy copy and you want to disable source-side deduplication on specific clients associated with that storage policy copy.

Enable Variable Content Alignment

Variable content alignment is a content-aware approach to deduplication that further reduces the amount of data stored during backups; it can be especially effective for certain database backups. It works by aligning the segment boundaries of the backup data stream as minor changes are made to the data between incremental backups. Deduplication therefore benefits most from this feature on client systems whose backup data changes only slightly between backups.

Variable content alignment is performed on the client system, so you might experience some performance overhead, especially when it is used together with software compression.

Enabling this option for a client that has already performed deduplicated backups results in more space being consumed on the disk library. This happens because a fresh copy of the deduplicated data blocks, with new signatures, is created for that DDB. These new signatures do not match the existing signatures in the deduplication database, which creates a new baseline for the DDB.

DDB Subclient Properties (General)

Use this dialog box to manage the Deduplication Database (DDB) Subclient properties.

Client Name

The name of the client computer to which this subclient belongs.

iDataAgent

The name of the agent to which this subclient belongs.

Backup Set

The name of the backup set to which this subclient belongs.

Subclient Name

The name of the subclient.

Use VSS

Enable VSS snapshot method to take a snapshot of the drive that hosts the DDB during DDB backups.

Select VSS shadow copy storage association

Location of the VSS shadow copy (snapshot) of the drive hosting the DDB during DDB backups.

Number of Data Readers

Specifies the number of simultaneous backup data streams allowed for the subclient.

Allow multiple data readers within a drive or mount point

Specifies whether multiple data reads are allowed for a single Windows physical drive or Unix mount point during backups on this subclient.

Description

A description about the entity, which can include information about the entity's content, cautionary notes, and so on.

Deduplication - Move Partition

Use this dialog box to change the location or MediaAgent of the deduplication database (DDB).

Partition

Displays the number of partition(s) configured for the DDB.

MediaAgent And Partition Path

Displays the name of the MediaAgent hosting the partition of the DDB and the partition path.

Target MediaAgent And Partition Path

Displays the name of the newly assigned MediaAgent and the path of the partition.

Change Config only

This option allows you to recover the DDB by updating the MediaAgent or the location of the DDB only in the CommServe database.

This is useful when:

  • The MediaAgent of the partition is permanently offline and unable to access the DDB files.

  • The partition is unexpectedly lost due to permanent hardware failure.

  • DDB files are not available in the DDB location.

Current Move Partition Job Status

Displays the status of the current partition move job.

Deduplication - Reconstruct Dedupe Database Options Dialog

Use this dialog box to perform an on-demand or manual reconstruct of the deduplication database (DDB) for a storage policy copy.

Select Source MediaAgent

Select the source MediaAgent from which the DDB reconstruction will run.

Allow maximum number of Streams to be used in parallel

Sets the maximum number of data streams available to reconstruct the DDB.

No of Streams to be used in parallel

Allows you to select the number of data streams to reconstruct the DDB.

Reconstruct entire DDB without using a previous recovery backup

Use this option to perform a full recovery of the DDB when the DDB backup data is inaccessible:

  • For a single DDB: if the DDB backup data is in an invalid or unusable state.

  • For a multiple-partitioned DDB:

    • If the DDB backup data of one of the offline partitions is in an unusable state. In this scenario, full recovery is performed only for the offline partition(s).

    • If all partitions of the DDB are offline and, during the DDB reconstruction, the system finds that the DDB backup data of one of the partitions is unusable. In this scenario, full recovery is performed for all partitions.

When selected, the reconstruction job performs the following actions:

  • Deletes the existing partition(s).

  • Reads the entire data on the disk.

  • Recreates new partition(s) from the deduplicated data read from the disk.

A full reconstruction job can be time-consuming, because the entire disk must be read during the recovery operation.

Use Scalable Resource Allocation

Use this option to ensure that, if the DDB reconstruction job fails, it restarts from the point of failure.

Note

This option is selected by default for the automatic DDB reconstruction jobs that are triggered when the deduplication database (DDB) or partition of the DDB is detected as offline, unreadable, or unavailable.

Save As Script

Click to open the Save As Script dialog box, which allows you to save this operation and the selected options as a script file (in XML format). The script can later be executed from the command line interface by using the qoperation execute command.

When you save an operation as a script, each option in the dialog box has a corresponding XML parameter in the script file. When executing the script, you can modify the value of any of these XML parameters as needed.

Deduplication - Silo Options

Use this dialog box to configure the Silo storage options.

Remove DDB when the silo store is sealed

Select this option to automatically delete the sealed deduplication database (DDB). This frees up disk space by deleting the sealed DDB.

Note

The Remove DDB when the silo store is sealed option replaces the Archive Sealed DDB option that was available in version 9 and earlier. If you create the Silo storage copy in version 9 or earlier, and then upgrade your CommServe® system, the deduplication database is removed and not archived regardless of what version of the software is installed on your MediaAgents. The change in behavior from archiving to removing the deduplication database occurs automatically when you upgrade your CommServe system.

Number of Silos to be kept in cache

A silo is a set of disk volume folders (containing deduplicated data written to the disk library) associated with the DDB.

Use this option to specify the number of most recently backed-up silos (of active and sealed DDBs) to retain in the disk library (local cache). This eliminates the need to restore the data from tape to the disk library.

The number of silos retained corresponds to silos of active and sealed DDBs that have been backed up to tape. For example, if you set the value to 3, the currently active silo and the two most recently sealed silos are retained in the local cache.

Clear the option if you do not want to keep any silos in the cache. If the check box is cleared, the active silo is also removed from the disk library after it has been backed up to tape.

Enable Space Management

Enables Silo space management on the disk media.

Silo storage space management provides disk cleanup options that automatically reclaim primary disk space once the data is moved to Silo storage. Disk space occupied by deduplicated data can be effectively managed and reclaimed by enabling the space management option. Once enabled, only data on the disk media that has already been moved to Silo storage is considered for removal.

The following exceptions apply:

  • If a Silo is retained in the local disk cache (as described in Number of Silos to be kept in cache), it will not be considered for removal until the number of silos to be kept in cache is exceeded.

  • If the source copy of the Silo copy is configured for Auxiliary Copy, Data Verification, or Offline Content Indexing, the copy data is not removed until all dependencies are fulfilled.

  • If the disk space threshold for the mount path containing silo volumes has not been reached, the volumes are not considered for removal.

During a restore operation, if the backup data is available in the primary copy, it is restored from there. If the backup data has been moved to Silo storage, the necessary volumes are automatically restored to any mount path on the source copy disk that has sufficient space, and the data is made available to complete the initial restore operation. Space management subsequently removes this staged data.

Amount of Data Size Moved

The amount of data that is moved to the silo storage from the currently active and sealed silos.

Amount of Data Size to be Moved

The amount of data to be moved to the silo storage from the currently active and sealed silos.

Select MediaAgent for Silo Restores

By default, a MediaAgent with direct access to the tape library containing the silo copy is used to read the data. Optionally, select a specific MediaAgent to use for reading from tape.

Select Destination client for Silo Restores

By default, the MediaAgent with direct access to the disk library from which the volume folders originated is used to write the data back to disk. Optionally, select a specific MediaAgent to use for writing to disk.

Deduplication - Settings

Use this dialog box to configure deduplication database settings.

Use DDB Priming option with Source-Side Deduplication

Whenever a new DDB is created, a fresh copy of the signatures and the first occurrence of each subsequent data block are written to storage. However, these data blocks may already exist in storage, with their signatures contained in sealed DDBs residing on the MediaAgent or in the data center. The DDB Priming feature looks for signatures in previously sealed DDBs and uses them to baseline the new DDB. With source-side deduplication, this saves clients from having to transfer data blocks that are already available in storage.

This option is not supported on archive cloud storage or immutable storage.

Allow jobs to run to this copy while at least [ ] partition(s) are available

This option is available only for the storage policy configured with multiple partitioned DDB.

When this option is selected and the DDB is partially available (that is, one of the partitions is offline):

  • Backup or auxiliary copy jobs can continue with the remaining available partitions of the DDB.

    However, we recommend that you recover the offline partition as soon as possible to balance the load across the available partitions.

  • The pruning of deduplicated data from the disk does not complete unless all partitions are available.

    So, make sure to recover the offline partitions to reclaim the disk space. This also applies to data associated with sealed DDBs.

DDB Availability Option

  • Seal and Start new DDB automatically on detection of inconsistency: When this option is selected, if the DDB or one of its partitions is found to be offline or invalid, the DDB (including all partitions) is automatically sealed and a new DDB is created.

  • Pause and Recover current DDB: When this option is selected, the DDB partition is automatically reconstructed from the DDB backup when the system detects that the partition is offline or unavailable for use.

Deduplication Database creation

Use the following options to automatically seal the existing DDB and create a new DDB when the data size, number of days, or number of months reaches a selected value.

Create new DDB every [ ] days

When this option is selected, a new DDB is created after the specified number of days.

Create new DDB every [ ] TB

When this option is selected, a new DDB is created when the existing DDB reaches the specified size.

Create new DDB every [ ] Month(s). Starting from

When this option is selected, a new DDB is created at the specified interval in months, starting from the specified date.

You can create a new DDB with one of the following conditions:

  • Create new DDB every [ ] Days and/or Create new DDB every [ ] TB. If both options are set, a new DDB is created when either of the two conditions is satisfied.

  • Create new DDB every [ ] month(s). Starting from
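The either-or relationship between the day-based and size-based options can be sketched as a simple check. This is an illustrative sketch only; the function and parameter names are hypothetical and not part of the product:

```python
from datetime import datetime, timedelta
from typing import Optional

def should_seal_ddb(created: datetime, size_tb: float,
                    max_days: Optional[int], max_tb: Optional[float]) -> bool:
    """Return True when the existing DDB should be sealed and a new one created.

    max_days and max_tb stand in for the 'Create new DDB every [ ] days'
    and 'Create new DDB every [ ] TB' options; if both are set, hitting
    either limit triggers the seal.
    """
    age_exceeded = max_days is not None and datetime.now() - created >= timedelta(days=max_days)
    size_exceeded = max_tb is not None and size_tb >= max_tb
    return age_exceeded or size_exceeded
```

An unset option (None) simply never triggers, matching the behavior when only one condition is configured.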

Block level Deduplication factor (in KB)

The data block size used for deduplication.

Deduplication - DDB Information

Use this dialog box to view the deduplication database information.

Total Number of DDB(s)

The number of deduplication databases configured on the storage policy copy.

Total Size of Application Data across all the DDB(s)

The amount of application data backed up for the copy across all DDBs.

Total Data Size on Disk for all the DDB(s)

The total size of backup data stored on the disk after deduplication for the copy across all DDBs.

If you have Silo Copy configured, then this is the total size of backup data stored only on the disk.

Total Data Size for all the DDB(s)

The total size of backup data stored on the media after deduplication for the copy across all DDBs (Sealed and Non-Sealed DDB).

If you have silo copy configured, then this is the total size of backup data stored on the media (disk and tape).

For example, suppose your application data size is 10 GB and, after deduplication, 3 GB of data is stored on the disk. If you have silo copy configured and 1 GB of that data is moved to tape, this option displays the total size of the data occupied on both the disk and the silo destination, which is 3 GB. The Total Data Size on Disk option displays the data size available only on the disk, which is 2 GB.

Total Size of Application Data Ready to be freed

The amount of application data ready to be pruned on the storage media associated with the storage policy copy.

Estimated baseline size for new DDB

The amount of space required on the disk for a new backup data when the existing DDB is sealed.

Global Deduplication Policy/Storage Pool Properties - General

Use this dialog box to view or change the properties of the selected global deduplication policy/storage pool. Options in this dialog box include:

Storage Policy Name/Storage Pool Name

The name of the storage policy/storage pool. You can change the name of a storage policy/storage pool at any time without affecting the ability to restore data that may have already been backed up through this storage policy/storage pool.

Storage Policy Type/Storage Pool Type

Displays the type of storage policy/storage pool.

No. of Copies

The number of copies associated with this storage policy/storage pool.

Distribute data evenly among multiple streams for offline read operations

When selected, the data is evenly distributed across multiple streams. This option is not applicable if the source storage policy copy points to tape media.

Keep resource reservation cached for jobs on this storage policy

When selected, the resources used by a subclient during a backup are kept in the cache so that they can be reused by future backup jobs of the same subclient. The resource reservation is cached for backups triggered by any subclient associated with this storage policy.

Note that the time interval to keep the resource reservation information in the cache for backups is based on the value specified in the Timeout Interval (in minutes) for cached resources when using reservation backup caching feature parameter in the Media Management Configuration dialog box.

Description

Use this field to enter a description about the entity. This description can include information about the entity's content, cautionary notes, etc.

Allow subclient associations to this storage policy for 30 more days

This option is available when the average Q&I time of a DDB associated with the global deduplication policy reaches 80% of its threshold (two milliseconds). Select this option to continue assigning subclients to the storage policies that are associated with the global deduplication policy for 30 more days.

If this option was selected previously and the DDB is under the grace period, the following message is displayed with the grace period end date.

"Note: This Storage Policy will not accept subclient associations anymore after <mm/dd/yyyy hh:mm:ss>"

Global Deduplication Policy/Storage Pool Properties - Advanced

Use this dialog box to configure advanced options for the global deduplication policy/storage pool.

Block level Deduplication factor (in KB)

Specify the block size to be used for block-level deduplication. By default, the block size is set to 128 KB. You can select from the following block sizes: 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, and 1024 KB.

A newly created deduplicated storage policy or global deduplication policy/storage pool with a cloud library as the data path is set to 512 KB as the default block size.

If a storage policy is set with a block size, the block size is applicable for all copies in the policy except for copies with global deduplication. Storage policy copies with global deduplication inherit the block size set at the global deduplication policy.

To get the maximum benefit of deduplication, we recommend using the same block size for all copies in a storage policy. Therefore, if one or more copies of a storage policy are associated with a global deduplication policy/storage pool, configure the storage policy and the global deduplication policy/storage pool with the same block size.

Configure the block size of the global deduplication policy/storage pool before any data is written to the deduplication database. If any data is written to the deduplication database, you cannot modify the block size.

If the source primary copy uses Hyperscale or Hedvig storage and the secondary copy points to the cloud storage on a Global Deduplication storage policy (GDSP), the block size value on the secondary copy automatically changes to the block size value on the source primary copy.

Enable Storage Policy Level Media Password

Click to enable the storage policy level media password protection feature. The media password is used to prevent unauthorized access to the data residing on media used by the system for this storage policy. If not enabled, the CommServe Level Media Password is the default password.

Change Media Password

Select this option to change the storage policy level media password.

Enter New Media Password

Enter a new password.

Confirm New Media Password

Re-enter the new password for confirmation.

Enter Old Media Password

Enter the previous media password used for this feature. If this is the initial configuration of the storage policy level media password, enter the CommServe level media password.

Note

If you choose to password protect your media, it is essential that you record this password. In certain disaster recovery scenarios, it may be necessary to read your backup data directly from the backup media. This password will be required to directly access the media.

Storage Pool Copy Properties

Click this button to view or change the properties of the storage policy copy.

Global Deduplication Policy/Storage Pool Properties - Dependent Copies

Use this dialog box to view a list of dependent copies for this storage policy/storage pool.

Storage Policy Name

The name of the dependent storage policy.

Copy Name

The name of the storage policy copy.

Retain for

Displays the retention period that is set for the dependent storage policy copy. This retention is set on the Retention tab of the Storage Policy Copy properties dialog box.

Global Deduplication Policy/Storage Pool Properties - Security

Use this dialog box to:

  • Identify the user groups to which this CommCell object is associated.

  • Associate this object with a user group.

  • Disassociate this object from a user group.

Available Groups

Displays the names of the user groups that are not associated with this CommCell object.

Associated Groups

Displays the names of user groups that are associated with this CommCell object.

Global Deduplication Policy Properties - Silo Restore Precedence

Precedence

A numeric identifier assigned to a storage policy copy. You can specify the storage policy copy from which you want data to be restored through the restore options of the individual agents.

Copy Name

The name of the storage policy copy.

Up arrow - Moves the selected copy one row up in the copy precedence list (that is, decrements the copy precedence by 1).

Down arrow - Moves the selected copy one row down in the copy precedence list (that is, increments the copy precedence by 1).

Source-Side Signature Cache with Disk Read Optimized DASH Copy

When you run an Auxiliary Copy job as a DASH Copy with Disk Read Optimization and source-side cache, the following process occurs:

  • The signatures are read from the metadata information of the primary disk data.

  • The signatures are compared with the local source-side cache of signatures on the source MediaAgent.

    • If the signatures exist, the data block was processed in a previous job and only the signature reference is transferred to the destination DDB.

    • If the signature does not exist, the data and the signature are transferred to the destination MediaAgent.

      If the signature is not available in the destination DDB, the data block is new; the destination DDB and the local source-side cache are updated with the new signature, and the new data block is copied to the destination copy.

Source-Side Signature Cache with Network Read Optimized DASH Copy

When you run an Auxiliary Copy job as a DASH Copy with Network Read Optimization and source-side cache, the following process occurs:

  • The data is expanded on the source MediaAgent and signatures are generated for each data block.

  • The generated signatures of the data are compared with the local source-side signature cache on the source MediaAgent.

    • If the signatures exist, the data block was processed in a previous job and only the signature reference is transferred to the destination DDB.

    • If the signatures do not exist, the data along with the signatures is transferred to the destination MediaAgent.

      If the signatures are not available in the destination DDB, the data block is new and both the destination DDB and local source-side cache are updated with the new signatures and the new data block is copied to the destination copy.
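The per-block decision shared by both DASH Copy modes can be sketched as follows. This is a simplified, hypothetical illustration that uses sets as stand-ins for the source-side signature cache and the destination DDB; it is not the product's actual implementation:

```python
import hashlib

def process_block(block: bytes, source_cache: set, destination_ddb: set) -> str:
    """Decide what is transferred for one data block during a DASH Copy job."""
    # Stand-in for the deduplication signature of the block.
    signature = hashlib.sha256(block).hexdigest()

    if signature in source_cache:
        # Block was processed in a previous job: only the signature
        # reference is transferred to the destination DDB.
        return "reference"

    # Signature not in the local cache: the data and the signature are
    # transferred to the destination MediaAgent for comparison.
    if signature in destination_ddb:
        # The destination already holds this block; no new data is written.
        return "duplicate"

    # Block is new: update the destination DDB and the local source-side
    # cache, and copy the new data block to the destination copy.
    destination_ddb.add(signature)
    source_cache.add(signature)
    return "new"
```

Repeating a block after it has been seen once shows the cache taking effect: the second call returns a reference instead of resending the data.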
