Hardware Specifications for Deduplication Four Partitioned Extended Mode

Deduplication partitioned extended mode uses a grid of two to four MediaAgents to host the individual physical partitions (up to two deduplication database (DDB) partitions per MediaAgent) of larger logical DDBs (up to two per grid). This configuration is typically used to increase the front-end terabytes (FET) or back-end terabytes (BET) that a single DDB can manage, to allow extended or alternate retention of data through a DASH copy from the primary DDB to the secondary DDB, or to allow cross-site copies of data in a disaster recovery (DR) configuration.
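
The layout described above can be sketched in a few lines. This is a minimal illustration only, assuming the limits stated in this section (two to four MediaAgents per grid, up to two logical DDBs per grid, up to two DDB partitions per MediaAgent) and assuming each node hosts one partition of each logical DDB; the `Grid` class and its names are hypothetical, not part of any Commvault API:

```python
from dataclasses import dataclass

MIN_MEDIA_AGENTS = 2                # a grid uses two to four MediaAgents
MAX_MEDIA_AGENTS = 4
MAX_LOGICAL_DDBS = 2                # up to two logical DDBs per grid
MAX_PARTITIONS_PER_MEDIA_AGENT = 2  # up to two DDB partitions per MediaAgent

@dataclass
class Grid:
    media_agents: int
    logical_ddbs: int

    def validate(self) -> None:
        if not MIN_MEDIA_AGENTS <= self.media_agents <= MAX_MEDIA_AGENTS:
            raise ValueError("a grid uses two to four MediaAgents")
        if not 1 <= self.logical_ddbs <= MAX_LOGICAL_DDBS:
            raise ValueError("a grid hosts up to two logical DDBs")

    def partitions_per_media_agent(self) -> int:
        # Assumption: each MediaAgent hosts one physical partition of
        # each logical DDB, staying within the two-partition limit.
        return self.logical_ddbs

    def total_partitions(self) -> int:
        return self.media_agents * self.partitions_per_media_agent()

# A full four-node grid with two logical DDBs hosts eight physical partitions.
grid = Grid(media_agents=4, logical_ddbs=2)
grid.validate()
print(grid.total_partitions())  # 8
```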

For details on supported platforms, see Building Block Guide - Deduplication System Requirements.

You can use the deduplication partitioned extended mode in the following scenarios:

  • MediaAgent Hosting DDBs of Four Sites
  • Long Term Retention
  • Four DDBs for Primary Copies per Grid
  • Four DDBs for Secondary Copies per MediaAgent

The following tables provide the hardware requirements for Extra Large and Large environments in deduplication partitioned extended mode. For Medium, Small, and Extra Small environments, partitioned mode is not recommended.

Terms used in the following hardware requirements:

  • Deduplication Node - MediaAgent hosting the DDB.
  • Grid - The collection of the deduplication nodes.

Important:

  • The following hardware requirements apply to MediaAgents with Commvault deduplication. They do not apply to tape libraries, to MediaAgents without deduplication, or to MediaAgents that use third-party deduplication applications.
  • The suggested workloads are not software limitations; they are design guidelines for sizing under specific conditions.
  • The TB values are base-2.
  • To achieve the required IOPS, consult your hardware vendor for the configuration most suitable for your implementation.
  • The Index Cache Disk recommendation is for unstructured data types such as files, VMs, and granular messages. Structured data types such as applications and databases need significantly less index cache.

Number of MediaAgents in a Partitioned DDB, Grid Backend Storage, and CPU/RAM

  • Number of MediaAgents in Partitioned DDB
    • Extra Large: 4
    • Large: 4
  • Grid Backend Storage [1, 2]
    • Extra Large: Up to 2000 TB
    • Large: Up to 1200 TB
  • CPU/RAM per node
    • Extra Large: 16 cores, 128 GB RAM (or 16 vCPUs/128 GB RAM)
    • Large: 12 cores, 64 GB RAM (or 12 vCPUs/64 GB RAM)
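
The grid back-end caps above can be read as a simple tier lookup. The helper below is a hypothetical illustration (the function name and behavior are not a Commvault tool) that maps a planned grid back-end capacity to the smallest tier whose cap covers it:

```python
# Grid back-end storage caps from the table above, in base-2 TB.
GRID_BACKEND_CAP_TB = {"Large": 1200, "Extra Large": 2000}

def sizing_tier(planned_backend_tb: float) -> str:
    """Return the smallest tier whose back-end cap covers the planned capacity."""
    for tier, cap_tb in GRID_BACKEND_CAP_TB.items():  # insertion order: Large first
        if planned_backend_tb <= cap_tb:
            return tier
    raise ValueError("exceeds the Extra Large cap; plan additional grids")

print(sizing_tier(900))   # Large
print(sizing_tier(1800))  # Extra Large
```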

Disk Layout per Node

  • OS or Software Disk
    • Extra Large: 400 GB SSD-class disk
    • Large: 400 GB usable disk, minimum 4 spindles at 15K RPM or higher, OR SSD-class disk
  • Deduplication Database (DDB) Disk 1 per node
    • Extra Large: 2 TB SSD-class disk/PCIe IO cards [3], with 2 GB controller cache memory. For Linux, the DDB volume must be configured by using the Logical Volume Management (LVM) package.
    • Large: 1.2 TB SSD-class disk/PCIe IO cards [3], with 2 GB controller cache memory. For Linux, the DDB volume must be configured by using the Logical Volume Management (LVM) package.
  • Deduplication Database (DDB) Disk 2 per node
    • Extra Large: 2 TB SSD-class disk/PCIe IO cards [3], with 2 GB controller cache memory. For Linux, the DDB volume must be configured by using the Logical Volume Management (LVM) package.
    • Large: 1.2 TB SSD-class disk/PCIe IO cards [3], with 2 GB controller cache memory. For Linux, the DDB volume must be configured by using the Logical Volume Management (LVM) package.
  • Suggested IOPS for each DDB Disk per node
    • Extra Large: 20K dedicated random IOPS [4]
    • Large: 15K dedicated random IOPS [4]
  • Index Cache Disk per node [6, 7]
    • Extra Large: 2 TB SSD-class disk [3, 5]
    • Large: 1 TB SSD-class disk [3]

Suggested Workloads for Grid

  • Parallel Data Stream Transfers
    • Extra Large: 400
    • Large: 300
  • Laptop Clients
    • Extra Large: Up to 20,000 per grid
    • Large: Up to 10,000 per grid

Front End Terabytes (FET)

  • Extra Large:
    • Primary Copy Only: 440 TB to 640 TB FET
    • Secondary Copy Only: 440 TB to 640 TB FET
    • Mix of Primary and Secondary Copy: 240 TB to 320 TB primary FET AND 240 TB to 320 TB secondary FET
  • Large:
    • Primary Copy Only: 240 TB to 520 TB FET
    • Secondary Copy Only: 240 TB to 520 TB FET
    • Mix of Primary and Secondary Copy: 160 TB to 240 TB primary FET AND 160 TB to 240 TB secondary FET
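
As a worked check against the mixed-copy guideline above, the sketch below tests a planned workload against the upper ends of the Large-tier ranges. The function is hypothetical, and these figures are design guidelines for sizing, not hard software limits:

```python
# Upper ends of the Large-tier "Mix of Primary and Secondary Copy"
# FET ranges quoted above (base-2 TB).
LARGE_MIX_PRIMARY_CAP_TB = 240
LARGE_MIX_SECONDARY_CAP_TB = 240

def fits_large_mixed_grid(primary_fet_tb: float, secondary_fet_tb: float) -> bool:
    """True when a planned mixed workload stays within the Large-tier guideline."""
    return (primary_fet_tb <= LARGE_MIX_PRIMARY_CAP_TB
            and secondary_fet_tb <= LARGE_MIX_SECONDARY_CAP_TB)

print(fits_large_mixed_grid(200, 180))  # True
print(fits_large_mixed_grid(300, 100))  # False: primary FET exceeds guideline
```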

Primary Copy Only (OR) Secondary Copy Only for Grid

  • Extra Large:
    • 640 TB FET of files (includes OnePass for Files)
    • 380 TB FET of VM data (mix of VSA on VMs and MediaAgent)
    • 440 TB FET of VM and file data (mix of VSA on VMs and MediaAgent)

    Notes:
    • Assumes an incremental-forever strategy with periodic DASH fulls and staggered schedules.
    • A combination of the above data types should not exceed 380 TB to 440 TB FET on the primary copies.
  • Large:
    • 520 TB FET of files (includes OnePass for Files)
    • 320 TB FET of VM data (mix of VSA on VMs and MediaAgent)
    • 360 TB FET of VM and file data (mix of files, and VSA on VMs and MediaAgent)

    Notes:
    • Assumes an incremental-forever strategy with periodic DASH fulls and staggered schedules.
    • A combination of the above data types should not exceed 240 TB to 360 TB FET on the primary copies.

Mixed Primary and Secondary Copy for Entire Grid

  • Extra Large:
    • Primary Copy:
      • 320 TB FET of files (includes OnePass for Files)
      • 240 TB FET for VMs and files (mix of files with VSA on MediaAgent, and multiple VMs with VSA)
      • 240 TB FET for databases or applications
    • Secondary Copy:
      • 240 TB to 320 TB FET originating from the primary copy of another deduplication database
  • Large:
    • Primary Copy:
      • 280 TB FET of files (includes OnePass for Files)
      • 200 TB FET for VMs and files (mix of files with VSA on MediaAgent, and multiple VMs with VSA)
      • 180 TB FET for databases or applications
    • Secondary Copy:
      • 120 TB to 240 TB FET originating from the primary copy of another deduplication database

Supported Targets

  • Tape Drives
    • Extra Large: Not recommended
    • Large: Not recommended
  • Disk Storage without Commvault Deduplication
    • Extra Large: Not recommended
    • Large: Not recommended
  • Deduplication Disk Storage
    • Extra Large: Up to 2000 TB, direct-attached or NAS
    • Large: Up to 1200 TB, direct-attached or NAS
  • Third-Party Deduplication Appliances
    • Extra Large: Not recommended
    • Large: Not recommended
  • Cloud Storage
    • Extra Large: Yes; primary copy on disk and secondary copy on cloud
    • Large: Yes; primary copy on disk and secondary copy on cloud
  • Deploying MediaAgent on Cloud / Virtual Environments
    • Extra Large: Yes; for AWS or Azure sizing, see the corresponding sizing guides
    • Large: Yes; for AWS or Azure sizing, see the corresponding sizing guides

Footnotes

  1. Maximum size per DDB.
  2. Assumes standard retention of up to 90 days. Longer retention reduces the FET that this configuration can manage; the back-end capacity remains the same.
  3. SSD-class disk indicates PCIe-based cards or internal dedicated high-endurance drives. We recommend MLC (Multi-Level Cell) class or better SSDs.
  4. A dedicated RAID 1 or RAID 10 group is recommended.
  5. This recommendation is for unstructured data types such as files, VMs, and granular messages. Structured data types such as applications and databases require considerably less index cache.
  6. To improve indexing performance, store the index data on a solid-state drive (SSD). The following agents and use cases require the best possible indexing performance:
    • Exchange Mailbox Agent
    • Virtual Server Agents
    • NAS filers running NDMP backups
    • Backing up large file servers
    • SharePoint Agents
    • Ensuring maximum performance whenever it is critical
  7. The index cache directory must be on a local drive. Network drives are not supported.
  8. Dedicated volumes are recommended for the Index Cache Disk and the DDB disks.

Related Topics

Tuning Performance When Using a Partitioned Deduplication Database

Last modified: 5/9/2019 7:19:12 PM