HyperScale X Reference Architecture


HyperScale X scale-out software creates a storage pool for housing protected data. The initial creation of a storage pool requires three similarly configured nodes. The pool can then be expanded in single-node or multi-node increments. The node configurations are sized with sufficient resources to support all MediaAgent services while ensuring resiliency and performance. The CommServe provides a consolidated view for creating, monitoring, and managing the storage pool and the HyperScale X nodes.
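The sizing rules above (a three-node minimum for pool creation, then growth in single- or multi-node increments) can be sketched as a small validation helper. This is a hypothetical illustration, not a Commvault API; the function names and constant are assumptions.

```python
MIN_POOL_NODES = 3  # initial pool creation requires 3 similarly configured nodes


def can_create_pool(node_count: int) -> bool:
    """A new storage pool needs at least three similarly configured nodes."""
    return node_count >= MIN_POOL_NODES


def can_expand_pool(current_nodes: int, added_nodes: int) -> bool:
    """An existing pool can grow in single-node or multi-node increments."""
    return current_nodes >= MIN_POOL_NODES and added_nodes >= 1
```

For example, `can_create_pool(2)` fails while `can_create_pool(3)` succeeds, and an existing three-node pool may add one node or several at a time.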

HyperScale X platform resilience is a function of the system architecture and the best practices implemented to deliver the required level of service. On the HyperScale X platform, the inherent application-level resilience of a distributed deduplication database and index cache is complemented by the scale-out architecture, which uses standard servers with redundant components. Data resilience on the HyperScale X platform is based on (4+2) erasure coding: each block of data is broken into 4 data chunks and 2 parity chunks, which are distributed across the nodes in the pool. (4+2) erasure coding is the only method used, and it provides tolerance against hardware failures at multiple levels. Industry best practices, such as a mirrored root disk and separate subnets/VLANs for public data protection traffic and private storage pool traffic over bonded network interfaces, further enhance resilience at the node level.
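The capacity and fault-tolerance arithmetic implied by (4+2) erasure coding can be made explicit: a block survives as long as any 4 of its 6 chunks remain, so up to 2 chunks may be lost, and only 4 of every 6 units of raw capacity hold data. The sketch below is a generic illustration of these (d+p) relationships, not Commvault's implementation; the function names are assumptions.

```python
def usable_fraction(data_chunks: int = 4, parity_chunks: int = 2) -> float:
    """Fraction of raw capacity that holds data under (d+p) erasure coding."""
    return data_chunks / (data_chunks + parity_chunks)


def usable_capacity_tib(raw_tib: float, data_chunks: int = 4,
                        parity_chunks: int = 2) -> float:
    """Usable capacity for a given raw capacity under (d+p) erasure coding."""
    return raw_tib * data_chunks / (data_chunks + parity_chunks)


def block_survives(lost_chunks: int, data_chunks: int = 4,
                   parity_chunks: int = 2) -> bool:
    """A block is recoverable while at least d of its d+p chunks remain."""
    return lost_chunks <= parity_chunks
```

So a pool with 120 TiB of raw capacity yields 80 TiB of usable capacity under (4+2), and any single block tolerates the loss of up to 2 of its 6 chunks.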

HyperScale X Reference Architecture servers are imaged with the HyperScale X software on-site, after the initial server rack and stack. Commvault is the point of contact for support calls pertaining to the software stack. For hardware-related issues, support is provided by the respective server vendor.

HyperScale X Reference Architecture may be deployed on several server platforms. The choice of server is left to the customer, so that existing support contracts can be leveraged and familiar channels used. The following options are available for deployment:

  • 4 disk drive nodes, referred to as N4, or

  • 12 disk drive nodes, referred to as N12, or

  • 24 disk drive nodes, referred to as N24

    For a list of supported servers, see Design Specifications.