Hyperconverged Storage Pool - Sizing - Capacity and Demand
From a business perspective, optimal resource utilization is essential for a good return on investment (ROI). Optimal utilization of IT infrastructure requires matching workloads with the platforms they run on, and a service-oriented architecture that can evolve over time to accommodate workload changes. This goal can be achieved only if the architecture allows easy, fine-grained scaling of resources to meet demand. The IT resources that must be deployed, matched, measured and scaled include compute, network and storage.
In a scale-up system, sizing happens up front, during the purchase and deployment cycle. Customers must choose between overbuying an unbalanced stack and risking running out of capacity before the next purchase cycle; both scenarios have significant business implications. In contrast to scale-up stacks, hyperconverged systems are sized minimally up front using fine-grained building blocks, and the organization deploys new right-sized nodes as business needs dictate. Commvault's hyperconverged platform consists of nodes with sufficient compute, network and storage resources to deliver acceptable performance for the data management workload they run. In this reference architecture (RA), the three dimensions of flexibility, and the decisions to be made, are:
- Choice of server vendor
- Resiliency needs
- Server node configuration
Of these, vendor choice is typically driven by customer preference tied to business and operational constraints. Resiliency in Commvault's hyperconverged platform, driven by system uptime requirements, is tied to the erasure code and the number of nodes in the cluster (block size). Since compute and network resources are already specified for the workload in question, the only remaining server node configuration decision is the capacity and number of disks (HDDs).
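To make the relationship between erasure code, node count and usable capacity concrete, the following sketch computes the usable fraction of an erasure-coded pool. The function name and the 4+2 code parameters here are illustrative assumptions, not Commvault's actual settings.

```python
def usable_capacity_tb(raw_tb_per_node: float, nodes: int,
                       data_frags: int, parity_frags: int) -> float:
    """Usable capacity of an erasure-coded storage pool (illustrative).

    Each stripe is split into data_frags + parity_frags fragments, so only
    data_frags / (data_frags + parity_frags) of the raw space holds user
    data. The cluster needs at least one node per fragment so each
    fragment can land on a distinct node.
    """
    width = data_frags + parity_frags
    if nodes < width:
        raise ValueError("cluster needs at least one node per fragment")
    return raw_tb_per_node * nodes * data_frags / width

# Assumed example: six 48TB-raw nodes with a 4+2 code yield 192TB usable.
print(usable_capacity_tb(48, 6, 4, 2))  # -> 192.0
```

The same formula shows why resiliency is tied to block size: a wider code (more parity fragments) tolerates more failures but requires more nodes and lowers the usable fraction.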
The following table illustrates the recommended configuration for compute, network and storage.
|Component|Recommended Configuration|
|---|---|
|CPU|20 cores @ minimum 2.0GHz (e.g. 2x Intel E5-2620 v4)|
|Memory|256GB RDIMM (e.g. 8x 32GB RDIMM)|
|Operating System Disk|400GB SATA SSD (e.g. 480GB SATA SSD)|
|Cache Disks|PCIe Flash or SSD (e.g. 2x 2TB Flash PCIe, 4x 1.6TB SSD)|
|Data Disks|12/24 LFF NL-SAS HDD (6TB, 8TB, 10TB) (e.g. 12x 6TB, 24x 10TB, 12x 8TB)|
|Storage Controller|SAS HBA (no RAID card)|
|Network|2x 10Gbps / 4x 10Gbps|
Rack servers from leading vendors that meet or exceed the above specifications can be considered as nodes for the platform. The reference architecture (RA) approach to this hyperconverged platform retains the necessary flexibility while simplifying the sizing question. Further, the speed and ease of deploying and growing a hyperconverged platform allows for better agility: resources can be procured when needed and deployed just in time to meet demand. Server vendors currently supported include Cisco, HP, Dell and SuperMicro. Leading 2RU and 4RU server models from these vendors with raw disk capacity up to 240TB per node are within scope.
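As a sizing sketch, the snippet below checks a candidate node's data-disk layout against the 240TB-per-node raw ceiling stated above. The function name is hypothetical; the disk counts and sizes mirror the example layouts in the configuration table.

```python
MAX_RAW_TB_PER_NODE = 240  # per-node raw ceiling stated in this RA

def node_raw_tb(disk_count: int, disk_tb: int) -> int:
    """Raw data-disk capacity of one node (OS and cache disks excluded)."""
    raw = disk_count * disk_tb
    if raw > MAX_RAW_TB_PER_NODE:
        raise ValueError(
            f"{raw}TB exceeds the {MAX_RAW_TB_PER_NODE}TB per-node limit")
    return raw

# The example layouts from the table all fit under the ceiling:
for disks, size in [(12, 6), (12, 8), (24, 10)]:
    print(f"{disks}x{size}TB = {node_raw_tb(disks, size)}TB raw")
```

This kind of per-node check keeps growth fine-grained: capacity is added one right-sized node at a time rather than by resizing an existing stack.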