HyperScale Reference Architecture - Use Cases
Commvault HyperScale caters to the following use cases of structured and unstructured data management in private, public, and hybrid cloud environments through a scale-out platform. The software-defined approach of Commvault HyperScale allows for policy-based provisioning and protection of data, independent of the underlying hardware. Benefits include a significant reduction in Total Cost of Ownership (TCO), flexibility in vendor and platform selection, near-linear scaling of the architecture, and agility through ease of use.
All data has value, and it is imperative to keep versions of data to track changes over time and to recover from corruption, whether inadvertent or malicious. Commvault’s data verification and integrity checks ensure all managed data is accessible and valid. Data verification confirms the health of data written to the HyperScale platform; it is performed immediately after data is transferred or on a scheduled basis. Data integrity is ensured during transfer to and from the platform and when data is read. All data on the platform is protected by NIST FIPS 140-2 certified encryption for data in flight, at rest, or both. Encryption keys are managed by Commvault software and stored in the metadata database.
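Commvault does not publish its internal verification mechanism here; as a minimal sketch, verification of this kind can be modeled as recording a cryptographic digest at write time and re-hashing on a later pass. The store layout and function names below are illustrative assumptions, not Commvault APIs:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a data block."""
    return hashlib.sha256(data).hexdigest()

def write_with_digest(store: dict, key: str, data: bytes) -> None:
    # Record the digest alongside the data at write time.
    store[key] = (data, sha256_digest(data))

def verify(store: dict, key: str) -> bool:
    # Re-hash the stored data and compare against the recorded digest;
    # a mismatch indicates corruption since the write.
    data, recorded = store[key]
    return sha256_digest(data) == recorded

store = {}
write_with_digest(store, "backup-001", b"payload bytes")
assert verify(store, "backup-001")          # healthy data verifies
```

Run immediately after transfer or on a schedule, such a pass flags any block whose current content no longer matches its write-time digest.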
Apart from data, it is also necessary to back up system information to allow for quick recovery from scheduled (maintenance) and unscheduled (malware) issues. Further, the data protection application should leverage the resiliency built into the underlying hardware and design to provide the desired level of resilience. Commvault’s HyperScale platform includes a distributed indexing and deduplication engine for high availability. Erasure coding provides additional resilience against hardware failure with minimal storage overhead.
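The "minimal overhead" claim for erasure coding can be made concrete with a quick calculation. In a k+m layout, data is split into k fragments plus m parity fragments, and any m fragment losses are tolerated. The 4+2 layout below is a hypothetical example for illustration, not a statement of Commvault's actual defaults:

```python
def storage_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Ratio of raw storage consumed to usable data stored for a k+m layout."""
    return (data_fragments + parity_fragments) / data_fragments

# Hypothetical 4+2 erasure coding: survives 2 fragment losses at 1.5x raw storage.
ec = storage_overhead(4, 2)            # 1.5

# 3-way replication (1 data copy + 2 extra copies) also survives 2 losses,
# but at 3.0x raw storage.
replication = storage_overhead(1, 2)   # 3.0
```

Both layouts tolerate two failures, but erasure coding does so at half the raw-storage cost of replication, which is the sense in which its overhead is minimal.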
All these features contribute to business continuity by ensuring recovery from system and data issues, whether bare-metal or virtual. Commvault HyperScale can also empower application owners through self-management of their data, without intervention from backup administrators or a helpdesk. This service-oriented approach adds to the efficiency of the platform.
Data replication copies specified content from a source to a destination. Replicating a source system to a remote site safeguards business-critical applications against failures. Remote replication is synchronous when the source is synchronized with the destination in real time; this requires dedicated network bandwidth and very low-latency connections, and is used to shield business-critical applications from the impact of disasters (Disaster Avoidance, DA). A more practical and widely used approach is near-real-time synchronization of the remote site with the source. In this case, a recovery point snapshot may be used at the remote site to recover from a disaster at the primary site with minimal impact to the business.
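The trade-off in near-real-time replication is that changes streamed after the last recovery point snapshot are lost on failover. The toy model below illustrates that gap; the class and method names are invented for illustration and do not reflect Commvault's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class AsyncReplica:
    """Toy model of near-real-time replication with recovery point snapshots."""
    applied: list = field(default_factory=list)    # changes applied at the remote site
    snapshots: list = field(default_factory=list)  # consistent recovery points

    def replicate(self, change: str) -> None:
        # Changes stream to the remote site with some lag and apply in order.
        self.applied.append(change)

    def take_snapshot(self) -> None:
        # A snapshot freezes a consistent recovery point at the remote site.
        self.snapshots.append(list(self.applied))

    def recover(self) -> list:
        # On disaster, recover from the most recent snapshot; anything replicated
        # after it is lost. That gap is the recovery point objective (RPO).
        return self.snapshots[-1] if self.snapshots else []

replica = AsyncReplica()
replica.replicate("txn-1")
replica.take_snapshot()
replica.replicate("txn-2")               # streamed but not yet snapshotted
assert replica.recover() == ["txn-1"]    # txn-2 falls inside the RPO window
```

Synchronous replication shrinks this window to zero at the cost of dedicated, low-latency bandwidth; near-real-time replication accepts a small window in exchange for practicality.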
The Commvault HyperScale platform can serve as a fully scalable, centrally managed component at both the source and target locations to facilitate replication.
Disaster recovery measures allow a business to avoid or quickly recover from a service-impacting event. Typically, disaster recovery is implemented through a geographically dispersed set of redundancies. A practical and often-used method is the Auxiliary copy: a standby copy of the data set in question at a remote site. Auxiliary copies are usually created on lower tiers of storage than the primary copy, such as slower SATA disk or tape. This tiered approach allows a secondary copy to be promoted if the primary becomes corrupt or inoperative. DASH copy is a method for creating deduplication-aware secondary copies: the deduplicated format is maintained in the copies without the need to rehydrate. DASH copy therefore moves less data, allowing quicker copies with significantly reduced storage space and network bandwidth requirements. This makes it well suited to disaster recovery, providing both WAN and storage optimization. Commvault’s HyperScale is well suited as the target for secondary copies, as it supports all required features.
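The savings from a deduplication-aware copy come from transferring only blocks the target does not already hold. The sketch below illustrates that idea with fixed-size block hashing; it is a simplified assumption-laden model, not Commvault's DASH copy implementation:

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration; real systems use far larger blocks

def block_hashes(data: bytes) -> list:
    """Split data into fixed-size blocks and pair each with its SHA-256 digest."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [(hashlib.sha256(b).hexdigest(), b) for b in blocks]

def dedupe_aware_copy(data: bytes, target_store: dict) -> int:
    """Copy to a deduplicated target, sending only blocks it does not yet hold.

    Returns the number of blocks actually transferred over the wire."""
    sent = 0
    for digest, block in block_hashes(data):
        if digest not in target_store:    # target already holds this block?
            target_store[digest] = block  # no: transfer and store it
            sent += 1
    return sent

target = {}
first = dedupe_aware_copy(b"AAAABBBBCCCC", target)   # all 3 blocks are new
second = dedupe_aware_copy(b"AAAABBBBDDDD", target)  # only 1 block is new
```

Because the data never leaves its deduplicated form, the second copy moves a single block instead of three, which is where the WAN and storage savings of this approach originate.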
Copy Data Management
Copy data management (CDM) is about reducing storage consumption by eliminating unnecessary duplicate data. Duplication arises mainly because backup software operates independently of enterprise applications, often creating multiple copies of the same data. Redundant copies not only waste storage space but also slow down network performance and make accessing or restoring mission-critical data more difficult. Commvault HyperScale CDM features such as Auxiliary copy with DASH help eliminate these problems by reducing the number of full copies.
Direct Data Access
Data stored on Commvault HyperScale can be accessed directly through NFS/CIFS shares. This eliminates the need to maintain a separate storage staging area and to restore data before accessing it. In-place, native access to data managed by Commvault, whether on-premises or in the cloud, eliminates unnecessary data movement and enables faster access without restoring to primary storage.
A digital repository offers a convenient infrastructure to store, manage, and access immense quantities of data. Usually the data is unstructured and may reside on premises, in the cloud, or both. The primary need is easy and quick access to this immense volume of data. The scale-out platform architecture, combined with the data protection capabilities of Commvault’s HyperScale software, is a perfect fit for this purpose. CDM features and live access at the software layer are complemented by the agility, ease, and cloud-like consumption of resources from the underlying HyperScale platform. These features translate into a highly scalable and economical platform for the ever-growing needs of any organization.
Last modified: 11/1/2019 6:39:21 PM