You can migrate HyperScale X 2.x clusters to HyperScale X 3.x clusters using the HyperScale X ISO image 3.2408.
The migration process automatically creates the cvbackupadmin user if it does not already exist.
The migration process does not affect the drives that host the Deduplication Database (DDB) and the Index Cache.
Before You Begin
- Verify that the CommServe is on version 11.38 or higher.
- Verify that you have maintenance release 11.38 or higher installed on all the nodes in the HyperScale X cluster. For more information, see Updating the Commvault Software on a Server.
- Install the HyperScale X platform version 2.2408 or higher on all the HyperScale X nodes. For more information, see HyperScale X Platform Version 2.2408.
- Enable password-based root access on the remote cache node only, so that you can run the migration-related commands.
- If the CommServe software exists on one of the nodes, you must rebuild the CommServe on a separate server outside of the HyperScale X cluster. For more information on rebuilding the CommServe software, see CommServe Hardware Refresh. Migration is not supported if the CommServe exists within the HyperScale X cluster.
- Verify that the nodes in the HyperScale cluster are not in maintenance mode.
- The migration script does not support nodes with legacy BIOS or NVMe drives. To migrate these nodes, contact Commvault support.
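Because the migration script does not support nodes with legacy BIOS or NVMe drives, you may want to pre-check each node before scheduling the migration. The following sketch is illustrative only (it is not part of cvmanager.py) and relies on standard Linux interfaces: /sys/firmware/efi exists only when the kernel booted via UEFI, and NVMe controllers register under /sys/class/nvme.

```shell
#!/bin/sh
# Illustrative pre-check, not part of the Commvault migration tooling.

# /sys/firmware/efi is populated only on a UEFI boot.
if [ -d /sys/firmware/efi ]; then
    boot_mode="UEFI"
else
    boot_mode="legacy BIOS (not supported by the migration script)"
fi
echo "Boot mode: $boot_mode"

# NVMe controllers appear under /sys/class/nvme when the nvme driver is loaded.
if ls /sys/class/nvme/nvme* >/dev/null 2>&1; then
    nvme_status="present (contact Commvault support before migrating)"
else
    nvme_status="none detected"
fi
echo "NVMe drives: $nvme_status"
```

Run the check on each node in the cluster; any node reporting legacy BIOS or NVMe drives needs Commvault support involvement before migration.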
Procedure
To migrate the HyperScale X version, complete the following steps:
- Copy the latest HyperScale X image from the Commvault Store to the /ws/ddb/iso folder on the HyperScale X remote software cache node.
- Using the system console (for example, KVM, not SSH), log on to the node on which the remote cache is configured, and navigate to the /ws/ddb/iso folder.
- Verify that the existing cluster meets the migration criteria by running the following commands:
# cd /opt/commvault/MediaAgent
# ./cvmanager.py -t Validate_Migrate_Cluster
The validation process prompts for the following inputs:
[root@smchsx2 MediaAgent]# ./cvmanager.py -t Validate_Migrate_Cluster
: Initializing HyperScale Task Manager.
: Commvault HyperScale Task Manager Initialized.
Are you registering with Metallic (y/n): n
CommServe fully qualified hostname: (smchsx11.company.com)
CommCell's user name with permissions to register new clients: admin
CommCell's user password with permissions to register new clients:
[Re-Enter to Confirm] - CommCell's user password with permissions to register new clients:
Proceed with the migration only after the validation process completes successfully.
- To start the migration, run the following commands:
# cd /opt/commvault/MediaAgent/
# ./cvmanager.py -t Migrate_Cluster
- The migration script prompts you to enter the following values:
Root password of existing cluster:
[Re-enter to Confirm] - Root password of existing cluster:
Restricted shell user '(cvbackupadmin)' password:
[Re-enter to Confirm] - Restricted shell user '(cvbackupadmin)' password:
Are you registering with Metallic (y/n): n
CommServe fully qualified hostname: (server1.company.com)
CommCell's user name with permissions for registering new clients:
CommCell's user password with permissions for registering new clients:
[Re-Enter to Confirm] - CommCell's user password with permissions for registering new clients:
The migration process migrates each node in the cluster from RHEL 7 to Rocky Linux 8 and reboots it, one node at a time. This process can take several hours to complete.
What to do Next
- After the migration, root login is automatically disabled on the HyperScale X cluster nodes. SSH and console login are prohibited for the root user, and only the cvbackupadmin user can log on and access the nodes. If required, you can enable root access. For more information, see Enabling or Disabling Root Access.
- To verify whether a node was migrated successfully, run the following command and confirm that the output matches the following:
# commvault reg | grep sRHELtoRockyMigrationCompleted
Output:
sRHELtoRockyMigrationCompleted yes
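In addition to the registry key, you can confirm the operating system on a migrated node from the standard /etc/os-release file, which on a successfully migrated node should identify Rocky Linux 8. A minimal sketch, assuming the node exposes the standard os-release fields (NAME, VERSION_ID):

```shell
#!/bin/sh
# Illustrative OS check: a migrated node should report Rocky Linux 8.x here.
os_name=$(. /etc/os-release && echo "$NAME")
os_version=$(. /etc/os-release && echo "$VERSION_ID")
echo "Running: $os_name $os_version"

case "$os_name" in
    "Rocky Linux") echo "OS migration confirmed" ;;
    *)             echo "Unexpected OS: $os_name (verify the node before use)" ;;
esac
```

Run the check on every node, because the migration reboots and converts nodes one at a time; a node still reporting Red Hat Enterprise Linux has not yet been migrated.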
Note
If the firewall was enabled on the HyperScale cluster before the migration, you must re-enable it manually after the migration is complete. For more information, see Enabling Firewall.