You can migrate HyperScale X 2.x clusters to HyperScale X 3.x clusters using the latest HyperScale X ISO image.
The migration process automatically creates the cvbackupadmin user if the user does not already exist.
The migration process does not impact the drives that host the Deduplication Database (DDB) and the Index Cache.
Before You Begin
-
Verify that the CommServe is running version 11.32.87 or later.
-
Verify that you have the latest maintenance release installed on all the nodes in the HyperScale X cluster. For more information, see Updating the Commvault Software on a Server.
-
Install the latest HyperScale X platform updates on all the HyperScale X nodes. For more information, see Platform Versions for HyperScale X.
-
Enable password-based root access on the remote cache node only, so that you can run the migration-related commands.
-
If the CommServe software exists on one of the nodes, you must rebuild the CommServe on a separate server outside of the HyperScale X cluster. For more information on rebuilding the CommServe software, see CommServe Hardware Refresh. Migration is not supported if the CommServe exists within the HyperScale X cluster.
-
Verify that the nodes in the HyperScale cluster are not in maintenance mode.
-
Verify that Commvault services and CVFS services are running on all the nodes in the cluster.
-
Verify that ICMP (ping) requests to the default gateway are enabled on the node. If they are disabled, update the network settings to allow ICMP requests to the default gateway.
-
If multipathing is enabled on the node, copy the /etc/multipath.conf file to a remote location. After the migration, copy the multipath.conf file back to the migrated node.
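Two of the checks above can be scripted. The following is a minimal sketch; the remote host name `backuphost` and the `/backup` path are placeholders, not part of the documented procedure:

```shell
#!/bin/sh
# Pre-migration helper sketch. Run on each HyperScale X node.
# Assumption: 'backuphost' is a placeholder for a reachable remote host.

# 1. Confirm that ICMP (ping) to the default gateway succeeds.
gw=$(ip route show default | awk '/^default/ {print $3; exit}')
if ping -c 3 -W 2 "$gw" >/dev/null 2>&1; then
    echo "ICMP to default gateway $gw: OK"
else
    echo "ICMP to default gateway $gw: FAILED - update the network settings"
fi

# 2. If multipathing is enabled, preserve /etc/multipath.conf off-node.
if [ -f /etc/multipath.conf ]; then
    scp /etc/multipath.conf "backuphost:/backup/multipath.conf.$(hostname -s)"
fi
```

After the migration, copy the saved file back to /etc/multipath.conf on the migrated node.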
Procedure
To migrate the HyperScale X version, complete the following steps:
-
Using the system console (for example, KVM; not SSH), log on to the node on which the remote cache is configured, and navigate to the /ws/ddb/iso folder.
-
Copy the latest HyperScale X image from the Commvault Store to the /ws/ddb/iso folder on the HyperScale X remote software cache node.
-
To verify that the existing cluster meets the migration criteria, run the following commands:
# cd /opt/commvault/MediaAgent
# ./cvmanager.py -t Validate_Migrate_Cluster
The validation process prompts for the following inputs:
[root@smchsx2 MediaAgent]# ./cvmanager.py -t Validate_Migrate_Cluster
: Initializing HyperScale Task Manager.
: Commvault HyperScale Task Manager Initialized.
Are you registering with Metallic (y/n): n
CommServe fully qualified hostname: (smchsx11.company.com)
CommCell's user name with permissions to register new clients: admin
CommCell's user password with permissions to register new clients:
[Re-Enter to Confirm] - CommCell's user password with permissions to register new clients:
Proceed with the migration only after the validation process completes successfully. If the validation fails, fix the reported issue and then rerun the validation command.
-
To start the migration, run the following command:
#./cvmanager.py -t Migrate_Cluster
-
On a legacy BIOS terminal, the validation process fails with the following message:
NOT_SUPPORTED Detected BIOS firmware. Please use Manual ISO procedure for migration by following documentation.
ERROR : Migration is not supported on the cluster.
ERROR : [Main-> Validate_Migrate_Cluster]-[main_process]: Step [validate_cluster_nodes] - Failed. Exiting!
In such cases, to start the migration, run the following command:
./cvmanager.py -t Migrate_Cluster manual_iso=True
The migration process prompts for the following inputs. When the ISO is ready to be presented on the IPMI console, type 'y' to proceed. Do not allow the node to reboot into the 2.x version (RHEL 7).
Open IPMI console on the node [node1.company.com] and make sure that IPMI console is active throughout migration process. Press 'y' to proceed (y|n): y
Attach migration ISO [/ws/ddb/cvmanager/share/dvd_02132025_084957.iso] through the IPMI console. Press 'y' to proceed (y|n): y
Change the boot order to perform next boot through the ISO. DO NOT REBOOT THE NODE. Press 'y' to proceed (y|n): y
-
The migration script prompts you to input the following values:
Root password of existing cluster:
[Re-enter to Confirm] - Root password of existing cluster:
Restricted shell user '(cvbackupadmin)' password:
[Re-enter to Confirm] - Restricted shell user '(cvbackupadmin)' password:
Are you registering with Metallic (y/n): n
CommServe fully qualified hostname: (server1.company.com)
Commcell's user name with permissions for registering new clients:
Commcell's user password with permissions for registering new clients:
Enter the users password:
[Re-Enter to Confirm] - Commcell's user password with permissions for registering new clients:
The migration process migrates each node in the cluster sequentially, rebooting it from RHEL 7 into Rocky Linux 8. This process may take several hours to complete.
-
To verify that the node was migrated successfully, run the following command and confirm that the output matches the following:
# commvault reg | grep sRHELToRockyMigrationCompleted
Output:
sRHELToRockyMigrationCompleted yes
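To run this verification across every node in one pass, a loop such as the following can help. The node hostnames are placeholders, and the sketch assumes SSH access to each node:

```shell
#!/bin/sh
# Placeholder hostnames - replace with the actual cluster node names.
NODES="node1.company.com node2.company.com node3.company.com"

for node in $NODES; do
    # Query the Commvault registry key that marks a completed migration.
    status=$(ssh "$node" "commvault reg | grep sRHELToRockyMigrationCompleted")
    echo "$node: ${status:-sRHELToRockyMigrationCompleted not set}"
done
```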
-
If the migration fails after authentication, complete the following steps:
-
Run the following command to retrieve the last migration task:
# /opt/commvault/MediaAgent/cvmanager.py --unfinished_task_check Migrate_Cluster
This command generates a <filepath>.yaml file, which you will use to re-run the migration. For example, /ws/ddb/cvmanager/catalog/status/Task_6554744226181603181/cvmanager_jJV8X7.yaml
-
Start the migration using the <filepath>.yaml file. You do not need to verify the migration criteria again before re-running the migration.
# /opt/commvault/MediaAgent/cvmanager.py <filepath.yaml>
What to Do Next
Perform the following post-migration tasks:
-
Install the latest Commvault updates
The MediaAgent on the migrated node may have an older version of the Commvault software. To view the MediaAgent version on the new nodes, see View the HyperScale X MediaAgent version.
If necessary, update the software to the latest maintenance release version. For more information, see Updating Commvault Software on a Server.
-
Install the latest HyperScale X platform version
After the migration, the CVFS version is updated. However, the node may not be fully up-to-date with the Operating System updates. To view the HyperScale X platform version, see Viewing the HyperScale X Platform Version.
If necessary, install the latest platform version to make sure that the node has the latest security fixes. For more information, see Installing Operating System Updates on Existing Nodes.
-
After the migration, root user login is automatically disabled on the HyperScale X cluster nodes. SSH and console login are prohibited, and only the cvbackupadmin user can log on and access the nodes. If required, you can enable root access. For more information, see Enabling or Disabling Root Access.
-
If the firewall was enabled on the HyperScale cluster before the migration, you must re-enable it manually after the migration is complete. For more information, see Enabling Firewall.
-
After the migration, the node only contains the following default packages included in the HyperScale X image:
- Cloud Apps
- File System Core
- MediaAgent
- Storage Pool
- File System
After the migration, you must reinstall any third-party packages that previously existed on the node.
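If you need a record of what to reinstall, one approach is to snapshot the installed RPM list before the migration and diff it afterwards. The file paths and the dnf example below are illustrative, not part of the documented procedure:

```shell
#!/bin/sh
# Before the migration, record the installed package names:
rpm -qa --qf '%{NAME}\n' | sort -u > /ws/ddb/preinstalled-packages.txt

# After the migration, list packages that are no longer present:
rpm -qa --qf '%{NAME}\n' | sort -u > /tmp/current-packages.txt
comm -23 /ws/ddb/preinstalled-packages.txt /tmp/current-packages.txt
# Reinstall the third-party entries from that list, for example:
#   dnf install -y <package-name>
```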