Migrating the HyperScale X Version on Existing Nodes

You can migrate HyperScale X 2.x clusters to HyperScale X 3.x clusters using the HyperScale X ISO image 3.2408.

The migration process automatically creates the cvbackupadmin user if it does not already exist before the migration.

The migration process does not affect the drives that host the Deduplication Database (DDB) and the Index Cache.

Before You Begin

  • Verify that the CommServe is on version 11.36.18 or higher.

  • Verify that maintenance release 11.36.18 or higher is installed on all nodes in the HyperScale X cluster. For more information, see Updating the Commvault Software on a Server.

  • Install the HyperScale X platform version 2.2408 or higher on all the HyperScale X nodes. For more information, see HyperScale X Platform Version 2.2408.

  • Enable password-based root access on the remote cache node only, so that you can run the migration-related commands.

  • If the CommServe software exists on one of the nodes, you must rebuild the CommServe on a separate server outside the HyperScale X cluster. For more information about rebuilding the CommServe software, see CommServe Hardware Refresh. Migration is not supported if the CommServe resides within the HyperScale X cluster.

  • Verify that the nodes in the HyperScale X cluster are not in maintenance mode.

  • The migration script does not support nodes with legacy BIOS or NVMe drives. To migrate these nodes, contact Commvault support. A quick way to spot-check both conditions is the sketch below.
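
    The following minimal sketch, run as root on each node, only approximates both checks; treat its output as a hint rather than a final verdict on supportability:

      # UEFI systems expose /sys/firmware/efi; its absence indicates a legacy BIOS boot.
      if [ -d /sys/firmware/efi ]; then
          echo "Boot mode: UEFI"
      else
          echo "Boot mode: legacy BIOS -- contact Commvault support before migrating"
      fi

      # NVMe disks report the nvme transport; lsblk lists the transport for each disk.
      if lsblk -d -o NAME,TRAN | grep -q nvme; then
          echo "NVMe drives detected -- contact Commvault support before migrating"
      else
          echo "No NVMe drives detected"
      fi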

Procedure

To migrate the HyperScale X version, complete the following steps:

  1. Copy the latest HyperScale X image from the Commvault Store to the /ws/ddb/iso folder on the HyperScale X remote software cache node.
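
    For example, assuming the downloaded image is named hsx_3.2408.iso (a placeholder name) and that a SHA-256 checksum is published alongside the download, you can copy and verify the image as follows; the hostname is also a placeholder:

    # Copy the image to the remote cache node (placeholder file and host names).
    scp hsx_3.2408.iso root@rc-node.company.com:/ws/ddb/iso/

    # On the remote cache node, compare the checksum with the published value.
    sha256sum /ws/ddb/iso/hsx_3.2408.iso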

  2. Using the system console (for example, KVM, not SSH), log on to the node on which the remote cache is configured, and navigate to the /ws/ddb/iso folder.

  3. Verify that the existing cluster meets the migration criteria by running the following commands:

    # cd /opt/commvault/MediaAgent
    # ./cvmanager.py -t Validate_Migrate_Cluster

    The validation process prompts you for the following inputs:

    [root@smchsx2 MediaAgent]# ./cvmanager.py -t Validate_Migrate_Cluster
    :      Initializing HyperScale Task Manager.
    :      Commvault HyperScale Task Manager Initialized.
    Are you registering with Metallic (y/n): n
    CommServe fully qualified hostname: (smchsx11.company.com)
    CommCell's user name with permissions to register new clients: admin
    CommCell's user password with permissions to register new clients:
    [Re-Enter to Confirm] - CommCell's user password with permissions to register new clients:

    Proceed with the migration only after the validation completes successfully. If the validation fails, fix the reported issue and then rerun the validation command.
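
    If you drive the validation from a script, a minimal sketch that gates the next step on the validator's exit status might look like the following. It assumes that cvmanager.py exits with a nonzero status when validation fails, which you should confirm in your environment; the prompts shown above remain interactive:

    cd /opt/commvault/MediaAgent
    if ./cvmanager.py -t Validate_Migrate_Cluster; then
        echo "Validation passed; it is safe to run the Migrate_Cluster task."
    else
        echo "Validation failed; fix the reported issue and rerun the validation." >&2
    fi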

  4. To start the migration, run the following commands:

    # cd /opt/commvault/MediaAgent/
    # ./cvmanager.py -t Migrate_Cluster

  5. The migration script prompts you for the following values:

    Root password of existing cluster:
    [Re-enter to Confirm] - Root password of existing cluster:
    Restricted shell user '(cvbackupadmin)' password:
    [Re-enter to Confirm] - Restricted shell user '(cvbackupadmin)' password:
    Are you registering with Metallic (y/n): n
    CommServe fully qualified hostname: (server1.company.com)
    CommCell's user name with permissions for registering new clients: 
    CommCell's user password with permissions for registering new clients: 
    [Re-Enter to Confirm] - CommCell's user password with permissions for registering new clients:

    The migration process migrates each node in the cluster from RHEL 7 to Rocky Linux 8 and reboots it, proceeding one node at a time. The process can take several hours to complete.
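
    To watch the progress from another machine, one option is to poll each node's OS release, because a migrated node reports Rocky Linux in /etc/os-release. The node names below are placeholders, SSH access (and the account it uses) is an assumption, and nodes are unreachable while they reboot:

    # Placeholder node names; substitute your cluster's hostnames.
    for node in node1.company.com node2.company.com node3.company.com; do
        printf '%s: ' "$node"
        ssh "$node" 'grep PRETTY_NAME /etc/os-release' 2>/dev/null \
            || echo "unreachable (possibly rebooting)"
    done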

  6. To verify that a node migrated successfully, run the following command on the node and confirm that the output matches the following:

    # commvault reg | grep sRHELtoRockyMigrationCompleted

    Output:
    sRHELtoRockyMigrationCompleted yes
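
    To check every node in one pass instead of logging on to each one, you can wrap the same command in a loop. The node names are placeholders, and the loop assumes an account that can still open an SSH session to each node:

    # Placeholder node names; substitute your cluster's hostnames.
    for node in node1.company.com node2.company.com node3.company.com; do
        printf '%s: ' "$node"
        ssh "$node" 'commvault reg | grep sRHELtoRockyMigrationCompleted'
    done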

  7. If the migration fails after authentication, complete the following steps:

    1. Run the following command to retrieve the last migration task:

      # /opt/commvault/MediaAgent/cvmanager.py --unfinished_task_check Migrate_Cluster

      This command generates a <filepath>.yaml file, which you will use to re-run the migration. For example, /ws/ddb/cvmanager/catalog/status/Task_6554744226181603181/cvmanager_jJV8X7.yaml

    2. Start the migration using the <filepath>.yaml file. You do not need to rerun the validation before re-running the migration.

      # /opt/commvault/MediaAgent/cvmanager.py <filepath.yaml>
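
      If you prefer to chain these two sub-steps, the following sketch captures the generated .yaml path and feeds it back to cvmanager.py. The extraction pattern assumes the path is printed as a /ws/...yaml token in the command's output, which is an assumption; verify the actual output format first:

      # Retrieve the last migration task and capture the generated .yaml path.
      yaml=$(/opt/commvault/MediaAgent/cvmanager.py --unfinished_task_check Migrate_Cluster \
          | grep -o '/ws/[^ ]*\.yaml' | tail -n 1)

      # Re-run the migration from the saved task state.
      /opt/commvault/MediaAgent/cvmanager.py "$yaml"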

What to Do Next

Perform the following post-migration tasks:

  • Install the latest Commvault updates

    The MediaAgent on the migrated node might have an older version of the Commvault software. To view the MediaAgent version on the new nodes, see View the HyperScale X MediaAgent version.

    If necessary, update the software to the latest maintenance release version. For more information, see Updating Commvault Software on a Server.

  • Install the latest HyperScale X platform version

    After the migration, the CVFS version is updated. However, the node might not be fully up to date with operating system updates. To view the HyperScale X platform version, see Viewing the HyperScale X Platform Version.

    If necessary, install the latest platform version to make sure that the node has the latest security fixes. For more information, see Installing Operating System Updates on Existing Nodes.

  • After the migration, root user login is automatically disabled on the HyperScale X cluster nodes. SSH and console login are prohibited, and only the cvbackupadmin user can log on and access the nodes. If required, you can enable root access. For more information, see Enabling or Disabling Root Access.

  • If the firewall was enabled on the HyperScale cluster before the migration, you must re-enable it manually after the migration is complete, as shown in the sketch below. For more information, see Enabling Firewall.
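
    The Enabling Firewall page is the authoritative procedure. As a generic illustration only, re-enabling the stock firewalld service on a Rocky Linux 8 host looks like the following; whether your cluster uses stock firewalld rather than a Commvault-managed configuration is an assumption here:

      # Generic Rocky Linux 8 commands; follow Enabling Firewall for the supported procedure.
      systemctl enable --now firewalld
      firewall-cmd --state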
