
Stopping and Starting All Nodes

When all the nodes have to be shut down for maintenance, it is important to follow the proper shutdown and startup procedures to prevent data loss.

Before You Begin

Log in to the Admin Console and verify that the disks and storage pool are online and functioning properly. An optional command-line cross-check is sketched after the following list.

  • To verify the Storage Pool:
    1. From the navigation pane, click Storage > Storage pools.
    2. The list of storage pools is displayed in the right pane.
    3. Click the name of the <Storage pool> to display the information associated with the pool.
    4. Verify that the status of the storage pool is displayed as Online.
  • To verify the Disks:
    1. From the navigation pane, click Storage > Storage targets.
    2. The list of storage targets is displayed in the right pane.
    3. Click the name of the <disk library> to display the information associated with the disk.
    4. Make sure that the status of each mount path is displayed as Online.
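
Optionally, you can also cross-check the underlying gluster volumes from any node's command line (an assumption; the Admin Console checks above are the documented method):

  # Show the health of all gluster volumes; every brick process
  # in the output should be reported as Online.
  gluster volume status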

Procedure

  1. Using RDP, shut down the Windows operating system on the VM hosting the CommServe.
  2. Set the hosted engine to maintenance mode as follows:
    1. Access the Virtualization Manager by typing the following URL in a web browser:

      https://<Control Host Name>/ovirt-engine

      For example: https://mycontrolhost.mydomain.com/ovirt-engine

    2. Click Administration Portal.
    3. Type the login credentials to access the Virtualization Manager.

      Tip: Type admin as the user name and the root password provided for the CommServe Server during the setup.

      The Dashboard will be displayed in the Red Hat Virtualization window.

    4. Navigate to System > Data Centers > Clusters > Hosts.
    5. Right-click the appropriate host in the right pane, and click Management > Maintenance.
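
      Note: RHV hosts also ship a hosted-engine command-line tool. As a hedged aside (the Administration Portal steps above are the documented method, and global maintenance is not identical to the per-host maintenance set there), global maintenance can be enabled from a node's shell:

        # Tell the HA agents to stop managing the hosted engine VM.
        hosted-engine --set-maintenance --mode=global
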
  3. Using an SSH client program, such as PuTTY on Windows, log in to each HyperScale node as root and run the following command to stop services:

    commvault stop
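
    If you manage several nodes, you can run this over SSH in a loop; a minimal sketch, assuming the nodes are reachable as hsnode1 through hsnode3 (hypothetical host names):

      # Stop Commvault services on each HyperScale node in turn.
      for node in hsnode1 hsnode2 hsnode3; do
          ssh root@"$node" commvault stop
      done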

  4. After the services stop, run the following command to view any deduplication processes that may still be running:

    commvault list

    If an SIDB_Engine_<ID> process is running, wait until the process completes.

    Note: Depending on the size of the DDB, this process might take as long as 30 minutes to complete.

    Repeat this step to confirm that the processes are no longer running before shutting down the nodes.
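
    Rather than re-running the command by hand, you can poll for the process; a minimal sketch (the 30-second interval is an arbitrary choice):

      # Wait until no SIDB_Engine process appears in the service list.
      while commvault list | grep -q SIDB_Engine; do
          echo "Deduplication database is still closing down; waiting..."
          sleep 30
      done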

  5. Log in to any one of the HyperScale nodes using an SSH client program, such as PuTTY on Windows.
  6. Stop the gluster volumes on all the nodes using the following commands:
    • List the volumes to identify the volume names using the following command:

      gluster volume list

    • Stop the volumes using the following command:

      gluster volume stop <volume_name>

      Note: The above command stops the volume across all the nodes; it is not necessary to repeat the command on the other nodes.

      Make sure that the volumes are stopped in the following order:

      1. Storage Pool volume
      2. Volume associated with the CommServe VM
      3. Volume associated with the Control Host

        Tip: The Storage Pool volume name is the same as the Storage Pool name. The default name for the volume associated with the CommServe VM is dav_vm_vol, and the default name for the volume associated with the Control Host is dav_he_vol.
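
      Putting the order and the default names together, a minimal sketch (MyStoragePool is a hypothetical storage pool name; substitute your own):

        # Stop the volumes in the required order. The gluster CLI
        # asks for confirmation before stopping each volume.
        gluster volume stop MyStoragePool
        gluster volume stop dav_vm_vol
        gluster volume stop dav_he_vol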

  7. Shut down the nodes to perform the necessary maintenance.

    Tip: Use the reboot or shutdown -h now commands to stop the node.

  8. After restarting the nodes, start the following volumes in this order:

    gluster volume start dav_he_vol
    gluster volume start dav_vm_vol
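
    Before moving on, you can optionally confirm that the volumes started cleanly:

      # All brick processes for each volume should be reported as Online.
      gluster volume status dav_he_vol
      gluster volume status dav_vm_vol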

  9. Access the Virtualization Manager and manually restart the virtual machine hosting the CommServe.

    You can access the Virtualization Manager by typing the following URL in a web browser:

    https://<Control Host Name>/ovirt-engine

    For example: https://mycontrolhost.mydomain.com/ovirt-engine

  10. Open the Admin Console and make sure that the Storage Pool and Disks are online and functioning properly as described in Before You Begin.
