Configuring Cluster Group Clients for Non-Veritas Clusters (UNIX)

After the Commvault software is installed on all the physical nodes that host the cluster group, you must configure a cluster group client in the CommCell Console to protect cluster resources on the physical nodes.

Procedure

Step 1: Create the Cluster Group Client in the CommCell Console

  1. From the CommCell Browser, right-click the Client Computers node and then click New Client > Clustered Server > Unix Cluster.

  2. On the New Unix Cluster Client dialog box, provide the details of the cluster that you want to protect.

    1. In the Client Name box, type a name for the cluster group.

    2. In the Host Name box, type the fully qualified domain name of the cluster group.

      Note

      The host name of the cluster must not be in use by any existing client in your CommCell environment.

      If you want to specify the IP address of the cluster group, ensure that the IP address is static.

    3. Click Next.

    4. Review the cluster group information and click Finish.

      The Advanced Client Properties dialog box appears. If the dialog box does not automatically open, right-click the Cluster_Group_Client, and then click Properties. In the Client Computer Properties dialog box, click Advanced.

  3. On the Advanced Client Properties dialog box, click the Cluster Group Configuration tab.

    • All UNIX clients in the CommCell appear in the Available list. From the Available list, select the physical computers (nodes) where you installed the necessary Agents, and then click Add > to move them to the Selected list.

    • Click the Agents tab.
      From the Available list, select the Agents that you want to use in the cluster group client, and then click Add > to move them to the Selected list.

  4. Click the Job Configuration tab.

    In the Job Results Directory box, type the path for the job results directory. Ensure that the directory resides on a shared cluster file system.

    Note

    The Browse button does not work when you configure the cluster group client for the first time. After the initial configuration, you can use the button to update the directory (if required).

    For the MediaAgent and ContinuousDataReplicator, the job results directory path is also used for the Index Cache directory and the CDR log directory, respectively, unless you specify another location.

    Click OK.

  5. In the Information dialog box, click OK.

    The cluster group client is successfully created. Next, configure failovers by setting up cluster resources and adding service dependencies.

Step 2: Configure the cvclusternotify Script

A failure of the Agent software on the active node in a UNIX cluster does not, by itself, cause a failover to be initiated. Instead, the cvclusternotify script notifies Commvault when the cluster software performs a failover.

Add the cvclusternotify script to the normal cluster startup and shutdown procedures. The script is provided as a generic template. Run it at the beginning of node shutdown, and again at the end of startup on the new active node, before any I/O or applications start on the cluster volumes. In both cases, the Commvault data protection services must be up and running when the script executes.

Run the following command to notify Commvault that the specified cluster group is starting or shutting down because of a cluster failover:

Usage:

cvclusternotify -inst InstanceName -cn ClientName [-start | -shutdown]

Where:

  • cvclusternotify is the script that notifies the Commvault software about cluster failovers.

  • -inst specifies the name of the Commvault instance on which you want to run the script. If you have a single instance, specify Instance001.

  • -cn specifies the name of the cluster group client.

  • -start or -shutdown indicates whether the cluster group is starting up or shutting down.

Example:

For a two-node cluster, if the cluster group client name is "ClusterGroup1" and the application instance is "Instance001", run the following command:

  • To shut down:

    cvclusternotify -inst Instance001 -cn "ClusterGroup1" -shutdown

  • To start up:

    cvclusternotify -inst Instance001 -cn "ClusterGroup1" -start
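The commands above can be wrapped in small helper functions for your cluster startup and shutdown hooks. The following is a minimal sketch, assuming the single-instance defaults from the example (Instance001, ClusterGroup1); the CVNOTIFY variable is an assumption added so the script can be dry-run, and cvclusternotify itself is the Commvault-provided script described above.

```shell
#!/bin/sh
# Sketch: helper functions that wrap cvclusternotify for cluster hooks.
# INSTANCE and CLIENT defaults match the example above; adjust as needed.
INSTANCE="${INSTANCE:-Instance001}"
CLIENT="${CLIENT:-ClusterGroup1}"
CVNOTIFY="${CVNOTIFY:-cvclusternotify}"   # override (e.g. CVNOTIFY=echo) to dry-run

# Call at the END of startup on the new active node,
# before any I/O or applications touch the cluster volumes.
notify_start() {
    "$CVNOTIFY" -inst "$INSTANCE" -cn "$CLIENT" -start
}

# Call at the BEGINNING of node shutdown,
# while the Commvault data protection services are still running.
notify_shutdown() {
    "$CVNOTIFY" -inst "$INSTANCE" -cn "$CLIENT" -shutdown
}
```

Invoke notify_shutdown from the cluster's stop procedure and notify_start from its start procedure, in the order described in Step 3.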

Step 3: Add Service Dependencies

Add the following service dependencies:

  • Dependencies to IP Resource

  • Dependencies to Disk Resource

Example:

When the node is going down:

  • Run the command to shut down:

    cvclusternotify -inst Instance00X -cn <cluster client name> -shutdown

  • Unmount any file systems that moved because of the failover

  • Unplumb the virtual IP

When the node is starting up:

  • Plumb the virtual IP

  • Mount any file systems that moved because of the failover

  • Run the command to start up:

    cvclusternotify -inst Instance00X -cn <cluster client name> -start
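The shutdown and startup sequences above can be sketched as two shell functions. This is an illustrative template only: the instance and client names match the earlier example, while the interface, virtual IP, mount point, and device names are placeholders you must replace. The ifconfig addif/removeif syntax is Solaris-style and will differ on other UNIX platforms; the RUN variable is an added assumption that allows a dry run.

```shell
#!/bin/sh
# Sketch of the failover ordering above for one cluster group.
INSTANCE="Instance001"          # assumption: single-instance install
CLIENT="ClusterGroup1"          # assumption: cluster group client name
VIP="10.0.0.50"                 # placeholder: virtual IP of the cluster group
IFACE="e1000g0"                 # placeholder: network interface
MNT="/clusterfs"                # placeholder: shared cluster mount point
DEV="/dev/dsk/c0t1d0s0"         # placeholder: shared device
RUN="${RUN:-}"                  # set RUN=echo to print commands instead of running them

# Order when the node is going down: notify first, then release resources.
node_going_down() {
    $RUN cvclusternotify -inst "$INSTANCE" -cn "$CLIENT" -shutdown
    $RUN umount "$MNT"
    $RUN ifconfig "$IFACE" removeif "$VIP"     # unplumb the virtual IP
}

# Order when the node is starting up: acquire resources, then notify last.
node_starting_up() {
    $RUN ifconfig "$IFACE" addif "$VIP" up     # plumb the virtual IP
    $RUN mount "$DEV" "$MNT"
    $RUN cvclusternotify -inst "$INSTANCE" -cn "$CLIENT" -start
}
```

Note that cvclusternotify runs first on the way down and last on the way up, so the data protection services are always notified while the cluster volumes are in a consistent state.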
