After the Commvault software is installed on all the physical nodes that host the cluster group, you must configure a cluster group client in the CommCell Console to protect cluster resources on the physical nodes.
Before You Begin
- On the CommServe computer, add the EnableVCSClusterResourceDiscovery additional setting as shown in the following table.
  For instructions about adding the additional setting from the CommCell Console, see Adding or Modifying Additional Settings from the CommCell Console.
Property    Value
Name        EnableVCSClusterResourceDiscovery
Category    CommServDB.GxGlobalParam
Type        INTEGER
Value       1
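If you prefer to add the setting from the command line, the qoperation execscript interface with the SetKeyValue qscript on the CommServe can typically be used. The following is a minimal sketch only; the qscript name and the parameter order are assumptions that you should verify against the documentation for your Commvault version:

qoperation execscript -sn SetKeyValue -si <client_or_scope> -si 'CommServDB.GxGlobalParam' -si 'EnableVCSClusterResourceDiscovery' -si 'INTEGER' -si '1'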
Procedure
- From the CommCell Browser, right-click the Client Computers node and then click New Client > Clustered Server > Unix Cluster.
- On the New Unix Cluster Client dialog box, specify the details of the cluster that you want to protect to create the cluster group client.
  - In the Client Name box, type a name for the cluster group.
  - In the Host Name box, type the fully qualified domain name of the cluster group.
Note
The host name of the cluster should not be used by any existing client in your CommCell environment.
If you want to specify the IP address of the cluster group, ensure that the IP address is static.
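Before you continue, you can confirm from the nodes that the name resolves and is not already in use. A minimal check, assuming a hypothetical cluster group host name vcsgroup.example.com:

getent hosts vcsgroup.example.com

If the name resolves to the static IP address that you expect and no existing client uses it, proceed with that value.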
- Click Next.
- Review the cluster group information and click Finish.
  The Advanced Client Properties dialog box appears. If the dialog box does not open automatically, right-click the Cluster_Group_Client, and then click Properties. In the Client Computer Properties dialog box, click Advanced.
- On the Advanced Client Properties dialog box, click the Cluster Group Configuration tab.
- All the Unix clients available in the CommCell appear in the Available list. Select the physical computers (nodes) where you installed the necessary Agents, and then click Add > to move them to the Selected list.
- Click the Agents tab.
  From the Available list, select the Agents that you want to use in the cluster group client, and then click Add > to move them to the Selected list.
- Click the Job Configuration tab.
  In the Job Results Directory box, type the path for the job results directory. Ensure that the directory resides on a shared cluster file system.
Note
The Browse button does not work when you configure the cluster group client for the first time. After the initial configuration, you can use the button to update the directory (if required).
For MediaAgent and ContinuousDataReplicator, the job results directory path is also used for the Index Cache directory and the CDR Log directory, respectively, unless another directory location is provided.
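You can confirm from any node that the path you typed is on a cluster-managed mount. A quick sketch, assuming a hypothetical directory /shared/jobresults:

df -P /shared/jobresults
hares -list Type=Mount

The df output shows the file system backing the directory, and the hares command lists the Mount resources that VCS manages, so you can verify that the two match.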
Click OK.
- In the Information dialog box, click OK.
The cluster group client is successfully created.
Result
- A new application called GxClusterPlugin_service_group_name is created in the Veritas cluster, and dependency links are set up from this application to all of the resources in the group of type IP, mount point, and disk group. This ensures that the plug-in starts after all of the other resources have come online, and stops before any other resource goes offline during failovers.
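You can inspect the new resource and its dependency links with the standard VCS commands. A sketch, assuming a hypothetical service group named sg_app:

hagrp -resources sg_app
hares -dep GxClusterPlugin_sg_app

The first command lists every resource in the service group, and the second shows the parent/child dependency links that were created for the plug-in resource.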
The plug-in must be able to read/write the main.cmd file, which is typically located under /etc/VRTSvcs/conf/config. If the file is not present, run the following command:
hacf -cftocmd directory_path
After the command runs, the main.cmd file is created with the same ownership as that of the Commvault processes.
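For example, with the default configuration path (an assumption; substitute your actual directory):

hacf -cftocmd /etc/VRTSvcs/conf/config
ls -l /etc/VRTSvcs/conf/config/main.cmd

The ls output lets you confirm that the ownership of the generated file matches the account that runs the Commvault processes.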
- During a service group failover, the plug-in performs the following tasks (a sketch for exercising a failover follows this list):
  - Turns off the firewall configuration for the service group, and enables it on the active node.
  - Notifies the CommServe database of the node where the service group is active, for effective resource allocation.
  - If archiving/OnePass operations are configured, switches the monitoring mount points to the active node to handle stub recalls.
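To exercise these tasks in a controlled way, you can switch the service group manually and confirm where it comes online. A sketch, assuming a hypothetical service group sg_app and target node node2:

hagrp -switch sg_app -to node2
hagrp -state sg_app

After the switch completes, the state output should show the group online on node2, and the plug-in tasks above run as part of the failover.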