Automatic Configuration of a MediaAgent as a Linux Access Node

A Linux MediaAgent that is able to act as a Linux access node is automatically configured to provide extended file system support for UNIX file systems. This enables the Linux MediaAgent to mount and recover virtual machine backup data for live browse and live file recovery operations, without requiring that granular recovery metadata be collected during backups.

A Linux access node can be used with any of the hypervisors supported by the Virtual Server Agent.

For hypervisors that support Linux proxies, the Virtual Server Agent role can also be enabled on the MediaAgent.

Conditions

UNIX MediaAgents that meet the following conditions are automatically configured as Linux access nodes:

  • The machine has the MediaAgent package installed.

    For a list of the operating systems supported by Linux access nodes, see Operating System Support in Converting a Linux MediaAgent to a Linux Access Node.

  • A UNIX virtual or physical machine running one of the operating systems and kernels listed in System Requirements for Block-Level Backup for UNIX.

    The machine must have both MediaAgent and Virtual Server Agent packages installed.

  • To use a RHEL 8 or 9 VM or Oracle Linux VM that is configured to use the UEFI Secure Boot method as a Linux access node, you must enroll Commvault keys with the UEFI MOK (Machine Owner Key) list on the VM. For more information, see Use Commvault Driver Modules on a Linux Computer with UEFI Secure Boot Option Enabled.
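
    A minimal sketch of the enrollment, assuming the Commvault public key has been exported to a local file (the file name cv_secure_boot_key.der is only a placeholder; see the linked article for the actual key location and the complete procedure):

    # Queue the key for enrollment in the MOK list (prompts for a one-time password)
    mokutil --import cv_secure_boot_key.der
    # Reboot and complete the enrollment in the MOK Manager screen
    reboot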

  • When the Commvault Communications Service (CVD) starts, the block level driver (cvblk) loads automatically.

    To verify that the cvblk driver has loaded, run the following command on the MediaAgent:

    cat /var/log/commvault/Log_Files/cvfbr_validate.log | grep -i cvblk

    The following output indicates that the driver loaded successfully:

    sLNFBR set to /opt/commvault/Base/libCvBlkFBR.so already
    cvblk module load success...
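
    You can also confirm directly that the kernel module is loaded (lsmod is a standard utility; cvblk is the module name noted above):

    # Lists the cvblk module if it is currently loaded in the kernel
    lsmod | grep -i cvblk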
  • The Job Results folder must meet the following conditions:

    • Be on an ext4 or XFS file system partition.

    • Have at least 30 GB free space.
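
    To confirm both conditions on the MediaAgent, a check such as the following can be used (the path shown is only an example; substitute the actual Job Results directory of your installation):

    # Shows the file system type (should be ext4 or xfs) and the available space
    df -hT /opt/commvault/iDataAgent/jobResults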

Machines that are automatically configured as Linux access nodes are available when specifying the default Linux access node for a Virtual Server instance or as an advanced option for a browse and restore operation.

Additional Configuration

  • For all hypervisors, the Logical Volume Manager (LVM) package must be installed on MediaAgents that are configured to act as Linux access nodes. On MediaAgents and VSA Linux access nodes that are used for live file browse and restore or file indexing operations, set event_activation to 0 in /etc/lvm/lvm.conf.
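
    For reference, a minimal sketch of the setting in /etc/lvm/lvm.conf (on most distributions the option belongs to the global section; keep the other existing settings in that section unchanged):

    global {
        # Disable event-based (udev) autoactivation on the access node
        event_activation = 0
    }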

  • To browse files from a file system other than those supported by the Linux machine, you must install the packages required by the file system used in the guest VM.

  • For OpenStack, the following packages must be installed on MediaAgents that are converted to Linux access nodes, to enable the MediaAgent to browse base images or instances created from images:

    • QEMU disk image utility (qemu-img)

    • libguestfs

    • libguestfs-tools
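
    On an RPM-based distribution, these packages might be installed with a command like the following (package names can vary by distribution and version):

    # Installs the QEMU image utility and the libguestfs packages
    yum install -y qemu-img libguestfs libguestfs-tools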

  • To specify an FBR mount point for file recovery operations that is different from the Job Results folder, perform the following steps:

    1. Add the following registry key to the /etc/CommVaultRegistry/Galaxy/Instance001/Session/.properties file:

      dFBRDIR: Path for the FBR cache mount point

    2. To apply the changes, restart CVD services.

    3. Check the /var/log/commvault/Log_Files/cvfbr_validate.log file to verify that the FBR mount point is validated.

    4. Restart services on the MediaAgent to update the list of Linux access nodes that is displayed in the CommCell Console.
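
    For example, an entry such as the following could be added to the .properties file (the mount point /mnt/fbr_cache is only a placeholder; match the key/value format of the existing entries in the file):

    dFBRDIR /mnt/fbr_cache

    The services can then be restarted with the commvault command (assuming a default, single-instance installation):

    commvault restart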

File System Support

Linux access nodes support live browse and file recovery for the following file systems:

  • ext2

  • ext3

  • ext4

  • XFS

  • JFS

  • HFS

  • HFS Plus

  • Btrfs

Notes

  • Live browse and file recovery operations are not supported for XFS realtime subvolumes.

  • Live browse and recovery operations are supported for subvolumes of Btrfs file systems.

Troubleshooting

Symptom

Live browse of files and folders on the guest VMs might fail as a result of UUID conflicts with physical volumes (PVs) on the Linux access node. The following symptoms indicate this issue:

  • The fbr.log file contains the following error:

    Device mismatch detected
  • The Linux access node might become unresponsive.

Cause

If the virtual machine where a Linux access node is installed is based on the same VM template or image as the guest virtual machines or instances that the Linux access node is browsing, then live browse of files and folders on the guest VMs might fail as a result of UUID conflicts with physical volumes (PVs) on the Linux access node.

Resolution

This issue is resolved in Commvault Platform Release 2024 (11.34), so update the CommServe server, the MediaAgent, and the Linux access node to that release.

If you are unable to update to CPR 2024, you can resolve this issue by changing the UUIDs of the PVs on the Linux access node as follows:

  1. List all the physical volumes:

    pvs -o pv_name,pv_uuid --noheadings
  2. Change the UUID for each PV listed in step 1.

    pvchange -f --uuid pv_name --config "global {activation=0}"
  3. List all the volume groups:

    vgs -o vg_name,vg_uuid,pv_name --noheadings
  4. For each volume group listed in step 3, run the following commands to change the UUID, rename the volume group, and activate the changed configuration:

    vgchange --uuid vg_name --config "global {activation=0}"
    vgrename old_vg_name new_vg_name
    vgchange -ay new_vg_name
  5. If there are logical volumes (LVs) on the volume groups that were renamed in the preceding step, the device paths for the LVs typically contain the VG name (for example, /dev/vg_name/lv_name or /dev/mapper/vg_name-lv_name). If /etc/fstab contains those device paths, update the paths to use the new volume group name. For example, the following command could be used for each volume group, specifying the old and new VG names:

    sed -i -e 's/old_vg_name/new_vg_name/g' /etc/fstab

    As a result of this command, a device path such as /dev/mapper/old_vg_name-lv_name would be updated to /dev/mapper/new_vg_name-lv_name.

  6. Rename bootloader entries in /boot/grub2/grub.cfg, /boot/grub2/grubenv, and /etc/default/grub. For example, the following commands could be used for each volume group, specifying the old and new VG names:

    sed -i -e 's/old_vg_name/new_vg_name/g' /boot/grub2/grub.cfg
    sed -i -e 's/old_vg_name/new_vg_name/g' /boot/grub2/grubenv
    sed -i -e 's/old_vg_name/new_vg_name/g' /etc/default/grub
  7. For UEFI and Secure Boot enabled machines, rename the bootloader entries in /boot/efi/EFI/redhat/grub.cfg and /boot/efi/EFI/redhat/grubenv. For example, the following commands could be used for each volume group, specifying the old and new VG names (shown here for the redhat vendor folder):

    sed -i -e 's/old_vg_name/new_vg_name/g' /boot/efi/EFI/redhat/grub.cfg
    sed -i -e 's/old_vg_name/new_vg_name/g' /boot/efi/EFI/redhat/grubenv

    Note

    Modify the files under the EFI vendor folder that corresponds to your operating system (for example, /boot/efi/EFI/redhat for Red Hat Enterprise Linux).

  8. Reboot the Linux access node machine and verify that all Commvault services are running.
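
    A quick way to perform that verification is a service status check (the commvault command is available on the access node when the software is installed in the default instance):

    # Lists the Commvault services and whether each one is running
    commvault status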
