A Linux MediaAgent that can act as a Linux access node is automatically configured to provide extended file system support for UNIX file systems. This enables the Linux MediaAgent to mount and recover virtual machine backup data for live browse and live file recovery operations, without requiring that granular recovery metadata be collected during backups.
A Linux access node can be used with any of the hypervisors supported by the Virtual Server Agent.
For hypervisors that support Linux proxies, the Virtual Server Agent role can also be enabled on the MediaAgent.
Conditions
UNIX MediaAgents that meet the following conditions are automatically configured as Linux access nodes:
- The machine has the MediaAgent package installed.
  For a list of the operating systems supported by Linux access nodes, see Operating System Support in Converting a Linux MediaAgent to a Linux Access Node.
- The machine is a UNIX virtual or physical machine running one of the operating systems and kernels listed in System Requirements for Block-Level Backup for UNIX.
  The machine must have both the MediaAgent and Virtual Server Agent packages installed.
- To use a RHEL 8 or 9 VM or an Oracle Linux VM that is configured to use the UEFI Secure Boot method as a Linux access node, you must enroll Commvault keys in the UEFI MOK (Machine Owner Key) list on the VM. For more information, see Use Commvault Driver Modules on a Linux Computer with UEFI Secure Boot Option Enabled.
- When the Commvault Communications Service (CVD) starts, the block-level driver (cvblk) loads automatically.
  To verify that the cvblk driver has loaded, run the following command on the MediaAgent:
  cat /var/log/commvault/Log_Files/cvfbr_validate.log | grep -i cvblk
  The following output indicates that the driver loaded successfully:
  sLNFBR set to /opt/commvault/Base/libCvBlkFBR.so already cvblk module load success...
- The Job Results folder must meet the following conditions (see the verification example below):
  - Be on an ext4 or XFS file system partition.
  - Have at least 30 GB of free space.
Machines that are automatically configured as Linux access nodes are available when specifying the default Linux access node for a Virtual Server instance or as an advanced option for a browse and restore operation.
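To verify that the Job Results folder meets both conditions, check the file system type and available space of the partition that hosts it. The path in the following command is a placeholder; substitute the Job Results directory configured on your MediaAgent:
df -hT /path/to/JobResults
The Type column must show ext4 or xfs, and the Avail column must show at least 30G.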
Recommendation: Use the Commvault-Provided Access Node OVA for Configuring a Linux Access Node
Using the Commvault-provided access node OVA for configuring a Linux access node is strongly recommended, because the OVA provides the optimal system settings and kernel version.
Additional Configuration if Not Using the Commvault-Provided Access Node OVA
If using the Commvault OVA is not an option and you must use a custom-configured node as a Linux access node, make the following additional changes on the node to ensure correct functionality (see the example excerpt after this list):
- Set use_devicesfile to 0 in /etc/lvm/lvm.conf.
- Set event_activation to 0 in /etc/lvm/lvm.conf. (For RHEL, this applies only to RHEL 8.x.)
- Set auto_activation_volume_list in /etc/lvm/lvm.conf to contain the root volume as well as the other volumes used by the Commvault software (installation folder, DDB, index, and Job Results). (For RHEL, this applies only to RHEL 8.x.)
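For reference, the following excerpt sketches how these settings might appear in the standard sections of /etc/lvm/lvm.conf. The volume group and logical volume names in auto_activation_volume_list are placeholders; replace them with the names used for the root volume and the Commvault volumes on your node:
devices {
    use_devicesfile = 0
}
global {
    event_activation = 0
}
activation {
    # Placeholder names; list the root VG and the VG/LV entries that hold the
    # Commvault installation folder, DDB, index, and Job Results.
    auto_activation_volume_list = [ "rootvg", "cvvg/cv_install", "cvvg/cv_ddb", "cvvg/cv_index", "cvvg/cv_jobresults" ]
}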
Note
- For all hypervisors, the Logical Volume Management (lvm) package needs to be installed on MediaAgents that are configured to act as File Recovery Enablers.
- To browse files from a file system other than the file systems supported by the Linux machine, you must install the packages required by the file system used in the guest VM.
- For OpenStack, the following packages must be installed on MediaAgents that are converted to act as File Recovery Enablers, to enable the MediaAgent to browse base images or instances created from images (see the example installation command after these notes):
  - QEMU disk image utility (qemu-img)
  - libguestfs
  - libguestfs-tools
- To specify an FBR mount point for file recovery operations that is different from the Job Results folder, perform the following steps:
  - Add the following registry key to /etc/CommVaultRegistry/Galaxy/Instance001/Session/.properties:
    dFBRDIR
    Set the value to the path for the FBR cache mount point.
  - To apply the changes, restart the CVD services.
  - Check the /var/log/commvault/Log_Files/cvfbr_validate.log file to verify that the FBR mount point is validated.
  - Restart services on the MediaAgent to update the list of File Recovery Enablers that is displayed in the CommCell Console.
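As an illustration of the OpenStack note above, on a RHEL-family MediaAgent the required packages could be installed with a command such as the following; package names and the package manager can vary by distribution and repository configuration:
yum install qemu-img libguestfs libguestfs-tools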
File System Support
Linux access nodes support live browse and file recovery for the following file systems:
- ext2
- ext3
- ext4
- XFS
- JFS
- HFS
- HFS Plus
- Btrfs
Notes
- Live browse and file recovery operations are not supported for XFS realtime subvolumes.
- Live browse and recovery are supported for subvolumes of Btrfs file systems.
Troubleshooting
Symptom
Live browse of files and folders on the guest VMs might fail as a result of UUID conflicts with physical volumes (PVs) on the Linux access node. Typical symptoms include the following:
- The fbr.log file contains the following error (see the example check after this list):
  Device mismatch detected
- The Linux access node might become unresponsive.
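To check for this error quickly, search the FBR log on the Linux access node. The path below assumes the default log directory used elsewhere in this article:
grep -i "Device mismatch detected" /var/log/commvault/Log_Files/fbr.log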
Cause
If the virtual machine where the Linux access node is installed is based on the same VM template or image as the guest virtual machines or instances that the access node browses, the PVs on the access node can have the same UUIDs as the PVs in the guest VMs, which causes live browse of files and folders on the guest VMs to fail.
Resolution
This issue is resolved in Commvault Platform Release 2024 (11.34), so update the CommServe server, the MediaAgent, and the Linux access node to that release.
If you are unable to update to CPR 2024, you can resolve this issue by changing the UUIDs of the PVs on the Linux access node as follows:
1. List all the physical volumes:
   pvs -o pv_name,pv_uuid --noheadings
2. Change the UUID for each PV listed in step 1:
   pvchange -f --uuid pv_name --config "global {activation=0}"
3. List all the volume groups:
   vgs -o vg_name,vg_uuid,pv_name --noheadings
4. Run the following commands for each of the volume groups listed in step 3 to change its UUID, rename it, and activate the renamed volume group:
   vgchange --uuid vg_name --config "global {activation=0}"
   vgrename old_vg_name new_vg_name
   vgchange -ay new_vg_name
5. If there are logical volumes (LVs) on the volume groups that were renamed in the preceding step, the device paths for the LVs typically contain the VG name (for example, /dev/mapper/vg_name-lv_name). If /etc/fstab contains those device paths, update the paths to use the new volume group name. For example, the following command could be used for each volume group, specifying the old and new VG names:
   sed -i -e 's/old_vg_name/new_vg_name/g' /etc/fstab
   As a result of this command, a device path such as /dev/mapper/old_vg_name-lv_name becomes /dev/mapper/new_vg_name-lv_name.
6. Rename the bootloader entries in /boot/grub2/grub.cfg, /boot/grub2/grubenv, and /etc/default/grub. For example, the following commands could be used for each volume group, specifying the old and new VG names:
   sed -i -e 's/old_vg_name/new_vg_name/g' /boot/grub2/grub.cfg
   sed -i -e 's/old_vg_name/new_vg_name/g' /boot/grub2/grubenv
   sed -i -e 's/old_vg_name/new_vg_name/g' /etc/default/grub
7. For UEFI and Secure Boot enabled machines, rename the bootloader entries in /boot/efi/EFI/redhat/grub.cfg and /boot/efi/EFI/redhat/grubenv. For example, the following commands could be used for each volume group, specifying the old and new VG names for the redhat vendor directory:
   sed -i -e 's/old_vg_name/new_vg_name/g' /boot/efi/EFI/redhat/grub.cfg
   sed -i -e 's/old_vg_name/new_vg_name/g' /boot/efi/EFI/redhat/grubenv
   Note: Modify the files under the EFI vendor directory that matches your distribution; the redhat directory shown here applies to RHEL.
8. Reboot the Linux access node machine and verify that all Commvault services are running.
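As a compact illustration of the preceding steps, the following sketch assumes a single physical volume /dev/sdb1 in a volume group named vg_sys that is renamed to vg_sys_new on a BIOS-boot machine (step 7 is skipped); the device, volume group names, and bootloader paths are hypothetical and must be adjusted to match your node:
# Step 1: list the PVs and their UUIDs
pvs -o pv_name,pv_uuid --noheadings
# Step 2: assign a new UUID to each PV (here, a single PV /dev/sdb1)
pvchange -f --uuid /dev/sdb1 --config "global {activation=0}"
# Step 3: list the VGs
vgs -o vg_name,vg_uuid,pv_name --noheadings
# Step 4: assign a new UUID, rename the VG, and activate it under the new name
vgchange --uuid vg_sys --config "global {activation=0}"
vgrename vg_sys vg_sys_new
vgchange -ay vg_sys_new
# Steps 5-6: update fstab and bootloader entries that reference the old VG name
sed -i -e 's/vg_sys/vg_sys_new/g' /etc/fstab
sed -i -e 's/vg_sys/vg_sys_new/g' /boot/grub2/grub.cfg
sed -i -e 's/vg_sys/vg_sys_new/g' /boot/grub2/grubenv
sed -i -e 's/vg_sys/vg_sys_new/g' /etc/default/grub
# Step 8: reboot and confirm that all Commvault services are running
reboot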