A Linux MediaAgent that can act as a File Recovery Enabler for Linux (FREL) is automatically configured to provide extended file system support for UNIX file systems. This enables the Linux MediaAgent to mount and recover virtual machine backup data for live browse and live file recovery operations, without requiring granular recovery metadata to be collected during backups.
A File Recovery Enabler for Linux can be used with any of the hypervisors supported by the Virtual Server Agent.
For hypervisors that support Linux proxies, the Virtual Server Agent role can also be enabled on the MediaAgent.
Conditions
UNIX MediaAgents that meet the following conditions are automatically configured as File Recovery Enablers for Linux:
- The machine has the MediaAgent package installed.

  For a list of the operating systems that are supported for MediaAgents, see MediaAgent System Requirements.

- The machine is a UNIX virtual or physical machine that runs one of the operating systems and kernels listed in System Requirements for Block-Level Backup for UNIX, and has both the MediaAgent and Virtual Server Agent packages installed.
- To use a RHEL 8 VM or CentOS 8 VM that is configured to use the UEFI Secure Boot method as a File Recovery Enabler for Linux (FREL), you must enroll the Commvault keys in the UEFI MOK (Machine Owner Key) list on the VM. For more information, see Use Commvault Driver Modules on a Linux Computer with UEFI Secure Boot Option Enabled.
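On a Secure Boot VM, you can check the Secure Boot state and the enrolled MOK list with the standard mokutil tool. This is a generic sketch, not a Commvault-specific procedure; searching for the string "commvault" in the enrolled-key list is an assumption, so inspect the full list if the search finds nothing.

```shell
# Sketch: report Secure Boot state and look for an enrolled Commvault key.
# The "commvault" search string is an assumption about the key's subject name.
check_mok() {
    if ! command -v mokutil >/dev/null 2>&1; then
        echo "mokutil not installed; install the mokutil package first"
        return 0
    fi
    mokutil --sb-state                             # is Secure Boot enabled?
    mokutil --list-enrolled | grep -i commvault \
        || echo "no Commvault key found in the MOK list"
}

check_mok
```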
- When the Commvault Communications Service (CVD) starts, the block-level driver (cvblk) loads automatically.

  To verify that the cvblk driver has loaded, run the following command on the MediaAgent:

  grep -i cvblk /var/log/commvault/Log_Files/cvfbr_validate.log

  The following output indicates that the driver loaded successfully:

  sLNFBR set to /opt/commvault/Base/libCvBlkFBR.so already cvblk module load success...
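As an additional check, you can look for the module directly in the kernel's loaded-module list. This is a sketch using standard Linux tools (the same data that lsmod reads), not a Commvault-specific command:

```shell
# Sketch: confirm the cvblk module is present in the running kernel.
check_cvblk() {
    if grep -qw '^cvblk' /proc/modules; then   # /proc/modules lists loaded modules
        echo "cvblk is loaded"
    else
        echo "cvblk is NOT loaded"
    fi
}

check_cvblk
```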
- The Job Results folder must meet the following conditions:

  - Be on an ext4 or XFS file system partition.
  - Have at least 30 GB of free space.
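Both Job Results folder conditions can be checked with standard tools. A minimal sketch, assuming GNU df; the path passed at the end is only an example, so substitute the Job Results path configured on your MediaAgent:

```shell
# Sketch: verify that a directory is on ext4 or XFS and has >= 30 GB free.
check_job_results() {
    dir=$1
    fstype=$(df --output=fstype "$dir" | tail -1 | tr -d ' ')
    free_kb=$(df -k --output=avail "$dir" | tail -1 | tr -d ' ')
    free_gb=$((free_kb / 1024 / 1024))          # convert 1K blocks to GB
    echo "file system: $fstype, free: ${free_gb} GB"
    case $fstype in
        ext4|xfs) ;;
        *) echo "WARNING: $fstype is not ext4 or XFS" ;;
    esac
    [ "$free_gb" -ge 30 ] || echo "WARNING: less than 30 GB free"
}

# Example: check the root file system (use your Job Results path instead)
check_job_results /
```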
- Machines that are automatically configured as File Recovery Enablers are available when specifying the default File Recovery Enabler for a Virtual Server instance, or as an advanced option for a browse and restore operation.
Additional Configuration
- For all hypervisors, the Logical Volume Management (lvm) package must be installed on MediaAgents that are configured to act as File Recovery Enablers.
- To browse files from a file system that is not supported natively by the Linux machine, install the packages required by the file system used in the guest VM.
- For OpenStack, the following packages must be installed on MediaAgents that are configured to act as File Recovery Enablers, so that the MediaAgent can browse base images or instances created from images:

  - QEMU disk image utility (qemu-img)
  - libguestfs
  - libguestfs-tools
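You can verify that the OpenStack prerequisites listed above are present before attempting a browse. A sketch, assuming a RHEL-family MediaAgent with rpm (use dpkg -s on Debian-family systems):

```shell
# Sketch: report which OpenStack live-browse prerequisites are installed.
check_openstack_pkgs() {
    for pkg in qemu-img libguestfs libguestfs-tools; do
        if rpm -q "$pkg" >/dev/null 2>&1; then
            echo "$pkg: installed"
        else
            echo "$pkg: MISSING"
        fi
    done
}

check_openstack_pkgs
```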
To specify an FBR mount point for file recovery operations that is different from the Job Results folder, perform the following steps:
-
Add the following registry key for /etc/CommVaultRegistry/Galaxy/Instance001/Session/.properties:
dFBRDIR
Path for the FBR cache mount point
-
To apply the changes, restart CVD services.
-
Check the /var/log/commvault/Log_Files/cvfbr_validate.log file to verify that the FBR mount point is validated.
-
Restart services on the MediaAgent to update the list of File Recovery Enablers that is displayed in the CommCell Console.
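Step 1 can be scripted. The sketch below assumes the registry file stores one "key value" pair per line; verify the file's actual format on your MediaAgent before editing it, and note that /mnt/fbr_cache is only an example mount point:

```shell
# Sketch: append the dFBRDIR key to a registry properties file if it is
# not already set. Assumes a "key value" line format (verify first).
set_fbr_dir() {
    props=$1
    fbr_dir=$2
    grep -q '^dFBRDIR' "$props" 2>/dev/null \
        || echo "dFBRDIR $fbr_dir" >> "$props"
}

# Example (run on the MediaAgent as root):
# set_fbr_dir /etc/CommVaultRegistry/Galaxy/Instance001/Session/.properties /mnt/fbr_cache
```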
File System Support
The File Recovery Enabler supports live browse and file recovery for the following file systems:
- ext2
- ext3
- ext4
- XFS
- JFS
- HFS
- HFS Plus
- Btrfs
Notes
- Live browse and file recovery operations are not supported for XFS realtime subvolumes.
- Live browse and recovery are supported for subvolumes of Btrfs file systems.
Troubleshooting
Symptom
Live browse of files and folders on the guest VMs might fail as a result of UUID conflicts with physical volumes (PVs) on the FREL.
The fbr.log file contains the following error:
Device mismatch detected
Cause
If the virtual machine where the File Recovery Enabler for Linux is installed is based on the same VM template or image as the guest virtual machines or instances that the FREL is browsing, the physical volumes (PVs) on the FREL can have the same UUIDs as the PVs on the guests, and live browse of files and folders on the guest VMs fails with this conflict.
Resolution
To resolve this issue, change the UUIDs of the PVs on the FREL:
1. List all the physical volumes:

   pvs -o pv_name,pv_uuid --noheadings

2. Change the UUID for each PV listed in step 1:

   pvchange -f --uuid pv_name --config "global {activation=0}"
3. List all the volume groups:

   vgs -o vg_name,vg_uuid,pv_name --noheadings

4. For each volume group listed in step 3, run the following commands to change the UUID, rename the volume group, and activate the changed configuration:

   vgchange --uuid vg_name --config "global {activation=0}"
   vgrename old_vg_name new_vg_name
   vgchange -ay new_vg_name
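The per-PV and per-VG commands above lend themselves to a loop over the LVM inventory. The sketch below is a dry run: it prints each command instead of executing it, and appending "_frel" to each VG name is an illustrative naming choice, not a Commvault requirement. Review the output before removing the echo prefix.

```shell
# Dry-run sketch: print the UUID-change and rename commands for every PV
# and VG on this host. Change RUN to '' (empty) only after reviewing.
RUN=echo

plan_uuid_changes() {
    pvs -o pv_name --noheadings 2>/dev/null | while read -r pv; do
        $RUN pvchange -f --uuid "$pv" --config "global {activation=0}"
    done
    vgs -o vg_name --noheadings 2>/dev/null | while read -r vg; do
        $RUN vgchange --uuid "$vg" --config "global {activation=0}"
        $RUN vgrename "$vg" "${vg}_frel"    # "_frel" suffix is illustrative
        $RUN vgchange -ay "${vg}_frel"
    done
}

plan_uuid_changes
```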
5. If there are logical volumes (LVs) on the volume groups that were renamed in the preceding step, the device paths for the LVs typically contain the VG name (for example, /dev/mapper/vg_name-lv_name). If /etc/fstab contains those device paths, update them to use the new volume group name. For example, the following command could be used for each volume group, specifying the old and new VG names:

   sed -i -e 's/old_vg_name/new_vg_name/g' /etc/fstab

   As a result of this command, a device path such as /dev/mapper/old_vg_name-lv_name is renamed to /dev/mapper/new_vg_name-lv_name.
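Because an in-place sed on /etc/fstab can leave an unbootable system if the pattern matches more than intended, it is safer to back up the file and preview the substitution first. A generic sketch (old_vg and new_vg are placeholder names):

```shell
# Sketch: back up a fstab-style file, then print the substitution result
# without modifying the file (no -i flag).
preview_fstab_rename() {
    fstab=$1; old_vg=$2; new_vg=$3
    cp "$fstab" "${fstab}.bak"                  # keep a backup copy
    sed -e "s/${old_vg}/${new_vg}/g" "$fstab"   # preview only
}

# Example (apply with sed -i only after checking the preview):
# preview_fstab_rename /etc/fstab old_vg_name new_vg_name
```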
6. Rename the bootloader entries in /boot/grub2/grub.cfg, /boot/grub2/grubenv, and /etc/default/grub. For example, the following commands could be used for each volume group, specifying the old and new VG names:

   sed -i -e 's/old_vg_name/new_vg_name/g' /boot/grub2/grub.cfg
   sed -i -e 's/old_vg_name/new_vg_name/g' /boot/grub2/grubenv
   sed -i -e 's/old_vg_name/new_vg_name/g' /etc/default/grub
7. Reboot the FREL machine and verify that all Commvault services are running.