Backups fail to attach cloned disks with the error "Unable to attach disks for virtual machine [name]."

Because of a bug in Oracle Linux 7.4 and Oracle Linux 7.5, Oracle VM backups might fail to attach cloned disks.


Oracle VM backups with the Virtual Server Agent (VSA) might fail to attach cloned disks.

The following error appears in the vsbkp.log on the VSA proxy:

Unable to attach disks for virtual machine [name].


This issue occurs because the disk attach operation during a backup automatically activates volume groups and logical volumes on the cloned disks. Because of a bug in Oracle Linux 7.4 and Oracle Linux 7.5, this happens even if you have disabled lvmetad. As a result, the backup process cannot detach and clean up the cloned disks, and the disk attach operation for the next backup fails.
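As a quick check on the VSA proxy, you can list the activation state of logical volumes; leftover active volumes from cloned disks indicate that the auto-activation described above occurred. This is an illustrative sketch for diagnosis only, and the volume group name in the deactivation command is a placeholder:

```shell
# List volume groups and logical volumes with their activation state.
# Active logical volumes belonging to cloned disks point to the
# auto-activation problem described above.
lvs -o vg_name,lv_name,lv_active --noheadings

# Manually deactivate a leftover volume group so that the cloned disks
# can be detached (<vg_name> is a placeholder for the affected group):
# vgchange -an <vg_name>
```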

The automatic activation occurs because the default udev rules file (/usr/lib/udev/rules.d/69-dm-lvm-metad.rules), which is installed as part of Oracle Linux 7.4 and Oracle Linux 7.5, includes an extra vgchange action under the lvm_scan label.

Note: The extra vgchange action is triggered on the disk ADD event, after the pvscan action. The pvscan action itself does not contribute to this issue.


Oracle recommends updating the VSA proxy to the most recent Oracle Linux 7.6 release and its LVM2 package, which do not include this rule.

For environments that remain on Oracle Linux 7.4 or Oracle Linux 7.5, perform the following steps as a workaround to disable the extra action:

  1. Under /etc/udev/rules.d, create a copy of the /usr/lib/udev/rules.d/69-dm-lvm-metad.rules file.

    Note: Do not modify the original file.
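Assuming the stock rules file is in place, step 1 is a single copy. udev gives files under /etc/udev/rules.d precedence over same-named files under /usr/lib/udev/rules.d, so the copy overrides the original without modifying it:

```shell
# Copy the stock rules file; the copy under /etc/udev/rules.d takes
# precedence over the original, which stays unmodified.
cp /usr/lib/udev/rules.d/69-dm-lvm-metad.rules \
   /etc/udev/rules.d/69-dm-lvm-metad.rules
```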

  2. In the copy (/etc/udev/rules.d/69-dm-lvm-metad.rules), comment out the line under the lvm_scan label that contains the extra vgchange action:


    The following example shows the changed line at the bottom of the listing:


    # The table below summarises the situations in which we reach the LABEL="lvm_scan".
    # Marked by X, X* means only if the special dev is properly set up.
    # The artificial ADD is supported for coldplugging. We avoid running the pvscan
    # on artificial CHANGE so there's no unexpected autoactivation when WATCH rule fires.
    # N.B. MD and loop never actually reaches lvm_scan on REMOVE as the PV label is gone
    # within a CHANGE event (these are caught by the "LVM_PV_GONE" rule at the beginning).
    #        | real ADD | real CHANGE | artificial ADD | artificial CHANGE | REMOVE
    # =============================================================================
    #  DM    |          |      X      |       X*       |                   |   X
    #  MD    |          |      X      |       X*       |                   |
    #  loop  |          |      X      |       X*       |                   |
    #  other |    X     |             |       X        |                   |   X
    ACTION!="remove", ENV{LVM_PV_GONE}=="1", RUN+="/usr/bin/systemd-run /usr/sbin/lvm pvscan --cache $major:$minor", GOTO="lvm_end"
    ENV{ID_MODEL}="LVM PV $env{ID_FS_UUID_ENC} on /dev/$name"
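If you prefer to script the edit in step 2, a sed substitution can prefix the matching line with a comment character. The rule text below is a hypothetical stand-in used only to demonstrate the substitution; in practice, run the sed command against /etc/udev/rules.d/69-dm-lvm-metad.rules and match the actual vgchange line in your copy of the file:

```shell
# Stand-in for /etc/udev/rules.d/69-dm-lvm-metad.rules, containing a
# hypothetical vgchange action under the lvm_scan label (the real line
# in your copy will differ).
rules_file=$(mktemp)
printf '%s\n' 'LABEL="lvm_scan"' 'RUN+="/usr/sbin/lvm vgchange -a ay"' > "$rules_file"

# Comment out any line that runs vgchange (GNU sed in-place edit).
sed -i '/vgchange/s/^/# /' "$rules_file"

cat "$rules_file"
```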

  3. Reload the customized udev rules file and retrigger device events by running the following commands:

    udevadm control --reload ; udevadm trigger
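To confirm that udev now reads the copy under /etc/udev/rules.d rather than the original, you can run udevadm in test mode against a block device; udevadm test logs each rules file it parses. The device path below is an example; substitute a real disk on the proxy:

```shell
# After the reload, the 69-dm-lvm-metad.rules entry in the test output
# should come from /etc/udev/rules.d, not /usr/lib/udev/rules.d.
# /sys/class/block/sda is an example device path.
udevadm test /sys/class/block/sda 2>&1 | grep '69-dm-lvm-metad'
```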

Last modified: 3/11/2019 4:03:51 PM