Backup Failures
The following section provides information on troubleshooting backups.
Incremental Snapshot Backup Copies Are Failing
If this event is generated: "nasBackup: Exceeded maximum number of incremental backups. Must run full or differential backup", then you have performed the maximum number of consecutive incremental backups supported for your file server.
To fix this issue, modify your snapshot backup schedules so that a full or differential snap backup is performed before you reach the maximum number of consecutive incremental backups.
- For NetApp file servers with ONTAP versions prior to 8.3, the maximum number of consecutive incremental backups permitted after a full backup is 9; after a differential backup, the maximum is 8.
- For NetApp file servers with ONTAP 8.3 or higher, the maximum number of consecutive incremental backups permitted after a full backup is 32; after a differential backup, the maximum is 31.
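The limits above can be summarized in a small decision helper. This is an illustrative sketch only; the function names and the version-tuple convention are assumptions for the example, not part of any product API.

```python
# Illustrative helper for the consecutive-incremental limits described above.
# ontap_version is a (major, minor) tuple, e.g. (8, 3).

def max_consecutive_incrementals(ontap_version, last_non_incremental):
    """Return the maximum consecutive incrementals after the given backup type.

    last_non_incremental: 'full' or 'differential'
    """
    if ontap_version >= (8, 3):
        return 32 if last_non_incremental == "full" else 31
    return 9 if last_non_incremental == "full" else 8


def next_backup_level(ontap_version, last_non_incremental, incrementals_since):
    """Decide whether the next job may be incremental or must reset the chain."""
    limit = max_consecutive_incrementals(ontap_version, last_non_incremental)
    if incrementals_since >= limit:
        return "full or differential"  # chain limit reached; reset required
    return "incremental"
```

For example, on an ONTAP 7.3 file server whose last full backup has been followed by 9 incrementals, `next_backup_level((7, 3), "full", 9)` indicates that a full or differential backup is now required.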
NAS-attached libraries shared across a SAN
During configuration, every drive device must have a serial number. If automatic configuration does not populate a serial number for a drive, you must enter one manually. Each instance of a drive must have the same serial number.
Filtering data that consistently fails
Symptom
Some mailboxes or folders appear in the Items That Failed list when you run the Job History Report. This indicates that the mailboxes or folders are locked by the operating system or by another application and cannot be opened during the data protection operation.
Resolution
Filter the mailboxes or folders that consistently appear in the Items That Failed list in the Job History Report, and exclude them from future backup operations to avoid backup failures.
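The idea of filtering items that fail in every run can be sketched as a set intersection over recent job histories. This is an illustrative example only; the function name and the sample paths are hypothetical, and real exclusions are configured through the subclient filter settings.

```python
# Illustrative sketch: find items that failed in every recent job so they can
# be added to the subclient's exclusion filter. Not a product API.

def consistently_failing(failed_items_per_job):
    """failed_items_per_job: list of sets, one per Job History Report run."""
    if not failed_items_per_job:
        return set()
    consistent = set(failed_items_per_job[0])
    for failed in failed_items_per_job[1:]:
        consistent &= failed  # keep only items that failed in every run
    return consistent


# Hypothetical failed-item lists from three consecutive jobs:
history = [
    {"/vol/users/alice", "/vol/shared/tmp"},
    {"/vol/users/alice", "/vol/logs"},
    {"/vol/users/alice", "/vol/shared/tmp"},
]
# '/vol/users/alice' failed in every run, so it is a candidate for exclusion.
```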
Creation of volume snapshots fails due to busy LUNs
Snapshot Clone Dependency
If snapshots are created when a LUN is cloned:
- Delete snapshots in the reverse order in which they were created. If a busy snapshot is no longer mounted but is still shown as busy, you must delete every snapshot on that volume created while the busy snapshot was mounted before the busy snapshot itself can be deleted. See the ONTAP 7.3 note below to avoid this dependency.
- Do not manually create a snapshot of a volume while another snapshot of that volume is mounted; doing so creates this snapshot dependency.
For NetApp ONTAP version 7.3, there is an option that locks only the backing Snapshot copies for the active LUN clone. With this option enabled, when you delete the active LUN clone, you can delete the base Snapshot copy without first deleting all of the more recent backing Snapshot copies.
This behavior is not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is disabled, you must still delete all subsequent Snapshot copies before deleting the base Snapshot copy.
We recommend that you enable this option, but if any other applications use the LUN, review the documentation for this feature for other impacts. With this option enabled, if you delete the snapshot from which the LUN was originally cloned, you cannot use "snap restore" to restore the clone from one of the later snapshots. If this volume option is later turned off, you may have difficulty deleting snapshots because the dependencies are enforced again.
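The reverse-order deletion rule above can be sketched as follows. This is an illustrative example with hypothetical snapshot names; it models the ordering logic only and does not call any storage API.

```python
# Illustrative sketch of the deletion-order rule: when a snapshot is busy
# because of dependent (more recent) snapshots, delete the dependents
# newest-first, and only then delete the busy snapshot itself.

def deletion_order(snapshots_oldest_first, busy_snapshot):
    """Return the snapshots to delete, newest first, ending with the busy one."""
    idx = snapshots_oldest_first.index(busy_snapshot)
    # Every snapshot created after the busy one must be deleted first.
    dependents = snapshots_oldest_first[idx + 1:]
    return list(reversed(dependents)) + [busy_snapshot]


snaps = ["snap1", "snap2_busy", "snap3", "snap4"]
deletion_order(snaps, "snap2_busy")  # ['snap4', 'snap3', 'snap2_busy']
```

With the ONTAP 7.3 snapshot_clone_dependency option enabled, this dependency chain is not created in the first place, so only the busy snapshot itself would need to be deleted.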
Completed with one or more errors
Backup jobs from NDMP Agent are displayed as "Completed w/ one or more errors" in the Job History in the following cases:
- Backup context no longer exists on the file server. Backup cannot be restarted.
- Failing restarted backup of path [^1%s]. Failed to get context list from the file server.
- Data Loss detected. The number of tape blocks written as reported by NDMP tape server does not match the reported value for the actual tape drive.
- Failed to update the reference time. Failing path.
- Error committing chunks with the Job Manager. Failing path.
- Failing restarted backup of path. The restart string does not match the data backed up.
- Failing restarted backup of path. Unable to parse the restart string. If this error persists, please kill the backup and start a new one.
- Exceeded maximum number of incremental backups. Must run full or differential backup.
- EMC Celerra backup failed: probably an incorrect data backup path in subclient content.
- Error writing chunk trailer to media.
- Error marking the media full at end of media.
- ArchiveManager error marking chunk closed in the database.
- Failure to update restart string.
- Failed to update media information in the database.
- Error writing chunk header to NAS attached tape.
- Error enabling hardware encryption on NAS attached tape drive.
- Error mounting media in NAS attached tape drive.
- NDMP Server [<File Server>] was unable to back up path. It is likely this path does not exist.
- Unexpected NDMP error communicating with client [<File Server>].
- Unexpected NDMP error communicating with tape server [<File Server>].
- NDMP connection to host [<File Server>] failed. Verify if NDMP can be reached on this host.
- Could not position tape. Error returned [<File Server Error String>].
- <File Server>: tape server halted with internal error.
- Tape server halted with INTERNAL_ERROR and may not have received any data for the timeout period. Backup may be retried with NDMP_API DWORD additional setting nREADSOCKETTIMEOUT set to a larger value, in seconds.
- NDMP Server is reporting a write error: [NDMP_MOVER_HALT_MEDIA_ERROR].
- NDMP Server is reporting a write error: [NDMP_MOVER_PAUSE_MEDIA_ERROR].
- Received data halt with internal error. See nasBackup.log for problem details.
- Error updating the new chunk information in the database.
- Error write file mark to NAS attached media.
- Error sending backup file information to indexing.
- When using snapmirror-to-tape, a full volume must be backed up. Backup Path is not a full volume.
- NDMP authentication failed.
- Failed to start the NRS process on remote host [<MediaAgent>]
Full snap backup job fails when using newer version of MediaAgent
Full snap backup jobs fail when the subclient's previous backup job ran on an older version of the MediaAgent, and then the current job runs on a newer version of the MediaAgent. The backup fails because the index cannot be restored from an older version of the MediaAgent to a newer version of the MediaAgent. This might happen if multiple data paths are configured in the storage policy and MediaAgents of both versions can be used.
To resolve this issue, you must configure an additional setting on the newer version MediaAgent, and then remove the additional setting after the next full snap backup.
To add or edit an additional setting, follow the steps in Add or Modify an Additional Setting.
Use these arguments:
- In the Name box, enter sDontCopyPrevIndexOnFull.
- In the Category box, enter NAS.
- In the Type box, select STRING.
- In the Value box, enter Y.
Run a full snap backup, and then delete the additional setting before the next backup. Follow the steps in Delete an Additional Setting, selecting sDontCopyPrevIndexOnFull as the setting to delete.
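The add-then-remove workflow above can be modeled abstractly as follows. This is purely an illustrative sketch: it represents additional settings as a plain dictionary, whereas real settings are managed through the CommCell Console as described in Add or Modify an Additional Setting.

```python
# Illustrative model of the temporary additional-setting workflow.
# Not a product API; real settings are configured through the GUI.

def add_setting(settings, name, category, value_type, value):
    settings[name] = {"category": category, "type": value_type, "value": value}


def delete_setting(settings, name):
    settings.pop(name, None)


media_agent_settings = {}
# 1. Add the setting on the newer-version MediaAgent.
add_setting(media_agent_settings, "sDontCopyPrevIndexOnFull",
            "NAS", "STRING", "Y")
# 2. ... run the full snap backup here ...
# 3. Remove the setting before the next backup.
delete_setting(media_agent_settings, "sDontCopyPrevIndexOnFull")
```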