I have been running reverse incrementals for well over a year now without any issues. These jobs create per-VM backups in their respective repositories. I have the disk jobs configured to process our VMs in order of size, largest first.

I have tape jobs configured to write "Backup jobs" (not repositories or files) to tape daily, once the disk reverse incrementals are complete. They are configured for fulls, not incrementals. These have also been running without issue, until last night, when the job failed at around 80% asking for another tape.

There have been no changes to the backup jobs or the VMs being backed up, and there has been no noticeable growth in those VMs. When I investigated the space remaining on the tapes, there was still well over 500GB free on each tape, yet the job would not progress. The media pool used is configured for parallel processing of jobs using 2 drives, as well as parallel processing of chains within a single job.

I can see no reason why this job failed: there is ample capacity on the tapes, the jobs have not grown or changed, and they have been completing without incident for several months now.

This message was displayed after a total of 2.9TB of data had been written across the 2 tapes. This job writes 3.2TB of data across 2 tapes on a daily basis, leaving unused capacity of around 1.6TB across the two tapes. There is absolutely no reason for this job to have asked for another tape.
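As a quick sanity check on the figures above (the per-tape capacity is inferred from the post's numbers, not stated anywhere; this is just arithmetic, not anything Veeam reports):

```python
# Figures taken from the post, in TB.
usual_write = 3.2   # TB the job normally writes across 2 tapes per run
unused      = 1.6   # TB typically left free across those 2 tapes
written     = 2.9   # TB written when the job asked for a 3rd tape

# Implied combined capacity of the 2 tapes, and free space at the
# moment the job stalled.
total_cap = usual_write + unused
remaining = total_cap - written

print(f"Implied combined capacity: {total_cap:.1f} TB")   # ~4.8 TB
print(f"Free space when job stalled: {remaining:.1f} TB") # ~1.9 TB

# Consistent with the observation of "well over 500GB free on each tape".
assert remaining > 2 * 0.5
```

So by the job's own historical numbers, roughly 1.9TB should have been free across the two tapes when it asked for a third, which matches what the tape properties showed.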

I have not yet opened a ticket, as there is no "error" as such. At this point I was just looking for some input from the forum. If this fails again I will log it as a ticket, though.

Does the very same job ask for additional media from the pool, or is it a second or third job? If the latter, can you tell us what media set creation and retention settings are configured on the target media pool? Thanks.

This is a single job which hasn't grown and which has been running for almost a year now without issue. It asked for a 3rd tape when it usually completes successfully on 2 tapes with plenty of space remaining.

I re-ran this job without any changes as I could not see any reason for it to fail. The job has now completed successfully.

It's so strange, as I have had a few unexplained issues with Veeam backups to tape over the last 12 months, none of which I have found any reasonable explanation for. This is compounded by the fact that, when re-run, the jobs usually complete without issue.