The bottleneck stats confirm the issue is the target storage speed: 99% on target means the writer component spent most of its time performing I/O to the backup files.
I'd recommend reviewing the white paper "Veeam and StoreOnce Configuration Guide", as it contains best practices. Thanks!

You can contact the support team and let them take a closer look at the infrastructure; they might also run some tests against your environment and give helpful advice.
Please do not forget to post the case ID in this thread so we can track it internally. Thanks!

You will not get close to what the link allows with a single-stream backup to a dedupe appliance. With several parallel jobs, or with per-VM backup chains enabled on the repository (in case you're backing up multiple VMs), you can work around the storage ingest rate limit of a single write stream and get better performance.

Hi, I've been battling this for around 2 months with HPE and Veeam tech support.
One issue was that VMs with many hard disks get really slow; e.g., a file server with 10 separate VMDK files can take hours at 7MB/s per disk, yet a VM with a single disk can transfer at >500MB/s over the same infrastructure (COFC, 8Gb FC).

Are you running newer firmware on the SO? There have been some improvements in the newer versions, including the number of open files, IIRC. I'm running the newest myself with no issues.

I would say your times are very similar to mine, and I have a very similar setup to yours. You could try changing your jobs to land on the CIFS configuration on the SO, but you will not see the same dedupe and you will not be able to do it over fibre. I tried doing this over a 10GbE network and was not pleased with the results.

There is a lot of extra processing that goes into deduplicating the data. This is the trade-off for backup performance. I'm currently in a bit of a redesign with my storage: I'm going to land my backups on local disk and then run a copy job to our local SO. Are you copying your data to a remote SO? I'm finding that copying the data from a local SO to a remote SO through the Veeam software gets worse compression. Veeam support recommended I change to landing backups locally as well.

splatt wrote:Hi, I've been battling this for around 2 months with HPE and Veeam tech support.
One issue was that VMs with many hard disks get really slow; e.g., a file server with 10 separate VMDK files can take hours at 7MB/s per disk, yet a VM with a single disk can transfer at >500MB/s over the same infrastructure (COFC, 8Gb FC).

Never seen performance that bad. Do you have too many concurrent tasks enabled on your appliance?

splatt wrote:One issue was that VMs with many hard disks get really slow; e.g., a file server with 10 separate VMDK files can take hours at 7MB/s per disk, yet a VM with a single disk can transfer at >500MB/s over the same infrastructure (COFC, 8Gb FC).

Basically, backup speed is expectedly higher if you back up multiple VMs, since each VM is processed in a separate write stream, while all disks of a single VM share a single stream, which is limited to ~150MB/s. That said, in your case the speed is indeed very slow (7MB/s).
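To put rough numbers on the single-stream limit described above, here is a quick back-of-the-envelope sketch. The ~150MB/s per-stream figure and the 7MB/s per-disk rate come from this thread; the 2TB total size and the helper function are made-up illustration values, not anything from splatt's actual environment:

```python
# Rough estimate of backup time when all disks of one VM share a single
# write stream to the dedupe appliance. Sizes below are hypothetical.
STREAM_LIMIT_MBPS = 150        # approximate single-stream ingest limit
OBSERVED_MBPS_PER_DISK = 7     # rate reported for the 10-disk file server

def hours_to_back_up(total_gb: float, rate_mbps: float) -> float:
    """Hours needed to move total_gb of data at rate_mbps (MB/s)."""
    return (total_gb * 1024) / rate_mbps / 3600

# Example: a 2 TB file server split across 10 VMDKs, all in one stream.
best_case = hours_to_back_up(2048, STREAM_LIMIT_MBPS)          # stream cap
observed = hours_to_back_up(2048, OBSERVED_MBPS_PER_DISK * 10)  # 70 MB/s total

print(f"at the ~150 MB/s stream limit: {best_case:.1f} h")
print(f"at the observed ~70 MB/s aggregate: {observed:.1f} h")
```

Even at the full stream cap a large multi-disk VM takes hours in a single stream, which is why spreading VMs across parallel jobs (separate streams) helps, while adding disks to one VM does not.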