My source folders are UNC paths, not C:\, and none of them contains files from the user folder (server is the username of the user that runs Duplicati), nor from the same disk, nor from the same host! (Both are different hosts than the one Duplicati is running on.)

In the restore I do not have C: at all! The first job has 2 UNC paths (to other machines; maybe you remember my thread about the restore GUI and 2 UNC paths, that is this job) and the second job has one UNC path to a photo folder, also on another machine.

Thanks for posting the job export - I think I may have misunderstood the error. This particular “hash mismatch” error only seems to come from one place in the code - when getting files from the destination and saving them locally as temp files. (Assuming I’m reading things correctly.)

So I think what’s being reported is that a dlist file (the list of source files that make up a specific backup version at the destination) has been downloaded into the temp folder, but the hash of the downloaded file doesn’t match what the local database was expecting, so Duplicati treats the file contents as suspect.
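If it helps to picture it, the check amounts to something like this (a minimal Python sketch of the idea, NOT Duplicati’s actual code - the base64-encoded SHA-256 format is my assumption based on Duplicati’s defaults):

```python
import base64
import hashlib

def verify_downloaded_volume(temp_path, expected_size, expected_hash):
    """Check a freshly downloaded remote volume against the local database."""
    with open(temp_path, "rb") as f:
        data = f.read()

    # The size check comes first - a mismatch here produces a different
    # ("file size") error than the hash mismatch you're seeing.
    if len(data) != expected_size:
        raise ValueError("unexpected file size")

    # Duplicati's default file hash is SHA-256, stored base64-encoded.
    actual_hash = base64.b64encode(hashlib.sha256(data).digest()).decode()
    if actual_hash != expected_hash:
        raise ValueError("hash mismatch: expected %s, got %s"
                         % (expected_hash, actual_hash))
```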

My guess is that in the logs just before the hash error you’ll see a message similar to “Downloaded xxx MB in yyy seconds, zzz MB/s” (it might also start with “Downloaded and decrypted…”). If so, then it looks like the file being downloaded is the right size (otherwise you’d be seeing a different error) but the resulting hash of the file doesn’t match for some reason.

Assuming all of that is correct, it might be time to pull @kenkendk or somebody more familiar with the hashing checks into the conversation…

Just out of curiosity, do things work as expected if you enable the --no-backend-verification parameter? (Note that this isn’t a fix so much as a check for narrowing down when/where the error is happening.)
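If you run the job from the command line, that would look roughly like this (the URL and folder are placeholders, not your actual configuration):

```
Duplicati.CommandLine.exe backup <storage-url> <source-folder> --no-backend-verification=true
```

In the web UI the same thing can be added as an advanced option on the job.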

Is there another log anywhere on disk, or can I change the log level somewhere to get more detailed information?

If you add --log-file=<path> and --log-level=profiling to your job then you should get a very detailed log saved to that path.
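For example (the log path here is just an illustration - put it wherever you have room, as profiling logs grow quickly):

```
--log-file=C:\temp\duplicati-job.log --log-level=profiling
```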

thommyX:

Since this error, the next 3 executions had no errors

Well that’s good (it worked) and bad (it’s intermittent / hard to diagnose). Since nothing changed at your end, I guess we’re left to assume hubiC fixed something or that something occasionally goes bad between them and you (aka flaky internet).

Sorry to hear the errors are back - but it’s good that you have the logs!

So at 19:34:08Z we can see the first of 5 downloads of a dblock (archive volume) file. Each attempt results in a different temp file name (as expected) as well as the same error: the hash of the downloaded file not matching the hash the local database expected.

As I think I mentioned before, the file SIZE is correct in all 5 downloads, otherwise we would have seen a file size error instead of a hash error. So we’re still stuck with the question of whether the database or the actual file is correct.
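One way to narrow that down yourself would be to fetch a copy of the offending dblock file from hubiC and hash it manually. Something like this should print the hash in the same format the error message uses (assuming the default SHA-256 / base64 setup - and note the file name below is just a placeholder):

```python
import base64
import hashlib
import sys

# Usage: python hashcheck.py duplicati-bXXXX.dblock.zip.aes
# Prints the base64-encoded SHA-256 of the file, which (with default
# settings) is the format Duplicati records in its local database.
with open(sys.argv[1], "rb") as f:
    digest = hashlib.sha256(f.read()).digest()
print(base64.b64encode(digest).decode())
```

If the printed value matches the hash the database expected, the stored file is fine and something is mangling it in transit; if it matches the other hash from the error instead, then the file at the destination itself no longer matches what the database recorded.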

I’d have to poke around, but I believe there is a parameter that tells Duplicati to not do the hash test (or to ignore the result), which would allow you to do a restore regardless of the error, at which point you could manually verify the restored files.

Unfortunately, I still have no idea what is causing this issue in the first place. I think @kenkendk is the expert on the database (and hashing) stuff so hopefully he’ll get a chance to see this and have some ideas.

I’d have to poke around, but I believe there is a parameter that tells Duplicati to not do the hash test (or to ignore the result), which would allow you to do a restore regardless of the error, at which point you could manually verify the restored files.

I can’t find this command-line option. Do you know the name of it?

thommyX:

But many problems with WebDAV (box.com), WebDAV (TeraCLOUD) and hubiC.

I have 3 Linux systems running Duplicati, and all of them are backing up via WebDAV to a QNAP NAS. All of them have more than one backup set defined. But only the home folder backup on my desktop machine is getting this hash mismatch error.