It’s possible @kenkendk or a user of one of the ~27k OneDrive for Business backups that ran in December might have some suggestions…

However, it should be noted that the first two weeks of 2018 have shown a ~50% drop in OD4B backups compared to the last 5 weeks of 2017, so it is possible something has changed in how things work for SOME users.

@Cloudbyte_Pony did you get this working? Here it is always blocking on the list/verify. Uploading is fine.

Same here. After spending some time figuring out how to connect (using an edu OneDrive account here), it connected and wrote the files, but it can’t read them, so it freezes at the “Verifying” step “forever”.

I found an open issue on GitHub, and I saw that @kenkendk added a “UserAgent” header fix to the canary version. Does this mean the bug is fixed, but OneDrive for Business (provided through Education, in my case) won’t work with the current Duplicati beta release?
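For anyone curious what a “UserAgent” header fix amounts to: some Office 365/SharePoint endpoints are reported to throttle or hang on requests that don’t identify themselves with a User-Agent header. Here’s a minimal sketch of the idea using Python’s standard library — the URL and agent string below are placeholders for illustration, not Duplicati’s actual values.

```python
# Sketch: always attach a User-Agent header to requests against an
# OD4B/SharePoint endpoint, since requests without one can reportedly
# be throttled or stall on some tenants.
import urllib.request

def build_od4b_request(url, user_agent="Duplicati/2.0 (sketch)"):
    """Return a urllib request that carries an explicit User-Agent header."""
    req = urllib.request.Request(url)
    # Without this header, listing calls may hang "forever" as described above.
    req.add_header("User-Agent", user_agent)
    return req

# Placeholder tenant URL, for illustration only:
req = build_od4b_request("https://contoso-my.sharepoint.com/_api/web")
print(req.get_header("User-agent"))
```

If that’s really all the canary change does, then yes — the fix would only help once it lands in a beta release.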

Edit: I’m on macOS and testing CloudMounter to mount OneDrive (for Business) as a network drive, so I can set up Duplicati to use it as a local folder (the folder location is somewhat hidden). It seems to be working fine; backup jobs finish quickly, mainly because there is no waiting on the upload stage: Duplicati just writes files to the local path and CloudMounter uploads them automatically. I’m just wondering how much additional local disk space this will consume, because I believe that when using a storage provider’s official API, Duplicati won’t write a new file until the last one has been uploaded, so it’s somewhat sequential. For now I’ll be testing/using OD4B this way as redundancy, but keeping the main backups on Google Drive just for safety.

Are you using a version of Duplicati that’s newer than the Jan. 18, 2018 commit with the “UserAgent” header change?

As for the local disk space impact when using CloudMounter, you might need to contact them directly. Their website says cloud files aren’t stored locally, but I didn’t find anything that covered what happens when the file STARTS locally.

The best case (disk space wise) would be that the local file is MOVED to the cloud, but it’s unclear how long it takes for that to happen after the file is done being written.

So worst case scenario is local disk usage would be the full backup size (as reported in Duplicati) while best case would be temporary local disk usage of maybe a few dblock (archive Volume size) files eventually going to zero.
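To put rough numbers on those two cases — these figures are made-up examples, not from this thread — the arithmetic looks like this:

```python
# Worst case: the entire backup sits on local disk until CloudMounter
# finishes uploading. Best case: only a few in-flight dblock volumes
# exist locally at any moment. All numbers below are illustrative.
backup_size_gb = 500       # total backup size as reported by Duplicati
dblock_mb = 50             # Duplicati's default archive volume size
in_flight_volumes = 4      # assumed number of volumes awaiting upload

worst_case_gb = backup_size_gb
best_case_gb = in_flight_volumes * dblock_mb / 1024

print(f"worst case: {worst_case_gb} GB, best case: {best_case_gb:.2f} GB")
```

So the difference between the two cases can be three orders of magnitude, which is why it matters how quickly CloudMounter moves (rather than copies) files to the cloud.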

Note that CloudMounter mentions encrypting the files on their way to the cloud. Be careful with this setting as if you ever decide to switch back to a direct WebDAV or API connection you may end up getting the CloudMounter encrypted files, which Duplicati wouldn’t be able to use (unless you go back to using CloudMounter).

JonMikelV:

Are you using a version of Duplicati that’s newer than the Jan. 18, 2018 commit with the “UserAgent” header change?

I’m using the latest beta (You are currently running Duplicati - 2.0.2.1_beta_2017-08-01).

JonMikelV:

So worst case scenario is local disk usage would be the full backup size (as reported in Duplicati) while best case would be temporary local disk usage of maybe a few dblock (archive Volume size) files eventually going to zero.

Exactly. So I couldn’t, for example, set up a backup job for my 1TB NAS using a MacBook Air; Duplicati would start processing all the blocks and just keep sending them to the “network folder”, and the MacBook would eventually run out of space. Perhaps if there were some special/advanced Duplicati setting to cap the size of each job run, I could use that as a “quick hack”.

JonMikelV:

Note that CloudMounter mentions encrypting the files on their way to the cloud. Be careful with this setting as if you ever decide to switch back to a direct WebDAV or API connection you may end up getting the CloudMounter encrypted files, which Duplicati wouldn’t be able to use (unless you go back to using CloudMounter).