I've been struggling with this for days. I've rebooted multiple times. I've disabled all startup programs so nothing is accessing the cloud drive. I've transferred over small files to force all content to upload, down to the last kilobytes, and it still won't detach. I would like to safely detach so I can reconnect the drive to a new computer. I already had to wait days for the upload to finish, and now this is taking days too. PLEASE HELP
EDIT/UPDATE: still can't detach the cloud drive after another 24 hours of trying.
I've restarted the computer. I've reauthorized the drive.
AND IT STILL WON'T DETACH!!
UGH !!! This is so frustrating.

Hello!
I am in the midst of moving servers and I have configured my new target according to the attached image. The settings are quite clear and intuitive. My intention is to fill up the drives sequentially, not in parallel. However, as the picture shows, the drives are filling in parallel. It feels like the system is putting the files on whichever drive has the most absolute free space.
There are no strange file placement rules in place.
Any help is much appreciated!
UPDATE
I activated my license, specified another drive as an SSD, and rebooted several times (due to something else), and the software now seems to adhere to my requested placement order. I won't touch any settings out of fear (it's currently still filling drive #1). If drive #2 starts to fill next, I will assume it has fixed itself, update this post, and start fiddling with the settings again.
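For anyone else puzzling over the difference, the two behaviours described above can be sketched roughly like this. This is a hypothetical illustration with made-up names, not DrivePool's actual code:

```python
def pick_drive_most_free(drives):
    # Balancing-style choice: whichever drive currently has the
    # most absolute free space (the "parallel fill" behaviour observed).
    return max(drives, key=lambda d: d["free"])

def pick_drive_ordered(drives, needed):
    # Ordered placement: fill drives in the configured order,
    # moving to the next drive only when the current one can't fit the file.
    for d in drives:
        if d["free"] >= needed:
            return d
    raise RuntimeError("no drive has enough free space")
```

With ordered placement, drive #2 should only start receiving files once drive #1 can no longer fit them.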

So I just installed the Scanner after using DrivePool for about 15 days. However, with both of my disks I get "The on-disk SMART check is not accessible on any of your disks". Also, this: http://prntscr.com/7a7lgz. However, smartctl does give me SMART data. Any ideas on how to fix this? I'm currently using an HP ProLiant Gen8 MicroServer.
I tried unsafe DirectIO, but that did not seem to work. Any help would be much appreciated.

Hello Everyone,
I just started testing CloudDrive and have run into a problem copying files to Google Drive.
I can copy 1 or 2 GB video files to the CloudDrive folder with no problems.
I run into trouble when I copy five or ten 2 GB files: after the first couple of files are copied to the folder, the copy process slows down. After a while, the copy aborts, and OS X displays an error that there was a read/write error.
In CloudDrive's logs I noticed that Google Drive reported a throttling error:
22:26:22.1: Warning: 0 : [ApiGoogleDrive:14] Google Drive returned error: [unable to parse]
22:26:22.1: Warning: 0 : [ApiHttp:14] HTTP protocol exception (Code=ServiceUnavailable).
22:26:22.1: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,327ms and retrying.
22:26:23.5: Warning: 0 : [IoManager:14] Error performing I/O operation on provider. Retrying. The request was aborted: The request was canceled.
I thought write data is cached to the local drive first and then slowly uploaded to the cloud? Why would there be a throttling error when many large files are copied?
Thanks.
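For what it's worth, the "waiting 1,327ms and retrying" line in the log suggests the software is already backing off when the server throttles it. The general pattern looks roughly like this. This is a minimal sketch with made-up names, not CloudDrive's actual implementation:

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for an HTTP 503 'service unavailable' response."""

def upload_with_backoff(do_upload, max_retries=5, base_delay=1.0):
    # Retry a chunk upload, waiting exponentially longer (plus a little
    # jitter) each time the server throttles us, before giving up.
    for attempt in range(max_retries):
        try:
            return do_upload()
        except ThrottledError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

If the provider keeps throttling for longer than the retries allow, the I/O operation can still ultimately fail, which would match the aborted copy.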

Running DrivePool 2.1.558 on Windows Server 2012 R2.
Don't own Scanner.
20TB DrivePool, 6 Drives, 6 SATA ports (all full)
One (1) of these six drives, a 3TB Toshiba, is in the process of failing BADLY.
Symptoms: whenever I attempt to copy data from the pool, ONLY files being read off the failing disk slow to a crawl of around 50-80 KB… then the drive stops and chugs… then reads another 40-80 KB… then stalls for ages while the HDD churns. I'm assuming the drive's built-in ECC algorithms are working overtime to recover the bits and the SMART system is reallocating sectors (the drive is nearly 100% full… maybe 60-70 GB free out of 3 TB). The files I'm copying off DO eventually get read, though, and so far the files I've copied off the pool and spot-checked seem to be error free, but it can take up to 1-2 hours to read back a single 1 GB file from the damaged sectors!
Anyways, it's OBVIOUS this 3TB Toshiba will die any SECOND now.
So my options:
Option #1
Buy a PCIe SATA III controller card. Buy a new 3 TB+ disk. Add the new controller & disk to the server, then use the built-in "remove drive from pool" function to empty all data off the failing Toshiba HDD.
Q1: Will the "remove drive from pool" function time out / error out? As noted above, I HAVE been able to successfully copy files off this damaged HDD using plain old File Explorer… it just takes a LONG time. How patient is the DrivePool evacuation function? Just as patient as File Explorer? I know there's a "force removal of damaged disk" checkbox, but frankly I'm wary of that option. Nearly all this data is multi-part .RAR files without parity (music, movies, audiobooks). If a single file from a multi-part .RAR'd folder gets skipped because DrivePool decides it's taking too long to read, then I effectively lose 100% of the data in that folder, even if 99% of those .RARs are safely residing on the other 5 functioning HDDs in the pool.
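As a sanity check against exactly that skipped-part scenario, a quick script can verify that a multi-part set has no gaps before trusting the evacuated copy. This is a hypothetical sketch that assumes the common `name.partNN.rar` naming pattern:

```python
import re
from pathlib import Path

def missing_rar_parts(folder):
    # Collect the part numbers present in a folder of name.partNN.rar
    # files and report any gaps in the sequence (e.g. part03 missing).
    parts = sorted(
        int(m.group(1))
        for f in Path(folder).iterdir()
        if (m := re.search(r"\.part(\d+)\.rar$", f.name, re.IGNORECASE))
    )
    if not parts:
        return []
    return [n for n in range(1, parts[-1] + 1) if n not in parts]
```

Running this against each copied folder would flag incomplete sets while the source drive can still (slowly) be re-read.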
Q2: Does the "remove drive from pool" function actually DELETE all the contents off the removed HDD after the removal process completes successfully, or even if it errors out? What if the process finishes but with errors? Does DrivePool delete all files off the removed drive? If so, that's bad… it gives me no opportunity to use a data recovery tool like SpinRite after the fact to recover those files.
Option #2
Physically remove the failing Toshiba 3 TB drive from the server. Place the failing drive into a 2nd PC running Windows 10 x64. Place a new 3 TB+ replacement drive in the 2nd PC and then manually copy the contents to the new HDD using File Explorer. Finally, move the new disk back to the server and run some command to reintroduce the "new" HDD containing the old/existing files to DrivePool?
Q1. There is a total file path length limit in Windows (260 characters, if I remember right), but somehow you guys seem to get around this issue inside DrivePool. I've encountered this problem when copying highly nested file trees using File Explorer, which my DrivePool contains (i.e. Media\Video\HighDef\TV\ShowName\Season\xxxxx.xxxxxx.xxxxxx.xxxxxx.xxxxx\yyyyy.yyyyyy.yyyyyy.yyyyyyy.yyyyyy.part99.rar). Am I going to get thousands of "filename/file path too long, please rename the file or shorten the file path" errors when I attempt this manual file copy on Win10 x64? Should I use a disk cloning tool like Macrium Reflect Free Edition to clone the disk instead of a manual file copy with File Explorer, to get around such issues?
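On the path-length point: Windows' classic limit is MAX_PATH (260 characters), and tools that copy beyond it (robocopy, for instance) rely on the `\\?\` extended-length path prefix. A minimal sketch of that workaround, with made-up function names, would look like this:

```python
import os
import shutil

def to_extended_path(path):
    # The \\?\ prefix tells Windows APIs to skip MAX_PATH parsing,
    # allowing paths well beyond 260 characters. It requires an
    # absolute path. (No effect on non-Windows systems.)
    if path.startswith("\\\\?\\"):
        return path
    return "\\\\?\\" + os.path.abspath(path)

def copy_long(src, dst):
    # Copy a single file whose full path may exceed MAX_PATH (Windows only).
    shutil.copy2(to_extended_path(src), to_extended_path(dst))
```

A clone with Macrium Reflect sidesteps the issue entirely, since it copies sectors rather than walking file paths.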
Q2. If I'm successful copying my disk over, I know there's a manual command to force DrivePool to notice the replacement disk and reindex/remeasure/re-integrate the data back into the pool. Is there a URL for this FAQ?
Sorry for the really long post.