It has storage limits, and with many collaborators on many projects, the paid 50 GiB is used up fairly quickly.
–
Henrik Jan 31 '11 at 19:37


There may be unforeseen consequences in allowing companies like Dropbox to store our personal data. Thankfully, there is a project underway to make an OS that can run your own personal cloud on a plug-style ARM-powered computer.
–
daithib8 May 19 '11 at 13:56


Ironic that this isn't as simple as sudo apt-get install ubuntu-one-server.
–
Prateek Jan 28 '12 at 11:08

Unfortunately it uses git as a DVCS backend, which is not suited to ~1 TB of binary data, since modifications to binary files will bloat server-side storage. But aside from that it looks promising.
–
math Feb 3 '12 at 21:30

c. Pros: Windows x64 client, mature, AD integration with ACLs, features no other project has started to implement. I think this might be a good starting point. Cons: Novell might not use its public svn repo as the primary repo and might only do code drops; I'm not certain about this, though. It might be too coupled to openSUSE to install easily on Ubuntu. Its algorithms are worth checking out.

scp/rcp - deprecated in favor of rsync

DRBD - a block-device mirroring tool for distributed RAID-1, i.e. a server variant of Dropbox. I haven't checked out its source code yet, but it's Linux-only. The actual algorithm would probably be easy to combine with the source code in my musings below this software listing.

a. Versioning: internal message format over LAN/WAN

b. State: seems mature enough

c. Pros: stable enough for Linux. Cons: no other operating systems are supported

Right now I'm investigating improving compile times on a virtualized Windows 7, where compile times on Windows 7 on bare metal are 40 s, but virtualized approximately 3 min 20 s. I'm thinking of writing an ioctl driver that acts as a write-through cache and looks like a RAM disk for selected folders on NTFS.

Using the above software, I think a week of full-time development by 2-3 people, combining the pieces listed above, would produce a usable alpha that doesn't lose your files.

On my system, then, the general idea would be:

Mount a virtual drive \\?\Volume{GUID}, which is the RAM disk and RW cache. The software creating this virtual drive takes two input parameters (that are vital):

a. The target folder; this is the SMB folder, so I let the operating system's network stack handle the actual IO. In my case this is in turn the VMware shared folder, which itself targets an ext4 drive, but it could just as easily be your file server using Samba/SMB.

b. The path of the folder to be mounted, e.g. C:\ramdisk
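The two vital inputs above could be captured in a small configuration object. This is only a sketch; RamDiskConfig and its field names are my own invention, not from any existing tool:

```python
from dataclasses import dataclass
from pathlib import PureWindowsPath

@dataclass(frozen=True)
class RamDiskConfig:
    """Hypothetical config for the virtual-drive service (names are illustrative)."""
    target_folder: str  # backing SMB/network folder, e.g. r"\\server\share\project"
    mount_path: str     # local NTFS folder where the RAM disk is exposed

    def validate(self) -> None:
        # A UNC path starts with two backslashes.
        if not self.target_folder.startswith("\\\\"):
            raise ValueError("target_folder should be a UNC path")
        # The mount point should be an absolute Windows path with a drive letter.
        if not PureWindowsPath(self.mount_path).drive:
            raise ValueError("mount_path should be an absolute Windows path")

cfg = RamDiskConfig(target_folder=r"\\server\share\project", mount_path=r"C:\ramdisk")
cfg.validate()
```

The real service would hand these two paths to the driver; the validation shown is just the obvious sanity check on their shapes.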

The code for creating virtual volumes could be taken from TrueCrypt's source, in /Driver/DriverFilter.c (among other files).

The drive uses SMB/the VMware network protocol to fetch data when it starts; it fetches asynchronously from the network at a low task priority and fills its cache. It could use a simple compacting algorithm and a single thread that uses message-box-style continuation passing to get great performance. On Windows it could use the normal async IO calls, and on Linux it could use an epoll/inotify implementation and take code from nginx.
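The prefetch step described above might be sketched like this. Everything here is illustrative: fetch_chunk stands in for the real SMB read, the cache is a plain dict, and the single worker consuming a queue is the message-passing style mentioned above:

```python
import queue
import threading

def fetch_chunk(path: str, index: int) -> bytes:
    # Stand-in for the real network read over SMB; here we fabricate data.
    return bytes([index % 256]) * 8

def prefetch(paths, cache, done_event):
    """Single worker thread that fills the cache asynchronously.

    Work items arrive on a queue, mimicking the message-passing design
    sketched in the text; a real driver would also lower thread priority.
    """
    work = queue.Queue()
    for path in paths:
        work.put((path, 0))  # start with chunk 0 of each file
    def worker():
        while not work.empty():
            path, idx = work.get()
            cache[(path, idx)] = fetch_chunk(path, idx)
        done_event.set()
    # Daemon thread so it never blocks shutdown; it runs in the background
    # while reads are served from whatever is already cached.
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

cache = {}
done = threading.Event()
prefetch(["C:/ramdisk/a.obj", "C:/ramdisk/b.obj"], cache, done)
done.wait(timeout=5)
```

On Windows the worker loop would be replaced by overlapped IO completions, and on Linux by epoll/inotify events, but the cache-filling shape stays the same.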

My service, which provides the RAM disk, mounts the unnamed RAM-disk drive as an NTFS folder. All programs can continue writing to C:\ramdisk, or whatever I call it.

The async copy from the network is still going on. With a read rate of approximately 100 MiB/s and a 2 GiB RAM disk, it would take about 20.5 s to read all data.
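The 20.5 s figure follows directly from the stated numbers:

```python
ramdisk_gib = 2        # RAM-disk size from the text
read_rate_mib_s = 100  # assumed sequential network read rate

# 2 GiB = 2048 MiB; at 100 MiB/s that is 20.48 s, i.e. roughly 20.5 s.
seconds = (ramdisk_gib * 1024) / read_rate_mib_s
print(seconds)
```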

Each call to read would perform an in-CPU calculation of the index into a fixed-size array (n:ulong GiB max). It would require conflict resolution, though, or read-write locks. If we implemented a conflict-resolution algorithm like those available through Microsoft Sync, we could pass each conflicting chunk as a message to another conflict-resolution process. Dropbox solves it by creating a new file and naming it "PrevFileName Username's Conflicted Copy (yyyy-MM-dd).ext". Perhaps this could be altered through a small widget, if one is compiling against that single source -- the widget would detect outstanding changes as messages/events and choose the conflict-resolution protocol. As such, when programming against a folder in exclusive mode, the Windows VM could set the widget to 'exclusive'.
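A sketch of the two calculations mentioned here: mapping a byte offset to an index in the fixed-size chunk array, and the Dropbox-style conflicted-copy rename (the name format is as quoted above; the 4 MiB chunk size is my assumption, not a measured optimum):

```python
import datetime
import os

CHUNK_SIZE = 4 * 1024 * 1024  # assumed chunk granularity

def chunk_index(offset: int) -> int:
    """In-CPU calculation of the index into the fixed-size chunk array."""
    return offset // CHUNK_SIZE

def conflicted_copy_name(path: str, username: str, when: datetime.date) -> str:
    """Dropbox-style rename: "PrevFileName Username's Conflicted Copy (yyyy-MM-dd).ext"."""
    base, ext = os.path.splitext(path)
    return f"{base} {username}'s Conflicted Copy ({when:%Y-%m-%d}){ext}"

print(chunk_index(10 * 1024 * 1024))  # offset 10 MiB falls in chunk 2
print(conflicted_copy_name("report.docx", "Henrik", datetime.date(2011, 1, 31)))
```

The index calculation is what each read call would do; the rename is the fallback when the conflict-resolution widget is not set to exclusive mode.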

This approach would have these pros:

It would be non-blocking / async

It would assume, but not require, that one computer does most of the writing to the files.

It would work for arbitrarily large files

It would work on *nix and Windows by tying together the mentioned projects.

It would work when high read-performance is needed (i.e. the files are physically located on disk)

When conflicting events occur, one could provide a user-interface app that lets the user write/download plugins that act sanely for different sorts of events -- i.e. different sorts of files. E.g. a text file could be brought up in Kompare/WinDiff, while a binary would be duplicated and saved as another file.
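The per-file-type dispatch could look roughly like this. The handler names are placeholders for the real plugins (which would launch Kompare/WinDiff or copy the file), not an existing API:

```python
import os

def open_in_diff_tool(path: str) -> str:
    # Placeholder: a real plugin would launch Kompare/WinDiff on the two versions.
    return f"diff:{path}"

def duplicate_as_copy(path: str) -> str:
    # Placeholder: a real plugin would save the conflicting binary under a new name.
    return f"copy:{path}"

# Extension -> conflict handler; unknown file types fall back to duplication.
HANDLERS = {".txt": open_in_diff_tool, ".md": open_in_diff_tool}

def resolve_conflict(path: str) -> str:
    ext = os.path.splitext(path)[1].lower()
    return HANDLERS.get(ext, duplicate_as_copy)(path)

print(resolve_conflict("notes.txt"))   # text file -> diff tool
print(resolve_conflict("image.png"))   # binary -> duplicated copy
```

User-installed plugins would simply register more entries in the handler table.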

I don't think this is quite what you are looking for, but it depends on your intended usage.

CrashPlan is backup software with a related online backup hosting service, but what's different is that its software has a mode that lets you back up your data over the internet (or LAN) to another PC running the software.

This means the destination doesn't have to be in the cloud. It's not quite like dropbox in that it's more about backing up rather than syncing and accessing files from everywhere, but if it's just backups you want then it works well. If you want to access the backed up files from the other PC I think you can do a "local restore" but it's not something I've tried.

The basic software package is free and supports the "backup to another computer" mode, but only does scheduled backups. There is a paid "pro" version that does real-time syncing rather than just scheduled backups. (Cloud storage is also an optional pay-per-month extra.)

-1 I don't think this is the best solution and hence not the answer.
–
Henrik Jan 31 '11 at 20:41

-1 this is an rsync server, not a Dropbox-like solution...
–
Arman Aug 7 '11 at 19:53


This solution is limited in helpfulness because it simply links to another article, creating a risk of linkrot. This answer should be edited to be more substantial, while simply citing the given link as a source.
–
Christopher Kyle Horton Nov 13 '11 at 21:27

Whilst this may theoretically answer the question, it would be preferable to include the essential parts of the answer here, and provide the link for reference.
–
Tim Mar 21 at 15:52

Are you saying that within a GlusterFS cluster, files will be synchronized while the clients are connected to others, but the filesystem will still be available locally to a client that is disconnected from the others? I'm thinking of using this to synchronize between my laptop and server.
–
Ryan Thompson Sep 18 '10 at 19:46

Yes. I haven't finished my testing yet but it appears to work without any problems.
–
Richard Holloway Sep 22 '10 at 15:11

This doesn't work on Windows, so it's not "dropbox compliant".
–
Henrik Jan 31 '11 at 20:42

No one's mentioned BitTorrent Sync? It runs on anything - Ubuntu, Windows, many common smartphone OSes, Raspberry Pi... you name it, it probably works, and as a regular user. Encrypted transfers, files aren't stored in the cloud (though I think BitTorrent runs the tracker for it), reasonably fast, you can selectively share folders, and there's almost no complication involved; you just need to copy and paste a key to the other system.

I'm keeping my eye on AeroFS. It looks like it could be a Dropbox-like service where storage in the cloud is optional. I don't know if/when they will implement mobile support, and I guess that would require syncing those files to the cloud as well. I'm primarily interested in a fairly painless syncing solution between Windows, Mac, and Linux computers.

I use SSHFS to mount directories on my server as local directories on my desktop and laptop. All file changes are saved directly onto the server. Unlike Dropbox, though, the files are not stored locally on your client machines. I think this is great because you don't have to worry about syncing and versioning, but it's not ideal for offline use or very large files.

It's very direct and simple, and I find it to be the best solution. The only thing I don't use it for is large media like pictures and movies because all files get accessed over the network. Those I sync with Rsync.

While interesting alternatives are already listed here and this is an older question, I am convinced that this topic is not outdated and - on the contrary - is gaining more and more importance due to recent privacy-breaching events.

I therefore want to share my own experience.
My current solution for a self-hosted cloud-like environment is Seafile.

There have been no functional issues for me so far (I've been using it for a few weeks now).

The feature set is basic (compared to ownCloud, e.g.), but I emphasize that everything works here!

There is no direct proxy support (at least for the Linux client - and the web interface!). Note: the web interface works, but downloading files via the web interface does not work behind a proxy; I don't know if this is possible somehow.