shovas writes: As a developer and sysadmin, the benefits of revision control systems are clear. It only seems natural that a simpler, transparent approach to versioning files on a regular file system would be a net win. There's ext3cow and Wayback FS, and possibly some fuse-based projects, but each is either dead, immature or just not applicable. So what happened to the promise of versioning file systems? Hasn't everyone lost a file to a bad rm command? Hasn't everyone wished they could see a revision of a file in the past? What's the hold up?

It really seems as though the market has moved toward snapshotting file systems (think ZFS and the like). I guess it's easier to deal with technically at the block level, plus easier for end users to understand and use (ever tried to get the hoi polloi to use a decent revision control system?). The end result is nearly the same for most practical applications, and it ends up being faster. I have no citation to give you, but the fact that there *is* no mature versioning file system is my evidence.

Must be hard to write an efficient diffing algorithm that works for most file types

What has a diff algorithm got to do with a versioning file system? Every time a file is saved, it should be written as a new version. Diff would be used to compare two versions of a file, not to decide whether to make a new version.
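To make the distinction concrete, here is a toy sketch in Python of that model: every save writes a brand-new version, and diff is only ever applied afterward to compare two versions. The names (`save_version`, `diff_versions`, the `name;N` naming scheme) are made up for illustration, not from any real versioning file system.

```python
import difflib
from pathlib import Path

def save_version(store: Path, name: str, data: str) -> Path:
    """Write data as a brand-new version; never overwrite an old one."""
    store.mkdir(parents=True, exist_ok=True)
    existing = list(store.glob(f"{name};*"))
    next_ver = len(existing) + 1          # simple monotonic version counter
    path = store / f"{name};{next_ver}"
    path.write_text(data)
    return path

def diff_versions(store: Path, name: str, a: int, b: int) -> str:
    """Diff only compares versions after the fact; it plays no part in saving."""
    old = (store / f"{name};{a}").read_text().splitlines()
    new = (store / f"{name};{b}").read_text().splitlines()
    return "\n".join(difflib.unified_diff(
        old, new, f"{name};{a}", f"{name};{b}", lineterm=""))
```

Note that `save_version` never looks at the file's contents: whether the new version is one byte or one gigabyte different, it is stored whole. Any space saving from diffing old versions would be an optimization underneath, invisible to this interface.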

An ancient approach was used in DEC's Files-11 system back in the '80s, where each file name had a version number attached (such as FILE.EXT;158). Files could be specified explicitly including the version number, but if a file was specified without a version number, then the highest-numbered version was used by default.
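That lookup rule is easy to sketch. Here is a hypothetical Python illustration (the `resolve` function and the dict-of-dicts layout are mine, not Files-11 internals): an explicit `;version` picks that exact version, while a bare name falls back to the highest one.

```python
def resolve(files: dict, spec: str) -> str:
    """Resolve a Files-11 style name like 'LOGIN.COM;3'.
    files maps each name to a {version_number: contents} dict.
    An explicit ';N' selects that version; a bare name selects
    the highest-numbered version."""
    if ";" in spec:
        name, _, ver = spec.rpartition(";")
        return files[name][int(ver)]
    versions = files[spec]
    return versions[max(versions)]

# Hypothetical store with two saved versions of one file:
files = {"LOGIN.COM": {1: "old contents", 2: "new contents"}}
```

With that store, `resolve(files, "LOGIN.COM")` yields the newest contents, while `resolve(files, "LOGIN.COM;1")` still reaches the old ones.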

File versioning did not die with RSX-11. It continued with DEC's (now HP) RMS file system used in their OpenVMS OS, which is still very much in use on DEC Alpha and Intel Itanium CPUs. http://en.wikipedia.org/wiki/OpenVMS [wikipedia.org]