SDFS is designed to support the needs of virtual environments including the VMware, Xen, and KVM hypervisors.

The filesystem can deduplicate inline (at a line speed of 150Mbps or greater) or periodically, as needed, and this mode can be switched on the fly.

File- and folder-level snapshots are also supported.

With support for deduplication at 4K block sizes, virtual machine data can be deduplicated and stored locally, across multiple nodes, or in the cloud.
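SDFS's internal hashing and on-disk formats aren't detailed here, but the core idea of fixed-size-block deduplication at a 4K chunk size can be sketched as follows (the helper names and the use of SHA-256 are illustrative assumptions, not SDFS's actual implementation):

```python
import hashlib

CHUNK_SIZE = 4 * 1024  # 4K blocks, the smallest dedup unit mentioned in the article

def dedup_chunks(data: bytes, store: dict) -> list:
    """Split data into fixed-size chunks, store each unique chunk once,
    and return the list of chunk hashes needed to reconstruct the data."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        # Store the chunk only if this hash has not been seen before.
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return recipe

def reassemble(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its list of chunk hashes."""
    return b"".join(store[digest] for digest in recipe)
```

Because identical 4K blocks hash to the same digest, repeated blocks (common across virtual machine images) are stored only once, whether the chunk store lives locally, on another node, or in the cloud.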

It supports roughly 3TB of storage per gigabyte of memory.

A distributed architecture was a design goal: SDFS scales to eight petabytes of capacity across 256 storage engines, each of which can store up to 32TB of deduplicated data.
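The stated capacity figures are internally consistent, as a quick check shows (variable names are illustrative, and 1 PB is taken as 1024 TB):

```python
# Simple arithmetic verifying the article's scalability claim.
ENGINES = 256
TB_PER_ENGINE = 32                    # deduplicated TB per storage engine
TOTAL_TB = ENGINES * TB_PER_ENGINE    # 8192 TB in total
TOTAL_PB = TOTAL_TB // 1024           # 8 PB, matching the stated eight petabytes
```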

Each volume can be up to 8 exabytes, and the number of files is limited by the underlying file system.

The requirements for Opendedup are a 64-bit Linux distribution (it’s tested and developed on Ubuntu), Fuse 2.8 or greater, 2 GB of memory and Java 7.

Silverberg designed Opendedup to run in user space and be object-based because it would be platform-independent, have a faster development cycle, be easier to scale and cluster, and provide flexibility for integrating with other user-space services like Amazon S3.

There is also the opportunity to leverage file system technologies like replication and snapshotting.

The latest release of SDFS, version 0.8.8, adds better I/O performance, scheduling of filesystem tasks, and a fix for a data corruption issue when removing unused deduplicated chunks.

The maximum file size is currently limited to 250GB with a 4K chunk size.

Opendedup’s architecture consists of an SDFS volume (one dedup file engine plus one Fuse-based file system); the dedup file engine, which manages file-level activities; the Fuse-based file system; and the dedup storage engine, the server-side service that stores and retrieves chunks of deduplicated data.
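The split between the file engine and the storage engine can be sketched in miniature (class names, SHA-256 hashing, and the in-memory chunk store are all illustrative assumptions, not SDFS's actual interfaces):

```python
import hashlib

class DedupStorageEngine:
    """Server-side role: stores and retrieves chunks of deduplicated data."""
    def __init__(self):
        self._chunks = {}  # content hash -> chunk bytes

    def put(self, chunk: bytes) -> str:
        digest = hashlib.sha256(chunk).hexdigest()
        self._chunks.setdefault(digest, chunk)  # store each unique chunk once
        return digest

    def get(self, digest: str) -> bytes:
        return self._chunks[digest]

class DedupFileEngine:
    """File-level role: maps each file name to its list of chunk hashes."""
    def __init__(self, engine: DedupStorageEngine, chunk_size: int = 4096):
        self.engine = engine
        self.chunk_size = chunk_size
        self.files = {}  # file name -> list of chunk hashes

    def write(self, name: str, data: bytes) -> None:
        self.files[name] = [
            self.engine.put(data[i:i + self.chunk_size])
            for i in range(0, len(data), self.chunk_size)
        ]

    def read(self, name: str) -> bytes:
        return b"".join(self.engine.get(d) for d in self.files[name])
```

In this toy model, two files with identical blocks share chunks in the storage engine; in SDFS the same separation lets storage engines run as networked services while the Fuse layer presents an ordinary filesystem.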

SDFS is licensed under the GPLv2. Windows support and block-level replication are on the Opendedup roadmap.