These requirements arose from two major changes in storage systems and their usage: the growth in storage size (large arrays of multi-terabyte drives are now fairly common) and the need for continual reliability. As a result, the file system needs to be self-repairing (so that disk checking does not become impractically slow or disruptive) and to provide abstraction or virtualization between physical disks and logical volumes.

ReFS was initially added to Windows Server 2012 only, with the aim of gradually migrating it to consumer systems in future versions (although enthusiasts quickly developed modifications to enable it on consumer systems). The initial versions removed some NTFS features, such as quota systems and extended attributes, causing concern among onlookers. Some of these features were re-implemented in later versions of ReFS.

In its early versions (2012–2013), ReFS performed similarly to, or slightly faster than, NTFS in most tests,[4] but was far slower when full integrity checking was enabled, a result attributed to the relative newness of ReFS.[5][6] Concerns were also raised over Storage Spaces, the storage system designed to underpin ReFS, which can fail in ways that prevent ReFS itself from recovering.[7][8][9]

ReFS uses B+ trees for all on-disk structures, including all metadata and file data.[10][11] Metadata and file data are organized into tables similar to those of a relational database. The file size, the number of files in a folder, the total volume size, and the number of folders in a volume are each limited by 64-bit numbers; as a result, ReFS supports a maximum file size of 16 exabytes, a maximum of 18.4 × 10^18 directories, and a maximum volume size of 1 yottabyte (with 64 KB clusters), which allows large scalability with no practical limits on file and directory size (hardware restrictions still apply). Free space is counted by a hierarchical allocator that comprises three separate tables for large, medium, and small chunks. Data scrubbing can optionally be enabled.[11]
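For illustration, the quoted limits can be checked with simple arithmetic, assuming binary (power-of-two) units; the following Python sketch is not ReFS code, just a back-of-the-envelope calculation based on the 64-bit counters and 64 KB cluster size mentioned above.

```python
# Back-of-the-envelope check of the quoted limits (illustrative only),
# assuming binary (power-of-two) units.
KIB = 2 ** 10
CLUSTER = 64 * KIB            # 64 KB cluster size mentioned above

max_count = 2 ** 64           # what a 64-bit counter can address

# Maximum file size: 2**64 bytes = 16 EiB ("16 exabytes" above).
print(max_count / 2 ** 60)    # 16.0 (exbibytes)

# Maximum number of directories: 2**64 ≈ 18.4 × 10**18.
print(max_count / 10 ** 18)   # ≈ 18.45

# Maximum volume size: 2**64 clusters × 64 KiB = 2**80 bytes = 1 YiB
# ("1 yottabyte" above).
print(max_count * CLUSTER == 2 ** 80)   # True
```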

Built-in resilience

ReFS employs an allocation-on-write update strategy for metadata,[10] which allocates new chunks for every update transaction and uses large IO batches. All ReFS metadata has built-in 64-bit checksums, which are stored independently. File data can have an optional checksum in a separate "integrity stream", in which case the file update strategy also implements allocation-on-write; this is controlled by a new "integrity" attribute applicable to both files and directories. If file data or metadata nevertheless becomes corrupt, the affected file can be deleted without taking the whole volume offline for maintenance, and then restored from a backup. Because of this built-in resilience, administrators do not need to periodically run error-checking tools such as CHKDSK when using ReFS.
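The allocation-on-write idea can be illustrated with a minimal, hypothetical sketch: rather than overwriting data in place, each update writes to newly allocated space and records a checksum, and only then switches the live pointer, so a detected mismatch never silently replaces intact data. The Python below is a conceptual model only, with a 32-bit CRC standing in for the 64-bit checksums ReFS actually uses; it does not reflect ReFS's on-disk format.

```python
# Conceptual sketch of allocation-on-write with checksums (not ReFS code).
import zlib

class Store:
    def __init__(self):
        self.blocks = {}    # block_id -> bytes (simulated disk blocks)
        self.table = {}     # key -> (block_id, checksum): the "live" pointers
        self.next_id = 0

    def _allocate(self, data: bytes) -> int:
        """Write data to a newly allocated block; never overwrite in place."""
        block_id = self.next_id
        self.next_id += 1
        self.blocks[block_id] = data
        return block_id

    def update(self, key: str, data: bytes) -> None:
        """Allocate a new block first, then swap the pointer and checksum."""
        block_id = self._allocate(data)
        self.table[key] = (block_id, zlib.crc32(data))  # crc32 stands in for a 64-bit checksum

    def read(self, key: str) -> bytes:
        block_id, checksum = self.table[key]
        data = self.blocks[block_id]
        if zlib.crc32(data) != checksum:
            raise IOError(f"checksum mismatch for {key}: corruption detected")
        return data

store = Store()
store.update("file.txt", b"version 1")
store.update("file.txt", b"version 2")   # the old block is left untouched
print(store.read("file.txt"))            # b'version 2'
```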

In 2013, Windows 8.1 (64-bit version only) became the first client operating system to provide some support for ReFS. In Windows 8.1 and Server 2012 R2, ReFS regained support for alternate data streams and gained automatic correction of corruption when integrity streams are used on parity spaces.[15] ReFS had initially been unsuitable for Microsoft SQL Server instance allocation due to the absence of alternate data streams.[16]

Using ReFS on top of thin-provisioned Storage Spaces (according to a 2012 pre-release article) can fail in a non-graceful manner, in which the volume becomes inaccessible or unmanageable without warning.[7] This can happen, for example, if the physical disks underlying a storage space become too full. SmallNetBuilder comments that, in such cases, recovery could be "prohibitive", as a "breakthrough in theory" would be needed to identify storage space layouts and recover them, which is required before any ReFS recovery of file system contents can begin; it therefore recommends keeping backups as well.[7]

Even when Storage Spaces is not thinly provisioned, ReFS may still be unable to dependably correct all file errors in some situations, because Storage Spaces operates on blocks rather than files; if part of the storage space is not working correctly, some files may therefore lack the blocks or recovery data they need. As a result, adding and removing disks or data may be impaired, and converting between redundancy levels becomes difficult or impossible.[8]

Because ReFS was designed not to fail, no repair tools are provided for when failure does occur. Third-party tools depend on reverse engineering the file system, and (as of 2014) few of these exist.[9]

In 2014, a review of ReFS and an assessment of its readiness for production use concluded that ReFS had key advantages over two of its main file system competitors. ZFS (used in Solaris and FreeBSD, among others) was widely criticized for its comparatively extreme memory requirements of many gigabytes of RAM for online deduplication, which ruled it out for many medium-sized and smaller systems; however, with online deduplication turned off (making for a more even comparison, since ReFS does not support it), ZFS requires only a few hundred MB of memory. Offerings such as Drobo used proprietary methods that have no fallback if the company behind them fails.[17]

ReFS was also found to be capable of running slightly faster than NTFS in most tests.[4] With integrity checking enabled, however, ReFS was found to be greatly slowed, running "dismally" and suffering "a huge hit on performance and very high latency"; benchmark testing showed a slowdown of around 90%.[5][6] The reviewers also pointed out that ReFS was still very much a newcomer ("essentially a “1.0” feature that is rough around the edges") and had not had the time to mature that file systems such as ZFS have had.[6]

In 2012, Phoronix wrote an analysis[18] of ReFS versus Btrfs, an experimental copy-on-write file system for Linux. Their features are similar, with both supporting checksums, RAID-like use of multiple disks, and error detection/correction. ReFS, however, lacks deduplication, copy-on-write snapshots, and compression, all of which are found in Btrfs and ZFS.