Absolutely! Since its official commercial release in 2017, JuiceFS has been running in the production environments of a variety of internet and high-tech enterprises, serving a wide range of workloads for more than 700 days and storing nearly 2 PB of data. In addition, every JuiceFS release must pass extensive testing before it ships.

Besides, JuiceFS is designed to be a highly available service (replicated across multiple availability zones), with a targeted uptime SLA of 99.95% per month. The availability of JuiceFS also depends on the availability of the underlying object storage, which is usually claimed to be highly available; for the actual SLA, please check the documentation of the public cloud you are using.

In addition, JuiceFS supports automatic replication of data to another object storage in a different public cloud or region for outstanding availability and reliability.

JuiceFS is implemented with FUSE and can be used on Linux, BSD, and macOS systems that support FUSE. A Windows client is under development and will be released soon. Most Linux and BSD distributions have a built-in FUSE module; you need to install or compile it if it is missing. On macOS, FUSE for macOS needs to be installed.
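
As a quick sanity check before mounting, the minimal sketch below (assuming a Linux system, where a loaded FUSE module exposes the device node /dev/fuse) verifies that FUSE is available:

```python
import os

def fuse_available() -> bool:
    """Return True if the FUSE device node is present (Linux)."""
    return os.path.exists("/dev/fuse")

if __name__ == "__main__":
    print("FUSE available:", fuse_available())
```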

JuiceFS is a distributed file system. The latency of metadata operations is determined by 1 (read) or 2 (write) round trip(s) between the client and the metadata service (usually 1-3 ms within the same region). The latency to first byte is determined by the performance of the underlying object storage (20-100 ms). The throughput of sequential reads/writes could be 50 MB/s - 400 MB/s, depending on the network bandwidth and how well the data can be compressed.
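
If you want a rough feel for these numbers, the minimal sketch below (where /jfs/somefile is a placeholder for any file inside a mounted JuiceFS volume, and the file is assumed not to be cached locally yet) times a metadata operation and a cold first-byte read:

```python
import os
import time

PATH = "/jfs/somefile"  # placeholder: any file inside a mounted JuiceFS volume

# Metadata latency: stat() is served by the metadata service (one round trip).
t0 = time.perf_counter()
os.stat(PATH)
print(f"stat latency:       {(time.perf_counter() - t0) * 1000:.1f} ms")

# First-byte latency: a cold read has to fetch the data block from object storage.
# If the block is already cached, this will be much faster.
t0 = time.perf_counter()
with open(PATH, "rb") as f:
    f.read(1)
print(f"first-byte latency: {(time.perf_counter() - t0) * 1000:.1f} ms")
```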

JuiceFS is built with multiple layers of caching (invalidated automatically). Once the cache is warmed up, the latency and throughput of JuiceFS can be close to those of a local filesystem (with the overhead of FUSE on top).

All metadata updates are immediately visible to all other clients. New data written by write() is buffered in the kernel or in the client; it is visible to other processes on the same machine, but not to other machines. Once flush(), fdatasync() or close() is called, the buffered data is committed (uploaded to object storage and the metadata updated) and becomes visible to all other clients once the call returns.

You can also call fdatasync() or close() at any time to force the buffered data to be uploaded to the object storage and the metadata updated, so that other clients can see the changes. This is also the strategy adopted by the vast majority of distributed file systems.
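
As a hedged illustration of these semantics (a minimal sketch; /jfs is a placeholder mount point, and the writer and reader are assumed to run on different machines that mount the same volume):

```python
import os

PATH = "/jfs/shared.log"  # placeholder path on a mounted JuiceFS volume

# --- Writer (machine A) ---
with open(PATH, "wb") as f:
    f.write(b"hello")     # buffered in the kernel/client; not yet visible to other machines
    f.flush()
    os.fsync(f.fileno())  # commit: upload to object storage and update metadata
# close() would also commit any remaining buffered data.

# --- Reader (machine B) ---
with open(PATH, "rb") as f:
    print(f.read())       # sees b"hello" once the writer's fsync()/close() has returned
```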

You can mount JuiceFS with the --writeback option, which writes small files to local disks first and then uploads them to object storage in the background. This can speed up copying many small files into JuiceFS.

The size of JuiceFS is the sum of the sizes of all objects. Each file or directory has a minimum billable size of 4 KB (the same as the Azure Data Lake Store billing method), so we recommend storing data in larger files to save costs and improve performance.
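
For example (a back-of-the-envelope sketch based only on the 4 KB minimum above), one million 1 KB files are billed as roughly 4 GB, whereas the same data packed into larger files would be billed close to its actual size of about 1 GB:

```python
MIN_BILLABLE = 4 * 1024  # 4 KB minimum billable size per file or directory

def billable_size(file_sizes):
    """Sum of per-file billable sizes, applying the 4 KB floor."""
    return sum(max(size, MIN_BILLABLE) for size in file_sizes)

small_files = [1024] * 1_000_000            # one million 1 KB files (~0.95 GiB of data)
print(billable_size(small_files) / 2**30)   # ~3.8 GiB billed
```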

The real-time size of each directory (including all files and subdirectories in it) can be checked in the web console.