We software developers take for granted the notion of "a filesystem", with its paths, directories, files and operations. Yet when it comes to distributed filesystems, those notions, built from years of using desktop systems, actually constrain us to a metaphor which is no longer sustainable.

This talk examines our foundational preconceptions through the lens of a single operation in Hadoop HDFS, rename(): what it takes to implement it in a distributed filesystem, and what must be done to mimic its behaviour when working with an object store.

Preconceptions about rename()'s semantics are deeply embedded in large-scale applications such as Apache MapReduce, Apache Hive and Apache Spark, where it is the operation used to atomically commit work. Yet on object stores like Amazon S3, rename() does not work the way we think it does.
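To make the contrast concrete, here is a minimal sketch of the two worlds. It assumes a simplified task-attempt layout and models an object store as a plain key/value dict; the function names and the directory names are illustrative, not Hadoop APIs. On a real filesystem, rename is a single atomic metadata operation; on an object store it must be mimicked as a copy of every object followed by a delete, which is O(data) and observable in a half-finished state.

```python
import os


def commit_via_rename(task_dir: str, dest_dir: str) -> None:
    # On HDFS or a POSIX filesystem, rename is an atomic metadata
    # operation: the committed output becomes visible in one step,
    # or not at all. (Hypothetical task-attempt/destination layout.)
    os.replace(task_dir, dest_dir)


def commit_on_object_store(store: dict, src_prefix: str, dst_prefix: str) -> None:
    # Object stores have no rename. The usual emulation is:
    # COPY every object under the source prefix, then DELETE the
    # originals. Between the two loops, a reader sees the data
    # under *both* prefixes -- the "rename" is not atomic.
    for key in [k for k in store if k.startswith(src_prefix)]:
        store[dst_prefix + key[len(src_prefix):]] = store[key]  # COPY
    for key in [k for k in store if k.startswith(src_prefix)]:
        del store[key]                                          # DELETE
```

The dict stands in for a bucket: run `commit_on_object_store({"task/part-0000": b"data"}, "task/", "final/")` and the object reappears under `final/part-0000`, but only after a copy whose cost scales with the data, not with the metadata.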

We have to rethink our strategies for committing distributed work, with Hadoop's S3A committer as an example of the world we have to move to. The age of rename() is over.

(Warning: This talk may contain formal set-theoretic specifications of distributed system semantics, though disguised as Python code so developers can understand them rather than run away.)