One of the constant dilemmas I encounter in software development is where to put files. In theory, it doesn't matter; I can set the lookup paths and file references however I want, and the software will compile and run. In practice, I feel it is one of the biggest factors in how comfortable I am developing a project. As a programmer, I need to be able to reload a big ol' wad of context whenever I switch projects, and the directory structure serves in part as external memory. Refactoring, collaboration, permissions, and deployment are all made easier by proper directory structures and naming. Code rot, the scourge of long-running projects, is multiplied by poor initial code management decisions. In short, file management makes a lot of little differences that can make or break a project in the long term.

Usually, the standard hierarchical filesystem model is perfect. I have a project Foo, it contains folders for source, compile info, and binaries, and each of those folders is probably hierarchical as well. The source and binary folders usually have matching subdirectory structures, but that particular duplication of structure is OK, because the binaries folder is generated, not edited; I never have to think about changing both directory structures at the same time.
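As a concrete sketch of that single-project layout (all names here are hypothetical, not taken from any real project):

```shell
# One project, one tree: source, build configuration, and binaries
# live side by side, and src/ and bin/ mirror each other.
mkdir -p Foo/src/util Foo/build Foo/bin/util
# Only src/ is edited by hand; bin/ is regenerated by the build,
# so the duplicated structure never has to be maintained manually.
```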

This standard model fails when I have to work with multiple related projects. If I put the Foo and Bar projects side by side, I notice that both have src and bin directories. Perhaps I should instead have a single top-level src folder and bin folder, each subdivided into Foo and Bar directories! Each layout has obvious advantages and disadvantages: the project-first layout keeps Foo and Bar self-contained and easy to move or archive independently, while the role-first layout gathers all the source (and all the generated output) in one place. This is known as the problem of cross-cutting concerns, where there are strong yet orthogonal organizational principles that compete for priority.
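The two competing layouts can be sketched like so (again with hypothetical names; the A/ and B/ prefixes exist only to show both side by side):

```shell
# Layout A: project-first — each project owns its src and bin.
mkdir -p A/Foo/src A/Foo/bin A/Bar/src A/Bar/bin

# Layout B: role-first — one src and one bin, subdivided by project.
mkdir -p B/src/Foo B/src/Bar B/bin/Foo B/bin/Bar
```

The same four leaf directories exist in both trees; the only difference is which axis (project vs. role) gets the top level, which is exactly why neither layout dominates.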

I like to imagine that we would be better off with a new filesystem model, perhaps one based on matrices, tagging, smart searches, filters, or DAGs. I don't know what the right answer is, but I'd like to hear your thoughts on the matter.