It’s sometimes said that in Unix, everything is a file. In Plan 9, everything is a filesystem, and they can be transparently or opaquely mounted on top of one another to customise the environment for each process.

For example, where Unix has a special API for making network connections and configuring them, Plan 9 has a particular filesystem path: write a description of the connection you want to make, read back a new path. Open that path, and the resulting file descriptor is your TCP socket. If you want to forward your connections via another computer, you don’t need a special port-forwarding API or a VPN: you can just mount the remote computer’s network-connection filesystem over the top of your own, and everything that makes a network connection from then on will be talking to the TCP stack on the remote computer.
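To make the dial protocol concrete, here is a rough sketch in Plan 9’s rc shell (it only works on a Plan 9 system, and real programs use the dial(2) library routine rather than doing this by hand; the address is a placeholder):

```rc
# Opening /net/tcp/clone allocates a new TCP conversation; reading it
# returns the conversation's number (keeping the fd open keeps it alive).
<>/net/tcp/clone {
	n=`{read}                                  # e.g. 4 -> /net/tcp/4
	echo connect 192.0.2.1!80 >/net/tcp/$n/ctl # describe the connection
	cat /net/tcp/$n/data                       # the data file is the socket
}
```

Because this is all just files, mounting another machine’s /net substitutes its TCP stack with no changes to this script.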

I guess the big thing about Plan 9 is that it really tried to make everything a file: using the network was just opening files, writing to the GUI was just writing to files, &c. The differences from Unix mostly follow from that goal, e.g. in Plan 9 any user can assemble their own namespace of files & directories from other files & directories.

The longer version would go into detail about how that actually worked (short version: really well).

What I especially like about home-manager is that it lets me migrate things to Nix gradually: I can still do e.g. nix-env -iA nixpkgs.umlet for quick additions/tests, and I still have the escape hatch of sudo apt-get install ... if something is unavailable (or broken for me) in nixpkgs.

Every time I write something, I get 80% of the way there (often more), and then upon re-reading it sounds obvious and trite, and I talk myself out of publishing. I do this at least once a week. I’ve decided to try fighting the urge to do that, as a matter of personal improvement.

Thanks for this; however, I still can’t see how or why GNU Stow (or other similar dotfile managers, Ansible playbooks, etc.) is better than a simple shell script. Precisely because I share my dotfiles across multiple devices, platforms, and operating systems, I want them to be as platform-agnostic and minimal as possible, with no external dependencies. My script simply installs/symlinks everything, and later I use git pull to sync changes across machines.
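For reference, the kind of script being described can be very small. A minimal sketch, using throwaway temp directories in place of the real repo and home directory (file names are made up for the demo):

```shell
#!/bin/sh
# Minimal "simple shell script" dotfile installer: symlink every
# dotfile in the repo into the home directory.
set -eu

# Demo setup: stand-ins for ~/dotfiles and ~.
repo=$(mktemp -d); home=$(mktemp -d)
printf 'set number\n' > "$repo/.vimrc"
printf 'alias ll="ls -l"\n' > "$repo/.bashrc"

# The whole installer is this loop.
for f in "$repo"/.[!.]*; do
    ln -sfn "$f" "$home/$(basename "$f")"
done

ls -lA "$home"   # both entries are now symlinks into the repo
```

After this, editing either file through its symlink edits the copy in the repo, and `git pull` in the repo updates the live configs in place.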

Yep, I also use an install script (although mine is in Python), for these reasons:

I need to support 3 platforms (Ubuntu/Mint, macOS, Windows), with different things to install in different ways (or not at all) depending on the platform.

In some cases, I find it easier to procedurally generate a dotfile that points to a resource in the dotfiles repo, instead of symlinking everything into a fixed/hard-coded location.

A script can also manage your sub-repos (pulling/cloning), so that everything is done in one command.

A script can optionally do a subset of the work: if you want to update just your tmux config, you can run that part alone and it will pull your tmux plugins’ repos, redo the symlinks, etc., and nothing else.
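The generate-instead-of-symlink idea mentioned above can be sketched like this (paths and the tmux example are hypothetical; temp directories stand in for the real ones):

```shell
#!/bin/sh
# Instead of symlinking ~/.tmux.conf into the repo, generate a small
# stub that sources the real config wherever the repo happens to live.
set -eu
repo=$(mktemp -d); home=$(mktemp -d)   # stand-ins for the real paths
printf 'set -g mouse on\n' > "$repo/tmux.conf"

# The heredoc is unquoted, so $repo is expanded at generation time.
cat > "$home/.tmux.conf" <<EOF
# Generated file; local overrides go below the source-file line.
source-file $repo/tmux.conf
EOF

cat "$home/.tmux.conf"
```

The generated stub works no matter where the repo is cloned, which is the point: no fixed/hard-coded location is assumed.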

It’s certainly not for everyone. I use it because its default behaviour matches my workflow perfectly, and there’s no need for a shell script. The only thing simpler than a simple shell script is no shell script at all!
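For context, what stow does for a simple flat package is roughly the following (a sketch only: real stow also does tree folding, conflict detection, and unstowing; the "vim" package name is made up):

```shell
#!/bin/sh
# Rough equivalent of: stow --dir="$dotfiles" --target="$home" vim
# i.e. one symlink per top-level entry of the package directory.
set -eu
dotfiles=$(mktemp -d); home=$(mktemp -d)
mkdir "$dotfiles/vim"
printf 'set number\n' > "$dotfiles/vim/.vimrc"

for entry in "$dotfiles"/vim/.[!.]* "$dotfiles"/vim/*; do
    [ -e "$entry" ] || continue          # skip unmatched glob patterns
    ln -sn "$entry" "$home/$(basename "$entry")"
done
```

The default behaviour referred to above is exactly this: run one command per package, get symlinks, done.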

Unless I’m missing something, you are creating an extra dependency before you can install (and manage) your dotfiles: Stow. I.e., you have to download, compile, and install it from source, or via a package manager. What if you have to use systems you’re not the admin for and have no sudo rights? Or the system’s Perl is too old (Stow requires it)? Or, or, or…

What’s the benefit of symlinking vs. copying? I guess being able to edit dotfiles in their usual places (vi ~/.vimrc) is cool, but I actually do make temporary local changes sometimes, and I don’t want them in the repo.

I just have “modules” (directories) with apply.sh scripts and a really simple install.sh to install these “modules” (also rinstall.sh to install over SSH to a machine where I don’t want to clone the git repo). So the repo works as kind of a “staging area” (like the git index).
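A minimal sketch of that modules layout, assuming each module directory ships an apply.sh and install.sh just runs them (module names here are invented for the demo):

```shell
#!/bin/sh
# "Modules" repo: each directory has an apply.sh; install.sh runs them.
set -eu
repo=$(mktemp -d)
mkdir "$repo/vim" "$repo/tmux"
printf '#!/bin/sh\necho applying vim\n'  > "$repo/vim/apply.sh"
printf '#!/bin/sh\necho applying tmux\n' > "$repo/tmux/apply.sh"

# The whole of install.sh: run every module's apply script.
for m in "$repo"/*/; do
    sh "$m/apply.sh"
done
```

Installing a single module is then just `sh vim/apply.sh`, which is the same "do a subset" property mentioned earlier in the thread.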

Most of the things I want to make local customisations for have ways to include other files (sh, ssh, my editor, etc.), so I usually make my main config files include a .foo.local or .foo.d/* whenever possible.
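Concretely, the include-a-local-file pattern looks like this for a shell rc file (a sketch with a temp directory standing in for $HOME):

```shell
#!/bin/sh
# The ".foo.local" pattern: the versioned config ends with a guarded
# include of an unversioned, machine-specific file.
set -eu
home=$(mktemp -d)
cat > "$home/.bashrc" <<'EOF'
# ... shared, versioned configuration ...
greeting=shared
# Machine-specific overrides, never committed:
[ -f "$HOME/.bashrc.local" ] && . "$HOME/.bashrc.local"
EOF
printf 'greeting=local\n' > "$home/.bashrc.local"

HOME=$home; . "$home/.bashrc"
echo "$greeting"   # prints: local
```

ssh (`Include`), git (`[include]`), and most editors offer an equivalent mechanism, which is why the pattern generalises well.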

Some of the things I stow with stow are directories, for precisely that reason. I wrote an i3wm config manager that uses ~/.config/i3/config.d, and what I put into stow was literally that directory: since config.d is a symlink, any new file I add to ~/.config/i3/config.d magically shows up in my repo, where I can commit it.

Yeah, that was kind of my (poorly worded) point. With copy-not-symlink, your script now has to be smart enough to recognize conflicts, skip the copy, and invoke git or whatever to help merge changes. With symlinks, you use one tool to symlink (Stow) and another to resolve conflicts (git). Stow doesn’t care about conflicts, and it doesn’t have to. That’s simpler and less work than writing your own script to copy-not-symlink.

Stow is a really neat tool and I use it for managing my ~/.local tree which contains locally built software packages. However, for dotfiles I just check my whole home directory into a git repository and add ignore rules for files and directories that don’t need to be versioned. It’s pretty low complexity and has the added benefit that you can spot config drift and new dotfiles (which you may or may not want to version) very quickly. I’d very much recommend that over using stow to manage dotfiles.
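The ignore-everything-then-whitelist setup described here can be sketched as follows (a temp directory stands in for the real home directory; the whitelisted file names are examples):

```shell
#!/bin/sh
# Home directory as a git repo: ignore everything by default and
# re-include only the dotfiles you want versioned.
set -eu
home=$(mktemp -d); cd "$home"
git init -q
cat > .gitignore <<'EOF'
/*
!/.gitignore
!/.vimrc
!/.config/
/.config/*
!/.config/git/
EOF
touch .vimrc .cache_junk
mkdir -p .config/git && touch .config/git/config

git status --porcelain   # only whitelisted paths appear; .cache_junk doesn't
```

Anything new that shows up in `git status` is exactly the config drift and new-dotfile detection mentioned above.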

I thought about this years ago, and I see one major problem. Stow captures a snapshot in time: in its usual use case I install foobar-1.2.3 to /usr/local/stow/foobar-1.2.3 and it symlinks the foobar binary to /usr/local/bin/foobar. I don’t see how this applies to dotfiles, since that set of files is ever-changing: you start using new tools, stop using old ones. So every time, for it to stay meaningful, you’d have to move the files to the stow path and re-stow, no?

I’m not saying my solution is better - it’s not so different. When I start using a new machine I clone my dotfiles to ~/code/dotfiles and symlink a select few config files into ~ - but I know it’s only a small subset, and I might later migrate some new ones into that set. So basically I have the same problem I described above, but without stow (just a shell script that does this job once) to think about. And yes, I do use stow regularly in /usr/local, but I don’t think it solves this problem in a meaningful way - though maybe I’m missing something.

Stow is cool, and I remember looking at it when I was trying to find something better than git submodules and symlinking for managing my dotfiles. I ended up going with vcsh and myrepos, and have so far been really happy with them. It’s a bit more minimal a solution - a wrapper around git - but it does what I want.

I wrote a utility earlier this year (dotenv) that caters to a similar use case (managing dotfiles), but with the added requirement of having customizable and overridable dotfiles. I did that because I found that new hires never really configured the command-line tools we use on a daily basis, and I wanted to create a standard configuration that they could still customize. It’s quite new (a few months old), but I use it every day. If you’re interested, let me know what you think of it!