I love the core concepts behind Nix, and have great respect for its developers' engineering abilities. However I am skeptical of their ability to achieve broad adoption beyond their current community of passionate experts.

There are two reasons for my skepticism:

1. User experience. Unless you're one of the "passionate experts", the Nix user experience is pretty terrible. The learning curve is punishing compared to competing systems.

2. Elitist culture. In my experience, the Nix community is too smart for its own good. Their technical foundation is so far ahead of mainstream systems, and their technical design so satisfying to passionate experts, that they've forgotten how to live a day in the shoes of a mere mortal. Try pointing out flaws in the user experience, or the need to offer more pragmatic ways to migrate existing systems, and you will be met mostly with derision and reminders of Nix's superior engineering. But superior engineering is not everything. If you want to spread the amazing potential of Nix to everyone, then you need to compromise with a flawed, imperfect world. You need to meet users halfway, and guide them to the promised land, instead of waiting for them to show up on their own. Otherwise someone will come along who will do it for you.

All this is eerily similar to what happened to functional programming communities.

1. Lack of clarity as to how language/application package managers interact with Nix (pip, stack, Vundle). Pretty much every time I've asked about this, I've been told to go use `nix-shell` or to install things through Nix. Increasingly, when I get odd behavior with applications I install through Nix, my first resort is to uninstall the Nix version and install it from apt; it might be a bit older, but I'm sure it'll work as expected. I've gone through the apt package -> Nix package -> apt package cycle three times that I can remember off the top of my head: with python/ipython, with Haskell tooling, and with Vim.

2. Nix on Ubuntu feels like a second-class citizen. Things that interact with graphics drivers often don't work properly, e.g. video players. (I understand that there are technical reasons why this is the case, but there's no warning that this is the case.) There only appears to be an online package search for NixOS (https://nixos.org/nixos/packages.html#) and not for Nix on other platforms. Nox helps, but it doesn't seem to have feature parity (no way to see the info you get by clicking a package name) and is also slow.

3. In general package quality is not great for less frequently used packages. Inkscape was missing potrace for a while. Rarely used packages go unmaintained.

4. Poor CLI. Needing to pass `-A` to a lot of commands to use them the "right way" smells of a poorly thought out design. No feedback or suggestions if you type the wrong thing to `nix-env -e`. It looks like there are major changes to this in 2.0, so this might have been improved.
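For anyone who hasn't hit this: the difference looks roughly like the following (a sketch; the `nixpkgs` attribute prefix depends on how your channel is named).

```shell
# By name: nix-env scans every package in the channel, which is slow,
# and ambiguous when several attributes provide the same name.
$ nix-env -i hello

# By attribute path with -A: fast and unambiguous, but you have to
# already know the attribute path and remember the flag.
$ nix-env -iA nixpkgs.hello
```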

Despite being someone who's gotten their toes wet contributing to nixpkgs, I'm likely not going to be installing Nix when I upgrade from Ubuntu 16.04 to 18.04.

> Needing to pass `-A` to a lot of commands to use them the "right way" smells of a poorly thought out design.

That stems from the namespace problem that Nix devs seem to think doesn't exist.

Packages that have the same name, or belong in a different namespace (hackage, pip, etc.) are confusing to find from a user's perspective. The real problem is that every package in a channel has to go into the same global namespace. Practically every package manager has this problem, but, in my opinion, Nix deals with it the most poorly.

To make matters worse, it isn't obvious which namespace .nix files are supposed to reference, or what functions exist globally. Nix really does need a lot of UX work, and I'm glad to hear it is getting some.
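As a concrete illustration of that scoping confusion, here's a hypothetical minimal package file (the shape is standard nixpkgs style, but the name, URL, and hash are all placeholders):

```nix
# default.nix: nothing in this file says where `stdenv` and
# `fetchurl` come from. They are injected as function arguments by
# nixpkgs' callPackage convention, not resolved from a global scope
# visible in the file itself.
{ stdenv, fetchurl }:

stdenv.mkDerivation {
  name = "example-1.0";
  src = fetchurl {
    url = "https://example.org/example-1.0.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}
```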

I have a related pain-point. I want to maintain nix packages on their own long-lived branches. I also want to base these branches on the latest stable release rather than unstable.

This would require upstream nixpkgs to treat my package branches as equal peers and to pull their changes with 'git merge' like in the Linux kernel workflow.

The status quo seems to be that upstream expects people to work directly on the unstable branch and to make contributions using short-lived topic branches that can be rebased at any time.

I spent a lot of time packaging Pharo Smalltalk for nixpkgs but I found it too complicated to stay in sync with upstream and so I ended up orphaning that package and redoing it on a downstream overlay repo that insulates me from the upstream nixpkgs workflow.

If I may rush to the commenter's rescue: Darch addresses your issues with NixOS by combining a familiar set of tools (Arch Linux) with a stateless architecture. It's a fantastic melding of extreme package availability and best-in-class documentation with declarative dependability.

It's not clear to me that this really solves the same problems that Nix does.

I'd imagine this solution inherits any problems that pacman has. The Arch wiki states that "if two packages depend on the same library, upgrading only one package might also upgrade the library (as a dependency), which might then break the other package which depends on an older version of the library." (https://wiki.archlinux.org/index.php/System_maintenance) This is one of the problems Nix does not have by design; in fact Nix lets you mix and match packages painlessly.

This also doesn't seem to allow unprivileged users to install packages, which is kind of a side benefit of using Nix.

And I see the same issues that https://blog.wearewizards.io/why-docker-is-not-the-answer-to... warns of. Heck, the author even admits this is the case in an adjacent comment: "I want a machine that can be declared and rebuilt deterministically (at least *semi*-deterministically, rolling distro and all)" (emphasis mine).

Also, frankly, I don't want to run my personal computer like a server, with complete immutability and the need to build fat image files every time I want to try out an additional program. That seems to me like a workflow that's better suited for servers where spending a few minutes for a deploy isn't a blocking operation, and where stateless service design is considered best practice.

> if two packages depend on the same library, upgrading only one package might also upgrade the library

Yes, you are inheriting the nuances of whatever package manager you choose. Maybe another distro can give you truly fixed package versions? You can also use apt to pin versions. Darch didn't introduce this problem, but it doesn't solve it either. So yes, if you need 100% determinism, Nix is your guy. I don't think this is a big issue on Ubuntu systems, though, or on most non-rolling distros. Their apt updates are typically well tested and don't bump major versions.

> need to build fat image files every time I want to try out an additional program

You can install your packages and use them when you like, without requiring a rebuild. Hell, that is even part of my workflow for some applications like docker. I have dotfiles with "install-docker" and "install-vmware" aliases that install those whenever I need them, instead of baking them into images.
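For readers curious what that looks like, a hypothetical version of such aliases on an Arch-based image might be (the alias and package names here are illustrative guesses, not the commenter's actual dotfiles):

```shell
# Installed into the running (tmpfs-overlaid) system only; nothing
# here persists across reboots unless baked into a Darch recipe.
alias install-docker='sudo pacman -S --needed docker && sudo systemctl start docker.service'
alias install-vmware='sudo pacman -S --needed open-vm-tools'
```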

> That seems to me like a workflow that's better suited for servers where spending a few minutes for a deploy isn't a blocking operation, and where stateless service design is considered best practice.

I disagree. If that were the case, then why Nix? Obviously, stateless is valuable. I have multiple machines that I perform common tasks on. I hate having to manage updates every time I come back to each one of them. With Darch, I can deploy one single image to all devices and be confident they will never drift in installed packages/configuration. I never again have to ask myself "what machine am I on again?". Stateless may be the definitive way to run servers, but that by no means restricts it to servers. I have been running Darch for a few months now, and I find it incredibly useful and calming.

There are two orthogonal problems here: a lack of package isolation and nondeterminism.

Nix isolates packages, such that updating one package has no impact on any other packages (with unavoidable exceptions like the graphics driver, presumably).

You're right that package isolation isn't much of a problem on non-rolling distros. One of the benefits of Nix is that you get some of the stability and predictability of an LTS distro with the freshness of a rolling distro when desired, without having to deal with package conflicts.

Incidentally, Nix doesn't need to be used in a deterministic manner. In fact, I don't think most desktop users of Nix care too much about determinism for most packages they run. I certainly don't; I'm happy to follow along with whatever arrives in my channel. Nix has features that support determinism, and I'm certainly glad they exist for when I end up needing them, but they're not necessarily why people use Nix.

> Obviously, stateless is valuable.

When I said "stateless", I was referring to the whole "cattle, not pets" view of servers, where the running state of any particular server is unimportant, with nothing in the filesystem being of value. I was arguing that needing to build a new image and reboot in order to change which packages are installed is a poor fit for the desktop use case, where frequent reboots are much more inconvenient than for the server use case.

I'm not sure what taohansen meant by "stateless"; they seem to mean something different.

Anyways, this point is not really applicable anymore, since you've stated:

> You can install your packages and use them when you like, without requiring a rebuild.

Presumably if you install additional packages uncontrolled by your tooling, then your systems can start to drift away from each other.

Nix does not have this compromise; there's no build step. At any given point in time you can reproduce whatever configuration you have on one machine on another machine, regardless of how piecemeal you arrived at that configuration.

The "why" is exactly the same reason NixOS exists. I want a machine that can be declared and rebuilt deterministically (at least semi-deterministically, rolling distro and all). I looked into NixOS, but the DSL was too much, and the npm/pip/etc. stuff was a mess. I am a fan of Arch because of its package availability and documentation, so I figured out a way to combine the two, using a "Docker-ish" approach.

My machines are built entirely on Travis CI and pushed to Docker Hub. Once I make a change to my recipes, ~20 minutes later I can pull a fresh image and boot again.

Another thing I didn't like with non-declarative OSs (non-NixOS) was that if I wanted to just test a package out, after removing it, it would leave shards of config/package dependencies still on my system. With Darch, each boot has a tmpfs overlay, which means I can uninstall/install to my heart's desire, knowing that only things I commit to my recipes will be persisted. For example, I was trying to set up Ruby, and I had to try many ruby environment managers before I found one I liked. After a reboot, I was certain that the other ruby packages I tried were 100% scrubbed from my machine.

I also like the Docker approach because, using layers, I can quickly switch out the "desktop environment" layer for i3/plasma/gnome/etc., or my base image for Ubuntu/Arch/Void Linux. This makes distro and DE hopping a breeze.

As for using Darch for a server, I would wait until I get the Ubuntu image done; that way the builds will be more deterministic (instead of using rolling distros). I can see using that for servers or IoT devices. I also intend to add PXE support to boot these images from the network, making it easy to manage the operating system on a fleet of devices. In summary, it is really up to your recipes and what operating system you choose.

Sure, pinning is great, and the ability to ignore versions seems to be a great design goal. Unfortunately, that design goal is shoved down the throats of users, and makes a very messy - and difficult to traverse or use - namespace where different versions are just thrown in the name without another thought.

NixOS is my favorite Linux distro, and the only OS currently installed on my laptop. It's also frustrating in unnecessary ways.

Thanks for linking that discussion. I'm surprised that you didn't get acknowledgment of the UX issues there, and I'm pretty sure there are people who would like to see the UX situation improve. It just takes work... this release represents a large step forward in UX. I think they'd welcome your input in making 3.0 better! I made a couple small docs fixes the other day.

Perhaps I'm wrong, but I not only agree with the Nix dev in that thread; I also think this seems like the opposite of the case the GP was describing in their second point.

From the gp's post:

> Their technical foundation is so far ahead of mainstream systems [...] they've forgotten how to live a day in the shoes of a mere mortal. Try pointing out flaws in the user experience [...] If you want to spread the amazing potential of Nix to everyone, then you need to compromise with a flawed, imperfect world. You need to meet users [...]

The definition of "user" can be vague, but if you're talking about "mainstream" and "mere mortals", I don't think setting up Haskell dev environments is the primary use case. As was commented in that thread:

What I was complaining about was the difficulty of installing a specific package version. That's trivial with other build systems, but not with Nix.

> I don't use [package managers] to install dev dependencies.

I'm under the impression that most people do. Either way, I see the main advantage of using Nix to be setting up dev environments without installing a bunch of global packages, or dealing with a mess of files. Unfortunately, as a user, I find it to be much more of a hassle than it needs to be.

When quoting me, you changed my wording to suit your own argument, which seems to imply you get my point but are choosing to ignore it. (You swapped out "Apt", my distribution package manager, for "[package managers]", which would include both distro and dev language package managers. The latter are for devs, the former for users)

It certainly reads like "it" was to mean "the aptitude package manager" to me.

If you like to use a separate package manager for libraries/etc. that is fine. That is not something I enjoy doing.

Using Debian, I would frequently install a build dependency and forget that I'd cluttered my install doing so, leaving behind a package I don't care about or even remember, which keeps getting updated and can break future installs. I read about nix-shell, and have been using NixOS ever since.
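The nix-shell workflow that fixes this looks roughly like the following (a sketch; the package names are illustrative):

```shell
# Enter a throwaway environment with the build tools on PATH. Nothing
# is installed into the user profile, so there is nothing to forget
# to clean up afterwards.
$ nix-shell -p gcc gnumake pkgconfig
[nix-shell]$ make
[nix-shell]$ exit
$ # the tools are gone from PATH again
```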

NixOS has clear room for improvement, but, unfortunately, some of the devs are adamant that it cannot - worse: should not - be done.

> It certainly reads like "it" was to mean "the aptitude package manager" to me.

The aptitude package manager is the package manager for the Debian operating system. It is for managing packages that make your OS run. Unless you're defining multiple repo versions in your sources.list and using apt pinning, it typically gives you one version of the latest supported software. Other distro/OS package managers do the same. Even Homebrew for macOS deletes older versions and makes them inaccessible. NixOS' package manager does the same here. This is very typical. Portage is an exception.

Separately, Cabal is a package manager for development dependencies for the Haskell programming ecosystem. This is what you should use for setting up such a dev environment. It is specifically developed with this in mind.

Asking your OS package manager to additionally manage the ecosystem of every programming language you may want to develop in seems a bit out of scope, no?

> NixOS has clear room for improvement, but, unfortunately, some of the devs are adamant that it cannot - worse: should not - be done.

Your definition of "improvement" seems to be to handle use-cases it isn't intended for and for which there are specialised package managers, likely introducing huge complexity and spreading resources thin. This seems like something that could also be a disimprovement.

> Separately, Cabal is a package manager for development dependencies for the Haskell programming ecosystem.

And cargo/rustup, go for golang, pip for python, gem/bundle for ruby...

But they all tend to depend on parts of the OS - or on their own C toolchain to compile their own versions of C libraries.

Which leads to the question of how you most easily write a tool in haskell that you can distribute as a deb package.

Saying that "for haskell, playing nice with Perl's ssl-dependencies is hard, so we won't bother" is one approach. It certainly is one way to use up the additional resources provided by Moore's law: no shared libraries, just thousands of copies of statically linked strlen functions, in thousands of chroots. (Also, hello there, Docker.)

Snapshots and isolation are great tools, but I don't think it's clear cut what's best yet. Especially when you want to ensure you're not running code with well-known vulnerabilities - be that in bash, SSL keygen, or malloc.

> Asking your OS package manager to additionally manage the ecosystem of every programming language you may want to develop in seems a bit out of scope, no?

For apt? maybe. For Nix? no.

Nix doesn't install packages, it creates environments. NixOS considers one environment to be the main operating system. That is the main feature of Nix: self-contained reproducible environments.

Because it is so well suited to the task, Nix provides all the packages on Hackage, sorted into separate namespaces for each supported version of GHC. Each namespace, however, supports only one version per package; so packages for which it makes sense to provide multiple versions have loosely adopted the name_version naming scheme as a workaround.
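Concretely, the two schemes look something like this (attribute paths are illustrative; the exact GHC and package versions depend on the nixpkgs revision):

```shell
# Per-compiler namespaces for Hackage packages:
$ nix-env -iA nixpkgs.haskell.packages.ghc822.lens

# ...versus version-mangled top-level names for packages where
# multiple versions must coexist:
$ nix-env -iA nixpkgs.postgresql96
```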

Out of frustration, I searched for an answer why this is the case, and found an issue someone else had already discussed at length, only to find a dead end: the issue was discarded (not honestly considered). I then commented at length to explain my line of thinking, only to hear excuses for keeping the status quo, rather than actual help or discussion. The developers commenting on the issue were hung up on their preference for "pinning", and didn't want to consider the namespace mess I have such issue with.

It seems you share the same attitude that changes cannot be done - not because you are certain that is the case, but because you don't believe it should be. There is no reason behind this attitude, and I frankly have no patience for it.

If you want to discuss this problem and any related nuances, I am happy to do so. Please do not naively tell me that my perspective doesn't exist, or cannot make sense.

> It seems you share the same attitude that changes cannot be done - not because you are certain that is the case, but because you don't believe it should be. There is no reason behind this attitude, and I frankly have no patience for it.

I don't share this attitude in any objective sense. Changes can always be done. Whether they should is a pros-vs-cons consideration, many of which are subjective, and both conclusions are therefore valid.

I wasn't arguing that this should not be done, I was challenging the perspective that this should be expected. There's a difference between saying devs should follow the status quo, and asking devs to consider extending the scope of their project and adopt a new approach. And calling devs "elitist" for refusing to do the latter seems pretty rich.

As has been mentioned, Portage already does this, and it's a much older, more traditional package manager without the concept of setting up environments, so no one is questioning that this can be done.

> Please do not naively tell me that my perspective doesn't exist, or cannot make sense.

If your perspective is that it would be nice for Nix to support arbitrary package versions, that's great. If your perspective is that any dev who refuses to implement such a non-normative and complexity-increasing feature is elitist, then I don't believe your perspective makes sense.

Minor note: I do actually agree with zapita's original comment, so this lengthy thread underneath may seem like pedantry, but I just found it odd that you felt your example fit with zapita's point, as it seemed like the opposite to me.

It seems like Nix could benefit from a first-class concept for pinning the channel and managing updates to the channel. To achieve build reproducibility, the default outcome of checking out a project and building it should be to get the same result as everyone else who does so at the same project revision. I'd expect there to be a way to take a snapshot of the channel and lock those versions in the project until you're ready to upgrade.

I have tinkered a lot with Nix, but have not used it for any production system. I'd be reluctant to use Nix in production if there wasn't a good way to pin the channel, so that everyone who is developing on a project sees precisely the same versions as each other, and everyone who is patching a specific version of a project sees the same version as what's currently deployed. In my opinion, this should be the default behavior and not something that you have to go out of your way to achieve.

Maybe you could model this as something like a "channel fork" or "user channel". When you create a new project, you also create a channel for it that inherits from an existing channel, except that the versions of all packages are pinned and you can control the way in which changes make their way into your user channel from the upstream.

My company has a proprietary technology that's similar to Nix, and we tackle the build reproducibility problem in this way, by making it easy to branch off the main channel and control how updates make their way into it (which is like a merge). The default behavior when creating a project is to also create a project-specific channel, derived from the upstream, company-wide channel. Dependencies are declared at the major version level only, and versioning beneath that level happens implicitly via the project channel.

In this system you think of all channels as having revisions. Developers can control how their channel merges in changes from its upstream channel, which provides stability and reproducibility: until you merge and release a new version, `my-channel` always references exactly the same package versions for all developers. The default behavior of channels is to merge new versions from the upstream channel periodically, but developers can decide what's right for them, whether staying on the bleeding edge or prizing stability.

Software builds, releases, and deployments always happen in the context of a specific channel revision (`my-channel@revision`), which can be named and referenced, so if necessary you can inspect or replicate exactly the code that's part of a colleague's configuration or a production system. Not just approximately but exactly, for every package involved down to `gcc` and `glibc`. It's easy to create a workspace with the package versions specified by `my-channel@12345`.

By further tracking which source code commit hash corresponds to each package release in the channel, you can trivially look up the exact source of all packages (my-channel@12345 -> FooPackage@a1b2c3d4). Extremely useful! When you work and version things in this way, you hardly need version numbers beneath the major version level.

I suppose it's reasonable that the Nix project doesn't want to host binary versions of lots of old packages to make this use-case fast. In that case, I'd want it to be easy to clone the binary version of those dependencies and source them locally. Maybe this could operate as something like a caching proxy in front of Nixpkgs, that will store my own permanent copy of any package that I access or build. On the other hand, keeping builds forever is expensive, so perhaps this is an opportunity for Nix to provide a commercial offering with private channel hosting and a package cache with infinite TTL.

If you want to have a separate channel which you can update on your own, just fork nixpkgs on Github and point your pkgs.nix at your fork.
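The usual shape of that (a sketch; the URL and hash are placeholders) is a pkgs.nix that pins an exact revision:

```nix
# pkgs.nix: everyone who imports this gets exactly the same package
# set, regardless of which channels they have configured locally.
import (builtins.fetchTarball {
  # Your fork (or upstream nixpkgs) at a specific commit.
  url = "https://github.com/youruser/nixpkgs/archive/<commit>.tar.gz";
  # Hash of the unpacked tarball; a placeholder here.
  sha256 = "0000000000000000000000000000000000000000000000000000";
}) {}
```

Project builds then use `import ./pkgs.nix` in place of `import <nixpkgs> {}`, so everyone who checks out the project at the same revision evaluates the same package set.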

The Nix binary cache keeps every version of every binary in nixpkgs that's ever been built on the Nix project's build servers. (Note that that's not absolutely everything in nixpkgs, since some things aren't built centrally on the build servers for various reasons, but it's most things.)

However, I might be misunderstanding. Was there some other feature or ergonomics benefit that you were looking for?

> I suppose it's reasonable that the Nix project doesn't want to host binary versions of lots of old packages to make this use-case fast. In that case, I'd want it to be easy to clone the binary version of those dependencies and source them locally. Maybe this could operate as something like a caching proxy in front of Nixpkgs, that will store my own permanent copy of any package that I access or build.

I don't know when it will be ready, but some work has gone into backing Nix caches with IPFS, and it looks like it could be a really cool solution for making it easy to share a cache for 'extra' or 'old' packages without much infrastructure work on the parts of users.

As a starting point, I think Nix needs to become a bit more predictable. I like Arch Linux or Slackware because they are minimal (all packages are mostly vanilla upstream, nothing installed if you don't explicitly do it yourself) plus all operations have obvious side effects. Nix is also minimal, but sometimes side effects are really non-obvious. For example, when you update your system and you are subscribed to some channels, it's not obvious whether you will get binary substitutes (pre-built packages) or trigger a lot of local compilation. Things are getting better, e.g. with dry-run capabilities, but a friendlier UI should be the default.

Furthermore, there should be better documented ways to run software that hasn't been packaged for Nix. There are tons of options: Docker, Systemd containers, FHS chroots. But it should be obvious too.

On the other hand, I can honestly say that if you want a really hassle-free machine that never breaks and you are not installing stuff that isn't packaged, NixOS is arguably one of the best distros out there. I have a few remote machines overseas in my dad's house for some of his simulations. He knows no Linux. I can update Nix machines and tweak configuration remotely knowing that nothing will break. I can roll back anytime. Plus the whole configuration is a single declarative file.

On the other hand, to me, this is basically the same story as `git` (sans the cultural gravity of Linus). Incredibly well thought-out internal architecture, extremely talented developers, functional-style immutable data structures, idiosyncratic frontend that’s a low-level façade over the internals, painful learning curve, often difficult (but possible!) to recover from real-world corruption, etc.

I say this as someone who has loved `git` for a long time and appreciates that knowing the internals and how the commands map onto them leads to an incredibly powerful tool.

The command-line interface was about the same as it is today. Which is to say, it’s great if you understand the underlying architecture and how the commands map natively to it, but it’s a huge learning curve if you’re coming from something else, have preconceived notions of what commands like “commit”, “checkout”, “fetch” et al. do, and don’t have the time or the desire to learn about what a DAG is.

The cogito interface used to be a thing, but I don’t think it ever gained widespread adoption and was pretty quickly abandoned.

rooted DAG != tree, so please don't introduce people to a concept by explicitly misleading them.

All nodes in a tree must have exactly 1 parent, except for the root that has exactly 0 parents. All nodes in a (single-)rooted DAG must have at least one parent, except for the root that has exactly 0 parents.
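The difference shows up directly in git: a merge commit has more than one parent, which is exactly what makes git history a DAG rather than a tree. A quick sketch with a throwaway repo (standard git commands only):

```shell
# Build a minimal repo whose history cannot be a tree.
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you
main=$(git symbolic-ref --short HEAD)   # default branch name
git commit -q --allow-empty -m root
git checkout -q -b side
git commit -q --allow-empty -m side-work
git checkout -q "$main"
git commit -q --allow-empty -m main-work
git merge -q -m merge side              # creates a merge commit
# The merge commit lists two parent hashes:
parents=$(git log -1 --format=%P | wc -w)
echo "$parents"                         # two parents => not a tree
```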

What I said was wrong. It was also doomed to be sloppy at best, because I intended to compare trees as programmers know them (i.e., typically directed and rooted, which is more structure than mathematicians' 'trees' have) to DAGs, which have a precise mathematical definition.

But I hardly meant to introduce anyone to the concept of either DAGs or trees by that comment. If there's a chance of it seriously misleading anyone, I would be happy to see it moderated out of sight.

The point I intended to make, more carefully stated, is this:

DAGs are closely related to structures that programmers already deal with all the time, and so it strikes me as odd when programmers protest learning to work with DAGs.

> and don’t have the time or the desire to learn about what a DAG is.

That's the point people are complaining about with git. It's fine to consider that an issue, but that doesn't mean the interface or documentation are bad, it's that people want it to feel familiar.

I, for one, was very comfortable with git's command line from the start. I came in wanting something different, and found the distributed model was just what was needed, and frankly, more straightforward than the popular centralized approach. It also helps that you can play with and break things locally, rather than having to worry about a server someone else relies on.

As a former hg user, I second that. The fact that git has so many users is a testament to our incredible adaptability as a species. If we can adapt to the git ui, surely we will have no problem colonizing alien planets.

Where have you run into derision/assholes with Nix or FP? Absolutely there are assholes (or perhaps people who just don't realize how rude their words/actions are) in the community - especially on Twitter and reddit in my experience - but I've always felt welcome and never been talked down to when I asked for help in an IRC/Discord channel.

I see this get mentioned all the time in respect to FP communities in particular, but I really can't relate at all. And it's not like I haven't asked my fair share of stupid questions in these communities.

I didn't encounter any assholes (or rather I didn't encounter any more than usual... I have yet to find a completely asshole-free open-source project).

Maybe "derision" was the wrong term. Nobody was rude or disrespectful. Just oddly patronizing. They just didn't feel there was anything wrong with the user experience, or any need to improve Nix to better meet the needs of non-experts. The main sentiment I observed was that Nix was pretty close to perfect, but also misunderstood - and that it was mostly up to the rest of the world to better understand it. That struck me as the wrong way to go about making your project successful.

The area that IMO needs the most work is the packaging experience, including the DSL itself... and the "I just want to build this source without having to become a black belt functional packager first" experience.

My biggest gripe with Nix packaging is that the tooling varies so widely per ecosystem. Aside from that, though, I generally find packaging for Nixpkgs to be much, much easier than packaging for most distros.

Most of what we can do with Nix is both a consequence of its model and constrained by it. Part of what this means is that things users want may not have direct replacements in the Nix world, even when they do have _alternatives_.

When we have a close but significantly different analogue of something a (potential) user seems to be asking for, I think that's when you tend to see this type of response the most. ‘Once you understand the model, you'll see that this slightly different thing will probably work better.’

I think:

1. Sometimes it's really true: understanding the model can inspire you to take a different path, make a different choice than you naively might, and ultimately be happy with it.

2. There's a selection bias working against the community here: as long as the Nix UI is clunky, the people we'll find in the community will be those who are drawn in by its design principles and can overlook the UX warts. The improvements that come with the Nix 2.0 release make me hopeful that we can attract more contributors for whom such things are very important, and who will be inclined to further refine the UX as they work with Nix.

I think most Nixers have known that the UX is horrible in some respects for a long time, and also that improvements in that regard were in the pipeline for Nix 2.0 (which was to be called '1.12' until recently). Heavy Nix users have largely been focused on our own use cases, because we only have so much time and energy. I hope that as our userbase grows and UX improves, the composition of our community can change so that users can feel better heard and represented when they raise concerns like you have here.

I've found that within other ecosystems, people are increasingly considering replacing existing tooling with Nix. For example, I've heard some rumblings in the Elixir space considering using Nix for managing SDK versions, for native/binary package dependencies, and for generating release packages.

To accelerate the adoption of Nix, the Nix maintainers would need to reach out and help other ecosystems adopt Nix or Nix-like approaches. But I think, as it stands, other ecosystems will eventually look toward Nix on their own, as they grow and attempt, one by one, to solve all the same problems Nix has already solved.

> Except that Docker et al are already that compromised solution. No need to replicate.

I think you're right that Docker has embraced the "meet half-way" school of thought (maybe too much?). But that doesn't mean they have exclusivity over the approach.

Docker lacks many of the features that make Nix so powerful. On the other hand Docker is much more pleasant to use, for me at least.

If a tool came along that combined the simplicity of Docker with the power of Nix, I would probably stop using both, and just use that. I think Nix itself could be that tool... if only they fixed the issues I described above.

But when you need to rebuild your container image, how do you make sure it builds the same way? Nix, IMHO, is perfect for building reproducible containers, which you can then run on a minimal install of an OS with a larger ecosystem (or just GKE/EKS) to find & fix crippling bugs before you run into them.

Instead of deploying individual configuration files, you would generate `/etc/nixos/configuration.nix` + some modules maybe followed by a `nixos-rebuild switch`.
No special support is required in Chef/Ansible for this; both Chef and Ansible are available in Nixpkgs.
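
A minimal sketch of what that might look like: a config-management tool templates out a small `configuration.nix` and then applies it. The two option names below are real NixOS options, but the file path under `/tmp` is purely illustrative (a real deployment writes `/etc/nixos/configuration.nix`), and the package choices are made up:

```shell
# Template out a minimal NixOS configuration (illustrative path).
mkdir -p /tmp/nixos-demo
cat > /tmp/nixos-demo/configuration.nix <<'EOF'
{ config, pkgs, ... }:
{
  services.openssh.enable = true;
  environment.systemPackages = with pkgs; [ git htop ];
}
EOF
# On the target NixOS host, the tool would then apply it:
#   nixos-rebuild switch
cat /tmp/nixos-demo/configuration.nix
```

Because the whole system is derived from this one file, "deploying" reduces to writing the file and running a single command, with rollback for free.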

You may be right here. I like to think I am not a complete idiot, but I just read their 'about' page and now I know Nix has a folder naming convention and something about functional programming, but I have no idea how it is going to solve the issues I have with pip/apt/conda etc. In fact I am not really sure if it is supposed to, apart from seeing that it is a package manager.

I've been kinda waiting for something like what Ubuntu is/was to Debian. Approachable. Last time I tried Nix, the learning curve was too steep. This update looks like they've started to work towards approachability a bit, so I might give it another go.

I have been using nix for a while to build binary packages for crashcart[1] and I really love the premise of isolated source-based builds.

Unfortunately, over time I've become quite frustrated with the pull-everything-from-the-internet model. If you are building packages from scratch each time, instead of pulling down cached versions with the nix command, the build breaks quite often.

Mostly it is silly stuff like the source package disappearing from the net. A particularly egregious offender is the named.root[2] file being updated, which will cause everything to fail to build until the sha is updated in the build def.
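
The failure mode here is easy to demonstrate without Nix at all: a fixed-output fetch pins an expected hash, and when the upstream bytes silently change, the hash check fails. A sketch with local files standing in for the remote named.root:

```shell
# The build definition pins the hash of the file as it existed at packaging time.
echo "old contents" > /tmp/named.root
expected=$(sha256sum /tmp/named.root | cut -d' ' -f1)   # the sha pinned in the build def

# Upstream later updates the file in place, at the same URL.
echo "new contents" > /tmp/named.root
actual=$(sha256sum /tmp/named.root | cut -d' ' -f1)     # what a fresh fetch now sees

if [ "$expected" != "$actual" ]; then
  echo "hash mismatch: everything fails to build until the sha is bumped"
fi
```

The hash pinning is exactly what makes builds reproducible, so this isn't a bug so much as an inherent tension: reproducibility assumes the inputs themselves are immutable, and named.root isn't.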

I don't know that there is a great solution for this problem. Maybe there needs to be a CI system that does a from-scratch build of all the packages every day and automatically files bugs. Alternatively, a cache of sources, and not just of built packages, could ease the pain. This issue probably affects very few Nix users, but it has demoted my love of Nix down to "still likes but is somewhat annoyed by".

Yeah I'm aware of the build mirror, but crashcart builds things under a different prefix so it must build things from source each time. I realize this doesn't affect most users (which is why it takes a while for disappearing sources to be found and fixed). A content addressed mirror for sources as well would solve the problem nicely.

The point of crashcart is to be able to side load a filesystem with utilities into a running container. It is very important that the location we pick doesn't conflict with any path already in the container. If we used /nix as the mount path it would conflict with any container that uses nix. In order to prevent this (probably rare) conflict, we build our utilities in /dev/crashcart/ instead.

Hmmmm, interesting... That does seem like a pretty good reason, though maybe you could bind mount over /nix? That should be fine from a Nix perspective, not sure how well it works with container technologies.

We could bind mount over /nix or /nix/store, but that means any existing Nix packages from the container would not be available. The whole point is to have the whole container file system available along with the utilities. We could alternatively find each package that we need for our debugging utilities and bind mount each directory from the store individually. This would work thanks to the unique paths in the store, but it means potentially hundreds of bind mounts and is an orchestration nightmare.

We didn't look at using overlayfs. It might be possible, although it would introduce a dependency on kernel version and/or module. A custom FUSE filesystem might be an option here as well, but FUSE in containers is a bit sketchy at the moment.

This looks like a neat feature. Reading up on it briefly, it looks like it changes the mount namespace / chroots when running the binary in order to accomplish this. Unfortunately that defeats the purpose of crashcart, since the whole idea is to have access to the container file system while running your sideloaded binaries.

Unless there is some magic that I'm missing, I don't think this option helps for our use case.

What about running your own caching HTTP proxy for your build's external dependencies or else pulling static copies of these dependencies into your own repo? It seems like the problem isn't building everything from source but rather that the sources of truth for the inputs are unreliable. You'd have the same problem trying to build e.g. Debian from scratch if you couldn't reliably pull down all the sources for things.

A caching HTTP proxy would help me build the same things reliably, but it wouldn't help anyone else who cloned my repository and attempted to do their own build, unless I also gave them access to the proxy. And it hides the fact that the standard from-scratch build doesn't actually work. I think the cached Nix packages are why it takes so long for some of these issues to be discovered.

The difference with Debian (and other Linux distributions) is that the source code for the build is also provided. The upstream source might come from some random place on the net, but Debian provides a source package that you can use to rebuild the binary package. Maybe the solution is for Nix/NixOS to provide something similar.

Generally, everything you build or download is cached, but only on the machine you did it on.
That's why you usually want a shared cache inside your org as well, because not all machines will have everything yet.

I love Nix and I have been looking forward to this new makeover of the UI with the 'nix' command. It seems like the original command line usages developed incrementally over time, making them quirky and inconsistent, and so it is really welcome to have them redone based on long experience. (Thanks, Nix hackers!)

Seeing this released as Nix 2.0 is a really lovely surprise for me this morning :).

> It introduces a new command named nix, which is intended to eventually replace all nix-* commands with a more consistent and better designed user interface

This is pretty nice. I've been using Nix on my Mac for more than a year now, and it works well there. But the separate commands are not easy to remember, and the help documentation is likewise scattered. This change really improves the command-line UX.
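
A rough old-to-new mapping, from memory; flags have shifted between versions, so treat this as a sketch and check `nix --help` locally:

```shell
# Old scattered commands            ->  unified `nix` CLI (Nix 2.0 era)
#   nix-build '<nixpkgs>' -A hello  ->  nix build nixpkgs.hello
#   nix-env -qaP '.*hello.*'        ->  nix search hello
#   nix-shell -p hello --run hello  ->  nix run nixpkgs.hello -c hello
msg="one entry point, consistent subcommands, one place for --help"
echo "$msg"
```

The old `nix-*` commands keep working, so you can adopt the new interface incrementally.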

Heh - after a many-year hiatus from running any kind of UNIX/Linux at home, I was thinking about installing a Linux-based distro, and NixOS was near the top of my list to try. How does this 2.0 Nix release affect a new install of NixOS - should I wait a bit for a corresponding overhaul of NixOS to come out? I suspect theoretically it's not necessary, but I'm wondering if NixOS will be tracking this Nix release in some way shortly... Anyone know?

I upgraded my personal laptop to NixOS 18.03 (currently still nixos-unstable; the release branch hasn't been cut) a week ago and I've had no problems.

You can switch which version of Nix you're using whenever you want, and you can install multiple versions of Nix side-by-side if you want to, just like with any other package. To install Nix 2.0 on an older release of Nixpkgs/NixOS, just use the package `nixUnstable`. You can install it with `nix-env` if you want to try it out right now.
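
The machinery behind this is just numbered profile generations plus a "current" symlink, which can be mimicked with plain coreutils. The `nix-env` commands are commented out since they need Nix installed, and the `/tmp` paths are made up for illustration:

```shell
# The real commands (need Nix installed):
#   nix-env -iA nixpkgs.nixUnstable   # put Nix 2.0 into your user profile
#   nix-env --rollback                # flip back to the previous generation

# The underlying mechanism, mimicked with symlinks:
mkdir -p /tmp/profile-demo
echo "nix-1.11" > /tmp/profile-demo/generation-1
echo "nix-2.0"  > /tmp/profile-demo/generation-2
ln -sfn generation-2 /tmp/profile-demo/current   # "install" 2.0
ln -sfn generation-1 /tmp/profile-demo/current   # "rollback" to 1.11
cat /tmp/profile-demo/current
```

Because an install or rollback is just repointing one symlink, both old and new versions stay on disk side by side, and switching between them is atomic and cheap.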

The only real differences between a typical upgrade and upgrading between releases are:

1. You'll need more storage space for the course of the upgrade, because most of your dependencies have been updated.

2. You might have to change your config up a little, because a few packages have been removed, renamed, or are out of date.

You still get all the nice rollback features that you're used to, and if Nix 1.x is part of your old system profile and 2.0 part of your new one, when you roll back, Nix will roll back, too.

To more directly answer your question: NixOS 18.03 will indeed include Nix 2.0.

It looks like they're at least attempting to work towards user friendliness, which is by far my biggest complaint.

My other big complaint is that when something goes wrong, it's damn near impossible to figure out why. Trying to figure out why I couldn't get Postgres working with PostGIS was nightmarish sometime last year, when I last tried NixOS.

I'm still really optimistic that this will someday replace Arch for me. I just don't know when.

As a nix user for several years this is pretty exciting. I hadn’t been following the 2.0 development, but I was really hoping to see a mention of support for something like a .gitignore equivalent when hashing directories from the filesystem. Seems that that isn’t in this release :(

As one of the co-maintainers of GNU Guix I'm obviously biased, but here's what I consider some important unique features of Guix:

- Guix is all written in Guile Scheme (with the exception of parts of the inherited daemon, which hasn't yet been completely implemented in Guile); this extends to development tools like importers, updaters, to user tools like "guix environment", and even bleeds into other projects that are used by GuixSD (the GNU system distribution built around Guix), such as the shepherd init system. There is a lot of code reuse across the stack, which makes hacking on Guix really fun and smooth.

- Packages are first class citizens in Guix. In Nix the idea of functional package management is very obvious in the way that packages are defined, namely as functions. These functions take their concrete inputs from an enormous mapping. In Guix you define first-class package values as Scheme variables. These package values reference other package values, which leads to a lazily constructed graph of packages. This emergent graph can be used as a library to trivially build other tools like "guix graph" (for visualising the graph in various ways) or "guix web" (for a web interface to installing and searching packages), "guix refresh" (for updating package definitions), a lovely feature-rich Emacs interface etc.

- Embedded DSL. Since Guix is written in Scheme (a language for writing languages), it was an obvious choice to embed the package DSL in the host language Scheme instead of implementing a separate language that needs a custom interpreter. This is great for hacking on Guix, because you can use all the tools you'd use for Scheme hacking. There's a REPL, great Emacs support, a debugger, etc. With its support for hygienic macros, Scheme is also a perfect vehicle to implement features like monads (we use a monadic interface for talking to the daemon) and to implement other convenient abstractions.

- Graph rewriting. Having everything defined as regular Scheme values means that you can almost trivially go through the package graph and rewrite things, e.g. to replace one variant of a package with a different one. Your software environment is just a Scheme value and can be inspected or precisely modified with a simple Scheme API.

- Code staging. Thanks to different ways of quoting code (plain S-expressions and package-aware G-expressions), we use Scheme at all stages: on the "host side" as well as on the "build side". Instead of gluing together shell snippets to be run by the daemon we work with the AST of Scheme code at all stages. If you're interested in code staging I recommend reading this paper: https://hal.inria.fr/hal-01580582/en

- Bootstrapping. Some of us are very active in the "bootstrappable builds" community (see http://bootstrappable.org) and are working towards full bootstrap paths for self-hosting compilers and build systems. One result is a working bootstrap path of the JDK from C (using jikes, sablevm, GNU classpath, jamvm, icedtea, etc). In Guix we take bootstrapping problems seriously and prefer to take the longer way to build things fully from source instead of just adding more binary blobs. This means that we cannot always package as many things as quickly as others (e.g. Java libraries are hard to build recursively from source). I'm currently working on bootstrapping GHC without GHC and without the generated C code, but via interpreting a variant of GHC with Hugs. Others are working on bootstrapping GCC via Scheme.

- GuixSD, the GNU system distribution built around Guix. GuixSD has many features that are very different from NixOS. The declarative configuration in Scheme includes system facilities, which also form a graph that can be inspected and extended; this allows for the definition of complex system facilities that abstract over co-dependent services and service configurations. GuixSD provides more Scheme APIs that apply to the whole system, turning your operating system into a Scheme library.

- I like the UI of Guix a lot more than that of Nix. With Nix 2.0 many perceived problems with the UI have been addressed, of course, but hey, I still prefer the Guix way. I also really like the Emacs interface, which is absolutely gorgeous. (What can I say, I live in Emacs and prefer rich 2D buffers over 1D command line strings.)

- It's GNU. I'm a GNU hacker and to me Guix is a representative of a modern and innovative GNU. It's great to see more GNU projects acting as one within the context of Guix and GuixSD to provide an experience that is greater than the sum of its parts. Work on Guix affected other GNU packages such as the Hurd, Guile, cool Guile libraries, and led to a bunch of new GNU packages such as a workflow language for scientific computing.

On the other hand, although Guix has a lot of regular contributors and is very active, Nix currently has more contributors than Guix. Guix is a younger project. The tendency to take bootstrapping problems very seriously means that sometimes difficult packages require more work. Oddly, Guix seems to attract more Lispers than Haskellers (I'm a recovering Haskeller who fell in love with Scheme after reading SICP); it seems to be the other way around with Nix.

Having said all that: Nix and Guix are both implementations of functional package management. Both projects solve similar problems and both are active in the reproducible builds efforts. Solutions that were found by Nix devs sometimes make their way into Guix and vice versa. The projects are not competing with one another (there are orders of magnitude more people out there who use neither Guix nor Nix than there are users of functional package managers, so there's no point in trying to get people who use Nix to switch to Guix). At our recent Guix fringe event before FOSDEM, Eelco Dolstra (who invented functional package management and Nix) gave a talk on the future of Nix surrounded by Guix hackers; there is no rivalry between these two projects.

Let me end with a comment that's actually on topic for the original post: Congratulations on the Nix 2.0 release! Long live functional package management!

Would it be safe to run `curl https://nixos.org/nix/install | sh` (safe as in it won't screw with my existing installed packages), and if so, will that get me Nix 2.0? Or do I still have to wait for nixpkgs-unstable to rebuild?

You can upgrade right now with `nix.package = pkgs.nixUnstable;` in your `~/.nixpkgs/config.nix` [1], or on the command line: run `nix-channel --add https://nixos.org/channels/nixos-unstable nixos` and then `nix-channel --update` (on any system but NixOS) or `nixos-rebuild switch` (on NixOS).

With Ubuntu, every time you want to fix something with your car, you roll it into the garage, pop open the hood and get to work. It's intensive labour, results will vary, and undoing a change can be really difficult.

With NixOS, it's like 3D printing a new car every time. You'll design a model, press a button, and the car gets built from scratch. If you don't like it, tweak the design a bit, and print a new car. If the new car breaks, just go back to the previous known-good one, which is already in your garage. You can even take the design documents to your friend and generate an exactly identical model.

> With Ubuntu, every time you want to fix something with your car, you roll it into the garage, pop open the hood and get to work. It's intensive labour, results will vary, and undoing a change can be really difficult.

You can do it that way, but I wouldn't recommend it. If your Ubuntu system has to be maintained that way, it has become unmaintainable.

All modern server deployment methods describe the deployment in code so you do "print a new car" every time you change something. This includes Ubuntu.

On the desktop, you largely don't need to pop open the hood at all. If you find yourself doing that, you have yourself an experimental system and not production system.

Unfortunately, sometimes you do need to pop open the hood to see what's going on. On Ubuntu, or rather its derivative Mint, for example, I had to fiddle with xorg.conf to let me manually set the fans on a card, because the desktop was overheating even with reasonable cooling in a small apartment with no AC in the middle of summer.

In case you didn't know, the Nvidia driver doesn't let you manually set the fan without enabling this in xorg.conf or dropping a file in /etc/X11/xorg.conf.d. Not knowing about xorg.conf.d at the time, I merely set xorg.conf and was very confused to find that the machine continued to overheat, and further that my file was not modified but gone. This happened periodically, seemingly at random.

It turns out the driver manager (Mint's GUI for installing proprietary drivers) had installed the Optimus package, meant to make a laptop with dual GPUs work properly, on a desktop, and that the post-install script for this package was helpfully removing /etc/X11/xorg.conf every time the (useless) package was updated.

Moving the snippet to xorg.conf.d helped, as did finding and removing the useless package, but we are still looking at an issue on a relatively recent machine that couldn't be fixed without grep and an xorg config file, in a recent version of an Ubuntu derivative.