is very important for users to realise. The solver can actually do a good job of getting all the dependencies consistent if you show it the whole picture, i.e. the top-level thing(s) you want to install.

I'm not sure I get this bit though:

So I changed the Yesod installer scripts to use this approach. However, even though I could install yesod from source, I found it did not guarantee that I could build my Yesod application. I needed to install both yesod from source and my application at the same time.

This is another instance of the same issue. Ask to install your app, rather than asking to install yesod then your app. If your app depends directly on some of the things that yesod depends on, then the app might need different versions from what were picked initially for yesod. (Yes, it might be possible to cope with picking different versions, that's what private deps are about.)
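Concretely, "asking for the whole picture" just means giving both targets to a single cabal invocation, so the solver makes all the version choices together (a sketch; adjust the paths to taste):

```shell
# Two separate solver runs can pick inconsistent versions of shared deps:
#   cabal install yesod
#   cabal install ./myapp
# One run lets the solver see everything at once:
cabal install yesod ./myapp
```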

Another solution to this is to ask to use only installed yesod instances, and never rebuild from source (though of course that might not be possible):

cabal install ./myapp --constraint='yesod installed'

Also note that cabal already lets you install multiple local directories or tarballs. The cabal-meta thing of handling git repo targets is of course useful. When I added support for dir targets and local and remote tarballs, I considered adding darcs and git repos too, but there's rather more policy you need there, because you have to have somewhere to cache the repo; you don't want it downloading all the time.

Being able to write down the environment, rather than supplying it on the cabal command line is also a good thing. We've been considering various designs for this, covering this use case and others like local package repos/archives.

Actually half the problem here is about UI and policy and expectations. It's about what did the user mean when they said:

cabal install ./mywebapp

Some people argue that if you and I both run this on our different machines we should get the same result, that is, it should depend only on what hackage index snapshot we got.

But other people say that if I do:

cabal install yesod
cabal install ./mywebapp

Then it should pick that instance of yesod that was previously installed. Note that these two ways of looking at it are contradictory.

And cabal's current default policy is that it'll prefer to use the installed instance, but it's not a hard constraint. As I noted above, you can change that policy by adding constraints, but of course that's harder for users.
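For completeness, the soft preference can be hardened in either direction with the same --constraint flag shown earlier (the 'source' form may not exist in all cabal versions, so treat this as a sketch):

```shell
# hard requirement: use the already-installed yesod, or fail
cabal install ./mywebapp --constraint='yesod installed'

# the opposite: ignore installed instances and re-solve from the index
cabal install ./mywebapp --constraint='yesod source'
```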

Johan said something related to this in his recent blog post. He asked, what is the point of installing libraries? There is obviously a point in installing applications, but why libraries? He suggested that the right thing is to define an environment for your app and then libraries are built as required. I think that's a useful view and would help to resolve the problem above.

That's indeed a useful thing to do, for example if you're testing out a new library.

However, imagine if we had a working cabal repl command that loaded your current package (application or library) and all of its dependencies into ghci. I bet you'd be calling plain ghci much less. After all, the reason you want to play with a library in ghci is often that your current project depends on it and you need to figure out how some part of the library works.

(In case you didn't know, cabal-dev ghci works by calling cabal build --with-ghc=fake-ghc-cabal-dev, collecting how cabal called ghc, fiddling with the args, and then calling ghci. Obviously it's possible to do something much more robust and with more features.)
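The trick is easy to demonstrate in a few lines of shell. This is only an illustrative sketch of the idea, not cabal-dev's actual code, and the file names are made up:

```shell
#!/bin/sh
# A fake "ghc" that records its arguments instead of compiling.
rm -f captured-ghc-args.txt
cat > fake-ghc <<'EOF'
#!/bin/sh
printf '%s\n' "$@" >> captured-ghc-args.txt
EOF
chmod +x fake-ghc

# In reality cabal would invoke it via: cabal build --with-ghc=$PWD/fake-ghc
# Here we simulate that call:
./fake-ghc --make Main.hs -hide-all-packages -package-id base-4.5.0.0

# The wrapper can now read captured-ghc-args.txt, fiddle with the flags
# (e.g. drop --make), and exec the real ghci with them.
cat captured-ghc-args.txt
```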

I use libraries for more than just linking an application or "testing out". A lot of my use of Haskell is small scripts. These are either in a single file (with no package or associated cabal spec) and are interpreted with runghc, or they're one-line function pipelines written and run in GHCi.

This only works because of all the functionality in installed libraries. I care what libraries I have installed in GHC as much as I care what python modules I have installed.

I completely agree with you here. The current toolchain works great for the first argument, where things only depend on the hackage snapshot. That's fine for end users of Haskell applications. But our ecosystem has grown complex enough that it is unacceptable for modern Haskell developers working on significant code bases. It creates a big incentive to dump all your code in one package, and for organizations developing proprietary code, this means that they'll also be less likely to release things back to the community. Having the policy be a pure function of the hackage snapshot is just not the right position any more.

There are two different problems here, so my choice of wording was overly simplistic. The biggest problem is that cabal is a pure function of hackage in the sense that you can't have cabal also be aware of other packages in your local filesystem (or ideally, in github repositories) that are under development and not ready for upload to hackage. I agree that problems are also caused by the fact that cabal looks at what's installed on your system, but in my mind this is a much less significant issue. I actually don't mind blowing away my ~/.ghc directory. Yes, it feels wrong and scares beginners, but I can still work around that problem. Until this release of cabal-meta, I didn't think it was possible to get around the former problem easily.

But going back to what tibbe said, it actually makes sense for cabal to look at what is installed. The problem here is that installing something new can break things that were already installed.

This is definitely the biggest issue that I see with Cabal/cabal-install today. We need the ability to write down an environment that at the very least includes local directories. Working with various version control repositories would also be nice, but isn't crucial. The same is true for more complex issues like improvements to the dependency solver or automatic checking of function signatures in exported APIs.

I used it to build a large proprietary project of mine with ~120 package dependencies including several other packages from my local filesystem that either haven't been released to hackage or have modifications from something already on hackage. Prior to cabal-meta, I had only been able to build this project with cabal-dev, but doing so made it painful to use GHCi and gave me other pain points that I've discussed before.

In short, this functionality solves my biggest complaint about Cabal/cabal-install and really needs to be integrated as soon as possible.

builder configuration (package config flags and basically all other cabal args that the agent doing the build can specify)

Currently cabal-dev has aspects of 1. and 2., while cabal-meta has aspects of 1. and 3.

I have a design in my head for 1., an extended package index format. I have another design in my head for 3., which is essentially a local cabal config file. These mechanisms might be orthogonal, but as cabal-dev demonstrates, for various workflows you want to make use of multiple aspects and have a UI that makes that easy to do. So I think we want a more comprehensive design for "environments".

I'll be attending the Utrecht hackathon and I'd like to spend the time there with people working out a design, including some UI and checking that the various use cases we want will work ok.

Ideally I would like to see aspects of the cabal-meta configuration put directly into a .cabal file.

I don't think this is the right approach. The current Cabal design has a division between package author and package builder, and I think that is a good thing to preserve. Certainly in some use cases those two roles are filled by the same individuals, but we should not lose the distinction, because in other use cases it is useful for the roles to be distinct.

The .cabal file is where the package author writes things down, and currently with cabal the package builder specifies everything on the command line. So what we're talking about is a way for package builders to write down the environment they want.

As you said there are multiple aspects to cabal-meta, but particularly if we stick to "writing the env down", this makes a lot of sense. Perhaps the problem is that Cabal does not have a big enough division between package author and application builder. Note I am using the term "application" builder because there is no point to building a package except as a dependency if it doesn't expose an executable.

In Ruby there is a gemspec for the package author. This is exactly like a cabal file: it includes a lot of information about the package, including dependencies. If I say "gem install package", then the package is installed according to the gemspec. A gemspec should specify as large a range of package dependencies as possible.

As an application creator, I don't touch a gemspec (cabal) file. Instead I use a tool called Bundler. Bundler is a tool for writing the environment down. It uses a Gemfile, which is analogous to using cabal-meta's sources.txt (on top of a cabal file's build-depends). When you install, it creates a Gemfile.lock, which includes exact dependency information that can be used to make sure everyone on the team has the same versions and to facilitate conservative dependency upgrades.
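For those who haven't used Bundler, a Gemfile is a small Ruby DSL; roughly like this (the package names are just for illustration):

```ruby
# Gemfile -- the application builder's environment, written down
source 'https://rubygems.org'

gem 'rails', '~> 3.2'              # a version range, like build-depends
gem 'mylib', :path => '../mylib'   # a local checkout, like a sources.txt entry
```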

There is some overlap between these tools: when developing a library it is convenient to use Bundler and a Gemfile. There is a Gemfile directive that lets you import the gemspec of the library into your dependencies.

Fantastic. Funny, just last week after losing 2 hours to dep hell, I was thinking to myself "if only there was a simple and non-tedious way to submit all the deps with my application itself while being able to pass cabal flags so cabal just takes care of it all..."