My problem with make is not that it is badly designed. It is not THAT bad when you compare it to things like CMake (oops, I did not put a troll disclaimer, sorry :P).

But the only implementations in wide use are very large ones, with lots of extensions that are not POSIX. So if you want a simple tool to build a simple project, you have to have a complex tool, often with more complexity than the project itself…

So a simple tool like redo, available as two shell-script implementations and one Python implementation, does a lot of good!

There is also Plan 9 mk(1), which supports evaluating the output of a script as mk input (with the <| command syntax). That removes the need for a configure script (build ./linux.c on Linux, ./bsd.c on BSD…).
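To illustrate, a hypothetical mkfile fragment using that mechanism might look like this (the file names and the sh wrapper are assumptions; exact quoting depends on which shell mk is configured to use):

```mk
# The output of the command after <| is read back as mk input,
# so the platform choice happens at parse time -- no configure step.
<|sh -c 'case "`uname`" in Linux) echo SRC=linux.c ;; *BSD*) echo SRC=bsd.c ;; esac'

prog: $SRC
	cc -o prog $SRC
```

On Linux the included line expands to SRC=linux.c, so prog is built from ./linux.c; on a BSD it would be built from ./bsd.c.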

But then again, while we are at re-designing things, let’s simply not limit ourselves to the shortcomings of existing software.

The interesting part is that you can build redo entirely as a tiny shell script (less than 4 kB) that you can then ship along with the project!

There could then be a Makefile with only:

all:
	./redo

So you would (1) have the simple build system you want, (2) have it portable, since it would be a simple, portable shell script, and (3) still have make build the whole project.
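As a sketch of what such a shipped-along script could look like (the name redo_build and the temp-file naming are hypothetical, and a real redo also tracks dependencies, which this deliberately omits):

```shell
#!/bin/sh
# Hypothetical minimal "do"-style builder: always rebuilds,
# no dependency database.
redo_build() {
  for target in "$@"; do
    dofile="$target.do"
    [ -e "$dofile" ] || { echo "redo: no rule for $target" >&2; return 1; }
    tmp="$target.redo-tmp.$$"
    # Convention: a .do script gets $1 = target, $2 = target without
    # extension, $3 = temp output file; its stdout becomes the target.
    if sh -e "$dofile" "$target" "${target%.*}" "$tmp" > "$tmp"; then
      mv "$tmp" "$target"
    else
      rm -f "$tmp"
      return 1
    fi
  done
}
```

A project could ship this next to its .do files; only the target is replaced on success, so a failed build never leaves a half-written output behind.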

I haven’t used any redo implementation myself, but I’ve been wondering how they would perform on large code bases. They all seem to spawn several processes per file just to check whether it should be remade. The cost of that not-particularly-fast operation might be prohibitive on larger projects. Does anyone happen to have experience with that?


No experience, but from the article:

Dependencies are tracked in a persistent .redo database so that redo can check them later. If a file needs to be rebuilt, it re-executes the whatever.do script and regenerates the dependencies. If a file doesn’t need to be rebuilt, redo can calculate that just using its persistent .redo database, without re-running the script. And it can do that check just once right at the start of your project build.
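For illustration, the kind of .do script the quoted mechanism refers to looks roughly like this (a sketch following the common redo convention; it assumes the redo-ifchange tool is on PATH and a compiler that emits Makefile-style dependency lists):

```shell
# default.o.do -- hypothetical redo rule for building .o from .c.
# $1 = target, $2 = target without extension, $3 = temp output file.
redo-ifchange "$2.c"                     # declare the source dependency
cc -MD -MF "$2.deps" -o "$3" -c "$2.c"   # compile, recording header deps
read deps < "$2.deps"
redo-ifchange ${deps#*:}                 # declare the discovered headers
```

The redo-ifchange calls are what populate the persistent database: on later builds, redo checks those recorded files without re-running this script.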

Since recording the dependencies is usually done as part of building a target, I think this probably isn’t even a significant problem on an initial build (where the time is going to be dominated by actual building). OTOH I seem to recall that traditional make variants do an optimisation where they run commands directly, rather than passing them to a shell, if they can determine that the commands don’t use shell built-ins (not 100% sure this is correct, memory is fallible etc) — so the cost of just launching a shell might be significant if you have to do it a lot, I guess.
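To get a rough feel for that launch cost, a throwaway micro-benchmark could be something like this (numbers will vary wildly by system; the loop count is arbitrary):

```shell
# Spawn an empty shell repeatedly -- the per-target operation being
# discussed -- and report how long it takes in whole seconds.
count=200
i=0
start=$(date +%s)
while [ "$i" -lt "$count" ]; do
  sh -c ':'          # a shell that does nothing but start and exit
  i=$((i + 1))
done
end=$(date +%s)
echo "spawned $count shells in $((end - start))s"
```

Multiplying the per-spawn time by several processes per file gives a back-of-the-envelope estimate for a no-op rebuild of a large tree.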

The biggest problem with Make (imo) is that it is almost impossible to write a large, correct Makefile. It is too easy for a dependency to exist but not be tracked by the Make rules, making stale artefacts a recurring problem.
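A classic instance of this, with hypothetical file names:

```make
# foo.c #includes foo.h, but the rule below never mentions foo.h:
foo.o: foo.c
	cc -c foo.c
# After editing only foo.h, make considers foo.o up to date,
# so the stale object silently survives into the final binary.
```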

Have you appreciated how huge CMake actually is? I had problems compiling it on an old machine, since it required something like a gigabyte of memory to build. A two-stage build that takes its sweet time.

CMake is not lightweight, and lightness is not its strong suit. On the contrary, its strength is including everything but the kitchen sink and being considerably flexible (unlike Meson, which has simplicity/rigidity as a goal).

Meson is nice, but sadly not suitable for every project. It has limitations that prevent some projects from using it — limitations that neither redo nor autotools have — such as being unable to put generated files in a subdirectory (sounds simple, right?).

I’m quite happy with my Obins Anne Pro (60%). I have been unable to program it with the official iOS software, which simply hangs every time I try to use it. Fortunately, there are some projects trying to replace both the firmware and the software (I still have to find the time to try them).