But this is an open discussion, so I'd appreciate anyone's views. I'd
especially value Simon Kallweit's views as someone who has actually used
both implementations, which gives him a very good perspective. If anyone
else wants to contribute, please keep it on topic, within this thread,
and technical.

I have been following the NAND discussion since it started two weeks
ago. I actually don't have much to add, because I think most of the
points have already been raised. Also, I currently don't have an
immediate need for NAND flash support in our products, so it's not a
high priority for me at the moment.

I'll still try to give a quick recap of my work with both R's and E's
implementations. I started out using R's implementation, trying to add a
driver for synthetic NAND chips, which did not exist back then. In the
meantime, Rutger has implemented a synthetic chip, but in the form of a
NAND controller rather than, as I had tried, in the form of a NAND chip.
In retrospect, this seems to be the better (and simpler) approach. What
I dislike is how the synthetic chips have to be configured. R's
implementation requires the user to assign a valid NAND chip device id,
which is then used through chip interrogation to determine the chip's
geometry. I find it much more useful to be able to define the chip's
geometry directly in CDL, as it's more explicit. This brings me to my
biggest concern with R's design: it's pretty rigid with respect to
future chip implementations. If everyone is going to make ONFI chips in
the future, that's fine. Otherwise, parts of the layering could become
wrong or useless rather soon. I also dislike the generic determination
of chip geometry in io_nand_chip.c:read_id(). IMHO it is already a bit
messy, mixing interrogation for small-page, large-page and ONFI chips.
This is probably fine until exceptions have to be implemented to support
more exotic chips. I might be wrong, as I think MTD does chip
interrogation in a similar way. E's model splits chip interrogation into
the drivers, gaining flexibility in exchange for a bit of code
duplication.

My work with E's framework involved writing basic drivers for the STM32
evaluation board, as well as running a few tests with YAFFS1. This was a
breeze. I only implemented the basics, adapting/copying the drivers from
Ross. It occurred to me that a lot of the code could simply be copied
straight over. So there is a certain level of code duplication in E's
framework, but as John pointed out, it's questionable whether things
like address/command writing should be abstracted out, as they are so
simple and may need adjustment for new chips in the future. In general,
I think E's code is more lightweight and quite a bit more loosely
coupled, which I think results in smaller code size and lower overhead,
but most importantly in more flexibility. Just the right thing for a
platform where resources are scarce. My tests with YAFFS1 were
promising, but I came to realize that YAFFS simply needs too much memory
for my current platform, so I abandoned it.

My current preference is clearly with E's framework. But I may be
biased, as my current platform is quite low on resources and I'm
looking for a small, simple and lightweight framework.

I will gladly elaborate in more detail if there are further questions
about my experience with both frameworks.