Yes, that is true. But there are few apps that need complex data
processing yet not much RAM. I am not saying there aren't *any*, just
not very many. In the same way, I would love to see an FPGA in a small
package, like a 48-pin TQFP. But there are few apps for that sort of
part, so they don't make them.

I do a lot of designs for scientific instruments where some
data processing is done before storage, so I guess I see more
need for floating point and large data arrays than does the
guy designing the controller for a toaster oven. ;-)

I guess the FPGA would be nice, but I'm still working on
64-register CPLDs! One of those in a 44-pin QFP sure
does beat a handful of LCX chips both in flexibility of
design and ease of PCB layout and assembly. At my
product quantities (100's per year), an extra 50 cents
in part cost is less important than saving some design
time, adding flexibility, and simplifying assembly.

I second this opinion. I also design measurement instruments, and
very often some rather complex analysis or calculations are required
even though the data set is relatively small. Also, the bit-banging
requirements may be rather fast, which justifies the faster clock.
In some applications multithreading is very useful, and doing that
with a small 8-bit part is usually not so simple.

Unless you are short of power. Some CPLDs (Xilinx CR2, Lattice
Mach 4000Z) consume little power, but FPGAs tend to need a
lot even at a low clock frequency. In this sense modern MCUs behave
better: their current consumption is almost proportional to the
clock frequency.
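
To illustrate that "almost proportional" point with a toy model (the
coefficients below are made up for illustration, not taken from any
datasheet):

```c
#include <stdio.h>

/* Hypothetical MCU supply-current model: a small static floor plus a
 * term proportional to clock frequency. Numbers are invented purely
 * to show the shape of the curve. */
static double mcu_current_ma(double f_mhz)
{
    const double static_ma  = 0.002; /* leakage / always-on analog */
    const double ma_per_mhz = 0.3;   /* dynamic (switching) current */
    return static_ma + ma_per_mhz * f_mhz;
}
```

So slowing the clock from 10 MHz to 1 MHz cuts the draw by nearly a
factor of ten, whereas an FPGA's static draw stays with you no matter
how slow you clock it.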

Is a minimum of 5 to 10 mA (depending on temp range) low enough? The
Altera ACEX (EP1K) parts are rated for this. If you are trying to get
much below this, you are really looking at a specialized application
(or you can just cut off the power to the FPGA).

A maximum of 5 to 10 mA would be low enough :) I am more geared
towards slowish (certainly below 100 MHz, often below 10 MHz)
applications. In this regime there are applications which would
be rather simple to do in hardware and quite difficult in software.

There are two reasons why I am looking for low max values instead
of low average values:

Usually, the first problem with configurable devices is
the amount of start-up current they draw. (I do admit that with
battery-powered devices the start-up energy is usually insignificant.)

There are some very nice low-power CPLDs already, so maybe the
same trend will be seen in FPGAs as well, though it seems that
Altera is not as much into low power as Xilinx and Lattice.
OTOH, at the moment a low-end ARM costs as much as a 128-cell
CPLD, which means some ridiculously stupid tasks are more
economically done with an MPU.

Heck, the Philips and OKI parts are already as cheap as, if not cheaper
than, a lot of the PICs that have far fewer capabilities.

//-------------------

As someone who uses PIC/AVR/8051 on a regular basis, the Philips and OKI
parts are really attractive, however getting hold of them easily is a
real issue at present. Most of the projects I work on involve small
quantities (i.e. a couple hundred), and as a result, I tend to be pretty
low priority to the distribution companies. Component availability is
therefore a prime consideration when choosing parts.

I can buy PICs or 8051 from a large number of sources in the UK in any
quantity, without any hassle. If I can't buy the exact part, chances
are a similar version is available which is pin and code compatible.
AVR is a little more difficult, especially if you want access to the
full range, but if I can't get them locally, then Digikey give me
excellent service.

We have looked at the Philips range and have a dev board which we have
been using to evaluate various tool chains; from a technical
standpoint, I really like it (apart from the lack of code protection).

I think Philips need to get this range into the catalogue suppliers like
Farnell and Digikey, at which point it should really take off.

martin_underscore_walton_a@t_flyingpig_full.stop_com says...
I don't know if they'll really take off until there is a good Windows-
based C compiler for under $500. Not all embedded engineers have the
time to set up a Linux system and get a GNU toolchain running.

I probably wouldn't have used either the ARM or the MSP430 except that
I had a customer foot the $4K bill for the IAR ARM compiler, and I
started investigating the MSP430 about the time ImageCraft came out
with an MSP430 C compiler.

Now if ImageCraft will just do an ARM compiler---Richard are you
listening? ;-)

Another possibility is to figure out how to use the ARM compiler
in the MetroWerks PalmOS system to generate ARM code for
an embedded ARM system. The PalmOS system is only $399, and
it has been adapted for some 68K embedded systems. When I
get some free time (yeah, right), perhaps I'll get the
latest PalmOS system and see whether it can easily
generate ARM binaries.

IMHO, a good LPC2104 demo board and fully capable C compiler for
under $400 would sell like hotcakes in the Digi-Key catalog.

Yep, and it's working fine for me (thanks Chris). Brief scary moment
when my 2 year old godson picked it up when I was taking it to show his
father (who works for the family firm, who currently use a lot of Keil
8051 stuff) but it survived intact :-)

Anyway, it comes with a ready-to-use GNU compiler chain with a nice IDE
on top, with the only restriction being that the debugger is an
evaluation version and can only deal with up to 16KB of code and isn't
to be used for commercial gain - but the actual compiler chain is
unencumbered.

Well, I can run our RTOS with all the services it provides on LPC210x, but
I can't do that on PIC or even HC12 (we only support non-banked memory for
now and it seems there's little or no reason to change that). Also, I like
writing software for a decent 32-bit chip with "lots" of memory much better
than working with limited 8-bit chips and optimizing the code using assembly
language.

The input transients are the reason why we _need_ to be able to read the
pin state. If we set the compare logic to detect a rising OR a falling
edge and then have an input glitch shorter than our interrupt latency time,
we will no longer know what the actual pin state is and we may program
the compare logic to wait for the wrong edge. On Motorola chips, we
always get an interrupt on both edges and then check for the pin state.
If there was a glitch shorter than our interrupt latency, we will see
the pin is still in the "old" state and simply ignore the IRQ. If we
can't poll the input state, we don't know whether this was a "real" edge
or a glitch we should ignore.
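
The Motorola-style both-edges-plus-poll technique described above boils
down to a few lines of logic. This is only a sketch with invented names,
not code from any actual driver; on real hardware it would live in the
edge-interrupt ISR:

```c
#include <stdbool.h>

/* Sketch of the both-edges-plus-poll glitch filter. We remember the
 * last confirmed pin level; on every edge interrupt we read the pin
 * again. If the level still equals the remembered one, the pulse was
 * shorter than our interrupt latency -> a glitch, so ignore it. */
typedef struct {
    bool last_level; /* last confirmed stable pin state */
} edge_tracker_t;

/* Returns true if this interrupt is a real edge the application
 * should act on, false if it was a glitch to ignore. */
static bool edge_isr(edge_tracker_t *t, bool pin_level_now)
{
    if (pin_level_now == t->last_level)
        return false;              /* glitch: pin is back where it was */
    t->last_level = pin_level_now; /* real edge: accept the new level */
    return true;
}
```

With capture hardware that only watches one edge at a time, the same
read-back is what tells you which edge to arm next.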

Yes, it seems the pin is always an open drain output. :-I English is not
my native language, so I may be misreading the datasheet, but to me it
looks like the pin should be open drain in I²C mode only.

I think we know what is going on now. If we try using the SS pin for
GPIO, it might be that the SPI logic sees its internal SS input
floating, which causes SPI collisions. The Motorola engineers knew
better - the SS pin was available as GPIO if it was not needed for
SPI operation.

Do you know a contact at Philips? I'd like to get a confirmation for the
things mentioned above. It seems we'll stop trying to use SPI and do it
in software, so we don't have to modify the hardware (which is waiting to
be delivered to a client).

And the PICs are _weird_. I hate them. And I still keep designing them
into new projects. I am weird. :-)

No, my contacts are all in the US. I used to have a friend handling
Philips, but they just pulled the line from the independent reps and now
have an in-house sales force. I have a contact name here in Maryland,
but I don't think that will help you much.

If the pins are used for I²C, then they should be open drain, yes. But if
I configure the pins for GPIO or match/capture, then I can't see why
normal output buffers aren't used. This is how all the other chips I
know behave.
I think this is a bug in the LPC210x or the data sheet - IMHO, in the chip
;-)

Because normal output buffers have a clamp diode in the P-FET, which
takes more design work to avoid. It is possible, but designers on
this end are not used to thinking of every pin as important...
Not so much a bug as an oversight, or 'could have done better', but
almost all chips have those...

Well, why does the data sheet mention "open drain" in the I²C description
only, then.. Even Microchip documents these things and their datasheets
are about the worst I've ever seen ;-) "This instruction clears a bit..
did I tell you about the UART already? This one sets a bit (did you notice
we have _three_ timers?!) in memory.."

Can one not use the pin select registers to switch the pin from CAPture
to GPIO before sampling? Should only take a single extra VPB write
cycle, if you have the direction register already set up in advance?
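
That switch-and-sample idea would look something like the sketch below.
The register names PINSEL0 and IOPIN are from the LPC210x manual, but
here they are modelled as plain variables so the logic stands alone (on
real silicon they would be memory-mapped, e.g.
`#define PINSEL0 (*(volatile uint32_t *)0xE002C000)`); the pin number
and the CAP function code are assumptions - check the datasheet for the
part at hand:

```c
#include <stdint.h>

/* Stand-ins for the memory-mapped LPC210x registers, so this sketch
 * can be shown (and run) without hardware. */
static uint32_t PINSEL0; /* pin function select */
static uint32_t IOPIN;   /* GPIO pin state */

#define PIN        2u                 /* assumed: capture input on P0.2 */
#define FUNC_SHIFT (PIN * 2u)         /* 2 function-select bits per pin */
#define FUNC_MASK  (3u << FUNC_SHIFT)
#define FUNC_GPIO  (0u << FUNC_SHIFT) /* 00 = GPIO per the manual */
#define FUNC_CAP   (2u << FUNC_SHIFT) /* assumed code for CAP function */

/* Briefly reroute the pin to GPIO, read the pad, then hand it back
 * to the capture unit - one extra VPB write each way. */
static int sample_cap_pin(void)
{
    uint32_t saved = PINSEL0;
    PINSEL0 = (PINSEL0 & ~FUNC_MASK) | FUNC_GPIO; /* pin -> GPIO */
    int level = (int)((IOPIN >> PIN) & 1u);       /* read the pad */
    PINSEL0 = saved;                              /* pin -> CAPture */
    return level;
}
```

This assumes IODIR already marks the pin as an input, so no direction
write is needed inside the swap.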

I seem to remember that you need a pullup or pulldown or something on SS
when in master mode - because in multi-master situations, the device can
only be master when it's not being told it's slave, or something. No, I
have no idea how multi-master SPI is done, perhaps with all the SS lines
running to a central arbitrator or something? Not sure how prospective
masters would 'claim' mastership... Anyway.

If we change the pin from CAPture to GPIO, read the pin state, an edge
occurs, and we change back to CAPture, we have lost an edge again..
The margin is very small, but as long as it's there, the edges WILL
occur during that time ;) Plus, I think the CAPture logic will not
work reliably if we switch the input to GPIO which (I believe) floats
the internal CAPture logic input. Plus, it's more overhead ;-)

Well, if you configure SS as GPIO, there's no way of pulling up the
internal SPI SS signal.. they should have pulled it up internally
when the pin is configured as GPIO.

Actually, the SS is an output in master mode, so there should be no problem
using it for GPIO while SPI is running. It just seemed that we couldn't
get SPI working at all because it first comes up in slave mode and the SS
pin was an input (which was floating). Sometimes, on some chips, the SPI
would start working, and when we were able to configure it as a master, it
then worked until the next reboot. (This is how I remember it, I haven't
worked on this myself..) The reason we wanted to have SS as GPIO is that
we are using a FLASH chip which terminates an operation when /CS goes high
(and SS goes high after every single byte).

Anyway, we "fixed" the problem in this design by using a software SPI
implementation and having all the pins as GPIO..
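
A software SPI master of the kind mentioned is only a few lines. This is
a generic mode-0, MSB-first sketch, not the firm's actual code; the
gpio_* helpers are placeholders for whatever IOSET/IOCLR/IOPIN accesses
the real firmware uses (here MISO is simply looped back to MOSI so the
sketch can run anywhere):

```c
#include <stdint.h>
#include <stdbool.h>

/* Stand-in GPIO layer. In real firmware these would hit the LPC210x
 * IOSET/IOCLR/IOPIN registers; for this sketch MISO echoes MOSI. */
static bool mosi_level;
static void gpio_set_sck(bool level)  { (void)level; }
static void gpio_set_mosi(bool level) { mosi_level = level; }
static bool gpio_get_miso(void)       { return mosi_level; /* loopback */ }

/* Bit-banged SPI transfer, mode 0 (CPOL=0, CPHA=0), MSB first. */
static uint8_t spi_xfer(uint8_t out)
{
    uint8_t in = 0;
    for (int bit = 7; bit >= 0; bit--) {
        gpio_set_mosi((out >> bit) & 1u); /* data valid before SCK rises */
        gpio_set_sck(true);               /* slave samples on rising edge */
        in = (uint8_t)((in << 1) | gpio_get_miso());
        gpio_set_sck(false);
    }
    return in;
}
```

The nice part for the FLASH-chip case above: /CS is just one more GPIO,
held low for the whole multi-byte command instead of bouncing high
after every byte.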

Well, enough of this... ;-) Back to studying LCD's and LCD controllers..

This is garbage. On Motorola parts (like HC12), SS is an output in master
mode and goes low during SPI operations. On LPC210x, SSEL is always an
input - even in master mode. If it goes low, LPC thinks there was an SPI
collision. If you config the SSEL pin as GPIO (to be able to use it as an
SPI chip select), the SPI logic sees the input floating and will not work
properly.