Posted
by
samzenpus
on Monday April 29, 2013 @01:11AM
from the a-little-help-from-my-friends dept.

AchilleTalon writes "As many of you may know, there are two main competitors on the Windows platform for embedded software development, namely IAR and Keil. By embedded development, I mean development for microprocessors like the well-known 8051 and the like, not mobile platforms which include a complete OS in the first place. I am seeking alternatives to IAR and Keil in the OSS world. Even if I can find pieces of code here and there, I haven't yet found a fully integrated development platform. Does it exist? What do you use?"

Yes, I think SDCC is the only real OSS 8051 solution. I've been through this same process recently, looking for tools for a CC2533, and this is what I've found works. It's not GCC, and the 8051 is a crap target: you have to code with all the memory hierarchies in mind.
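The "memory hierarchies" point is worth illustrating, since it's the big difference from a flat GCC target. A minimal sketch of SDCC's 8051 storage-class keywords (variable names and values are made up for illustration; the #ifdef keeps it buildable as plain C on a host):

```c
/* SDCC 8051 memory-space qualifiers; the names below are invented
   for illustration. Built with anything other than SDCC, the #else
   branch makes this ordinary portable C. */
#ifdef __SDCC
__data  unsigned char fast;          /* internal RAM, direct-addressed */
__idata unsigned char mid;           /* internal RAM, indirect via @R0/@R1 */
__xdata unsigned char big[256];      /* external RAM, reached via DPTR (MOVX) */
__code  const unsigned char tbl[] = {1, 2, 3}; /* code space / flash */
#else
unsigned char fast, mid, big[256];
const unsigned char tbl[] = {1, 2, 3};
#endif

/* Same C-level access regardless of which space each object lives in;
   the compiler emits the appropriate addressing mode for each. */
unsigned char sum(void) {
    return fast + mid + big[0] + tbl[0];
}
```

On a real part you'd match these qualifiers to your variant's memory map, which is exactly the per-chip configuration the comment above is complaining about.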

I used JMCE for simulation. 8051s are all different enough that chances are you'll have to hack on your simulator (and configure SDCC) to match the memory layout and DPTR/P2 weirdness of your particular variant.

BeRTOS is very nice for the money (free), is going in the right direction, and is more than libre enough. Develop for ARM or AVR on Windows or Linux, and I don't see why other cores and SoCs couldn't be added. It has a nice Python/Qt configuration manager, lots of abstraction, and can be debugged with GDB in conjunction with CodeLite, though I have used AVR Studio 4 to do the same. It just needs some lovin'. http://www.bertos.org/ [bertos.org]
A pox on my first AC post on the subject.

I have used both IAR and Keil, but I find the GNU tools far superior, for a number of reasons:

* Dongles (OK, they have license servers too, but those are always more expensive). If, two years after release, you find you need to do an emergency fix of your released software, you start by trying to find the dongle that the software needs.
* Licenses: when you need that quick fix, you can be almost certain that your license has expired.
* Integration problems on the build servers: when you are building on your local machine, everything is fine and dandy, but try migrating that to your build farm; good luck with that.
* Linker scripts: when you need very esoteric features, like laying your code out in ROM, the GNU tools are just far more flexible.
* And I love typing make at the prompt to build the artifact, instead of firing up some clunky IDE.
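On the linker-script point, a sketch of the kind of flexibility meant here: with GNU C you can tag an object with a named output section from source, then place that section in ROM from the linker script. The section name is invented for this example:

```c
/* Pin a lookup table into a custom output section. A matching linker
   script entry (e.g. placing ".rom_tables" into flash) decides the
   final address; the C code doesn't change. Section name hypothetical. */
__attribute__((section(".rom_tables")))
static const unsigned char crc4_table[4] = {0x00, 0x07, 0x0e, 0x09};

unsigned char crc4_lookup(unsigned idx) {
    return crc4_table[idx & 3u];   /* placement is purely a link-time concern */
}
```

The equivalent trick in the proprietary toolchains tends to be pragma soup that differs per vendor, which is the complaint being made above.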

I've tried a variety of IDEs, including Eclipse and Keil. My favorite is Visual Studio on Windows, and I did sort of get it integrated with the GNU tools but in the end it wasn't worth the hassle.

I'm in the final stages of large-ish embedded ARM project cross-compiled on Linux x86 using nothing but vi, make and free CodeSourcery GCC ARM tools. All of this was on Ubuntu 12.04 running under VirtualBox on Windows or OSX.

The best thing Atmel ever did was to license Visual Studio for their 8/32-bit microcontroller IDE. They use GCC as the compiler and their own debugger, and they throw in a licensed copy of Visual Assist X as well, all for free.

Microchip's IDE (MPLAB) was always terrible, and for the latest version they decided to switch to Eclipse but the thing still sucks. Their compiler is terrible as well, and their debugger hardware is awful. I hate having to support software written for PICs, because while the PICs themselves are decent enough chips, the tools around them are not.

The GNU toolchain does not support most microcontrollers. I love how IT people think they know anything about embedded. And no, "embedded Linux" is far from embedded unless you don't care about interrupt latency, priority inversion, or any of the things hard real-time systems have to deal with.

There is SDCC, but it isn't the greatest and frequently falls apart on complex code that IAR and Keil handle well. But if you are willing to rework your code and verify the assembly output by SDCC, it is usable for most projects.

The 6502 and 8051 are used in lots of little embedded applications. Also don't forget chips like the Z80. GCC does fine for chips like the M68K/ColdFire and ARM, but for some of the smaller chips where special C keywords are common, it becomes a pain to make GCC generate good code. The PIC is a piece of crap, but thanks to the aggressive pricing of Microchip, that POS is sometimes thrown at engineers, so you have to occasionally deal with it too.

As others have posted, SDCC has the 8051 and Z80 covered. There are also other custom versions [kpitgnutools.com] of GCC that provide support for microcontrollers outside of the main GCC tree.

It really depends which microcontroller you're talking about. The GNU toolchain won't help you for PIC, but (if we're all very honest) PIC is a shambles of an architecture. GNU tools for platforms like MSP430 and AVR are good. Add Eclipse if you want. If any aspect lets you down, unfortunately it's likely to be gdb.

I've used both Keil and SDCC, and it turned out the code generation of Keil was much better (years ago though). I'm mostly using GNU tools and my editor is emacs, but for source-level debugging the Keil tools are useful.

Yup, GCC works great on just about all the platforms, 8 to 64 bit. There's not really much that commercial compilers give you as an advantage anymore. They don't even give you decent support, which is what people claim you're paying for. GNU stuff does fall down a bit on the documentation side, mostly because it's either out of date or just so huge it's hard to get a good grasp on it (I have a cube neighbor who hates GCC since he's used to compilers that come with a manual very specific to the chip being used).

You also can't beat make for building stuff. I can't believe people try to use IDEs for these things; it's just so clunky. We used an IDE for a larger system at a previous company and it was just painfully slow. With Visual Studio driving an external compiler, the exported makefiles were slower than the hand-crafted ones, and it was just plain stupid to open the IDE just to click the make button.

To use these tools on Windows you need to get Cygwin to make it work more smoothly. It's not the greatest system in the world but it's much better than bare bones Windows command line. If you have a choice though, it's easier to just do it all on a real unix system like Linux or Mac OS.

To use these tools on Windows you need to get Cygwin to make it work more smoothly

Unless you need fully-emulated Unix on your PC (clear down to Unix signals, fork(), etc.), Cygwin is really overkill. To make matters worse, parts of the toolchain render code built with them GPL unless you pay Red Hat for a looser license.

Any Windows developer who doesn't need full Unix emulation should probably be using MinGW [mingw.org].

For basic development without having an IDE, Windows is amazingly lacking out of the box. There is no equivalent of grep, no basic scripting language like sh, etc. MinGW (and MSYS) is nice. However you can get a small subset of Cygwin as well, and you can even get mingw link libraries with Cygwin, so the size is not any different and is not overkill. The fundamental difference really is the requirement for cygwin.dll to run the utilities.

Findstr is surprisingly close to grep. It's a new set of arcane command-line switches to learn, of course, and even if its regex syntax were identical to grep's (unlikely, though I've never done a proper comparison), you have to deal with the oddities of CMD when it comes to character escaping. However, to state "there is no equivalent to grep" just shows your ignorance of the platform. There are actually a lot of Unix-esque commands in Windows that many people don't know ("tasklist" and "taskkill", "mklink", and so on).

I use the GNU toolchain (GCC, Binutils, GDB) and Eclipse or Code::Blocks. Setting it up takes some time and can cause some headaches. Also, not all hardware platforms and emulators will work with this setup, but once you get it working, it has plenty of advantages over commercial platforms.

I don't understand the penchant for an IDE. It is just another layer between me and the finished product. I've run into too many developers that have no idea how to use an actual compiler... it is terrifying!

I recommend building your toolchain from source and setting up a console build environment manually. This is probably the simplest yet most effective tutorial I've seen; coincidentally, it also targets the LM3S product line: http://kunen.org/uC/LM3S1968/part1.html [kunen.org]

I don't understand the penchant for an IDE. It is just another layer between me and the finished product. I've run into too many developers that have no idea how to use an actual compiler... it is terrifying!

I don't understand the penchant for a compiler. It is just another layer between me and the finished product. I've run into too many developers that have no idea how to use an actual assembler... it is terrifying!

I'm not sure what you mean by an IDE for embedded. I tend to think of an IDE as integrating code with visuals, but there are no forms in embedded, and I imagine it would be a big stretch to simulate outputs when they could consist of anything. You can integrate debugging, flashing, and simulation of chip responses; in that case AVR Studio is very good for AVR chips.

And just regular gcc, avr-libc and avrdude is still better without any shitty environment on top. With whatever editor you prefer.

Really, what is this obsession with integrated development environments, with their crappy UI, editors that can't let me have two windows with different parts of a source file side by side, implemented in Java or worse, and with no redeeming qualities other than letting a user mash one button to start the build? Do people really expect that much handholding while doing very complex work?

Good IDEs have ways to search across symbols, source files, etc. They allow you to quickly search for references to symbols. They allow integration of one-click compiler-error/warning-to-source jumps. They do static analysis and performance profiling. They have easy ways of pausing execution, modifying code, and resuming execution. They let you use version control from inside them. They have plug-in-oriented debuggers that let you write simple visualizations for your own data structures, and much more. All of those things save development time. Thankfully, the vast majority of programmers these days have a choice about using an IDE. Nobody sane would want to maintain makefiles unless it was the only option. Your opinion of people who use IDEs is outdated propaganda.

Amen there. Some IDEs are pretty nice, if you get around the quirks, such as eclipse. But the ones that come with a proprietary compiler or tool set for tiny embedded CPUs just seem to be awful. The company that creates the hardware is not necessarily the best company to create usable software to develop for the hardware. Sometimes it seems like these are just additional revenue streams; once you've got their proprietary debugger dongle you're stuck using their proprietary toolset.

Yes, I do. Don't get me wrong, I code in vi regularly and in some ways prefer it. I usually hand-craft my makefiles. But the idea of an IDE is not just to give the user one button to start a build. A good IDE helps you read code. There's still lots of progress to be made in this, but here are some features that make me more productive in an IDE:

Jump to definition - the Eclipse implementation of Ctrl+Click is particularly good.

Show definition by hovering

Code folding

Syntax highlighting - editors like Emacs or Notepad++ get you part way, but for completeness your editor needs to understand the build system.

Integrated debugger - I can use gdb when I have to, but being able to see variable values and code at the same time while you step through code is invaluable

Your fallacy is in assuming that complex code requires working always at a basic level, but the opposite is true - the more complex the code, the more helpful tools improve your productivity.

At worst, I may benefit from cross-referencing, but it does not have to be welded into the guts of my editor, insisting on creating a myriad of little windows so I can't have my favorite layout: two editor windows side by side, one where I edit code, one where I refer to things, and a terminal window with a shell, where I run compilation, the target loader, manuals, searches, version control, etc.

Good code must work, but looking nice is optional. Additionally, when debugging code (sometimes code you wrote an hour ago), you need to see not only the code but what is happening in the hardware. At that point, it's irrelevant whether the code looks nice or not. It's also not always true that good-looking code will work. There are errata for microcontrollers (and microprocessors, though those are usually handled at the OS/driver level), so the obvious and beautiful solution may not actually work on the silicon.

Surely even the die-hard CLI coders would concede that syntax highlighting is a pretty cool feature at least... code should be readable without it, for sure, but it does make a lot of code easier to look at, particularly if you're looking for that stubborn bug.

I don't do a lot of embedded (I used AVR Studio on Windows but haven't had the motivation to work through the issues getting avrdude working for an ET-AVR development+stamp board from Futurlec), but plain ol' gedit is pretty good for most other programming.

Syntax highlighting is done very well in vi and Emacs and SlickEdit and all sorts of other non-IDE editors, along with lots of customization. You don't need an IDE for that.

Yes, I'll admit that until compile time you won't really know what the preprocessor will do, but some editors manage a stab at it. IDEs are going to fail at that problem too, beyond the simple case. I.e., if the code has a wide variety of builds it can be used with, or is portable across a variety of targets, then which of the builds should the editor assume?

Yup, the real world intrudes. I remember being frustrated at some forums in the past when I'd ask "how do I solve this problem" and the idealistic readers would respond "this problem will never exist if the code has been written properly." Pragmatically, one should be prepared to deal with worst-case scenarios :-)

Have you tried SCons [scons.org] instead? It takes a day or two to get used to its different approach, but after that you will never go back. Make is a crude and error-prone tool in this day and age. Also, SCons is a very solidly written and tested piece of software; their idea of alpha is what most people call a final release.

Yeah, I had a few people try that because they "didn't understand" the Makefiles. Afterwards they had a system even fewer people understood, and in addition it didn't work (parallel builds randomly failing). Now, I am quite certain that's not the fault of SCons per se, but it's not an indication that it will actually save anyone having a problem with make. And if you have a problem with make, I'd rather suggest something like CMake, which at least can help you with stuff like generating/maintaining project files for Visual Studio.

I evaluated a lot of make alternatives a while back. I just found that they all sort of run into troubles that make them a hassle to adapt to an existing project. I.e., they made things very easy if your project was the sort the authors intended, but if you did anything too far out of the ordinary, they seemed much more difficult than the equivalent change to a well-written makefile. There are lots of builtin rules for make, but they're reasonably easy to understand and override, whereas the alternatives' builtin rules are not.

Yup, if your code is ported you're very often going to need preprocessor support. If not in the language, then at least in the build system. I.e., header files are not called the same thing on all systems, different targets will require different approaches in the code, etc.
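A hedged sketch of what that per-target juggling typically looks like in the source (the predefined macros shown are the usual ones for avr-gcc and SDCC; the host fallback keeps the file buildable anywhere):

```c
/* Per-target header selection for one driver source file. On a host
   build only the #else branch is seen, which is handy for unit tests. */
#if defined(__AVR__)
#  include <avr/io.h>      /* avr-gcc / avr-libc register definitions */
#elif defined(__SDCC)
#  include <8051.h>        /* SDCC's 8051 SFR declarations */
#else
#  include <stdint.h>      /* plain host build */
#endif

/* Target-independent logic stays outside the guards. */
unsigned char low_nibble(unsigned char v) {
    return (unsigned char)(v & 0x0Fu);
}
```

Keeping the guards confined to header selection, and the logic common, is about the best one can do to minimize the headache described below.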

Everyone hates this stuff, and occasionally someone will proclaim that there must be a way to avoid all this, but pragmatically you have to deal with it and just do your best to minimize the headache.

Plus if you're stuck using an IDE then you can't create an automated build environment very easily, or automatically kick off tests, use the same environment for different projects if they are in different languages or for different chips. Once you use an IDE then you're pretty much mandating that every single developer on the project use it, and you can make a lot of enemies that way if they're not on board.

I should clarify (I wrote it at 2am). By "stuck using" I mean that you must use the IDE in order to compile. Some IDEs will export their builds into a makefile or something similar and allow building the project without using the GUI.

I have seen IDEs that don't have this option though, very often those for small embedded CPUs, and they can really annoy the build engineers until you find an alternate set of tools that work better.

Really, what is this obsession with integrated development environments, with their crappy UI, editors that can't let me have two windows with different parts of a source file side by side, implemented in Java or worse, and with no redeeming qualities other than letting a user to mash one button to start the build?

Well, Visual Studio can have two (or three, or four) windows side-by-side showing different parts of the source. It has good multi-monitor support and isn't written in Java.

Code completion is a really nice feature that only IDEs offer. A text editor that doesn't understand the language can't give you very much help in that regard. Even basic stuff like being able to right-click on something, see all references to it, and jump directly to the definition is a huge improvement over hunting through files in a plain editor.

There are editors like emacs that will do code completion. Lots of them that will do very good cross referencing too, every bit as good as an IDE. Sometimes it's basic, like ctags or etags, sometimes it's much more extensive. I find that some of the stuff people tout as advantages of using an IDE are things that emacs did first in some fashion.

For compiled software I use Delphi (if I had a need for Linux versions I would use Lazarus/Free Pascal), but for scripting web apps (PHP/HTML/SQL/JS/CSS) I have gedit on one screen and a browser open on the other. That way I can hit Ctrl+S, tab to the browser, and press F5 to see the effect. I guess everyone developing web apps probably uses two screens nowadays. At work I only have one screen on the development server, with the browser on the other screen.

There are people who just never used anything but an IDE, and can't imagine life without it, even if they don't need visuals. Ie, it's a glorified text editor (yet dumber than vi or emacs) plus a clumsy debugger plus class browser (really the only thing that keeps some people hooked on them).

This is the most idiotic thing I've read here in a while (excepting the AC posts). An IDE is not a "glorified text editor"; the fact you believe this shows you have little, if any, experience of using one. I've spent almost 30 years writing code on various platforms, in various languages, and while I've used vi and Emacs in the past, I would never return to that. Those text editors from the 1970s were designed for an entirely different environment, and using them in preference to an IDE is at best like tying one hand behind your back.

I'm just saying that some people use an IDE that way. They're not using the full capabilities.

On the other hand, as an Emacs user I have not seen an IDE that lets me edit code as easily. Most IDEs I've used are stuck in a single code window at a time. Others are stuck only capable of dealing with one language or family of languages. Very few have anything near to the customization you get with Emacs or Vim, you're stuck with just a few tweaks to indentation style for example.

So which IDEs have you used, then? Because either you haven't used any, or you haven't even bothered to learn the most basic features. One of the most popular and well-known IDEs, Eclipse, can display as many code windows as you like (hint: click on a tab, then the Window menu, then New Editor; there, that was easy, wasn't it? If that's too difficult, you can drag the tab sideways and it'll open another editor); hell, you can even drag them out of the main window entirely.

Visual Studio couldn't when I had to use it. Eclipse had it, but you were still confined to the editing area as I recall, though maybe later they added a way to detach to an independent window or even an alternate monitor; it's been a while since I used it.

Where do you come up with this rubbish?!

Visual Studio has the worst customization I've seen, and it's the one I was forced to use the most. Eclipse at least gives you partial Emacs-like key bindings, though for many customizations you still need an external plugin. AVR Studio has very little customization at all.

I've been using the KEIL tools for years now, so I may be prejudiced, but I am very happy with the quality of the code and the support, both device-wise (I've used KEIL compilers from 8051 to ARM on a lot of platforms over the years), and when I need support services.

For some other development projects I have to use Eclipse, and it is a pain in the a**, especially when it comes to debugging. Looking into a chip's hardware registers or simulating a device's I/O hardware is a BIG advantage of the KEIL debugger.

I have been using CodeLite for a few years for editing and building AVR and Atmel SAM7X stuff. It generally works well, but it was a bit finicky getting a toolchain set up.

What I like about it is,

Written in C++ based on wxWidgets and the scintilla editor. No Java bullshit.Very fast and responsive.Supports multiple projects in a workspaceDoesn't care if files are shared between projectsIt has minimal thoughts on where files in a project are supposed to be.Has code completion, style highlighting.Supports clang, so

(1) Vendors do not want to document their internals and interfaces, so you can only use their proprietary software.

(2) Vendors would prefer you use their tools, which in turn use their proprietary description file formats for the design files. This increases the expense of switching to another vendor for fungible embedded controllers. Good examples of this are the ECs for laptops available from TI, Western Digital, and FTDI, in particular those used for battery and power management and 8051 emulation for matrix keyboard decoding.

It's possible to reverse engineer it, but when you are trying to get a product out the door, you care more about time to market than about whether the tools are available for free. A good vendor example in this category is Atmel. If you want to reverse engineer things so that you can use your Dediprog to program a wide range of microcontrollers, feel free to buy a wide range of microcontrollers and a wide range of SDK software, and have at it.

NB: If you are trying to do this for Samsung ECs, they tend to use ECs that have a cryptographic handshake, and require encrypted load payloads. For things where you can program them without soldering extra wires on them to get to the programming UART, you can pretty much operate in the clear... unless of course the EC does a validity check on the other WCS's, for example a cryptographic checksum of the BIOS using a secret key known to the proprietary BIOS vendor and the EC, e.g. Insyde H2O BIOS used by some motherboard vendors -- probably also reverse engineerable (EE's are notoriously bad at writing robust security code) at great expense.

(1) Vendors do not want to document their internals and interfaces, so you can only use their proprietary software.

This is why I love PIC micros. The datasheets are enormous and tell you _everything_, and there are good closed and open tools. They are well documented. The programmer is even another PIC, and they give away the source code as well as document the programming interface!

Microchip seem to have figured out that by making it easy and cheap, and by realising that they don't have expertise in super-proprietary and extraordinarily buggy development tools (like just about every prior microcontroller vendor), people might actually want to use their stuff. You know, use it without wanting to roll up to the vendor's HQ with a chainsaw...

Oh, and a sleep power of 2 nW doesn't do any harm either :)

Of course, they don't go very big, but for deeply embedded stuff they are really handy.

I gather Atmel are pretty good too, but I started with PIC, so I've stuck with them due to their being good enough, plus mild inertia.

I don't know, the tool support for PICs seems really crummy compared to the alternatives. The compiler especially is a major pain point: everyone else (e.g. AVR, MSP430, ARMs from the M0 to the A10) is using GCC, but with PIC you are stuck with some crappy proprietary compilers instead. The PICs might be good chips and Microchip a nice company, but without a GCC toolchain it's all for nothing.

Er, do you even know what PICs are? A Harvard architecture with tiny amounts of memory is not going to be programmed via the bloated C++/Linux toolchain. Most people use the PIC assembler; there are hyper-cut-down versions of HLLs to help with this if you can't hack assembly language.

It's mildly annoying, but I use PICs for the really small stuff (e.g. the 12F875). Once you get to that stage, since there's 1K of program space and 64 bytes of RAM, the programs are barely big enough to make C or C++ worth it over ASM.

And the reason for no gcc is because of the banked memory. It's nasty, but probably one of the reasons they're cheaper and lower power.

The programmer is even another PIC and they give away the source code as well as document the programming interface!

The problem is that their debuggers are absolutely terrible. I have to support some products using old 8 bit PIC18s that are not supported by the ICD3, and the ICD2 doesn't work on 64 bit platforms so I have to use it via a virtual machine. Both of them are flaky, often stop responding or fail to connect properly and don't seem to be very robust.

On top of that they both seem very limited in terms of the number of breakpoints you can set, lack of data breakpoints and so forth.

On top of that they both seem very limited in terms of the number of breakpoints you can set, lack of data breakpoints and so forth.

If they could just sort their hardware out they might have a nice platform there.

Breakpoints and other debugging features are a matter of hardware- it adds silicon, and therefore cost. Some PICs come with embedded debugging hardware (in every part), some have special debug parts ($25-$50 US)- compared to the development parts of yesteryear, these are super-cheap.

The ICD2 is a *really* old design- it is now obsolete, unfortunately, there are lots out there. The ICD3 is a far more robust platform, from the drivers on up.

Actually we find we have more problems with the ICD3. It seems like the boost circuit for generating programming voltages is flaky. Also it doesn't support huge numbers of older PICs, so we have products that are only five years old but can't be debugged on 64 bit machines.

I take issue with "doesn't support huge numbers of older PICs". Microchip has something on the order of 600-700 different PICs in production (which rarely if ever get obsoleted); looking through the latest MPLAB 8, I see a few rfPICs and some PIC18s that are not supported by the ICD3. But if the part you're using isn't supported, that can be a huge issue.

I've seen quite a few issues with the boost circuit; they haven't traced back to an issue with the ICD3 itself. They were due to the resistor on the M

The VPP on the ICD2 is completely broken. You need an external PSU to get it up, but oftentimes it just fails to switch. Removing the external power several times will eventually make it work, and then programming is okay. Maybe you are right about the ICD3, but we followed the advice given in the datasheet at the time and are not about to retrofit resistors on tens of thousands of units.

It's possible to reverse engineer it, but when you are trying to get a product out the door, you care more about time to market than you care about whether or not the tools are available for free. A good vendor example in this category is Atmel. If you want to reverse engineer things so that you can use your Dediprog to program a wide range of microcontrollers, feel free to buy a wide range of microcontrollers, a wide range of SDK softwaare, and have at it.

Sorry, it wasn't clear if you were saying that Atmel is good at documenting their interfaces or not. Older Atmel parts use SPI and the newer ones have two-wire interfaces (PDI or TPI), all of which are well documented. There are open-source programmers (hardware) and an open-source command-line programming application called avrdude. Even the debug interface stuff is well documented and supported on Linux.

Unless you are supporting a legacy system or require ultra-high temperature or rad-hard parts, why the 8051? Put another way - why do you need a new tool for an old part? I was using Keil v4 around 20 years ago on some 87x51 part. It was okay.

Keil and IAR do work (though IAR's AVR32 is still rubbish). But they are expensive and often come with a dongle or other machine-locked keys, which are ALWAYS a problem.

One very seldom uses the standalone 8051, but rather the 8051 core. Because of the simplicity and small size of the core, there are plenty of specialized ICs out there with an embedded 8051 core. The most obvious application is one-chip radio solutions: instead of having to use both a one-chip radio and a microcontroller, you get them both in a single package. This is great for high-end, small-size remote controls where you might want to do some bidirectional authorization or just signal back that the message got through.

8051s are everywhere, and growing in number, astonishingly. For instance, the latest Bluetooth low-energy endpoint controllers from Philips come with... an embedded 8051. Basically every piece of hardware that needs some sort of noddy low-power controller and isn't especially demanding of it will probably use an 8051.

The thing is, they are cheap and unencumbered, plenty of cell and layout libraries exist, development tools are ready-made, they clock into hundreds of MHz, and they do the job well enough.

I would not be overly surprised if the 8051 instruction set outlasts x86, to be honest.

Sure, you don't get free-floating 8051s any more (on the low end there are other controllers like PIC which are cheaper and lower power) and on the high end, everything beats them. But for everything else, they are ubiquitous.

I'm not sure about that. As far as I am aware Intel still license the design. It is probably a question of licensing a certified implementation, as some other vendors do. Most companies won't develop one themselves, which makes me wonder why no-one else has made a really serious attempt to get into that market.

I'm using IAR's IDE. However, starting with version 5.50 (EWARM) it seems to have an issue drawing its GUI elements which, over time, leads to interesting screen corruption and an eventual crash to the desktop (running under XP; when running under W7 the screen corruption doesn't happen, but the crash eventually occurs too). Version 5.42a was still fine.

For 8-bit work, you can go with Atmel ATmega and ATtiny, using the free AVR Studio. There's also the MPLAB IDE for PIC micros, also free.

For 16/32-bit work, you can't go past ARM. You have Eclipse and CooCox as options for free IDEs (CooCox is more integrated and has open-source hardware debuggers available that can be easily used). Both are based on the GCC toolchain.

I'd recommend the ARM route, as the Cortex-M series is very good for the price (esp. the LPC1700 series from NXP), and the programmer and debugging tools are cheap and non-proprietary.

It used to be... true. They recently moved to become a wrapper for Eclipse, though, so it should only be as buggy as Eclipse is now... knock on wood.
FYI, I used to use CCS in the early '00s working with C5x DSPs; man, what a pain in the @ss it was to use!
- Eddy_D

http://www.toppers.jp/ [toppers.jp] is what I and many, many Japanese electronics and automotive manufacturers use. It's said uITRON/ITRON is the most used OS on the planet, actually, due to its almost universal usage in Japanese electronics. I once heard the Toyota Prius has 5 TRON instances running in each brake system alone.

Of course all the information and documentation, despite being very plentiful and covering a completely open standard (TRON) and open-source implementations (TOPPERS, etc.), is in Japanese. Probably not ideal for you, but I just wanted to mention it exists and is pretty nice.

Vendors of ARM's Cortex-M based microcontrollers tend to use gcc, and you get a build of it from them or their third-party partner. IDE-wise, some vendors such as NXP and TI have an Eclipse-based offering, but I've found regular Eclipse with the Zylin plugin is better. I've even built a compiler from source for the Cortex-M0 and managed to get working code from it. (I had some spare time.)

Vendors of other architectures often bundle useful libraries and code with their tools. Throwing them away and starti

The lessons enumerated at http://deardiary.chalisque.org/multistage-computer-language-evolution/ [chalisque.org] have yet to be learned in many parts of the current world of computing. This is more important when stringent resource constraints mean that the obvious 'industry standard best practice(TM) stuff' is no longer applicable and you have to work stuff out from scratch.

I find the IDE has less to do with it than how much of an utter mess the embedded system's platform is. AVRs are great, but if you program for any of the TI platforms like the MSP chips, you will find that the code from TI is a complete nightmare that is 100% useless unless you want to spend months trying to figure out how to program like they do... or take a lot of peyote when you program, because it seems that is how TI programmers work.

I have used, at one point or another, almost every type of embedded system there is. My company specifically targets clients with embedded needs and I solve those applications entirely in OSS (except for programmable logic, where OSS is not an option).

In the last few years, ARM has taken over the embedded world. It has solutions that span the entire range of embedded problems, and it can be programmed entirely with the GNU toolchain. 8051s and PICs have been losing dominance for years, and non-OSS toolchains have been declining in quality for years.

ARM has many different vendors and many different cores:

The smallest is the Cortex-M0. These come in packages as small as 2.17x2.31 mm, made by NXP. This is a 50 MHz processor with 12 I/O pins muxed with a few peripherals, and it costs between $1 and $4 depending on quantity. There are many equally good and cheap Cortex-M0s.

If that does not quite cut it, you have the Cortex-M3 series. There are MANY processors in this series. If you are looking for something good here, I would recommend the STM32 processors. There are many cheap and easy dev boards to get one of these processors up and running.

From here ARM just gets more and more powerful. Cortex-A8 and A9 processors run at GHz clock speeds now and run embedded Linux. I have used these with Linux with great results from Motorola, Atmel, TI (though these tend to require some effort) and Freescale. I have not yet had a chance to test the Exynos chips (up to quad core at 1.7 GHz) or the AllWinner chips.

All of these chips can be programmed with the GNU toolchain. The ones without a Linux OS involve building the GNU toolchain with the newlib library instead of glibc/uclibc. This is a bit of an involved process, but normally there are prebuilt toolchains available for download.
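If you do end up rolling your own, the build is roughly three passes: binutils, a bootstrap gcc, then newlib, then the rest of gcc. A rough sketch (version numbers and paths are placeholders, and a prebuilt toolchain or crosstool-NG is usually the easier route):

```shell
# Sketch of a manual arm-none-eabi + newlib cross-toolchain build.
export TARGET=arm-none-eabi PREFIX=$HOME/x-tools
export PATH=$PREFIX/bin:$PATH

# 1. binutils (assembler, linker)
../binutils-2.x/configure --target=$TARGET --prefix=$PREFIX --disable-nls
make && make install

# 2. bootstrap C compiler (no libc yet, hence --without-headers)
../gcc-4.x/configure --target=$TARGET --prefix=$PREFIX \
    --enable-languages=c --with-newlib --without-headers \
    --disable-nls --disable-shared --disable-threads
make all-gcc && make install-gcc

# 3. newlib, built with the bootstrap compiler
../newlib-1.x/configure --target=$TARGET --prefix=$PREFIX
make && make install

# 4. finish gcc now that a libc exists
# (back in the gcc build dir:)  make all && make install
```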

Further, any ARM micro, from any vendor, can be targeted with the GNU toolchain.

Also, never underestimate the power of attaching a small CPLD or FPGA to your application. That can drastically reduce the complexity of your software when done correctly.

I have used almost every toolchain and IDE at one point or another, and this has been *BY FAR* the most sustainable solution I have found.

Just my two bits: I have been using embedded IDEs for over 10 years and have not found any good free ones. The issue is that it takes a lot of effort to get an embedded IDE to be not only usable, but really seamless. The points of difficulty are usually in the debugger and in the physical connection to the processor.

These days the physical interface is mainly JTAG, which replaced the venerable (and expensive) ICE (in-circuit emulator). In the past, many processor manufacturers would not release specs for their J

If the dude don't know everything about everything, or knows there are somethings he don't know he can always ask, man. And people, would be all, like, hey, man, here's what I know, and he be like, thanks man. So we all happy now.

BeRTOS is an RTOS or Real-Time Operating System (no surprise, given its name), and like most RTOS it runs on the embedded microcontroller. That's not what TFA was asking for at all.

What the submitter is looking for is an open-source integrated environment (an application) that runs on normal desktops, cross-compiles to the microcontroller, and interactively debugs it, typically through JTAG. In other words, just like the proprietary examples (IAR and Keil) that TFA gave, but open source.
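The closest open-source equivalent of that workflow is OpenOCD driving the JTAG probe plus GDB on top (optionally wrapped in Eclipse). A sketch of a typical session - the interface and target config file names depend on your probe and chip, and firmware.elf is a placeholder:

```
# Start the OpenOCD GDB server for your probe/target combination:
openocd -f interface/jlink.cfg -f target/stm32f1x.cfg &

# Then attach GDB from the cross toolchain:
arm-none-eabi-gdb firmware.elf
(gdb) target extended-remote :3333   # OpenOCD listens on port 3333
(gdb) monitor reset halt             # pass commands through to OpenOCD
(gdb) load                           # flash the ELF onto the target
(gdb) break main
(gdb) continue
```

Any GDB front end (Eclipse CDT, CodeLite, DDD) can sit on top of that same remote connection, which is how the open stack replaces the proprietary IDE's integrated debugger.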

The minimum cost on ARM-based processors is somewhere around the $2 mark. I can get PICs for down to about $0.30. There are plenty of products where that sort of difference is significant - and easily high enough volume to drown out any development-time cost difference.

Your minimum cost is pretty dated, I think. *Many* modern Cortex-M0 parts can be had for under $1 (from ST, NXP, Freescale, and many newcomers), and a few under $0.50 in quantity (the only place where such penny-pinching really matters). These parts also have significantly more resources than anything you can find in the entire 8-bit PIC ecosystem, much less the cheapest parts... There is almost certainly a small window of space where the tiniest 8-bit microcontrollers belong, but for most tasks there is