Technologies

Our JumpStart C compiler technologies are built from the ground up to address the specific requirements of embedded systems programming. Our compilers are small, load fast, and generate compact, efficient code.

(For information on the GCC compiler used in the JumpStart C++ products, please see the official GNU page.)

The source of a C program is typically separated into multiple source files so that it is easier to write and maintain. The source files are processed by multiple programs, eventually resulting in a "program image" that can be downloaded to the microcontroller. The term "C compiler" is commonly used to refer to the entire chain of tools, even though, properly speaking, the C compiler is only one tool among many. In the narrow sense, a C compiler translates a C source file into assembly code for the target CPU.

JumpStart C compilers feature:

C processing tools written from the ground up that load and run fast; not based on GCC, Small C, etc.

I have been using ImageCraft AVR compilers for a decade now, and there have been massive improvements over the years. Apart from the aforementioned support, which is very good, I feel that the generated code is also very good. I am using this compiler for a product where processing time is critical, and I have made it a habit to check the generated assembly often and to measure parts of the code using one of the timers. I go to great lengths to save a few microseconds where possible.

A few examples: If I need to run through an array of structs and access individual fields, I never take the indexed approach. {ImageCraft - traditionally, it is said that using pointers may produce faster code, even though it may make the code less readable.} Instead, at the beginning of each loop iteration I set a pointer to the list member and access all struct fields through this pointer. I took the same approach when using arrays of bytes where I also have to keep track of the byte position, such as in a FIFO. After the update to V8, it turned out that indexing into the array was much faster. Below is a clipping of my notes at the time:

In store_nmea_char() a received character is stored with the following code:

*sbuf->ptail++ = c;

sbuf->length++;

This takes 27 cycles. When this is replaced with

sbuf->data[sbuf->length++] = c;

it takes 14 cycles.
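To make the comparison concrete, here is a minimal sketch of the two variants side by side. The buffer layout is a hypothetical reconstruction from the snippets above: the names `sbuf->ptail`, `sbuf->length`, and `sbuf->data` come from the quoted code, but the struct definition, sizes, and function names are assumptions, not the author's actual source.

```c
#include <assert.h>
#include <stdint.h>

#define SBUF_SIZE 64

/* Hypothetical buffer layout reconstructed from the snippets above. */
typedef struct {
    uint8_t  data[SBUF_SIZE];
    uint8_t *ptail;      /* write pointer, used by the pointer variant */
    uint8_t  length;     /* fill count, doubles as write index below   */
} sbuf_t;

/* Pointer-based store: 27 cycles in the author's measurements. */
static void store_ptr(sbuf_t *sbuf, uint8_t c)
{
    *sbuf->ptail++ = c;
    sbuf->length++;
}

/* Indexed store: 14 cycles after the V8 update. */
static void store_indexed(sbuf_t *sbuf, uint8_t c)
{
    sbuf->data[sbuf->length++] = c;
}
```

Note that the indexed form also needs one less field to maintain: `length` serves as both the fill count and the write index, whereas the pointer form must keep `ptail` and `length` in sync.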

Now for the floating-point part: In this same product, I need to store and process sensor values like wind speed and direction. To speed things up, I avoided floats for as long as possible. Since I only need a precision of 1/10th or 1/100th (one or two decimal places), and values are received as ASCII strings, I had replaced atoi() with my own atoi10() and atoi100() functions. So a value received as 25.67 would be stored and processed internally as 2567. At a later stage, these values needed to be converted back to either text or a real float value.
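The fixed-point idea can be sketched as follows. This is a hypothetical reconstruction of what an atoi100()-style parser might look like, not the author's actual code: it reads an ASCII value such as "25.67" and returns the integer 2567 (two implied decimal places), using only integer arithmetic.

```c
#include <assert.h>
#include <ctype.h>
#include <stdint.h>

/* Hypothetical sketch of the atoi100() idea described above: parse an
   ASCII number with up to two decimal places into a scaled integer,
   e.g. "25.67" -> 2567, "25" -> 2500. */
static int16_t atoi100(const char *s)
{
    int16_t value = 0;
    int8_t  frac  = -1;   /* digits consumed after the '.'; -1 = none yet */
    int8_t  sign  = 1;

    if (*s == '-') { sign = -1; s++; }

    while (*s && frac < 2) {
        if (*s == '.') {
            frac = 0;
        } else if (isdigit((unsigned char)*s)) {
            value = value * 10 + (*s - '0');
            if (frac >= 0) frac++;
        } else {
            break;        /* stop at any non-numeric character */
        }
        s++;
    }

    if (frac < 0) frac = 0;                    /* no decimal point seen */
    while (frac < 2) { value *= 10; frac++; }  /* pad missing decimals  */

    return sign * value;
}
```

An atoi10() variant would be identical except that it scales to one decimal place. The trade-off the author describes below is that on a compiler with a fast floating-point library, this manual scaling may no longer pay off.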

After the update to JumpStart C for AVR, it turned out that some of these calculations were actually faster when using floats right from the start.