Introduction

First of all, what does the term "inline" mean?

Generally, the term "inline" instructs the compiler to insert the body of a function into the code of its caller at the point where the call is made. Such functions are called "inline functions". The benefit of inlining is that it eliminates function-call overhead.

From there, "inline assembly" is easy to guess at: it is a set of assembly instructions written inline within a function. Inline assembly is used for speed, and it is frequently used in systems programming.

We can mix assembly statements into C/C++ programs using the asm keyword. Inline assembly is important because it can operate directly on C/C++ variables and make its results visible to them.

GCC Inline Assembly Syntax

Assembly language comes in two flavours: Intel style and AT&T style. The GNU C compiler, GCC, uses AT&T syntax, and that is what we will use. Let us look at some of the major differences between this style and the Intel style.

If you are wondering how you can use GCC on Windows, you can just download Cygwin from www.cygwin.com.

Register Naming: Register names are prefixed with %, so registers are written %eax, %cl, etc., instead of just eax, cl.

Ordering of operands: Unlike the Intel convention (first operand is the destination), the order of operands is source(s) first and destination last. For example, the Intel syntax "mov eax, edx" becomes "mov %edx, %eax" in AT&T assembly.

Operand Size: In AT&T syntax, the size of memory operands is determined from the last character of the op-code name. The suffix is b for (8-bit) byte, w for (16-bit) word, and l for (32-bit) long. For example, the correct syntax for the above instruction would have been "movl %edx, %eax".

Immediate Operand: Immediate operands are marked with a $ prefix, as in "addl $5, %eax" (which adds the immediate long value 5 to register %eax).

Indexing: Indexing or indirection is done by enclosing the index register or the memory address in parentheses. For example, "movl 8(%ebp), %eax" moves the contents at offset 8 from the location pointed to by %ebp into register %eax.

All our code targets Intel x86 processors. This matters because these instructions may not work on other architectures.

If there are no output operands but there are input operands, we must still write both colons: the output section between them is simply left empty, giving two consecutive colons.

We need not list the registers used for input and output operands as clobbered; GCC knows about those, and its optimization scheme does the needful. Only registers the assembly modifies behind GCC's back must be declared.

Example (1)

asm ("movl %%eax, %0;" : "=r" ( val ));

In this example, the variable "val" is kept in a register, the value in register %eax is copied into that register, and the value of "val" is then updated in memory from that register.

When the "r" constraint is specified, GCC may keep the variable in any of the available general-purpose registers. We can also name a register directly by using a specific register constraint.

Example (2)

In the above example, "val" is the output operand, referred to by %0, and "no" is the input operand, referred to by %1. "r" is a constraint on the operands, which tells GCC to use any register for storing them.

The output operand constraint should carry the constraint modifier "=" to mark the operand as write-only. Register names are prefixed with two %'s, which helps GCC distinguish between operands and registers; operands have a single % as prefix.

The clobbered register %ebx after the third colon informs GCC that the value of %ebx will be modified inside the "asm", so GCC will not use this register to store any other value.

Volatile

If our assembly statement must execute exactly where we put it (i.e. it must not be moved out of a loop as an optimization), put the keyword "volatile" or "__volatile__" after "asm" or "__asm__" and before the parentheses.

asm volatile ( "...;""...;" : ... );

or

__asm__ __volatile__ ( "...;""...;" : ... );

Refer to the following example, which computes the greatest common divisor using the well-known Euclid's algorithm (often honoured as the first algorithm).

Summary

GCC uses AT&T-style assembly statements, and we can use the asm keyword to write basic as well as extended assembly instructions. Using inline assembly can reduce the number of instructions the processor must execute. In our GCD example, the inline-assembly implementation needs far fewer instructions than the equivalent C code using Euclid's algorithm.

I am trying to do something similar with a CFStringRef, without success:

CFStringRef cfstr_1 = CFSTR("A");
CFStringRef val2;

__asm__ ("movl %1, %%ebx;"
         "movl %%ebx, %0;"
         : "=r" (val2)    // output
         : "r" (cfstr_1)  // input
         : "%ebx");

I get the following build error: "Invalid operand for instruction." I understand that cfstr_1 is a pointer, so I have also tried to pass $cfstr_1 as input, but this does not compile. Is something wrong with my syntax? Thanks

Many of the examples modify registers which are neither listed as outputs nor in the clobber list, which results in undefined behaviour. For example, variables or constants may mysteriously change, or even take two different values simultaneously.

How can I run or build this in Visual Studio? Which project type do I need to choose? Which template? If I paste in the C file code (arithmetic, for example), I get errors like:

error C2143: syntax error : missing ')' before ':' arithmetic.c

regarding the asm statements. So I guess I need to point it at an assembly-language compiler or something? Please be detailed and specific in your answers!

Some assembly is good for you, if nothing else than for looking into the disassembly window during debugging.

As for using inline assembly to "speed up" code, let me give you a recent example from my coding life. I was trying to find a fast fuzzy-search algorithm and found a few pieces of code for the Ratcliff/Obershelp algorithm.

One was in C, using recursion, and the other was in assembly, looking really lean and mean. I tried them both. The compiler-optimized C version turned out to be 5 times faster than the inline assembly!

Just before everyone rushes to learn assembly, don't forget that parts of the C library are already written in assembly, and some implementations (e.g. memcpy) even utilise SIMD instructions, when supported by your processor.

Quite recently, I came across a certain "need for speed", as follows.

I was trying to scan a file of several hundred megabytes - as quickly as possible - for the appearance and continuation of a synchronisation pattern. A given 3 byte pattern indicated the beginning of synchronisation, thereafter the file should contain similar sync patterns at fixed offsets. The code needed to cope with continued loss and gain of the sync pattern - some files might maintain sync perfectly throughout, others would contain several breaks and resynchronisations, and others might contain no sync at all. The code had to work fast in all cases.

My original implementation read large chunks of the file into a large circular buffer, and performed the search for the sync pattern on that circular buffer. The circular buffer conveniently provided an overloaded operator [], so that the user of the class dealt in the logical byte offsets rather than the physical locations of buffered bytes.

With the particular file chunk size I was using at the time (perhaps 64K, I can't remember), each synchronisation check was taking 33 milliseconds - not exactly slow, but considerable given such huge files. I wanted to do much better.

After a brief study of the code, I noted the circular buffer's operator [], although user-friendly, was having to do arithmetic to translate the logical byte number into a physical byte in its internal array. The sync detection routine was working one byte at a time, as was the sync verification routine, and both were using the circular buffer's "user-friendly" operator [].

The first optimisation step was to add a method to the circular buffer class to get hold of a raw contiguous byte buffer. When the buffered data wraps around between the end and the beginning of the physical buffer, the circular buffer class uses memcpy to re-arrange its internal layout. The sync detection and verification code was then modified to make use of the raw byte buffer instead of relying on the more costly operator [].

The next step was to speed up the sync detection. Instead of checking each byte via circularbuffer::operator [], the code was changed to use wcschr - the wide-character version of strchr. This can be used to very quickly locate the first two bytes of our pattern. Ideally, we would have liked to use wcsstr for detecting the complete 3 byte pattern, but this routine is not implemented in assembly code, as stepping into the function will show.

The final step was to speed up the sync verification - a memcmp() was used in place of the 3 serial circular buffer operator [] calls.

After making these changes, the chunk processing time fell from 33 milliseconds to 1/3 of a millisecond!

I expect that, with effort, dropping down to assembly could yield an even faster algorithm - but I doubt it would improve matters greatly. This optimisation, using only C library calls and a little refactoring, is already 100 times faster than the original code.

The moral of the story, then, is to be sure your C/C++ code is as tight as possible in the first place, before deciding that a venture into assembly language is necessary. Don't forget that assembly, unlike C/C++, is not portable, and that if you use SSE instructions directly in your code, you'll be restricting it to run only on processors supporting those instructions.

Maybe nobody needs just an addition in assembler, but not everybody works with forms and databases. There are lots of uses for assembly language:

1) Time-critical applications often have their inner loops written in assembler. The video codecs we all use when viewing movies are an example of that.

2) Games use assembly on a daily basis: you'll never be fast enough when you're coding a game engine.

3) Device drivers are mostly written in C nowadays, but small parts in assembly are still used because it's easier to cope with when you're dealing with low-level memory and I/O access, not to mention interrupts.

4) Debugging high-level C++ code. I have often taken advantage of my assembler knowledge to understand why and how a class's virtual function call was crashing my app, simply by looking at the assembly code and tracing the CPU registers.

5) Even if the compiler does a good job at creating and optimizing assembly code, there are some cases when it's just more effective to write a handful of hand-optimized assembly code. I had to do a linear search in an array, and I wrote an assembly-language loop that's at least twice as fast as the corresponding optimized C code.

So, an addition may not seem that useful, but assembly language is a tool, like everything else, and should be used whenever appropriate to ease our life. That's the reason we create tools, isn't it? )

Interesting article. I wish this had been around 15 years ago! As far as I remember, I never managed to get inline assembler working (but the only time I tried was in Turbo Pascal for DOS, on a 386 processor!)

First - I think you are confused: GCC stands for GNU Compiler Collection. It is a compiler for C, C++, Objective-C, Fortran, Ada, and Java, and has front ends for a bunch of other languages (D, Pascal, ...).

Second - The Microsoft MSVC++ syntax is completely different and uses Intel syntax; IIRC it looks like this: