An Introduction to GCC Compiler Intrinsics in Vector Processing

Speed is essential in multimedia, graphics and signal
processing. Sometimes programmers resort to assembly language to get
every last bit of speed out of their machines. GCC offers an intermediate
between assembly and standard C that can get you more speed and processor
features without having to go all the way to assembly language: compiler
intrinsics. This article discusses GCC's compiler intrinsics, emphasizing
vector processing on three platforms: x86 (using MMX, SSE and SSE2);
Motorola, now Freescale (using AltiVec); and ARM Cortex-A (using NEON).
We conclude with some debugging tips and references.

So, What Are Compiler Intrinsics?

Compiler intrinsics (sometimes called "builtins") are like the
library functions you're used to, except that they're built into the
compiler. They may be faster than regular library functions (the
compiler knows more about them, so it can optimize better) or handle
a smaller input range than the library functions do. Intrinsics also
expose processor-specific functionality, so you can use them as a
middle ground between standard C and assembly language. This gives
you assembly-like functionality while still letting the compiler
handle details like type checking, register allocation, instruction
scheduling and call-stack maintenance. Some builtins are portable;
others are not--they are processor-specific. You can find lists of
the portable and target-specific intrinsics in the GCC info pages
and the include files (more about that below). This article focuses
on the intrinsics useful for vector processing.

Vectors and Scalars

In this article, a vector is an ordered collection of numbers, like
an array. If all the elements of a vector are measures of the same
thing, it's said to be a uniform vector. Non-uniform vectors have
elements that represent different things, and their elements have to be
processed differently. In software, vectors have their own types and
operations. A scalar is a single value, a vector of size one. Code
that uses vector types and operations is said to be vector code. Code
that uses only scalar types and operations is said to be scalar code.

Vector Processing Concepts

Vector processing is in the category of Single Instruction, Multiple
Data (SIMD). In SIMD, the same operation happens to all the data (the
values in the vector) at the same time. Each value in the vector is
computed independently. Vector operations include logic and math. Math
within a single vector is called horizontal math. Math
between two vectors is called vertical math.

Instead of writing 10 x 2 = 20, we can express it vertically as:

10
x 2
------
20

In vertical math, vectors are lines of these values, and multiple
operations happen at the same time--one per column:

 10   20   30   40
x 2  x 2  x 2  x 2
------------------
 20   40   60   80

Saturation arithmetic is like normal arithmetic, except that a
result that would overflow or underflow an element of the vector is
clamped to the end of the range instead of being allowed to wrap
around. (For instance, 255 is the largest unsigned character. In
saturation arithmetic on unsigned characters, 250 + 10 = 255.)
Regular arithmetic would let the value wrap around past the maximum
and become small. Saturation arithmetic is useful if, for example,
you have a pixel that is slightly brighter than maximum brightness.
It should stay at maximum brightness, not wrap around and become
dark.
