It also supports the standard arithmetic operations (plus, minus, times, quot, rem, negate) for the vector types FloatX4# and DoubleX2#. These operations are the atomic units of vectorization, and more complex algorithms can be expressed by composing them.
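As an illustration of such composition, the sketch below builds a small dot-product kernel out of the FloatX4# primops. It assumes the primop names packFloatX4#, timesFloatX4# and unpackFloatX4# from GHC.Exts, and compiling with the LLVM backend plus an appropriate SIMD flag:

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
-- Sketch: compose FloatX4# primops into a small kernel.
-- Compile with: ghc -fllvm -msse4 (or -mavx)
import GHC.Exts

-- Multiply two groups of four Floats lane-wise and sum the
-- four resulting lanes. A full dot product would loop this
-- over the input arrays four elements at a time.
dot4 :: (Float, Float, Float, Float)
     -> (Float, Float, Float, Float)
     -> Float
dot4 (F# a0, F# a1, F# a2, F# a3) (F# b0, F# b1, F# b2, F# b3) =
  let v  = packFloatX4# (# a0, a1, a2, a3 #)
      w  = packFloatX4# (# b0, b1, b2, b3 #)
      vw = timesFloatX4# v w               -- lane-wise multiply
  in case unpackFloatX4# vw of
       (# x0, x1, x2, x3 #) ->
         F# (x0 `plusFloat#` x1 `plusFloat#` x2 `plusFloat#` x3)
```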

The first two evaluation phases were spent designing and implementing AVX and SSE support for the above. These functions emit x86 vector instructions depending on the flags provided.

The third evaluation phase involved redesigning part of the code written above to support -O2 optimizations. Initially, compiling with -O2 would result in segmentation faults and other errors, which have since been fixed.

Data types like Int8, Int16, Int32 and Int64 are all wrappers over the machine-dependent unlifted type Int#. To provide deterministic and uniform SIMD support for integers, a major part of the third evaluation consisted of adding support for the Int8#, Word8#, Int16#, Word16#, Int32#, Word32#, Int64# and Word64# types and their respective primops, such as plus, minus, times, quot, rem and negate.
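A minimal sketch of how these sized primops fit together is shown below. Primop names have varied across GHC versions (older GHCs used narrowInt8#/extendInt8#); the sketch assumes the more recent names intToInt8#, plusInt8# and int8ToInt#:

```haskell
{-# LANGUAGE MagicHash #-}
-- Sketch of the sized integer primops. Names are assumed to be
-- the recent GHC.Exts spellings; older GHCs spelled the
-- conversions narrowInt8# / extendInt8#.
import GHC.Exts

-- Add two values through the unlifted Int8# representation,
-- wrapping around at 8 bits like a machine int8 would.
addInt8 :: Int -> Int -> Int
addInt8 (I# x) (I# y) =
  I# (int8ToInt# (plusInt8# (intToInt8# x) (intToInt8# y)))
```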

The last few days have been spent designing a basic slice of SIMD support for 8-bit integers.

I have also started a very recent work-in-progress library, which abstracts over the unlifted types and provides general polymorphic SIMD functions. The library can be found here: https://github.com/Abhiroop/lift-vector. It contains examples such as polynomial evaluation and dot products using vectors.
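To give a flavour of what "polymorphic SIMD functions" means, here is a hypothetical sketch of such an interface. The class and method names below are illustrative only and are not lift-vector's actual API:

```haskell
{-# LANGUAGE TypeFamilies #-}
-- Hypothetical interface: abstract over the concrete vector
-- type so algorithms are written once for FloatX4#, DoubleX2#,
-- Int8 vectors, etc. (names are illustrative, not lift-vector's).
class SIMDVector v where
  type Elem v
  broadcast :: Elem v -> v                          -- fill all lanes
  zipVec    :: (Elem v -> Elem v -> Elem v) -> v -> v -> v
  foldVec   :: (Elem v -> Elem v -> Elem v) -> v -> Elem v

-- A dot product written once, usable at any vector type:
dotProduct :: (SIMDVector v, Num (Elem v)) => [v] -> [v] -> Elem v
dotProduct xs ys =
  sum [ foldVec (+) (zipVec (*) x y) | (x, y) <- zip xs ys ]
```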

Work Link

GHC uses Phabricator to track all of its code changes as well as for code reviews. Links to the Phab diffs: