Details

Currently, the loop vectorizer does not use the alias analysis infrastructure. Instead, it performs memory dependence analysis using ScalarEvolution-based linear dependence checks within equivalence classes derived from the results of ValueTracking's GetUnderlyingObjects.

Unfortunately, this means that:

The loop vectorizer has logic that essentially duplicates that in BasicAA for aliasing based on identified objects.

The loop vectorizer cannot partition the space of dependency checks based on information only easily available from within AA (TBAA metadata is currently the prime example).

This means, for example, regardless of whether -fno-strict-aliasing is provided, the vectorizer will only vectorize this loop with a runtime memory-overlap check:

void foo(int *a, float *b) {
  for (int i = 0; i < 1600; ++i) {
    a[i] = b[i];
  }
}

This is suboptimal because the TBAA metadata already provides the information necessary to show that this check is unnecessary. Of course, the vectorizer has a limit on the number of such checks it will insert, so in practice, ignoring TBAA means not vectorizing more-complicated loops that we should.

This patch causes the vectorizer to use an AliasSetTracker to keep track of the pointers in the loop. The resulting alias sets are then used to partition the space of dependency checks, and potential runtime checks; this results in more-efficient vectorizations.

When pointer locations are added to the AliasSetTracker, two things are done:

The location size is set to UnknownSize (otherwise you'd not catch inter-iteration dependencies)

For instructions in blocks that would need to be predicated, TBAA is removed (because the metadata might have a control dependency on the condition being speculated).

For non-predicated blocks, you can leave the TBAA metadata. This is safe because you can't have an iteration dependency on the TBAA metadata (if you did, and you unrolled sufficiently, you'd end up with the same pointer value used by two accesses that TBAA says should not alias, and that would yield undefined behavior).

Sorry for the delay in response. The motivation for this change is clear and the direction makes sense. Quick question: did you report measurements for this change on the mailing list? Were there any performance wins or losses?

You mean in the test suite? I saw nothing significant either way.

Performance-wise, things should be strictly better than before. I have a number of internal benchmarks that improved for two reasons:

Loops accessing many different arrays, some of different types, now require fewer checks (and thus now vectorize when they didn't before).

The loop being vectorized was an inner loop that, at runtime, had a relatively small trip count; the elimination of the unnecessary dependency checks was significant.