What is the rationale behind weakening standard guarantees to the point that this program may show undefined behaviour?

The standard says:
"An object of array type contains a contiguously allocated non-empty set of N subobjects of type T."
If objects of type T do not occupy contiguous storage, how can an array of such objects do so?

1.
This is an instance of Occam's razor as adopted by the dragons that actually write compilers: do not give more guarantees than are needed to solve the problem, because otherwise your workload doubles without compensation. Sophisticated classes adapted to fancy hardware or to historic hardware were part of that problem. (hint from BaummitAugen and M.M)

2.

a) It is not the case that objects of type T either always or never occupy contiguous storage: there may be different memory layouts for the same type within a single binary. (This is an experimental result; it is not derived from the standard, but it does not contradict it either.)

b)
'Contiguously allocated' or 'stored contiguously' may simply mean &a[n] == &a[0] + n (§23.3.2.1), i.e. a statement about subobject addresses that would not imply the array resides within a single sequence of contiguous bytes. However, the standard is not very clear in this respect, and the other interpretation (element offset == sizeof(T)) is equally compatible with the wording. The latter interpretation would imply that one could force otherwise possibly non-contiguous objects into a contiguous layout by declaring them as T t[1]; instead of T t;

Therefore, it is conceivable that the memset in the question post might zero a[1].i, so that the program would output 0 instead of 3.

There are few occasions where one would use memset-like functions on C++ objects at all. (Normally, the destructors of subobjects will fail blatantly if you do that.) But sometimes one wishes to scrub the contents of an 'almost-POD' class in its destructor, and that might be the exception.