Gauss developed the Gaussian distribution to analyze measurement errors. He is often considered the greatest mathematician of all time, for many reasons, and in this case he was doing applied work in astronomy and realized he needed a general theory of error analysis. He showed that as long as the deviations from the true value are random, the measurements will follow a Gaussian distribution.

In other words, noise is almost always normally distributed under very general conditions, aside from unusual cases involving a nonlinear bias in the measurement or nonlinearity in the system itself (such phenomena are difficult to study for many reasons, so the vast majority of applications deal with linear phenomena).

Yes, the central limit theorem. By the central limit theorem, any sum or average of samples from ANY distribution (with finite mean and variance) will be approximately Gaussian, with the approximation improving as the sample size grows. We can always consider "noise", or any measurement, as made up of many smaller parts, so we can always assume an arbitrarily close approximation to a Gaussian: i.e. Gaussian itself.
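A quick simulation makes the theorem concrete. This is a minimal sketch using only the Python standard library (the sample sizes are arbitrary illustrative choices, not anything from the discussion): averages of draws from the uniform distribution, which looks nothing like a bell curve, cluster into an approximately Gaussian shape around the true mean.

```python
import random
import statistics

random.seed(0)

n_means = 10_000   # how many sample means to draw
n_per_mean = 50    # how many uniform draws are averaged per mean

# Each entry is the average of 50 draws from uniform(0, 1) -- a flat,
# decidedly non-Gaussian distribution.
means = [statistics.mean(random.random() for _ in range(n_per_mean))
         for _ in range(n_means)]

# Uniform(0, 1) has mean 1/2 and variance 1/12, so by the CLT the means
# should cluster near 0.5 with standard deviation sqrt(1 / (12 * 50)).
print(statistics.mean(means))
print(statistics.stdev(means))
```

Plotting a histogram of `means` (with matplotlib, say) shows the familiar bell shape even though every individual draw came from a flat distribution.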

"We can always consider "noise", or any measurement, as made up of many smaller parts, so we can always assume an arbitrarily close approximation to a Gaussian: i.e. Gaussian itself."

In relation to what?

It is actually rare for data to be exactly Gaussian (if I slip and say "normal", I apologize for showing my American language preference). The center of a data set often looks normal, but typically we can't tell whether the data is Gaussian, close to it, a contaminated normal, or some other distribution that is symmetric and unimodal but longer-tailed than the Gaussian. This was (and is) the point made by Huber, Tukey, Hampel, and many others.
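To illustrate the point, here is a sketch of a contaminated normal in the textbook style of Huber: most draws come from N(0, 1), but a small fraction come from a wider N(0, 3^2). The 5% contamination rate and 3x scale are the classic example values, chosen here for illustration. The center of the sample is nearly indistinguishable from a pure Gaussian, but the tails inflate the variance well past 1.

```python
import random
import statistics

random.seed(1)
eps = 0.05      # contamination fraction (classic textbook choice)
n = 100_000

def contaminated():
    # With probability eps, draw from the wide "contaminant" N(0, 9);
    # otherwise draw from the standard normal N(0, 1).
    if random.random() < eps:
        return random.gauss(0, 3)
    return random.gauss(0, 1)

sample = [contaminated() for _ in range(n)]

# The middle of the distribution looks Gaussian: the interquartile
# range is close to the pure-normal value of about 1.35.
q1, _, q3 = statistics.quantiles(sample, n=4)
print(q3 - q1)

# But the tails are heavier: the mixture variance is
# 0.95 * 1 + 0.05 * 9 = 1.4, noticeably larger than 1.
print(statistics.variance(sample))
```

This is why quantile-based summaries (and robust estimators generally) can look perfectly "normal" while moment-based ones reveal the contamination.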