The concept of “stationarity” is indoctrinated into the heads of signal processing students (those poor bastards). However, with practical signals it seems to me that the difference between stationary signals and non-stationary signals is somewhat blurry.

The Wikipedia article on “Stationary process” has some funny examples of stationary processes. In the plot here I have made a signal which you might think comes from a non-stationary process, well, because it has these spiky outliers at the end of the signal. In fact the signal was generated by an i.i.d. (independent and identically distributed) univariate process: a mixture of two Gaussian distributions, a standardized one occurring often and one with a large amplitude occurring rarely. That is what I would call stationary.
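A minimal sketch of such a process in Python. The mixture probability, the amplitude of the rare component, and the signal length are assumptions for illustration; the post does not state the exact values used for the plot:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000            # number of samples (assumed)
p_outlier = 0.01    # probability of the rare, high-amplitude component (assumed)
sigma_big = 10.0    # standard deviation of the rare component (assumed)

# Each sample is drawn i.i.d. from a two-component Gaussian mixture:
# N(0, 1) with probability 1 - p_outlier, N(0, sigma_big^2) otherwise.
is_outlier = rng.random(n) < p_outlier
signal = np.where(is_outlier,
                  rng.normal(0.0, sigma_big, n),
                  rng.normal(0.0, 1.0, n))
```

Because every sample is drawn independently from the same fixed mixture, the process is stationary by construction, even though the rare large-amplitude draws look like isolated bursts in a plot.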

It seems to me that, in practice, determining whether a signal is stationary may depend on the time scale. If you only examine a short stretch of the signal it might look non-stationary, but if you examine the signal on a longer time scale you might discover that it is actually stationary, and that a more complex model should describe the signal.
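The time-scale point can be sketched with the same mixture process: the sample variance over a short window can be far from the true value (the rare component may not show up at all), while over a long window it settles near the mixture variance. The window lengths and mixture parameters here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same kind of two-component i.i.d. Gaussian mixture as above (parameters assumed).
n = 100_000
p_outlier = 0.01
sigma_big = 10.0
is_outlier = rng.random(n) < p_outlier
x = np.where(is_outlier,
             rng.normal(0.0, sigma_big, n),
             rng.normal(0.0, 1.0, n))

# True mixture variance: 0.99 * 1 + 0.01 * 100 = 1.99
true_var = (1 - p_outlier) * 1.0 + p_outlier * sigma_big**2

short_var = x[:200].var()   # a short stretch may miss the rare component entirely
long_var = x.var()          # a long stretch approaches the true mixture variance
```

On a short window the estimate jumps around depending on whether an outlier happened to land in it; only on a long enough window does the statistic stabilize, which is what makes the short-window view look deceptively non-stationary.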