In CS, the systems/graphics/applied guys seem a lot more comfortable with their manhood than the theoreticians and data miners, judging by the styles they use to write their papers. This is probably why I never heard the term "sigma shock" until I hit grad school.

Apparently the thing to do in machine learning is to cram as many sigmas (or other hairy mathematical notation) as you can into a paper. Sigma density does indeed seem to correlate with publication rate. The difficulty of reading such a paper is then its sigma shock factor.

And frequently the equations are useless anyway. The variables don't get defined until AFTER the equation in which they appear, and then the same variables show up a page later (after several others have been introduced), by which point you've totally forgotten what the first ones meant. Fortunately, the actual equations are rarely useful to me.

"I'm secure enough to present my results so that my fellows can understand them and debate with me" vs. "I have no penis/insights, but after I throw up Dirichlet and the Chernoff bounds, no one will notice!"