It can be discouraging to know that so many peer-reviewed studies are invalid, but it’s great to see that researchers are moving toward more rigorous testing of previously accepted findings. Everyone should avoid jumping to conclusions from insufficient evidence, but this is especially relevant to data practitioners: we must be extra careful, because it’s very easy to unintentionally mislead others with data.

Boris Gorelik

Why can’t you divide by zero? Why does the factorial of 0 equal 1? What’s so special about the number 78557? If you, your brother, sister, or grand-uncle loves numbers but lacks a formal math degree, check out this wonderful YouTube channel called Numberphile — a channel that hosts short, professionally made “videos about numbers.”

Why does adding more evidence for a particular case decrease our confidence in that case? In the 2016 paper “Too good to be true: when overwhelming evidence fails to convince,” Lachlan J. Gunn and his co-authors begin by citing an ancient Talmudic law under which a defendant could not be convicted of a capital crime by a unanimous court. They then use Bayesian statistics to show how unanimous evidence should reduce our confidence and raise the suspicion of systematic bias.
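The paper’s argument can be sketched with a toy Bayesian model: each witness is independently correct with some probability, but there is a small chance the whole identification process is biased so that every witness agrees regardless of the truth. The specific numbers below (witness accuracy, bias probability, prior) are my own illustrative assumptions, not values from the paper.

```python
def posterior_guilt(n, prior=0.5, p=0.8, eps=0.01):
    """P(guilty | n unanimous 'guilty' identifications).

    p   -- probability each witness is correct, if the process is unbiased
    eps -- probability the process is systematically biased, in which case
           all witnesses agree no matter what (assumed, illustrative values)
    """
    like_guilty = (1 - eps) * p ** n + eps            # unanimity given guilt
    like_innocent = (1 - eps) * (1 - p) ** n + eps    # unanimity given innocence
    num = prior * like_guilty
    return num / (num + (1 - prior) * like_innocent)

for n in (1, 3, 10, 30):
    print(n, round(posterior_guilt(n), 3))
```

Confidence rises with the first few unanimous witnesses, peaks, and then falls back toward the prior: a very long run of perfect agreement is more plausible under the “biased process” hypothesis than under independent, fallible witnesses.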

And, of course, I was reading more about AlphaZero’s foray into chess and how flabbergasted grandmasters (#, #, #) around the world are reacting to its peculiar genius. Quote from the videos: “Any A.I. smart enough to pass Alan Turing’s test would be smart enough to fail it.”