This recent paper approaches the question of why deep learning works from a different perspective, that of physics, and discusses the role of "cheap learning" (parameter reduction) within that framework.

This recent paper uses still facial images in an attempt to distinguish criminals from non-criminals, training four different classifiers to do so. The results are deeply troubling.

Even when their parameters and training data are kept confidential, machine learning models exposed through public-facing APIs are vulnerable to model extraction attacks, which attempt to "steal the ingredients" and duplicate the model's functionality. The paper at hand investigates.
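To make the idea concrete, here is a minimal sketch of one flavor of extraction attack against a model that returns confidence scores rather than bare labels. The victim here is a hypothetical logistic-regression service (`query_api` is an assumed stand-in for a real endpoint); because the attacker can invert the logit, d+1 queries suffice to recover the hidden weights by solving a linear system.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
w_secret = rng.normal(size=d)   # victim's hidden weights
b_secret = 0.7                  # victim's hidden bias

def query_api(x):
    """Black-box victim: the attacker sees only the returned confidence score."""
    return 1.0 / (1.0 + np.exp(-(x @ w_secret + b_secret)))

# Attacker side: d+1 queries, then solve logit(p) = x.w + b for w and b.
X = rng.normal(size=(d + 1, d))
probs = np.array([query_api(x) for x in X])
logits = np.log(probs / (1.0 - probs))
A = np.hstack([X, np.ones((d + 1, 1))])   # unknowns: w (d entries) and b (1)
solution = np.linalg.solve(A, logits)
w_stolen, b_stolen = solution[:-1], solution[-1]
```

Richer model classes (trees, neural networks) resist this exact inversion, which is where the approximate, query-and-retrain style of attack comes in.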

Deep learning pioneers Yann LeCun and Yoshua Bengio have undertaken a grand experiment in academic publishing. Embracing a radical level of transparency and unprecedented public participation, they've created an opportunity not only to find and vet the best papers, but also to gather data about the publication process itself.

ArXiv.org gives researchers the ability to publish research instantly, free of peer review and the traditional publication cycle. This capability brings both advantages and pitfalls; the 24/7 news cycle offers a cautionary tale for how it could go wrong.

How to tell correlation from causation is one of the key problems in data science and Big Data. New methods based on Additive Noise Models can distinguish cause from effect with over 65% accuracy, opening up promising new possibilities.
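The core intuition behind additive noise models can be sketched in a few lines. If Y = f(X) + N with the noise N independent of X, then regressing Y on X should leave residuals that look independent of X, while the reverse regression typically should not. The dependence score below (correlation of absolute residuals with the absolute putative cause) is a crude stand-in for the independence tests used in practice, such as HSIC; the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 2000)
y = x**3 + rng.normal(0, 1, 2000)   # ground truth: X -> Y

def residual_dependence(cause, effect, degree=5):
    """Fit effect ~ poly(cause), return a crude residual-dependence score."""
    coeffs = np.polyfit(cause, effect, degree)
    resid = effect - np.polyval(coeffs, cause)
    # Proxy for independence testing: |residual| vs |cause| correlation.
    return abs(np.corrcoef(np.abs(resid), np.abs(cause))[0, 1])

score_xy = residual_dependence(x, y)   # candidate direction X -> Y
score_yx = residual_dependence(y, x)   # candidate direction Y -> X
direction = "X -> Y" if score_xy < score_yx else "Y -> X"
```

The direction whose residuals look more independent of the input is declared the causal one; on this synthetic example that is the true direction, X -> Y.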