This week on /r/MachineLearning, we see a discussion of the current state of ML research, a new linear algebra course, a network analysis of Reddit comments, a hands-on introduction to deep learning, and a basic introduction to genetic algorithms.

This self-post (text-only post) decries the use of hyperparameter tweaking as a justification for publishing entire papers. The author sees these papers, which they claim are becoming more common, as misleading and unnecessary. Judging by the post's high upvote count, many in the /r/MachineLearning community agree. There is some interesting discussion in the comments.

This new Coursera course covers basic linear algebra. More importantly, it does so through applications. So if you need to learn basic linear algebra and find that an application-driven approach works best for you, give this one a shot.

This post analyzes the network of Reddit comments over a large dataset. The included visualization is well-made (and very complex), but I fear that complexity makes it fairly difficult to draw general conclusions about the network and Reddit itself. Regardless, the author has some interesting thoughts about apparent patterns in the data, and it’s worth looking at.

This step-by-step video explains why you may want to use deep learning, then dives into applying it in Python. It's fairly in-depth for an introduction to the topic, so if you've been waiting for a good hands-on way to get into deep learning, this may be the way to do it.

This article is a great first exposure to the topic of genetic algorithms. It covers the structure of genetic algorithms and the motivation for using them. It then details some pseudocode and explains various properties of genetic algorithms through examples.
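The article's own pseudocode isn't reproduced here, but the standard loop it describes (selection, crossover, mutation repeated over generations) can be sketched in Python on a toy "onemax" problem, where the goal is to evolve a bitstring of all ones. Every name and parameter below is illustrative, not taken from the article:

```python
import random

random.seed(42)  # fixed seed for a reproducible run

TARGET = [1] * 20  # toy goal: evolve a bitstring of all ones


def fitness(individual):
    # Count bits matching the target (higher is better).
    return sum(1 for a, b in zip(individual, TARGET) if a == b)


def select(population):
    # Tournament selection: pick the fitter of two random individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b


def crossover(parent1, parent2):
    # Single-point crossover: splice a prefix of one parent onto
    # a suffix of the other.
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:]


def mutate(individual, rate=0.01):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]


def evolve(pop_size=50, generations=100):
    # Start from a random population, then repeatedly breed a new one.
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)


best = evolve()
print(fitness(best), "/", len(TARGET))
```

The fitness function, selection scheme, and crossover operator are exactly the pluggable pieces that introductions like this one tend to walk through; swapping any of them out changes how the search behaves.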