Machine learning

Which machine learning algorithm should I use? - Subconscious Musings. This resource is designed primarily for beginner-to-intermediate data scientists and analysts who want to identify and apply machine learning algorithms to the problems that interest them.

A typical question asked by a beginner, when facing a wide variety of machine learning algorithms, is “which algorithm should I use?” The answer varies depending on many factors, including the size, quality, and nature of the data; the available computational time; the urgency of the task; and what you want to do with the data. Even an experienced data scientist cannot tell which algorithm will perform best before trying several. We are not advocating a one-and-done approach, but we do hope to provide some guidance on which algorithms to try first depending on some clear factors. The machine learning algorithm cheat sheet: additional algorithms will be added later as our library grows to encompass a more complete set of available methods.
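The "try several algorithms first" advice above can be sketched in code. The toy dataset, the majority-class baseline, and the 1-nearest-neighbour model below are illustrative stand-ins of my own, not anything from the cheat sheet itself:

```python
# A minimal sketch of trying more than one algorithm on the same data:
# a majority-class baseline versus a 1-nearest-neighbour classifier.
from collections import Counter

def majority_class(train_X, train_y):
    # Always predict the most common training label, ignoring the input.
    label = Counter(train_y).most_common(1)[0][0]
    return lambda x: label

def one_nearest_neighbour(train_X, train_y):
    # Predict the label of the closest training point (squared distance).
    def predict(x):
        dists = [(sum((a - b) ** 2 for a, b in zip(x, p)), y)
                 for p, y in zip(train_X, train_y)]
        return min(dists)[1]
    return predict

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

# Toy 2-D data: class 0 clusters near the origin, class 1 near (5, 5).
train_X = [(0, 0), (1, 0), (0, 1), (1, 1), (5, 5), (6, 5), (5, 6)]
train_y = [0, 0, 0, 0, 1, 1, 1]
test_X = [(0.5, 0.5), (5.5, 5.5)]
test_y = [0, 1]

for name, fit in [("majority", majority_class),
                  ("1-nn", one_nearest_neighbour)]:
    model = fit(train_X, train_y)
    print(name, accuracy(model, test_X, test_y))
```

On this toy data the nearest-neighbour model beats the baseline, but the point is the loop itself: fit several candidates, score them the same way, then decide.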
A Tour of Machine Learning Algorithms.

In this post, we take a tour of the most popular machine learning algorithms.

It is useful to tour the main algorithms in the field to get a feel for what methods are available. There are so many algorithms that it can feel overwhelming when algorithm names are thrown around and you are expected to just know what they are and where they fit. I want to give you two ways to think about and categorize the algorithms you may come across in the field. The first is a grouping of algorithms by learning style. The second is a grouping of algorithms by similarity in form or function (like grouping similar animals together).
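The two groupings described above can be illustrated with a pair of mappings. The specific algorithm names and category labels below are common examples chosen for illustration, not the post's full taxonomy:

```python
# Two ways to categorize the same algorithms: by how they learn,
# and by similarity in form or function.
by_learning_style = {
    "supervised": ["linear regression", "decision tree", "SVM"],
    "unsupervised": ["k-means", "PCA"],
    "semi-supervised": ["label propagation"],
}

by_similarity = {
    "regression": ["linear regression", "logistic regression"],
    "instance-based": ["k-nearest neighbours"],
    "tree-based": ["decision tree", "random forest"],
    "clustering": ["k-means", "hierarchical clustering"],
}

# The same algorithm can appear in both views: the first asks how it is
# trained, the second asks what family of model it produces.
print(sorted(by_learning_style))
print(sorted(by_similarity))
```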
Google Created an AI That Can Learn Almost as Fast as a Human. Deep Learning, Fast. Deep learning machines have been generating incredible amounts of buzz in recent months.

Their extensive abilities allow them to play video games, recognize faces, and, most importantly, learn. However, these systems learn about 10 times more slowly than humans, which has allowed us to keep the creeping fears of a complete artificial intelligence (AI) takeover at bay. Now, Google has developed an AI that is capable of learning almost as quickly as a human being. Claims of this advancement in speed come from Google’s DeepMind subsidiary in London. Their method mimics the processes of learning that occur in human and animal brains. If you’re unfamiliar with how deep learning works, it uses layers of neural networks to locate trends or patterns in data. Systems can be taught to learn differently depending on many variables, such as the strength of the connection between layers.
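The "layers of neural networks" idea can be made concrete with a tiny forward pass. The hand-picked weights below compute XOR just to show data flowing through stacked layers; real deep learning systems learn such weights from data, and this sketch is my own illustration, not DeepMind's method:

```python
# A two-layer network's forward pass in plain Python. Each layer applies
# weights to its inputs, adds a bias, and squashes through a non-linearity.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    # Hidden layer: one unit behaves like OR, the other like NAND.
    hidden = layer(x, [[20, 20], [-20, -20]], [-10, 30])
    # Output layer: AND of the two hidden units, giving XOR overall.
    output = layer(hidden, [[20, 20]], [-30])
    return output[0]

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward([a, b])))
```

Stacking the two layers lets the network represent XOR, which no single layer of this form can; depth is what buys that extra expressive power.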

But where do you start? Which library do you use? There are just so many! Inside this blog post, I detail 9 of my favorite Python deep learning libraries. This list is by no means exhaustive; it’s simply a list of libraries that I’ve used in my computer vision career and found particularly useful at one time or another. Some of these libraries I use more than others, specifically Keras, mxnet, and sklearn-theano. Others I use indirectly, such as Theano and TensorFlow (which libraries like Keras, deepy, and Blocks build upon). Still others I use only for very specific tasks (such as nolearn and its Deep Belief Network implementation). The goal of this blog post is to introduce you to these libraries. My Top 9 Favorite Python Deep Learning Libraries. Again, I want to reiterate that this list is by no means exhaustive.

Sure you do! Get a great introductory explanation here, as well as suggestions on where to go for further study. By Zygmunt Zając, FastML. So you know the Bayes rule. How does it relate to machine learning? While we have some grasp on the matter, we’re not experts, so the following might contain inaccuracies or even outright errors. Bayesians and Frequentists. In essence, Bayesian means probabilistic: Bayesians treat probability as a measure of belief, so it is subjective and can refer to future events. Frequentists have a different view: they use probability to refer to past events; in this way it’s objective and doesn’t depend on one’s beliefs.
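The Bayes rule the excerpt builds on can be shown with a short worked example. The disease-testing rates below are made-up illustrative numbers of mine, not figures from the article:

```python
# Bayes rule as an update: P(H|E) = P(E|H) * P(H) / P(E),
# with the evidence P(E) expanded over H and not-H.
def bayes(prior, likelihood, false_positive_rate):
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# A 1% prior prevalence and a test that is 95% sensitive with a 5%
# false positive rate: the posterior after one positive test is still
# only about 16%, because the prior was so low.
posterior = bayes(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(round(posterior, 3))
```

This is exactly the prior-to-posterior update the excerpt's "Priors, updates, and posteriors" heading refers to: the prior belief is revised by the likelihood of the observed evidence.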

For a thorough investigation of this topic and more, refer to Jake VanderPlas’ Frequentism and Bayesianism series of articles. Priors, updates, and posteriors.

Long short-term memory - Wikipedia. A simple LSTM block with only input, output, and forget gates.

LSTM blocks may have more gates.[1] Architecture. An LSTM network is an artificial neural network that contains LSTM units instead of, or in addition to, other network units. An LSTM unit is a recurrent network unit that excels at remembering values for either long or short durations of time. The key to this ability is that it uses no activation function within its recurrent components. LSTM units are often implemented in "blocks" containing several LSTM units. LSTM blocks contain three or four "gates" that they use to control the flow of information into or out of their memory. The only weights in an LSTM block (W and U) are used to direct the operation of the gates. These weights occur between the values that feed into the block (including the input vector x_t and the output from the previous time step h_{t-1}) and each of the gates.
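The gate description above can be sketched as one LSTM step in plain Python. Scalars stand in for the vector-valued states, and the W, U, and bias values are arbitrary illustrative numbers, not learned weights:

```python
# One step of a minimal scalar LSTM: input, forget, and output gates
# each mix the current input x with the previous output h_prev, which
# is how the block controls what flows into and out of its memory cell.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    i = sigmoid(W["i"] * x + U["i"] * h_prev + b["i"])    # input gate
    f = sigmoid(W["f"] * x + U["f"] * h_prev + b["f"])    # forget gate
    o = sigmoid(W["o"] * x + U["o"] * h_prev + b["o"])    # output gate
    g = math.tanh(W["g"] * x + U["g"] * h_prev + b["g"])  # candidate value
    # The cell update itself is a plain weighted sum, with no activation
    # on the recurrent path, which is what lets values persist.
    c = f * c_prev + i * g
    h = o * math.tanh(c)  # output for this time step
    return h, c

W = {"i": 0.5, "f": 0.5, "o": 0.5, "g": 1.0}
U = {"i": 0.1, "f": 0.1, "o": 0.1, "g": 0.1}
b = {"i": 0.0, "f": 2.0, "o": 0.0, "g": 0.0}  # bias keeps forget gate open

h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0]:
    h, c = lstm_step(x, h, c, W, U, b)
print(round(c, 3))
```

Because the forget gate stays close to 1, the memory cell decays only slowly after the initial input of 1.0, which is the "remembering values for long durations" behavior the excerpt describes.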