An Artificial Intelligence Primer

If you’re in tech, you hear these terms all the time and probably wonder what the differences are. They can get a bit confusing, which is why I write these kinds of articles.

The basic hierarchy

The basic hierarchy of the main terms is as such:

Artificial Intelligence is an attempt to design an intelligent agent that perceives its environment and makes decisions to maximize chances of achieving its goals.

Machine Learning gives computers the ability to learn from data alone rather than needing to be explicitly programmed.

Supervised Learning learns from labeled examples so that it can output a classification or a value for a given input.

Unsupervised Learning finds patterns and structure in unlabeled data.

Reinforcement Learning looks to find the best actions to take to achieve goals or maximize reward.

So, AI is an attempt to make a human-like entity that tries to achieve its goals; machine learning is a way of making computers learn from data without being explicitly re-programmed; and supervised, unsupervised, and reinforcement learning are different types of machine learning.

These definitions are not perfect because I’m not an expert in this field. I often write these primers to help myself through the learning process, and that’s the case here with AI.

Terminology

Backpropagation is how a network learns from its errors: starting at the output, you calculate how much each weight and bias contributed to the error, then propagate that error signal backward, layer by layer, through however many layers are in the system. The result tells you how to tweak each weight and bias to get closer to the value you want.
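To make that concrete, here’s a tiny sketch in Python. The “network” is just two weights multiplied together (no real library, and the numbers are made up), but it shows the chain rule carrying the error backward from the output to each layer:

```python
# Toy backpropagation through a two-layer linear "network":
#     h = w1 * x        (hidden layer)
#     y = w2 * h        (output layer)
#     cost = (y - target)**2
# The chain rule propagates the error from the cost backward to each weight.
x, target = 1.0, 4.0
w1, w2 = 0.5, 0.5

for _ in range(200):
    # forward pass
    h = w1 * x
    y = w2 * h
    # backward pass: gradients flow from the cost back through each layer
    d_cost_d_y = 2 * (y - target)
    d_cost_d_w2 = d_cost_d_y * h     # output-layer weight
    d_cost_d_h = d_cost_d_y * w2     # error passed back to the hidden layer
    d_cost_d_w1 = d_cost_d_h * x     # hidden-layer weight
    # tweak both weights a small amount against their gradients
    w1 -= 0.1 * d_cost_d_w1
    w2 -= 0.1 * d_cost_d_w2

# after training, w1 * w2 * x is close to the target of 4.0
```

A real network does the same thing, just with many more weights per layer and a nonlinearity between layers.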

Cost is a measure of the difference between the model’s output and reality. It’s basically how wrong or bad the model is in its current form. The goal of training is to make the cost as low as possible.

Gradient Descent is how you find a minimum of the cost function. In the simple one-dimensional case, you look at the slope at your current location, figure out which direction leads downhill, take a small step in that direction, and then repeat until the slope flattens out.
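Here’s a minimal one-dimensional sketch, minimizing a made-up cost function f(x) = (x − 3)² whose minimum we know is at x = 3:

```python
# Minimal 1-D gradient descent on f(x) = (x - 3)**2, whose minimum is at x = 3.
# Each step moves a small amount against the slope (the derivative).
def gradient_descent(start, learning_rate=0.1, steps=100):
    x = start
    for _ in range(steps):
        slope = 2 * (x - 3)           # derivative of (x - 3)**2
        x -= learning_rate * slope    # step downhill
    return x

x_min = gradient_descent(start=0.0)   # ends up very close to 3.0
```

The "small step" is the learning rate: too large and you overshoot the minimum, too small and it takes forever to get there.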

Stochastic Gradient Descent is the variant of Gradient Descent that’s usually used in practice. Instead of computing the gradient over the entire dataset on every iteration, each update uses a randomly chosen example (or a small batch of examples), which makes iterations much cheaper.
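A sketch of the stochastic part, fitting a single made-up weight w in y = w · x to noiseless data where the true weight is 2 (batch size of one, for simplicity):

```python
import random

# Stochastic gradient descent: each update uses one randomly chosen example
# instead of the whole dataset. We fit y = w * x to data with true weight 2.
random.seed(0)
data = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0
learning_rate = 0.01
for _ in range(1000):
    x, y = random.choice(data)     # a random single example (batch size 1)
    grad = 2 * (w * x - y) * x     # gradient of (w*x - y)**2 with respect to w
    w -= learning_rate * grad

# w ends up very close to the true value of 2.0
```

Each individual step is noisy, but on average the steps point downhill, so w still converges.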

Markov Chains are systems that transition from one state to another, where the probability of the next state depends only on the current state, not on the history of how the system got there.
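A quick sketch with a hypothetical two-state weather chain (the probabilities are made up). Notice that choosing the next state only ever looks at the current one:

```python
import random

# A two-state Markov chain: from each state, the next state is drawn
# according to probabilities that depend only on the current state.
transitions = {
    "sunny": [("sunny", 0.9), ("rainy", 0.1)],
    "rainy": [("sunny", 0.5), ("rainy", 0.5)],
}

def next_state(state, rng):
    states, weights = zip(*transitions[state])
    return rng.choices(states, weights=weights)[0]

rng = random.Random(0)
state = "sunny"
history = [state]
for _ in range(10):
    state = next_state(state, rng)   # depends only on `state`, not `history`
    history.append(state)
```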

Regression to the Mean is the phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement—and if it is extreme on its second measurement, it will tend to have been closer to the average on its first (Wikipedia).
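You can see the effect in a small simulation (all the numbers here are invented): each “measurement” is an underlying skill plus independent noise, and the people who score extremely high the first time score closer to average the second time:

```python
import random

# Simulate regression to the mean: each measurement = true skill + noise.
rng = random.Random(42)
n = 10_000
skill = [rng.gauss(0, 1) for _ in range(n)]
first = [s + rng.gauss(0, 1) for s in skill]    # first measurement
second = [s + rng.gauss(0, 1) for s in skill]   # second measurement

# take the top 5% of scorers on the first measurement...
top = sorted(range(n), key=lambda i: first[i], reverse=True)[: n // 20]
avg_first = sum(first[i] for i in top) / len(top)
avg_second = sum(second[i] for i in top) / len(top)

# ...their second-measurement average is well below their first,
# though still above the overall mean of 0 (they really are more skilled)
```

The extremes partly reflect luck, and luck doesn’t repeat, so the second measurement drifts back toward the average.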

(in progress…)

I hope this has been useful.

Notes

A number of these are simply taken from open resources; others are my own summaries. For a much deeper look, I recommend this overview article from Wikipedia.