If you are a software developer who wants to build scalable AI-powered algorithms, you need to understand how to use the tools to build them. This course is part of the upcoming Machine Learning in TensorFlow Specialization and will teach you best practices for using TensorFlow, a popular open-source framework for machine learning.
In Course 2 of the deeplearning.ai TensorFlow Specialization, you will learn advanced techniques to improve the computer vision model you built in Course 1. You will explore how to work with real-world images in different shapes and sizes, visualize the journey of an image through convolutions to understand how a computer “sees” information, plot loss and accuracy, and explore strategies to prevent overfitting, including augmentation and dropout. Finally, Course 2 will introduce you to transfer learning and how learned features can be extracted from models.
The Machine Learning course and Deep Learning Specialization from Andrew Ng teach the most important and foundational principles of Machine Learning and Deep Learning. This new deeplearning.ai TensorFlow Specialization teaches you how to use TensorFlow to implement those principles so that you can start building and applying scalable models to real-world problems. To develop a deeper understanding of how neural networks work, we recommend that you take the Deep Learning Specialization.

CM

A patient and coherent introduction. At the end, you have good working code you can use elsewhere. Remarkably, the primary lecturer, Laurence Moroney, responds fairly quickly to posts in the forum.

RC

May 15, 2019

★★★★★

Excellent material superbly presented by world-class experts.

Sorry if this sounds sycophantic, but this series contains some of the best courses I've encountered in 50+ years of learning.

From the lesson

Augmentation: A technique to avoid overfitting

You've heard the term overfitting a number of times to this point. Overfitting is simply the concept of being over-specialized in training -- namely, your model is very good at classifying the things it was trained on, but not so good at classifying things it hasn't seen. To generalize your model more effectively, you will of course need a greater breadth of samples to train it on. That's not always possible, but a nice potential shortcut is Image Augmentation, where you tweak the training set to potentially increase the diversity of subjects it covers. You'll learn all about that this week!
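The idea above can be sketched with Keras's `ImageDataGenerator`, which applies random transforms on the fly as batches are drawn for training. The parameter values below are illustrative choices, not prescriptions from this lesson, and the random arrays merely stand in for a real image dataset:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Each parameter describes a random tweak applied to training images:
# rotations up to 40 degrees, shifts/shears/zooms up to 20%, and random
# horizontal flips; 'nearest' fills pixels uncovered by a transform.
datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# Stand-in batch of 8 random 150x150 RGB "images" just to show the flow.
images = np.random.rand(8, 150, 150, 3).astype('float32')
labels = np.zeros(8)

# flow() yields augmented variants in memory; the originals are untouched.
batch_x, batch_y = next(datagen.flow(images, labels, batch_size=8))
print(batch_x.shape)  # (8, 150, 150, 3)
```

Because augmentation happens at read time, the effective diversity of the training set grows without storing any extra files on disk.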

Taught By

Laurence Moroney

AI Advocate

Transcript

To this point, we've been creating convolutional neural networks that train to recognize images in binary classes: horses or humans, cats or dogs. They've worked quite well despite having relatively small amounts of data to train on. But we're at risk of falling into a trap of overconfidence caused by overfitting. Namely, when the dataset is small, we have relatively few examples, and as a result we can make some mistakes in our classification.

You've probably heard us use the term overfitting a lot, and it's important to understand what that is. Think of it as being very good at spotting something from a limited dataset, but getting confused when you see something that doesn't match your expectations. So, for example, imagine that these are the only shoes you've ever seen in your life. Then you learn that these are shoes, and this is what shoes look like. So if I were to show you these, you would recognize them as shoes even if they are different sizes than what you would expect. But if I were to show you this, even though it's a shoe, you would likely not recognize it as such. In that scenario, you have overfit in your understanding of what a shoe looks like. You weren't flexible enough to see this high heel as a shoe, because all of your training and all of your experience of what shoes look like came from these hiking boots.

Now, this is a common problem in training classifiers, particularly when you have limited data. If you think about it, you would need an infinite dataset to build a perfect classifier, but that might take a little too long to train. So in this lesson, I want to look at some tools that are available to you to make your smaller datasets more effective. We'll start with a simple concept: augmentation. When using convolutional neural networks, we've been passing convolutions over an image in order to learn particular features. Maybe it's the pointy ears for a cat, or two legs instead of four for a human, that kind of thing.
Convolutions have been very good at spotting these features if they're clear and distinct in the image. But could we go further? What if, for example, we could transform the image of the cat so that it could match other pictures of cats where the ears are oriented differently? If the network was never trained on an image of a cat reclining like this, it may not recognize one. If you don't have the data for a reclining cat, you could end up in an overfitting situation. But if your images are fed into the training with augmentation, such as a rotation, the feature might then be spotted: even if you don't have a reclining cat, your upright cat, when rotated, could end up looking much the same.
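The rotated-cat point can be made concrete with a toy numpy sketch (an illustration, not part of the lecture): a pattern learned only from upright examples fails to match its rotated form, but rotating the training example reproduces exactly what the "reclining" image contains.

```python
import numpy as np

# A tiny 3x3 "feature": a vertical edge, as a filter might learn it
# from upright images.
upright = np.array([[0, 1, 0],
                    [0, 1, 0],
                    [0, 1, 0]])

# The same feature in a "reclining" image: rotated 90 degrees, it is
# now a horizontal edge and no longer matches the upright pattern.
reclining = np.rot90(upright)
print(np.array_equal(reclining, upright))            # False

# Augmentation closes the gap: rotating the upright training example
# yields exactly the pattern seen in the reclining image.
print(np.array_equal(np.rot90(upright), reclining))  # True
```

A real augmentation pipeline applies such rotations (among other transforms) randomly during training, so the network sees features at many orientations without any new photographs being collected.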
