If you are a software developer who wants to build scalable AI-powered algorithms, you need to understand the tools used to build them. This course is part of the upcoming Machine Learning in TensorFlow Specialization and will teach you best practices for using TensorFlow, a popular open-source framework for machine learning.
In Course 2 of the deeplearning.ai TensorFlow Specialization, you will learn advanced techniques to improve the computer vision model you built in Course 1. You will explore how to work with real-world images in different shapes and sizes, visualize the journey of an image through convolutions to understand how a computer “sees” information, plot loss and accuracy, and explore strategies to prevent overfitting, including augmentation and dropout. Finally, Course 2 will introduce you to transfer learning and how learned features can be extracted from models.
The Machine Learning course and Deep Learning Specialization from Andrew Ng teach the most important and foundational principles of Machine Learning and Deep Learning. This new deeplearning.ai TensorFlow Specialization teaches you how to use TensorFlow to implement those principles so that you can start building and applying scalable models to real-world problems. To develop a deeper understanding of how neural networks work, we recommend that you take the Deep Learning Specialization.

CM

A patient and coherent introduction. At the end, you have good working code you can use elsewhere. Remarkably, the primary lecturer, Laurence Moroney, responds fairly quickly to posts in the forum.

RC

May 15, 2019

★★★★★

Excellent material superbly presented by world-class experts.

Sorry if this sounds sycophantic, but this series contains some of the best courses I've encountered in 50+ years of learning.

From the lesson

Multiclass Classifications

You've come a long way; congratulations! There's one more thing to do before we move off of ConvNets to the next module, and that's to go beyond binary classification. Each of the examples you've done so far has involved classifying one thing or another: horse or human, cat or dog. When moving beyond binary into categorical classification, there are some coding considerations you need to take into account. You'll look at them this week!

Instructor:

Laurence Moroney

AI Advocate

Transcript

Now, this is a new data set that I created for learning opportunities. It's freely available, and it consists of about 3,000 images. They've all been generated using CGI with a diverse array of models, male and female, and lots of different skin tones, and here are some examples. If you want to download the data sets, you can find them at this URL. They contain a training set, a validation set, and some extra images that you can download to test the network for yourself.

Once your directory is set up, you need to set up your image generator. Here's the code that you used earlier, but note that the class mode was set to binary. For multiple classes, you'll have to change this to categorical, like this.

The next change comes in your model definition, where you'll need to change the output layer. For a binary classifier, it was more efficient for you to just have one neuron and use a sigmoid function to activate it. This meant that it would output close to zero for one class and close to one for the other. That doesn't fit for multi-class, so we need to change it, but it's pretty simple. Now we have an output layer that has three neurons, one for each of the classes rock, paper, and scissors, and it's activated by softmax, which turns all the values into probabilities that sum up to one.

So what does that really mean? Consider a hand like this one. It's most likely a paper, but because she has her first two fingers open and the rest joined, it could also be mistaken for scissors. The output of a neural network with three neurons and a softmax would reflect that, and might look like this: a very low probability for rock, a really high one for paper, and a decent one for scissors. All three probabilities would still add up to one.

The final change comes when you compile your network. If you recall, in the earlier examples your loss function was binary cross entropy. Now you'll change it to categorical cross entropy, like this.
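The softmax behavior described here can be sketched in plain NumPy (the raw scores below are made up purely for illustration, not taken from the lecture's actual network):

```python
import numpy as np

def softmax(logits):
    """Turn raw scores into probabilities that sum to one."""
    exps = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exps / exps.sum()

# Hypothetical raw scores for [rock, paper, scissors] on the ambiguous hand
logits = np.array([0.1, 4.0, 2.5])
probs = softmax(logits)

print(probs)        # low for rock, high for paper, decent for scissors
print(probs.sum())  # the three probabilities add up to one
```

Whatever the raw scores are, softmax maps them to non-negative values summing to one, which is why the three class probabilities in the example always total 1.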
There are other categorical loss functions, including the sparse categorical cross entropy that you used in the fashion example, and you can of course also use those. I ran this for 100 epochs and got this chart; it shows that training accuracy hits a maximum at about 25 epochs, so I'd recommend not using many more than that. And that's really all you have to do. So let's take a look at it in the workbook.
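Taken together, the three changes from the binary version might look like this in Keras. This is a minimal sketch assuming TensorFlow 2.x: the directory path is a placeholder, and the flattened Dense stack is a simplification, not the course's exact convolutional model.

```python
import tensorflow as tf

# 1) Image generator: class_mode changes from 'binary' to 'categorical'.
#    (Path below is a placeholder for wherever you unzipped the training set.)
# train_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1 / 255)
# train_data = train_gen.flow_from_directory(
#     'rps-training/', target_size=(150, 150), class_mode='categorical')

# 2) Output layer: three neurons (rock, paper, scissors) activated by
#    softmax, instead of a single sigmoid neuron.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(150, 150, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])

# 3) Loss: categorical cross entropy instead of binary cross entropy.
#    With integer labels rather than one-hot labels, you would use
#    'sparse_categorical_crossentropy' instead.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Note the pairing of label format and loss: `categorical_crossentropy` expects one-hot labels (which `class_mode='categorical'` produces), while `sparse_categorical_crossentropy` expects plain integer class indices.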