AI is not only for engineers. If you want your organization to become better at using AI, this is the course to tell everyone--especially your non-technical colleagues--to take.
In this course, you will learn:
- The meaning behind common AI terminology, including neural networks, machine learning, deep learning, and data science
- What AI realistically can--and cannot--do
- How to spot opportunities to apply AI to problems in your own organization
- What it feels like to build machine learning and data science projects
- How to work with an AI team and build an AI strategy in your company
- How to navigate ethical and societal discussions surrounding AI
Though this course is largely non-technical, engineers can also take this course to learn the business aspects of AI.

Taught by

Andrew Ng

Transcript

When you work with AI teams, you may hear them refer to the tools that they're using to build these AI systems. In this video, I want to share with you some details and names of the most commonly used AI tools, so that you'll be able to better understand what these AI engineers are doing. We're fortunate that the AI world today is very open, and many teams will openly share ideas with each other. There are great open source machine learning frameworks that many teams will use to build their systems. So, if you hear of any of these: TensorFlow, PyTorch, Keras, MXNet, CNTK, Caffe, PaddlePaddle, Scikit-learn, R, or Weka, all of these are open source machine learning frameworks that help AI teams be much more efficient in terms of writing software. AI technology breakthroughs are also published freely on the Internet, on a website called arXiv, spelled a-r-x-i-v. I hope that other academic communities also freely share their research, since I've seen firsthand how much this accelerates progress in the whole field of AI. Finally, many teams will also share their code freely on the Internet, most commonly on a website called GitHub. This has become the de facto repository for open source software in AI and in other sectors as well. By using appropriately licensed open source software, many teams can get going much faster than if they had to build everything from scratch. So, for example, if you search for face recognition software on GitHub, you might find a web page like this. If you scroll down, it has a pretty good, very readable description of the software made available on this website for recognizing people's faces, and even finding parts of people's faces. 
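To give a feel for what these frameworks automate, here is a minimal sketch, in plain Python, of the kind of training loop that sits at the heart of machine learning. The data, learning rate, and step count are made-up illustrative values; frameworks like TensorFlow, PyTorch, and scikit-learn do this kind of work (and far more) at massive scale so teams don't have to write it by hand.

```python
# Toy "machine learning" in plain Python: fit y = w * x to a few
# data points by gradient descent. Everything here is illustrative.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0    # the single parameter our tiny "model" learns
lr = 0.05  # learning rate (step size)

for step in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # take a small step downhill

print(round(w, 2))  # w ends up close to 2, matching the data
```

An open source framework replaces this hand-written loop with a few library calls, handles models with millions of parameters instead of one, and runs them efficiently on modern hardware.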
There's just a ton of software for doing all sorts of things that is freely downloadable from the Internet. Just double-check the license, or have your AI team double-check the license, before using it in a product, of course. But a lot of this software is open source, or is otherwise very permissively licensed for anyone to use. Although GitHub is a technical website built for engineers, you should feel free to play around on GitHub and see what types of AI software people have released online as well. In addition to these open source technical tools, you'll often also hear AI engineers talk about CPUs and GPUs. Here's what these terms mean. A CPU is the computer processor in your computer, whether it's your desktop, your laptop, or a computer server off in the Cloud. CPU stands for central processing unit, and CPUs are made by Intel, AMD, and a few other companies. This does a lot of the computation in your computer. GPU stands for graphics processing unit. Historically, the GPU was made to process pictures. So, if you play a video game, it's probably a GPU that is drawing the fancy graphics. But what we found several years ago was that the hardware originally built for processing graphics turns out to be very, very powerful for building very large neural networks, that is, very large deep learning algorithms. Given the need to build very large deep learning or neural network systems, the AI community has had an insatiable hunger for more and more computational power to train bigger and bigger neural networks. GPUs have proved to be a fantastic fit for this type of computation. So, that's why GPUs are playing a big role in the rise of deep learning. NVIDIA is a company that's been selling many GPUs, but other companies, including Qualcomm, as well as Google making its own specialized processors, are increasingly making specialized hardware for powering these very large neural networks. 
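The reason graphics hardware fits neural networks so well can be sketched in a few lines: the core of a neural network layer is a matrix multiplication, and each cell of the result can be computed independently of all the others, which is exactly the kind of highly parallel arithmetic GPUs were built for. The tiny matrices below are illustrative; real networks involve millions of such operations per layer.

```python
def matmul(A, B):
    """Multiply two matrices with plain Python loops.
    Each output cell is an independent sum of multiply-adds,
    which is the kind of work a GPU can run massively in parallel."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

A CPU works through these multiply-adds a few at a time; a GPU has thousands of simple cores that can compute huge numbers of them simultaneously, which is why training large neural networks runs so much faster on GPUs.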
Finally, you might hear about Cloud versus On-premises, or for short, On-prem, deployments. Cloud deployment refers to renting compute servers, such as from Amazon's AWS, Microsoft's Azure, or Google's GCP, in order to use someone else's servers to do your computation. An On-prem deployment means buying your own compute servers and running the service locally in your own company. A detailed exploration of the pros and cons of these two options is beyond the scope of this video. A lot of the world is moving to Cloud deployments, and if you search online, you'll find many articles talking about the pros and cons of Cloud versus On-prem deployments. There is one last term you might hear about, which is Edge deployments. If you are building a self-driving car, there's not enough time to send data from the car to a Cloud server to decide whether to stop the car, and then send that message back to the car. So, the computation usually has to happen in a computer right there inside the car. That's called an edge deployment, where you put a processor right where the data is collected, so that you can process the data and make a decision very quickly, without needing to transmit the data over the Internet to be processed somewhere else. If you look at some of the smart speakers in your home as well, this too is an edge deployment, where some, not all, but some of the speech recognition work is done by a processor built right into the smart speaker in your home. The main advantage of edge deployment is that it can reduce the response time of the system, and also reduce the amount of data you need to send over the network. But there are many pros and cons of Edge versus Cloud versus On-prem deployments that you can also search online to read more about. Thanks for finishing this optional video on the technical tools that AI engineers use. 
Hopefully, when you hear them refer to some of these tools, you'll start to have a better sense of what they mean. I look forward to seeing you next week.