How can you put data to work for you? Specifically, how can numbers in a spreadsheet tell us about present and past business activities, and how can we use them to forecast the future? The answer is in building quantitative models, and this course is designed to help you understand the fundamentals of this critical, foundational business skill. Through a series of short lectures, demonstrations, and assignments, you'll learn the key ideas and process of quantitative modeling so that you can begin to create your own models for your own business or enterprise. By the end of this course, you will have seen a variety of practical, commonly used quantitative models as well as the building blocks that will allow you to start structuring your own models. These building blocks will be put to use in the other courses in this Specialization.

Reviews

SC

The course has excellent content for understanding quantitative modeling and its applications, with great explanations and examples of linear, power, exponential, and log functions.

NM

Jul 23, 2017

5/5 stars

Very good background to quantitative modelling. It gets a bit heavy on the mathematical formulas in places, but if you follow through, it helps cement understanding. Good speed/pace of material.

From the lesson

Module 3: Probabilistic Models

This module explains probabilistic models, which are ways of capturing risk in a process. You'll need to use probabilistic models when you don't know all of your inputs. You'll examine how probabilistic models incorporate uncertainty, and how that uncertainty carries through to the outputs of the model. You'll also discover how propagating uncertainty allows you to determine a range of values for forecasting. You'll learn the most widely used models for risk, including regression models, tree-based models, Monte Carlo simulations, and Markov chains, as well as the building blocks of these probabilistic models, such as random variables, probability distributions, Bernoulli random variables, binomial random variables, the empirical rule, and perhaps the most important of all statistical distributions: the normal distribution, characterized by its mean and standard deviation. By the end of this module, you'll be able to define a probabilistic model, identify and understand the most commonly used probabilistic models, know the components of those models, and determine the most useful probabilistic models for capturing and exploring risk in your own business.
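The propagation of uncertainty the module describes can be sketched in a few lines. The scenario below is hypothetical (the price, cost distribution, and demand parameters are invented for illustration): a normal random variable models an uncertain unit cost, a binomial random variable models demand, and a Monte Carlo simulation pushes that input uncertainty through to a range of profit forecasts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: unit cost is uncertain, modeled as Normal(mean=5.0, sd=0.5);
# demand is Binomial(n=1000, p=0.2) -- each of 1000 potential customers buys
# with probability 0.2 (a sum of 1000 Bernoulli trials).
n_sims = 100_000
unit_cost = rng.normal(5.0, 0.5, n_sims)
demand = rng.binomial(1000, 0.2, n_sims)

price = 8.0
profit = (price - unit_cost) * demand  # profit in each simulated scenario

# The uncertainty in the inputs propagates to a distribution of profits,
# from which we can read off a central forecast and a range.
print(profit.mean())                       # close to (8 - 5) * 200 = 600
print(np.percentile(profit, [2.5, 97.5]))  # a 95% forecast interval
```

Rather than a single point forecast, the simulation yields a whole distribution of outcomes, which is exactly what "determining a range of values for forecasting" means in practice.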

Taught by

Richard Waterman

Transcript

I'm going to finish off this set of distributions by introducing what is perhaps the most important of all of the probability distributions, and that's called the normal distribution. You might have heard the normal distribution referred to colloquially as the bell curve. Now many different processes turn out to be well approximated by normal distributions, and there are some mathematical reasons not to be surprised by that. One of those mathematical reasons is known as the Central Limit Theorem, and it tells us that we really shouldn't be surprised to see normal distributions around us in day-to-day life. It's not a goal of this particular module to talk in depth about the Central Limit Theorem, but it's out there and it does provide an explanation as to why we see normal distributions in practice. Now one of the important features of the normal distribution is that you only need to know two numbers to completely characterize it, and those two numbers happen to be the mean mu and the standard deviation sigma. Furthermore, a normal distribution is symmetric about its mean, so it's one of these symmetric distributions. So here are some examples of processes that might lend themselves to being modeled well with normal distributions. In the biological world, if you go out and measure the heights and weights of people and start plotting those, I wouldn't be at all surprised to see something that looks a little bit like a bell curve. If you go into the financial world and you start looking at stock returns and you plot them, then you see a distribution, a histogram, that looks a little bell-shaped as well. Not exactly, but it's not a bad first approximation. If you were to give an exam and look at the scores on that exam, then it's not unusual to see an approximately normal distribution for a set of exam scores.
And in a manufacturing process, if we were looking at the size of a particular automotive component, there's always a little uncertainty in any manufacturing process. Nothing comes out exactly the same every time. And if we were to, say, take an automotive component, measure its width, and look at a histogram of those widths, I wouldn't be at all surprised to see a normal distribution. So the point that I'm making here is that these normal distributions have a universality to them. We see them all over the place. The normal distribution is a commonly used model, and if you're creating one of these Monte Carlo simulations, it's very common to make normality assumptions for the inputs; the Bernoulli and binomial distributions also really lend themselves as building blocks of models. So with that said, to introduce the idea of the normal distribution, we're going to need to have a look at some of these distributions. First of all, the normal distribution is an example of one of these continuous distributions. Contrast that with the Bernoulli and the binomial. Remember, the binomial was counting the number of successes in n trials, so that's a discrete outcome, whereas the normal, at least in theory, can take on any possible value. So what I've done on this slide is show you some different normal distributions. Now there's more than one normal distribution, because you can have different values for the mean and the standard deviation. And the standard deviation, recall, is a measure of spread. So the smaller the standard deviation, the more squashed up the graph is; the larger the standard deviation, the more spread out it is. So these four graphs are all examples of normal distributions. They differ in terms of their centrality, or means, and their spread, or standard deviations. The normal distribution that you can see in the pink color, with a mean of 0 and a standard deviation of 1, is sometimes called the standard normal.
It's a reference point. The green normal distribution just to the right of it still has a standard deviation of 1, but the mean has shifted to the right by 1. The blue one on the left-hand side has a mean of -3 and a standard deviation of 2, so it's more spread out. And the mauve, purplish one on the right-hand side has a mean of 3 and a standard deviation of 3. So you can see that as the standard deviation goes up, the curves get more spread out. They do have one thing in common, apart from the so-called bell shape: under each of these curves the area is exactly equal to 1, because something has to happen. So there's a whole set of normal probability distributions. Normal probability distributions tend to appear in practice a lot; they're a good approximation to many processes. And when you have data or an underlying distribution that you think is approximately normal, again, don't forget: all models are wrong, but some are useful. So I don't believe that normality will be an exact representation of a process, but it could be a very useful approximation.
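The claim that every normal curve encloses an area of exactly 1 can be checked numerically. The sketch below (a minimal illustration assuming NumPy; the four mean/standard-deviation pairs are the ones from the slide) evaluates the normal density directly from its formula and integrates it with a simple Riemann sum.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    # Density of the Normal(mu, sigma) distribution:
    # f(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# The four normal distributions described on the slide:
# standard normal N(0, 1), then N(1, 1), N(-3, 2), and N(3, 3).
params = [(0, 1), (1, 1), (-3, 2), (3, 3)]

x = np.linspace(-30, 30, 600_001)  # a grid wide enough to capture all the tails
dx = x[1] - x[0]
for mu, sigma in params:
    area = (normal_pdf(x, mu, sigma) * dx).sum()  # Riemann-sum approximation
    print(f"N({mu}, {sigma}): area = {area:.6f}")  # each area is 1, up to grid error
```

However the mean shifts the curve and the standard deviation stretches it, the total area stays at 1, because the curve is a probability distribution and something has to happen.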