The purpose of this course is to review the material covered in the Fundamentals of Engineering (FE) exam to enable the student to pass it. It will be presented in modules corresponding to the FE topics, particularly those in Civil and Mechanical Engineering. Each module will review main concepts, illustrate them with examples, and provide extensive practice problems.

Reviews

TJ

It's a good way to start studying for the FE exam, but you will also need a book covering all the FE topics to study with.

PD

Dec 09, 2018

5/5 stars

These are the best videos for working professionals who cannot devote much time to reading the test material. Nicely explained!

In this lesson

Probability and Statistics

This module reviews the basic principles of probability and statistics covered in the FE Exam. We first review some basic parameters and definitions in statistics, such as mean and dispersion properties, followed by computation of permutations and combinations. We then give the definitions of probability and the laws governing it, and apply Bayes' theorem. We study probability distributions and cumulative distribution functions, and learn how to compute an expected value. Particular probability distributions covered are the binomial distribution, applied to discrete binary events, and the normal, or Gaussian, distribution. We show the meaning of confidence levels and intervals and how to use and apply them. We define and apply the central limit theorem to sampling problems and briefly introduce the t and χ² distributions. We define hypothesis testing and show how to apply it to random data. Finally, we show how to apply linear regression estimates to data and estimate the degree of fit, including correlation coefficients and variances. In all cases, basic ideas and equations are presented along with sample problems that illustrate the major ideas and provide practice on expected exam questions. Time: Approximately 3 hours | Difficulty Level: Medium

Taught by

Dr. Philip Roberts

Transcript

Continuing our discussion of probability distributions, now I want to discuss the central limit theorem. The central limit theorem, shown here, states that if we take many samples of N items each from a larger population which has a normal distribution with a mean μ and a variance σ², then the means of the samples, or the sample means, are themselves normally distributed with a standard deviation given by σ divided by the square root of N, the number of items in each sample, and the mean value of the sample means is equal to the mean value of the larger population. To see how this arises, let's consider this example that I showed some time ago of the variations of velocity in a turbulent velocity field, where the vertical axis here is the speed in centimeters per second, and the horizontal axis is the time in seconds. From this record, we can easily compute an average value, which turns out to be 5.82 centimeters per second, and a standard deviation σ, which turns out to be approximately 0.53 centimeters per second, as shown here. But what if, instead, I computed the averages of smaller sub-samples? For example, I take the first sequence of points here and compute an average value, and I get that. Then I take another, say, 20 values, and I get an average value here. And another one here. So I get a sequence of averages, shown by the red dots here. Similarly, I can compute the average of those averages, and that average, provided I take enough samples, turns out to be exactly the same as the average of the larger population. The variation, though, is obviously much reduced: the standard deviation of those red dots about the mean line is much less than the standard deviation of the larger population, and is given by this expression here, σ/√N. So this is how this situation arises.
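The shrinking spread of the sample means can be checked numerically. The sketch below (assuming NumPy is available; the mean of 5.82 cm/s, standard deviation of 0.53 cm/s, and 20-point sub-sample size are taken from the velocity example above) draws many samples and confirms that the standard deviation of the sample means is roughly σ/√N while their mean matches the population mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population mimicking the velocity record: mean 5.82 cm/s, sigma 0.53 cm/s
mu, sigma = 5.82, 0.53
N = 20               # items per sample (like the 20-point sub-samples)
num_samples = 10000  # number of red dots we collect

# Draw many samples of N items and compute each sample's mean
sample_means = rng.normal(mu, sigma, size=(num_samples, N)).mean(axis=1)

# Central limit theorem: mean of the sample means -> mu,
# standard deviation of the sample means -> sigma / sqrt(N)
print(sample_means.mean())                 # close to 5.82
print(sample_means.std(), sigma / N**0.5)  # both close to 0.53/sqrt(20)
```

The same shrinkage holds even when the underlying population is not normal, which is what makes the theorem useful for sampling problems.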
To show that in an example, let's suppose that batches of concrete manufactured by a factory contain random amounts of impurities, with a mean of 5.0 grams and a standard deviation of 1.5 grams. Suppose we make samples of 50 batches each. What is the probability that the average impurity in one of those samples of 50 batches is greater than 5.3 grams? Which of these probabilities is it? In this case, we'll assume that the central limit theorem applies to the sub-samples, and therefore their average value is also going to be 5.0 grams. Their standard deviation, though, is equal to the standard deviation of the population, 1.5, divided by the square root of the number of batches in each sample, the square root of 50, which is equal to 0.212 grams. Now we calculate a normalized parameter, z = (x − μ)/σ. That is equal to 5.3, because we want the probability of it being greater than 5.3, minus 5.0, the average, divided by the standard deviation of the sample means, which we've just computed as 0.212, which gives 1.42. In other words, this is 1.42 standard deviations from the mean. So, from the table, we can look up what the value is. At a value of 1.42, which is approximately here, the probability we want is the probability of this area here; in other words, the probability that this particular average is more than 1.42 standard deviations above the mean. If we look that up in the table, we find that R(z), the area greater than a particular value, at 1.42 is equal to 0.0808, or, rounding off and multiplying by 100, 8.1%. So, the correct answer is B: there is an 8.1% probability that the average of any one sample is greater than 5.3 grams. One final note about this: a good rule of thumb is that the central limit theorem is usually okay if the number of items in each sample is greater than about 30. And this concludes my discussion of the central limit theorem.
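The worked example can also be reproduced without a lookup table, using the complementary error function to evaluate the standard normal upper tail. This is a sketch of the same calculation, not the table-lookup method used in the video; the directly computed tail probability comes out near 7.9%, slightly below the 8.1% read from the rounded table:

```python
import math

mu, sigma = 5.0, 1.5   # population mean and standard deviation of impurities (grams)
n = 50                 # batches per sample
x = 5.3                # threshold of interest (grams)

# Standard deviation of the sample mean: sigma / sqrt(n) ≈ 0.212 grams
sigma_xbar = sigma / math.sqrt(n)

# Normalized parameter z = (x - mu) / sigma_xbar ≈ 1.41 standard deviations
z = (x - mu) / sigma_xbar

# Upper-tail probability P(Z > z) for a standard normal, via erfc
p = 0.5 * math.erfc(z / math.sqrt(2))
print(round(sigma_xbar, 3), round(z, 2), round(100 * p, 1))  # 0.212 1.41 7.9
```

On the exam itself you would use the supplied normal-distribution table, so small rounding differences like this are expected.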