Idea exchange on how to make simulation and scheduling projects more successful.


In the Simulation Stakeholder Bill of Rights I proposed some reasonable expectations that a consumer of a simulation project might have. But this is not a one-way street. The modeler or simulationist should have some reasonable expectations as well.

1. Clear objectives – A simulationist can help stakeholders discover and refine their objectives, but clearly the stakeholders must agree on project objectives. The primary objectives must remain solid throughout the project.
2. Stakeholder Participation – Adequate access and cooperation must be provided by the people who know the system both in the early phases and throughout the project. Stakeholders will need to be involved periodically to assess progress and resolve outstanding issues.
3. Timely Data – The functional specification should describe what data will be required, when it will be delivered and by whom. Late, missing, or poor quality data can have a dramatic impact on a project.
4. Management Support – The simulationist’s manager should support the project as needed, not only in issues like tools and training discussed below, but also in shielding the simulationist from energy-sapping politics and bureaucracy.
5. Cost of Agility – If stakeholders ask for project changes, they should be flexible in other aspects such as delivery date, level of detail, scope, or project cost.
6. Timely Review/Feedback – Interim updates should be reviewed promptly and thoughtfully by the appropriate people so that meaningful feedback can be provided and any necessary course corrections can be immediately made.
7. Reasonable Expectations – Stakeholders must recognize the limitations of the technology and project constraints and not have unrealistic expectations. A project based on the assumption of long work hours is a project that has been poorly managed.
8. “Don’t shoot the messenger” – The modeler should not be criticized if the results support an unexpected or undesirable conclusion.
9. Proper Tools – A simulationist should be provided the right hardware and software appropriate to the project. While “the best and latest” is not always required, a simulationist should not have to waste time on outdated or inappropriate software and inefficient hardware.
10. Training and Support – A simulationist should not be expected to “plunge ahead” into unfamiliar software and applications without training. Proper training and support should be provided.
11. Integrity – A simulationist should be free from coercion. If a stakeholder “knows” the right answer before the project starts, then there is no point to starting the project. If not, then the objectivity of the analysis should be respected with no coercion to change the model to produce the desired results.
12. Respect – A good simulationist may sometimes make the job look easy, but don’t take them for granted. A project often “looks” easy only because the simulationist did everything right, a feat that in itself is very difficult. And sometimes a project looks easy only because others have not seen the nights and weekends involved.

Discussing these expectations ahead of time can enhance communications and help ensure that the project is successful – a win-win situation that meets everyone’s needs.

Human Judgment, also known as Seat Of The Pants Analysis (SOTPA), is probably the least acknowledged but most widely used alternative to simulation. SOTPA is making decisions by instinct and feelings rather than using objective analytical tools. With SOTPA you never actually have to get out of your chair, or even spend any significant time to reach a decision.

I use SOTPA all the time and you probably do too.

When I am in a hurry to go out and want to know if I should wear a jacket or bring an umbrella, I might take a quick look out the window, reflect on the season and yesterday’s weather, then make a decision. That’s SOTPA. I know that there is a high likelihood that I will be wrong, but I don’t want to take the time to do the weather channel research to get more objective information.

Human judgment beat simulation in this case. But not always.

When I am going on an all-day outdoor outing and have the same decision to make, the importance of being correct increases. In that situation I will take the time to consult at least two weather sources and even step outside for some direct research. With this objective analysis I can make a more informed decision. Although such an analysis is never perfect, including objective data in my analysis dramatically increases the likelihood that my decision will be correct.

Now let’s say I am a manager and my staff comes to me proposing purchase of a new piece of equipment to solve an important problem in my facility. They may give me technical specifications, maybe some manual or spreadsheet calculations, perhaps even show me a case study about how that equipment was used in another facility. The easy thing for me to do at that point is to make a SOTPA-based decision. After all, I must be pretty smart to get to be a manager, right? Right?? 🙂 And I know THE BIG PICTURE. So who better than me to make the decision? And why should I need more information?

If you haven’t read it, I’d suggest you pause now and read the blog on Predicting Process Variability. Did you pass the test? Don’t feel bad, almost no one does. My facility is much more complicated than that one. If I cannot predict the performance of such a simple system, why should I expect that I can predict the impact of adding this proposed equipment to my facility?

“But I don’t have time to simulate.” I don’t have time to research weather when the penalty of being wrong is low, but I make the time to do it when the penalties are higher. With modern simulation tools you can often get valuable results in a short period of time. In two or three days you can often provide an objective analysis that can save a few hundred thousand dollars. Let’s see, invest $3000, save $200,000 … I think I can make the time for that. How about you?

Simulation beats human judgment when it matters.

When someone else uses SOTPA you might say they bent over and pulled the answer out of their… um, … er, shoe. “What was he thinking?” Don’t let that be you.

Reserve SOTPA for decisions that don’t matter. Use simulation for the decisions that do.

This is a three-part series on Six Sigma, Lean Sigma, and Simulation. This first post explains the Six Sigma methodology and its bridge to simulation modeling and analysis; the second and third parts will describe the uses of simulation in the Six Sigma phases and in Lean Sigma (i.e., Lean Manufacturing), respectively.

“Systems rarely perform exactly as predicted” was the opening line of the blog Predicting Process Variability, and it is the driving force behind most improvement projects. Variability is inherent in all processes, whether those processes involve manufacturing a product within a plant, producing product across an entire supply chain, or providing a service in a retail, banking, entertainment, or hospital environment. If one could predict or eliminate the variability of a process or product, there would be no waste (or muda, in Lean terms, to be discussed in the third part) associated with a process: no overtime to finish an order, no lost sales owing to having the wrong inventory or lengthy lead times, no deaths owing to errors in health care, and ultimately lower costs. For any organization, manufacturing or service, reducing costs and lead times is, or should be, a priority in order to compete globally. Reducing, controlling, and/or eliminating the variability in a process is key to minimizing costs.

Six Sigma is a business philosophy focusing on continuous improvement to reduce and eliminate variability. In a service or manufacturing environment, a Six Sigma (6σ) process would be virtually defect free (i.e., only allowing 3.4 defects out of a million operations of a process). However, most companies operate at four sigma, which allows 6,000 defects per million. Six Sigma began in the 1980s when Motorola set out to reduce the number of defects in its own products. Motorola identified ways to cut waste, improve quality, reduce production time and costs, and focus on how the products were designed and made. Six Sigma grew from this proactive initiative of using exact measurements to anticipate problem areas. In 1988, Motorola was selected as the first large manufacturing company to win the Malcolm Baldrige National Quality Award. As a result, Motorola’s methodologies were launched and soon their suppliers were encouraged to adopt the 6σ practices. Today, companies that use the Six Sigma methodology achieve significant cost reductions.

Six Sigma evolved from other quality initiatives, such as ISO, Total Quality Management (TQM), and Baldrige, to become a quality standardization process based on hard data rather than hunches or gut feelings, hence the mathematical term, Six Sigma. Six Sigma utilizes a host of traditional statistical tools but encompasses them within a process improvement framework. These tools include affinity diagrams, cause-and-effect diagrams, failure modes and effects analysis (FMEA), Poka Yoke (mistake proofing), survey analysis (voice of the customer), design of experiments (DOE), capability analysis, measurement system analysis, statistical process control charts and plans, etc.

There are two basic Six Sigma processes, DMAIC and DMADV. Both utilize data-intensive solution approaches and eliminate gut feel or intuition from decisions and improvements. The DMAIC-based Six Sigma method is used when the product or process already exists but is not meeting specifications or performing adequately. Its phases are as follows.

• Define – Identify, prioritize, and select the right projects; once selected, define the project goals and deliverables.
• Measure – Measure the key product characteristics and process parameters to create a baseline.
• Analyze – Analyze and identify the key process determinants or root causes of the variability.
• Improve – Improve and optimize performance by eliminating defects.
• Control – Control the current gains and future process performance.

If the process or product does not exist and needs to be developed, the Design for Six Sigma (DFSS) process (DMADV) is employed. Processes or products designed with DMADV typically reach market sooner, require less rework, and cost less. Although DMADV is similar to the DMAIC method and starts with the same three steps, the two are quite different, as defined below.

• Define – Identify, prioritize, and select the right projects; once selected, define the project goals and deliverables.
• Measure – Determine customer needs and specifications through the voice of the customer.
• Analyze – Identify the process options necessary to meet the customer needs.
• Design – Design a detailed process or product to meet the customer needs.
• Verify – Verify the design performance and its ability to meet the customer needs, where the customer may be internal or external to the organization.

Both processes are iterative, looping from a later stage back to an earlier one as needed. For example, if during the Analyze phase you determine that a key input is not being measured, new metrics have to be defined; likewise, new projects can be defined once the Control phase is reached.

Now that we have defined Six Sigma, you may be wondering what the bridge to simulation modeling is. Simulation modeling and analysis is just another tool in the Six Sigma toolbox. Many of the statistical tools (e.g., DOE) try to describe the dependent variables (Y’s) in terms of the independent variables (X’s) in order to improve them. Also, most of the statistical tools are parametric methods (i.e., they rely on the data being normally distributed, or utilize our friend the central limit theorem to make the data appear normally distributed). As a result, many of the traditional tools may produce sub-optimal results or cannot be used at all. For example, if one is designing a new process or product, the system does not exist, so determining current capability or future performance cannot be done. The complexity and uncertainty of certain processes cannot be analyzed using traditional methods at all. Simulation modeling and analysis makes none of these assumptions and can yield a more realistic range of results, especially where the independent variables (X’s) can be described as distributions of values. In Six Sigma and Simulation: Part 2, we will take a more detailed look at how simulation is used in the two Six Sigma processes (DMAIC and DMADV).
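To make the contrast concrete, here is a minimal Monte Carlo sketch (not from the original article; the process, distributions, and parameter values are invented purely for illustration) showing how describing the X’s as distributions yields a range of outcomes for Y rather than a single point estimate:

```python
import random
import statistics

# Hypothetical process (invented for illustration): total cycle time
# Y = setup + run, where both inputs are distributions, not point values.
def simulate_cycle_time(trials=100_000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        setup = rng.triangular(4.0, 8.0, 5.0)  # X1: min/max/mode, minutes
        run = rng.expovariate(1 / 12.0)        # X2: exponential, mean 12 min
        samples.append(setup + run)            # Y = f(X1, X2)
    samples.sort()
    return {
        "mean": statistics.mean(samples),
        "p05": samples[int(0.05 * trials)],    # 5th percentile
        "p95": samples[int(0.95 * trials)],    # 95th percentile
    }

result = simulate_cycle_time()
print(result)
```

A deterministic calculation would report only the mean (about 17.7 minutes with these assumed inputs); the percentiles reveal the spread that drives overtime, missed due dates, and other waste.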

The annual Winter Simulation Conference (WSC) starts two weeks from today. Initially as a practitioner and then later as a vendor I have attended over 20 of these conferences in addition to dozens of other similar events. WSC is just one of many events that you could choose to attend. But why should you attend any of them?

All such events are not identical, but here are a few attributes of WSC that are often found in other events as well:

Basic tutorials – If you are new to simulation, this is a good place to learn the basics from experienced people.

Advanced tutorials – If you already have some experience, these sessions can extend your skills into new areas.

Practitioner papers – There is no better way to find out how simulation can be applied to your applications than to explore a case study in your industry and talk to someone who may have already faced the problems you might face.

Research – Catch up on state-of-the-art research through presentations by faculty and graduate students on what they have recently accomplished.

Networking – The chance to meet with your peers and make contacts is invaluable.

Software exhibits and tutorials – If you have not yet selected a product or you want to explore new options, it is extremely convenient to have many major vendors in one place, many of whom also provide scheduled product tutorials.

Supplemental sessions – Some half and full day sessions are offered before and after the conference to enhance your skill set in a particular area.

Proceedings – A quick way to preview a session, or explore a session that you could not attend. This serves as valuable reference material that you may find yourself reaching for throughout the year.

I think every professional involved in simulation should attend WSC or an equivalent conference at least once early in their career, and then periodically every two to three years, perhaps rotating among similar conferences. If you want to be successful you have to keep your skills and knowledge up to date. And in today’s economy, a strong personal network can be valuable when you least expect it.

I read a lot, both for business and pleasure. But it seems I never have enough time. So when I sit down with a magazine, for example, most articles probably get less than a couple seconds of attention. Unless an article immediately captures my attention, I quickly move on to the next one. I know that I occasionally miss out on good content, but it is a way to cope with the volume of information that I need to process each day. Consider the implications when you are writing a project report for others to read…

We are all busy. When we are presented with information to read or review, we often don’t have time to wade through the details to see if the content merits our time.

Tell me the most important thing first! Give me the summary! How many times have you asked (or wished) for that?

At one point, it was common to give presentations by starting with an introduction, building the content, and ending with the conclusion – “the big finish”. While this is appropriate for some audiences, many people don’t want to take the time to follow such a presentation. Instead, they want to be presented with a quick overview and a concise summary first. They will then decide to read on if the overview has captured their interest and they need more information.

Think about your own experiences. When you have a document to read and you are not sure it is worth your time, what do you do? If you are like most people you will probably consider most, if not all of the following:
• Does the title look interesting?
• Do you know and respect the author?
• Do the major headings or callouts suggest content of interest?
• Do the pictures and diagrams suggest content of interest?
• Does the summary or abstract merit a full read?
While the order and details might differ slightly, at each stage of the above process, if you are not convinced of the value of continuing, you will put the document aside. Only after the document has passed this gauntlet of tests will you spend the time to seriously read the content.

What can we learn from this?

Content is not enough. The best content in the world is of little value unless it is read.

When you are preparing a project report, try to get inside the head of your target audience. If you expect that they will also have a process something like the above, spend adequate time on those parts. Take an extra minute to create an interesting title. Add major headings and callouts to help focus the reader’s attention. Add some figures to help convey and support your message. Have a good abstract and/or summary that is easy to find to help your audience quickly get the point of your report.

Write each report so everyone, including your busy stakeholders, will take the time to read it. Keeping these simple suggestions in mind will help you succeed at getting your message across.

Systems rarely perform exactly as predicted. A person doing a task may take six minutes one time and eight minutes the next. Sometimes variability is due to outside forces, like materials that behave differently based on ambient humidity. Some variability is fairly predictable, such as a tool that cuts more slowly as it dulls with use. Other variability seems much more random, such as a machine that fails every now and then. Collectively, we will refer to all of these as process variability.

How good are you at predicting the impact of process variability? Most people feel that they are fairly good at it.

For example, if someone asked you the probability of rolling a three in one roll of a common six-sided die, you could probably correctly answer one in six (17%). Likewise, you could probably answer the likelihood of flipping a coin twice and having it come up heads both times: one in four (25%).
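If you ever want to verify such intuitions empirically, a few lines of Python (a quick sketch, not part of the original article) estimate both probabilities by brute force:

```python
import random

rng = random.Random(0)
trials = 100_000

# P(rolling a three on one roll of a fair six-sided die) should be ~1/6
threes = sum(rng.randint(1, 6) == 3 for _ in range(trials))

# P(two coin flips both coming up heads) should be ~1/4
double_heads = sum(rng.random() < 0.5 and rng.random() < 0.5
                   for _ in range(trials))

print(f"P(three)     ~ {threes / trials:.3f}")        # close to 0.167
print(f"P(two heads) ~ {double_heads / trials:.3f}")  # close to 0.250
```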

But what about even slightly more complex systems? Say you have a single teller at a bank who always serves customers in exactly 55 seconds and customers come in exactly 60 seconds apart. Can you predict the average customer waiting time? I am always surprised at how many professionals get even this simple prediction wrong. (If you want to check your answer, look to the comment attached to this article.)

But let’s say that those times above are variable as they might be in a more typical system. Assume that they are average processing times (using exponential distributions for simplicity). Does that make a difference? Does that change your answer? Do you think the average customer would wait at all? Would he wait less than a minute? Less than 2 minutes? Less than 5 minutes? Less than 10 minutes? I have posed this problem many times to many groups and in an average group of 40 professionals, it is rare for even one person to answer these questions correctly.

This is not a tough problem. In fact this problem is trivial compared to even the smallest, simplest manufacturing system. And yet those same people will look at a work group or line containing five machines and feel confident that they can predict how a random downtime will impact overall system performance. Now extend that out to a typical system with all its variability in processing times, equipment failures, repair times, material arrivals, and all the other common variability. Can anyone predict its performance? Can anyone predict the impact of a change?

With the help of simulation, you can.

This simple problem can be easily solved with either queuing theory or a simple model in your favorite simulation program. More complex problems will require simulation. After using your intuition to guess the answer, I’d suggest that you determine the correct answer for yourself. If you want to check your answer look at the comment attached to this article.
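For those who want to check their guess afterwards, here is a sketch of both routes just mentioned: the standard M/M/1 queuing formula for the exponential version of the teller example, cross-checked by a minimal simulation based on Lindley’s recursion. (Fair warning: running it reveals the answer, so make your own guess first.)

```python
import random

# M/M/1 teller: exponential interarrival times (mean 60 s) and
# exponential service times (mean 55 s). Analytic average wait in queue:
#   Wq = rho / (mu - lam), where rho = lam / mu (utilization)
lam, mu = 1 / 60.0, 1 / 55.0
rho = lam / mu                 # utilization, about 0.917
wq = rho / (mu - lam)          # average wait in queue, seconds

# Cross-check with a tiny simulation via Lindley's recursion:
#   wait[n+1] = max(0, wait[n] + service[n] - interarrival[n+1])
rng = random.Random(1)
wait, total = 0.0, 0.0
n = 200_000
for _ in range(n):
    total += wait
    wait = max(0.0, wait + rng.expovariate(mu) - rng.expovariate(lam))

print(f"Analytic Wq:  {wq:.0f} s ({wq / 60:.1f} min)")
print(f"Simulated Wq: {total / n:.0f} s")
# (In the deterministic case -- exactly 55 s service, arrivals exactly
# 60 s apart -- the teller is always free when a customer arrives.)
```

The point stands regardless of the exact numbers: even this trivial system surprises most people, which is exactly why larger systems demand simulation rather than intuition.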

And the next time you or someone you know is tempted to predict system performance, I hope you will remember how well you did at predicting performance of a trivial system. Then use simulation for an accurate answer.

Yes, it looks like hard economic times may be coming. But no, this has nothing to do with that.

This blog is a community service. To continue to be effective, we need community participation. That means you.

There are many ways you can participate.

1) Comment – At the end of each article is a link. Click it and add to the discussion. Agree. Disagree. Add new information or a different viewpoint. All civil discussion is welcome.
2) Suggest Topics – Contact me with any ideas you have about future content or ideas for making the blog more useful.
3) Write an Article – It doesn’t have to be rocket science. Nor does it have to be long or formal. Everyone has something to share. The main rule is to keep it unbiased and non-commercial. I am happy to edit it if you like and even publish it under a pen name if you are publicity shy (although I strongly prefer using your real name).
4) Become a Guest Author – I would like nothing better than to “share the limelight” with others. You can write one article or regular articles. Choose your own topics and frequency.

It’s all about sharing to help the simulation community. This is a simple way to give back. Anyone can do it. For any of the above or other ideas, you can contact me using dsturrock at Simio dot biz (name slightly obscured to slow down spammers).