We often talk with our customers about their "grand challenges", trying to understand exactly what it is that keeps them up at night thinking: "if I could just _________, that would be a real game changer for my business!" Regardless of which industry they come from, their answers are often quite similar: the majority of these discussions relate to evaluating the performance of an entire system.

An uncomfortable truth about modern engineering is that there really are no easy problems left to solve. In order to meet the demands of industry, it's no longer good enough to do "a bit of CFD" or "some stress analysis". Complex industrial problems require solutions that span a multitude of physical phenomena, which often can only be solved using simulation techniques that cross several engineering disciplines.

What our customers are really asking for is the ability to "see the big picture": simulating whole systems rather than just individual components, and taking account of all of the factors that are likely to influence the performance of their product over its operational life. In short, they want to simulate the performance of their design in the context in which it will actually be used.

In the past, CFD was often described in terms of analogous experimental techniques such as "numerical wind tunnels" and "virtual towing tanks". Although this is a great way of explaining what you do for a living to a stranger at a party, these analogies no longer capture the full extent of the way engineering simulation is used in industry today.

Wind tunnels and towing tanks explicitly measure the performance of a design in a carefully controlled environment, deliberately excluding many real-world influences. Wind tunnel experiments are great for telling you how your racing car will perform in a straight line, or even with a bit of yaw, but what happens when it tries to overtake another car? How do you account for the way in which hot exhaust air interferes with the downforce-producing capability of the rear wing?

For example, a jet engine manufacturer might talk about being able to evaluate the performance of an engine, or the performance of certain components during engine operation. Of course, this could be done with flight testing, but that would require manufacturing a prototype engine in addition to everything else that goes along with testing. The cost and time scales of flight testing do not permit it to affect engine design; rather, it is usually the last phase of engine development that merely validates the safety and performance of the near-final engine design. But now imagine those engineers have access to a full set of flight test-like data for every design candidate, and this data could be had for a small fraction of the time and cost of a flight test program. This would transform the way business is done in that industry, and have a significant impact on the bottom line.

My point here is that our simulation tools, and the infrastructure that surrounds them, are now mature enough that we can begin to see the bigger picture, and include more of the physical factors that will influence the real-world performance of a design.

This ambition is not a new one. At the very beginning, a third of a century ago, CD-adapco's interest in Computational Fluid Dynamics was driven by the need to provide better thermal boundary conditions for the structural simulations of engines that we were working on at the time. Since then we have developed our tools with a view to co-simulation, recognizing that specialist simulation tools are often necessary to solve the most difficult engineering problems.

Of course, I'm not denying that our modeling ambitions are constrained by a range of practical considerations. Although the principal consideration is obviously accuracy, modeling choices are often dictated as much by economic constraints as by the veracity of the prediction alone. Most modern engineers are acutely aware that, in order to influence the design process, simulation results need to be delivered on time, every time. With access to limited simulation and computing resources, simulation engineers are often forced to ask, "How much of the problem can I really afford to simulate?"

There are other constraints. Historically, engineers have tended to align themselves strictly along disciplinary lines: the fluids guys do CFD, the stress guys do FEA, the chemical guys do all sorts of other stuff that no one else understands. Getting individual engineers to talk to each other was often as much of a challenge as interfacing the individual software tools.

It should also go without saying that including complexity for its own sake is not "good engineering" either. Part of the art of engineering is deciding exactly how much complexity can be excluded through modeling assumptions without reducing the overall quality of the prediction. In fact, at its purest, engineering might be described as the "art of simplification through modeling approximation", rendering apparently intractable physics problems into neatly packaged engineering solutions. Make the correct modeling assumptions and you can accurately predict how an abstract design will perform under a range of real-world operating conditions. Make the wrong assumptions and your simulation results will either be a poor representation of the real-life performance of your design, or your model will be so complicated that you won't get any results at all.

In the coming weeks we will, with the aid of real-world examples, explore how CD-adapco can help you to "see the big picture" by simulating entire systems rather than individual components. We'll talk about co-simulation and the many ways in which you can couple our software with other simulation tools, and we'll address the question of "modeling fidelity" by demonstrating how certain parts of the problem can be modeled at lower resolution (in either space or time) without influencing the accuracy of the overall prediction. We'll also discuss the economics of simulating large and complex systems, and explain how our licensing models allow you to exploit all of your available computing resources in the most cost-effective manner.