Launching a New Era in Large-Scale Systems Modeling

Over the past 25 years, we’ve been fortunate enough to make a mark in all sorts of areas of science and technology. Today I’m excited to announce that we’re in a position to tackle another major area: large-scale systems modeling.

It’s a huge and important area, long central to engineering, and increasingly central to fields like biomedicine. To do it right is also incredibly algorithmically demanding. But the exciting thing is that now we’ve finally assembled the technology stack that we need to do it—and we’re able to begin the process of making large-scale systems modeling an integrated core feature of Mathematica, accessible to a very broad range of users.

Lots of remarkable things will become possible. Using the methodology we’ve developed for Wolfram|Alpha, we’ll be curating not only data about systems and their components, but also complete dynamic models. Then we’ll have the tools to easily assemble models of almost arbitrary complexity—and to put them into algorithmic form so that they can be simulated, optimized, validated, visualized or manipulated by anything across the Mathematica system.

And then we’ll also be able to inject large-scale models into the Wolfram|Alpha system, and all its deployment channels.

So what does this mean? Here’s an example. Imagine that there’s a model for a new kind of car engine—probably involving thousands of individual components. The model is running in Mathematica, inside a Wolfram|Alpha server. Now imagine someone out in the field with a smartphone, wondering what will happen if they do a particular thing with an engine.

Well, with the technology we’re building, they should be able to just type (or say) into an app: “Compare the frequency spectrum for the crankshaft in gears 1 and 5”. Back on the server, Wolfram|Alpha technology will convert the natural language into a definite symbolic query. Then in Mathematica the model will be simulated and analyzed, and the results—quantitative, visual or otherwise—will be sent back to the user. Like a much more elaborate and customized version of what Wolfram|Alpha would do today with a question about a satellite position or a tide.
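
As a much simplified sketch of just the last analysis step, here is roughly what computing a frequency spectrum amounts to, written in Python purely for illustration (the signal and all the numbers are invented for the example):

```python
import numpy as np

# Synthetic "crankshaft" signal: two rotational harmonics
fs = 1000.0                      # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)  # one second of samples
signal = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Frequency spectrum via the FFT of the real-valued signal
spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)

# The dominant peak sits at the 30 Hz harmonic
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # → 30.0
```

In the scenario described above, of course, the signal would come from simulating the actual engine model rather than from a synthetic formula.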

OK. So what needs to happen to make all this stuff possible? To begin with, how can Mathematica even represent something like a car—with all its various physical components, moving and running and acting on each other?

We know that Mathematica is good at handling complexity in algorithmic systems: just look at the 20+ million lines of Mathematica code that make up Wolfram|Alpha. And we also know that among the functions in the Mathematica language are ones that powerfully handle all sorts of computations needed in studying physical and other processes.

But what about a car? The gears and belts don’t work like functions that take input and give output. They connect, and interact, and act on one another. And in an ordinary computer language that’s based on having data structures and variables that get fed to functions, there wouldn’t be any good way to represent this.

But in Mathematica, building on the fact that it is a symbolic language, there is a way: equations. Because in addition to being a rich programming language, Mathematica—through its symbolic character—is also a rich mathematical language. And one element of it is a full representation of equations: algebraic, differential, differential-algebraic, discrete, whatever.

But, OK. So maybe there’s an equation—based on physics—for how one component acts on another in a car. But in a whole car there are perhaps tens of thousands of components. So what kind of thing does one need to represent that whole system?

It’s a little like a large program. But it’s not a program in the traditional input-output sense. Rather, it’s a different kind of thing: a system model.

The elements of the model are components. With certain equations—or algorithms—describing how the components behave, and how they interact with other components. And to set this up one needs not a programming language, but a modeling language.

We’ve talked for nearly 20 years about how Mathematica could be extended to handle modeling in a completely integrated way. And for years we’ve been adding more and more of the technology and capabilities that are needed. And now we’ve come to an exciting point: we’re finally ready to make modeling an integrated part of Mathematica.

And to accelerate that process, we announced today that we’ve acquired MathCore Engineering AB—a long-time developer of large-scale engineering software systems based on Mathematica and Modelica, and a supplier of solutions to such companies as Rolls-Royce, Siemens and Scania.

And as of today, we’re beginning the process of bringing together Mathematica and MathCore’s technology—as well as Wolfram|Alpha and CDF—to create a system that we think will launch a whole new era in design, modeling and systems engineering.

So what will it be like?

The basic approach is to think about large systems—whether engineering, biological, social or otherwise—as collections of components.

Each component has certain properties and behaviors. But the main idealization—central for example to most existing engineering—is that the components interact with each other only in very definite ways.

It doesn’t matter whether the components are electrical, hydraulic, thermodynamic, chemical or whatever. If one looks at models that are used, there are typically just two parameters: one representing some kind of effort, and another some kind of flow. These might be voltage and current for an electrical circuit. Or pressure and volume flow for a hydraulic system. Or temperature and entropy flow for a thermodynamic system. Or chemical potential and molar flow for a chemical system.

And then to represent how the components interact with each other, one ends up with equations relating the values of these parameters—much like a generalization of Kirchhoff’s laws for circuits from 1845. Typically, the individual components also satisfy equations—that may be algebraic (like Ohm’s law or Bernoulli’s principle), differential (like Newton’s laws for a point object), differential-algebraic (like rigid-body kinematics with constraints), difference (like for sampled motion), or discrete (like in a finite-state machine).
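
As a toy illustration of what such connection equations look like in the electrical case, here is a sketch in Python (illustrative only, not the actual machinery being described): effort is node voltage, flow is branch current, and writing Kirchhoff’s current law at each internal node of a three-resistor ladder yields a small linear system.

```python
import numpy as np

# The electrical effort/flow pair: effort = node voltage, flow = branch current.
# Kirchhoff's current law says the flows into each node sum to zero; writing
# it down for every internal node of a resistor ladder gives a linear system.
Vs = 9.0                  # source voltage (effort at the driven node)
R1 = R2 = R3 = 1000.0     # three identical resistors in series to ground

# Unknowns: node voltages V1, V2 between the resistors.
# KCL at V1: (Vs - V1)/R1 - (V1 - V2)/R2 = 0
# KCL at V2: (V1 - V2)/R2 - V2/R3       = 0
A = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
b = np.array([Vs / R1, 0.0])
V1, V2 = np.linalg.solve(A, b)
print(V1, V2)             # the node voltages come out to 6 V and 3 V
```

Each constitutive relation here is Ohm’s law; in a hydraulic or thermal system the same bookkeeping would apply with pressure and volume flow, or temperature and entropy flow.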

When computers were first used for systems modeling back in the 1960s, all equations in effect had to be entered explicitly. By the 1980s, block diagrams had become popular as graphical ways to represent systems, and to generate the equations that correspond to them. But in their standard form, such diagrams were of necessity very restrictive: they were set up to work like flowcharts, with only one-way dependence of one component on another.

By restricting dependence in this way, one is forced to have only ordinary differential equations that can be solved with traditional numerical computation methods. Of course, in actual systems, there is two-way dependence. And so to be able to model systems correctly, one has to be able to handle that—and so one has to go beyond traditional “causal” block diagram methods.

For a long time, however, this just seemed too difficult. And it didn’t even help much that computers were getting so much faster. The systems of equations that appeared—typically differential-algebraic ones—just seemed to be fundamentally too complicated to handle in any automatic way.

But gradually cleaner and cleaner formulations were developed—particularly in connection with the Modelica description language. And it became clear that really the issue was appropriate manipulation of the underlying symbolic equations.
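
The flavor of that symbolic manipulation can be sketched in a few lines of SymPy (used here purely as an illustrative stand-in): the acausal equations of a series RC circuit, none of which designates an input or an output, are solved symbolically for the derivative, turning the implicit equation set into an explicit ODE.

```python
import sympy as sp

# An acausal model of a series RC circuit: each equation states a physical
# relation, with no notion of input or output.
vC, vR, i, dvC = sp.symbols('v_C v_R i dv_C')
Vs, R, C = sp.symbols('V_s R C', positive=True)

eqs = [
    sp.Eq(vR, i * R),      # Ohm's law for the resistor
    sp.Eq(i, C * dvC),     # capacitor constitutive relation, i = C dv_C/dt
    sp.Eq(Vs, vR + vC),    # Kirchhoff's voltage law around the loop
]

# Symbolic "causalization": solve the implicit equations for the derivative,
# recovering the explicit ODE dv_C/dt = (V_s - v_C)/(R C).
sol = sp.solve(eqs, [vR, i, dvC], dict=True)[0]
print(sp.simplify(sol[dvC]))
```

For this tiny linear circuit the rearrangement is trivial; the hard algorithmic work lies in doing the analogous sorting, tearing and index reduction automatically for differential-algebraic systems with many thousands of equations.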

Now, of course, in Mathematica we’ve spent nearly 25 years building up the best possible ways to handle symbolic equations. And starting about a decade ago we began to use our capabilities to attack differential-algebraic equations. And meanwhile, our friends at MathCore had been integrating Mathematica with Modelica, and creating a sequence of increasingly sophisticated modeling systems.

So now we’re at an exciting point. Building on a whole tower of interlinked capabilities in Mathematica—symbolic, numerical, graph theoretic, etc.—together with technology that MathCore has been developing for a decade or so, we’re finally at the point where we can start to create a complete environment for systems modeling, with no compromises.

It’s a very high tech thing. Inside are many extremely sophisticated algorithms, spanning a wide range of fields and methodologies. And the good news is that over the course of many years, these algorithms have progressively been strengthened, to the point where they can now deal with very large-scale industrial problems. So even if one wants to use a million variables to accurately model a whole car, it’ll actually be possible.
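
One structural fact that makes such scale plausible is sparsity: in a component-based model each variable typically appears in only a handful of equations, so the assembled system is overwhelmingly zero. A toy illustration in Python with SciPy (standing in for the real algorithms):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# A sparse tridiagonal system standing in for the (much richer) sparse
# structure of a large component-based model: each variable interacts
# with only a few neighbors, so almost every matrix entry is zero.
n = 100_000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sparse.diags([off, main, off], offsets=[-1, 0, 1], format='csc')
b = np.ones(n)

# Direct sparse solve; storing A densely would need ~80 GB of memory.
x = spsolve(A, b)
print(x.shape)  # → (100000,)
```

The point is not this particular matrix, but that exploiting sparsity is what turns “a million variables” from an absurdity into a routine computation.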

OK, but what’s ultimately the point of doing something like this?

First and foremost, it’s to be able to figure out what the car will do just by simulation—without actually building a physical version of the car. And that’s a huge win, because it lets one do vastly more experiments, more easily, than one ever could in physical form.

But beyond that, it lets one take things to a whole different level, by effectively doing “meta-experiments”. For example, one might want to optimize a design with respect to some set of parameters, effectively doing an infinite family of possible experiments. Or one might want to create a control system that one can guarantee will work robustly. Or one might want to identify a model from a whole family of possibilities by fitting its behavior to measured physical data.
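
To make the first of those concrete, here is a toy “meta-experiment” in Python (illustrative only; the model and cost are invented): an optimizer repeatedly simulates a spring-mass-damper to find the damping coefficient minimizing an integrated squared displacement. For this cost the infinite-horizon optimum can be computed analytically to be c = 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy plant: unit mass on a spring (k = 1) with an adjustable damper c.
# The cost of a candidate design is the integral of x(t)^2 over a fixed
# horizon, and every cost evaluation is itself a full simulation.
def settling_cost(c, k=1.0, dt=1e-3, T=20.0):
    x, v = 1.0, 0.0              # initial displacement and velocity
    cost = 0.0
    for _ in range(int(T / dt)):
        a = -k * x - c * v       # Newton's law for the unit mass
        v += a * dt              # semi-implicit Euler step
        x += v * dt
        cost += x * x * dt
    return cost

# The optimizer effectively runs an infinite family of experiments over c.
best = minimize_scalar(settling_cost, bounds=(0.1, 10.0), method='bounded')
print(round(best.x, 2))          # analytically the optimum is c = 1
```

Here the simulation loop and the optimizer are a few lines each; the promise of an integrated system is that the same pattern applies when the inner simulation is a full multi-domain model.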

And these are the kinds of places where things get really spectacular with Mathematica. Because these sorts of “meta” operations are already built into Mathematica in a very coherent and integrated way. And once one has the model in Mathematica, one can immediately apply Mathematica’s built-in capabilities for optimization, control theory, statistical analysis, or whatever.

It’s also “free” in Mathematica to do very high-quality visualization, interface building, scripting and other things. And perhaps particularly important is Mathematica’s ability to create interactive documents in its Computable Document Format (CDF).

So one can have a “live” description of a model to distribute, in which one mixes narrative text, formulas, images and so on with the actual working model. Already in the Wolfram Demonstrations Project there are lots of examples of simulating small systems. But when we’ve finished our large-scale system modeling initiative, one will be able to use exactly the same technology for highly complex systems too.

Gone will be the distinction between “documentation” and “modeling software”. There’ll just be one integrated CDF that covers both.

So how does one actually set about creating a model? One has to build up from models of larger- and larger-scale components. And many of these components will have names. Perhaps generic ones—like springs or transformers—or perhaps specific ones, based on some standard, or the products of some manufacturer. The diversity of different objects, and different ways to refer to variants of them, might seem quite daunting.

But from Wolfram|Alpha, we have had the experience of curating all sorts of information like this—and linking underlying specific computable knowledge with the convenient free-form linguistics that get used by practitioners in any particular specialty. Of course it helps that we already have in Wolfram|Alpha huge amounts of computable knowledge about physical systems, material properties, thermodynamics and so on—as well as about environmental issues like climate history or electrical prices.

Today we are used to programmers who create sophisticated pieces of software. Increasingly, we will see modelers who create sophisticated models. Often they will start from free-form linguistic specifications of their components. Then gradually build up—using textual or graphical tools—precise representations of larger- and larger-scale models.

Once a model is constructed, then it’s a question of running it, analyzing it, and so on. And here both the precise Mathematica language and free-form Wolfram|Alpha-style linguistics are relevant.

Most of the modeling that is done today is done as part of the design process. But the technology stack we’re building will make it possible to deliver the results of models to users and consumers of devices as well. By using Wolfram|Alpha technology, we’ll be able to have models running on cloud servers, accessed with free-form linguistics, on mobile devices, or whatever. So that all sorts of people who know nothing about the actual structure and design of systems can get a whole range of practical questions immediately answered.

This kind of methodology will be important not only across engineering, but also in areas like biomedicine. Where it’ll become realistic to take complex models and make clinically relevant predictions from them in the field.

And when it comes to engineering, what’s most important about the direction we’re launching is that it promises to allow a significant increase in the complexity of systems that can cost-effectively be designed—a kind of higher-level form of engineering.

In addition, it will make it realistic to explore more broadly the space of possible engineering systems and components. It is remarkable that even today most engineering is still done with systems whose components were well known in the nineteenth century—or at least in the middle of the twentieth century. But the modeling technology we are building will make it straightforward to investigate the consequences of using quite different components or structures.

And for example it will become realistic to use elements found by the “artificial innovation” of A New Kind of Science methods—never constructed by human engineers, but just discovered by searching the computational universe of possibilities.

A great many of the engineering accomplishments of today have specifically been made possible by the level of systems modeling that can so far be done. So it will be exciting to see what qualitatively new accomplishments—and what new kinds of engineering systems—will become possible with the new kind of large-scale systems modeling that we have launched into building.

40 Comments

I like the abstraction of “effort” and “flow”. Or potential and kinetic. Or noun and verb. The divide runs deep.

Anyway, integrating all these rules, algorithms, and abstractions into a single multiphysics-like simulation environment is very important. I’m glad WRI made this acquisition.

But it doesn’t easily address the often significant data volumes produced by cross-discipline simulations, which don’t easily abstract into simpler, tractable component models (often with a recursive or empirical aspect).

If only someone could have effectively modeled the consequences of building six nuclear reactors next to each other in an earthquake zone right beside the ocean and a hundred and fifty miles from the thirteenth largest city on the planet. Yeah. A good model would have made all the difference. [coughs] Really. Is the answer improving our technology or is the answer improving the idiots using our technology?

Well, Mark. That is a very basic Faustian question, “who is helped by technological progress?”, which made me quit my pursuit of a Math Stats PhD when I was a young student in Madison, WI. Technology, so went the answer of my professor Mr. Box, is good or bad depending on who is using it for what. Sadly enough, that is the answer. And the times are a-changing, and we are either players in it or are being played with.
Yes, sure, Mathematica, with this expert-system-like approach, is also a good tool for “idiots”. And the “New Kind of Engineering”, if it comes to be real, will be in line with the Mathematica Principle of Automation. It is a beautiful principle, but the “Teufel steckt im Detail” (the devil is in the details). And this will keep technology designers busy, and rewarded.

First I want to apologize for the abuse of this comment section, but:
@Mark: what do you expect from “idiot 2.0 beta” in government? For more than 2,000 years it has been known that philosophers would be the best in politics. Have a look at Πολιτεία.

I was just getting ready to purchase MathModelica. Does this mean that if I wait one revision of Mathematica I can get that capability built in?

This is big for me. I have been doing this kind of modeling for several years using packages I wrote in Mathematica. Having Wolfram programmers take over the burden of writing and debugging the code will let me concentrate on modeling.

I am glad to see WRI take on the challenge of integrating symbolics, pattern matching, transformations and rules with the object-based model in Modelica. Having looked at the simple screencasts I am left with more questions than answers, though. For example, can you create your own objects and define your own ‘effort’ and ‘flow’, and can objects have multiple efforts and multiple flows? (As an engineer, inputs and outputs seem more natural, but hey, tomatoes are tomaydoes to some.) And is the intent to integrate the functionality of Modelica into a Notebook FE (requiring mind-boggling use of EventHandler, PassEventsDown and the like)? It will be interesting to see if the end result is something sufficiently flexible to model real-world problems (I have one in mind right now).

I can’t wait to explore the New Kind of Engineering! Eager to see how M8 can be integrated with a somewhat LabVIEW-like program (correct me if I am wrong). I guess Palettes will be the first place to start, but I am sure WRI will dream up something even better!

I have to say I am very pleased. To me this is the natural evolution of Mathematica. Looking ahead at Wolfram one can see the components coming together in such a way as to create a system approach to modeling that eliminates much of the drudgery while enabling you to focus on the problem.

Have you heard about bond graphs? Invented by Henry Paynter in the 1960s, this is the right tool to create realistic physical models, even very complex ones. A bond graph is a graphical language that basically works with paper and pencil. From a bond graph you can generate equations for Mathematica, C or Modelica. Obviously Modelica will bring a modularity in modeling that C doesn’t have, and this is an important step toward being able to work with models in Mathematica. I implemented bond graph modeling in Mathematica nearly 20 years ago and I am still working with it.

This blog post really appeals to me because my PhD subject was to model an R&D spark-ignition engine with electromagnetic valve actuators, and I have been an early user of Mathematica. In 1992, I was able to hear the sound of a virtual engine before its mechanical test-bench assembly, computed from a bond graph using Mathematica to generate C code and run on a Cray supercomputer. In 2004, for a car manufacturer, I was able to run this kind of highly complex nonlinear multi-physics model in real time on a good PC, and interact with it like a real engine with a standard control box, hearing acceleration, turbo-compressor start, torque stall, etc., just like on a real one, all using Mathematica as the main tool to document, design and generate the equations.

So this blog post does not sound like an April 1st joke, but is just something that is already doable. Using such a model with Wolfram|Alpha moreover raises the question of the natural-language model-analysis interface. With a realistic model, I believe that if the question can be accurate, the answer can be accurate. Questions like “Can I start in 2nd gear on a slope of 10%?” or “What sound does the engine make if a plug fails?” could be answered. A realistic model could then replace a human expert, and it is possible to answer any question without limitation because it is a model of reality. Note that the first question is much more difficult to answer, because a question like “Can I …?” involves model causal inversion, just the kind of thing that bond graphs and Modelica can do.

In a realistic model, the answer is a potential; it is not preprocessed as when it comes from a human expert or expert system. To answer a question from a realistic model, one has to simulate it (not just do expert-database processing). As soon as this possibility is integrated in Mathematica, it becomes possible with Wolfram|Alpha.

@Mark. Any model (whether made by an idiot or not) depends on the axioms involved, the first principles the algorithms have to obey. Aristotle (and Hegel, too, bless him) distinguished between dialectical reasoning and formal logic. The reasoning discovers and refines the axioms underlying a system – and since they can’t be formally proven (Gödel) they are what Hegel calls apodeictic – you can just point at them and explain that the logic of investigation, discussion, and demonstration has got you here. Euclid is about as clear as you can get on this.
None of this is Kantian, however – he expels dialectical reasoning to the black box of the Thing in Itself, and tells us basically to go hang when it comes to discovering axiomatic principles. And so much of the work of today’s science and study is based on Kant (at least lip service is paid) and the worship of formal logic, that the axioms are arrived at by trial and error.
In the case of economics and politics, the investigation process is constrained by ideology and prejudice. Our system cannot deviate from equilibrium (e.g. linear development) over time on average. Great. And then the crises come, and are dismissed as soon as they’re over as anomalies. Inadequate axiom, catastrophic result, regardless of whether the algorithms are created by a rocket scientist or an astrologer. Sun around the earth? Same thing. Nukes on earthquake faults: our models assure us bad things can’t happen, so they won’t, and haven’t.
Bottom line, the quality of our axioms depends on our freedom from ideology and prejudice in our search for fundamental principles.
Most politicians and economists today have clout without quality, they’re always getting it wrong. Quality without clout exists too. But clout is a political thing. The Inquisition had clout without quality, Galileo had quality without clout.
In other words, Quality is Us, and US is a political war with THEM.

How is Mathematica/MathModelica different from Maple/MapleSim? What is unique about your offering?

Maple/MapleSim has already been exploring this concept for a considerable time. Of course this is the new paradigm, but, imho, Mr. Wolfram is not bringing anything new to the table in terms of modelling concepts.

So: what is your unique value proposition in this new product, Mr. Wolfram?

We’d like to use this for complicated business modeling and business intelligence operations – giving various core execs status displays of various operational metrics and also allowing them to play what-if scenarios – all tied in to the accounting, billing, network operations, customer service, loyalty, etc ad infinitum systems.

This would/could also play a vital role in the healthcare industry…most interesting.

While I am a long-time user of Mathematica, and appreciative of the reliability and value of this product, one concern I frequently experience as a system modeller and user is the verification (does the code match the specs) and validation (does the model predict real-world effects accurately) of the models used.

If advice I provide is based on models, I need to understand their accuracy and validity for the work I’m doing.

Stephen: I was amazed how much can be achieved by simple code as I was reading your book a few years ago. That’s all I know about programming.
I have not been able to get someone to simulate the collisions of randomly moving inelastic spherical particles. Can you help me? I have the equations for the binding energy and the energies of translation and rotation of the bound codependent units of two of such “Primordial Particles”.
With hopes: George

I share your enthusiasm.
One of the most profound experiences in my life was my intro to GPSS 30 years ago, followed by the reading of “A New Kind of Science”.
Ever since the latter, I have referred to the cellular automata of ANKOS as Wolfram processes in my notes…

Dear Michael (post of April 14, 2011 at 6:32 am):
Please add me on LinkedIn; I already have exactly what you are looking for! I already build and work with operational-financial-risk models of complex businesses/industries/projects.
Look for me on LinkedIn as Andre Cury Maialy.
Thanks

Do such complex systems undergo chaotic behavior?
System 1 gives input to System 2, and vice versa; System 2 gives input to System 3, and vice versa. But could the whole system behave differently because of an inability to predict every input/output exactly?

I have dreamt of what you are proposing for two decades and would like to be involved as this evolves. I strongly agree with the comments by D.Demsey re verifiability of results. I have a particular vision of a medical application that would only be feasible via such a system as could result from this.

Does Mathematica work for surface-water-to-aquifer contamination percolation studies? For example, to determine approximately how long an aqueous solution spilled along a drainage basin takes to reach underground water reservoirs, a.k.a. aquifers. The spilled liquids could include gasoline from non-double-walled, non-leak-sensor-installed underground tanks, leachates from municipal or clandestine dump sites, or anything in liquid form that results from a traffic collision.

Mathematica could become like the new Radio Shack 200-in-1 project kit. Worksheets with origami projects could be created, like that old Commodore 64 program Toy Shop, and heck, get Forrest Mims behind this to write a component for a whole science-lab series of books. Put them on Safari Online.

I’m hoping that a strong and seamless multi-physics approach will be taken by Wolfram Research to allow effective deterministic and probabilistic modeling of electro-hydraulic systems and subsystems. The latter could use Mathematica’s already strong statistical capabilities.

3. Also in those countries, where both Left hand and right hand driving is allowed, the mirrored vehicle can have on the same side Steering Wheel, Brakes, Clutches, Gears positions for both Left hand and right hand drive vehicles.

4. In this Structure, taking a reverse turn once parked in a dead end will be eliminated, as the Driver will now sit on the side opposite to the forward motion of the vehicle.

5. The Sitting Accessory will be aligned as per the Driver’s Position on both the sides i.e front and rear side of the vehicle.

6. Business Rules/Laws will have to be set regarding Newton’s Laws of Motion.

i.e. Start, Stop, Forward Motion, Reverse Motion.

a. A vehicle can be started only one at a time from one side. if on the other side (rear end) somebody tried to start the vehicle, he/she will not be allowed. There will be a Locking mechanism for this.

b. A vehicle can be started only one at a time from one side. Once the vehicle is in motion i.e forward or reverse, if on the other side (rear end) somebody tried to start the vehicle, he/she will not be allowed. There will be a Locking mechanism.