Could someone please tell me what analog modeling actually consists of? Are you using math to emulate the behavior of electrical current in a complete piece of electronics with all components and circuits or are you just emulating a few select bits and pieces here and there where they're considered most likely to make obvious changes in the audio results of said circuits?

Is circuit modeling more a marketing concept or are you really doing all the physics involved in electrical flow through components? Do you model the current, resistance, impedance, changes in electrical properties from heat and humidity variation, behavior of one type of material used in, say one type of capacitor vs another or the length of winding in one transformer over another?

Or is it just "lets assume this value equals our rate of oscillation and here's how that usually changes when you use brand X transformer/resistor/cap/tube, etc" ?

Sorry for repeating myself but I'm not sure how to ask this because I'm not clear on what modeling in the context of analog circuits really means.

I hope someone can give a more satisfying answer to this than what I got when I asked what makes one filter more musical than another filter...

Jace-BeOS wrote:I hope someone can give a more satisfying answer to this than what I got when I asked what makes one filter more musical than another filter...

The unsatisfying (but possibly more accurate) answer is that different models consist of different things. At the end of the day, IMO they should be replicating the behaviour of a circuit at some level (ie rather than just simulating its output by some sort of internal 'post-processing' on the obvious 'parts' like the oscillator and filter), but the level of detail varies from company to company and synth to synth, and the distinction gets into a grey area anyway.

Consider a delay; a 'pure' digital delay is a very simple thing. It's basically just a list of the most recent N samples, and the output is plucked off the list.
Making that sound 'like' an analog delay could be done with some basic filtering and maybe companding, plus some other stuff to approximate the sonic characteristics. You could pretty much do all of that with discrete 'blocks' of code, each doing a particular thing to process the clean digital delay into something sounding like an analog one.
Making that a model of an analog delay needs more than that, though, IMO. BBD delays have clock bleed, and they distort, corrupt, and mess up the signal in subtle ways that interact with each other. BBDs alias, believe it or not. You're no longer thinking about 'post-processing' the digital equivalent; you've got to get closer to what is happening, not just the sound of what results...
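That 'pure' digital delay really is just a list of the most recent N samples. A minimal sketch (Python purely for illustration; the class and names are mine, not from any product) would be a circular buffer:

```python
class DigitalDelay:
    """A 'pure' digital delay: a circular buffer of the last N samples."""

    def __init__(self, delay_samples):
        self.buf = [0.0] * delay_samples  # holds the most recent N samples
        self.pos = 0                      # current write/read position

    def process(self, x):
        y = self.buf[self.pos]            # output the sample written N steps ago
        self.buf[self.pos] = x            # overwrite it with the current input
        self.pos = (self.pos + 1) % len(self.buf)
        return y
```

Feeding an impulse into `DigitalDelay(3)` gives the same impulse back three samples later, and nothing else: no filtering, no noise, no aliasing. Everything 'analog-sounding' has to be added on top of (or instead of) this.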

Modelling at the ultrafine level 'all the physics' you're talking about, though, is the realm of dedicated electronic-circuit emulation software which (invariably AFAIK?) can't really run in realtime...

When I hear "analog modelling" the first thing that comes to mind is something like the PSPICE engine, though I'm sure that most companies aren't actually creating such detailed models of each component, let alone an entire system. I'm inclined to think davidguda is right, but there are probably also some developers who do actually model the analog hardware to some degree. Whether they go so far as to incorporate details such as temperature effects on conduction, or resistor and capacitor tolerances, is beyond me, but if I had to guess I'd say they probably don't.

if the character of a given analog synth is in how it behaves with temperature, then such a synth must be very boring

i thought analog modeling is about making digital synths behave more like analog ones
but simulating the circuits is another thing
should you model how temperature changes in the air so that you can then model how a thermistor behaves to counteract oscillator tuning drift?
this is stupid IMO

It doesn't matter how it sounds... as long as it has BASS and it's LOUD!
irc.freenode.net >>> #kvr

I would have to agree that modelling down to such a low level (more on the physics side of things) seems a bit ridiculous for software synths, especially when most of the sound would come from the components used. Of course, each of the components in the signal path has its own intricacies, which end up coloring the overall sound, but the term "analog modelling" in a softsynth sense doesn't really seem to refer to analog circuit modelling so much as modelling entire analog systems from a higher level. On a side note, I just came across this on the U-he website: "Diva is the first native software synth that applies methods from industrial circuit simulators (e.g. PSpice) in realtime".

whyterabbyt wrote:Modelling at the ultrafine level 'all the physics' you're talking about, though, is the realm of dedicated electronic-circuit emulation software which (invariably AFAIK?) can't really run in realtime...

SPICE remains the gold standard for analogue simulation and is not designed to run real-time. It creates a system of equations that it then attempts to solve iteratively, which takes a while. The trouble with analogue circuits is that the system is constantly hunting for equilibrium and, as feedback is used almost universally, y depends on x which depends on y etc. The only way to get there is to recalculate the system of equations every time the system is 'disturbed' (ie the signal changes).
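To illustrate the kind of iteration described above, here is a toy Newton-Raphson solve of a single implicit node equation: a resistor driving a symmetric diode pair, where the node voltage appears on both sides of the equation. The component values are hypothetical, and this is a sketch of the principle, not of SPICE itself:

```python
import math

# Hypothetical component values, for illustration only.
R, Is, Vt = 2200.0, 1e-12, 0.026   # series resistor, saturation current, thermal voltage

def clipper_voltage(x, tol=1e-9, max_iter=50):
    """Solve v + R*Is*(e^(v/Vt) - e^(-v/Vt)) = x for v by Newton iteration.

    v appears inside the exponentials AND on its own, so there is no
    closed form; like SPICE, we re-solve every time the input x changes.
    """
    v = 0.0
    for _ in range(max_iter):
        e_p, e_m = math.exp(v / Vt), math.exp(-v / Vt)
        f = v + R * Is * (e_p - e_m) - x       # residual of the node equation
        df = 1.0 + R * Is / Vt * (e_p + e_m)   # its derivative w.r.t. v
        step = f / df
        v -= step
        if abs(step) < tol:
            break
    return v
```

Each audio sample would need a fresh solve like this, which is one reason full circuit simulation struggles to run in real time.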

You then have a choice as to how detailed the model is. BSIM4 is the model SPICE uses to simulate MOSFETs, and in terms of physics it goes extremely deep today, right down to quantum interactions; bipolar transistors (ie the kind used in classic amplifier and oscillator stages) are typically covered by the Gummel-Poon model. However, as Moogs etc use antediluvian silicon technology, higher-level abstractions within these models work just fine for the most part. The tricky bit is deciding whether the detail is important or not.

My understanding, from chatting to a few people and from what people such as Andrew Simper and Urs have described, is that SPICE or a close-enough-for-jazz simpler alternative called Qucs gets them started. But, because you can't run them in real time, the process becomes one of abstracting away just enough to maintain good audio accuracy while providing better computational efficiency. Sooner or later you get to a working model without, hopefully, throwing too much detail away.

The Drop, according to Andrew's description on the Cytomic site, maintains some elements of the equation-solving approach. And the feedback estimator Urs described when Diva came out does, I think, a combination of equation solving and more direct techniques depending on how important the feedback component is (ie at high resonance, it's very important).

nerdpatrol wrote:I would have to agree that modelling down to such a low-level (more on the physics side of things) seems a bit ridiculous for software synths, especially when most of the sound would come from the components used. Of course each of the components in the signal path will have its own intricacies, which end up coloring the overall sound but the term analog modelling in a softsynth sense doesn't really seem to refer to analog circuit modelling so much as modelling entire analog systems from a higher level. On a side note I just came across this on the U-he website - "Diva is the first native software synth that applies methods from industrial circuit simulators (e.g. PSpice) in realtime".

I guess I'd see a hierarchy of modelling 'detail' which went along the lines of

+1 to antto. To expand (or possibly just display my ignorance), I'd imagine that it'd be best to only model the parts that sound good and contribute to the sound, not necessarily all the supporting electronics. You know, the parts that matter. Then again, there's probably plenty of room for reasonable people to disagree on what matters... everything interacts, alas. It's tough enough for devs to model and account for everything in a guitar amp or stompbox, where few parts interact in relatively simple ways; I can only imagine what a synth must be like.

There are two general approaches to modeling:
- black box,
- physically informed.

In the first case you set up a system of signal-processing blocks that you think is enough to model the required aspects of the prototype system. Then you come up with an algorithm to "train", i.e. adapt, the variables of the model to match the prototype's behavior. That usually involves a specially tailored input sequence that you run through the prototype. A good example is Softube; you can read more in their patents. Another variation is fine-tuning model parameters by hand using expert listening tests.
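As a toy illustration of the black-box idea (nothing to do with Softube's actual method), here's a tiny model with two free parameters fitted by least squares to input/output measurements of a "prototype":

```python
# Hypothetical model structure: a gain plus a cubic term.
def model(x, a, b):
    return a * x + b * x**3

def fit(xs, ys):
    """Least-squares fit of a and b to measured (x, y) pairs.

    Minimizing sum((a*x + b*x^3 - y)^2) gives the normal equations
        a*s_xx + b*s_x4 = s_xy
        a*s_x4 + b*s_x6 = s_x3y
    which we solve directly by Cramer's rule.
    """
    s_xx = sum(x * x for x in xs)
    s_x4 = sum(x**4 for x in xs)
    s_x6 = sum(x**6 for x in xs)
    s_xy = sum(x * y for x, y in zip(xs, ys))
    s_x3y = sum(x**3 * y for x, y in zip(xs, ys))
    det = s_xx * s_x6 - s_x4 * s_x4
    a = (s_xy * s_x6 - s_x3y * s_x4) / det
    b = (s_xx * s_x3y - s_x4 * s_xy) / det
    return a, b
```

If the "measurements" actually came from y = 0.9x - 0.2x^3, the fit recovers those coefficients; a real black-box trainer does the same thing with far richer model structures and a carefully designed test signal run through the hardware.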

In the second case you actually replicate (to some extent) the signal flow of the prototype. It hardly ever goes as far as modeling the temperature drift, but it's much more complex than just using some filter in the feedback to model the "analogue" losses.

My approach is to examine the schematic and find points where it can be broken down into independent parts (such as voltage-follower buffers or other points with high input impedance). Sometimes I'll change the circuit a bit to introduce such points. Along the way I verify the circuit behavior with SPICE to make sure I'm still within my target tolerance.

After the circuit is segmented, I identify blocks with high nonlinearity and those that are just linear circuits. Ideally they are separate; then the former are modeled as static memoryless nonlinearities, and the latter are conveniently digitized by applying the Laplace transform followed by the bilinear transform. If a block is mixed (i.e. a nonlinearity with memory), I'll make an offline time-domain model based on a numerical solution of a system of nonlinear differential equations, then try out a few simplified models that are implementable in real time while still matching the direct computation model well.

A good example is the diode clipper coupled with a capacitor, often found in typical stompbox overdrive effects. A direct solution would require solving a nonlinear differential equation, while a spectacularly good approximation is achieved by splitting it into a high-pass filter followed by a static waveshaper. There's a good paper on this analysis; I can find a link if anyone is interested.
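The high-pass-plus-static-waveshaper approximation mentioned above might be sketched like this; the sample rate, cutoff, drive and the tanh() shape are stand-ins I've chosen for illustration, not values from any actual paper or product:

```python
import math

FS = 44100.0   # sample rate
FC = 720.0     # hypothetical high-pass cutoff, Hz

class OnePoleHighpass:
    """H(s) = s / (s + wc), digitized with the (prewarped) bilinear transform."""

    def __init__(self, fc, fs):
        wc = 2 * fs * math.tan(math.pi * fc / fs)  # prewarp the analog cutoff
        k = 2 * fs
        self.b0 = k / (k + wc)
        self.b1 = -self.b0
        self.a1 = (wc - k) / (k + wc)
        self.x1 = self.y1 = 0.0

    def process(self, x):
        y = self.b0 * x + self.b1 * self.x1 - self.a1 * self.y1
        self.x1, self.y1 = x, y
        return y

def clipper(x):
    # Static memoryless nonlinearity standing in for the diode pair.
    return math.tanh(x)

def overdrive(samples, gain=10.0):
    hp = OnePoleHighpass(FC, FS)
    return [clipper(gain * hp.process(s)) for s in samples]
```

The linear part comes straight from the Laplace-domain transfer function via the bilinear transform, and the nonlinear part is a plain waveshaper, exactly the segmentation described above.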

Despite what anyone might want to believe, unless someone says they are modeling components (which they may be doing to some degree... maybe), they are trying to mimic the overall system.

For instance, your classic guitar amps have passive filters for tone controls; one side effect is that the controls interact (cranking the bass may affect the mids, etc.). The "modeled" guitar amp most likely uses active DSP filters and reproduces the control interaction with lookup tables.
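The lookup-table idea could look something like this sketch; the table values are invented, and in practice they'd be measured or computed offline (e.g. from SPICE runs of the real tone stack) so that each knob's effect on the others is baked in:

```python
# coeff_table[i][j]: some filter coefficient at bass step i, treble step j.
# Values are made up for illustration; a real table is built from the circuit.
coeff_table = [
    [0.10, 0.18, 0.25],
    [0.15, 0.30, 0.42],
    [0.22, 0.45, 0.60],
]

def lookup(bass, treble):
    """Bilinear interpolation over knob positions in [0, 1].

    Because the table is indexed by BOTH knobs, the interaction between
    them comes along for free, without solving the passive circuit.
    """
    n = len(coeff_table) - 1
    m = len(coeff_table[0]) - 1
    fi, fj = bass * n, treble * m
    i, j = min(int(fi), n - 1), min(int(fj), m - 1)
    ti, tj = fi - i, fj - j
    c00, c01 = coeff_table[i][j], coeff_table[i][j + 1]
    c10, c11 = coeff_table[i + 1][j], coeff_table[i + 1][j + 1]
    return (c00 * (1 - ti) * (1 - tj) + c01 * (1 - ti) * tj
            + c10 * ti * (1 - tj) + c11 * ti * tj)
```

At runtime the DSP filter just reads its coefficient from `lookup(bass, treble)`; no circuit equations are solved at all.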

Similarly, the amp may simulate power supply sag, but it does so with feedback from the calculated output. What's the point of simulating transformers, etc.? Not everything in the real thing is good. Simulate tubes so accurately that you have to wait for the unit to warm up? Degrade over time? Hum?

Having developed a number of modeled goodies for a company that was founded in modeled goodies, I'm not guessing. And there is no deception—no one claimed they were modeling components—these things model guitar amps, stompboxes, etc., not electronic components.

We like the old stuff because it was good enough to survive, because it sounded good (we let crappy sounding stuff go out of business). Now that we have "perfect" audio, we try to recreate that sound with a model. For a typical old-school guitar amp, that means some filtering, a non-linear gain element, and some more filtering (at the most basic level). That's the model, not the collection of resistors, capacitors, power supply, and tubes. If you do that in a way that it's mimicking the tone and tone control of an AC-30, then it's an AC-30 model.

For the most part, simulating individual components is silly. In one sense, you could argue that if you need to model a circuit on the component level (transistors, capacitors...), you don't understand what the circuit does well enough.

That is indeed true. Everything is done to mimic the overall system. Musicians care about the equipment they've grown to like, not the resistors and capacitors inside it. If it sounds close enough to the whole box then it deserves to be called a model. While typical guitar tone stacks are easily computed directly on a modern PC, most commercial hardware products aimed at cost-effective large-scale production run on very cheap DSP chips and use very simple models that nevertheless sound good enough because of the effort put into tuning them. To give an example, Fender amps with embedded digital modelling are still running on Motorola's DSP56... chips, using look-up tables for all the tone stack coefficients. And Line 6's and Digitech's guitar processors use a chain of n*[biquad] - [static waveshaper] - m*[biquad] to model every kind of real-world amp.
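That n*[biquad] - [static waveshaper] - m*[biquad] chain can be sketched as follows; the coefficients, drive amount and tanh() shape here are placeholders, where a real product would fit all of them to measurements of the amp being modeled:

```python
import math

class Biquad:
    """Direct-form-I biquad; coefficients would come from amp measurements."""

    def __init__(self, b0, b1, b2, a1, a2):
        self.b0, self.b1, self.b2, self.a1, self.a2 = b0, b1, b2, a1, a2
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0

    def process(self, x):
        y = (self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x2, self.x1 = self.x1, x
        self.y2, self.y1 = self.y1, y
        return y

def amp_model(samples, pre, post, drive=8.0):
    """n biquads -> static waveshaper -> m biquads, per sample."""
    out = []
    for x in samples:
        for bq in pre:            # pre-distortion EQ (n biquads)
            x = bq.process(x)
        x = math.tanh(drive * x)  # stand-in for the measured waveshaper
        for bq in post:           # post-distortion EQ (m biquads)
            x = bq.process(x)
        out.append(x)
    return out
```

The whole structure is just a handful of multiply-accumulates and one table-friendly nonlinearity per sample, which is why it fits comfortably on very cheap DSP chips.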

Getting down to the component detail is more of a scientific than a real market-driven interest.

Gamma-UT wrote:SPICE remains the gold standard for analogue simulation and is not designed to run real-time. It creates a system of equations that it then attempts to solve iteratively, which takes a while. The trouble with analogue circuits is that the system is constantly hunting for equilibrium and, as feedback is used almost universally, y depends on x which depends on y etc. The only way to get there is to recalculate the system of equations every time the system is 'disturbed' (ie the signal changes).

well you can put a circuit together in LTspice and (using directives) load data from a .wav, use it as a voltage source, and "render" the output of some node as another .wav
that's how i've tested a few distortion effects before building them
