Welcome to the Piano World Piano Forums
Over 2.5 million posts about pianos, digital pianos, and all types of keyboard instruments

Nigeth, you're wrong - there is no loss of information in a modeled piano because the "information" is created on the fly as needed, not recreated as in a sampled one. It's the same pointless discussion as digital cameras versus analogue... and analogue cameras are pretty much gone by now, aren't they?

Yes, off topic. The perfect digital piano (if there were such a thing) need NOT exactly reproduce an acoustic one. If the two sound the same, then they are the same. Instrumentation might be able to distinguish them. But in the end only the ear and mind matter. If those are satisfied, the digital is as perfect as it needs to be.

So if the arbitrary cutoff frequency is above the threshold of hearing, instrumentation would know. But the ear would not. Similar arguments can be made for any parameter.

But the point is moot, for now. The ear CAN tell the difference between a digital and an acoustic. The question becomes: In what ways does the digital fall short? If someone knows the answer, then the question becomes: Will anyone do anything about those shortcomings?

Is quantization the problem? I doubt it.

Is the limited number of sample levels the problem? I think we're at (or near) the point where that ceases to be the problem.

In digital pianos, the short samples and looping of the native samples ARE a problem. But these are largely eliminated by the better samplers, and become irrelevant with the modelers.

So what's left? Does anyone know what the problems are? (I don't. Do the researchers know??) And will something be done to make things better? (I can't. Can the developers??)

If the two sound the same, then they are the same. Instrumentation might be able to distinguish them. But in the end only the ear and mind matter. If those are satisfied, the digital is as perfect as it needs to be.

Though there can also be instances where some people can hear a difference and others cannot.

Originally Posted By: pv88

There are no answers... since digital technology has not been able to properly emulate acoustic phenomena.*

*This includes every aspect of the way real acoustic piano strings interact with the soundboard, the case, harmonic resonance, etc.

I think the "acoustic space" may be the trickiest part. As I alluded to in another thread... you can record someone playing a piece on a spectacular acoustic piano with the best microphones, play it back through the finest speaker (or pair of speakers) you can get your hands on... and it still probably won't sound like there's a real acoustic piano in the room. And people expect digital pianos to sound indistinguishable from the real thing, through a couple of relatively inexpensive speakers yet. That's part of the cleverness of things like the Avant Grand, which use speakers to throw different aspects of the sound out of the cabinet in different ways. But you can't capture that effect on a recording, or in live performance through a PA. Maybe a binaural headphone system could come close.

But I think this means that, at least as far as a "slab" piano goes, playing it through regular speakers, there's really little hope of getting it to sound indistinguishable from a real piano... I think the more realistic goal is to try to get it to sound as if a real piano were being mic'd and played through speakers, which is not the same thing. But for a home console kind of application, a more sophisticated use of speakers could be more impressive than that.

I'm sorry if we have confused you. So let me try to explain. Any sound - speech, music, noise - is really a combination of sound waves.

A wave has two properties: amplitude, i.e. the power or volume of the wave, and frequency, i.e. how 'fast' or how often the wave vibrates, or changes from highest amplitude to lowest amplitude, in a defined amount of time.

Both of those values, amplitude and frequency, can technically be arbitrarily large or small and the wave is a continuous thing (meaning that you won't find any pauses or gaps when you record it).

Both scales, amplitude and frequency, are of infinite resolution, i.e. you'd only be able to measure one or the other perfectly if you had a measuring instrument with infinite resolution (why exactly that is would lead us too far away from the basic explanation).

A computer or digital system can only store a finite amount of information though, so it is only able to store a representation of your sound that 'kind of' resembles the original.

The computer can only assign a finite number of levels for the representation of the amplitude and it can only assign a finite number of levels for the representation of frequency.

That's true about all kinds of data a computer processes by the way.

Luckily there is a principle at work that a guy named Shannon proved.

He showed that one can 'perfectly' recreate a wave from its digital representation (both amplitude and frequency) if you sample it more than twice per period of its highest frequency component. If you do that, the exact timing of your measurements doesn't even matter - you can always recreate the waveform from the data.

So if I wanted to recreate a waveform with a frequency of 1 Hz (1 period per second), I'd have to measure the amplitude of the wave at least twice per second. If I wanted to capture a signal of 20,000 Hz or less, I'd have to measure 40,000 times per second, and so on.
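The rule just described can be sketched in a couple of lines. This is a toy illustration of the 'more than twice per period' rule only, not any particular converter's behavior:

```python
# Toy sketch of the sampling-rate rule described above: to capture
# frequencies up to f_max you need to sample more than 2 * f_max
# times per second.

def min_sampling_rate(f_max_hz: float) -> float:
    """Minimum rate (samples/second) needed for content up to f_max_hz."""
    return 2.0 * f_max_hz

print(min_sampling_rate(1.0))       # a 1 Hz wave needs at least 2 samples/s
print(min_sampling_rate(20_000.0))  # the audible limit needs 40,000 samples/s
```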

That process is called discretization.

That's why CDs are mastered with a sampling frequency of 44,100 Hz, for example: so that you can store and recreate sounds up to about 20,000 Hz, which is the limit most people can hear.

Turning that data back into sound is basically the same process in reverse.

You also have to measure the amplitude of the wave (loudness, volume, power) and you also only have a finite amount of space for that. Therefore the amplitude is also 'discretized'.

This process is 'lossy', though. If your amplitude falls between two digital levels (and it can, since it has 'infinite' resolution), you have to 'map' it to the nearest lower or higher digital level. So your measured amplitude is slightly lower or slightly higher than the original.

If you recreate sound from the digital value you'll get some additional noise that stems from that mismatch.

Consumer electronics often uses 44.1 kHz at 16 bit, which means that every wave up to an upper limit of about 20 kHz is sampled, and there are 16 bits, or 65,536 (2 to the 16th), different values available to store amplitude information.
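That amplitude 'discretization' can be sketched in a few lines, assuming amplitudes normalized to [-1.0, 1.0] and the 16-bit depth mentioned above (the function and figures are illustrative, not any product's implementation):

```python
# Map a continuous amplitude to the nearest of 2**16 evenly spaced
# levels; the leftover mismatch is what is heard as quantization noise.

def quantize(x: float, bits: int = 16) -> float:
    """Round x (in [-1.0, 1.0]) to the nearest of 2**bits levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)   # spacing between adjacent digital levels
    return round(x / step) * step

original = 0.300000123          # an amplitude that falls between two levels
stored = quantize(original)
error = stored - original       # at most half a step; audible as added noise
print(error)
```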

There's an additional catch, though. In order for that process to work at all, you'll have to 'cut off' all of the frequencies above half the sampling frequency. Otherwise you'll get a very unfortunate effect called aliasing.

Basically, a converter that only measures at, say, 20 Hz (20 times per second) won't be able to discern a signal of a given frequency from one whose frequency differs from it by a multiple of the sampling rate.

It doesn't measure quickly enough, and so all of the harmonics above the threshold of 10 Hz (half the sampling rate - remember, measure more than twice per period) look like waves below that threshold. This can have some very unintended results if you play it back later.
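The folding effect can be demonstrated numerically. Assuming a 20 Hz converter as in the example above, a 26 Hz sine yields exactly the same sample values as a 6 Hz one, because the two frequencies differ by the sampling rate:

```python
import math

fs = 20.0  # assumed sampling rate in Hz

# Sample a 6 Hz sine and a 26 Hz sine at the same instants.
samples_6hz = [math.sin(2 * math.pi * 6 * n / fs) for n in range(8)]
samples_26hz = [math.sin(2 * math.pi * 26 * n / fs) for n in range(8)]

# The two lists agree to floating-point precision: the converter
# cannot tell the 26 Hz wave from its 6 Hz alias.
same = all(abs(a - b) < 1e-9 for a, b in zip(samples_6hz, samples_26hz))
print(same)  # True
```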

To prevent this, you'll have to stop all of the frequencies above that threshold from even entering the A/D converter.

If you have analog-to-digital-to-analog conversion (you record something digitally and then play it back later), this is achieved by inserting a low-pass filter (a filter through which only waves up to a certain frequency can pass) before the analog-to-digital converter, so that the A/D converter only 'sees' frequencies up to a certain value.

So now you have a signal that uses a finite number of levels to store the amplitude and which can only store signals up to an arbitrary frequency threshold.

Everything else is basically 'lost'.

Even if you don't work with an analog input signal but instead create the signal in a computer (for example with some sort of modeling), the basic principle stays the same. You only have a finite (e.g. 16 bit) resolution for the amplitude and only a finite (say up to 20 kHz) range for the frequency information.

For it to be different would require a computer with infinite storage and the capability to handle an infinite amount of data in a finite amount of time.

Therefore there is a finite number of 'outputs' or steps you can create on a digital system.

There is a lively debate about the point at which the amount of useful information that is lost is small enough that people won't notice it. Some say you'll always notice it, some say the threshold is at 24 bit/96 kHz, and some say even the invasive 'lossy' encoding of MP3 isn't really noticeable.

There are additional technical limitations in current systems: for example, MIDI only offers 7 bits (128 levels) of velocity resolution in the standard implementation.
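That bottleneck is easy to picture: whatever resolution the key sensors have, a standard note-on velocity must be squeezed into 7 bits. A hypothetical sketch (the 16-bit sensor range here is an assumption for illustration):

```python
# Fold a hypothetical high-resolution sensor reading down to the
# 128 values (0-127) a standard MIDI velocity byte can carry.

def to_midi_velocity(raw: int, raw_max: int = 65535) -> int:
    """Scale a 16-bit sensor reading to the 7-bit MIDI range."""
    return round(raw * 127 / raw_max)

print(to_midi_velocity(0))       # 0
print(to_midi_velocity(65535))   # 127: everything in between is 126 steps
```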

Nigeth, you're wrong - there is no loss of information in a modeled piano because the "information" is created on the fly as needed, not recreated as in a sampled one. It's the same pointless discussion as digital cameras versus analogue... and analogue cameras are pretty much gone by now, aren't they?

A modeling algorithm 'recreates' the sound of an acoustic within the limits of the digital system and the fidelity of the digital to analog conversion.

This recreation is an approximation that leaves out 'information' the real instrument would provide but the digital one cannot due to technical limits. So there certainly is a loss of information.

The only real argument is whether or not that approximation is close enough to the original so that a human would be unable to notice said difference.

20 years of the record vs. CD argument have told me that people will never agree on what constitutes 'unnoticeable' or 'good enough', so good luck with that.

I'd argue that if the real criterion is 'you can't notice the difference', then you can't really argue that modeling is better than sampling or vice versa. At some point both technologies will cross into the realm of 'can't notice the difference', at which point the whole argument ceases to matter.

Yes, off topic. The perfect digital piano (if there were such a thing) need NOT exactly reproduce an acoustic one. If the two sound the same, then they are the same. Instrumentation might be able to distinguish them. But in the end only the ear and mind matter. If those are satisfied, the digital is as perfect as it needs to be.

So if the arbitrary cutoff frequency is above the threshold of hearing, instrumentation would know. But the ear would not. Similar arguments can be made for any parameter.

Which is my point because if the quality threshold is 'you won't notice the difference' then there can be no preference of modeling or sampling since both technologies have the potential to achieve said goal.

Quote:

But the point is moot, for now. The ear CAN tell the difference between a digital and an acoustic. The question becomes: In what ways does the digital fall short? If someone knows the answer, then the question becomes: Will anyone do anything about those shortcomings?

For modeling the answer is most probably that the algorithms are much simpler than they'd need to be in order to stay below a certain threshold with regards to computational power and price.

Modeling the complex interactions of all of the parts of an acoustic is a very hard and computationally expensive math problem. I'd guess that certain relationships and interactions between the different parts aren't even entirely understood yet and can't therefore be modeled at all.

So I'd guess there is still some research required, and enough processing power to do all of the differential equation and Fourier analysis work.

For sampling, the biggest issue is storage space (a single sample set for one acoustic grand can now use up to 12 gigabytes of disk space) and the bandwidth to handle an even greater number of samples for different velocity levels.
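A back-of-envelope calculation shows why the numbers climb so fast. The figures below (88 keys, 127 velocity layers, 30-second stereo samples at CD quality, no compression) are assumptions for illustration, not any vendor's actual specification:

```python
# Rough size of a hypothetical 'complete' uncompressed sample set.
keys = 88
velocity_layers = 127
seconds_per_sample = 30
sample_rate = 44_100        # samples per second
channels = 2                # stereo
bytes_per_sample = 2        # 16 bit

total_bytes = (keys * velocity_layers * seconds_per_sample
               * sample_rate * channels * bytes_per_sample)
print(total_bytes / 1e9)    # about 59 GB, and that's one velocity
                            # dimension only - no pedal or resonance layers
```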

And one problem I can't really put a finger on but someone else in this thread posited also.

Somehow even the best sampled piano, reproduced over the best sound equipment, doesn't feel like the 'real deal'. There is probably a sensory element that is not reproduced by the recording.

Nigeth, you haven't quite explained why you think that modeled DP sound is no different from sampled DP sound, when the latter is derived from so few samples per note, whereas the former is generated from keystrike, with no preset number. Unless you're saying that the modeled sound also originates from no more than the same number of different possible sounds per note, only that it's computer-generated rather than from a recording, which I don't think is what you mean.

Any pianist worth his salt can produce more variations of dynamics - each with its own degree of overtones generated - than the number of actual sampled notes as recorded. And that's just for one note. When notes are combined, generating various resonances with different weighting for each individual note within say, an eight note chord (which any pianist worth his salt can do), a modeled DP runs rings around any sampled DP, which just sounds like 8 notes all distinct from each other, rather than intermingling and producing different timbres depending on how the pianist voiced (in the classical sense) that chord, and what preceded it, like the decaying notes of another chord. In other words, the modeled sound produces a pretty convincing analog-like representation of an acoustic piano with all the blurring and clashing sounds and overtones; the sampled sound is sterile, and quite unlike what happens in the real thing. Even if, according to you (if I understand right), the modeled sounds have no more different sounds available than sampled sounds.

A CD obviously doesn't give the complete original sound recording, being a digital representation - just like a digital photo compared to slide film, which I presume is your analogy above. But the information loss on CD (though not MP3) is to all intents and purposes undetectable by the human ear (though I'm sure most people can tell the difference between a digital photo and the smoother slide film photo, even if the former has lots of pixels), whereas the difference between modeled and sampled DP sound is easily detectable (even from just one struck note, and even disregarding the looping that occurs with sampled DPs).

_________________________
"I don't play accurately - anyone can play accurately - but I play with wonderful expression. As far as the piano is concerned, sentiment is my forte. I keep science for Life."

Nigeth, you're wrong - there is no loss of information in a modeled piano because the "information" is created on the fly as needed, not recreated as in a sampled one. It's the same pointless discussion as digital cameras versus analogue... and analogue cameras are pretty much gone by now, aren't they?

A modeling algorithm 'recreates' the sound of an acoustic within the limits of the digital system and the fidelity of the digital to analog conversion.

This recreation is an approximation that leaves out 'information' the real instrument would provide but the digital one cannot due to technical limits. So there certainly is a loss of information.

In that sense, even real acoustic instruments experience loss of information in the real world, due to weather, temperature, humidity and the distance from the listener....

Nigeth, you haven't quite explained why you think that modeled DP sound is no different from sampled DP sound, when the latter is derived from so few samples per note, whereas the former is generated from keystrike, with no preset number. Unless you're saying that the modeled sound also originates from no more than the same number of different possible sounds per note, only that it's computer-generated rather than from a recording, which I don't think is what you mean.

That's EXACTLY what I mean.

Quote:

Any pianist worth his salt can produce more variations of dynamics - each with its own degree of overtones generated - than the number of actual sampled notes as recorded. And that's just for one note. When notes are combined, generating various resonances with different weighting for each individual note within say, an eight note chord (which any pianist worth his salt can do), a modeled DP runs rings around any sampled DP, which just sounds like 8 notes all distinct from each other, rather than intermingling and producing different timbres depending on how the pianist voiced (in the classical sense) that chord

This would be entirely and undeniably true if we were talking about the real instrument.

I won't even disagree that modeling might sound better under certain circumstances.

I just sense some sort of confusion here about how digital instruments (regardless if they are modeled or sampled) actually work and instead I hear a lot of conjecture about how they're supposed to work.

High quality sampling doesn't simply "just sound like 8 notes all distinct from each other" - there is some modeling and synthesis going on.

Most companies model sympathetic resonance, for example, or improve the sampling with synthesis and modeling to blend samples for different velocities together, etc.

So high quality sampling that is helped by modeling and synthesis sounds better than you give it credit for.

While a pianist might be able to "produce more variations of dynamics" than there are samples, the digital piano with modeling will not be able to reproduce all of them.

It simply can't.

You play on a simulated key action. The sensors that measure the key travel and velocity convert that information into digital numbers of finite length and resolution (say 16 bit or 24 bit), so right there your theoretically limitless number of "variations of dynamics" is converted into no more than 65,536 different levels (for 16 bit) per keystroke.

After that, the modeling algorithms use that digital information to calculate what a real piano would sound like. The quality of that process is determined by several factors:

- Is the modeling realistic and complete enough to recreate the sound of the real instrument within a quality threshold that makes the difference unnoticeable?

- Could my hardware even run such a model if it existed?

- Is the fidelity and resolution of my digital system good enough to actually reproduce all I want it to reproduce?

What I object to, quite simply, is the notion held by many people in this thread that modeling is inherently 'better' - that only modeling is able to really recreate the sound of an acoustic piano - because they attribute some sort of magical properties to it that sampling supposedly won't ever be able to match.

Both methods are 'digital representations' that lose information.

One or the other might sound better or more real to you depending on the current state of the art of the competing technologies. I won't even argue about that.

If you factor out business decisions, feasibility and processing power, however, as some people seem to do to make their case for modeling, then there is no inherent reason why one has to be better than the other. Once both are good enough that the human ear won't notice the difference, it's a matter of preference and not quality.

If you live in the real world and have to consider things like price, technical feasibility and your target audience however then there are differences.

So to get back to the question of the OP.

Most instruments today are sampled because it's cheaper, sampling is less taxing on the CPU and hardware (so you can offer more features in a smaller package), and the technology is more mature.

It's also much cheaper simply to spend the effort to record a piano instead of spending a huge budget on R&D for a good modeling algorithm, especially when you can improve sampling with synthesis and by modeling parts of the instrument.

Thirdly, for most use cases (band context), the supposed higher fidelity is wasted due to environmental factors (fidelity limits of the PA, amplifiers, sound reproduction properties of the hall you play in, etc.).

And last but not least, in contexts where that matters (concert halls), people would most probably use a real acoustic.

In that sense, even real acoustic instruments experience loss of information in the real world, due to weather, temperature, humidity and the distance from the listener....

For all practical intents and purposes: Yes.

In theory: No.

The harmonic series of overtones of a base tone goes on to infinity, so in theory the n-th overtone of, say, a c' is still part of the sound for arbitrarily large n, even as n -> infinity.

Since the sound of a piano is the complex combination of all sounds, including all of their harmonics, plus the interaction with the soundboard, corpus and resonating elements, the number of different combinations is truly 'infinite'.
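The 'infinite' harmonic series is easy to write down. This sketch uses ideal integer multiples of the fundamental; real piano strings stretch the upper partials slightly (inharmonicity), which is ignored here:

```python
# Partials of an ideal vibrating string: f, 2f, 3f, ... without bound.

def harmonics(fundamental_hz: float, n: int) -> list:
    """First n partials of an ideal string with the given fundamental."""
    return [fundamental_hz * k for k in range(1, n + 1)]

# c' (middle C) is roughly 261.63 Hz; by the 77th partial the series
# has already passed the ~20 kHz hearing limit, yet it never ends.
print(harmonics(261.63, 5))
```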

We simply agreed, though, that after a certain point the additional 'information' is below the capability of our sensory organs to notice, and becomes 'unnoticeable' by human ears, so it doesn't matter if it is left out.

The lively debate about digital vs. analog shows, however, that those limits might be a little too arbitrarily defined.

You could spend a fortune on an acoustic concert grand and get a piano you don't like, so why do many people pretend that acoustic always equals perfection, in the same way they pretend sampling is always inferior to modelling?

It very often isn't. The vast majority of digital pianos are perfectly passable replicas of acoustics.

The vast majority of digital pianos are perfectly passable replicas of acoustics.

No. The vast majority of digital pianos are somewhat close to an acoustic piano in terms of sound and action so as to make for a somewhat satisfying pianistic experience. Calling DPs replicas of acoustics is a massive overstatement.

The vast majority of digital pianos are perfectly passable replicas of acoustics.

No. The vast majority of digital pianos are somewhat close to an acoustic piano in terms of sound and action so as to make for a somewhat satisfying pianistic experience. Calling DPs replicas of acoustics is a massive overstatement.

I own both an acoustic piano and several digital pianos and am glad to have and use them all for different purposes. I'm not on either side of the "debate," since for me there is no debate.

I do think, though, that we'll be able to say convincingly that a digital piano is a passable replica of an acoustic piano when we're able to run blind listening tests in which experienced pianists cannot distinguish between the sound of a digital and that of an acoustic.

With the qualification that I'm largely innocent of software virtual pianos (so far), I'm not sure we've reached that point yet.

The vast majority of digital pianos are perfectly passable replicas of acoustics.

No. The vast majority of digital pianos are somewhat close to an acoustic piano in terms of sound and action so as to make for a somewhat satisfying pianistic experience. Calling DPs replicas of acoustics is a massive overstatement.

I do think, though, that we'll be able to say convincingly that a digital piano is a passable replica of an acoustic piano when we're able to run blind listening tests in which experienced pianists cannot distinguish between the sound of a digital and that of an acoustic.

Well, that hasn't happened yet though, has it? All current DPs fall short in the sound department - some are better than others. Decays are short, resonance is below par. That's just as a listener; it's when you play on one that you realise how lacking in life DPs are. There is a certain sterility and deadness to the sound. The tone colour isn't there either. I find it amusing when people can't tell the difference between a real piano and a DP - it calls into question their level of musicianship more than it proves that DPs are "perfectly passable replicas" of acoustic pianos. There's a fair way to go before that can legitimately be said.

DPs do certain jobs well, but they haven't cracked the magic of a real piano yet - there's considerable work to be done first. I'd imagine the new Kawai VPC or AG + a top software piano would be the closest thing we have so far.

Why is it so forbidden to criticise DPs anyway? Why shouldn't we have high standards? Are we not allowed to be discriminating when it comes to music?

My girlfriend used to say a digital piano is like a dildo compared to the real thing, if you get her drift... It's actually a pretty good analogy. Easy to use, portable, and you can use headphones, but other than that...


Nigeth,... Any pianist worth his salt can produce more variations of dynamics - each with its own degree of overtones generated - than the number of actual sampled notes as recorded. And that's just for one note....

With the 128-level limit imposed by MIDI, I don't think this is a factor at all.

Originally Posted By: bennevis

When notes are combined, generating various resonances with different weighting for each individual note within say, an eight note chord (which any pianist worth his salt can do), a modeled DP runs rings over any sampled DP which just sounds like 8 notes all distinct from each other, rather than intermingling and producing different timbres depending on how the pianist voiced (in the classical sense) that chord, and what preceded it, like the decaying notes of another chord. In other words, the modeled sound produces a pretty convincing analog-like representation of an acoustic piano with all the blurring and clashing sounds and overtones; the sampled sound is sterile, and quite unlike what happens in the real thing. Even if, according to you (if I understand right), the modeled sounds have no more different sounds available than sampled sounds.

Well said. If there has to be an advantage of modeled over sampled, it is that one: interactions between notes. You can sample 88 notes at 128 levels, but if the sound of a previous note alters in any significant way the attack of a new one (and I think it probably does), then it will be really difficult and expensive to sample all the possibilities. And let's not even talk about the damper pedal at every possible level.

So, forget everything you said about steps and staircases, because IMO the difference is not there when you just play a single note.

If there has to be an advantage of modeled over sampled, it is that one: interactions between notes. You can sample 88 notes at 128 levels, but if the sound of a previous note alters in any significant way the attack of a new one (and I think it probably does), then it will be really difficult and expensive to sample all the possibilities. And let's not even talk about the damper pedal at every possible level.

So, forget everything you said about steps and staircases, because IMO the difference is not there when you just play a single note.

Carlos

Sampling and modeling are so diametrically different in concept to each other that they seem to appeal to different people, who may be looking for different things in a DP. Regardless of what or how 'similar' they really are after you take away the initial source of their sounds (as implied by previous posts), the fact remains that they do respond and behave differently when you actually play them. And dewster's tests in his DPBSD project also show up the marked differences between sampled and modeled DPs.

All my classical pianist friends, when they try out my V-Piano, say things along the lines of 'it's the only digital that feels and responds like a real piano'; and that if they had to exchange their acoustics for a digital, it's the only one they would consider. On the other hand, my two pop and jazz-playing acquaintances weren't at all impressed (and they are the ones who play regularly on DPs, and own several themselves) - partly because, as someone here once said, 'it's a one-trick pony', and it's 'poor value for money'.

Well, you pays your money and you makes your choice......

_________________________
"I don't play accurately - anyone can play accurately - but I play with wonderful expression. As far as the piano is concerned, sentiment is my forte. I keep science for Life."

You can sample 88 notes at 128 levels, but if the sound of a previous note alters in any significant way the attack of a new one (and I think it probably does), then it will be really difficult and expensive to sample all the possibilities...So, forget everything you said about steep and steeples because IMO the difference is not there when you just play a single note.

I agree. Within the 126 levels supported by MIDI (0 = silence; 1 would be key down with no sound, for a proper implementation of piano behavior), I don't see any inherent advantage to modeling over sampling for duplicating the piano sound from a given point in space, short of being able to do it with less memory. You could model different mic placements for different variations, rather than having to sample each of those mic placements, or different lid heights, or how perfectly in tune each of the 200+ strings is, or how worn the hammers and felts are, etc.... but these things give you more piano sounds, not necessarily any single better one.

And contrary to what someone said, modeling has no inherent benefit in creating longer decays. It can arguably create a longer, more natural decay if you're working on a system with less memory, but that's due to a limitation of hardware, not a limitation of sampling.

And you really don't have to sample more than 126 levels even for this theoretical perfectly sampled piano. The dynamic range of an acoustic piano (the difference between its quietest note and loudest) is probably under 63 decibels, so sampling at 126 levels would permit the samples at different velocities to be within a half decibel of each other, which is just about the limit of the smallest difference anyone could perceive. (Though getting the velocity response right is another issue, and I think this might be where being able to generate more than 127 values and map them accordingly could be valuable.)
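The arithmetic in that paragraph is worth making explicit. Assuming the 63 dB dynamic range stated above, spread evenly over 126 usable velocity levels:

```python
# Half-decibel spacing claimed above: 63 dB across 126 levels.
dynamic_range_db = 63.0   # assumed quietest-to-loudest span of a piano
usable_levels = 126       # MIDI velocities 1..126 in the scheme described

step_db = dynamic_range_db / usable_levels
print(step_db)  # 0.5 dB between adjacent velocity layers
```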

Of course, whether you have electronics/amplification/speakers that can produce the entire 63 dB range of an acoustic piano at real-life levels and without adding distortion of its own is another problem altogether.

Anyway, it seems to me, in terms of a single accurate piano sound, the advantage of modeling only comes into play when replicating the probably infinite possible interactions between multiple notes. The sound of striking and holding middle C by itself could be captured in 126 30-second (or whatever) samples. But without the damper pedal, the sound of that strike will vary with whether you are holding down 1, 2, 3, or more other notes when you strike it, and specifically which notes they are, and possibly how loudly those previous notes had been struck in the first place; and if the pedal is down, it might change depending on how loudly each other string is sounding, which would be affected by the sequence and velocity of each and every other note you had struck since depressing the pedal. That is, the "resonances" which the OP implied were unimportant are, in a sense, the only real inherent sonic benefit to modeling in the first place, as I see it.
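The combinatorial explosion described above can be put in numbers. This is a deliberately crude count (it ignores velocity, pedal depth, and timing entirely), just to show the scale:

```python
import math

# Which other keys are held when a note is struck: each of the other
# 87 keys is either down or up, giving 2**87 contexts per note.
held_key_contexts = 2 ** 87
print(held_key_contexts)        # about 1.5e26 contexts, per note

# Even restricting to unordered 8-note chords out of 88 keys:
chords_of_8 = math.comb(88, 8)
print(chords_of_8)              # tens of billions of distinct chords
```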

Which is not to say that anyone has successfully modeled all those string interactions. Just that I could see modeling being a better solution there than infinite sampling!

You either spend lots of memory for samples so that you don't need a powerful CPU or you use a powerful CPU so that you don't need a lot of memory with modeling.

A good computational model might be easier to tweak, though. But sampled pianos also do some modeling, to let you customize the sound or to add effects that are difficult to reproduce with samples alone (string resonance, for example).

So we'll probably see more hybrid approaches in the future that use quality sample sets as a base and modeling to make them more life-like.

Right now, fast storage and RAM are so much cheaper than CPU power that it's easier to simply throw gigabyte after gigabyte of samples at the problem and supplement it with modeling than to implement a realistic model of a certain complexity.

What are we talking about here? Probably 99% of all music is listened to on and through electronic/digital media, so whether it comes from an acoustic or a digital instrument doesn't really matter, because it'll be converted to a digital signal anyway and nobody can tell what the original source was. So why bother?

A lot of people would gladly own a real upright or grand piano for practice and playing. Alas, for many people owning a piano is not practical or feasible.

You need the money to buy and maintain one, you need the space for the instrument, you might not be able to practice at certain times because of the noise. It's heavy and you can't move it to gigs/recitals.

So people go digital. That doesn't mean however that just because you bought a digital piano you don't want something that sounds and feels like a real piano.

What are we talking about here? Probably 99% of all music is listened to on and through electronic/digital media, so whether it comes from an acoustic or a digital instrument doesn't really matter, because it'll be converted to a digital signal anyway and nobody can tell what the original source was. So why bother?

Anybody who's played a real piano at a reasonable level can hear the difference. And sitting down and playing it, it's a no-brainer. Unless you are talking about pianos sitting in a band mix.

I think sampling is not going anywhere fast, but the resonance modelling needs a lot more work. It also seems to me that by the time they are able to model resonance well enough to cope with all the vast complexity of dozens of strings interacting with each other with the pedal down, that will be the time when they are probably also good enough to model all the notes too. Until then, samplers still sound decent, even though the resonance is a bit disappointing. I do think full modelling will eventually take over, but perhaps not for quite a while.

I do think full modelling will eventually take over, but perhaps not for quite a while.

And not when there's still a complete monopoly even now, 4 years after the V-Piano was introduced, and the biggest and most successful DP manufacturer (Yamaha) is still content to rest on its laurels, just tinkering around the edges.......

_________________________
"I don't play accurately - anyone can play accurately - but I play with wonderful expression. As far as the piano is concerned, sentiment is my forte. I keep science for Life."

And not when there's still a complete monopoly even now, 4 years after the V-Piano was introduced

Progress is slow - this is not trivial stuff. Pianoteq is the main software-based modeled piano, and they're not worlds ahead of where they were four years ago either; and unlike Roland, they don't have the overhead of having to design a whole hardware system to go with it. (And I think some people still prefer the Roland even today.) So many people here seem to think that, if they can conceive of it, an engineer should easily be able to do it!

Originally Posted By: bennevis

(Yamaha) still content to rest on its laurels, just tinkering around the edges.......

Yamaha's SCM does add modeling to their samples (CP1, CP5, CP50). I wonder if we'll see anything new at NAMM.