This is getting silly. I'll give you your 16 vs. 24 bit examples if you need them, but a "detailed assessment" will require significant amounts of time. There were several examples on this board already, btw, specifically referring to sending multiple signals through the same delay line in order to deal with busses. Switching bit depth within one system will create artifacts regardless of how you do it.

As I wrote, Clavia at this very moment states on their site that the G2 uses "true 24 bit processing". I never read this official errata; I'm sure it was published in two places, but I'm unaware what those places might be or how potential G2 buyers might get to this information. If I were looking to purchase a synth, I'd look at the specs on the manufacturer's website. I'm personally absolutely sure some G2 owners are currently unaware of this, because I had to personally tell it to some. If it's wishful thinking to expect announced updates within two years, or to expect data that got changed during the development phase to be updated on the website (especially since the site got completely overhauled in the meantime), then I'd like to know what could be expected.

Malice is a big word; neglect would be a better one.

About the crystal ball thing: no, I can't look into the future, but it was years ago that Clavia explicitly told me the issues I had would be solved "in the next update", while the Clavia site right now explicitly says the NM is discontinued and that they will "fully focus on the existing products and amazing new product development." Those two can't both be true at the same time, so either must be untrue. I can't look into the future, but I can look at what Clavia is saying about the software on that section of their site. Those two don't line up, hence my disappointment.

About physical modeling: yes, Tassman is better suited for this, but I'm not selling Tassman; if it's up to me those different architectures will remain different and continue to cater to wildly different markets. While we are resenting implications anyway: I resent the implication that my mentioning PM here has anything to do with my ties to AAS. Instead, both my mentioning it and my having connections to AAS stem from the same source, namely my interest in PM. Chances are the people interested in PM will be the ones who point out problems in that field, and indeed those will also be the people who use, and perhaps even are involved in, specialised products for that purpose. For DIY delay-based implementations of PM I probably wouldn't currently use Tassman either. I don't think anyone would; it's not especially well suited for that and nobody claims it is. I would, however, use a system that featured the option of writing floating point values to arbitrary places in tables that could also be read in arbitrary ways. Either way, you are right; I do not know how to do some of the things I do in Tassman in the G2. In fact I strongly suspect many of those could not be done at all, but I'm not particularly inclined to try and prove negatives.

On the art analogy: I took your analogy to refer to this because it was placed within this thread and I couldn't see any other relevance to it within this context. If it was a side remark, then I can just thank you for placing that side remark here and not in a thread on, say, the flooding of New Orleans, since here it gave me a nice hook on how I look at choosing instruments from the perspective of a sound artist. I'm very sorry if I misread the intended meaning of that section, but trust you'll see how I got that misguided idea._________________Kassen

You might well be right in that I phrased things incorrectly. I can't recall what I said exactly (or where), so I cannot officially rephrase here to set things right.

However, I stick to my simple point:

If I use the delay module to, uhm, well, simply delay a signal at the end of an audio chain, the resulting signal hitting the output will only have 16 bits of audio information. Even if the DACs are 24 bits. The lowest 8 bits are gone forever.

This is not "true 24/96 processing throughout" in my book.
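As an illustration of that first point (a generic Python sketch of truncation, not the G2's actual DSP code): keeping only the top 16 bits of a 24 bit sample zeroes the low byte, and no amount of later 24 bit processing can bring it back.

```python
def truncate_to_16_of_24(x: int) -> int:
    """Drop the lowest 8 bits of a 24-bit sample (two's-complement integer)."""
    return x & ~0xFF  # low byte zeroed; that information is gone

sample = 0x123456                 # a 24-bit value
trunc = truncate_to_16_of_24(sample)
print(hex(trunc))                 # 0x123400: the 0x56 byte is lost
print(sample - trunc)             # 86, the irrecoverable residue
```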

If I use the delay module as a primary audio source (e.g. a waveguide resonator), this source will only have 16 bits of dynamic depth. Even if subsequent modules operate at 24 bits, they cannot completely make up for the lack of signal information and resolution (as is demonstrated easily in the buildup of noise artefacts in long decays of KS oscillators).

This is not "true 24/96 processing throughout" in my book.

Get me right: I love the G2. It's my only axe these days.

It's just that I personally have been somewhat misled by Clavia. I bought the G2 especially for what it was advertised as: a true 24/96 machine suitable for (among other things) physical modelling. Something which, for one related reason (16 bit delays), it is not quite, IMO. The physical models audibly suffer from the lower dynamic resolution. (According to Chet Singer, the NM1 sounds better for this since its (albeit very short) delays are 24 bits wide.)

So, I feel let down here.

This is my only real gripe with the G2.

Other phM modules would be nice, but as you say, it's a modular synth, and having too many pre-built macro modules distracts from the "sportive element" in modular synthesis. I agree 100%. Building and troubleshooting the stuff oneself is way more fun anyway. (I could just go out and buy a second-hand Yamaha VL1 otherwise.)

FYI, I'm not into pure "imitative" phM. It's just that I would like better audio quality. So I really, really hope for a 24 bit option on the delays. I think everybody would like that.

I think you can use "bandwidth" as a word there. It may not (obviously) hold true in the musical, spectral sense of that word, but at higher bit depths you clearly are processing more information per unit of time, which is a perfectly valid use of that word and which can be quite significant in complex systems.

Additionally, though the lowest and highest possible frequencies won't be changed, more bit depth will affect how much control you have over what frequencies are, and especially which ones aren't, present in the signal and what ratio they are mixed at; if bit depth didn't matter we'd be listening to square waves all day._________________Kassen

However, a patch mutator is the biggest bang for the buck for Clavia, because it allows the blind to make their own sounds with the Nord Modular..
a thing that was previously exclusively in the hands of expert synthesists.

Umm, there are blind people who are expert synthesists.

A bit late.. with "the blind" I was referring to the mean rompler and preset user... could have called them scum but found blind a more friendly term...

You might well be right in that I phrased things incorrectly. I can't recall what I said exactly (or where), so I cannot officially rephrase here to set things right.

However, I stick to my simple point:

If I use the delay module to, uhm, well, simply delay a signal at the end of an audio chain, the resulting signal hitting the output will only have 16 bits of audio information. Even if the DACs are 24 bits. The lowest 8 bits are gone forever.

This is not "true 24/96 processing throughout" in my book.

.......

best,
tim

Big discussion
I personally don't think that it was wise to use 16 bit RAM for the delays..
And whether misprint or tactical information bending,
the word 16 bit is killing reputations these days.. that's for sure...

So question...
Why has Clavia not used 24 bit chips?
Is there a good reason for that? Or was it just a mistake during project management that they regret now but can't change anymore?

You might well be right in that I phrased things incorrectly. I can't recall what I said exactly (or where), so I cannot officially rephrase here to set things right.

However, I stick to my simple point:

If I use the delay module to, uhm, well, simply delay a signal at the end of an audio chain, the resulting signal hitting the output will only have 16 bits of audio information. Even if the DACs are 24 bits. The lowest 8 bits are gone forever.

This is not "true 24/96 processing throughout" in my book.

Ok, I guess that if one has a full 24-bit setup, meaning that the physical output of the G2 goes into a 24-bit mixing and recording system, and a very low G2 output signal is heavily amplified later, one could actually hear it. The point is that in practice this situation would rarely occur.

If the output of the G2 is at standard output level (e.g. what one might call 0 dB on the mixer fader), and taking two bits of headroom on the G2 into account, and the lowest 8 bits are truncated one would have 14 bits left, which is equal to CD quality normalized to -12dB average value. And if the G2 signal is e.g. -3dB below its headroom, it would still account for CD quality normalized to -3dB (which is a pretty loud CD).
One would need really good ears to hear the lowest 8 bits of a 24 bit signal in a mix that is normalized to -12 dB; we're talking about signals at least 84 dB below normalization level here! Signals that get lost anyway when a CD is the final medium for distribution. And the ear would certainly mask them away compared to the sounds in the mix that are at least 84 dB louder.
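The arithmetic here can be checked directly: each bit contributes about 6.02 dB of dynamic range, so the 14 bits left after two bits of headroom and eight truncated bits give roughly 84 dB. A quick sketch (plain Python, just the textbook formula):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range of an n-bit signal: 20*log10(2^n)."""
    return 20.0 * math.log10(2 ** bits)

# 24-bit signal, 2 bits of headroom, lowest 8 bits truncated -> 14 usable bits
usable_bits = 24 - 2 - 8
print(round(dynamic_range_db(1), 2))            # about 6.02 dB per bit
print(round(dynamic_range_db(usable_bits), 1))  # about 84.3 dB for 14 bits
```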

In the Dutch language there is an expression that translates to something like "making love to ants". I think it applies here.

In fact, 24 bits is not specifically used to aurally improve the quality of a single sound, it is instead mainly used to be able to mix lots and lots of tracks that are recorded at 16 or 18 bits. I have an AKG C414B-ULS mic here and some pretty good monitors, and I really don't hear a difference if I record on 16 bits at 96kHz or at 24 bits at 96kHz.

tim wrote:

If I use the delay module as a primary audio source (eg. a waveguide resonator) this source will only have 16 bits of dynamic depth. Even if subsequent modules operate at 24 bit, they cannot completely make up the lack signal information and resolution (as is demonstrated easily in the buildup of noise artefacts in long decays of KS oscillators).

This is not "true 24/96 processing throughout" in my book.

In my experience it is not the 16 bits in the delay memory that accounts for a possible noise build-up in PM models. What instead can cause severe noise build-up is the modulation of the delay lengths, e.g. by applying vibrato or pitch bend. Because when this type of modulation is applied, little chunks of memory might be skipped and later reinserted at the wrong place, which is much, much worse than losing those 8 LSBs of the 24 bit signal. The G2 delays do not appear to have the excellent anti-aliasing (intersample interpolation) that was on the short delay module on the old NM. But this is an entirely different thing. Actually, the major nasty issue to tackle when doing PM is what to do about residues in the waveguide when the waveguide is lengthened. I actually suspect that in this discussion this issue is confused with the 16-bit thingy. Because one has to know how and why there is a noise build-up, and what is actually able to cause it.
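For those unfamiliar with intersample interpolation: reading a delay buffer at a fractional position with linear interpolation gives a smooth value between the two nearest samples, instead of jumping between integer read positions when the delay length is modulated. A minimal sketch (my own hypothetical illustration, not Clavia's or the NM's actual implementation):

```python
def read_fractional(buf, pos: float) -> float:
    """Read a circular delay buffer at a fractional position via linear interpolation."""
    i = int(pos)
    frac = pos - i
    a = buf[i % len(buf)]
    b = buf[(i + 1) % len(buf)]
    return a + frac * (b - a)  # smooth blend between the two neighbouring samples

buf = [0.0, 1.0, 0.0, -1.0]
print(read_fractional(buf, 0.5))   # 0.5: halfway between samples 0 and 1
print(read_fractional(buf, 1.25))  # 0.75: a quarter of the way from 1.0 toward 0.0
```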

When patching a physical model one would most probably patch one's own damping mechanism. In this damping mechanism some sort of filtering will be used. This filter will also inherently introduce some time delay, and the total length of a waveguide is equal to the time delay of the delay memory used plus the phase delay in the filter for the fundamental frequency of the pitch to be played. Instead of applying modulation to the delay length one can simply apply modulation to the damping filter, which will result in a 'noiseless' and very smooth vibrato. This is not specifically 'patching around the limitations of the G2'; it is actually common sense to do it this way on any system when doing PM.
Even more so because the vibrato will sound much more natural.
This will still leave an issue, and that is that filters have a cutoff frequency tracking error when the cutoff is set to something higher than about one eighth of the sample rate. This is inherent in digital IIR filters, so common to all digital systems. This will influence the tuning of high notes and is not so easy to correct. It is again a common problem in PM and definitely not G2 specific.
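As a concrete case of "delay memory plus filter phase delay": the classic two-point averaging damper from Karplus-Strong delays every frequency by exactly half a sample, so the memory delay should be the target loop length minus 0.5 samples. A sketch (my own illustration, assuming that simple FIR damper rather than whatever filter a given patch uses):

```python
import cmath
import math

def phase_delay_samples(freq: float, sample_rate: float) -> float:
    """Phase delay, in samples, of the damper y[n] = 0.5*(x[n] + x[n-1])."""
    w = 2 * math.pi * freq / sample_rate
    h = 0.5 * (1 + cmath.exp(-1j * w))  # frequency response of the averager
    return -cmath.phase(h) / w           # phase delay = -angle(H)/omega

# the averaging damper delays every frequency by exactly half a sample
print(phase_delay_samples(440.0, 96000.0))        # 0.5
# so the delay memory should be the total loop length minus that phase delay
total_length = 96000.0 / 440.0                     # loop length for 440 Hz, in samples
print(total_length - phase_delay_samples(440.0, 96000.0))
```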

It ain't so easy to understand these physical models, as in essence they are balanced systems where energy is fed in by the excitation functions and reduced by the damping functions, which together are in some balanced state that keeps the thing oscillating without collapsing or exploding. Now I can tell you for sure that if the balance would depend on the lowest eight bits of a 24 bit system, the model would be so hypercritical that it would almost certainly immediately explode or collapse. If a blown instrument model would generate noise in the model itself, instead of being fed noise, this noise would be part of the same balancing system as the generation of the pitched components of the model. Whatever quantization noise results from a small bit depth would actually be constantly damped away and be very small compared to the noise that is purposely generated to create the noise present in a blown sound.

The same actually applies to plucked string models: whatever quantization noise is generated in the damping mechanism damps away as well, and is actually an error that is never greater than the weight of half the least significant bit. So, it can not build up, as it is damped on every iteration. If it would build up, it would mean that damping is less than the energy fed into the system, or that feedback is over unity gain somewhere in the spectrum, and explosion is the result. Meaning that within moments you would hear a loud square wave, clipped at headroom level, or a signal that would clamp to either the greatest positive or the greatest negative value.
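That self-damping can be shown with a toy loop: requantize the feedback to integers on every pass, and the rounding error never accumulates, because the same damping that decays the signal decays the error too (an illustrative sketch of the principle, not a G2 model):

```python
def damped_loop(start: int, gain: float = 0.99, steps: int = 2000):
    """Feedback loop with integer requantization each pass; the error cannot build up."""
    v = start
    peak = abs(v)
    for _ in range(steps):
        v = int(v * gain)          # damp, then truncate as fixed-point hardware would
        peak = max(peak, abs(v))
    return v, peak

final, peak = damped_loop(30000)
print(final)  # 0: the signal, and its quantization error with it, has damped away
print(peak)   # 30000: the loop never grew beyond its starting amplitude
```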

So, if there is some definite quantization noise louder than it should be, even at 16 bits, where does it come from? Two possibilities: either the noise is not caused by the 16 bits but by modulation of the delay time (the most probable cause), or there is simply a plain bug in the DSP code. A common bug in damped waveguides is when the programmer uses a plain averaging algorithm for damping, like the original damping used by Karplus and Strong, and forgets to add one LSB to the averaged value if the value happens to be negative. This would slowly feed a negative DC offset into the waveform that eventually can push the negative peaks of the waveform below the negative headroom clipping point. Nasty about this issue is that it happens when e.g. a plucked sound has already decayed to a low amplitude. This bug has to do with the fact that there is one more possible negative value than there are positive values, because a range of numbers expressed in a binary representation always contains an even amount of numbers and the value zero is, in a sense, part of the positive numbers. E.g. a sixteen bit number ranges from -32768 to +32767 and not to +32768. Floating point numbers have the same property, as here too the value zero is stolen from the positive numbers. There are other possible arithmetic bugs, so it is not at all certain that the mentioned noise build-up has anything to do with the 16 bit memory chips. And there is a whole range of other quite subtle bugs that can cause nastiness.
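That rounding bug is easy to demonstrate: an arithmetic right shift rounds toward negative infinity, so averaging two negative samples with a bare shift pulls the result downward, while adding one LSB for negative sums restores symmetric, toward-zero rounding. A sketch of both versions (this mirrors the classic Karplus-Strong pitfall described above; it is not Clavia's actual code):

```python
def average_buggy(a: int, b: int) -> int:
    """Naive damper: the arithmetic shift floors toward -inf, biasing negatives down."""
    return (a + b) >> 1

def average_fixed(a: int, b: int) -> int:
    """Add one LSB when the sum is negative so rounding is symmetric (toward zero)."""
    s = a + b
    return (s + 1) >> 1 if s < 0 else s >> 1

print(average_buggy(-3, -4))   # -4: rounded away from zero, a source of DC drift
print(average_fixed(-3, -4))   # -3: rounded toward zero, mirroring (3 + 4) >> 1 == 3
print(average_buggy(3, 4))     # 3: positive values were never the problem
```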

Myself, I started to experiment with PM shortly after Karplus and Strong published their algorithm in 1983. This was on an eight bit system. Recently I played an old cassette tape from 1984 to Kassen, so (if I didn't piss him off too much with my previous post) he could vouch for me that I did actually achieve quite acceptable results with eight bits only, actually by using a "fixed length - variable readout" waveguide system. Later I used 12-bit, 16-bit, 24-bit and floating point systems. And learned all the pitfalls of PM. And the funny thing is that all the issues concerning this noise build-up, mentioned here on this forum several times, sound very similar to issues I found in the past, issues that never had to do with the bit depth of the waveguide, but with other nastiness. So, personally I rather suspect bugs in the DSP code for the modules using the delay memory, or use of the wrong PM methods, instead of the bit depth. In fact, for me the prime suspect is the DSP code, but I have no access to it, so I can not be definite about this.

So, I do agree there are issues, but I doubt if the 16 bit memory is the real culprit.

tim wrote:

Get me right: I love the G2. It's my only axe these days.

It's just that I personally have been somewhat misled by Clavia. I bought the G2 especially for what it was advertised as: a true 24/96 machine suitable for (among other things) physical modelling. Something which, for one related reason (16 bit delays), it is not quite, IMO. The physical models audibly suffer from the lower dynamic resolution. (According to Chet Singer, the NM1 sounds better for this since its (albeit very short) delays are 24 bits wide.)

So, I feel let down here.

This is my only real gripe with the G2.

Other phM modules would be nice, but as you say, it's a modular synth, and having too many pre-built macro modules distracts from the "sportive element" in modular synthesis. I agree 100%. Building and troubleshooting the stuff oneself is way more fun anyway. (I could just go out and buy a second-hand Yamaha VL1 otherwise.)

FYI, I'm not into pure "imitative" phM. It's just that I would like better audio quality. So I really, really hope for a 24 bit option on the delays. I think everybody would like that.

I don't think we disagree at all.

best,
tim

Oh well, it's no bad thing to disagree.

Here is a little flute model, a bit like a recorder, that uses an allpass filter to lengthen the waveguide. In fact this is a pretty stable model because the feedback loop is compressive. Vibrato is produced by modulating the allpass. In variation 8 the vibrato is purposely quite deep and fast and no extra noise is fed into the model. If the vibrato modulation would now be applied to the pitch modulation input of the OscString module (that acts as part of the waveguide) one gets a whole different story.

This is getting silly. I'll give you your 16 vs. 24 bit examples if you need them, but a "detailed assessment" will require significant amounts of time. There were several examples on this board already, btw, specifically referring to sending multiple signals through the same delay line in order to deal with busses. Switching bit depth within one system will create artifacts regardless of how you do it.
<snip>

Yep, one can mix two eight bit signals through a 16 bit delay and then split them into two eight bit signals again. And if the delay would be 24 bits this could have been three eight bit signals. Definitely true. And if the whole system would have been 32 bit it could have been four signals and eight signals on 64 bits, etc. But imho this all comes close to the 'if pigs could fly...' thingy. It is no practical problem to use two parallel delay lines to delay e.g. three or four eight bit signals.
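The multiplexing trick described here amounts to straightforward bit packing (an illustrative Python sketch, unsigned values for simplicity; a real patch would have to handle the G2's signal scaling):

```python
def pack(hi: int, lo: int) -> int:
    """Pack two 8-bit values (0..255) into one 16-bit word for a shared delay line."""
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

def unpack(word: int) -> tuple:
    """Split the delayed 16-bit word back into its two 8-bit signals."""
    return (word >> 8) & 0xFF, word & 0xFF

word = pack(200, 17)
print(word)          # 51217: both signals fit one 16-bit word
print(unpack(word))  # (200, 17): both survive the shared 16-bit path intact
```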

What seems more the issue to me is the scope and proportion of this 'problems' or 'system limits' thingy. To me it all seems magnified to an exaggerated level when all this 'Clavia lied to me and now I feel so hurt' stuff passes this forum. And also simplified in a way that it all gets very confusing for all those who do not know the rare details that Clavia is supposed to have lied about. It all starts to sound like something is very, very wrong. While when working with the G2 I experience only things that are very, very right. Of course, on every patch I make I do have to do a lot of deep thinking to get solutions that suit me fine. But that's the game, isn't it?

I'm not pissed off at that last post; I just felt I needed to defend Tim against what I saw as an overly aggressive reply to him. If I really were pissed I would have structured my reply quite differently. I do stick to my point that his use of "bandwidth" was entirely correct. I was somewhat displeased about the implications of the AAS bit, especially since I mentioned many times that I am indeed somewhat involved with those people and that this might affect the way I look at Tassman (a synth I didn't introduce into this topic and which uses entirely different forms of PM); I especially stressed that people should not take my word when I mention it and should instead try the demo. From that perspective I'm somewhat insulted by that implication.

I'm not denying, nor would I ever deny, that very acceptable results can be achieved with PM in fixed point 8 bit systems. However, I think that just looking at the 16 bit delay line is an overly simplistic way of approaching this matter, and it should be stressed that it's a 16 bit delay within an otherwise 24 bit system, which causes side effects. I'm not going into those now; I'll return to them later._________________Kassen

Yep, one can mix two eight bit signals through a 16 bit delay and then split them into two eight bit signals again. And if the delay would be 24 bits this could have been three eight bit signals. Definitely true. And if the whole system would have been 32 bit it could have been four signals and eight signals on 64 bits, etc. But imho this all comes close to the 'if pigs could fly...' thingy. It is no practical problem to use two parallel delay lines to delay e.g. three or four eight bit signals.

Yes, true. That should be in the manual. The problem isn't necessarily in the delay itself; I think a larger problem is that the delay might cause behaviour that it shouldn't cause according to the specs, which might be hard and time-consuming to trace for unsuspecting patchers.

Quote:

What seems more the issue to me is the scope and proportion of this 'problems' or 'system limits' thingy. To me it all seems magnified to an exaggerated level when all this 'Clavia lied to me and now I feel so hurt' stuff passes this forum. And also simplified in a way that it all gets very confusing for all those who do not know the rare details that Clavia is supposed to have lied about. It all starts to sound like something is very, very wrong. While when working with the G2 I experience only things that are very, very right. Of course, on every patch I make I do have to do a lot of deep thinking to get solutions that suit me fine. But that's the game, isn't it?

This, I feel, is quite simply unfair. I can't be expected to repeat and quote all information that might possibly be needed to understand all I say in every post. It's not that relevant either. Here it concerned a bug in the NM that causes problems in the interaction between morph groups and the compressor, but even if it concerned a pixel placed too far to the right, the case still stands that an OS update was promised. Yes, some of those comments might conceivably be taken the wrong way when read out of context.

To be honest I don't like the tone of this paragraph at all. I don't think it's appropriate to call the bugs I reported "rare details"; I happen to be very interested in the dynamics of signals, and especially in transformations of those, so morph groups on compressors are not a "rare detail" to me but an important compositional device. I don't like this "Clavia lied to me, etc." quote either; it's not up to you to judge my feelings or their appropriateness, and certainly not to make fun of them in order to protect Clavia, a company I hear you have close ties to.

Continuing on this course *will* get me pissed off, and then I *will* write a differently structured post; there are many ways of looking at bit depth within modern musical instruments. Not all of them are pretty or pleasant._________________Kassen

Yes, true. That should be in the manual. The problem isn't necessarily in the delay itself; I think a larger problem is that the delay might cause behaviour that it shouldn't cause according to the specs, which might be hard and time-consuming to trace for unsuspecting patchers.

There is so much that could be in the manual but isn't there; it could easily be a manual of two thousand or more pages. I think you and I can agree on that.

Kassen wrote:

<snip>
Continuing on this course *will* get me pissed off, and then I *will* write a differently structured post; there are many ways of looking at bit depth within modern musical instruments. Not all of them are pretty or pleasant.

Ok, let's just stay on that topic of 16 vs 24 bits. This deeply intrigues me as well. I think we can do that without losing our cool. (You got me in a bit of an unfriendly mood in an earlier post, when you reinterpreted my remark about patching being an art as a suggestion that I meant patching around the limitations of the G2 is an art. Which resulted in me not being very chic towards you. I suppose we can easily settle this over a beer or two.)

My personal suspicion about the DSP code of the delay modules comes from the fact that since the very first occasion these modules were shown (the first time an actual preproduction G2 model was shown in Frankfurt) they don't seem to have changed at all, apart from the added Clk option. To me the modules look very much like they were quickies that somehow didn't get a thorough going-over later. E.g. not slewing the modulation input in a ratio relative to the set delay length is a big sin to me, as the disadvantage of not slewing is so readily audible and renders the modulation option pretty useless.

And yes, I suppose I am as close as one can get to Clavia; well, at least I was while testing. During testing I tried to criticise the hell out of them, for the common good (which didn't make me popular), as I know very well how easy it is for a developer to get into some sort of tunnel vision on what one is developing. And if nobody gets you out of that tunnel it can indeed be quite dangerous. It ain't easy to get someone out of tunnel vision, as people simply tend to hold on to what they know and concentrate on that. And simply don't know what else there is to know, as if those other things simply don't exist. This I experience daily myself when struggling to find solutions to all those unresolved issues in life.

Yes, all will be well over beverages. We'll go into the total entropy of finite systems, impulses in non-linear systems and so on. I do not want to have a wank-fest over who can bring up the most obscure side effects of PM, especially since that isn't under debate here at all.

It is indeed entirely appropriate for you to give Clavia suitable amounts of hell during the right stage, but you have to understand that if somebody like Tim raises what I see as a valid point and somebody like you jumps on this, then very few options are open on the forum in general; either somebody addresses these points or we as a community let it slide. In this case I felt that letting it slide would do more harm than good in the long run, but as correctly pointed out, my glass ball here (I do have one) is really pretty but more suitable for looking at the past than at the future. Yes, 24 bit delays are wishful thinking for us at this stage, but that wish is not one that arbitrarily popped up in everybody's head at the same time. 16 bit delays are not the end of the world, nor the end of PM on the G2, but there are significant issues that do need addressing. As I wrote above, what I'd like to avoid is having obscure issues cause headaches of undefinable nature for unsuspecting patchers. Much better would be information on exactly what the differences between observed and anticipated behaviour are (intentionally avoiding the word "limitations" here) and how to deal with those differences. Ants are beautiful creatures; ants build structures in space by reordering randomness into complex order, and ants build the equivalent of intelligence by communicating between nearly identical building blocks. I think that making love to ants in this context is actually highly appropriate, and people caught doing so to especially interesting ants should be commended, not scorned.

As far as I'm concerned this closes the matter, since all points I felt needed addressing have, in my opinion, been satisfactorily concluded.

On beers: Rob does have taste in beers. He doesn't drink beer like the average The Hague punk does (counting the people in the company, announcing "n beers?" and giving potential drivers three seconds to ask for coke or water), but he does drink dark/exotic southern beers._________________Kassen

Ok, I guess that if one has a full 24-bit setup, meaning that the physical output of the G2 goes into a 24-bit mixing and recording system, and a very low G2 output signal is heavily amplified later, one could actually hear it. The point is that in practice this situation would rarely occur.

If the output of the G2 is at standard output level (e.g. what one might call 0 dB on the mixer fader), and taking two bits of headroom on the G2 into account, and the lowest 8 bits are truncated one would have 14 bits left, which is equal to CD quality normalized to -12dB average value. And if the G2 signal is e.g. -3dB below its headroom, it would still account for CD quality normalized to -3dB (which is a pretty loud CD).
One would need really good ears to hear the lowest 8 bits of a 24 bit signal in a mix that is normalized to -12 dB; we're talking about signals at least 84 dB below normalization level here! Signals that get lost anyway when a CD is the final medium for distribution. And the ear would certainly mask them away compared to the sounds in the mix that are at least 84 dB louder.

I don't mean to fuel the fire, but that's not really applicable for users wanting to finalize to new 24 bit formats like HDCD and DVD-A. Just because you personally might not be able to hear a difference between 16/96 and 24/96 doesn't give Clavia the right to hide modules that truncate to 16 bits around the place; that's just sneaky and an insult to the professional users who bought the system for its "full 24 bit processing" capabilities.

I hope they provide 24bit alternatives to the current 16bit modules, instead of just changing the wording in their specs.

I want to point out that the 16-bit-ness of the delays was accidentally discovered by somebody here who wanted to delay control values representing binary information and wondered why a third of that information disappeared (IIRC; I don't remember this person's name).

The argument that "nobody hears the difference between 16 and 24 bits anyway" limits, by implication, the use of the delay modules to audio, which, IMO, is out of line with the true spirit of modular synthesis ("everything goes with everything"). Who says that I necessarily want to delay audio? I might want to delay a delicate high resolution control signal before using it in some intricate math process, where resolution is paramount. Or binary stuff, like the example above. Here the difference between 16 and 24 comes in big time.

And, like Kassen says, if I don't know this (due to misprinting in the manual and false info on the Clavia web site), I'm in for some surprises.

But if PM artefacts are more due to bad intersample calculations (as Rob says), well, hell, they should fix that too! If it was better on the NM1, I don't understand why "the new model", the G2, should be a step back in these matters.

Before I rest my case, here's my line of reasoning again, in a nutshell:

1) The quality and resolution of delay lines is a core level issue of a modular system.

2) Delaying a signal is a simple way of processing it. Therefore, Clavia's statement (copied and pasted from the website), "True 24 bit processing system with 96 kHz sampling rate", is false information! It isn't true.

3) A patch mutator is a nice feature, but it is an add-on at the periphery of a system.

ergo, 4) Directing development efforts to the periphery while ignoring that the core level isn't working up to its own stated specification is bad engineering and immoral marketing.

I guess it all boils down to this:

"New OS with patch mutator!" sounds better in an advertisement than "Finally, the delays work as they should."

Delays were not part of the original modular synthesizers. Listening to much of the electronic music posted here and at the many concerts I attend, I sometimes wish the delay had never been invented. It is often overused. Anyway, if there were no delays in the NMs, I wouldn't miss them. Still, now that they are there, I do use them, and there are some fabulous things they can be used for. I have never found them to be in any way unsatisfactory. Still, I hope that if Clavia addresses this problem by putting a 24-bit delay module or two in the new release, they don't mess with the modules we already have in our patches.

If one is desperate for 24-bit delays, why not use two 16-bit ones in parallel with some signal scaling?
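The scaling Howard hints at can be made concrete. A sketch in Python (my own illustration, not a G2 patch): split each 24-bit sample into its top 16 bits and its bottom 8 bits, scale the 8-bit part up by 256 so it fills a 16-bit word and survives truncating storage, run the two parts through separate paths, then scale back down and recombine.

```python
# Sketch of the two-parallel-16-bit-delays idea (Python, not a G2 patch).

def split_24(x):
    """Split a 24-bit sample for two 16-bit delay lines."""
    msb = x & ~0xFF          # top 16 bits, for delay line A
    lsb = (x & 0xFF) * 256   # bottom 8 bits shifted into 16-bit range, for line B
    return msb, lsb

def recombine(msb, lsb):
    """Undo the split after both parts have been delayed."""
    return msb + lsb // 256

x = 0xABCDEF
m, l = split_24(x)
print(recombine(m, l) == x)  # True -- the recombination is exact
```

Since both parts now occupy the top of a 16-bit word, neither loses anything to the delay memory's truncation.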

The fixed filter bank is a module that was in the original modular synthesizers, and it is a glaring omission on the G2. It is at least as useful as a delay._________________--Howard
my music and other stuff

E.g. will they be bought by some software company that will stop producing updates because it wants to remove competition for one of its own products? Or will the Clavia premises burn down and all documentation get lost? Please tell me what you see.

True... but delays were part of the production in every studio...
Maybe Stockhausen didn't use them, but beside the classical approach to electronic music they are all around... I don't think the G2 needs to be a Moog emulation... that has already been done by the Arturia people, and delays can do quite interesting stuff within an arrangement of synthesized voices...
"Overuse" sounds, however, very old-fashioned...

I am not so happy with the G2 delays for other reasons than the 16-bit thing...
In a modular system I would have wished for delays with a variable readout rate... this is much more fun within a patch than delays where just the readout point can be modulated.
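The distinction is worth spelling out: modulating a readout *point* jumps between taps at a fixed rate, while a variable readout *rate* advances a fractional read pointer through the buffer, tape-style, transposing the pitch of whatever is stored. A minimal Python sketch of the latter (my own illustration, with linear interpolation between stored samples; not how the G2 modules work internally):

```python
# Variable-rate delay readout (tape-style): advance a fractional read
# pointer through the buffer and interpolate between adjacent samples.

def variable_rate_read(buffer, rate, n_out):
    """Read n_out samples, advancing `rate` buffer samples per output step."""
    out = []
    pos = 0.0
    for _ in range(n_out):
        i = int(pos)
        frac = pos - i
        a = buffer[i % len(buffer)]
        b = buffer[(i + 1) % len(buffer)]
        out.append(a + (b - a) * frac)  # linear interpolation
        pos += rate
    return out

buf = [0.0, 1.0, 2.0, 3.0]
print(variable_rate_read(buf, 0.5, 4))  # [0.0, 0.5, 1.0, 1.5] -- an octave down
```

Reading at rate 0.5 plays the stored material at half speed, i.e. an octave lower, which is exactly the kind of behaviour a modulated tap position cannot give you.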

Before I rest my case, here's my line of reasoning again, in a nutshell:

1) The quality and resolution of delay lines is a core level issue of a modular system.

True!

tim wrote:

2) Delaying a signal is a way of processing it. Therefore, Clavia's "True 24 bit processing system with 96 kHz sampling rate" is false information.

If, by means of the modular nature of the G2, it is dead easy (though some patching is of course required, as it's a modular system) to process a delayed signal at 24 bits, then the claim that the system is capable of "True 24 bit processing" is true, and will IMHO hold in any Platonic sense of proof. All I would have to do is present you with a patch that does indeed use 24 bits in a delay echo function. You will find such a patch attached to this post.

The principle used to get 24-bit delay bit-depth can be used for any application requiring such a bit-depth.

tim wrote:

3) A patch mutator is a nice feature, but it is an add-on at the periphery of a system.

Exactly! Which imho wouldn't inherently render it less useful or inferior to other functions. It all depends what one needs in a specific situation.

tim wrote:

ergo, 4) Seeing development efforts directed at the periphery while ignoring that the core level isn't working up to its own stated specification, is bad engineering and immoral marketing.

If Clavia happens to have one C++ programmer available at the moment to work on the patch mutator (which is only an editor function), and right now does not have a DSP programmer available to develop new modules or to check and fix possible DSP-code bugs, then it is not bad marketing but simply a lack of development resources to meet all those market demands. I'm pretty sure this describes matters more accurately. And of course it has nothing to do with the 16-bit delay memory issue.

tim wrote:

<snip>
I rest my case now. Peace.

Yours,
tim

(Note that in this issue I by no means represent Clavia and thus defend them; it is out of my own interest as a G2 user that I reply in this matter. I feel urged to stress this, as I feel so much pressure in this thread for 'absolute correctness'. IMHO it is instead an issue of individual patching skills, which for many are undoubtedly steadily growing, but perhaps not yet up to a level of 'full fluency'. Time will resolve this.)

I rest my case as well,
/Rob

---
How the "True 24-bit delay" patch works:
The lowest 8 bits of the 24-bit signal are extracted by subtracting the output of a 'dummy' delay module, set to a zero delay time (yes, the displayed value on the module is not exact), from its own input signal, and then multiplying the result by 256 to bring it into a range that can be delayed in an extra delay line running parallel to the main 16-bit delay line. The outputs of the two parallel delay lines are then simply added together again, after first scaling the 8 LSBs back down to their proper range. The fact that the delay lines are 16-bit is in this way put to good use: one 'dummy' delay module implements a modulo function that recovers those 'lost' bits.
For convenience (and a little wickedness on my side) I added another A/B switch to hear only those eight LSB bits, at an amplified level.
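Rob's trick can be simulated in a few lines of Python. This is my own reading of the patch description, not the actual G2 DSP: `Delay16` models a delay line whose memory truncates samples to their top 16 bits, and the zero-time "dummy" delay then acts as a modulo, so that input minus its truncated copy yields the lowest 8 bits.

```python
# Simulation of the "true 24-bit delay" patch idea (my model, not G2 DSP).

def truncate16(x):
    """Model 16-bit delay memory: only the top 16 of 24 bits survive."""
    return x & ~0xFF

class Delay16:
    """A delay line of `time` samples whose memory is 16 bits wide."""
    def __init__(self, time):
        self.buf = [0] * time

    def tick(self, x):
        self.buf.append(truncate16(x))
        return self.buf.pop(0)

def delay24(signal, time):
    """Delay a 24-bit signal losslessly using two 16-bit delay lines."""
    main = Delay16(time)   # carries the top 16 bits
    extra = Delay16(time)  # carries the low 8 bits, scaled up by 256
    out = []
    for x in signal:
        low = x - truncate16(x)      # dummy zero-time delay as a modulo
        d_main = main.tick(x)
        d_low = extra.tick(low * 256)  # scaled so it survives 16-bit storage
        out.append(d_main + d_low // 256)  # scale back down and recombine
    return out

sig = [0x000101, 0x0002FF, 0x030080]
print([hex(v) for v in delay24(sig, 1)])  # ['0x0', '0x101', '0x2ff']
```

After the one-sample delay, every sample comes back with all 24 bits intact, which is the whole point of the parallel-line construction.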

A/B'ing between the true 16-bit and the true 24-bit echo delay gives one the opportunity to judge for oneself the musical urgency of the matter discussed, and whether it is necessary to do the extra patching to be able to use "True 24 bit processing" in one's own applications.

Note that the G2's perfect sample accuracy within a patch makes ideas like using a dummy delay module in a "quick'n dirty" way as a modulo function a jiffy. It is in essence no more special than e.g. using a lowpass filter plus a mixer module to create a two-output crossover filter.
One could easily create a 32- or 48-bit-depth delay this way, to delay an assortment of signals in one go. Though again some patching would be required to combine and split the signals, in which the modular nature of the G2 would undoubtedly help a lot.

Of course, when using the G2 demo software one needs a soundcard that runs at 96 kHz and has 24-bit DA converters for this patch. Otherwise the difference will not be audible, due to the soundcard specs.

24bit_delay.pch2

Description:

Patch showing the difference between a 16-bit echo delay and the same delay operating at 24 bits.

Well, no, but neither were polyphony nor keyboards, and certainly not concerns over being able to put it on a stage, recall a patch and get playing a few hours after getting off the plane. You'll have to agree that it's great that we moved on from that stage, and that arguments along the lines of "the great prophet didn't eat hamburgers either" don't do anyone any good.

More to the point: the old ones weren't as concerned with using low-level modules to construct higher-level functions either, and the delay is a cornerstone of that way of thinking. Delays can be used to build a chorus or a flanger, delays can be used to build various types of filter or an envelope follower, spatialisation, including reverb, needs delays; the list goes on and on (and indeed does include P.M.). In order to build such higher-level structures it's important to be able to depend on predictable behaviour of the components._________________Kassen

<snip> arguments along the lines of "the great prophet didn't eat hamburgers either" don't do anyone any good.

A historian once told me that hamburgers were common food in the ancient Roman empire, and whole streets of Rome had rows of hamburger shops where Romans would come to eat. Given the great influence of the Roman empire around the Mediterranean, it could actually very well be possible that those 'great prophets' indeed ate hamburgers. Of course the bloody things weren't named hamburgers yet, but it seems to have been the same thing.

Kassen wrote:

<snip> In order to build such higher level structures it's important to be able to depend on predictable behaviour of the components.

Could this mean: getting to know the behaviour of the modules 'as is' and increasing one's patching skills to put that knowledge to good use? I mean in a sense that one should not only rely on theoretical knowledge and how tech specs would relate to that, but one should also undertake experiments oneself and thus get to know in a practical sense how things work out (musically) during and after the patching?

Which could be related to the question: "Is patching an art or is it a science?". Or maybe: "How much part of patching is art and how much is science?". And then perhaps: "How much of the art part can make up for deficiencies at the science part, and how much of the science part can make up for deficiencies at the art part?". Provided we would agree that both art and science are part of patching. And maybe even having to first agree whether applying science could be an art, in which case the science part might even be simply inclusive in the art part. Or vice versa, that creating art can actually be a science, which e.g. can be learned and subjected to strict sets of rules to decisively define the nature of the created works of art.

A/B'ing between a true 16 bit and the true 24 bit echo delay gives the opportunity to judge for oneself the musical urgency of this discussed matter, and whether it is necessary to do the extra patching to be able to use "True 24 bit processing" in one's own applications.

I modified the patch to get a better 16 versus 24 bit effect...
I am not so sure the LSB signal is really correct, because it sounds so rough... but maybe it's all right...
What do you say?

Another point regarding G2 updates...
Please look at the assigned knobs...

Am I really the only one who hates the way the G2 wastes valuable display lines?
I really found the "source" tag on the switches irritating all the time...
I would feel much better with the G2 GUI without this "concrete-head style" labeling.
Is there anybody who feels the same?

Could this mean: getting to know the behaviour of the modules 'as is' and increasing one's patching skills to put that knowledge to good use? I mean in a sense that one should not only rely on theoretical knowledge and how tech specs would relate to that, but one should also undertake experiments oneself and thus get to know in a practical sense how things work out (musically) during and after the patching?

Yes, but I'd like to avoid the dreaded "pilot error" syndrome. In war, when a military plane goes down, this is typically blamed on "pilot error" towards the other pilots, regardless of the cause (which might be such things as inferior planes, instruments or planning). The other pilots typically accept this; it's very reassuring, after all THEY are capable, THEY would never make an error and go down; that other guy did look rather twitchy after that last 36-hour run, wasn't his amphetamine use just a little excessive?

This works quite well; it avoids mutiny among pilots, who are exceptionally valuable resources. Behind the scenes, however, you'd better fix that connector to the autopilot that slips loose during 5G turns to the right, or you might very well lose the war._________________Kassen

"Is patching an art or is it a science?". Or maybe: "How much part of patching is art and how much is science?". And then perhaps: "How much of the art part can make up for deficiencies at the science part, and how much of the science part can make up for deficiencies at the art part?". Provided we would agree that both art and science are part of patching. And maybe even having to first agree whether applying science could be an art, in which case the science part might even be simply inclusive in the art part. Or vice versa, that creating art can actually be a science, which e.g. can be learned and subjected to strict sets of rules to decisively define the nature of the created works of art.

I vote for art... psychoacoustics, mathematics, electrical engineering, computer science, physics, psychology, and biology play supporting roles._________________--Howard
my music and other stuff

