Very nice. I think it's always important to pay tribute. Our hobby is kept from stagnating, decade by decade, by enthusiastic, brilliant, dedicated scientists.

From an interview (1996) :

Atkinson: John, you've had an amazingly variegated career. You've designed antennae for the military, you've worked in pure science, you were involved in the International Geophysical Year back in 1957, you've lectured at university, you hold patents for all kinds of things, even for cancer treatment and CD playback. You even moved to Australia for a while in the '80s. But despite all that, all you really seem to want to talk about is speakers. What is the fascination of audio?

Dunlavy: It's a labor of love. I like music and I like to hear reproduced music the way I hear the live performances. And it's a great challenge. Accurate audio reproduction is probably the most demanding challenge of any that I know of. So it's an interesting pursuit. And a very rewarding pursuit.

Atkinson: It's interesting that you say it's a great challenge, because surely it's a much more simple field than some of the others you've worked in.

Dunlavy: Yes and no. Speakers can be very daunting. About the time you think you've got an idea that, "Boy, this is really going to be the living end!" you put it together and measure it and say, "Ummm, why did I think that?" [laughs] We try a lot of new ideas out. Probably less than one out of 20 ever really goes beyond the first or second stage of development. Because it's fun to try new ideas and new things.

Originally posted by Joe Rasmussen It was found that modeling the phase was not all that accurate, so the crossover was stripped out and we concentrated on the phase alone. Sure enough, taking the tweeter's pulse when going positive (since it is wired out of phase electrically) and lining it up with the positive-going midrange showed a significant error.

Did you discover the ghost in the machine? Was it a working-procedure error? If it's rooted as a common problem in some accepted ways of collecting and simulating data, please let us know, so we can double-check or avoid it when we create speakers.

If you are familiar with how modeling works, I will be repeating a few things you already know. We capture the impulse from the driver in situ (in the box), then, without changing the position of the mic, capture the same information for the other driver(s). The mic position stays the same because we want the relative phase to be preserved. These will be your farfield measurements. Then nearfield measurements are also taken; these are merged, and any port output is summed in. The data then produces a frequency response AND a phase response. They are linked together.
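As a sketch of that link: a single FFT of a measured impulse yields the magnitude and phase together, from the same complex spectrum. The "impulse" and sample rate below are made up for illustration, not real measurement data.

```python
# Sketch: one FFT of an impulse gives linked magnitude and phase.
import numpy as np

fs = 48000                      # assumed sample rate, Hz
t = np.arange(256) / fs
# Toy "driver impulse": a damped oscillation standing in for a real measurement
impulse = np.exp(-t * 3000) * np.sin(2 * np.pi * 2000 * t)

spectrum = np.fft.rfft(impulse)                 # one complex spectrum...
freqs = np.fft.rfftfreq(len(impulse), d=1 / fs)
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)  # ...gives the frequency response
phase_deg = np.degrees(np.angle(spectrum))              # ...and the linked phase response

# Keeping the mic fixed for every driver preserves their *relative* phase,
# because all the captured impulses share the same time reference.
```

Both responses come out of the same complex numbers, which is the "link" that gets lost the moment only the magnitude is exported.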

But when we export the frequency response, we lose that link. The modeling program now needs to derive the phase from the frequency response, not from the phase response captured by the hardware. What I am talking about is well known to users of SoundEasy (and other similar programs): the imported phase is there, but it is only used as a guide. Using tools within SoundEasy, we tailor the response at the edges of the bandwidth, much higher in frequency than the original hardware was able to measure, and also down where the nearfield supplied the information (remember, the phase was sacrificed there). The imported phase is only linked to the farfield, and yet the model needs to harmonise the whole picture, including the missing nearfield data, with the Hilbert-Bode requirements satisfied. You probably know about that.
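The Hilbert-Bode relation in question can be sketched numerically. This is emphatically not SoundEasy's internal algorithm, just the textbook real-cepstrum method for deriving the minimum phase implied by a magnitude response; the function name and the test system are my own.

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Derive the minimum phase implied by a magnitude response sampled
    on a full (symmetric) FFT grid, via the real-cepstrum method."""
    log_mag = np.log(np.maximum(mag, 1e-12))   # floor avoids log(0)
    cep = np.fft.ifft(log_mag).real            # real cepstrum
    n = len(cep)
    folded = np.zeros(n)
    folded[0] = cep[0]                         # fold: keep DC,
    folded[1:n // 2] = 2 * cep[1:n // 2]       # double positive quefrencies,
    folded[n // 2] = cep[n // 2]               # keep Nyquist, zero the rest
    return np.angle(np.exp(np.fft.fft(folded)))

# Check against a known minimum-phase system, H(z) = 1 + 0.5 z^-1:
h = np.zeros(64)
h[0], h[1] = 1.0, 0.5
H = np.fft.fft(h)
derived = minimum_phase_from_magnitude(np.abs(H))
# 'derived' matches np.angle(H) closely, because H is minimum phase
```

If the measured system is not minimum phase (excess delay, a displaced acoustic centre), this derived phase and the hardware-captured phase disagree, which is exactly the gap being described.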

So there are tools within SoundEasy that allow you to tailor these; they pull the phase into line with the imported phase, and the phase is then derived from THAT frequency response.

Usually, with those tools, you have a rough idea what your crossover frequency(s) are going to be, right? It is not that difficult to get the phase to match at, say, 3 kHz, and for several octaves below and above. Save, and produce the new phase. The algorithm requires one (phase) to be derived from the other (frequency). And it must correlate out to infinity, otherwise things will cough and splutter and come to a halt. Actually, SoundEasy won't even allow you to go that far.

But in the end this proved to be less than absolutely reliable. With a high-order crossover it would be of less consequence; the more abrupt stop-band sees to that. But with low orders, and especially 1st order, where relative phase integrity is important, it wasn't terminal, but it wasn't accurate either. We were still getting near 6 dB summing.

Where the problem showed up was in examining where the pulse started in the final system (I now recall this was with the crossover in place): it was clear the pulse from the tweeter was late by around 50 µs (0.00005 s) if the response was to line up correctly. We need the step response of the tweeter to go negative; it reached maximum in 25 µs (that's why it was still summing surprisingly close to -6 dB), at which point it goes positive, and the midrange driver needs to start positive at this point. The negative part of the tweeter's step response then cannot cancel any output from the midrange/bass driver combination, and they sum as they go positive at the same time.
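To put rough numbers on that 50 µs error: the crossover frequency of 3 kHz below is an assumption, borrowed from the "say, 3 kHz" example earlier in the thread, and the speed of sound is the usual ~343 m/s.

```python
import numpy as np

c = 343.0        # speed of sound, m/s
dt = 50e-6       # the 50 microsecond timing error
fc = 3000.0      # assumed crossover frequency, Hz

z_error_mm = c * dt * 1000        # equivalent Z-plane offset, ~17 mm
phase_err_deg = 360.0 * fc * dt   # phase error at the crossover: 54 degrees

# Two equal, otherwise-aligned outputs summed with that phase error:
# |1 + exp(j*theta)| = 2*cos(theta/2)
summed_db = 20 * np.log10(2 * np.cos(np.radians(phase_err_deg) / 2))
# summed_db comes out around 5.0 dB instead of the ideal 6.02 dB, which
# is consistent with still summing "surprisingly close" despite the error.
```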

Have you read the article/chapter "The Renegade Tweeter Theory"? It shows how the final result hangs together.

If so please send me a pm as I am interested in auditioning this design.

Ian

Hi Ian

I don't have a contact but I do know at least one person who has built them, he called me a couple of times on the mobile, on both occasions I was shopping. He just asked me a few questions about parts selection and what local sources there were.

If that person reads this thread, could he make his presence known or contact me by email? That might lead to something.

There could of course be more builders down there. So, if you are able: calling Victoria, where are you?

What I make out from your adventure is that the time sync you are looking for is beyond pseudo-anechoic and simulation prediction capabilities. You did not make any mistake with SE. To absolutely predict such an accurate sync takes no echo and no latency. This is large anechoic chamber and dedicated measurement system territory, not gated measurements and sound cards. Raw and pure, wideband, truly synchronized phase, measured for all drivers from a single point in space, would make the grade. But that's the ideal, and it takes B&W and B&O style investment.
Gating and Hilbert are great tricks, but they still calculate.
All this splicing and tailoring is a great approximation, and very practical.
But your requirements are beyond approximation.
Ruling out the anechoic chamber and a dedicated measurement system with absolute sync, I would go like this:

Do the basic work with gated and nearfield measurements in SE, then decide: no cabinet yet. Make a movable Z-plane tweeter panel. Adjust and measure until the step response is best, with a coherent rise time before your eyes, and then fix the final cabinet drawing. Dahlquist did that in the '70s with an oscilloscope, a 6 dB series crossover, 5 drivers, and square waves. It took him ages. Nowadays that last critical 1% is a matter of sliding the tweeter a bit. It's just that we use higher orders for other reasons today, so we never hit those simulation deficiencies. Because, by using higher orders, we mostly don't make rise-time-coherent speakers in the first place.

The time-sync animal lives in confined spaces anyway. Dunlavy's speakers were famous for absolutely gelling at a tight spot, hence the X mark on the floor for the premium chair at major demos. Same stuff with Spica. Move a little up or down and bye-bye. Lynn Olson has sometimes warned the vertical-array minimum-phase proponents, and the horizontal DQ10 nostalgics, that their dream exists in a small space bubble and is killed by moving the mic an inch...
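To put a number on that "bubble", here is a toy geometry sketch. The driver spacing and listening distance are assumed values for illustration, not any particular speaker's.

```python
import math

# Toy geometry, all values assumed: tweeter 0.15 m above the midrange,
# listener 3 m away, starting level with the tweeter.
r = 3.0     # listening distance, m
d = 0.15    # vertical tweeter-to-mid spacing, m
c = 343.0   # speed of sound, m/s

def arrival_skew_us(y):
    """Tweeter-minus-mid arrival time difference, in microseconds,
    for a mic y metres above the tweeter axis."""
    return (math.hypot(r, y) - math.hypot(r, y + d)) / c * 1e6

on_axis = arrival_skew_us(0.0)
inch_up = arrival_skew_us(0.0254)   # move the mic one inch upward
shift = inch_up - on_axis           # a few microseconds of misalignment
# Small next to a 50 us error, but enough to matter when 1st-order slopes
# depend on tight relative timing through the crossover region.
```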

You arrived at your last (?) version by trial and error for tweeter Z placement anyway. Keep up the good work.

What I make out from your adventure is that the time sync you are looking for is beyond pseudo-anechoic and simulation prediction capabilities. You did not make any mistake with SE. To absolutely predict such an accurate sync takes no echo and no latency... But your requirements are beyond approximation.

Well put.

Quote:

Make a movable Z-plane tweeter panel... Dahlquist did that in the '70s with an oscilloscope, a 6 dB series crossover, 5 drivers, and square waves. It took him ages...

What you call 'movable Z-plane tweeter panels', I still have some here stored away. I suppose others do too. But THAT many drivers, whew! At least with MTM-type arrays you can pair the drivers, as I do.

Quote:

Nowadays that last critical 1% is a matter of sliding the tweeter a bit. It's just that we use higher orders for other reasons today, so we never hit those simulation deficiencies...

Absolutely spot on!

Quote:

Dunlavy's speakers were famous for absolutely gelling at a tight spot, hence the X mark on the floor for the premium chair at major demos. Same stuff with Spica. Move a little up or down and bye-bye. Lynn Olson has sometimes warned the vertical-array minimum-phase proponents, and the horizontal DQ10 nostalgics, that their dream exists in a small space bubble and is killed by moving the mic an inch...

You arrived at your last (?) version by trial and error for tweeter Z placement anyway.

And I can confirm the extremely narrow window, and indeed I have told many Duntech users/owners over the years how to listen to them: smack ON axis, at exactly the same height as the tweeter, much further away than many do, and NEVER nearfield. In fact, the Sovereigns I mentioned earlier should not be listened to nearer than 3 metres, and this is virtually current Duntech philosophy for the whole "Classic" range. Many rooms are simply not appropriate.

But since you have read the renegade article, let me fill you in a little further (maybe the following should be an amendment). The 1% margin (or very near to it) is what has concerned me for so many years, and it is very much behind what has developed.

I want to come back to the 50 µs Z error. WHY was it so innocuous in the end result? Why did such a large error, equating to an 18 mm shift in the Z plane, still get so close to 100% summing at the crossover? It took me a while to figure it out. As Niels Bohr famously replied when asked for the definition of an expert: "Somebody who has made many mistakes in a narrow field."

So the mistake had a benefit, as it led to certain realisations. I shall try to put this as succinctly as possible. The rise time of the mid/bass is relatively slow; the tweeter, however, goes BANG! In time it is easy to define the starting point of the latter, but much less so for the mid/bass. Within that 50 µs the rise of the mid/bass is so slow that it cancels very little of the tweeter's output, if we follow the renegade theory. It takes much more than 50 µs for the mid/bass to reach its maximum slope/acceleration. In the case of this particular error, output was cancelled for only 25 µs. Our predicted response and crossover function were still holding up.
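A toy model of the two step responses makes the point numerically. The time constants below are invented for illustration and are not the actual drivers' figures.

```python
import numpy as np

fs = 1_000_000                 # 1 MHz grid for a toy simulation
t = np.arange(0, 2e-3, 1 / fs)

# Invented step responses: the tweeter reaches its (negative) extreme in
# ~25 us and then decays; the mid/bass rises slowly, ~500 us time constant.
tweeter = -np.clip(t / 25e-6, 0.0, 1.0) * np.exp(-t / 200e-6)
mid = 1.0 - np.exp(-t / 500e-6)

# How far has the mid/bass risen during the 50 us error window?
mid_at_50us = mid[int(50e-6 * fs)]   # only ~0.095 of full scale
# The tweeter's negative excursion therefore finds very little mid/bass
# output to cancel, which is why the timing error stayed so benign.
```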

Compared with the errors involved in the usual way of time-aligning, with all the drivers in the same phase and only 50% vector summing at a singular critical point, the renegade theory, if followed, is far less reactive to errors; they are far more benign.

But since we are talking about focusing the array on that singular point, or extremely narrow window, what is happening off axis? The classic Duntechs don't look all that pretty there. I think a different school has emerged that is concerned with off-axis issues, power response et al. (Dunlavy largely ignored this; the world's most accurate speaker was measured within an inch. I don't mean to be disparaging.) The approach we have tried hard to develop attempts to marry the two and resolve it in the best possible way.

Finally, note that the renegade theory also has an Achilles heel, but one that Achilles would survive. If you measure the effect at 2 metres at x power, then going up +6 dB SPL increases power 4x, and it will now take longer for the negative part of the tweeter's step response to reach maximum. In theory we now need the tweeter even further forward, as the positive-going point is delayed in time. But the Elsinores are quite sensitive; most of the time you are listening to them they use only 1 watt anyway (if not less). Then a reasonable dynamic peak comes along and... but coming back to the point: even a significant 50 µs error (leading to only a 25 µs cancellation overlap) was still so-so, and we are still hanging in there. The much slower initial rise of the complementary driver largely hides the problem, and it becomes more academic than audible. The straitjacket has been removed. Dunlavy went (à la Dahlquist) to extraordinary lengths; speaking to his associates, it was a virtual obsession. But we are moving on...
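For reference, the power arithmetic here is just the standard decibel relation (the 1 watt listening level is from the post; everything else is textbook):

```python
# Textbook decibel-to-power arithmetic behind the +6 dB point.
def db_to_power_ratio(db):
    return 10 ** (db / 10)

six_db = db_to_power_ratio(6)    # ~3.98, i.e. +6 dB SPL needs ~4x the power
peak = db_to_power_ratio(20)     # a +20 dB peak over a 1 W average: 100 W
```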

Further thoughts...? I am enjoying this. This kind of discussion helps to distill things in the mind.

Originally posted by salas If nature worked only precision-wise, nothing would ever have worked. So your first version appeared very happy.

The two added physical benefits are that it is easier to deal with diffraction effects (and to make use of felt), and that box construction is easier.

Quote:

A bang on start time needs an infinite rise time to totally cancel with an opposite bang on infinite rise. Lab thoughts.

Yes, the time error only caused partial cancellation; that's what the results clearly indicate. I think you understand it well enough. I don't think I can teach you much.

Quote:

Your initial speaker had an infinite reverse null? No. Your current has a much better one? Like double? If yes, the renegade theory must hold ground.

Exactly, yes. The null is an important indicator: if cancellation is complete, you should get an infinite null. And yes, the second one was better, but... it also comes down to what I was aiming for.

I want the null, if used as an indicator, to hang in there a lot better, and not just at one discrete mic'd point, even in the earlier version. So, by all means get the maximum null, and then explore from there. My aim would be not so much to find that extreme null as to find a null that works acceptably over a larger window, also moving forwards and backwards from the ideal. The renegade theory allows me to get that comfortably, without going to the extremes that have been needed in the past.

Quote:

And yes, Duntech and Dunlavy users have said that your room needs treatment if you want them to work great. Maybe that is the reason the pro sector holds on to them. Treated work environments...

That's not a bad point. Yeah, that could well be it.

I am glad you understand the renegade tweeter theory. The design is not just a challenge in itself; whether the theory comes across clearly is another. It does us both credit.