
A bit has a logical value AND a physical shape. (This has already been mentioned earlier in this thread.)

For you as an IT person, the logical value is obviously the only thing you're interested in. That also explains your overall and "buffer" logic.

For me as an audio+IT person, I'm interested in the logical value AND the physical shape and conditions.
I can tell you that gives you a much wider perspective.

Anyhow, I give up on you. You don't seem to be able to see beyond your pretty narrow IT universe.

Enjoy.

P.S.: 1. I consider the "plug the cable" test nonsense.
2. Since we discovered the server impact more than a year ago, we've been running one server wireless. Guess what: the server optimization still made a difference. Again, all the stuff I post has been verified on several systems before I post it.

You don't understand what we are talking about. Sorry, I can't help with that!

I shall give the various suggestions a test and let you know. As other people have noted, having the wireless running in the Touch affects the sound, which is why I prefer Ethernet, but I shall try the wireless bridge again.

Re "the Touch is only a 100 Mbit device": the network chart shows it is only running at 12% capacity at the initial start of the song, and at a lot less than that afterwards, so it looks like there are no capacity issues.

Capacity is not an issue at all: even WAV files are only sent at about 1.2 Mbit/s. The issue is that NICs do not reduce their rate based on the device at the endpoint. They still clock the data out at the same rate; they just reduce how often and how much they send. This reduction is not instant either; it is a constant back-and-forth tug of war between the host and the client. My point is that if timing were such an issue, I would think this constant burst-and-adjust would sound horrible.
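The arithmetic behind these figures is easy to sanity-check. A minimal sketch, assuming CD-quality audio (16-bit / 44.1 kHz / stereo; the exact rate the poster measured may differ slightly):

```python
# Hypothetical figures: CD-quality WAV streamed to a 100 Mbit/s
# device such as the Touch.
bits_per_sample = 16
sample_rate_hz = 44_100
channels = 2

# Raw audio payload rate in bits per second.
wav_bitrate_bps = bits_per_sample * sample_rate_hz * channels
link_capacity_bps = 100_000_000  # 100 Mbit/s NIC

utilisation = wav_bitrate_bps / link_capacity_bps
print(f"WAV payload rate: {wav_bitrate_bps / 1e6:.2f} Mbit/s")
print(f"Steady-state link utilisation: {utilisation:.1%}")
```

Even before framing overhead, a CD-rate stream occupies well under 2% of a 100 Mbit link, which is why nobody in the thread disputes the capacity point itself.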

Having read soundcheck's last posts, it seems quite clear that he is saying the server optimisation works because it affects the shape of the data packages going into the Touch, and does not depend on noise being transmitted via Ethernet.
I am also aware that soundcheck feels able to discount perceptual bias in those reporting improvements from server-side tweaks (but to rely on it to explain those who report no audible effect).

Just to clarify, could evdplanke please confirm whether this was what he described as being wise a few dozen posts ago?


Agree, he doesn't think it is noise. It's not clear what he thinks it is. He talks about "bit" and "shape", which, if it means anything at all, would be an OSI layer 1 (physical) thing that the OS has no control over - it is managed by the NIC firmware and hardware and the physical components such as cabling.

The point has to remain that once an IP packet has been received and the (music) bits stored in RAM for playback, WHATEVER HAPPENED to the music information (data) on its journey from hard disk to Touch is over and done with, and the server etc. has no way of influencing it. All that is left are data values, and we know they are 100% correct within the buffer.
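The "100% correct within the buffer" claim is straightforward to demonstrate in principle. A toy sketch (hypothetical helper, not Touch firmware): once the bytes sit in RAM, two deliveries of the same file over any two transports either yield byte-identical buffers or a detectable error - there is no third state in which the buffer silently "remembers" its journey.

```python
import hashlib

def buffer_digest(buf: bytes) -> str:
    """Digest of a playback buffer; identical bytes give an identical digest."""
    return hashlib.sha256(buf).hexdigest()

# Simulate the same track delivered twice over different (hypothetical)
# transports; TCP guarantees the payload is delivered intact or resent.
track = bytes(range(256)) * 64
via_wired = bytes(track)      # copy "received" over Ethernet
via_wireless = bytes(track)   # copy "received" over Wi-Fi

# Byte-identical buffers: playback starts from exactly the same data
# regardless of how it arrived.
assert buffer_digest(via_wired) == buffer_digest(via_wireless)
```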

Hence, even if what is being described is not noise injected via the Ethernet cable or generated in the Touch NIC, playback from the buffer with the network disconnected and dormant must represent the best possible situation for the Touch to create its best sound - free from ANYTHING the server/network is doing.

... and yet nobody can hear it and I can't detect it with ADM (FWIW).

I'm pretty sure that 45 seconds of (FLAC) playback should be enough to hear a difference... if it isn't, then I stand by my argument that if you haven't heard a difference by then, there is no difference.

The analogue "shape" of the Ethernet packets is irrelevant if a switch or router is being used.

Unless the Win7 server is directly connected to the Touch by a single cable, the Ethernet packets from the Win7 PC will go through a switch or router, where the original packets are converted from analogue back into digital on one port and then forwarded with a new, regenerated "shape" to the Touch on a second port. So the Touch will never see the "shape" of the packets as sent by the Win7 PC.


In my network there are 3 switches between the server and the Touch in the hi-fi rig, and 1 switch to the Touch for my kitchen speakers.
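The regeneration argument above can be sketched as a toy model (not real switch firmware): a store-and-forward switch samples the incoming signal back to byte values, buffers them, and drives a brand-new waveform on the egress port, so only the logical values cross each hop.

```python
def store_and_forward(ingress_frame: bytes) -> bytes:
    """Toy model of one store-and-forward switch hop: the ingress port
    recovers byte values from the incoming signal, the switch buffers
    them, and the egress port transmits a freshly generated waveform.
    Only the byte values survive; the sender's analogue 'shape' does not."""
    buffered = bytes(ingress_frame)  # signal sampled back to digital values
    return bytes(buffered)           # new, locally clocked transmission

frame = bytes(range(60))           # arbitrary toy payload
hop1 = store_and_forward(frame)
hop2 = store_and_forward(hop1)
hop3 = store_and_forward(hop2)     # e.g. three switches = three regenerations
assert hop3 == frame  # data unchanged after any number of hops
```

Each hop discards the incoming waveform entirely, which is why the number of switches in the path (three in the rig above) makes no difference to the byte values delivered.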

Your earlier ADM tests proved that some of the player tweaks make a detectable difference to the analogue outputs. I think Soundcheck was rather pleased about this, even though he's not a 'measurements' sort of guy. But now you've used the same tool to prove that there's no detectable difference between (1) a player corrupted by being connected to the network and (2) one isolated from these corrupting influences and playing back from a perfect buffer.

I don't think anyone expected Soundcheck to go along with this test, because he's got too much invested in the idea that the network and server DO have an influence. "I consider the 'plug the cable' test nonsense" was completely expected. But to be honest, I expected him to report hearing a positive difference. That at least would have been very difficult for anyone to disprove.

But your ADM test rather limits that option. Of course, he could still report hearing a difference, and then argue once again that your test means nothing - the old "there are differences we can't measure" argument.

But you've certainly limited his wiggle room.

Last edited by chill; 2012-01-28 at 10:25.
Reason: Edited to be slightly less confrontational.

There is a very simple qualification for these changes specified on page 1. If you cannot hear any difference when applying Fidelizer audiophile then these settings are not for you.

My system is producing absolutely gorgeous music thanks to these and Soundcheck's mods.

Phil, I tried the Ingus DRC plugin and was alarmed at the loss in resolution; I have read that its users have to accept this as a trade-off for having room correction. So I would advise, if you are going to try these mods, that you remove the Ingus DRC first.