How about six instruments, one playing a pure tone at 20 kHz, one at 18 kHz, one at 17.5 kHz, one at 15 kHz, one at 14.2 kHz, and one at 11 kHz. Will they all be clearly discernible from each other in a recording sampled at 44.1 kHz? How would 96 kHz sampling compare?

Though this is beyond my skill to create and listen to, what you're suggesting is very much testable at the theoretical level.

Mix six pure tones into a 96 kHz PCM signal and a 44.1 kHz PCM signal and ABX them. This would be fairly easy to construct if you're handy with Matlab.
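For anyone without Matlab handy, here is a rough sketch of the construction in Python/NumPy (an assumption on my part; the six tones and sample rates come from the question above, and the crude peak-picking works only because a one-second clip puts every tone on an exact FFT bin):

```python
import numpy as np

FREQS = [20000, 18000, 17500, 15000, 14200, 11000]  # Hz, from the question

def tone_mix(freqs_hz, fs, dur=1.0):
    """Sum of unit-amplitude sines sampled at rate fs."""
    t = np.arange(int(fs * dur)) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in freqs_hz)

def spectral_peaks(x, fs, n=6):
    """Frequencies of the n strongest bins in the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(x))
    bins = np.fft.rfftfreq(len(x), 1 / fs)
    return sorted(bins[np.argsort(mag)[-n:]])

peaks_44 = spectral_peaks(tone_mix(FREQS, 44100), 44100)
peaks_96 = spectral_peaks(tone_mix(FREQS, 96000), 96000)
print(peaks_44)
print(peaks_96)
```

Since all six tones sit below the 22.05 kHz Nyquist frequency of 44.1 kHz sampling, both rates reproduce all six as distinct spectral peaks; in theory the 96 kHz version buys nothing for this particular signal.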

typically though, in compression we hold video to much lower standards than audio - i've done a fair bit of mpeg-1/2/4 codec programming, and the amount of detail you can get away with discarding is amazing. once you know what to look for, though, it gets quite easy to abx. with audio it's all-or-nothing - no video codec hoping for usability is going for lossy transparency (at least i've never heard of any, and can't see much need for one), yet with audio that goal is attainable at tantalizingly low bitrates - sub-200 kbps for most signals.

Evidently you've never worked with professional video equipment. DV is a lossy codec that strives to achieve transparency (and almost succeeds). Many broadcast signals are also compressed, although these typically use bitrates beyond the threshold of transparency. Video compression is important for keeping satellite bandwidths down (and allowing more channels, for example). With HD coming along, video compression is likely to become even more widespread in the television world.

yeha wrote:

for the nuance search, the only valid test of opacity is when two signals are presented and a listener can discern between them with a given confidence level - if they can't reach that level, they're not hearing a difference and the signals are perceptually identical, regardless of how much they'd like to believe otherwise. having people give an untrained and largely random grade to a passage after one playing, without a reference, will test people's moods on the day of the test, their musical preferences, their level of caring, whether they want to get in your pants, etc., with the level of nuance contained in the signal as just one of the many things being tested. you could run the results through anova, but i doubt anything useful would come from it - the noise would be extremely high since so many different things are being compared at once.

one hydrogenaudio member claimed that he had headaches after listening to lossy audio for several hours, but not with lossless audio. i believe a long-time-scale abx test was theorized, but i think it fell through.

to test this nuance theory, that's exactly what you'd need - a long-term abx test with rounds of days or weeks at a time. it doesn't sound very appealing to coordinate such a test, especially with multiple participants, and it would also severely limit the subjects' use of their own equipment, as a lot of legitimate and common usage could destroy the blindness of whatever's being tested.

Why is an A/B comparison the only valid test? My test specifically avoids presenting both samples to the same subject because hearing one before the other might influence their emotional response to it. Keep in mind that what my test is supposed to measure is an emotional response, not transparency. I want to know whether two slightly different signals can cause us to respond differently emotionally, regardless of what we hear through conscious perception. All of the variables that you mentioned are indeed uncontrolled variables, but so long as the tests are conducted at the same place and time, there should be no significant difference between the two samples. The uncontrolled variables should be the same across each sample and effectively cancel each other out (like cancelling out an unknown constant in a physical equation).

I agree that ABX testing provides strong evidence for perceptual transparency; I'm just not sure that perception is all there is to music. Since we listen to music because of our emotional response to it, I'd be curious to discover whether emotional transparency occurs at a different place.

DV is a lossy codec that strives to achieve transparency (and almost succeeds). Many broadcast signals are also compressed, although these typically use bitrates beyond the threshold of transparency.

done right, dv does what was asked of it - a cbr, simple-to-implement-in-hardware, good-enough-for-most upgrade of mjpeg. overly simplistic bit allocation schemes have appeared in a lot of the implementations i've seen, and for anything involving processing afterwards (not to mention recompression into another 8x8-dct-based codec) i'd try to avoid it. if you want raw footage for editing, keep it raw. if you want to save space, go straight to mpeg-2. for the first job dv introduces unwanted levels of noise; for the second it's at best a superfluous middleman and at worst an upper bound on quality.

of course i'm biased - most people would never notice the difference nor care, granted. it's the same way most of the population thinks 128 kbps mp3s are as good as the cd - for many they really are as good, due to limits of hearing ability; others may be able to perceive the artifacts, but the artifacts aren't obvious or offensive enough for them to even think about.

Devonavar wrote:

Video compression is important for keeping satellite bandwidths down (and allowing more channels, for example). With HD coming along, video compression is likely to become even more widespread in the television world.

it's already as widespread as it can get - everything seen on television, bar some straight-to-air local productions, is compressed. analog back-end station feeds went the way of the dinosaurs long ago, from everyone i've heard from in the business. most satellite operations apply disastrous amounts of compression to maximize channel counts - watching directv or dish network is not a fun experience for me at all: all i see are poor transrating mechanisms, ample opportunities for dct noise-shaping, and wonderment at the lack of post-processing in the tuner boxes. with hdtv broadcasts i've seen more of the same - compression ratios there are pushed even higher than on 480x480 satellite signals, since each block covers a smaller percentage of the frame dimensions.

Devonavar wrote:

Why is an A/B comparison the only valid test?

well, valid for realistically possible tests. if you're not specifically testing what you set out to test but a range of different things, you need a much higher subject count to raise the signal above the noise, and i doubt you could ever get the confidence as high as an abx test. i'm talking several hundred people as an off-hand guess - the population consists of such a varied mix of grading psychologies, musical tastes and daily mood shifts. from day to day a person's opinion of a song can vary greatly, regardless of quality - it's a matter of state of mind. imagine a national tragedy occurred - everyone would give a 10 to a sentimental song, be it from a cd or scratchy am radio, and a 0 to someone rapping about their escalade (or probably anything else that didn't strike a chord with grief, for that matter). unless you're finding complete strangers, many of the subjects tested will have shared experiences that mirror the 'national tragedy' note on a smaller scale, but the test still has no control method to screen for this other than massive numbers.

to find out exactly how many extra people you'd need to balance these variables out, you'd have to perform more tests which isolate them (i don't know how you'd isolate someone's mood), test for them, then use the variance found in those tests to see how much it's throwing off the readings from what you were trying to find out originally - you can't just stop after 50 tests and call it a day. it would be much, much more work than a long-term abx test, and still wouldn't be accepted by most test-philes i've met. the differences among the population and day-to-day life would eventually balance out in the findings, but you'd have to be confident about exactly how many data points you'd need to be certain of that.
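for a sense of scale, the arithmetic behind 'a given confidence level' in a plain abx test is just the binomial distribution - a minimal sketch in python (the 16-trial count is only an example, not a recommendation):

```python
from math import comb

def binomial_p(n, k, p=0.5):
    """one-sided p-value: chance of k or more correct answers out of
    n trials if the listener is purely guessing (success rate p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# out of 16 abx trials, how many correct before guessing is implausible?
for k in range(10, 17):
    print(k, round(binomial_p(16, k), 4))
```

12 of 16 clears the usual 5% level. the mood-and-preference survey needs far more data precisely because its baseline is no longer a clean coin flip.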

now if you added a 3rd sample to the mix, one audibly different (say a 10 khz lowpass), so there was actually a control in the whole experiment, you could bring down the numbers markedly. i still wouldn't point to such a test as evidence unless several hundred people participated; even then it could be shot down by either side of a debate as being so far from the itu testing recommendations.

my objections are just first impressions - this ground has been covered before in subjective test design and analysis, but i'm too out-of-the-loop to recall exactly what classification (and associated shortfalls) this specific test idea would have.

Devonavar wrote:

I want to know whether two slightly different signals can cause us to respond differently emotionally, regardless of what we hear through conscious perception.

that's what blind testing is for - it doesn't have to be a conscious, or even detectable-by-the-senses, phenomenon being tested. emotional states can be quantified, so your per-round finding is how you feel. if you get a sample, then every day sit down and listen to it (with actual playback randomized between the original and the altered emotion-test version) and record your emotional state in a binary form ("i was moved" / "i was not moved"), you have a bullet-proof test of whether your brain, consciously or unconsciously, detects any difference on any level whatsoever between the two signals. well, bullet-proof is a strong word, but if you get the confidence level high enough it would be illogical not to accept the results.

of course that's just for the subset of emotions covered by "being moved" - to find out emotional transparency conclusively for an individual, you'd have to perform separate testing runs detailing how each emotion you're interested in was affected.
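a sketch of how that daily protocol could be logged and tallied, as a pure-python simulation (the 'moved' probabilities are made-up placeholders, not data):

```python
import random

def run_emotion_trial(days, moved_prob):
    """simulate the daily protocol: each day one version is chosen at
    random and the listener logs moved / not moved. moved_prob maps
    version -> chance of 'moved' (hypothetical rates, not data)."""
    tallies = {"original": [0, 0], "altered": [0, 0]}  # [moved, rounds]
    for _ in range(days):
        version = random.choice(["original", "altered"])
        moved = random.random() < moved_prob[version]
        tallies[version][0] += moved
        tallies[version][1] += 1
    return tallies

random.seed(0)
# if the altered version is emotionally transparent, the two 'moved'
# rates should converge as the rounds accumulate
t = run_emotion_trial(365, {"original": 0.6, "altered": 0.6})
print({v: round(m / n, 2) for v, (m, n) in t.items()})
```

with a year of rounds you'd then compare the two proportions; the same binomial arithmetic as an abx test applies, just with far fewer data points per unit of effort.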

Devonavar wrote:

I agree that ABX testing provides strong evidence for perceptual transparency; I'm just not sure that perception is all there is to music. Since we listen to music because of our emotional response to it, I'd be curious to discover whether emotional transparency occurs at a different place.

in terms of testing methodology, there's no difference between perceptual transparency and emotional transparency - both can be quantified after exposure to a given stimulus, and the likelihood of conscious or subconscious recognition found. emotional transparency could well be much higher than perceptual transparency (since in most cases it will be inclusive of it); how much higher can only be tested on an individual basis.

most satellite operations apply disastrous amounts of compression to maximize channel counts - watching directv or dish network is not a fun experience for me at all, all i see are poor transrating mechanisms, ample opportunities for dct noise-shaping, wonderment at the lack of post-processing in the tuner boxes, and with hdtv broadcasts i've seen more of the same

Then you must be watching the equipment and not the show; that seems even worse than audiophilia - like getting an expensive stereo for listening to audiobooks. They can't please everyone. Some people would be really pissed if they pulled some of the periphery channels to give more bandwidth to the core channels. And I want as much timeshifting as possible; I'm not going to follow a TV schedule.

On the other hand, I think that all the pay-per-view channels are just as bad a waste of bandwidth as HDTV.

this could be read as a digital source revealing more (new) information after a transformation, which we certainly don't want to communicate! :) perhaps instead of "discard bits" we could say "sample from a source using fewer bits", as you certainly don't gain anything from putting an already-digital image through a colorspace conversion.

Excellent point--it would be better phrased as "sample from a source using fewer bits." The only good colorspace conversion is the one you don't have to perform.

Then you must be watching the equipment and not the show; that seems even worse than audiophilia - like getting an expensive stereo for listening to audiobooks. They can't please everyone. Some people would be really pissed if they pulled some of the periphery channels to give more bandwidth to the core channels. And I want as much timeshifting as possible; I'm not going to follow a TV schedule.

sort of - i tuned my vision to notice block-based codec artifacts, and it's not something i can switch off. imagine how annoyed someone would be if they were stuck with a crt monitor visibly flickering due to its refresh rate, and the equipment couldn't be run at a higher rate to remove the flicker. sure it's usable, but no matter what's on the screen there's always that maddening flicker drawing your attention. it's not that severe with satellite tv for me, but it's still much harder to tune out and enjoy what's on when my brain keeps pointing out i-frames, skipped blocks, quantization noise, etc. the same thing happens with poorly-mastered dvds.

but yes, most people either wouldn't notice or couldn't care less because they're not as big a dork as i am. i very much envy that - having your brain stay active when you want to relax is a major pain. now that i don't work much with video compression, i wish i could untrain myself.

I think that Devonavar's test is a very good idea in many (most) respects, though flawed in others. First, I would like to speak about the good.

Devonavar is actually quite underestimating the power of his proposed test. If we restructured the test (slightly) in the way I propose, we could actually learn quite a lot not only about "nuance", but about the listeners as well: I propose a test with several musical clips using different compression schemes, different types of music, and a large number of observers (this method is a hog for data).

Rather than getting into the deep math, let me try to tell you why Devonavar's idea is intuitively appealing. Assume there are two unobserved parameters we wish to estimate: our sensitivity to musical information and the degree to which compression schemes affect our enjoyment of music. Assume we have multiple compression schemes and types of music (and uncompressed music as well, of course) as above, and many listeners. Truly terrible compression schemes would be correctly identified by all, and very good compression (say, true lossless) would be correctly identified by none--telling us much about the quality of the compression, though little about the listeners. The middling compression would separate listeners well, however, with the better listeners progressively clustering toward each other in their answers and the less able clustering as well. From this test, we could determine both the best listeners (the "golden ears") and the best and worst compression (or music)--by the degree to which a given clip separates the listeners into the two groups. This is called a two-parameter item response theory (IRT) model and is the basis for standardized tests such as the SAT, GRE, LSAT, etc., wherein the questions on those tests stand in for the music in our proposed test. When recording the results, those who proctor these tests are able to determine both the "difficulty" and discrimination of each question and the ability of those tested. So cool (... for stats nerds).
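To make the two-parameter model concrete, here is a minimal sketch of the 2PL item response curve in Python (the items and all parameter values are invented purely for illustration):

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT: probability that a listener of
    ability theta correctly flags an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# hypothetical items: an awful codec (easy to spot), a middling one,
# and lossless (impossible to spot, whatever your ears)
items = {"awful": (2.0, -3.0), "middling": (2.0, 0.0), "lossless": (2.0, 6.0)}
for name, (a, b) in items.items():
    probs = [round(p_correct(th, a, b), 2) for th in (-2, 0, 2)]
    print(name, probs)  # weak / average / golden-eared listener
```

Note how the "awful" and "lossless" items give nearly the same answer for every listener (telling us about the codec, not the listeners), while the middling item is the one that spreads listeners apart - exactly the intuition above.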

But here is the problem: we must assume that the test is truly an unbiased estimator of our unobserved parameter(s). Here, I am less certain. First, if the system is too poor to reveal the differences in the clips, we would be led astray--for example, I have my doubts that I would be moved emotionally by any music played through a Bose Wave Radio. (BTW, this is also yet another reason why demo-ing gear in "Big Box" retail stores like Best Buy and Circuit City is such a farce: in order to easily switch amongst electronics and speakers, these retailers use very long, lossy cable runs and switchers which are almost sure to obscure the differences between equipment by dumbing it all down. But I digress.)

Second, even if a suitable system were assembled (contrary to the anti-audiophile mafia, this could be done quite easily and for very little money--perhaps as little as $2-3k), would we still be testing the differences in the music? Well, indeed we would be if we stuck to Devonavar's suggestion and all listened on the same system--though given our geographic dispersion this seems unfortunately unlikely.

The problem is easy to see: assume that there is uncertainty regarding our ability to perceive an unobserved but "true" parameter, "sound quality". Setting aside the vagaries of compression for the moment, let us study the uncertainty surrounding the listeners (call this var_l, for variance-listener) and that of the stereo (var_s). If we do not account for the stereo but observe the total amount of uncertainty or error, we would incorrectly attribute the whole quantity (var_s + var_l) to the listeners (assuming corr(var_s, var_l) = 0, which may or may not be correct). This would lead us to a possibly biased conclusion, and one that certainly overestimates the variability in "hearing". By terribly abusing some statistical ideas and terms (IRT does not work in quite the way I describe...), you could think of this as (incorrectly) expanding our confidence intervals and leading to Type II errors (incorrect acceptance of the null). This could be overcome by using several listeners at each of several stereos, each playing several clips of music, and then estimating a three-parameter IRT model... though a) I have never done that and b) I think that would make it difficult to get quality results due to inefficiency (but I really am not sure about that... see point a).
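A quick simulation of the confounded-variance point (pure Python; the var_l and var_s values are arbitrary choices for illustration):

```python
import random

random.seed(1)
VAR_L, VAR_S = 1.0, 0.5   # hypothetical listener and stereo variances

def observed_scores(n):
    """Each score mixes independent listener noise and stereo noise;
    if the stereo term is ignored, all of it gets attributed to the
    listener."""
    return [random.gauss(0, VAR_L ** 0.5) + random.gauss(0, VAR_S ** 0.5)
            for _ in range(n)]

xs = observed_scores(100_000)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(round(var, 2))   # close to VAR_L + VAR_S = 1.5, not VAR_L = 1.0
```

The measured spread sits near var_l + var_s, so a design that never varies the stereo has no way to tell how much of that 1.5 belongs to the listeners--which is exactly why several listeners at each of several stereos would be needed.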

Finally (or, if you skipped that over-long statistical detour, next), I take offense at the way that nuance and emotion are used here. In fact, the difference between equipment can be quite profound, not limited in the way that Devonavar conceives of it. For example, when switching between my roommate’s $100.00 Samsung DVD player as a transport and my EMM Labs CDSD transport, I was struck 1) by how incredibly much they sounded alike (like 99.99 percent), and how much I am sure they measured alike (though I did not measure them), but then again 2) by how profound these very small differences were on an emotional level (to use Devonavar's term), or on my ability to connect with the music. Much the same might be said about cable elevators or my Arcici equipment rack: small differences, yet profound ones nonetheless. Of course I am sure these would fail to impress yeha, as he himself has said he is far more concerned with how a piece of equipment measures than how it sounds, but for lovers of music, I do not think that Devonavar's idea of small and nearly-but-not-quite unimportant differences really captures this. Whether these differences justify the $7000.00 difference between the EMM Labs and the Samsung is a question that each person has to answer for himself or herself, but to write this off as only a minor difference, a splitting of hairs, or a matter of nuance, is to confuse a difference in degree (which is, again, slight) with a difference in kind (which is profound).

Truly terrible compression schemes would be correctly identified by all and very good compression (say true lossless) would be correctly identified by none--telling us much about the quality of the compression, though little about the listeners.

what else, if anything, would be changed about the test, though? would you still be doing only one playback with no reference and asking the subject to rate their emotional response? or playing the reference, asking their opinion, then following with the test signal (highly compressed as a control, subtly altered/compressed signals, lossless as a control) and asking their opinion again? or making that second sequence a different piece of music (yikes)? how many additional runs, if any? i'm uncomfortable with single-session tests but could be talked into it.

BobDog wrote:

But here is the problem: we must assume that the test is truly an unbiased estimator of our unobserved parameter(s). Here, I am less certain. First, if the system is too poor to reveal the differences in the clips, we would be led astray--for example, I have my doubts that I would be moved emotionally by any music played through a Bose Wave Radio

you can only feel emotionally moved by music when it's played on ultra-accurate equipment? what about a scratchy, decaying, never-before-heard, just-found-yesterday, thought-lost-forever recording of your favorite long-dead artist? i get goosebumps from beethoven's 6th regardless of what i listen to it through, so long as the full range of the orchestra is audible. as a musician i'm moved more by the music and its performance than by how accurately the signal is recreated (when listening for enjoyment).

musical preference is such a subjective area. my biggest problem with this test was the fact that people will respond more to what they're listening to and its relation to their mood than to how accurately it's being recreated. you'd get such massive spreads in the grades that the variance would overlap between all samples. heck, some people will prefer distorted signals over originals, even while being capable of identifying artifacts in the best compression methods! i've met people who were seemingly allergic to clarity in the music they were trying to record, despite having very good hearing - they just preferred the loss of detail.

BobDog wrote:

Finally (or, if you skipped that over-long statistical detour, next), I take offense at the way that nuance and emotion are used here. In fact, the difference between equipment can be quite profound, not limited in the way that Devonavar conceives of it. For example, when switching between my roommate’s $100.00 Samsung DVD player as a transport and my EMM Labs CDSD transport, I was struck 1) by how incredibly much they sounded alike (like 99.99 percent), and how much I am sure they measured alike (though i did not measure them), but then again 2) how profound these very small differences were on an emotional level (to use Devonavar's term), or on my ability to connect with the music. Much the same might be said about cable elevators or my Arcici equipment rack: small differences, yet profound ones nonetheless. Of course I am sure these would fail to impress yeha, as he himself has said he is far more concerned with how a piece of equipment measured than how it sounds, but for lovers of music, I do not think that Devonavar's idea of small and nearly-but-not-quite unimportant differences really captures this idea.

i dislike the terms nuance or emotional content when used to describe things which already have adequate descriptors. adding "nuance" to frequency, amplitude and phase as audio characteristics makes debate impossible - it's like saying copper is always better than aluminum for heatsinks because copper has more "condabruency" and is thus the better substance. what does it mean and consist of? how can i measure it? how can i test to see if it even exists? remember that "nuance" originally entered this discussion to describe differences in a signal so small that electrical equipment is unable to detect them (!). how much better does the equipment have to get before it can? more importantly, what will the testing procedure be to see whether our equipment has gotten to that stage?

how can i test to see if it even exists? remember that "nuance" originally entered this discussion to describe differences in a signal so small that electrical equipment is unable to detect them (!).

I agree that the (one) trick is indeed to be able to test electronically, in an objective, reproducible way, the results that we hear with our ears. Thus far, this has not been the case, and I agree with yeha that good tests ought to be our goal. Where we differ is in two places. First, I believe that listening tests (the more I read about ABX, the less I think these qualify, but I remain open to the possibility of their being able to work, at least) are every bit as valid as the electrical ones if or when the electrical ones fail to measure that which we hear. Second, I argue that a listening test of equipment's ability to produce a reasonable facsimile of a musical event is the most important test.

I am not arguing that electronics cannot be made more sensitive than human ears, but this is not the same thing as saying that they can replicate humans' functioning. This is in much the same way as computers can be programmed to be immensely powerful thinking machines, beating their creators at solving complex mathematical problems, pattern recognition, and even chess, and yet they cannot "out-think" humans on many of the simplest and commonest tasks--many of which are what make the human experience so special to us. Perhaps because Deep Blue defeated Kasparov, yeha thinks that human perception is at fault when it contradicts Deep Blue's. I remain of the mind that when the goal is to satisfy an intimately human desire (such as our appreciation of music), humans remain the preferred judges of what is best. When electronics can begin to tell us why so many prefer terribly "measuring" SET amps, or why two solid-state amps do not sound very much alike at all, or why two AC cables do indeed make a large difference in the sound we (as humans) hear, then I will begin to have as much faith in tests as yeha. That is to say, machines must try to reproduce the human experience, not the other way around (at least until machines begin to listen to music for their own pleasure).

yeha wrote:

BobDog wrote:

Truly terrible compression schemes would be correctly identified by all and very good compression (say true lossless) would be correctly identified by none--telling us much about the quality of the compression, though little about the listeners.

what else, if anything, would be changed about the test though? would you still be doing only one playback with no reference

The tests are not an absolute standard; they are a relative standard wherein all unobserved parameters are estimated in relation to each other (think of the SAT: what a "good" overall score is may vary from year to year, but the questions on a given test and the takers of that test will both be properly identified relative to each other, subject to error). If you wish, you may think of it as all of the musical clips and all of the listeners being their own references. It is for this reason that as many musical clips and listeners as possible would be used.

I agree that the (one) trick is indeed to be able to test electronically, in an objective, reproducible way, the results that we hear with our ears. Thus far, this has not been the case, and I agree with yeha that good tests ought to be our goal. Where we differ is in two places. First, I believe that listening tests (the more I read about ABX, the less I think these qualify, but I remain open to the possibility of their being able to work, at least) are every bit as valid as the electrical ones if or when the electrical ones fail to measure that which we hear. Second, I argue that a listening test of equipment's ability to produce a reasonable facsimile of a musical event is the most important test.

one major problem i have with the 'electrical measurements failing' idea is that you're usually playing a digital source - if it was recorded digitally once, it can be recorded digitally again with equal precision, giving us our measurement of how much the signal has been altered. if the differences are below the threshold of hearing, which can only be defined by blind testing, whatever components are being tested are transparent to each other. if the differences can be abxed despite being below the commonly held threshold of hearing, then the threshold-of-hearing data is wrong and must be retested (obviously).
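a sketch of that 'record it digitally again and measure' idea as a null test in python (the 1-lsb offset is just an illustrative error, and real captures would need sample alignment first):

```python
import math

def diff_dbfs(a, b):
    """rms level of the difference between two captures, in db relative
    to full scale (samples as floats in [-1, 1])."""
    diff = [x - y for x, y in zip(a, b)]
    rms = math.sqrt(sum(d * d for d in diff) / len(diff))
    return float("-inf") if rms == 0 else 20 * math.log10(rms)

a = [math.sin(2 * math.pi * 0.01 * n) for n in range(1000)]
b = [x + 1 / 32768 for x in a]    # hypothetical 1-lsb (16-bit) offset
print(diff_dbfs(a, a))            # -inf: the captures null perfectly
print(round(diff_dbfs(a, b), 1))  # about -90 dbfs, the 16-bit noise floor
```

once the difference signal is measured, the only remaining question is whether that level is audible - and that's where blind testing comes back in.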

BobDog wrote:

I am not arguing that electronics cannot be made more sensitive than human ears, but this is not the same thing as saying that they can replicate humans' functioning. This is in much the same way as computers can be programmed to be immensely powerful thinking machines, beating their creators at solving complex mathematical problems, pattern recognition, and even chess, and yet they cannot "out think" humans on many of the simplest and commonest tasks--many of which are what make the human experience so special to us.

first off, we're coming at this from different angles. yes, the human experience of 'hearing' is vastly beyond the abilities of any computer's hardware or the software running on it, because the brain is a far superior processor of the incoming signal. similarly, deep blue was nothing but an over-glorified hash table and game-tree search - it was simple programming, but by throwing enough processing at an unintelligent process (the rules of chess are fixed, and thus an upper bound on complexity exists) you can exploit the fixed rules to create an ever-increasing probability of victory. the hardware array was impressive in its scale, but there was nothing particularly brilliant about the machine or its software.

so yes, the ear and brain are astonishing processors of signals, but they can only apply that processing to signals they can physically perceive. all the brain has to work with are the cues sent by the ear, and all the ear can perceive are frequency, amplitude and phase, same as any piece of equipment. if every facet of a sound wave can be represented with greater accuracy than the ear can perceive, then various signals which are measurably different will be perceived identically by the ear, and the brain will have no 'new' information with which to apply its tremendous processing power.

the extra information must be coming from somewhere for the brain to outperform instruments, and the only media this information can be transferred through are frequency, amplitude and phase. that's my stumbling block.

BobDog wrote:

That is to say, machines must try and reproduce the human experience, not the other way around (at least until machines begin to listen to music for their own pleasure).

again, that's my issue. there are two facets to the human experience - the actual transformation of a sound wave into nerve impulses, and the processing the brain puts those impulses through. the brain's abilities are far, far beyond our reproduction, but the hearing stage is not. if we can get two measurably different signals to the ear that are transformed into 'equal' auditory nerve impulses, the brain has no new information with which to perceive a difference.

sort of - i tuned my vision to notice block-based codec artifacts, and it's not something i can switch off.

Oh. Sorry about that.

So I'm guessing you're not really a videophile, and perhaps you might even prefer worse equipment if it makes the artifacts harder to see? Are the artifacts in brightness or in color? If in color, does turning the color saturation of the image down help?

If most of the compression damage is done by the TV signal retailer, maybe the artifacts would be reduced if you could take the same channel from two sources and blend them.

So I'm guessing you're not really a videophile, and perhaps you might even prefer worse equipment if it makes the artifacts harder to see? Are the artifacts in brightness or in color? If in color, does turning the color saturation of the image down help?

vhs just looks uniformly bad compared to dvd, and since the entire image is affected instead of just pathological zones (as with block-based codecs), it's easier to ignore. though because of the loss of detail, i really don't prefer one over the other. my preferred playback method is on my pc with a tweaked filter chain in ffdshow - moderately heavy deblocking and deringing, a high dose of uniform noise overlaid, some desaturation and some sharpening. it makes the video a mess from a clarity standpoint, but it stops my brain from tripping over itself to find flaws. it's a sad existence, i know, and my recent reformatting took the filter settings with it.

mathias wrote:

If most of the compression damage is done by the TV signal retailer, maybe the artifacts would be reduced if you could take the same channel from two sources and blend them.

the overall severity of artifacts would be lower, but the actual number of them would increase, as you'd have the sum of artifacts from both images minus the intersection of artifacts common to both. it'd be best to take the per-pixel median of three (five, seven, ...) versions of each frame, but that would be a bit of a bother.
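the median idea in a couple of lines of numpy (toy 4x4 frames with planted artifacts; real decoder output would obviously be full frames, aligned first):

```python
import numpy as np

def median_blend(frames):
    """per-pixel median across several decodes of the same frame;
    an artifact present in only a minority of sources gets voted out."""
    return np.median(np.stack(frames), axis=0)

clean = np.full((4, 4), 128.0)
f1, f2, f3 = clean.copy(), clean.copy(), clean.copy()
f1[0, 0] = 255.0   # blocking artifact in source 1 only
f2[3, 3] = 0.0     # different artifact in source 2 only
f3[1, 1] = 200.0   # and another in source 3
blended = median_blend([f1, f2, f3])
print(np.array_equal(blended, clean))  # True: all three artifacts removed
```

note that a plain average (the two-source blend) would leave a ghost of every artifact, while the three-way median removes any artifact that appears in only one source.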

The big difference between vinyl and CD/DVD-A/SACD is that vinyl is analogue and the others are digital. Digital is good at suppressing noise elements (scratches), but is limited to a particular level of fidelity. It's also very dependent on the quality of the A->D and D->A converters. Analogue can, in theory, offer unconstrained fidelity, but is sensitive to noise elements (scratches, dust and, if you insist, static!). I can't stand to listen to vinyl any more, because I can't help hearing the scratches and pops, no matter how carefully I clean the records.
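For a sense of what "a particular level of fidelity" means numerically, here is a small Python sketch that measures the signal-to-noise ratio of ideal quantization on a full-scale sine; theory puts it near 6.02 x bits + 1.76 dB (about 98 dB for 16-bit CD audio). The tone choice and sample count are arbitrary.

```python
import math

def quant_snr_db(bits, n=100_000):
    """Measured SNR of an ideal mid-tread quantizer applied to a
    full-scale sine; theory predicts about 6.02*bits + 1.76 dB."""
    scale = 2 ** (bits - 1)
    sig = noise = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 997 * i / n)  # 997 cycles in n samples
        q = round(x * scale) / scale             # quantize to 'bits' bits
        sig += x * x
        noise += (x - q) ** 2
    return 10 * math.log10(sig / noise)

print(round(quant_snr_db(16), 1))  # near the ~98 dB limit of CD audio
print(round(quant_snr_db(8), 1))   # near ~50 dB
```

So "limited fidelity" here means a fixed, known noise floor - which, unlike vinyl's, never accumulates scratches.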

My personal attitude is that I want equipment that does a decent job of reproducing the sound, but I'm willing to stop at a reasonable level. I have a couple of systems in different rooms, both using a top-of-the-line integrated amp (oh, yes, the horror of having the power amps in the same case as the pre-amp!). For example, my home theatre uses a Denon AVC-A1SE (US equivalent is Denon 5800, I think) - I don't need anything more expensive. I have listened to a variety of equipment costing many times as much, and I simply cannot hear any difference, so there's no point buying it.

I disagree with everything you say, but I will defend to the death my right to say it.

Green Shoes wrote:

And to everyone else, there's been plenty of Thermaltake bashing here recently. What about SilenX.....

Why do you say that? Most posts I've seen are valid criticism, even if the complaint is because it's too, er, phallic...

But on the original topic, PositiveSpin has the perfect system in that it's the best for him, i.e. other equipment is on the same level to his ears. Sometimes I wonder if audiophiles are chasing what they think they want rather than what they actually want!
