
PerformanceEng wonders: "I work as an engineer for a large technology company in the U.S., and have been privy to what I find an interesting practice. It's well known that marketing data sheets often paint the best picture of a product while leaving the devil in the details. I've come to expect this, and when I am evaluating technology, I always have a skeptic's eye for claims made by the sales and marketing folks.
However, I've also witnessed our product go into test labs (usually for the purposes of running a series of tests for a 'bake off' in a trade publication). Not uncommon is the attempt to 'tune' the configuration of the device under test to perform in the best light (not unlike tuning your car to pass emissions tests). I have seen it go as far as exploiting weaknesses in the test that, if the test operator discovered, would be considered bad faith. To the other engineers: Are you aware of this kind of practice at your company? To the IT professionals: How much faith do you put in these sorts of publications and their 'bake offs'? To everyone: When does spin doctoring cross the line and become false advertising?"

No, he's referring to ALL video card vendors. I have worked for 2 different such companies, and I did this to drivers to make sure they passed WHQL, or tricked a benchmark or whatever.

The tone of the article almost has an edge of "I can't believe we do this in our industry, I feel so dirty!" to it. The poster of the story is obviously some kind of new college hire, or hasn't been in the industry for very long. All vendors do this, all the time. It's just the way it is.

Wait, you're responsible for getting unstable drivers through WHQL? I'll kill you. I've had it up to here with unstable drivers. I lost a RAID set to unstable drivers that passed WHQL.

I mean, cheating on benchmarks is one thing (the card is just slower than it benchmarks), but WHQL is supposed to be a stopgap measure: sure, it's Windows, we hate Windows, whatever. But where I work, we use it. And WHQL drivers are something that you're supposed to be able to lean on; they're drivers that may not be the latest and greatest, but they will work.

I can't tell you how long it took to track down that the WHQL-certified RAID drivers were the problem. It's something you're supposed to be able to put a little checkmark next to when diagnosing problems, an "it can't be that!"

Another one: just look at the old Dhrystone benchmark [wikipedia.org] and all of the over-the-top "optimizations" that were used to get better compiler/processor results. The SPEC organization [spec.org], created in a direct attempt to deal with this very kind of problem, still must update its benchmarks regularly in order to deal with loopholes (and changing technology in general). A good example was when a particular benchmark (matrix300 [spec.org]; ref is 2/3 the way down) was defeated by compiler optimizations and had to be retired.
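The general pattern behind these "optimizations" is easy to sketch: a benchmark with a fixed, publicly known workload can be special-cased by the thing being measured. A toy Python illustration (all names here are hypothetical; the real cheats lived in compilers and drivers, not application code):

```python
import time

# The "well-known" benchmark workload, published and never changed.
BENCH_INPUT = list(range(1000))

def honest_sum_of_squares(xs):
    # Does the real work every time.
    return sum(x * x for x in xs)

# A vendor can precompute the answer for the fixed workload once...
_CACHED = honest_sum_of_squares(BENCH_INPUT)

def cheating_sum_of_squares(xs):
    # ...and then recognize the benchmark's canonical input at run time,
    # skipping the work while returning the same (correct) answer.
    if xs == BENCH_INPUT:
        return _CACHED
    return honest_sum_of_squares(xs)

def time_it(fn, xs, reps=200):
    # Crude self-timing harness, as a 90s benchmark would have used.
    start = time.perf_counter()
    for _ in range(reps):
        fn(xs)
    return time.perf_counter() - start
```

The cheat is undetectable by checking outputs on the benchmark itself, which is essentially why SPEC has to rotate workloads: once a workload is known, recognizing it is cheaper than running it.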

In the mid-1990s my company did quite a bit of graphics card testing (still does, but it was much higher profile back then). It was pretty routine for us to get baked drivers (and there were some very impressive cheats). Less routine, but still common, was to get a board with a BIOS cheat, which would do anything from altering its own board timings to be out of spec (sort of "overclocked out of the box") to running code that would adjust the PC's heartbeat interrupt to slow the clock ticks, making the board appear faster if benchmarked using the PC's own clock.
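The clock-tick cheat is worth sketching abstractly: if a benchmark times itself with the machine's own tick counter, slowing that counter makes identical work appear faster. A simulated illustration (the classes are hypothetical; a real cheat would reprogram the timer interrupt hardware):

```python
class SystemClock:
    """Stand-in for the PC's heartbeat-interrupt tick counter."""
    def __init__(self, scale=1.0):
        self.ticks = 0.0
        self.scale = scale          # a BIOS cheat could quietly set this < 1.0

    def advance(self, real_seconds):
        # Each real second only adds `scale` seconds worth of ticks.
        self.ticks += real_seconds * self.scale

def self_timed_benchmark(clock, real_work_seconds):
    # The benchmark measures itself using the (possibly tampered) clock.
    start = clock.ticks
    clock.advance(real_work_seconds)   # the work actually takes this long...
    return clock.ticks - start         # ...but this is the number reported

honest = self_timed_benchmark(SystemClock(scale=1.0), 10.0)
cheated = self_timed_benchmark(SystemClock(scale=0.8), 10.0)
# Same real work, but the tampered clock reports 8.0 instead of 10.0,
# a 25% apparent "speedup" from doing nothing faster.
```

This is why a careful lab times runs against an external reference clock (even a stopwatch) rather than trusting the device under test to time itself.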

In the end, the best solution we came up with -- because we worked with a lot of alpha/beta silicon, since we tracked chips more than boards -- was to more or less formalize the cheats and what was/wasn't permitted, and also to give the companies that submitted alpha/beta hardware the option to pull their results before publication. That way, if one company pulled a fast one, the others that would look bad in comparison simply wouldn't be compared; this resulted in a sort of stalemate of cheating.

The most extreme (but permitted) cheat I ever encountered had the company involved paying over $100,000 to have a custom graphics driver written overnight that incorporated an optimized version of parts of the DirectX rendering engine (this was ~ DX5 era). When they found out their primary competitors pulled their boards from testing, you can imagine they were less than pleased.

The point of all this: a competent testing lab, particularly one that's part of a magazine "shootout," should be well aware that cheating is taking place, and be prepared to identify major cheats. Back in the heyday of PC Magazine in the mid-90s, their benchmark people were top notch, and their benchmarks ran a considerable number of cheat-detection tests to weed out the more bogus attempts.

Oh, and you can be pretty assured your competitors are doing the same things you are.

The chip supplier in question was IIT (Integrated Information Technologies); they later dropped out of the graphics (and math co-processor) business and retargeted themselves at video, becoming 8X8. Later still, they evolved from videophone-oriented video into VoIP.

The 2D ZD WinBench had a string ("The quick brown fox...") that it rendered in different colors and sizes using the Windows GDI, and the IIT BIOS embedded it. I believe the parts were still ISA-based, so they embedded the string in a ROM on the card.

"I've also witnessed our product go into test labs (usually for the purposes of running a series of tests for a 'bake off' in a trade publication). Not uncommon is the attempt to 'tune' the configuration of the device under test to perform in the best light (not unlike tuning your car to pass emissions tests). I have seen it go as far as exploiting weaknesses in the test that, if the test operator discovered, would be considered bad faith."

Oh, you work for Intel then. :-) Seriously though, this has been the whole problem with "benchmarks" like SPEC and others: they ultimately result in pissing matches between manufacturers saying "my product is faster than yours," which for 99% of the users out there means nothing. In fact, even for the 1% of us where it does make a difference, specific optimizations to one's own code or algorithms will typically get you more performance. So, what it really comes down to is how productive the combination of product, environment, and task you are assigning to the platform actually is.

To answer your question of false advertising, I would say keep to the standard that most of us scientists do: Specifically, peer review and ensure that your results can be duplicated by said peers. If results cannot be duplicated, then it is false advertising.

"To answer your question of false advertising, I would say keep to the standard that most of us scientists do: Specifically, peer review and ensure that your results can be duplicated by said peers. If results cannot be duplicated, then it is false advertising."

Even science has a problem of touting the best data and "leaving the devil in the details." Research is driven by money just as much as industry. If you're not producing good results, you won't get funding.

"Even science has a problem of touting the best data and 'leaving the devil in the details.' Research is driven by money just as much as industry. If you're not producing good results, you won't get funding."

And if you are caught falsifying data, then you will never get funding again. At least from traditional sources this is true, and you will have major problems finding a position in academia. There have been a few cases where folks even spent time in jail for scientific fraud. On the whole, most scientists...

"Even science has a problem of touting the best data and 'leaving the devil in the details.' Research is driven by money just as much as industry. If you're not producing good results, you won't get funding."

If you do produce results that are consistently not reproducible by your peers, then you'll quickly no longer have a career, much less funding. It may not happen right away, but it will happen in reasonable course, especially the more impressive your claimed results are and the clearer it becomes that...

It's also good to run multiple tests that are sufficiently dissimilar that an optimization in one won't (necessarily) confer any benefit to the others. This is one reason a lot of trade mags in the 90s ran a great many tests on PCs (I've seen up to 20 benchmarks for each model reviewed) - it was the only way to get round the tweaks designed to create utterly false data from the "standard" Dhrystone and Whetstone tests and the meaningless MIPS count.

I work in developing web applications. When choosing server technologies I have learned to conduct my own bake off using the application that will be deployed. Each application is unique. Comparing your custom app to a published bake off is usually an apples to oranges comparison.

You're comparing how products perform under a specific test that you have devised (which, ideally, is similar to your production environment).

Tuning can make a dramatic difference in performance, and unless you're familiar with all of the products involved, it's impossible to get the best performance out of each one.
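A do-it-yourself bake-off along these lines doesn't need much machinery: time each candidate against your own workload, and take the best of several runs to damp background noise. A minimal Python sketch (the candidate functions are hypothetical stand-ins for whatever you're actually evaluating):

```python
import timeit

def candidate_a(payload):
    # Stand-in for option A (e.g. one server stack / library / query plan).
    return sorted(payload)

def candidate_b(payload):
    # Stand-in for option B doing the same job a different way.
    out = list(payload)
    out.sort()
    return out

def bake_off(candidates, payload, reps=50):
    """Time each candidate on YOUR workload; best-of-3 guards against noise."""
    results = {}
    for name, fn in candidates.items():
        results[name] = min(timeit.repeat(lambda: fn(payload),
                                          number=reps, repeat=3))
    return results

workload = list(range(5000, 0, -1))   # stand-in for your real data
scores = bake_off({"a": candidate_a, "b": candidate_b}, workload)
```

The important part isn't the harness, it's the `workload`: a published bake-off used someone else's payload, and that is exactly what makes it apples-to-oranges for your application.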

The original poster is talking about cases where one of the systems has been modified so it is not a default install, and specifically customized before being sent to the tester, so that it will perform better (like ATI's Quake 'optimization' [tech-report.com]).

As another example, there were some folks trying to get higher rankings in SETI@home [zdnet.com.au] who would return bogus results, as that was faster than actually performing the calculations. If someone knows that the results won't (or can't) be checked for accuracy, only for time, they can boost their rankings dramatically.
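The standard countermeasure, which distributed-computing projects adopted, is redundancy: send the same work unit to several independent clients and accept an answer only when a quorum of them agree. A simplified sketch (the real validation logic is considerably more involved):

```python
from collections import Counter

def accept_result(replies, quorum=2):
    """Accept the majority answer for one work unit, or None if no quorum.

    `replies` is the list of answers returned by independent clients
    that were all sent the same work unit.
    """
    if not replies:
        return None
    answer, votes = Counter(replies).most_common(1)[0]
    return answer if votes >= quorum else None

# An honest majority outvotes one ranking-chaser returning junk:
#   accept_result([42, 42, 99]) -> 42
# With no quorum the work unit is re-issued rather than trusted:
#   accept_result([1, 2, 3]) -> None
```

The cost is obvious: every work unit is computed at least twice, which is the price of not trusting self-reported results.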

Consumer Reports [consumerreports.org] is such a respected publication because they have strict standards for the products they test. They don't accept items from the product makers; they go out into the marketplace and buy their test subjects, using cash whenever possible. (Up until a while ago they even bought cars with cash, until they realized that car dealers had begun recognizing them as the only people who paid cash for cars, and the IRS requirement of reporting large cash transactions got in their way too.) As a result, their tests are immune to any tweaking...

It'd be nice if the tech publications could afford to do this, because at times they start to resemble the video game websites set up by kids who do it only to get prerelease copies of games for free under the guise of reviewing them. Such kids always have to write glowing reviews of everything they get because as soon as they post a negative review their stream of free stuff grinds to a halt.

Bottom line is that there's a foolproof way of preventing tampering in any review, but it costs money. Any review that involves accepting free stuff compromises the integrity from the start.

I've noticed Consumer Reports has given glowing reviews to Sears products (Craftsman and Kenmore) in the past, and I have purchased some of them based on a CR recommendation, only to be very disappointed in their performance. A vacuum cleaner, drill, refrigerator and leaf blower were all highly recommended, and then gave me nothing but problems until a short time later, when I replaced them all with products of a different brand. Perhaps Sears somehow sponsors or supports CR? Either that or they just have no realistic way of testing reliability over a period of time.

"Perhaps Sears somehow sponsors or supports CR? Either that or they just have no realistic way of testing reliability over a period of time."

You can't look at the top rated model and decide that it is the best one long term. The ratings in a CR review represent how the products performed during the test. The ratings do not necessarily represent the best products.

Nearly every CR review has another section that details the reliability of the brands represented in the test over the period of time that they've been...

Hey! You mean to tell me that someone else is getting a whole bunch of freebie games by putting up a fansite? So, if I created a template that looked fannish, and used a random name generator to dynamically create all the possible future game names, I may be able to build a decent collection before somebody notices...

Just don't tell anyone else about this way of sponging off games companies, ok?

Just because CR is unbiased doesn't mean that their tests aren't subject to the type of "tweaking" that the original poster describes. If the methodology of the testing that CR uses is known by the manufacturers, then they can design their products to do well at the test. Hopefully this would have the effect of being an indicator of the overall quality of the product, but as we know, this isn't always the case.

As a hypothetical, let's say that CR judges crash-worthiness of a car using a 35 mph head-on collision test. Car manufacturers who know this are going to optimize the structural integrity of the car to hold up well under this test, at the expense of other types of crashes (side-impact crashes, say). Another car may not perform as well in the head-on test, but it may be safer over an entire universe of possible crashes. However, because it is not optimized for the CR crash test, it won't get as high a rating.

Lest you think I am pulling this out of my butt, this situation actually occurred with respect to the Insurance Institute for Highway Safety. Up until a few years ago, cars were generally crash-tested using the head-on methodology. However, the IIHS decided to start using an offset crash methodology, since it was more likely to occur in real life. They found the results from the offset crashes did not necessarily match the results from the head-on crashes: cars that did well in the head-on tests did not do as well in the offset crash tests. Obviously manufacturers had optimized crash-worthiness for the test and not for overall safety.

So where does the blame lie? I would say it lies both with the testers and the manufacturers. The testers are to blame for coming up with a test that doesn't necessarily reflect real life. Meanwhile car makers are to blame for designing products to "beat the test" rather than to be safe overall.

I think the same is true in the case of the original poster. His company isn't doing anything illegal; if the tests can be beaten so easily, then what good are they? In fact, one could argue that his company is helping in the sense that they are revealing the test's shortcoming. However, I find it hard to believe that their underlying motives are altruistic. I would guess that their motivation for tweaking their system is to beat the test for their own gain, and not for some higher moral purpose. So in a sense they are violating the spirit of the competition, in my opinion, even if what they are doing isn't wrong in the legal sense.

Problem is, CR's testing procedures often are god-awful. Case in point: CR's almost incomprehensible ignorance of how computers work, resulting in an anti-Mac bias that borders on the laughable. This most famously reared its ugly head when they ranked the brand-new PowerPC computers as very, very slow, not realizing they had put the wrong (68040-only) software on them. In a recent issue they ranked the Macs at the bottom of the heap, even though they played the whole thing up with teaser ad copy about how Macs are the only virus-free, zombie-free, difficult-to-hack boxes they test. It appears that CR only looks at hardware (and that only from a PC perspective) and does not consider software at all.

Another good one: CR downgraded the Protege5 wagon, despite it having as good or better gas mileage, much better reliability, and MUCH better handling and braking (a sport suspension). Oh, and it was cheaper too, and unquestionably better looking. Why didn't CR like it? Solely because its competition (PT Cruiser / Vibe / Matrix / Impreza) ranked higher and had a cushier ride, like an SUV. So while the rest of the car trade ranked the P5 at the top, CR complained that it didn't feel enough like a Suburban.

On the other hand, you were able to clearly and precisely describe why you disagree with their evaluations.

Based entirely on your comments, I would suggest that is the true strength of Consumer Reports' reviews--you have not just a ranking, but also a detailed explanation of how that ranking was arrived at.

The people who buy based only on a final arbitrary score or ranking are just as screwed as the people who choose a CPU based solely on its clock speed, or an audio amplifier based solely on its output power. Sure, such people exist, but there's useful content in CR for those who are willing to look.

One hopes that people willing to plunk down the cash for a copy of CR are also willing to spend a small amount of time reading the whole article before they buy a twenty thousand dollar vehicle....

You don't have to be Consumer Reports to approach purchasing a product in a responsible manner.

I have worked for a number of years in various roles in the mainframe IT industry, and have repeatedly observed (from both sides of the customer/vendor fence) that the best-prepared consumers take the vendor's claims with a grain of salt and ALWAYS do their own independent benchmarking to see how the product works in their own application environment.

This is true. They don't perform their tests under laboratory conditions, although they do everything possible to make quantitative measurements. They test under the same biases and prejudices used by most consumers. The theory is that they're far more interested in how a product performs as people will actually use it than in how the manufacturer would like to see it used.

Until he retired, my uncle was head of their paint testing laboratory, and this is exactly what he did. He would, for example, test a paint's opacity by applying a coat directly to an unprimed test pattern. He used to drive the paint companies nuts -- but when he said a paint would cover in a single coat, that's exactly what a consumer could expect.

As a person who worked in the advertising business side, I can say wholeheartedly that truth in advertising is a complete misnomer. The whole concept of advertising rejects the idea of truth. I don't sound bitter do I?

You mean Code Red isn't a sports drink for advanced athletes? That I shouldn't be on a dozen prescription drugs? That my children aren't better taught by a talking book? That school loans aren't the source of happiness for all successful students? That cross-over SUVs aren't station wagons? That my computer doesn't make the Internet go faster?...I don't know the meaning of my life anymore...:(

This is the golden age for ads. They're everywhere: every webpage, above the urinal. People aren't very skeptical and have disposable incomes, the art of creating a working fad/meme is getting perfected, celebrities are manufactured from scratch, etc. And this is what people want.

The problem is two-fold: people, in general, need to take a good look at their consumerism, and corporations need controls on what they can and can't say. I'd like to see informative ads telling me cost, MPG, etc., but a typical car ad is all mom, America, and apple pie.

"Advertising usually affects the reptilian part of brains, preying on our patriotism (truck ads)..."

This reminds me of something I saw the other day. There was a big truck (a relatively new F350) stopped at a light in front of me. The bed was empty except for a full-size American flag tied to a broomstick, which was attached to the rear corner of the truck.

It's hard to know how long that flag had been there, but it was in horrible shape. It was dirty and wet, and the leading edge had been torn to shreds...

"By the way if anyone here is in advertising or marketing... kill yourself. No, no, no it's just a little thought. I'm just trying to plant seeds. Maybe one day, they'll take root - I don't know. You try, you do what you can. Kill yourself.

Seriously though, if you are, do. Aaah, no really, there's no rationalisation for what you do and you are Satan's little helpers, Okay - kill yourself - seriously. You are the ruiner of all things good, seriously. No this is not a joke, you're going, "there's going to be a joke coming," there's no fucking joke coming. You are Satan's spawn filling the world with bile and garbage. You are fucked and you are fucking us. Kill yourself. It's the only way to save your fucking soul, kill yourself. Planting seeds.

I know all the marketing people are going, "he's doing a joke... there's no joke here whatsoever. Suck a tail-pipe, fucking hang yourself, borrow a gun from a Yank friend - I don't care how you do it. Rid the world of your evil fucking machinations. I know what all the marketing people are thinking right now too, "Oh, you know what Bill's doing, he's going for that anti-marketing dollar. That's a good market, he's very smart." Oh man, I am not doing that. You fucking evil scumbags! "Ooh, you know what Bill's doing now, he's going for the righteous indignation dollar. That's a big dollar. A lot of people are feeling that indignation. We've done research - huge market. He's doing a good thing." Godammit, I'm not doing that, you scum-bags!

Quit putting a godamm dollar sign on every fucking thing on this planet!

"Ooh, the anger dollar. Huge. Huge in times of recession. Giant market, Bill's very bright to do that." God, I'm just caught in a fucking web! "Ooh the trapped dollar, big dollar, huge dollar. Good market - look at our research. We see that many people feel trapped. If we play to that and then separate them into the trapped dollar..." How do you live like that? And I bet you sleep like fucking babies at night, don't you?"

This isn't just a phenomenon in the IT arena. Have a look at medical journals some time... You have to be VERY careful when putting stock in the findings of studies -- the first thing to check is who *funded* the study.

I think it's just a fact of life: everybody wants their product to be seen in the best light, and to sell well (in the case of commodities or services).

That's why Amazon.com has reader reviews, sites like epinions.com exist, and Slashdot has moderator points. It's also why there are hardware review sites -- we can't just trust the manufacturer's PR now, can we?

So, people may be inherently biased and often untruthful, but with proper monitoring (read: community involvement), the truth will out.

Totally agree, and it even extends to scientific journals, not just medical ones. I've seen it happen: 'ignoring' data that doesn't back up your hypothesis, keeping only the 'good' results for statistical analysis, choosing a very specific concentration in a dose-response analysis because it's the only one where it's 'working'... It's sad, but not generalized. I guess wherever humans are involved and under pressure (for money, or for papers in the case of science), cheating WILL happen, no matter what k...

... but naive. Come on, what were you smoking? Of course the benchmarks/testing/what have you will be done in such a way as to put the product being sold in the best possible position. Your question is naive. Even us scientists do this when providing paper plans for our bosses. We paint the best possible picture, do serious window-dressing, and interpret our results in the most optimistic manner compatible with science. If you think that an advertising campaign will feature objective (if such a thing exists)...

This revolutionary power cable dramatically improves all sonic parameters - there is much greater resolution, image focus, dynamics, weight and impact, along with a much quieter background. CD playback is most improved, due to the particular sensitivity of digital timing jitter to noise on AC power.

It sounds too insane to be true. I almost dismissed the entire site as being an elaborate hoax, but searching for "magnan cable"...

Just do a Google search for cable burn-in. You'll be surprised at the hogwash you'll find. Supposedly, audio cables won't sound "proper" until they've been "burned in."

It may be true that there is a capacitive charge on the cable (due to the inefficiencies of the dielectric, but that's beyond the scope of my ability to explain), and the degree of the charge may affect the sound quality. I'll agree that this is possible. BUT -- and this is a big but, at least as big as Roseanne Barr's -- this charge will vary...

From over-unity speakers (200 watts output from a 10W wall-wart) to "better-sounding" fiber optic cable, no claim seems too outrageous or fraudulent for a great many consumer audio manufacturers.

I like Dan's Data's various takes on Monster Cable [dansdata.com] myself. I have to admit that my ex-wife worked for one of their distributors many years ago and we got it really cheap. Those thick cables seemed to make the imported German Quadral speakers sound better.

Even if someone came up with a pair of headphones that had an S/PDIF or AES/EBU interface, it would still have to have a DAC and an analog transducer, because my ears are not digital nor will they ever be.

Actually, you could theoretically make actual digital headphones if you could get a solenoid to move back and forth at a few GHz (for decent fidelity). Then you'd probably need to place some sort of acoustic low-pass filter between the transducer and your ears, but it is possible.

Some years ago, when I was working for a certain SCSI RAID HBA company (that shall be referred to as company M), we were shocked to find out a certain OTHER company (that shall be referred to as company D) was advertising that their SCSI RAID HBA outperformed ours by a substantial margin, in LARGE BOLD letters.

When we took a closer look at the disclosure (in fine print), it stated: Company M HBA tested in single-threaded mode (READ: Tag Queuing Disabled).

So you see, misleading advertising may not be good for the product (it may even end up backfiring on it), but it is certainly good for the advertising business. That is what counts for advertisers, after all.

But the fancy numbers aren't for me, they're for PHBs that like to see lots of impressive numbers -- after all, the other product has them so if this one doesn't...

Looking at computer specs lately I'm beginning to think the principal point of them is to bulk out the specs -- make it look like it has lots of features, and the actual content of the specs is irrelevant.

I don't trust testimonials, that most popular form of advertising. When a piece of equipment is reviewed, I judge the review by its source, since perhaps as a tech I'm a bit happier with a "clumsy UI" than with the sheer abilities of, say, the hardware.

So I look to Tom's, [H], and Ars for reviews by people who seem to have knowledge similar to my own. Then, when tests are performed, I don't trust just one benchmark, nor just one test or review.

Sun Microsystems (SUNW) shrugged off accusations today of unfairly reporting test scores for the beta version of one of its Java compilers.

Pendragon Software yesterday said that Sun, using Pendragon's CaffeineMark benchmarking tool, inaccurately inflated the test results of the Solaris 2.6 just-in-time Java compiler by optimizing the compiler specifically for that test. Solaris is Sun's version of the Unix operating system.

Sun responded by calling such optimization standard practice.

"The idea is that you want people to optimize for the benchmark," said Brian Croll, director of marketing for Sun's Solaris products. "We'll do everything in our power to do really well on all the benchmarks we get our hands on."

A benchmark is a battery of tests that gauges the speed and performance of software running in various configurations. Several developers have created Java benchmarks; CaffeineMark, which Croll called "the best benchmark we've got," is available free off the Web.

But how much optimization is fair play? Pendragon president Ivan Phillips contended Sun inflated the test results of the Solaris 2.6 just-in-time compiler by lifting code from CaffeineMark and inserting it into the compiler.

"The logic test is contained in the 'logicatom.class' file, and almost 50 percent of that file appeared in the compiler," he said. "The probability that this code made its way there accidentally is infinitesimal."
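A crude version of the kind of check Phillips describes, measuring how much of one binary appears inside another, can be done with shared byte n-grams. This is a hedged sketch, not Pendragon's actual method:

```python
def ngram_overlap(needle: bytes, haystack: bytes, n: int = 8) -> float:
    """Fraction of `needle`'s n-byte substrings that also occur in `haystack`.

    A suspiciously high fraction (like the ~50% Phillips reports) suggests
    one file embeds chunks of the other; unrelated binaries score near 0.
    """
    if len(needle) < n:
        return 0.0
    grams = {needle[i:i + n] for i in range(len(needle) - n + 1)}
    hay = {haystack[i:i + n] for i in range(len(haystack) - n + 1)}
    return len(grams & hay) / len(grams)

# Toy example: the second blob embeds the first wholesale.
a = b"the quick brown fox jumps over the lazy dog"
b_ = b"prefix " + a + b" suffix"
# ngram_overlap(a, b_) is 1.0; against unrelated bytes it drops to 0.0.
```

Real class files share boilerplate (constant-pool structure, common bytecode idioms), so a careful analysis would mask out those regions first; the point is only that such overlap is measurable, which is why Phillips could call an accidental match "infinitesimal."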

Reusing such a large chunk of specific code risks diverting too much of the compiler's resources, resulting in lower performance once the compiler is deployed in the real world, Phillips added.

Croll denied that Sun used CaffeineMark code but said the company "optimized around it." It will be difficult to determine who is correct, given that the beta compiler in question is no longer available. Croll stressed that the compiler is designed to perform well on a benchmark because that's what determines good real-world performance.

"If certain things happen frequently in a benchmark, you want to make sure you handle them well," he said. "If it turns out the benchmark doesn't truly represent true application performance, you need to evolve the benchmark."

The charges come at a time when Sun and Microsoft are entangled in tit-for-tat lawsuits over Microsoft's use of Java in its Internet Explorer 4.0 browser.

In an October 20 press release, Sun bragged that Solaris had the "world's fastest Java performance" and ran Java applications 50 percent faster than rival operating system Windows NT. After taking issue with Sun's test results, Pendragon said it asked Sun to retract its claims and remove the compiler from its Web site.

Sun removed the entire JDK 1.1.4 for Solaris on October 29 because the beta evaluation period ended, according to Croll. The company didn't take down the press release or rescind its claims, however, and Phillips responded yesterday by publishing his accusations.

Pendragon doesn't usually double-check testers' CaffeineMark scores. But when it saw Sun's results--the Solaris compiler hit a score of 1.4 million on the "logic" test, while the previous high for that test was 22,000--the software firm decided to investigate, fearing that CaffeineMark contained a bug.

If Sun indeed took deliberate steps to skew its results, Phillips was surprised at the lack of subtlety.

"If a company really wanted to conceal what they were doing, they could do a better job," he said.

Developer Quote Of The Week: "What we do is, given a benchmark, we try to do as well as we can on it, and make sure that our system is the fastest benchmark -- I mean, fastest system -- in the world."
-- Brian Croll, Sun Microsystems' director of marketing for Solaris

Two weeks ago, Sun Microsystems got caught with its hand in the benchmarking cookie jar. Or did it? Depending on your point of view, Sun either grossly misrepresented the performance of its Solaris Java just-in-time compiler...

Tweaking Java test?: Sun Microsystems has been accused of manipulating Java benchmark software and using the results to state that its Solaris "runs Java applications 50 percent faster than Windows NT." Pendragon Software, maker of the benchmark software CaffeineMark, has put out a press release that claims Sun found a way to cheat on the benchmark tests, and then advertised the bogus scores. Sun has since removed the Java compiler from its download page, Pendragon says, but the original press release remains up...

"To everyone: When does spin doctoring cross the line and become false advertising?"

Instantly. There is no gray area between honesty and dishonesty. You either tell the truth, or you tell a lie. Your company either attempts to subvert tests [i.e., lies], or it doesn't [i.e., does not attempt to lie]. No ambiguity exists in this case.

Your question reminds me of a question posed on the cover of a national "news" magazine in the wake of revelations that the New York Times had published falsified news reports.

...for about 5 years in the mid to late 90's. I started doing the testing on basic network equipment and graduated over time to oversee the testing methodology for every product comparison we ran.

I can tell you that, if the testers themselves are competent, it's a moot point. For instance, when testing server hardware by using a database application, I always insisted that the databases be identical and configured as identically as possible. Normal stumbling blocks were issues with stock disk sizes, but we always ensured that RAID configurations were as similar as possible within the realm of reason.

Testing is an art form. It requires a thorough and repeatable plan as well as a good bit of knowledge about real-world usage of equipment and software (would it be realistic to enable a non-battery-backed write cache on a RAID controller in a database application?).

I can say that many, many vendors attempted to put one over on us. And it's entirely possible that I missed some of them, and they benefitted because of it. However, in general, professional test procedures should expose and nullify any sort of vendor tweakage of equipment or software.

Key principles for good testing:

- Set any basic configuration to the manufacturer's public recommendations

- Don't let vendor representatives touch anything. If they need to send someone into the lab, allow them to recommend changes, and document all of those for later review / revocation

- If third party hardware/software is involved in a test, use the third party as a sounding board. If you're testing a layer 3 switch using streaming media, talk to the streaming media provider about realistic stream rates and usage patterns.

- If at all possible, wipe and reload vendor equipment and software. You should be looking at the setup process anyway, so that helps the test as well as helping to prevent shenanigans.

In short, good test procedures prevent, or at least mitigate, the kind of abuse in question. And, as consumers of reviews and tests, it's in all of our best interests to get educated and develop opinions about the competence, thoroughness, and honesty of any source.
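The "configured as identically as possible" discipline described above can be partially automated. A minimal sketch, with made-up setting names, of a parity audit that flags any configuration drift between test machines before a run:

```python
# Sketch of one "parity audit" step from the testing principles above:
# diff two devices' configurations and report anything that differs,
# so a vendor tweak can't slip in unnoticed. Key names are illustrative.
def config_diff(config_a, config_b):
    """Return {key: (a_value, b_value)} for every setting that differs."""
    keys = set(config_a) | set(config_b)
    return {
        k: (config_a.get(k), config_b.get(k))
        for k in keys
        if config_a.get(k) != config_b.get(k)
    }

baseline = {"raid_level": 5, "write_cache": "battery-backed", "stripe_kb": 64}
vendor_box = {"raid_level": 5, "write_cache": "unsafe-writeback", "stripe_kb": 64}

for key, (a, b) in config_diff(baseline, vendor_box).items():
    print(f"{key}: baseline={a!r} vendor={b!r}")
```

The example drift shown, an unsafe write cache quietly enabled on one box, is exactly the sort of tweak that inflates database numbers while being unrealistic in production.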

... when I worked for the US subsidiary of a German-owned plumbing fixture manufacturer, we had to have all faucets certified for lead contamination (leaching from the solder and brass compounds). As it turned out, a lot of what we were already selling in the US market would not come close to passing. The Fatherland offered to send faucets that were guaranteed to pass. All we had to do was tell them what levels they needed to meet for a particular model (it has a lot to do with the length of the flow chamber).

To the IT professionals: How much faith do you put in these sorts of publications and their 'bake offs'?

Absolutely none, I rely solely on product packaging.

Seriously though, I hold the belief that all sales and marketing folk are born liars and will never change. I purchase solely on word of mouth (from people I trust) and my past experience with a particular brand/manufacturer. I am the person advertisers hate, because I sit in front of the TV and explain to my wife exactly which mind fucts the advertiser is utilizing. Sales and Marketing (S&M, how ironic) folk are beneath lawyers, politicians and criminals in my book.

I'm with you. The "bake offs" are all BS. Actually, all marketing is just shoveling crapola.

When I was at NMSU for a special program, it was run through the business school. I got injured and couldn't get what I needed to finish, so I thought I'd just take the business degree. It was marketing. I left school 15 hours short of my Bachelor's because I wasn't qualified to get a marketing degree: I have a conscience!

You must be new.
I've been working in high-tech for about 20 years now for various companies, and I would not want my products to be evaluated on a level playing field. I will put in any tweak necessary to win a comparison.
This is not kindergarten. Fair is nice, but I know my competitors are doing the same thing, and the old college try does not pay a very good Christmas bonus.

In the pay-per-click world of Google AdWords (those text ads you see when you search), I advertise a free service. But since this free service is bundled with other non-free services, I put the prices in the ad itself.
So although they may be looking for something free, I don't pay for the click unless they know they're going to pay *something*, the visitor is better informed, and I get a higher conversion rate from the qualified traffic. So although this may not be exactly on your topic, I submit that honesty in advertising works, especially when you pay for performance.
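The economics behind putting the price in the ad can be sketched with one division: the effective cost of a sale is the click price divided by the conversion rate. All numbers below are illustrative, not from the post:

```python
# Back-of-the-envelope for the AdWords point above: showing the price in
# the ad filters out freebie-seekers, so fewer clicks arrive but they
# convert at a much higher rate, and the effective cost per sale drops.
def cost_per_sale(cost_per_click, conversion_rate):
    """Effective cost of acquiring one paying customer."""
    return cost_per_click / conversion_rate

vague_ad = cost_per_sale(0.50, 0.01)   # broad clicks, 1% convert
priced_ad = cost_per_sale(0.50, 0.05)  # qualified clicks, 5% convert

print(f"vague ad:  ${vague_ad:.2f} per sale")   # $50.00
print(f"priced ad: ${priced_ad:.2f} per sale")  # $10.00
```

Under these made-up rates, honest pricing cuts the acquisition cost fivefold even though the per-click price is unchanged, which is the poster's point about paying only for qualified traffic.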

As long as the company explains the conditions under which its product achieved certain standards, the company is not lying. In most cases, the marketing materials explain the test scenario or the environment of the customer who achieved the results.

Marketing materials do not set out the faults of the product. This is not the role of marketing. Marketing aims to connect buyers to sellers. Providing information about faults does not help to make that connection. Also, many of the "tests" cited by marketers are labeled with titles such as, "Customer Success Story". This should be a clue that the material will not detail unsuccessful characteristics of the product.

Finally, marketers in most companies are not technical experts. They have to rely on the information provided by engineers and programmers. Many companies avoid ever telling the marketing department anything negative. As a result, in many cases, marketers aren't lying when they make claims -- they're explaining what they were told. Many of these marketers, especially the ones writing up collateral, are junior, new to the company, or even working on contract, so they don't have the depth of knowledge to tell that they've been given misleading information. Other people in the company sometimes lie to the marketers. It's not always black and white. (Not that all marketers tell the truth, of course.)

The sort of people who would tune their software for a specific benchmark are the same sort of people who would karma whore here on Slashdot by throwing off-topic lines with guaranteed Slashdot appeal.

And you know who else hates that type of benchmarking whoring? Linus Torvalds, that's who! Linus would never stoop to such a thing, because Linus is a great guy!

And you know who else would never do it? Apple Computer, the people who make the greatest computers in the world! They would never stoop to rigging benchmarks!

we had a bake-off on one of our products... i was called in when the results didn't meet what management was expecting. after adding cpus(!) to the product under test (without the knowledge of the tester) we finally got a result which beat our competition. this was in 2001. i was first on the chopping block in 2002 after i noted at a meeting that we should not try to publicize the results too much since it might backfire. the VP who canned me noted that if we got results we should publicize them as much as possible, and that i was an "impediment to future marketing campaigns". i got an above-average severance package tho, so i guess they paid me off to leave quietly. ironically, HP's results got beaten by IBM, which simply threw money at the problem 4 months later and won.

It's funny how that works. HP has somehow managed to go from one of the leading producers of quality printers, for example, to one of the many cheapo vendors.

Remember when a Laserjet 4 was the printer to have?

Or for that matter, remember when Diamond Multimedia was the producer of graphics cards?

A company that overstates claims is typically a company that is cutting costs while sliding by on its brand name. I wonder how many solid names in the industry have to go down the drain before they realize it's probably not a good idea, in the long run, to overstate the quality or performance of your products.

In my experience, the lies don't stop with the advertising. My bosses are both salesmen. The only thing they do at the small company I work at is sell our software, and they'll tell anyone whatever it is they want to hear so that they'll buy it. But I've noticed that this is definitely not the end of their dishonesty. They treat their staff, me included, just like a buyer. They promise us stuff like compensation for working weekends, etc., but then, just like our software, they fail to deliver.

Or, for a careless end-user, is it that the devil sold his/her soul and made a killing?

Any which way you dice or slice it, it boils down to the "trial" run to overcome the buyer's skepticism. Be that as it may: a trial balloon, trial by fire, trial by jury... whatever.

In the case of Internet-based products, it takes a true network engineer to understand the fine subtleties between UDP throughput and TCP throughput (as well as any other application/presentation/session layered throughput combinations) and to
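One of those subtleties is that steady-state TCP throughput is bounded by round-trip time and packet loss, not just the raw link rate that a UDP blast can approach, so a tester who ignores them can be badly misled. A sketch using the well-known Mathis et al. approximation (a standard rule of thumb, not something from this thread):

```python
import math

# UDP throughput is limited mostly by link rate, while steady-state TCP
# throughput is also capped by round-trip time and loss. The Mathis
# approximation gives an upper bound: (MSS/RTT) * (C / sqrt(p)).
def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate TCP throughput bound in bits per second."""
    c = math.sqrt(3.0 / 2.0)  # constant for periodic loss
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

# A 100 Mbit/s link, but TCP over a 50 ms path with 0.1% loss:
bound = tcp_throughput_bps(1460, 0.050, 0.001)
print(f"TCP bound: {bound / 1e6:.1f} Mbit/s")  # well under the raw link rate
```

With these example numbers the bound lands around 9 Mbit/s on a 100 Mbit/s link, which is why quoting a UDP number where customers will run TCP is one of the easiest ways to make a product look faster than it will be in practice.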

I work in the forklift truck industry, and understandably the advertising there revolves around saving storage space and increasing efficiency per handling transaction, both of which in turn save money.

The upshot of all of this is that when it comes to it, a prospective customer will usually say "prove it," and you, well, have to. I for one took great pride in being part of the tech/development/demonstration team, in that I had a say in what went into the sales literature, as I'd often be the one proving it...

I've always completely disregarded benchmarks, etc. other than those I've run myself.

It's kind of like Microsoft's BS-filled "Linux TCO vs. Windows TCO" ads here on slashdot. Sure, maybe Windows Server 2k3 is cheaper to operate than linux (What a bloody joke) in Microsoft's excessively convoluted idea of how servers/whatever might be run, but chances are extremely high that Microsoft has no damned clue about how my servers are run, what content they serve, etc. etc., not to mention the fact that there's ra

To the other engineers: Are you aware of this kind of practice at your company?

More often we become aware of it when the competitor does it.

About 20 years ago there were a series of "shootouts" between Novell, Microsoft, and 3COM to see which network OS was faster. That was when I was enlightened to the fact that tweaking parameters can make a HUGE difference in test results. If you have even more control, you can even tweak the tests. We used to have to supply "debunking" documents that explained ho

If companies can get away with spouting total bollocks (first 64-bit desktop, anyone? My Mesh Alpha, from a consumer desktop computing company, is obviously now very valuable since it never existed) and not get fined, what incentive do they have for telling the truth?

Lies sell, since most people are stupid and believe whatever they are told.

Is that there isn't any truth in advertising. Long gone are the days of the devil in the detail; even though capitalism prepared us for what is happening now, we still have to confront the harsh truth: the single most unifying characteristic of capitalist entities is to, at any cost, give you less than what they sold you, in such a way that you believe you are the bad guy, the one who overvalued the product. Try hard: when was the last time you bought something and it worked as advertised, as implied, was as s

Yes, I've seen this sort of thing at other places I work. It's inherently dishonest. It's justified via a) claiming that it'll help sales (dubious), and b) claiming that everybody knows that they're bullshit anyways. Note that the two justifications are mutually exclusive. Doesn't stop them from using them though.

No, I trust none of these "bakeoffs". Or any other IT advertising for that matter. There isn't a single mainstream IT rag which is even marginally trustworthy. Go ahead and, instead of reading just the bakeoff that you're looking for, read an article about something you already know about (through hands-on experience with all the primary alternatives, including a FOSS alternative if it's software and there is a FOSS alternative). Note how much stuff they get wrong, how shallow the article is, and how it almost reads like an advertisement. The same is true for cars too, largely, at least from what I've read. I can't comment on other industries since I'm not particularly familiar with their trade press. Note, however, that I still don't trust them at all - I expect they're just as bad. It's just that I don't make enough decisions relating to those industries' products to warrant reading the trade press - instead I go to the store and carefully examine the alternatives.

This sort of thing crossed the line into fake advertising at least a decade ago. Companies routinely make absurd claims and get away with it. There's just no political interest in enforcing it. At best they'll include fine print in their ad. If it's a print ad, maybe you'll be able to read it. It's been a while since I've seen an ad with fine print whose fine print didn't take up at least 10 lines of extremely small type. Television ads are a joke, it's impossible to read the fine print at broadcast resolution, regardless of the size of your TV, and it typically takes up a whole screen.

What can we do about it? Elect governments with some spine. These sorts of advertisements will continue to be successful so long as people are poorly-educated, and people will continue to be poorly-educated unless there is a strong collective agreement in place that says "yes, everybody needs some minimum level of education, otherwise they're prone to manipulation and our society is controlled by those who control the media or the other forms of information dissemination." It's funny, isn't it, how political campaigns in the US almost exclusively take the form of commercials? (Except for the "debates", which are a joke to everybody outside the country.)

Note that when the US was founded, everybody who advocated democracy made sure to point out that the requirements for democracy included an educated public, free speech, and free press. People have totally forgotten the education bit and the press bit. (A government-controlled press is no more effective at disseminating important information than a press controlled by an aristocracy - corporate or otherwise.)

A great deal of the consumer's time is wasted in finding out what a product can really do.

Such extremism, if not worse, is counterproductive for the whole industry, as the computer industry has done a pretty good job of showing how well it applies doublespeak, or its ability to manipulate abstractions, be it in producing code or producing advertising text...

A good example is the recent stories regarding spyware removal products, and how the freeware is far better...

I work at a fortune 500 company and give input into products we buy. Whenever a sales person gives me a "white paper" I smirk. 95% of them are from "independent" evaluators who are paid by the person selling you something. They also tend to make claims that are so outrageous, you know they are not true going in. Microsoft is the worst when it comes to this. (.Net is 900% faster than J2EE) I don't even know why they still sponsor these "independent studies" as no one in industry takes them seriously.

It's quite a simple answer: misleading or misrepresenting anything whatsoever is falsehood. There's not really any grey area; proposing the existence of one is a socially acceptable way of making the lie palatable or discussable.

People generally have the common sense to know whether they're lying or not, but mostly prefer not to worry about it. The problem is that we live in societies based on, and thriving on, lies. Liars often win in a consumerist culture, because lies are usually selling people their own dumb desires right back to them.

The real issue is whether it is actually acceptable to lie. All politicians without exception lie and muddy the water; advertisers and PR people lie so much perhaps they don't even notice anymore. The alternative is too unpalatable to a mindless and uneducated society who want everyone to do their dirty work for them.

Most Americans would rather think that their army, for instance, is well equipped with modern, state-of-the-art equipment. We like to think that our government cares about every soldier as we do our friends and family. Regardless of who's in power, the government is not a benevolent father who loves each and every one of us and watches over us like a proud patriarch.

The reality is that dumb kids' lives are cheaper than good equipment (regardless of who you vote for and who's in power). Another dead kid in Iraq isn't really a top priority, unlike keeping the White House furniture and art restored. People don't like to admit that some dumb grunt isn't worth as much as a nice piece of Louis XIV furniture, so people pretend to care when in fact they don't care terribly much.

The holy grail of technology is no different: the utopia of consumerist culture is just too tempting to refuse new technology for its own sake. Nobody wants to know that the latest thing isn't all that good; hell, most people don't really have an actual use for their computers, as their lives and work are usually fairly inconsequential. We want to eat the dream of technology and time-saving devices even though deep down we know that it's all make-believe, and we don't really have anything to do with all our saved spare time anyway.

Actually, there's a lot of grey area. First of all, you fail to take into account subjective statements, such as "it's fantastic". Is that misleading or a lie? Not in and of itself.

Furthermore, when you evaluate information about a particular concept for absolute truth, you're bound to find that some information is just not disclosed. This could be as esoteric as not disclosing the material that your software CD is crafted from, or as important as failing to mention that the software is not compatible with

This goes double for overclockers. I only bought one motherboard before I realized that reviews didn't hold up as well as the collective experiences of techies (this is especially important for overclocking results). Forum posts are direct links to specific experience and knowledge gleaned from those experiences, like how you shouldn't ever expect a Tiger Direct rebate. Reviews never stress the products to my liking - you need clumsy people and morons to do that for ya. Sometimes they can also tell you the step in the (dis)assembly that was missing from the manual. I guess that black heatsink gunk on the standard Intel HS can get really stuck to the processor. Stuck enough to rip the processor out of the socket with the pins still in the socket. Yeah, don't use intel thermal gunk. If you must, apply heat to remove. See? You learned something on a forum.

What are you selling your customers? Are you selling an emotion or some sort of strange detached feeling of satisfaction? Most companies nowadays do that. Then just sell your IT product with a nice-looking GUI and lots of nice little buttons, and have the marketing dept. take some pictures and add their phrases. They'll ask you about a noteworthy feature or two and present it in such a way you won't recognize your own product anymore. It will sell like hot cakes.

This post got me thinking of a movie from the early 90's (?) called Crazy People. An advertising executive decides to write advertisements truthfully. Really quite funny... The movie was so far from reality, though, it was sad.

One of my many jobs is participating in vendor selection for my company ([sarcasm]it's a beautiful committee process...[/sarcasm]).
Last year we had a certain computer company (IANAL, so the name is intentionally missing) come in and give a sales presentation on why we should dump our existing vendor and go with them.
For the most part, they had our existing vendor beat from a price point. But we had been burned by previous computer vendors...made all of the mistakes...and knew exactly what we wanted (and, frankly, had made our existing vendor comply with our requirements over the period of 4 years that we dealt with them).
We image all of our PCs, we have specialized software for ensuring that everything is up to a baseline and that our environment is as predictable as possible. We needed hardware that would be easily inventoried, and *consistent, long-term, globally available configurations.* There were several other requirements we laid out and prior to the "sales pitch" meeting, we supplied this vendor with these "absolute requirements."

Of course, we received a 45-minute-long PowerPoint presentation that basically regurgitated back to us everything we had told them our requirements were.
(Lesson learned: it's better not to give the marketing guys the game plan. They tend to be more honest when they don't have time to PowerPoint the lies and instead have to provide answers off the cuff.)
It's a running joke on our team, because if we took the entire content of their presentation and crossed out every word in each bullet point that represented some sort of "promise", we'd be left with about four words repeated over and over for 20 slides: "The", "a", "and", and "but".

I don't trust *anything* from any marketing or sales rep. After testing this vendor's products and talking to friends of mine whose companies had used this vendor in the past, we knew they weren't going to live up to their promises. From day one, the information they gave us about getting loaner PCs for testing was sold to us as "far more flexible" than it turned out to be, and this poor customer service was going on *while* we were evaluating this company to determine if we should sign the contract!
Unfortunately, as the story goes, our opinions were appreciated, but the decision to choose this company was made anyway. Another coworker and I were noted as objecting to the switch in our final meeting minutes. Of course, that meant nothing except for a future "I told you so." And there was nobody left to say "I told you so" to, because in the end we were the ones left having to compensate for these broken promises.

I'd suspect that this isn't the case. flappinbooger was close when he said "When you get sued or someone dies, or both". That's still not right either, because plenty of people die without anyone being sued.

Funny things, words. "False advertising" is what gets used in a lawsuit when someone is complaining about a vendor lying about their product. The real word is "lie".

Most people are already aware that a significant proportion of advertising is a lie. It becomes false advertising when a lawyer d

Can an NDA that forbids the disclosure of illegal practices possibly be binding?

What are they going to do, sue you because you exposed their illegal business practices? In a sense, it'd probably be against the law NOT to report them, since you are witness to a crime and could be considered an accomplice if you don't...