The decoy display control panel

Last time, we saw one example of a "decoy" used in the service of application compatibility with respect to the Printers Control Panel. Today we'll look at another decoy, this time for the Display Control Panel.

When support for multiple monitors was being developed, a major obstacle was that a large number of display drivers hacked the Display Control Panel directly instead of using the documented extension mechanism. For example, instead of adding a separate page to the Display Control Panel's property sheet for, say, virtual desktops, they would just hack into the "Settings" page and add their button there. Some drivers were so adventuresome as to do what seemed like a total rewrite of the "Settings" page. They would take all the controls, move them around, resize them, hide some, show others, add new buttons of their own, and generally speaking treat the page as a lump of clay waiting to be molded into their own image. (Here's a handy rule of thumb: If your technique works only if the user speaks English, you probably should consider the possibility that what you're doing is relying on an implementation detail rather than something that will be officially supported going forward.)
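
The "works only if the user speaks English" smell can be made concrete. Here is a hedged toy sketch in Python, not the actual driver code; the dialog model, the control captions, and helper names like `find_by_caption` are invented for illustration. It shows why locating a control by its displayed caption (an implementation detail) is fragile, while looking it up by a stable control ID (the way `GetDlgItem` works against a real dialog) survives localization:

```python
# Toy "dialog" modeled as a list of controls. All names and captions
# here are hypothetical, not taken from the real Display Control Panel.

controls = [
    {"id": 0x3EA, "caption": "Advanced Properties..."},
    {"id": 0x3EB, "caption": "Color palette"},
]

def find_by_caption(ctrls, caption):
    """Fragile: breaks as soon as the UI is localized or reworded."""
    return next((c for c in ctrls if c["caption"] == caption), None)

def find_by_id(ctrls, ctrl_id):
    """Robust: control IDs are a stable identifier, independent of
    the user's language."""
    return next((c for c in ctrls if c["id"] == ctrl_id), None)

# On an English system, both lookups succeed.
assert find_by_caption(controls, "Advanced Properties...") is not None

# On a German system the caption changes, and the caption-based
# lookup silently fails while the ID-based lookup still works.
controls[0]["caption"] = "Erweiterte Eigenschaften..."
assert find_by_caption(controls, "Advanced Properties...") is None
assert find_by_id(controls, 0x3EA) is not None
```

A driver that grabbed buttons by their English captions was doing the moral equivalent of the first lookup.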

In order to support multiple monitors, the Settings page on the Display Control Panel underwent a major overhaul. But when you tried to open the Display Control Panel on a system that had one of these aggressive drivers installed, it would crash because the driver ran around rearranging things like it always did, even though the things it was manipulating weren't what the developers of the driver intended!

The solution was to create a "decoy" Settings page that looked exactly like the classic Windows 95 Settings page. The decoy page's purpose in life was to act as bait for these aggressive display drivers and allow itself to be abused mercilessly, letting the driver have its way. Meanwhile, the real Settings page (which is the one that was shown to the user), by virtue of having been overlooked, remained safe and unharmed.

There was no attempt to make this decoy Settings page do anything interesting at all. Its sole job was to soak up mistreatment without complaining. As a result, those drivers lost whatever nifty features their shenanigans were trying to accomplish, but at least the Display Control Panel stayed alive and allowed the user to do what they were trying to do in the first place: Adjust their display settings.
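
The decoy trick can be sketched in miniature. This is a hedged analogy in Python, not the actual Win32 implementation; every class, field, and function name below is invented for illustration:

```python
# Toy model of the decoy technique: the "driver" rearranges whatever
# page it can find, so the control panel hands it a sacrificial copy
# and keeps the real page out of reach.

class SettingsPage:
    def __init__(self):
        self.controls = {"resolution": "800x600", "colors": "16-bit"}

class DisplayControlPanel:
    def __init__(self):
        self.real_page = SettingsPage()    # the page shown to the user
        self.decoy_page = SettingsPage()   # bait for aggressive drivers

    def page_for_drivers(self):
        # Drivers that go hunting for "the Settings page" find the decoy.
        return self.decoy_page

def aggressive_driver_hack(page):
    # Mimics a driver that treats the page as a lump of clay:
    # removes the stock controls and adds its own button.
    page.controls.clear()
    page.controls["virtual_desktops"] = "enabled"

panel = DisplayControlPanel()
aggressive_driver_hack(panel.page_for_drivers())

# The decoy absorbed the abuse; the real page is unharmed.
assert panel.decoy_page.controls == {"virtual_desktops": "enabled"}
assert panel.real_page.controls == {"resolution": "800x600",
                                    "colors": "16-bit"}
```

As in the real story, the driver's "nifty feature" lands on a page nobody ever sees, while the user keeps a working Settings page.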

Correct me if I am wrong, but are you saying that new versions of Windows include these decoy windows? If so, I can’t find them on my system! Or are you saying that these decoy windows had to be put on specific machines with these misbehaving apps?

I don’t want to re-start old arguments, but would it be feasible to have different "modes" in Windows? E.g. Exchange and Active Directory can run in Native Mode vs Mixed Mode, depending on whether they need to be compatible with older versions. (It would obviously make sense to keep the backwards compatibility mode as the default.)

"Remember, WHQL signs only the driver. The shovelware that comes with the driver is not tested by WHQL."

I’ve also been given to understand that drivers can contain code paths that never get executed during testing but that get enabled, once the driver is released into the wild, by the use of some switches or similar chicanery during installation. Given that I’ve had a number of WHQL-signed drivers that cause stop errors regularly, I’m actually not fussed whether they’re signed or not…

That’s a really slick solution to that problem. As far as the end user sees, it may be missing a few buttons compared to the documentation, but that’s already a common issue. As far as the driver cares, it’s happy to do whatever it wants.

Did this hidden window migrate to XP, or did it die out with ME/9x?

Oh, and speaking of WHQL drivers, at my last job we saw a few clever installers that clicked the continue button on the uncertified driver warning. It seems like most driver companies see WHQL as something to be worked around, not with.

WHQL is totally ineffective anyway. Many (most?) drivers are unsigned (especially those from smaller hardware manufacturers), and even those that are signed often cheat by detecting the testing suite or using install-time switches.

In other words, WHQL is a total failure, as it provides no indication of driver quality.

Unless of course the original aim of WHQL wasn’t quality control at all, but just a way of getting hardware manufacturers to pay for Windows support…

"Unless of course the original aim of WHQL wasn’t quality control at all, but just a way of getting hardware manufacturers to pay for Windows support…"

Unless, of course, the aim was to make hardware OEMs pay for all the support calls that Microsoft had to field because the OEMs’ drivers broke everything. Just this once, I wouldn’t blame ’em for a money grab.

Seriously, one moment MS is getting dumped on for putting other vendors in a development chokehold, and the next MS is taking flak for giving other vendors the freedom to produce junk. Pick one, people; you can’t have both!

Why not require that (in addition to any special installers), any driver you want signed be submitted in a format that would then be made available in a special "Windows Driver Update" area, where the "Add New Hardware" dialog could search for drivers and where normal update processes could look for drivers newer than what is currently installed?

That would make it easier to find said WHQL drivers. It would also mean that, because the same installer and setup files Microsoft uses for its WHQL tests would be available to anyone from the "Windows Driver Update" area, companies couldn’t play tricks where a custom installer (one the WHQL people never see) triggers a hidden flag that activates behavior they don’t want WHQL to see; people who used the custom installer would then get a different install from those who installed via "Windows Driver Update", and the difference would be obvious. It should be possible to make the "Windows Driver Update" and WHQL machinery handle things like extra control panel pages and such.

"I thought drivers had to be certified. Please tell me that a driver that misbehaved this badly wouldn’t be certified."

Drivers *today* do not *have* to be certified; uncertified drivers can easily be installed. But that is completely irrelevant to the discussion, since, if I remember correctly, there was no such thing as a certified driver in Windows 95 (which is what Raymond’s post is about).

So I guess the answer to your question is: "No, Windows 95 drivers that behaved this badly were not certified. Then again, *no* driver in Windows 95 was certified, so that doesn’t mean much".

As far as I know, WHQL drivers are already posted to Windows Update. (Just the driver files, the INF file, and the signature.)

I saw a PowerPoint from MS showing the process: directly after WHQL, the vendor can post his driver to Windows Update!

I NEVER install the CDs that come with hardware unless I really, really have to. I use WinRAR to unpack the setup (or peek in the temp folder), grab the INF file, and install that.

But vendors want to "differentiate" themselves. They want to provide "more value" (like spyware, Internet news, bundling and partnering with Yahoo toolbars, and so on…).

MS should start a big advertising campaign aimed at end users (albeit one that would confuse them very much about stuff they don’t care about??) showing how easy and simple the world would be with WHQL-signed, simple drivers shipped on a CD-ROM with INF files on them.

Windows handles installation of hardware extremely well! It is so sad that the vendors and the managers of hardware companies screw it up so badly.

One unrelated thing:

Now if only the Windows discs contained drivers for Promise RAID/UDMA controllers, the time needed to install Windows would be reduced by 50%…

As far as I know the criteria to include a driver with Windows install media is to

"I don’t want to re-start old arguments, but would it be feasible to have different ‘modes’ in Windows?"

I remember from the good old book The Soul of a New Machine how Edson de Castro prevented the Eclipse team from building a computer with a mode bit (a slower, compatible mode and a new 32-bit super-mini mode). Apparently, the feeling was that once the mode bit was in place, it would be difficult or even impossible to get rid of. Effectively, it would mean two different versions of the same computer in one.

I think the same argument holds up here: two different modes would invariably lead to unnecessary complexity. You’d have to approach everything in two different ways depending on which mode the system was currently in. And any older program running would force the system into "compatible" mode, thus preventing any real development anyway. The incentive to upgrade programs to the new mode would be nil, since they already work in the compatible mode.
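
The cost this mode-bit argument describes can be made concrete with a toy sketch (hypothetical function names, not real Windows code): once a compatibility bit exists, every behavior fix forks into two code paths that both have to be maintained forever.

```python
# Toy illustration of a global mode bit. Every corrected behavior
# must keep its old twin alive for "compatible" mode, so the code
# doubles instead of improving.

COMPAT_MODE = True  # legacy apps force this on, so it never goes away

def get_window_size(border, content, compat_mode=COMPAT_MODE):
    if compat_mode:
        # Old behavior, preserved verbatim because legacy apps depend
        # on it (here: the border is counted only once).
        return content + border
    # New, corrected behavior (border counted on both sides) that
    # nobody opts into, because everything already "works" in
    # compatible mode.
    return content + 2 * border

# Both paths must now be documented, tested, and maintained forever.
assert get_window_size(2, 100, compat_mode=True) == 102
assert get_window_size(2, 100, compat_mode=False) == 104
```

That is the Eclipse argument in software form: the bit itself is cheap, but the second version of everything behind it is not.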

Interesting balancing act here between stability and letting other vendors customize Windows-supplied dialogs. Reminds me of a story about some "bait code" in Win3.1 for Adobe fonts, or something. Amazing testing effort, having to test one’s changes both backward and "forward". Sheesh!

I’m getting a real kick out of the impossible work and effort MS continuously puts toward backward compatibility, or just compatibility in general.

Mr. Chen, I have no choice at this point but to assume that the reason so many "fixes" and "hacks" are put into Windows to run all this crappy stuff is something deeper and darker that we are not aware of. I just don’t see anything else that could possibly explain such blatant and careless practices on the part of the third parties. Laziness about reading MSDN, and stupidity, only go so far.

I also think it’s time MS started really putting their foot down. Having to deal with crap like what Mr. Chen posted is one major reason Vista is taking so long.

Would we all rather Mr. Chen spend his time trying to make some crappy application or game run on Vista even though the game or app uses memory right after freeing it, or would we all rather Mr. Chen spend his time improving security or making a better UI?

James Summerlin brings up a good point. This level of backwards compatibility is unsustainable in the long-term. I attended a Jack Ganssle seminar a few weeks back (anyone doing embedded systems work should know who he is) and he showed a graph with two lines. At the top of the graph was the point where software becomes too complicated to maintain and is basically a dead product.

The first line was mostly linear. It represented software that was updated but ALSO refactored to clean up the code and FIX the problems. The second line looked more like an exponential curve. It represented software that had been hacked to get it to market faster and solve problems "the easy way."

Guess which one gets to the "dead product" point first?

These "decoy" windows sound nice, but they’re still hacks. Eventually, putting hacks on top of hacks will result in a product so hacked up it can’t be fixed. (And I think looking at the usability state of Windows today bears out my point.)

"instead of adding a separate page to the Display Control Panel’s property sheet for, say, virtual desktops, they would just hack into the "Settings" page and add their button there."

Raymond makes it sound like MS had a documented way of supporting multiple monitors under Win95 but reckless driver developers ignored the standards and hacked around for the hell of it.

Actually, Windows came fairly late to the multiple monitors game. MultiMon support wasn’t built into Windows until 98, right? While adding a page to the Display Properties is the "blessed" way of extending the property sheet, it isn’t the best way.

First of all, the property sheet is severely undersized for today’s needs. I don’t believe the size has changed since Win95, when a 640×480 monitor was standard. Today the standard is closer to 1280×1024, and that 404×448-pixel window takes up just 13% of my usable space. On one of my displays.

Secondly, video drivers–especially multimon ones–have become complicated enough that they need multiple property sheets just for their own settings. My NVIDIA driver uses 14 pages alone, which you have to get to by going to

tab. It then uses a weird flyout menu to navigate between the pages. (Using individual tabs would be even worse.)

Is this closer to the MS-sanctioned way of doing it?

And lastly, adding another page to the property sheet wouldn’t really work either, because the default MS tabs of Background, Screen Saver, Appearance, Web, Effects and Settings take up all the horizontal space of the first tab row. (This is ameliorated somewhat in WinXP.)

I see Raymond’s point, but MS is also culpable by being so far behind the curve. It’s easy for us to laugh in hindsight and say, "What idiots!" But idiots don’t write device drivers, and I think it’s reasonable to say that they had their reasons for doing it that way.

I would also like to point out that despite the millions upon millions of applications, games, and devices that run on Windows and the uncountable number of hacks incorporated into Windows to make them run, I think it is fair to say that Microsoft has done a completely outstanding and wonderful job making it all work and, especially, in managing it all.

No other company on this planet has the talent, the will, or the guts to make it happen.

And even with all those hacks, Windows still outperforms Linux according to George Ou at ZDnet.

Wow, I really cringe when I hear that MS is doing things like this. On the one hand I feel really sorry for MS that it has to deal with inept #@^%$!@ that do that sort of thing, but on the other hand I have to ask MS who it thinks its customers are. Speaking personally, I certainly DO NOT want my operating system filled with gigantic kludges and useless dongles just to give developers of horrible software a free ride. I would like instead that some animal excrement be mailed same-day-delivery to the developer’s doorstep every time software is caught doing something like that.

Would it at all be feasible to implement a global ImplementBrainDeadHacksForRepugnantApplications flag that users, if they chose, could disable? Would that give Microsoft some leverage in this situation? Or, for that matter, when such a poorly behaved application is noticed, slap some big ugly watermark at the display level across the application to inform the user and embarrass the software developer.

I think in the long term, constantly attempting to accommodate this bad software is defeatist and bad for the end user, who after all doesn’t really understand that it’s the application’s fault. I also hate to see these sorts of things retarding the speed of development and incorporation of truly useful and great technology from MS Research, etc.