Posted
by
timothy
on Wednesday October 01, 2003 @12:30PM
from the but-programmers-don't-need-interfaces dept.

ellenf contributes this review of User Interface Design for Programmers. "Aimed at programmers who don't know much about user interface design and think it is something to fear, Joel Spolsky provides a great primer, with some entertaining and informative examples of good and bad design implementations, including some of the thought process behind the decisions. Spolsky feels that programmers fear design because they consider it a creative process rather than a logical one; he shows that the basic principles of good user interface design are logical and not based on some mysterious, indefinable magic." Read on for the rest of ellenf's review.

User Interface Design for Programmers

author: Joel Spolsky
pages: 144
publisher: Apress
rating: 8
reviewer: Ellen
ISBN: 1893115941
summary: Aimed at programmers who don't know much about user interface design and think it is something to fear, Joel provides a great primer, with some entertaining and informative examples of good and bad design implementations, including some of the thought process behind the decisions. He feels that programmers fear design because they consider it a creative process rather than a logical one, and shows that the basic principles of good user interface design are logical and not based on some mysterious, indefinable magic.

Spolsky's light writing style makes this book an easy read, and his personal stories and anecdotes help make his thoughts on user interface stick in your mind when you're done reading. He provides programmers with a few simple guidelines to follow, such as "People Can't Read," and "People Can't Control the Mouse."

His focus on the logic of good user interfaces and his push to develop a good user model are bound to resonate and get programmers thinking about making their interfaces logical from the user's perspective, rather than from the perspective of the inner architecture, which the user typically couldn't care less about.

The reminder to focus on the tasks the user is trying to accomplish rather than the long feature list that usually gets attached to product specifications should be read by product managers as well, of course. In fact, the absence of specific platform details makes the book a good read for anyone involved in software design -- with the caveat that it is not aimed at people with much design experience. This is a great starter book and makes the process understandable, friendly, and fun-sounding. (One of the things I appreciated was how much fun it sounds like Spolsky has when he's working.)

Spolsky encourages showing the in-progress software to users and watching them use it. I think one of his best points about usability testing is that if the programmers and designers can't be bothered to watch the users during the testing, they're unlikely to gain much from a thick report by a testing lab. He encourages simple, quick, and casual usability testing, something even the smallest firm could afford and from which it could draw useful improvements.

If you have much design experience, you'll find this book a bit basic, but even then the examples are worthwhile to read and remind yourself how a good idea can be poorly implemented sometimes -- usually by taking it too far! I was personally hoping for some richer comments about designing web applications, but if more people start paying attention to the basic guidelines he's covered here, web users will benefit.

In summary, the book is aimed at programmers without much design experience and Spolsky does a great job of hitting his mark. I think product managers without much design experience would benefit as well, as it provides a good basis for thinking about user interface design.

Are we supposed to assume that creative and logical are now mutually exclusive? I always thought they were complementary. I sure as heck wouldn't find computers interesting if it was all rote and mechanics.

I don't think the intention is to say they're mutually exclusive; quite the opposite. The point is that many programmers believe that designing a UI is a creative process, because at some point they designed a UI and were told it was ugly. That's an unfortunate comment, since the rejection of the UI was more likely on cognitive grounds than aesthetic ones, but the word "ugly" can apply in either case.

There are fundamental rules of UI design, and there are UI best practices. When these are adhered to, then the UI will be cognitively appealing to the user. In addition, there are liberties that a UI designer may take, and innovations that can be made (per application) that can add up to a smashing UI. But if you are unaware of the rules and conventions, you will fail to create a good UI, and if you don't even know that the rules exist you may be liable to blame it on a gap in creativity rather than a failure to fulfill a logical design.

Unfortunately, UI can also be an area that should *not* be consumer-driven.

The recent fascination (over the last five years) among media player authors with making "pretty" interfaces that immediately grab a user's interest is a great example. The UIs are far less usable, inconsistent, frequently slower, and buggy... yet authors keep pumping out these damned bitmap interfaces for DVD players, movie file players, audio file players, etc.

The problem is that every time someone does something with a tiny bit of justification, everyone copies it wrong.

Bitmapped interfaces have seen two major surges that are still with us. The first, pointed out earlier, was in media player apps. There are a number of cases, but I think the first instance I know of was WinAmp. WinAmp was trying to fill a hole that had never been filled before. It needed to remain constantly up on a user's desktop to keep title, volume, and position available. However, it also needed to save space (see the minimized form) -- I can't think of a good way to provide equivalent functionality using standard widgets. Anyway, a difficult HCI call was made: to deviate from the standard OS interface. It has definitely had drawbacks, but there's at least a good argument that it was worthwhile.

Along came a huge number of media player designers, all of whom looked at WinAmp and decided that the bitmapped interface was what made the thing successful. They started churning out all kinds of horrific, unusable media players that *did* catch the eye, and *did* get users to try them out... only for those users to hit irritation with the interfaces. Media players pioneered spikes hanging off of windows.

The other major example is graphics plugins, dating back to Kai's Power Tools. For those not familiar with the tool, KPT is a set of Photoshop plugins. It was written by Kai Krause, an extremely talented graphics programmer. He felt that using custom bitmapped widgets was a good idea. Again, his decision was somewhat arguable, but it let him showcase some of his software's effects, and more importantly, he did a reasonable job for someone going with an inconsistent interface -- he did a few things that would have been difficult with a conventional widget set. KPT had a tremendous feature set, and succeeded wildly, allowing the company to grow, change names, and develop and acquire other software products like mad. The company continued to produce other outstanding products, also with bitmapped interfaces (with greater and lesser degrees of justification for their nonstandard interfaces; KPT Bryce is a notable example).

Naturally, a number of other, less talented Photoshop plugin development companies, whose products were not particularly price-competitive or feature-competitive, looked at KPT and said "Gee... KPT uses a bitmapped interface and is successful. That must be what we're missing." Over the next few years, a *flood* of inconsistent, bitmap-interfaced Photoshop plugins hit the market. These were, as a rule, less well done than the original KPT, and were a complete pain in the ass for a set of people who mostly used Macs and had traditionally enjoyed one of the most consistent user interfaces in the history of personal computing.

Bitmapped, custom interfaces are almost always a bad idea.

There was also an influx of CD-ROM based titles with bitmapped interfaces starting in the early CD-ROM days. Lots of low-budget titles, educational titles, etc. Macromedia Director played a major role in the proliferation of these. Again, a bitmapped interface added nothing to usability, and frequently exposed bugs. It took a few years, but eventually designers realized that users didn't *like* atrocious bitmapped interfaces, and stopped.

Today, almost all games have a menu system that uses a nonstandard, bitmapped interface. Part of this is because they often have console ports, where there *is* no standard widget system, and part of it is because there's a perception that the customer *wants* a m

Unfortunately, UI can also be an area that should *not* be consumer-driven.

You are actually sorta wrong here. People don't ask for those ridiculous bubble-alien interfaces; they are oftentimes shoved in their faces by overzealous graphic artists (read: MS Media Player -- ugh, I can't stand that thing anymore, I like version 6.4...). Though I do agree with the rest of your comments and think they are right in line with the reality of the end user.

The absolute worst interfaces I have seen in my life are made by pure artists, and then the poor programmer has to make the thing work.

I am a designer, artist and programmer. I have found my niche here, I design interfaces for about 1/3 of my job, I get hired just to do that at times. A couple of things I found are that -

1. I have to force myself to keep things simple.

2. The graphics have to amplify the use of the tools.

3. You have to always put yourself in the position of the end user.

These keys basically make my interfaces look like everyone else's out there, except for some basic visual look-and-feel things. There are only so many places and so many ways you can make a button or a menu and have it be usable. My job ultimately comes down to dealing with custom interfaces for very custom data. (Not really like media players, which are very common, and a VCR-style control can only be made so many ways.)

End users scream for easy-to-use stuff. Graphic designers are impressed with _cool_ interfaces and tend not to consider usability, but ultimately get the job of UI design regardless of their qualifications for it.

Programmers tend not to consider usability in the sense of where to put buttons/menus, what context to place them in, or what to name them for end users.

So outside of these two camps is where I have to sit. I have to argue with the management, the other designers, and the programmers to make it obvious to them that the users need things these groups don't consider important.

The value the graphic designer brings is that they make you feel good when you see and use the application. The value the programmer brings is that the application runs well and the controls work as they should. The value I bring as the UI designer is making sure everyone plays nice together to build something that an outside user will want to use, can use, and ultimately doesn't have to be taught how to use, because it is intuitive.

What the basis of intuitive _is_, though, is a matter for a different debate.

I had the same reaction. Of course I haven't read the book, so I don't know if Spolsky is being reported correctly either.

Considering that I often do agree with Joel's commentary on aspects of programming and programmer thinking (for example his concept of leaky abstractions [joelonsoftware.com]), I am wondering if it was something he said for effect that got taken out of context.

OTOH Joel is quite aware that not all programmers are equal. So it could be that this new book is aimed at non-creative types who want to make t

Heh, reminds me of Donald Norman's book The Design of Everyday Things [amazon.com]. He loved picking on things where aesthetics were given priority over utility, like door handles that were the same for push and pull. He'd dismiss them with a sniff and a "probably won an award".

Of course, his own book suffered from the same problem... it was originally titled "The Psychology of Everyday Things", which let the book refer to itself as "POET" -- kinda nice.

Of course, bookstores and other catalogers kept putting it under "psychology" rather than "design".

And I do remember hearing that anecdote about the renaming, but have no idea where.

The front of The Design of Everyday Things tells the story of the renaming of the book. I can't find the text of the intro anywhere on the web (via googling for specific phrases) so:

When Doubleday/Currency approached me about publishing the paperback version of this book, the editors also said, "But of course the title will have to be changed." Title changed? I was horrified. But I decided to follow my own advice an

Is that users are fucking idiots. Some user just posted an item about how she highlighted her work and then hit 'backspace' and deleted everything. She wanted to know what we could do for her. 'Feel bad' was about all we could come up with. 'Laugh' was another, but we didn't think she'd like that.

Ah, yeah. Although one level Ctrl-Z does work in textareas on IE at least, once you've gone past the starting page you're often screwed.

Actually, I tend to get screwed a lot by stuff like Slashdot's "wait 2 minutes" limiter...I don't mind waiting so much, but I think the submissions page has "No Cache" or something set, so everything I typed has gone away. I've had a similar problem elsewhere; one hacky solution (when I'm thinki

The problem I generally see is that some users don't pay attention or just don't care, or just like to gripe.

Case in point: I do a lot of web-based tool work at my job. I added a particular feature recently. I explained this feature on a conference call with the group that was to use it. I then explained the feature AGAIN in a summary email about the changes I was going to install the coming week. The form has online help explaining the feature and how to use it s

I've just done a quick test: I created a page with a textarea element (multi-line text box), served it up from one of my machines, typed and selected some text and hit backspace, then Ctrl-Z (Windows) / Command-Z (Mac). I got the following results:

This statement is why data entry applications just shouldn't be HTML forms-based. That puts too many constraints on you to design a good user interface. There are alternatives. At work, the HR people in our department use two personnel systems:

The organization-wide one, which is an HTML form-based web application. It's awkward to navigate, and our people really hate it.

The one we designed for them. [*] It's Oracle Forms-based, which means they navigate

I'm not the smartest developer around, but a lot of users like me because I listen to them and try to implement what they want. Sometimes, that means talking to the smart developers to see how to do something so that the users don't have to talk to them; which, I guess, is becoming a useful skill these days. ;-) I like to give credit where credit is due. So, when a smart developer helps me, I let everyone know it was the smart developer who helped me. That way everyone is happy. The users get what they want, th

Most programmers think they know how to do UI. (Frankly, I think many of them do, to a certain extent, if they're reasonably smart and understand ideas like not throwing too many options at the novice user.)

Well, speaking as a programmer who "uses" many other pieces of software...yeah, I think I do have some better ideas for many of the pieces of software I use...

Of course, many of my potential suggestions have to do with "improvements" made in UIs I know, so I have to sort out "I don't like it because I'm not familiar with it" from "I don't like it for these specific functional reasons" (and that's always with the risk of not seeing why the "improvement" was made...there could be decent reasons for some of t

And one other Windows annoyance: any (non-browser) program that opens a URL through the OS, be it the Start menu or whatever, should OPEN UP A NEW FRICKIN' BROWSER WINDOW rather than hijacking an existing one. If I have a window open in the background, there are GOOD ODDS that I *want* the information that's in there to STAY there. Double duhhr.

any (non-browser) program that opens a URL through the OS, be it the Start menu or whatever, should OPEN UP A NEW FRICKIN' BROWSER WINDOW rather than hijacking an existing one.

Actually, it's the browser that makes that decision. If you find that these third-party applications hose whatever background content you were holding, you might want to switch browsers.

I can tell you how to fix that in IE: go to Tools -> Internet Options -> Advanced. Look for an item called "Reuse windows for launching shortcuts". Uncheck it.

I think not.
In my app I display about 1,000 genes. At any given time a user can right-click and open a corresponding web page. Does that mean I want 1,000 windows? No. It means the user should have the choice of getting 1,000 windows, or of keeping the same one, which you have positioned to be readable next to your app.

Programmers need the 80-bazillion options Visual Studio requires, because Visual Studio is a tool for making other tools.

On the other hand, users don't need all those options (at least, the average user doesn't). Users want a hammer, not a combination forge-lathe-grinder with optional fiberglass extruder.

The argument is constantly made, "What about 'power users' and people who really do need extra functionality?". Fine, OK: put that stuff "under the hood" and document its location and functionality. But don't put in a user config dialog with 27 tab groups, 40 options per tab, with an 'Advanced' button on each one.

In fairness, there's less and less of this. Windows programmers are starting to understand the value of simplicity, just like Mac programmers are starting to understand the value of "power user" options (the `defaults` command, for example).
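The `defaults` tool mentioned above is a real macOS command, and a quick example shows the "under the hood, but documented" idea in practice. (The Dock domain and `autohide` key here are a commonly cited case; exact domains and keys vary by OS version, so treat these as samples rather than a definitive reference.)

```shell
# Flip a setting that has no checkbox anywhere in the GUI:
defaults write com.apple.dock autohide -bool true

# Restart the Dock so the change takes effect:
killall Dock

# Read the value back to confirm it stuck:
defaults read com.apple.dock autohide
```

The point isn't the specific key -- it's that the option exists for power users without adding a 28th tab to a preferences dialog.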

It's a non-trivial problem:

* show too much, and newbies get scared
* hide it too well, and medium-experienced folk will never learn about it, and even power users might curse you

And so many "power configurations" are so poorly explained. More than half the time context sensitive help just lists the name of the command, or doesn't give any context as to why you would want to use it.

I have very mixed opinions about Windows' "make unused menu options go away" way of co

The argument is constantly made, "What about 'power users' and people who really do need extra functionality?". Fine, OK: put that stuff "under the hood" and document its location and functionality. But don't put in a user config dialog with 27 tab groups, 40 options per tab, with an 'Advanced' button on each one.

Way back in days of yore, when Microsoft was still working out how to do overlapping windows, there was a company called Geoworks that produced a really nice office suite for the PC.

I won't go into details about it, but one of the really cool features was that each application had a tunable user interface. For example, you could set the word processor to user level #1 (novice) and it would turn into Windows Write: most of the controls went away, and you ended up with toolbar buttons for italic, bold, underline, etc, plus justification options; you got simple menus that let you pick things like the font and size directly; you got really, really basic page layout features --- I think it let you pick your paper size, and that was it.

OTOH, turn it up to level #4 (expert) and it turned into Word. There were controls everywhere. Hierarchical editable character and paragraph styles, embedded fields, hyperlinks, a full vector drawing package including rotatable text (also with hierarchical editable styles), a full bitmap drawing package, up to four separate customisable toolbars, ruler and frame based layout, etc, etc.

And they used the same files.

So it was perfectly possible for Precocious Teenager to log in in expert mode, put together some pretty templates, and then Grandma could log in in novice mode and type text into them with simple formatting. Mum and Dad could use levels #2 or #3, which gave you more features without the overwhelming complexity that level #4 gave you.

It was such a startlingly good idea that I am not at all surprised no-one appears to have done anything similar.

(Hmm. You might still be able to download an evaluation copy here [breadbox.com], but I suspect it's a pig to run on an NT-based Windows. Worth a look, though, if you want to be amazed at what it's possible to do on a 2MB real-mode DOS machine.)
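The tunable-user-level idea is simple enough to sketch in a few lines: tag every feature with the minimum level at which it appears, and build the UI only from features at or below the current level. (This is a minimal illustration; the feature names and level cutoffs are invented, not taken from the actual Geoworks suite.)

```python
# Each feature is tagged with the minimum user level at which it shows up.
# Level 1 is "novice" (Windows Write territory), level 4 is "expert" (Word).
FEATURES = {
    "bold": 1,
    "italic": 1,
    "font_picker": 1,
    "paragraph_styles": 3,
    "vector_drawing": 4,
    "style_hierarchy": 4,
}

def visible_features(user_level):
    """Return the features a user at this level should see, sorted by name."""
    return sorted(name for name, min_level in FEATURES.items()
                  if min_level <= user_level)
```

The crucial trick, as described above, is that the document format is shared across levels: an expert can build templates with level-4 machinery, and a novice can fill them in without ever seeing that machinery.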

Microsoft has contemplated this for years, as it is a fairly common request. Raymond Chen, whom you might know better as the creator of the wildly popular TweakUI, has been a Windows developer for several years. He has a blog entry [gotdotnet.com] describing why they've never done this. On a side note, I've come to realize that Microsoft only makes products for 2.1 audiences:

1. Home/Inexperienced/Novice Users. This is your corporate drone, your mother, and the kids at school. They all want to get on the PC, get their email, write some documents, and surf the web. They don't care much about how or why things work, only that they do. This is why we end up with the gaudy Fisher-Price interface and wizards and all sorts of unfunctional junk.

2.1 Developers. Yadda yadda yadda... need apps to sustain a monopoly... the whole bit. They get things their way inside Visual Studio and not very much else.

What I object to is there's no class for the ever-growing market of Techies. People who understand the desktop machine they use every day. Many of these are programmers or systems administrators so they know what's going on, they know how they want it done, and they know how they want the computer to do it. Unfortunately, theirs is a life of constantly changing unfunctional defaults to more efficient alternatives, which is of course a mind-numbingly difficult task after you've done it more than once. If we can have predefined security templates [microsoft.com] that apply to a machine to change a slew of default options, why not expert templates?

Windows isn't any better. Sure, CTRL X/C/V are fairly standard, but anything more than that is terrible.

Want to do a "find"? Well, it's CTRL-F... usually. Unless you're in Outlook, where CTRL-F does forward, and find is (intuitively!) F4. Oh, except for the main message list, where Find doesn't have a shortcut at all, but advanced find is CTRL-SHIFT-F. And don't get me started on third party apps like Textpad (which is a great app, but uses F5 for find and F8 for find/replace).

Button location is another bugbear. OK and Cancel randomly move around dialog boxes, swapping positions with merry abandon. Always assuming they're present, of course -- dialogs are sometimes closed with "OK", sometimes with "Close", both doing the same thing (often in the same application). Sometimes there's a close box, sometimes not.

A much more consistent interface is the Mac's, for historical reasons. Find is always CMD-F in every major application. Closing a window? Always CMD-W. Quit an app with CMD-Q. When it comes to dialog boxes, Apple doesn't just specify the names of buttons -- they tell you where the buttons should be placed (to the pixel), how they should work, what type of icon should be shown for each kind of alert, and so on. Sure, apps don't need to follow the guidelines -- but they pretty much all do, simply because anything that doesn't just looks "wrong" to Mac users who are used to consistency.

It always bugs me when I see Linux advocates pushing coders to take Windows as an example of a good interface. It's a dreadful interface (admittedly much improved recently), and despite Apple's recent minor UI setbacks in OS X, the Mac is still by far the best-designed interface available. Don't just copy the style -- if you understand why the Mac interface was designed the way it was, you'll be able to produce something nicer than 90% of the apps on any other platform.

It's always fascinated me how Linux advocates will gloat about how Microsoft spends millions on Windows security and ends up with an incredibly insecure OS, but are totally unwilling to believe Microsoft can spend millions on usability research and wind up with a completely unusable interface.

Not to defend that attitude, but for many people 'usable' is defined as 'I learned it this way, so it must be right.' It's an area where the majority can be right (for some value of 'right') simply by weight of numbe

The last thing the world needs is more programmers designing user interfaces. Most programmers know they suck at it, and their results tend to be pathetic. Nobody knows how many lives have been lost (measured in hours of frustration) to bad programmer-designed interfaces.

Let's face it, an interface is too complicated for most programmers to handle. A UI can be seen as a multidimensional problem (dimension in the real sense of an identifying property) that can be viewed from multiple points of view, each of which filters out various dimensions of the program underneath it. It also requires you to be able to actually view things from those multiple POVs.

So for those programmers thinking about UI, don't do it! Stick with command-line interfaces, and let other people take your code and wrap it in something like AppleScript studio, or whatever.

Nobody knows how many lives have been lost (measured in hours of frustration) to bad programmer-designed interfaces.

Of course, then people talk about how great MS's HCI-designed interfaces are and ignore the Start button as a whole, or the way Office XP uses different widgets from Windows 2000, which in turn uses different widgets from Windows 95.

They're just used to it. There are damn few really well done user interfaces, and a huge amount depends on a given user's background and biases. I consider Anarchie to be one of the v

Good UI design just takes a lot of time and a lot of listening. First, you design the interface to do what you want it to do. You try to pretend you know very little about the actual mechanics of what gets done behind the scenes to make whatever it is happen (a difficult proposition, but you should be able to get relatively close). Then, code the interface (just the framework, don't waste a whole lot of time at this point).

Then, show it to someone representative of the intended audience. If you're coding a general purpose Windows app, show it to your grandmother. See if she can figure out how to work it. Encourage conversation about it. If she can't figure it out, don't get argumentative. Find out what SHE thinks the interface is trying to do, and try to find out what about the interface makes her think that. Then, try to get a few ideas on how to improve it. She won't be able to give you any real specifics, but maybe she can give you a thread you can explore in detail on your own.

Re-design based on what you learned. Show it to her again. Repeat until she "gets it". Then, go show your new design to someone else in your target group. Make changes by what they say. If what they say contradicts what your grandmother said, do your best to reconcile the differences. Make up any gaps you can't fix with documentation targeted at the bits you can't seem to make any less confusing.

A lot of engineers fall into the trap of designing interfaces and sticking with them, even if they are deficient. They insist the users are just "too stupid" or just "don't get it" or just "aren't using it right". They fail to realize the whole idea of a good UI is to make sure users CAN'T use it wrong, and to make it as difficult as possible for the user to fail to understand.

"The customer did something wrong" is NEVER a reasonable excuse for a problem in a UI. If the customer did something wrong, it's YOUR fault for making it possible for the customer to do whatever it was he did wrong.

This psycho-babble about grandma being the target annoys me. Sure, everyone is a novice at some point. However, most applications are used by experts. (They started as beginners, but have learned the app.) How do you support those experts who are doing a task every day? They have different demands: now your easy-to-learn app also needs to make the common tasks easy to get done. That is a completely different level of design.

Take configuring the network on windows. It is fairly easy, except for two points: the tas

It seems as if you are only familiar with the GUI method of setting network properties. While it's easy to make fun of Windows XP for all its gaudiness, Microsoft finally added a whole slew of great command line tools which are often overlooked. netsh for example is a great command line, hierarchical interface to network adapter properties and settings. Spend a little time with it and you'll never go fishing through those silly dialogs again. diskpart is another great addition that should have been there long ago. sc for service configuration and bootcfg for making changes to your boot.ini - the list is pretty extensive. More info in %systemroot%\help\ntcmds.chm.
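For a flavor of what those tools look like in use, here are a few invocations of the commands named above (a sketch; exact argument syntax varies a bit between Windows versions, so check `netsh /?`, `sc /?`, and ntcmds.chm before relying on them):

```shell
:: Show the current IP configuration for every adapter
netsh interface ip show config

:: Put an adapter back on DHCP (the name must match the connection's name)
netsh interface ip set address "Local Area Connection" dhcp

:: Query the state of a service from the command line
sc query Spooler
```

For someone who configures machines all day, these beat clicking through three levels of property dialogs -- and they can be scripted.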

Then how do you deal with contradictory feedback? What if users are contradicting each other? A very good example would be GNOME: half of the users scream "more options! more options!" while the other half screams "less options! less options!" (This is of course a heavily oversimplified view of the situation, but you get the point.) It's happened more than once that users contradict each other.

That's why I said to try your best to reconcile those differences. Try to satisfy both as much as possible, but when push comes to shove, the group representing the most money to be earned (or in the case of free software, the largest or most desired audience) wins.

You do both. Why not group options together as a single one, and then provide a way to get to the specific ones? You could, for example, have a pop-up menu with themes, and at the bottom put "Custom...", which opens a new dialog with all the specific options. And a compatibility-level menu, etc.
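The presets-plus-"Custom..." pattern is easy to sketch: named themes resolve to full option sets, and "Custom..." is the escape hatch into the detailed dialog. (A minimal illustration; the theme and option names are made up.)

```python
# Named presets bundle many low-level options behind one choice.
THEMES = {
    "Compact":  {"icon_size": 16, "show_labels": False, "animations": False},
    "Standard": {"icon_size": 24, "show_labels": True,  "animations": True},
}

def apply_theme(name, custom_options=None):
    """Resolve a theme choice to a concrete option dict.

    Picking a preset takes one click; choosing "Custom..." would open the
    detailed dialog, whose result is passed in as custom_options.
    """
    if name == "Custom...":
        if custom_options is None:
            raise ValueError("Custom... requires the detailed options dialog")
        return dict(custom_options)
    return dict(THEMES[name])
```

Novices never leave the preset menu; power users get every knob, one level down.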

I'm definitely in the "less options!" group, and I think one important thing is to consider if everything really has to be an option, or if the program can figure it out itself. Not sure how that applies to something l

That is why it is a good idea to watch what users are doing and what their goals are.
What users think they need, and what they really need are often not the same thing. Users are users, not usability experts.

'Options' are a good case in point. Often people want extra options to un-break some poorly chosen UI behaviour or functionality. It is better to find out what is really causing the problem and fix that.

"The customer did something wrong" is NEVER a reasonable excuse for a problem in a UI. If the customer did something wrong, it's YOUR fault for making it possible for the customer to do whatever it was he did wrong.

Oh, I don't know about that kind of extreme. I would say there is a threshold for your audience. You can only hope to get it right for the vast majority of people. Some people are unqualified for tasks. While UI design is certainly about getting it right as much as possible, you're always going

You try to pretend you know very little about the actual mechanics of what gets done behind the scenes to make whatever it is happen (a difficult proposition, but you should be able to get relatively close).

Ideally, the backend coder(s) and UI coder(s) would be completely separate. Ideally.

Aimed at programmers who don't know much about user interface design and think it is something to fear

I've always found it to be somewhat the reverse: many programmers, most particularly those involved in the open source community, seem to view user interface design as something that enables them to fill the user with fear.

Or at least that's how it seems based on using any number of free apps with terrible GUIs. Maybe there is a UI design guide for sadists too.

1. If the user doesn't have to stop what he's doing to solve an inexplicable puzzle every few minutes, he'll be done waaaay too fast.

2. Obey the principle of most astonishment. Surprise the user as often as possible! Preferably with something terrifying that makes him literally fling himself out of his chair (example: the aliens in Alien Vs. Predator love to sneak up on you along walls and ceilings and suddenly let you have it from three directions -- a guaranteed excuse to press "pause" and go put on a new pair of underwear).

3. If the user screws something up, HE MUST BE PUNISHED. Usually, this means his onscreen persona (resume, spreadsheet, etc) should die a wretched, gory death, scaring the crap out of the user (see #2) and he should have to start whatever he was doing over from his last save point. This of course encourages saving documents frequently, always a good thing with Microsoft software.

4. If the software includes networking features, you MUST include a "taunt" feature. Allow preformatted taunts and on-the-fly taunts; both are equally fun for all. "Hey, BILL! Your powerpoint SUCKS!"

5. And, finally, you have to include a few easter eggs and hidden areas. These should include a "must-have" that isn't granted to ordinary users (like, say, print preview).

Some of the guys who work at our place are excellent programmers and are extremely knowledgable about the underlying technology that they're using. When it comes to interfacing their software with the user though, they start to get some funny ideas about what the user needs.

"Yes but that's how I would think it works" they'll say. Says I, "Yes but you're a certain type of guy who knows what's going on underneath it all, from the user's point of view he's looking for something completely different."

That's why our company has people like me, renaissance people if you will, who can think with both sides of the brain and provide a bridge between the technical people and the creative people who design the user interface.

It's a good learning process, all this interaction means that they get to learn a bit more about the needs of the user and I get to learn about the underlying technology. Books like this would probably help us all.

Another book that's doing the rounds at our place is The Design of Everyday Things. [amazon.com] It covers much more than just computing and gives good insight into the psychology of the user. Some of the psychoanalysis stuff is a bit deep for my liking, although overall it's quite informative.

For many developers, I don't think that UI considerations are all that important. I've often spent a long time thinking about, and discussing with users, the best means of controlling a particular (web) application. In practice, though, users tend to spend a bit of time figuring out an interface -- however esoteric or poorly designed -- and then use it without complaints. They may not be using it 'optimally', but they're happy enough anyhow.

I'm playing Devil's Advocate, I know; but still, when cost/benefit analysis comes into play, there are arguably very many cases where it just doesn't matter how much effort goes into interface design: even with the simplest, most elegant interface, users will take some time to figure out how to do things - and besides, many users are now trained into using Microsoft-style interfaces, meaning that they _are_ the 'most usable' format to follow irrespective of classical design/HCI principles.

Finally, I think that there's a marked difference between having something "look nice" and "be usable". And I think that many developers *are* adept at designing systems that are usable; it's the "prettiness factor" which is more elusive - and which most users tend to care and think about.

I have to say that unless I am using some tool that is mandated by work, if I have to spend more than about 5 to 10 minutes trying to figure out your user interface, I'm going to go find another solution to my problem. Web sites and web tools in particular are subject to this.

I do some web design for work, for people who *have* to use my tool to accomplish a particular task, and I have spent a lot of time thinking about how to make the tool work best for them, simply out of consideration. I hate it when work tools force me to twist my head around some horribly byzantine interface, and I don't want to do that to anyone else.

As a side note, _Don't Make Me Think_ by Steve(n?) Krug is one of the best introductions I've seen to the topic, and his coverage is quick and to the point. I'd be curious how the book reviewed here compares to it, as described by someone who's read both.

I disagree 100%. The problem usually is that users DO NOT figure things out. Either they don't have the time, or don't know how to figure out the interface. That is why good interface design is so important. Creating a simple, intuitive interface that takes zero time to figure out is much more beneficial than a complex, technical interface that the users can't figure out or that takes too much time and effort to figure out. In my experience, when an interface requires the users to spend their time figu

I think it depends on the application (in every sense of the word), as well as on a multitude of other factors including vendor lock-in, expectations, etc. But I think that the move towards better user interfaces by users is generally a part of a reactive process: until users are shown a better option, they take what's given to them - or whatever parts of it that they understand. Most people can handle a VCR, but few bother to set the clock. I'm not saying that UI design isn't important - and I think it

Users rarely complain about badly designed user interfaces. They accept that computers are nasty, evil devices that make their lives hell and prevent them from doing work as much as possible. They say nothing to you, and then they come home to their families and say "I hate computers".

An end user not complaining about a bad UI is like someone describing a torture device like the rack as merely "uncomfy". It's just accepted that the experience will suck.

There is so much general computer-phobia in the world because end users have not yet realized that it's not computers in general that are the cause of their problems with an application; it's the individual programmers who wrote the application who are the problem.

There are three concepts which really need to sink into the head of anyone trying to develop a good user interface on any kind of tool, device or application.
* know the users and their goals
* don't make the user feel stupid
* provide rich feedback so the user is confident
Any application which merely connects APIs to Widgets will just fail horribly. The GUI isn't intended to expose the implementation to direct, marionette-style control. The GUI is there to understand the user's intentions and then

What you really want is GUI Bloopers [amazon.com]. GUI Bloopers takes you step by step through the majority of the UI widgets out there and tells you what each one is, why it is there, what it should do and what it should not do. This way you have a much better feel for WHY something should be one way over another. I own both of the above books, but I tossed out the reviewed book. Way too much theory (some of which I very much disagreed with) and little to no substance at all. Yes, every programmer should know a little theory about how users interact, but the key words are "a little". What developers really need is what GUI Bloopers provides: an explanation of what you should and shouldn't do with widgets.

Example: "Explain why a Macintosh pull-down menu can be accessed at least five times faster than a typical Windows pull-down menu. For extra credit, suggest at least two reasons why Microsoft made such an apparently stupid decision."
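For anyone curious, the standard answer involves Fitts's law: the Mac menu bar sits flush against the top screen edge, so the pointer can't overshoot it, which makes its effective target size enormous. A rough sketch of the arithmetic (the constants a and b are illustrative, not measured):

```python
import math

def fitts_time(distance, width):
    """Fitts's-law movement time: MT = a + b * log2(2 * D / W).
    a and b are device/user constants; these values are made up."""
    a, b = 0.1, 0.15
    return a + b * math.log2(2 * distance / width)

# A thin menu title floating mid-screen is a small target...
t_floating = fitts_time(distance=300, width=5)
# ...while a menu pinned to the screen edge lets you overshoot freely,
# giving it a huge effective width in the direction of travel.
t_edge = fitts_time(distance=300, width=200)
```

The exact speedup depends on the constants and distances you plug in, but the edge-of-screen target always comes out faster.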

"He feels that programmers fear design because it is a creative process rather than a logical one and shows that the basic principles of good user interface design are logical and not based on some mysterious indefinable magic."

All too often the terms programmer and Software Engineer are used interchangeably. UI design is the domain of Software Engineers. A programmer should design user interfaces as much as a baker should be enlisted to make a gourmet dinner. Combine this with the fact that Software Engineering is both a creative process and a logical one, and we can begin to see why I continue to question Joel's understanding of Software Engineering. I am not saying the book isn't good. It probably is, as long as you keep these caveats in mind.

Aimed at programmers who don't know much about user interface design and think it is something to fear

I don't think people necessarily fear UI design so much as they see it as just so much scut work that needs to be done. It tends to be a pain in the ass to get all of the widgets lined up nicely and controlling the bits of your program that they're supposed to control. A decent IDE helps lessen the pain somewhat, but it's still not as fun as coding the parts of your program that get the real work done.

I'll admit I come from the old school of user interface design; e.g. fuck 'em. This is most of the reason I prefer back end development, or development where the target user is another developer.

Spolsky seems to have a good grasp on the idea of Joint Application Development: you have to sit down with the users and ask them how best to make your software help them do their job. It is much more important to have software whose process model is intuitively obvious to your user than anything about how it lo

Biggest pet peeve is the simple "Yes"/"no" interface seen for a dialog in most games.

Many developers insist on using just two different neutral colors for Yes/No, with no real indication of which color means "yes, I choose this one!" versus "no, I don't choose this one". The result? It's a toss-up.

One nice way of doing it is to darken the option you didn't choose. Don't use red and green (maybe modeled after traffic semaphores) as toggle colors.
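The darkening idea is trivial to implement; a plain-Python sketch with hex colors (the 0.5 factor is arbitrary):

```python
def darken(hex_color, factor=0.5):
    """De-emphasize a color by scaling each RGB channel down."""
    r = int(hex_color[1:3], 16)
    g = int(hex_color[3:5], 16)
    b = int(hex_color[5:7], 16)
    return "#{:02x}{:02x}{:02x}".format(
        int(r * factor), int(g * factor), int(b * factor))

# Both buttons share one neutral color; mute whichever wasn't chosen,
# so the user can see at a glance which option is active.
button_color = "#c0c0c0"
unchosen_color = darken(button_color)  # "#606060"
```

This keeps the palette neutral (no red/green semantics to misread) while still making the active choice obvious.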

Windows' file dialog: not only does it not remember the scrollbar location or sort order, it doesn't even remember the 'details' view, the one thing that makes sorting possible at all (why is anything else even an option?). So to open the file you want, you need to:

* select the "file/open" menu entry
* move to the view drop-down list, click
* select the "details" option, click
* move to the column you want sorted (say "modified"), click
* scroll down to the desired file
* move to its name, double-click
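Remembering that state between invocations is cheap. A small Python sketch of persisting view preferences the way the commenter wishes the dialog would (the keys and file name are invented for the example):

```python
import json
import os
import tempfile

def save_view_state(path, state):
    """Persist the dialog's view preferences as JSON."""
    with open(path, "w") as f:
        json.dump(state, f)

def load_view_state(path, defaults):
    """Restore saved preferences, falling back to the defaults for
    anything missing or unreadable."""
    try:
        with open(path) as f:
            saved = json.load(f)
    except (OSError, ValueError):
        saved = {}
    merged = dict(defaults)
    merged.update(saved)
    return merged

# Demo: the second invocation of the dialog sees the first one's choices.
defaults = {"view": "icons", "sort_by": "name", "ascending": True}
path = os.path.join(tempfile.mkdtemp(), "dialog_state.json")
save_view_state(path, {"view": "details", "sort_by": "modified"})
restored = load_view_state(path, defaults)
```

A dozen lines of bookkeeping turns the six-step dance above into a single "file/open" click the second time around.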

I'm one of those freelancers who write a lot of code and also do a lot of graphic design work. Writing UIs and using UIs not written by myself have shown me a very simple rule that works in most simple to moderate cases. The keyword here is WORKFLOW. People use UIs to do something, and the process of doing that something is called the Workflow. If you understand the Workflow, you will know how to design the UI. I'm not talking about the fundamental UI things like where to place an OK button, but at a higher

The best practice is to actually PUT SOME THOUGHT into your interface design, rather than just jumbling everything together so that you can use it and it's easier to program, and heck anyone who has a problem can read the (skimpy) man page. I've used SO many programs where it's obvious the author put zero effort into the interface, or else assumed everyone else thinks like he does.

The worst crime is to be weird just for the sake of being weird. Proxomitron and Spybot are offenders here.

Spolsky encourages showing the in-progress software to users and watching them use it.

If you wait until the software is in progress before you show it to users for testing, you're too late. By the time you have front-end stuff to show people, developers have already invested a lot of time, and any changes will have to fight against the momentum of the project. I know, because I've seen it time and again, at numerous companies. I've seen this happen in small shops doing $10-50K jobs, and I've seen it on $3+

Many times in my career as a Web developer, I've had the responsibility of taking an existing site and growing traffic. In each case, the sites started out ugly, since the 'design' was just whatever seemed adequate to whoever coded the initial HTML.

The first step of improvement was to get a professional designer to come in and fix the site: put together a more useful navigation system, add breadcrumbs, etc. The traffic would always double (at least) after the re-launch. Part of the increase has to do with old users having to deal with a new system and clicking around more than they used to, but the rise in traffic was consistent over time, because a more user-friendly interface meant more users could find what they were looking for.

So, design is not just making things pretty, and it's certainly not art, since art is about personal expression; design is making things useful, or optimizing their usefulness. And slick design is often appropriate. If you run an e-commerce site that looks like it was put together by a 14-year-old kid with a copy of FrontPage, you will scare away business, because people will think you're some fly-by-night operation.
So, spend the money, hire a designer. You can get a decent redesign for a few hundred bucks.

It takes a LOT of work to make good user interfaces, and nearly all of that work is repetitive and boring. It is easy to create inconsistencies, too. Programmers who just want to work on core or library-type routines are a dime a dozen because they basically don't have to know much about the end use of the app, just the technical requirements of the toolkit they're writing. Sort these records, rip this data from a file into memory, pack these strings into this byte array, etc. They are generalized funct

I haven't read Spolsky's book, but from the review it sounds similar to a book I really liked, Programming as if People Mattered, by Nathaniel Borenstein. I definitely recommend it if you're interested in UI development.

A key to user interface design is "you should never have to tell the computer something it already knows". That's from the old Mac interface guidelines.

Never ask the user to find something the program can find. The user should have to input a data item no more than once, ever. And this doesn't mean the user has to "register" to buy from your website, even if the "marketing" people want that.
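The rule is easy to sketch: prefill every field the program already knows, and ask only for the rest. A minimal Python illustration (the field names and profile data are hypothetical):

```python
# Data the program already knows about this user, e.g. from a
# previous order or a saved profile.
known_profile = {"name": "Ada", "email": "ada@example.com"}

def build_form(fields, known):
    """Prefill fields the program already knows; the user only
    has to type whatever is still blank."""
    return {field: known.get(field, "") for field in fields}

form = build_form(["name", "email", "shipping_address"], known_profile)
# Only shipping_address is left for the user to fill in.
```

The same principle covers remembered defaults, auto-detected settings, and data shared between screens: if the program can know it, the user shouldn't have to retype it.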

At a recent project review meeting, developers at my company listed the strengths of our development process. To my surprise, user interface design was mentioned as one of our stronger points. While I haven't been around long enough to see many projects, I definitely didn't believe this was the case.

I think what has happened is that web applications have expanded the user base of our programs while lowering the training curve. Where we might have built an application for a specific group in the past and conducted training specific to their needs, we're now deploying web apps that are used by a much broader group that gets no specific training.

You can get away with mediocre user interfaces when you're there to tell a group exactly how it works (and they pass on that information), but if your work needs to be quickly understood by a broad base, then usability is a necessity.

That brings back bad memories.
Last year, during my freshman year of college, I was one of the lucky test subjects in a study conducted by one of the CS students. (See http://www.users.muohio.edu/birchmzp/csi/dharna.pdf [muohio.edu].) It was a poor VB project that had an annoying talking head bouncing around the screen, giving instructions, that I was explicitly told "not to dismiss." And yes, it used Microsoft Agent... I wanted to kill it. Especially when I had to repeat the same action over, and had to listen to

Much like seeing people write out something like this (which I see constantly): "I should of taken out the garbage." Damn, that one irks me... almost as much as when people spell out "Chow" as their signoff on an email. Of course I'm sure there is some grammar mistake in this posting just by the sheer fact that I'm criticizing others' grammar. Sue me!

Implicit in the "quick, simple, casual" test is that you are doing it constantly with a wide selection of people. Spolsky actually calls it "hallway testing", meaning you grab whoever is out in the hallway and ask them to do a quick test. That's better random sampling than you'd think.

You're missing another point, though, with your claim that "a user's mistake may be the lack of training": namely, that all users start out as newbies, and most users never progress beyond that level. If you spot newbie pr