Posted
by
ScuttleMonkey
on Monday September 25, 2006 @07:36PM
from the new-hotness dept.

jcatcw writes "From Xerox PARC to Apple to Microsoft, the GUI has been evolving over the years, and the increased complexity of current systems means it will continue to change. For example, Microsoft is switching from dropdown menus to contextual ribbons. Mobile computing creates new demands for efficient presentation while the desktop GUI doesn't scale to larger screens. Dual-mode user interfaces may show up first on PDA phones but then migrate to laptops and desktops. Which of today's innovations will become tomorrow's gaffes?"

A menu shortcut for opening 4 xterms, three on the left and one tall one on the right, filling all the space on the screen. I should have one for an emacs and three xterms (like in the above screenshot), but I don't.

Nine virtual desktops, each one accessible via Ctrl + Shift + one of the letters in the 3x3 block at the left edge of the keyboard (QWE / ASD / ZXC). I think of them
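
The mapping described above is simple to express in code. Here is a minimal sketch (names hypothetical) of how the 3x3 key block maps onto a 3x3 grid of desktops:

```python
# Hypothetical sketch: map the 3x3 key block (QWE / ASD / ZXC) at the
# left edge of a QWERTY keyboard onto nine virtual desktops.
KEY_GRID = ["qwe", "asd", "zxc"]  # one string per keyboard row

def desktop_for_key(key: str) -> int:
    """Return a desktop index 0-8 for a key in the 3x3 block."""
    for row, keys in enumerate(KEY_GRID):
        col = keys.find(key.lower())
        if col != -1:
            return row * 3 + col
    raise ValueError(f"{key!r} is not in the 3x3 block")

print(desktop_for_key("q"))  # top-left desktop
print(desktop_for_key("x"))  # bottom-middle desktop
```

In a real window manager you would bind each key (with Ctrl+Shift) to the corresponding desktop-switch command rather than compute it at runtime; the point is just that the keyboard's own layout serves as the spatial map.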

I think they have been slowly DEvolving over the years, becoming more bloated and complex. They are starting to get beyond the reach of the average Joe.

We have had simple and effective GUIs in the past, like Atari's GEM and Apple's Newton. Simple and effective. But they were tossed aside for much larger, more complex systems requiring more hardware and brain power.

I'd say the opposite. When systems are overly complex, it's a sign that they're in need of simplification. OS X shows what such a system looks like. Users have an easier time working with the system, while programmers have an easier time maintaining it.

Windows Vista shows what happens when you keep trying to complicate an overly complicated system. The system eventually extends beyond the control of the developers, making each change more and

Methinks you either slept through your college biology lecture, or just decided it wasn't worth going to. This is a diagram [okstate.edu] of one facet of a cell's existence: eating. Just that one thing, and there are hundreds of little dots, each of which stands for an enzyme. Then, in multicellular organisms, you have all the signaling pathways (which are multistage... think the seven layers of the OSI model) that are necessary for cells to interact, as well as the massive transport system with THREE different types of transport vesicles...

Then, if you think about the code for cells...in "evolved" eukaryotes, there are not only long sequences of DNA inserted from viruses ages ago, there are copies of genes that just don't work because they're mutated. Talk about junk code. But those sequences are dutifully preserved inside your very cells. It's a nightmare that even Microsoft would hate to dream.

I said that they're NOT GETTING MORE complex, not that they aren't complex already. While extra codes are swapped in and out, the general length stays approximately the same between generations of the overall organism. So individually, cells do not grow in complexity. However, a multicelled organism is more complex than a single-celled organism by way of a modular yet cohesive system. A bit like well-designed components in an Operating System.

Back on the subject of software, the more the complexity is packaged into simpler modules, the more the system above it can be simplified. The end goal is to have modules of a stable complexity (like TCP/IP) forming together to create a simpler OS. The problems occur when there's a monolithic structure that exposes lower-level complexity at a higher level.

My product, an image manipulation system, has had contextual, ribbon-based selection of tools since 1990. We use a chapter/verse metaphor: click on one level of the toolbar to select the chapter, such as filters or geometric transforms, and the next level slides into view, containing individual tools such as sharpness and feature removal, or ripples and rotations.

This layout, like MS's "new contextual ribbon" puts what you need in front of you, and buries everything else until you need it. Our chapters function exactly like MS's "tabs" and our verses function as accessors for sets of tools -- basically, there are three levels to the GUI. We don't put the third level in the toolbar, because there are far too many controls for some tools (as many as 70 sliders, buttons, drop-downs) and it is (we think) a poor decision to always take large amounts of vertical space in an image-processing application. Dialogs let you move all that tool-consuming real-estate around. They aren't modal, though, so you can keep working.
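
The three-level structure described above can be sketched as a simple nested mapping (all chapter and tool names here are illustrative, not the product's actual ones): chapters reveal verses, and each verse's many controls live in a non-modal dialog rather than the toolbar.

```python
# A minimal sketch of the chapter/verse toolbar idea described above.
# Chapter and tool names are hypothetical examples, not the real product's.
TOOLBAR = {
    "filters": ["sharpness", "feature removal"],
    "geometric transforms": ["ripples", "rotations"],
}

def open_chapter(chapter: str) -> list[str]:
    """Slide the chosen chapter's verses (individual tools) into view."""
    return TOOLBAR[chapter]

print(open_chapter("filters"))  # ['sharpness', 'feature removal']
```

The design point is that only one chapter's verses occupy the toolbar at a time, so screen space stays constant no matter how many tools exist.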

This really is a better and more evolved way to work, and I commend MS on finally getting the point (although I note with some humor that they certainly didn't invent this methodology.) Of course I'm partial to it, having been building and using such an interface for well over a decade now.

The thing that seems to stick in users' craws isn't the difficulty (or "increase in complexity", as you put it) of such a layout, because there isn't any, really... but simply that it is "different." Change is a force for user discomfort, especially UI change. I'm not saying that UIs can't get more complex; they certainly can. But contextual ribbons are a simplifying factor, count on it.

The evolution of organisms on Earth does show a trend toward complexity. However, complexity is not a goal of evolution, and it is not required for evolution. The analogy is false, because its premise is false.

The analogy is false, because its premise is false. Rather, if Chewbacca lives on Endor, you must acquit. I think that a function of evolution is that as traits emerge, a species starts to diversify, and the system by which the trait is favored becomes more complex, until it flat out wins; then there is a return to simplicity.

It's sort of that way with scientific theory. Someone will have a quantum leap (no pun intended) forward in a model that describes the universe, and it's something really short and sweet, like E=mc^2. And then science says, "Oh, except when you're in a crowded elevator!" and, "Well, not really for very large values of 2!" and wonderful stuff like that, until someone realizes that, duh, the universe is really simple. And so on.

I want to also say that when I say the universe is really simple, I don't mean we can comprehend it. I just mean it's simple. If Chewbacca lives on Endor, you must mod me +5 Insightful.

I have to agree. I miss the old KDE 2.2, GNOME 1.4, and Win95 GUIs for their simplicity. Nowadays both Windows and Linux are suffering from the bloat of feature creep, but I doubt we will be heard. Let's hope Xfce stays simple; there is always EDE or IceWM, and there are lots of light and simple window managers and file managers for Linux. Good thing Linux offers a choice that the Windows users won't get...

Yeah. I have to use FVWM2 with a minimalistic config file to get the setup I want. No GNOME or KDE for me; just too much junk in there. What use do I have for title bars, window borders, start menus, etc., when I primarily just use the keyboard? I wish there were a good way to do mouseless browsing, but I haven't found anything good.

Unfortunately too, people learn bad habits and build up expectations that will be with us forever. For example Start/Shutdown is so logically broken, but once people have learnt about the Start button, they expect to see it there.

It makes sense when you understand a) the purpose of the "Start Menu" and b) the history behind it.

The Start Menu is the "one stop shop" for initial tasks in Windows - it's the UI element you go to (or are supposed to) for launching programs, configuring the machine, searching, help, etc. It is (roughly) equivalent to Classic MacOS's Apple Menu, the NeXT Dock, and similar "do it from here" elements in other GUIs. Logically, in Windows, the "Shut Down" command belongs in this UI element and nowhere else (with the possible exception of a dedicated button on the taskbar, like Ubuntu does - although back in the day the problem would have been what icon to put on the button).

*Originally* (in the first "Chicago" betas), the Start Menu wasn't actually called the "Start Menu" and didn't have "Start" on it - it was just a button with the Windows logo, much like the GNOME and KDE versions. However, during their usability testing, Microsoft found that users couldn't actually figure out what to do when the system first booted and all they had was an empty desktop and taskbar, with a little Windows logo at one end and a clock at the other (I can't even remember if the clock was there at that stage). So the button got a label - "Start" - to signify that it was the UI element where you "started" to do everything.

First impressions count a lot, so if you take away the Start button most people will feel a bit lost and will have a negative experience. Thus people won't want to let go of Start even if it is in their longer term interests to learn something better.

It's interesting to note that in Vista, the "Start" label is gone. Presumably Microsoft's usability studies have concluded that the "Start Menu" UI element is now so entrenched, users no longer need to be taught what it is.

I've been developing touch screen talking pie menus [piemenu.com] on handheld devices, like the Pocket PC. Pie menus work very well with touch screens, but of course the way they track and display and give feedback has to be adapted to the quirks of small touch screens. Talking pie menus give you audio feedback with a speech synthesizer, so they don't require a lot of visual attention and hand-eye coordination.

Talking pie menus make it possible to use an application without looking at the screen! That's important for mobile applications like GPS navigation systems, which people use while driving (despite all the warnings against it).
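
The core geometry behind a pie menu is small enough to sketch here. This is a generic illustration, not the author's actual implementation: divide the circle around the press point into equal slices and map the pointer's direction to one of them (details like dead zones and the orientation of slice 0 are assumptions).

```python
import math

# Sketch of a pie-menu hit test: map a pointer offset from the press
# point to one of N equal angular slices. Slice 0 is centered straight
# up and indices increase clockwise; screen y grows downward.
def pie_slice(dx: float, dy: float, n_slices: int = 8) -> int:
    """Return the slice index for pointer offset (dx, dy) from center."""
    angle = math.atan2(dx, -dy)          # 0 = up, clockwise positive
    width = 2 * math.pi / n_slices
    angle += width / 2                   # center slice 0 on "up"
    return int(angle % (2 * math.pi) // width)

print(pie_slice(0, -10))   # straight up   -> slice 0
print(pie_slice(10, 0))    # straight right -> slice 2 (of 8)
```

Because selection depends only on direction (and perhaps a minimum distance), experienced users can "mark ahead" with a quick gesture without waiting for the menu to draw, which is what makes the technique work well eyes-free.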

You really ought to look at the marking menus in Autodesk's Maya, which have been around since before Maya existed, back when it was called Alias Power Animator. These marking menus are also hierarchical, and allow for moving up and down the hierarchy easily (which yours don't). Someone even developed it further as a script to include icons (Xumi [highend3d.com]). Also, there have been a number of pie-based gesture extensions for Firefox for as long as there have been extensions for Firefox, Firebird, etc... One such ext

The first publication that described the basic idea of pie menus was "PIXIE: A New Approach to Graphical Man-Machine Communications" by Wiseman, N. E., Lemke, H. U., and Hiles, J. O.; Proceedings of the 1969 CAD Conference, Southampton.

Of course I've heard of Steve Mann's work, and his Gnu/Linux Wristwatch Video Phone [linuxjournal.com], which used pie menus (but didn't talk as far as I know). He built his prototype pie menu watch in 1998, about 10 years after we (Jack Callahan, Don Hopkins, Ben Shneiderman, Mark Weiser) published a paper [donhopkins.com] about pie menus at ACM CHI'88. But in 1988 (and 1998), not many people had hardware they could carry around that was suitable for implementing talking pie menus.

Speech synthesis requires a lot of memory to store a good voice, and speech enabled applications require a lot of task-specific scripting control (so they don't start talking and talking at length about something the user is no longer interested in). I'm using the Lua scripting language on the Pocket PC, to develop flexible speech enabled touch screen pie menu based interfaces, which will run on commonly available Pocket PC phones. (I've done a lot of Palm programming in the past, but that's a dead platform.)

Here's a video that Dave Winer [scripting.com] took of me demonstrating an example application: a remote control for "Rock and Roll [podcatch.com]".

The Ribbon bar concept frustrates me no end. There's a reason that in Windows I switch everything to "Classic" mode. Having grown up with DOS from 3.2, then DOSSHELL, 3.1, 9x, and now XP, I like that the fundamental concepts haven't changed. Instead of floating icons that are "intelligently" moved around by the software, I would like to always have the ability to strip back the bells and whistles.

I can tell you, I have used it, and it is far superior to any other layout scheme in an office suite I've seen. It takes up as much space as a toolbar+menus, it has much larger icons which let you see what effect you are going to have. Everything is easy to find, the layout is very logical, it highlights the portions that you need at every moment, and last but not least: it's very pretty. It's actually something that MS has done right, it's shocking!

There is a learning curve, but it's not a very long one. After five minutes of clicking I knew basically where everything was (a vast improvement over the old "hunt through the menus till you find what you want" approach; here you can actually find things where they should be). If you are THAT annoyed over the ribbon, you are either a) not very smart and have a hard time learning anything new, or b) an unapologetic a-priori Microsoft basher. The fact is, it's far better than anything else on the market.

So long as we're still using the mouse/keyboard as a primary interface for our computers, the current GUI model will likely stay pretty much the same for at least a good ten years or so. Once something better comes along, such as AI-assisted video/object recognition, it may open options similar to what was in Minority Report. Until then though, using a cursor for interaction will remain more effective than cursing at our machines directly.

Voice recognition is getting so good nowadays as a result of handheld devices (I no longer dial numbers on my cellphone; I just say the name of the person I'm calling) that I think we'll start to see voice-recognition GUIs within the next few years, at least on handhelds. The keyboard may then eventually go the way of the dinosaur as an unnecessary peripheral. We'll always see new little GUI gadgets, some of them good (Expose) and some of them bad (Clippy). I don't mind having new options for how

Total agreement. Your input devices are going to define the interface far more than anything else. We're stuck in a rut with GUIs because people are used to them, and a control people are used to is worth two in the bush, so to speak. Witness everyone here kvetching about the ribbons in Vista. There's nothing particularly wrong with them, in situ; it's just that they're new. Which is awful. IMHO, the next big innovation in UI design will be touchscreens, hopefully of the multitouch variety. I just don

"As [displays] get bigger and bigger, you can get more information to the user," says Mary Czerwinski, principal researcher at Microsoft Research. But the current desktop GUI, which simply extends the same desktop across multiple screens, doesn't scale well. With more screen real estate available, computers will begin monitoring and presenting more information to the user.

This seems incredibly divorced from reality. Lots of people use multiple screens, and extending the same desktop across those screens works really well to manage the available space.

Well, they _work_, but I wouldn't say they work *well*. Some examples:

* OS X only has a single menu bar for all applications and all screens. So if your active application window isn't on the primary screen and you want to access the menu, you need to track all the way back to whichever screen is the primary to access it. Ditto for the Dock. Why can't there be a menu bar and Dock on each monitor?

(Personally, I've always found it rather ironic that MacOS was the early bringer of good multi-monitor support, but its UI really doesn't handle multiple monitors well.)

* Windows has a similar problem with only one Taskbar and only one Start Menu. Why not a Taskbar for each monitor and/or, even better, the ability to pop the Start Menu up directly under the cursor?

* Mouse tracking across multiple, big displays is slow or inaccurate unless you've got the twitch muscles of a fifteen year old first-person gamer. I want trackers on top of each screen that can monitor where I'm looking and move the mouse cursor to that spot.

* There's (typically) no "maximise across all screens" button.

So we should just take that extra screen and fill it up with pretty desklets? And this will make me a more productive person?

This seems to be the model most people think of when talking about multiple screens. For example, the typical multimonitor Mac user wants one screen for their Photoshop (or whatever) window and the other for all the palettes, toolbars, and feedback windows it spawns.

* Mouse tracking across multiple, big displays is slow or inaccurate unless you've got the twitch muscles of a fifteen year old first-person gamer. I want trackers on top of each screen that can monitor where I'm looking and move the mouse cursor to that spot.

Yes people have been looking at that, but it'll no doubt take quite some time yet to make it into any mainstream products. (As with Mary Czerwinski's research -- even Microsoft's own research lab have a tough time persuading the product designers to i

While I understand that GNOME has its admirers, and it can't be classified as a failure, it sure hasn't lived up to the hype of the early days.

GNOME was touted as being a real competitor to KDE, before the days of Qt being dually-licensed under the GPL. There was some initial progress, but since about 2000 it seems that KDE has been the leader. Ever since Miguel became more focused on Mono, the quality of GNOME really decreased.

The many usability problems are well known and were much discussed. One major flaw was the inability to enter a pathname or filename manually. The lack of path separators made the top breadcrumb trail difficult to follow at times. The 'Places' pane wasted a lot of space when it listed few items. The file list didn't show enough detail about each file. And it wasn't possible to view only certain file types.

Frankly, it was a rather massive mistake to include that dialog. When compared to the dialogs of KDE, Mac OS X, and Microsoft Windows, it was the black sheep. What was worse, on some platforms non-GNOME applications like Mozilla Firefox made use of that dialog, in turn making their usability a nightmare. While things have gotten better, and the newer dialog is a slight improvement, the mistake was still very costly.

I personally know about six people who used GNOME, and swore that they'd never touch it again after seeing that monstrosity. One went back to Windows, to the best of my knowledge. The rest switched to KDE, and have been quite pleased, as far as I know.

I think that the GNOME file chooser disaster is one incident that all GUI developers should learn from. At least then it wasn't a total waste.

Their dialogs have made me refuse to use apps under Windows that use the toolkit. Things like GIMP. When on Windows, why don't they delegate to the common controls provided by the platform instead of their own dreadful implementations?

For some reason, this is actually a UNIX trait. You should have seen the file selection dialogs in Motif, Athena, and various earlier X toolkits. It was as if programmers decided they hated their users. Many applications even wrote their own choosers. Oh boy, did they suck. The GNOME chooser is way better than the bad old days, but as you rightly point out, it isn't something to be proud of. (Try selecting a file or directory starting with a dot

Congrats on picking my pet effing hate. Our university servers seem to have that DAMNED GNOME file chooser as their only installed one, and as a result both Eclipse and Firefox use it for everything.
Here's a fun one: setting an external application as the default action for filetypes in Eclipse. You can't just type the command, can't use the $PATH var; you have to browse around all the bin directories looking for the app you want with that horrible chooser.
Grrr. The Eclipse guys do a really good job, but when choosing a "run" application, there should ALWAYS be the option to just type the command if you intend your product to be used on a *nix variant.

Let me start off with a disclaimer: I hate KDE. (Now, now, it's not the time for a flame war! :P)

Personally, I don't mind that interface. Besides, if that's your only problem with GNOME, then we must have it pretty good! I "strongly dislike" KDE's browsing system (one arrow left, one arrow right, one arrow up, one arrow is a crazy swirl, all so close together and so similar in appearance that it really gets frustrating at times). And why the default is set to open folders with one click is beyond me. I have one program (NoteEdit) that uses the KDE interface, and because of that, I didn't bother downloading all of the customization crap, so I'm stuck with it (if someone has a solution, please tell me!). Also, the taskbar/menu at the bottom always looks too cluttered to me. And the clock is just ugly. And why do they stack the window list in two rows? I came over from the Windows world and was introduced to GNOME and KDE at the same time (I was playing around with SUSE and Fedora). I liked both the same, and eventually my final decision came down to the GUI. KDE just hurt my eyes to use. It's a little hard to explain. All of the icons were so... BIG, and pixelated. And despite the fact that KDE looked a lot like XP's UI, I went to GNOME.

From what I can tell, people are about evenly divided on this issue. It's just whatever appeals to you. No, GNOME is not paradise incarnate, but to me, it's better. Besides, I'm sure you can customize that path chooser ;)

But isn't that the beauty of FOSS? The fact that you can actually choose? Sort of like democracy, it's all the arguments that actually let you know it's working.

You linked to an older version of the file dialog. There are really only two major problems with the GNOME file dialog: 1) It's dog slow when you type paths. Not just slow, but Shocking Slow. Absolutely Astounding Slow. 2) There is no way to view hidden files. There should be a "Show Hidden Files" button. It should be a button, not a pull-down or anything else. There are times when the file dialog really pisses me off. I hate that little one that Firefox and btdownload use. They always point to the

It's very true that the fixed menu doesn't scale... This is probably the biggest reason that I use Fluxbox. It allows me to right click anywhere on the desktop and pull up an application menu. Contrast this with my XP machine which I'm using now: It has two widescreen displays but the Start Menu only shows up on the left screen. If I'm on the non-Menu screen, I need to scroll across two desktops to click the Start button and then select. There are workarounds but some keyboards don't have the Windows key,

You sir have hit the nail on the head. This is exactly the same reason I use Flux too. I have never given it much thought, but the agony of using a different OS/DE and not having my application list NOW, close to my point of focus, drives me batty. As I type this, I think that is the answer. Point-of-Focus (I should (tm) that now and get ready to sue) is the MOST user-friendly way of accessing data. Your eyes are there, your focus is there, and more importantly your thoughts are there. Not needing to m

I often use keyboard navigation anyway with the Start menu. You can use the first letter of menu items to jump to them. If you don't have a Windows keyboard, Ctrl+Esc brings up the Start Menu (didn't this used to be a key combo for task switching under Win 3.1?), and Shift+F10 brings up the context menu (is that what you're calling the application menu?). Good luck getting the latter working with apps that refuse to follow Windows UI guidelines, like Trillian. Why are some programmers so ignorant?

A desktop GUI that is based on the menu system style of MythTV [sourceforge.net] instead of "START" would make it SO easy to navigate for novices. I mean, after all, what's wrong with a GUI for a computer that was made to be easy enough to navigate for people who watch TV?

Ideally the computer should just know what you want to do and do it for you. The problem is telling the computer what to do. I'm surprised that voice recognition hasn't progressed further. The Apple OSX voice stuff is pretty cool but not responsive enough to be usable. And all it does is integrate into the window manager. Why would I want to ask the computer to open a window if I just want to ask a question? For instance, say I want to know what time it is. I can't just ask the computer, "Computer, what time is it?" Instead, I have to say, "Computer, open clock" and then read the time. Maybe some feedback would make it better. Communication requires feedback. Maybe the computer could respond, like the XO of a ship responds to the captain. Captain: "Make turns for 30 knots." XO: "30 knots, aye."

I think a big problem is the mouse. The mouse is so great for so much, yet it falls short. I know they have mice that have practically a whole keyboard on them. I'd like to see that idea extended beyond the window manager also.

One thing that has really excited me recently is the Optimus dynamic keyboard [artlebedev.com] over at artlebedev.com. Thinking more about adapting the interface around the user and the software is important. A lot of that will require workflow analysis, such as "User A always saves before printing, so if they save, make the print icon easier to find and click."

A lot of what needs to be done the computer can do for us. The hidden options in MS Word are a good example of this. Although it was a support nightmare when it first came out, it really helps speed up the work when you are doing common repetitive tasks. This could be expanded to allow different hidden options depending on what you're working on. For instance, if you're writing a letter, addresses and envelope stuff should magically appear, but it should not show up if you're writing a scientific paper.

One thing that the MS monoculture has brought us is a somewhat standard UI experience for most users. That would be impossible with 100 competing OS's. The web does not offer that opportunity except maybe through some toolkits like Swing (which sucks), or Ruby on rails with the prototype.js. The monoculture has stifled innovation, however, so I hope in the future there will be more people thinking about design when they make their interface and MS being open enough with this Aero stuff to allow designers freedom to make something new. I seriously doubt that will happen, however.

And all it does is integrate into the window manager. Why would I want to ask the computer to open a window if I just want to ask a question? For instance, say I want to know what time it is. I can't just ask the computer, "Computer, what time is it?" Instead, I have to say, "Computer, open clock" and then read the time.

I don't know much about the present speech systems in OS X, but the older one in classic Mac OS had a "speakable items" folder that was mostly filled with AppleScripts. Speaking the name of any item in that folder would launch that item; if it was an AppleScript, it would do various things. The system shipped with a number of useful scripts already built in: one of them was called "What time is it?", and all it did was speak (via TTS aka MacInTalk): "It's [current time]", e.g. "It's five oh four pee em." (Then again, I don't find this very useful because I've got a menubar clock, as all Macs have by default for ages, so it's quicker just to glance up there).
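
The phrasing step of such a script is easy to sketch. This is a guess at the wording pattern, not Apple's actual script: convert the current time into the words a synthesizer would speak, e.g. "five oh four pee em" for 5:04 PM.

```python
from datetime import time

# Sketch of phrasing a time for a speech synthesizer, in the style of
# the classic "What time is it?" speakable item. The exact wording of
# Apple's script is an assumption.
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = {2: "twenty", 3: "thirty", 4: "forty", 5: "fifty"}

def minute_words(m: int) -> str:
    """Spell out the minutes as they are spoken aloud."""
    if m == 0:
        return "o'clock"
    if m < 10:
        return "oh " + ONES[m]          # "oh four" for :04
    if m < 20:
        return ONES[m]
    tens, ones = divmod(m, 10)
    return TENS[tens] + ("" if ones == 0 else " " + ONES[ones])

def spoken_time(t: time) -> str:
    hour = t.hour % 12 or 12
    suffix = "pee em" if t.hour >= 12 else "ay em"
    return f"It's {ONES[hour]} {minute_words(t.minute)} {suffix}"

print(spoken_time(time(17, 4)))   # It's five oh four pee em
```

The original presumably just handed a similar string to the TTS engine; the interesting part is that "oh four" and "o'clock" need special casing, since speech conventions differ from digits on a clock face.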

There was one really impressive script in that that would tell a number of interactive knock-knock jokes, called "Tell me a joke". So you'd say "Tell me a joke", and it would speak (via TTS) "Knock knock". A response of "Who's there?" would prompt it to select from a number of responses, and it would then listen for "[previous response] who?" after which it would deliver the appropriate punchline.

I just looked, and there is a Speakable Items folder and it has all this same functionality still. Runs a lot faster than it used to, too. Sweet.

The Apple OSX voice stuff is pretty cool but not responsive enough to be useable. And all it does is integrate into the window manager.

Actually, in OS X you can ask it the time, and it will speak it. You can also ask the date, tell it to start the screensaver, and a whole bunch of other crap. It's certainly not perfect, but it can do a lot more than just open/close windows.

Ideally the computer should just know what you want to do and do it for you. The problem is telling the computer what to do. I'm surprised that voice-recognition hasn't progressed further. The Apple OSX voice stuff is pretty cool but not responsive enough to be useable. And all it does is integrate into the window manager. Why would I want to ask the computer to open a window if I just want to ask a question? For instance, say I want to know what time it is. I can't just ask the computer, "Computer, what ti

The problem is telling the computer what to do. I'm surprised that voice-recognition hasn't progressed further.

I was just writing about this above. Actually, voice recognition has progressed considerably in the last few years, due to handhelds. Cellphone voice recognition is practically standard nowadays. There are a few problems with bridging the gap over to desktop computers (fewer with laptops, though), the main one being that most people don't have a mic built into their system. Companies have TRIED wit

Wow, that's some nice stuff. I might have to dust off my Linux hard drive (I have a dual boot, but haven't booted into Linux in forever) once this is out.

One thing that I've learnt doing web apps for high-profile customers lately (though it's an obvious thing): GUI sells, period. Yes, it has to improve productivity. However, the fact remains that many people have that darn screen in their faces 8 hours a day. It then becomes important that the GUI is interesting and attractive.

I say this as a KDE enthusiast with a background of being in love with GNOME: how come mockups like the ones you linked, and the ones found on many many other places online, have not been adopted yet into KDE4? Matter of fact, why are KDE4 and Qt4 themselves chugging along at such a slow pace? I guess I should be grateful that KDE3 is still seeing so much attention to detail, because it's only recently been able to woo me away from my GNOME desktop.

Well... I don't really mind having a bubbly/shiny GUI, IF it works correctly. And by that, I mean that it is implemented efficiently and ideally rendered in hardware. However, with Microsoft it never seems to work that way. Instead it's a big graphic slapped on top of something written in VB, which is running on top of a heavily object-oriented, high-level, super-inefficient program. I don't mean to sound like a Mac fanboy, but in OSX, the animations and interface don't gum up the works, because they're bu

I find it interesting that the examples of bad GUIs are 3/4 Microsoft. Those three are bad (Clippy? Bob? Ew. I get adaptive menus, though; the idea is valid, to a point.)

The Apple example, handwriting recognition on the Newton, is a good gaffe. Which is to say, it isn't something that any rational person would look at and say "That's dumb. Don't do that." It isn't Clippy. It isn't Bob. It's trying to get the computer to adapt to the person rather than getting the person to adapt to the computer. The big win for Palm was that Graffiti forced the user to adapt to the computer. Our handwriting is the way it is (hopefully) so that other people can read it, too. Typewriting is not a natural thing, even though some of us geeks reach WPM speeds that make it seem like it is.

When we're talking about verbal user interface gaffes, we'll find similarly goofy things, and we'll find things that made sense intellectually but didn't work in reality. That's what we call research, kids.

I firmly believe that when it comes to GUIs, change is almost always for the worse. One reason for this is that once a set of GUI conventions has become established, change is disconcerting--you now have to accustom yourself to the new "look" or to the new way that the GUI works. That inconvenience is rarely repaid by the alleged advantages of the change.

As an example, consider the difference between the Windows 2000 and XP desktops. Just how is the XP desktop better than the older one? I sure couldn't see any advantage to it. Yet, if you were to use the darn thing (and not switch to the "classic" view), you'd have to figure out again how to do a bunch of stuff you already knew how to do before the interface changed. This is progress? Even at the detail level, the changes are silly and unhelpful. Look at those three-dimensional window title bars. Why is that bulgy look better than the less obtrusive flat title bar of the old Win 2K interface? What convenience or information is added by the 3D bulge? Or how about the XP icon for video options--it's a screen with a flat paintbrush on it instead of the 2K screen with a round paintbrush and ruler in front of it. The two look different enough that it takes me a couple of extra seconds to find that icon in the Control Panel whenever I'm forced to use the default XP interface. It's not that the new icon is better or worse than the old one--but why ever change a familiar, easy-to-recognize icon? It's done to create the illusion of progress, of course.

Making icons look "cooler" in successive iterations of software is one of my particular pet peeves. Whenever someone releases a new version of their software, they think that people won't believe they got their money's worth if the GUI looks the same--so they jazz up the icons. Usually, this means adding more detail, even though this violates the basic principle of the icon: that it should be simple and easy to recognize. In other words...icons should be iconic.

That brings me to another reason why software publishers change GUIs. From the article:

The increased complexity of today's computer systems is forcing change upon the GUI. As the number of features has exploded, users have been overwhelmed with layer after layer of icons, tool bars and menu options.

Excuse me, but if you've got "exploded" features, then you do not have a problem that can be solved by a revamped GUI--you have bloatware. Clean up the mess, and start over.

I haven't seen these new "ribbons" MS is talking about for LongVista, but even the name is dumb. Look, the people at Xerox PARC gave us the foundation of a great GUI, and there's no reason to change that basic set of visual metaphors until there's a fundamental change in the mechanics of the computer/human interface. The requirements for a good GUI are well understood: it should be as simple as possible, it should be consistent between applications, and it should use easily recognized, familiar symbols and conventions. It most definitely should not change from one moment to the next according to the notions of some guy in Redmond who thinks he can anticipate what I want to do.

...absolutely all we need is halfway thoughtful, somewhat intelligent application of the paradigms we already have.

If software developers just spent an extra hour to watch an untrained user play with their software... and their managers gave them a couple of extra weeks to incorporate what they learned by watching... that would have more effect on software usability than the introduction of new techniques.

The problem today is that so much software leaves you gasping with amazement at the seeming perversity of their design. It's been observed since the day Windows 95 was introduced that it is stupid to turn off your computer from a button labelled "Start." Microsoft has had over a decade and one, two, three, four, five major software releases to do something about it, and they haven't. If they don't get it yet, all the pie menus and gestures and voice recognition isn't going to help them.

You may cry foul because this isn't strictly speaking, a software problem, but will you take a gander at the button layout on this portable DVD player? [dpbsmith.com] In case you don't get it--it's so mind-boggling it took me a while to get it--the northeast button moves you east, the southeast button moves you south, and so forth. That's why every button has a little printed arrow next to it.

An awful lot of modern software design seems to me to be putting little printed arrows next to utterly misplaced buttons.

Voice recognition is a common suggestion I read here, but I wholeheartedly disagree. I already think office noise chatter is too high. I don't want to imagine what it will be like when everyone is talking to their computer to tell it what to do.

What most replies here fail to understand is that an input method has its purposes and its uses. See the whole CLI vs. GUI argument here. Voice is just another input. It's great for GPS navigation or a mobile phone in your car, but for an office suite? Definitely not: ugh! How about in a library? How about at a LAN party? Anywhere there are many people.

I think that by convention every function available in an application should be accessible either directly or indirectly from the main menu.

This used to be more or less a design standard (I think Apple published it in their human interface guidelines?). For the most part, people use keyboard combos, toolbar buttons, or context menus; however, the main menu serves as a kind of index of all the functionality that is available in the application. On the Macintosh it is also a place to quickly look up the keyboard shortcut binding for a function.

Unfortunately, some developers have gotten lazy recently and made functionality available through only one source, instead of the usual triplet of main menu, context menu, and keyboard bindings. This is annoying when someone makes functionality that is only accessible by context menu, but it is crippling when functionality is only accessible from a keystroke. Worse, sometimes there is no documentation as to what keystroke is needed, and the functionality becomes less of a feature and more of an easter egg for whoever stumbles upon it.

Sadly, Linux software is the main offender here. Many developers are totally unaware of the importance and difficulty of good UI design, so writing a GUI becomes an afterthought. In large companies this is rectified by hiring people who specialize in UI design, and on the Macintosh and Windows, Apple and Microsoft publish UI standards that all applications should meet -- but no one seems to be providing this service for Linux.

One other deadly sin of software design is writing software that is only configurable through a text file. Having a human-readable text file to configure the application is a feature, but *not* having a preferences GUI in your application that wraps all supported features in the config file is just downright lazy.

Worse are applications that use a scripting language to configure themselves instead of a regular record format (i.e., XML properties files like Apple uses, or .ini files like on Windows, or the registry, etc.). Using a scripting language to configure the application makes the file more difficult to edit for novice users, makes syntax errors more likely because the syntax is necessarily more complex, and makes parsing by third-party applications more difficult because, again, the syntax is necessarily more complex. Additionally, a scripting language is just stupid overkill for a configuration file that needs to turn options on and off and specify a path. By definition, a configuration file shouldn't be doing anything *conditionally*. If something like that is in a .conf file, then you put it in the wrong place. Sadly, many Linux daemons are guilty of this (especially Apache, which is otherwise a nice and powerful web server).
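For what it's worth, the flat record format the comment above argues for can be sketched in a few lines of Python with the standard library's configparser (the section and key names here are illustrative, not from any real app):

```python
import configparser
from io import StringIO

# A flat, declarative config: trivial to parse, trivial for
# third-party tools to read, and with no room for conditionals.
INI_EXAMPLE = """\
[server]
port = 8080
log_path = /var/log/myapp.log
verbose = yes
"""

def load_config(text):
    """Parse .ini-style text into plain Python values."""
    cp = configparser.ConfigParser()
    cp.read_file(StringIO(text))
    return {
        "port": cp.getint("server", "port"),
        "log_path": cp.get("server", "log_path"),
        "verbose": cp.getboolean("server", "verbose"),
    }
```

Compare that with exec()-ing a config written in a scripting language: one stray parenthesis and nothing loads, and no other tool can read the file without embedding the whole interpreter.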

My prediction is one mentioned in the blurb: the contextual ribbon. It sucked in XP and it looks like it will get worse in Vista. It's an interface designed around the assumption that users cannot learn. It's great for a newbie, but it blows chunks for intermediate and advanced users. It's a usability issue: when menus reorder items, the user is unable to learn where they are. Half the locations I click on in Windows menus are those stupid down arrows to see the REST of the freaking menu!

If you have so many menu items that you need to start hiding them, rethink whether you need all those items. Think of <gasp> submenus. Think about other forms of command. Don't throw out the entire menu concept, because it ain't broke!

So if I'm airbrushing a busy background out of a photo, one with enough colour variation to make it a bit confusing where the background ends and the main subject begins (I edit photos for the technical manuals I write for industrial equipment), you can do this in GIMP with scripting?! Cool!

I think he meant "See GIMP, for an example of a spectacularly badly designed graphical user interface, and compare it to Photoshop, if you want to see how much better a well designed user interface can be."

I hate Adobe as much as anyone, but there's no reason for GIMP fans to lie about how easy it is to use, compared to Photoshop.

By the way, Photoshop has scripting, too. The GIMP fans should learn more about the competition before trying to trash it. One reason GIMP is so far behind Photoshop, is that many of its developers refuse to try Photoshop or learn more about it, because they want to remain "pure" (i.e. proudly wearing a badge of ignorance). That's why real artists who use Photoshop regularly can't stand GIMP.

GIMP has a pretty good separation between the interface and the backing engine. Because of this, you can get something like GIMPShop [gimpshop.net], which is a Photoshop-style interface atop GIMP's engine. So if you really hate the GIMP's interface, don't use it. Sheesh.

Also, when you say that Photoshop has scripting, do you mean that you can use a full-featured scripting language like Perl to execute Photoshop commands, possibly without even opening the GUI? Or is it an attempt to make a scripting language without requir

No, I think he meant see GIMP, because editing pics and photos can't be done without a GUI (at least not without going insane).

I agree with you that GIMP is not user-friendly. I have adapted, and can use it to do what I want to do... but I did give up on it many previous times... though I got further in it than in Blender. All I can tell you is, if you don't like GIMP then submit your complaints to the GIMP developers, and if you get no satisfaction then get your money back.

Especially when they're talking about using it through the command-line, for chrissakes. I can definitely think of some good examples of the command-line speeding up tasks immensely, but when you're dealing with graphics it's absurd to suggest most of the tasks (i.e., not mathematically generating abstract patterns or completing very simple tasks like red-eye correction) for which people use Photoshop can be completed more efficiently through scripting.
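To be fair to the scripting side, the batch case the comment concedes really is where scripting shines: the same known operation over many files, no eyes required. A minimal sketch, with a placeholder `op` standing in for whatever imaging-library call you'd actually make (all names here are made up for illustration):

```python
from pathlib import Path

def batch_apply(src_dir, dst_dir, op, pattern="*.png"):
    """Apply the same operation to every matching file in src_dir,
    writing the results under the same names in dst_dir. `op` is a
    placeholder taking raw bytes to raw bytes; in real use it would
    call into an imaging library. Returns the names processed."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    done = []
    for p in sorted(Path(src_dir).glob(pattern)):
        (dst / p.name).write_bytes(op(p.read_bytes()))
        done.append(p.name)
    return done
```

Great when the operation is known a priori; useless for the interactive airbrushing case above, which is exactly the distinction being drawn.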

I'd wager that, in the long term, GUIs might not increase productivity... but an -intuitive- GUI for the end user sure as hell minimizes training for a lay user. Visual icons representing actions are great reminders for those people, especially older ones, who can't remember three-letter shortcut commands.

Bottom line: for an expert user, GUIs slow you down. For basic to intermediate users, especially middle-aged non-techies, GUIs are a godsend -- when done right --.

Depends on the type of "expert." What if I'm an expert drafter? Or an expert artist (visual or musical)? Or, hell, even an expert accountant?

The only experts who really benefit from CLIs are experts who deal primarily in text.

But the most important thing to me is this: It's very easy to run a CLI in a GUI; it's impossible to run a GUI in a CLI. Therefore, all computers should come with a nice GUI by default and users can easily run Terminal.app (or whatever) if they want a CLI.

There's nothing you can't do in a shell that a gui provides extra ability for, when you've been well trained or decided to -learn- how to use a text mode interface well.

Moving multiple arbitrarily named and arbitrarily chosen files from one folder to the next (or other similar action).

Altering the arrangement of a screen.

Anything having to do with graphic design.

Oh, and:

For a simple example, look at a spreadsheet in its most basic form. Tab goes to the next column over, return goes to the next row down. Entire usage of the software can be made in a text screen, and FAR quicker than entering a number, moving to the mouse, moving the mouse to the next cell, clicking, then moving back to the keyboard, when instead you can enter a number, hit return, enter a number, hit return, etc.

A mouse is not fundamental to a GUI, and a good GUI allows for the same keyboard-driven arrangement that your "text screen" spreadsheet does. In fact, using a GUI lets you do things that you can't easily do with a keyboard alone--such as pick a few arbitrary cells to perform a quick calculation on.

How about just not having to remember commands. My brain has 7 slots of active memory, I'd prefer to use all 7 instead of having to swap shit out so I can remember a command, or the options that command takes.

command --help, man command, etc.
I'm not saying there aren't things the GUI is useful for, but there are definitely things that are faster with a CLI.
Oh, and any filesystem manipulation? Forgettaboutit -- the CLI is your friend.

--help and man are the problem. I've got 7 chunks in mind, I need to write a command to use them, shit, what's the name of that option again? man foo. Ahh, I see, now what was I doing again? Oh yeah, did I write that down? No, damn, better go back and find it. When you use the command line you actually learn not to keep 7 things in mind. You keep about 4 or 5 in mind and write the rest down, because you know you're gonna need 2 or 3 slots just to get the commands to work. GUIs eliminate that.

OK, now you're just being a little stupid. Sure, you could use batch mode to manipulate graphics, if you know a priori what the image looked like and what you want to do with it, or if you want to perform the same operation on a whole host of images. Prior to that, you'll need the GUI to see what the image looks like and to decide what kind of operation you want to do.

There's a lot of scientific user interface research that contradicts your sweeping claim that "There's been no evidence that they actually increase productivity...".

A shell is itself quite a sophisticated user interface, and the commands and scripts you type into the shell are user interfaces, themselves. The TOPS-20 operating system provided completion and help built into the command line of all its utilities and applications. Tell me that's not a user interface. Unix has a much worse, non-standard way of providing parameters to programs and getting help about their parameters, and a lackluster hodge-podge of shells and scripting languages, which are some of the worst text based user interfaces in common use.

There are many things that guis make easier, like picking from a list of choices (menus, trees, scrolling lists, etc), drawing and painting (sure you could paint in a shell by typing in x,y coordinates, but that illustrates my point that there are many common tasks that a gui is better for than a command line).

I understand that you're probably just trying to play the Luddite, by rejecting all graphical user interfaces out of hand in favor of a text based shell, but shouldn't you reject all computers, cell phones and other electronic (and steam driven) devices, if you really want to be consistent? I mean, if you hate bad user interfaces, then you certainly shouldn't use the shell (or at least you should run it under Emacs so you have some reasonable input and output editing ability), because most shells have absolutely horrible user interfaces (i.e. arcane syntax). That's right, the syntax of a scripting (or programming) language IS a user interface. Unfortunately many language designers (i.e. PHP, Perl) have no concept of user interface design, and make many foolish usability mistakes that a competent graphical user interface designer should never make.

Have you ever tried to explain csh history substitution syntax to your grandmother? Even if she knows how to send and reply to email with a graphical user interface, it'll probably take her a long time to learn how to use the shell.
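For readers who haven't met it, here's a toy sketch of the kind of substitution being complained about -- just the '!!' and '!$' forms, written in Python rather than csh (real csh history substitution covers far more, which is rather the point):

```python
def expand_history(cmd, history):
    """Toy subset of csh history substitution: '!!' expands to the
    previous command, '!$' to its last word. Real csh also supports
    !n, !-n, ^old^new, and :modifiers, among others."""
    if not history:
        return cmd
    prev = history[-1]
    return cmd.replace("!!", prev).replace("!$", prev.split()[-1])
```

So `expand_history("sudo !!", ["apt-get update"])` yields `"sudo apt-get update"` -- handy once learned, but hardly discoverable by grandmother.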

Read the article I linked to: it describes how TOPS-20 programs and commands can document themselves to the CLI, so it can provide the user with consistent completion and full help about the parameters, insert (and ignore) noise words, and provide completion over alternative symbol spaces for special types of arguments, like host names. That was quite useful when ARPANET addresses were only 8 bits long, and you could type "teln mit-?" to get a list of all host names beginning with "mit-" that you could telnet to.
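The "teln mit-?" behavior described is essentially prefix completion over a known symbol space; a toy sketch of the idea (the host table here is illustrative, not a real ARPANET host list):

```python
def complete(prefix, names):
    """List every candidate starting with `prefix`, the way a
    TOPS-20-style '?' would enumerate the matches for the user."""
    return sorted(n for n in names if n.startswith(prefix))

# Illustrative host table for the example above.
HOSTS = ["mit-ai", "mit-dms", "mit-ml", "sri-arc", "ucla-nmc"]
```

With that, `complete("mit-", HOSTS)` lists just the MIT hosts -- the same consistent help-at-the-prompt that the comment credits TOPS-20 with, and that most Unix shells still lack outside of per-tool completion scripts.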

GUIs are great for utilities that one uses only once in a while, say every two months. Going through a man page, keeping track of options, etc., is a nuisance, and memorization is not worthwhile for rare use. Likewise, well-organized GUI menus are nice for allowing access to commands that one uses rarely. Ideally, there are keyboard shortcuts for common commands.

> There's nothing you can't do in a shell that a gui provides extra ability for, when you've been well trained or decided to -learn- how to use a text mode interface well.

I've gone ahead and highlighted the critical flaw in your well-thought out argument.

People aren't well-trained in anything. The entire point of having a computer, for most people, is to make the computer SOLVE problems for them, not CAUSE problems that require training to fix. Most people don't want to take the nontrivial amount of time required to learn how to use a command prompt well, and those are the people GUIs are for.

Huh, really? Well, first of all, without a graphical user interface you can't see images, or even nice formatting. You also can't arrange windows to maximize your productivity, or for that matter do two things at once at all. Having a GUI doesn't mean always using the mouse. The mouse is a great tool, but so is the keyboard. Sure, you use the keyboard to navigate spreadsheet cells, but what about when you want to bring up a web page next to the spreadsheet to read off of it? I generally use the keyboard.

According to the latest research by the Yankee Group, it is also cheaper than maintaining a Linux desktop. However, Microsoft Vista, with its productivity whatsits and glossiness, will be cheaper, more productive, and more attractive.