The desktop metaphor has been around for a good number of years now. Surely the creativity within the software field can produce a newer, cooler metaphor that doesn't involve dragging windows around all the time.

I believe that the desktop metaphor will soon be stretched to its limit. What do you think?

Have you heard of any newer, cooler ways of using the power of today's computers that are different from, or more productive than, the desktop? Or do you have a crazy new idea of your own?


12 Answers

A GUI is horribly inefficient compared to a CLI. I can accomplish much more using only the keyboard than I can when the mouse is involved, unless I'm editing images.

3D desktops are generally not that useful, at least not yet. Anything where I have to move around to find things slows me down, and most other 3D interfaces are useless gimmicks. The one exception might be flipping windows around to keep metadata attached to the back; that is an idea I like.

Even though I dislike 3D interfaces, I do like the idea of using spatial data (e.g., remembering that I left the document over there is useful).

I like the ideas behind BumpTop, but so far the beta version has been a major disappointment. If the entire GUI file browser were replaced by it, instead of just the desktop, it might work better.

I'm a big fan of multitouch and feel it could make for a very efficient GUI system. However, until an inexpensive multitouch solution becomes easy to buy, I don't think it will catch on.

After a conversation with this guy some months ago, I realised that by far my favorite desktop replacement interface (besides some form of graphical super-CLI) is a zoomable desktop, where documents are spatially positioned on a plane that I can pan around and zoom in and out of. This would be coupled with orthogonal persistence and a smart application launcher: I shouldn't need to know which applications edit my files. I simply zoom into the text document and begin editing, completely unaware (if I want to be) that the system just launched OpenOffice in the background to let me edit the file. When I'm done, I simply zoom out and don't worry about saving the file or any other mundane details. This is the interface I hope to eventually see, especially if combined with multitouch.
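The idea above can be sketched in a few lines of code. This is purely hypothetical (the classes, the editor mapping, and the method names are all invented for illustration), but it shows the shape of the model: documents live at positions on an infinite plane, zooming into one silently selects an editor backend, and zooming out persists the document with no explicit save step.

```python
# Hypothetical sketch of the zoomable desktop described above.
# All class and method names here are invented for illustration.

class Document:
    def __init__(self, name, kind, x, y):
        self.name, self.kind = name, kind
        self.x, self.y = x, y        # position on the infinite plane
        self.content = ""

class ZoomableDesktop:
    # Which editor backend handles which kind of document;
    # the user never sees this mapping.
    EDITORS = {"text": "openoffice", "image": "gimp"}

    def __init__(self):
        self.documents = []

    def place(self, doc):
        self.documents.append(doc)

    def zoom_in(self, x, y):
        """Return the document nearest the zoom target and the
        editor backend that would be launched in the background."""
        doc = min(self.documents,
                  key=lambda d: (d.x - x) ** 2 + (d.y - y) ** 2)
        backend = self.EDITORS.get(doc.kind, "generic")
        # A real system would spawn the application here.
        return doc, backend

    def zoom_out(self, doc):
        # Orthogonal persistence: leaving the document saves it;
        # there is no explicit "save" in the user's world.
        doc.saved_content = doc.content

desk = ZoomableDesktop()
desk.place(Document("notes", "text", 10, 20))
doc, backend = desk.zoom_in(9, 19)   # zoom near the document
doc.content = "hello"                # edit it
desk.zoom_out(doc)                   # leaving persists it
```

The point of the sketch is that the user's vocabulary is only `place`, `zoom_in`, and `zoom_out`; launching applications and saving files happen entirely behind those verbs.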

Virtual reality is only useful for certain domains (scientific data visualisation, entertainment, probably a few more) and not general computing. Same goes for augmented reality.

Basically, I don't want to have to manage my computer at as low a level as we currently do (nor do I want to wave my hands about Minority Report style, waaay too tiring). As a user of a computer, why do I care about filesystems, saving files, starting programs, and so on? I don't. I only care that my work isn't lost and that I can edit it and otherwise accomplish what I want. The OS should manage persisting my work for me. The OS should worry about which applications are running. I shouldn't even see an application start up; I should just see my document, and as soon as I begin working on it, I can, without ever knowing that some program had to run.

Interesting ideas. The only thing I'd argue with you about is the document-saving part. If I'm editing something and don't like the result, I'd like to abandon all changes I've made since my last save. Continual version tracking wouldn't help either because ...
– Kevin Dec 3 '08 at 16:07

... maybe I've made about 500 small edits. It would be difficult to find the exact previous version I'd like to revert to. Not to mention that continual version tracking of each edit would be space-inefficient.
– Kevin Dec 3 '08 at 16:09

Well, I obviously simplified everything a little. Regarding orthogonal persistence, I think it should automatically persist everything, but it should also retain the state of a given file as it was when you started that edit session, and then I can either leave and have it persist, or...
– Dan Dec 3 '08 at 16:23

... hit a discard button and have it reverted back to what it was. Also, even though I don't want to have to think about saving my work, there's no reason why a manual save shouldn't be available, so long as it's optional. That way I can save manually when I feel it's appropriate and revert later changes.
– Dan Dec 3 '08 at 16:25
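The persistence scheme Dan describes in these comments (everything persisted automatically, but the state at the start of an edit session retained so the whole session can be discarded) is simple enough to sketch. The class and method names below are invented; this is just a toy model of the behaviour, not anyone's actual implementation.

```python
# Toy model (invented names) of the persistence scheme from the
# comments above: every edit is durable immediately, but the state
# at the start of a session is kept so the session can be discarded.

class PersistentDocument:
    def __init__(self, content=""):
        self.content = content
        self._session_start = None

    def begin_session(self):
        # Remember how the document looked when editing began.
        self._session_start = self.content

    def edit(self, new_content):
        # Orthogonal persistence: every edit is saved immediately;
        # the user never presses a save button.
        self.content = new_content

    def discard_session(self):
        # The optional "discard" button: revert to the snapshot
        # taken at the start of the session.
        if self._session_start is not None:
            self.content = self._session_start

doc = PersistentDocument("draft v1")
doc.begin_session()
doc.edit("draft v2")
doc.edit("draft v3")
doc.discard_session()   # back to the session-start state
```

Note that this also answers Kevin's "500 small edits" objection: the user reverts to one well-defined point (the session start), not to an arbitrary edit in a fine-grained history.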

BumpTop can replace the view in every folder... but it's not easy to configure in the beta (and it leaks memory). But yeah, it rocks.
– DFectuoso Dec 30 '08 at 5:48

Interface metaphors will remain 2D until we have inputs and displays that aren't 2D.

As for "Minority Report"-style interfaces, I cringed really hard when I saw those "wave your hands in the air" interface scenes in that movie because I knew that people would latch onto that as some kind of wave of the future. That kind of interface is horrible for most things.

How many hours a day do you people spend on the computer? Eight, ten, twelve or more? Waving your hands around in the air is really, really tiring.

It's like everybody has forgotten the failure of light pens already. In the 70s and early 80s, everybody thought light pens were the ideal way to work with computers. They mimicked the pen-and-paper thing everybody already knew, and there was a direct 1:1 correlation between the movement of your hand and the "pointer." What wasn't to love? As it turns out, light pens were actually awful to use and the mouse became the dominant non-keyboard input device. They were physically tiring to use and required expensive displays.

Anyway, discussions over input devices and UI metaphors are really missing the point. The problem is not that we can't point to things quickly enough, or that we need to do things in 3D, or any of that.

The problem is that we have too much information to sort through. How can a system - regardless of interface - help us to find the information most relevant to what we want? In other words, interface is important but the REAL issue isn't how we get stuff into the black box or out of it. The hard part is figuring out what to do inside of the black box, and that doesn't change whether you're doing things in two dimensions or fifty-seven dimensions.

I think this is a very good answer. I also agree with your remark about how difficult it is to organize information. I can't even get my bookmarks organized in any decent way...
– Diego Deberdt Jan 30 '09 at 14:14

Well, I'd say that Minority Report style interfaces DO have their uses. I imagine they would be useful for augmented/virtual reality applications (and I envision those being used in entertainment and visualization for scientific computation). Outside of those areas, though, I agree with you. I do not think this will ever become the desktop interface of choice, for the exact reason you've stated: it would be tiring as hell and inconvenient. I would like to see desktop systems make better use of spatial organization of data, though; I do see a future there. Audio-based interfaces, too.
– Dan Jun 29 '09 at 13:15

I guess in the future, people will never sit down in front of computers. :P
– Philip Morton Dec 3 '08 at 9:53

Though I find the technology extremely interesting and have been semi-following this kind of stuff for a while, I do not think it will be in general use. Ever. I think it may be useful in certain specific domains, but certainly not for general consumer use. Here's why: tinyurl.com/6q8omh
– Dan Dec 3 '08 at 9:59

I foresee something similar to oblong (but not as fully featured) being useful for things like home entertainment systems. It'd be nice to be able to change TV and radio stations, streaming media, etc. without using remotes. But then again, you don't want your kids fighting over this either. Hmmm..
– Kevin Dec 3 '08 at 16:03

I see this useful for some scientific computing situations (visualizing large and complex datasets) and entertainment. Outside of that, I don't see much use for it simply because it would be too tiring compared to alternative interfaces. I probably missed other potential use cases.
– Dan Dec 3 '08 at 16:29

Well, when you're talking about the future ... I mean, wouldn't it be sort of sad if in 100 years, we were still just using Web Browsers?
– BobbyShaftoe Dec 5 '08 at 20:48

I remember seeing a talk where someone said that the great thing about language is that it allows us to communicate without pointing and grunting. If speech recognition ever becomes useful, I believe that it will cause a return of sorts to the command line. We may still point (finger, mouse, whatever) in order to indicate the noun that we are talking about, but we will get back some of the richness of language with indirect objects, adverbs, conjunctions, and so on, things that we already have in the command line but which are unavailable to most users who only know the GUI.
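The idea in this answer (speech supplies the verb and modifiers, pointing supplies the noun, and the result is essentially a command line the user never typed) can be illustrated with a toy interpreter. Everything here is invented for illustration: the verb mapping, the stop-word list, and the function name are assumptions, not any real speech API.

```python
# Toy sketch of the speech-plus-pointing idea above: the spoken
# sentence carries the verb and modifiers, a pointing gesture
# resolves the demonstrative ("this"/"that") to a concrete object,
# and the output is a shell-style command.

def interpret(utterance, pointed_at):
    """Turn a spoken sentence plus a pointing gesture into a
    command-line-style token list."""
    words = utterance.lower().split()
    verb = words[0]
    # Map spoken verbs to their command-line equivalents.
    commands = {"copy": "cp", "move": "mv", "delete": "rm"}
    # Demonstratives resolve to the pointed object; filler words
    # are dropped, everything else passes through as an argument.
    args = [pointed_at if w in ("this", "that") else w
            for w in words[1:]
            if w not in ("to", "the", "please")]
    return [commands.get(verb, verb)] + args

cmd = interpret("copy this to backups", pointed_at="report.txt")
```

The interesting property is exactly the one the answer claims: the user gets the richness of a sentence (verb, direct object, indirect object) while the system still ends up with the same structured command a CLI user would have typed.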

I agree, and so does the book "The Pragmatic Programmer". It's MUCH easier to tell someone what commands to type, but fairly awkward to tell them to "click here, open that menu", etc. I can also type commands much, much faster than I can use the mouse, and I make extensive use of keyboard shortcuts.
– Dan Dec 3 '08 at 10:53

I don't know about you, but I rarely grunt at my computer!
– Tom Dec 7 '08 at 21:40

Like the hand-waving of multi-touch UIs, speech recognition will become more important, but I think both will remain niches.
– Diego Deberdt Jan 30 '09 at 14:21

The trouble with speech is the loud guy in the next cubicle... "I WANT TO DELETE THAT, YES"... and all my work goes too :(
– gbjbaanb May 16 '09 at 14:05

I would say that the next real "metaphor" is multiple devices. Your virtual "desktop" space will consist of a variety of devices around you, in the same way that an extended desktop works in multiple monitors.

Your mobile device would be a viewport or an agent in the model represented by the UI, so that, for example, you will draw on your iPhone and get it to appear on your whiteboard or computer, etc.

I can imagine a UI like the one shown in the 'Matrix' series, especially the third film, where users are connected to the device and actually experience and feel their way through the program. The program reacts to the body and eye movements of the user.
Very far-flung, though...

I like the interface in the movie "The Island", where they use a triangular prism-like device in conjunction with a pen, and simply rotating or flipping the prism causes an action. Yes, it is still "dragging windows around", but in what looks like a more intuitive and simpler fashion.

I think it's way too early to predict what will replace traditional GUI interfaces, especially because in some situations the traditional GUI is always going to be a good choice.

However, as we all know, traditional GUI interfaces aren't suited to many of the interactions we use them for. Other forms of interface will become prevalent in those areas, or better yet, the interface will become seamless, transparent, or even entirely natural. That is, the way we interact with computers is going to change dramatically, and in a lot of different areas.

Multitouch seems to be the way of the future. If we throw away the mouse, we'll have multitouch screens close to the keyboard, slanted at an angle rather than upright. We could also have other (larger) screens on the walls.

I'd like to say multitouch, but I really doubt it. Just because something is new doesn't mean it'll catch on for anything other than specialised systems, e.g., kiosks and occasionally-used displays. If I had to use a multitouch system all day long, you can guarantee I'd be off to the doctor's with elbow problems after a year. And that assumes you can keep your arm from resting on the display and interfering with what you're trying to do with your fingers.

The best I think you'll get for an often-used touch display is to integrate a keyboard into it, effectively turning the top half of your desk into a display: you interact mostly with the bottom section and occasionally use your fingers on the GUI at the top, perhaps organised like a draughtsman's table. I doubt it, though; why spend that money when you can just use a mouse, keyboard, and monitor?

I do think that the evolution of traditional displays will be JavaScript-based: all the competing display technologies that have appeared recently may fade in favour of a single JavaScript codebase that is hosted in a browser, a desktop app, your phone, or your TV.