A couple of days ago I read a blog post by Stephen Ramsay, a professor at the University of Nebraska-Lincoln and a Fellow at the Center for Digital Research in the Humanities. In it, he mentions that he has all but abandoned the GUI and finds the command line to be "faster, easier to understand, easier to integrate, more scalable, more portable, more sustainable, more consistent, and many, many times more flexible than even the most well-thought-out graphical apps." I found this very thought-provoking, because, like Ramsay, I spend a lot of time thinking about "The Future of Computing," and I think that the CLI, an interface from the past, might have a place in the interface of the future.

The most basic scenario: if I'm doing something infrequently, I want a GUI. Discoverability and recognition mean that I will spend a lot less time looking things up than applying them. On the other hand, if I do something a lot, a CLI allows me to cut to the chase and just do it (i.e. no digging through menus and dialog boxes). CLIs also provide much better facilities for automation.
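To make the automation point concrete, here's a minimal sketch (the `demo` directory and filenames are hypothetical): a one-off task like "lowercase every .TXT filename" takes a three-line loop at the shell, where a GUI would mean renaming each file by hand.

```shell
#!/bin/sh
# Hypothetical setup: a folder of files with shouty names.
mkdir -p demo
touch demo/README.TXT demo/NOTES.TXT

# The automation itself: loop over matches and rename each one.
for f in demo/*.TXT; do
  mv "$f" "$(echo "$f" | tr 'A-Z' 'a-z')"
done
```

The same pattern scales from two files to two thousand without any extra effort, which is hard to say of clicking through a rename dialog.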

The task matters too. If I'm doing something 'linguistic' (e.g. word processing, programming, file management), using a GUI means that the brain is constantly context switching between linguistic and visual/spatial processing. On the other hand, trying to express something spatial in linguistic terms creates a similar problem. If you've ever tried to use POV-Ray for 3-D modelling, you'll know what I mean.

Finally, there's the person involved. Some people are more linguistic, while others are more spatial/visual. Some people have excellent recall (great for CLIs), while others have a wonderful ability to recognize visual cues (fantastic for GUIs). Neither skill set is better overall, but each skill set is better in a certain domain.