Nowadays smartphones, tablets, and desktop/laptop computers are all siblings. They use the same UI paradigms and follow the same idea of a programmable, flexible machine that's available to everyone. Only their hardware feature sets and form factors differentiate them. In this context, does it still make sense to treat them as separate devices as far as software development is concerned? Wouldn't it be a much better idea to treat them as variations of the same concept, and release a unified software platform that spans all of them? This article aims to describe what has already been done in this area, and what's left to do.

I find the shell to be simpler, because it _will_ do what I want, unless I do something wrong.

A Turing-complete language with good documentation and no major bugs will do the same thing every time.
It also does nothing when I don't ask for anything.
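For example, a pipeline like this produces the same result on every run given the same input (the filenames here are just made-up examples):

```shell
# Count files by extension: strip everything up to the last dot,
# sort so duplicates are adjacent, count them, most common first.
printf '%s\n' notes.txt song.mp3 cover.png demo.mp3 \
  | sed 's/.*\.//' \
  | sort \
  | uniq -c \
  | sort -rn
```

There's no hidden state, no dialog box, nothing reordering itself behind my back; the same input always yields the same counts.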

I use GUI applications when there's no other solution that makes sense or works properly. I use qBittorrent because it has more working features than the CLI/curses alternatives.
I use EasyTAG because there are no simple CLI tagging applications. GIMP and Inkscape for art, for obvious reasons...

I use Conkeror for browsing because it's precise, and I don't need to fudge around with the mouse as much if a website is designed well.
I wish more programs used Conkeror's UI model. Hinting and keyboard command sequences are efficient once you learn them. Once you have a series of menu options memorised, there's no real difference between those and a keyboard sequence. At a certain point either becomes muscle memory, but with a keyboard sequence you're not dealing with variations in window or menu position.