A collection of assorted footsteps from the journey of designing an interface for a new mobile operating system. How to create value through design in places others cannot. How to focus on your own design game and avoid competing head-on. How to create emotional responses through engaging design to stay afloat in the ruthless hardware-centered mobile landscape.

Saturday, February 28, 2015

To evolve, or not to evolve?

That's not even a question.

Be it building shelters, gathering food or traveling long distances, people have always had an innate desire to do things better and faster. It has always been possible to improve some part of an activity or a tool related to it. Entire professions have been forgotten after becoming obsolete. Thanks to the increasing pace of technological advancement, our children will no longer recognize objects their parents grew up with.

Except when it comes to user interfaces.

I grew up with computers around me, and my kids will grow up with even more computers around them. Over the years, computers have gotten a lot smaller and immensely more powerful. What hasn't really changed is the graphical user interface staring back at us. The desktop metaphor with windows, icons, menus and a pointer (WIMP) has stayed intact for over 40 years.

The first mobile devices had no touch screens and had to be navigated with either directional keys or a scroll wheel. It was logical to use the same approach for such a miniaturized desktop, but when touch screens became more popular, users could interact with things directly. This made controlling a pointer redundant.

After the mouse pointer was removed and touchable things were made a bit bigger for comfortable finger operation, everything was ready for profit-making. Nobody seemed to question whether an interface paradigm originally designed to be operated with a keyboard and mouse (WIMP) was really applicable to mobile touch screen use:

Unlike desktops, mobile devices

are primarily used without a supporting surface (table or similar)

are used in dynamic environments with disruptions

can't assume the user is constantly looking at the screen

can't assume both hands are available for a basic operation

can't assume equal amount of time is available to perform a task

Regardless, all mainstream mobile operating systems treat mobile use the same way as desktop use. The familiar button-based navigation model, dating back 40 years, does not really qualify for mobile use. It requires too much attention from its user to be efficient. Too much precision to be comfortable. Too much time to be fast.

Replacing mouse and keyboard with touch alone just decreases the speed at which the user can control the system, actually making it worse than the desktop. It's been a wobbly decade of mobile user interface infancy. The only way it's gotten any better is through nicer visuals and smoother transitions. But that's just surface: better hardware clad in finer clothes.

At this rate, my grandchildren will still be able to identify an Android phone, because baby steps were considered good enough. That's a valid strategy as long as everyone copies one another and no alternatives exist: a family tree that looks like a ladder. It's an open invitation for smaller companies to deliver less inbred products that are designed to adapt to your life, instead of the other way around.

If you still think those archaic desktop conventions are enough to keep your massive software business afloat today, you're not the first one. The bad news is that the only way a dinosaur could avoid extinction was to stop being one and evolve into something else.

Before it was too late.

Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.

7 comments:

Talking of evolution, will Sailfish be ready to evolve from an application-based paradigm to something else? As you say for other themes (like clickable buttons coming from mouse-based computing), applications are also a remnant of the desktop computer era.

Looking at my own usage of the phone, I see that apps don't make for a perfect user experience. Apps divide the content into several chunks when it doesn't need to be divided.

Several examples:

* I never access the contacts app directly, but reach its contents through the call, mail or SMS app, where I actually need it.

* In the twitter notification you can see images, but you have about no option to do anything with them, whereas in the gallery app, which is also intended for doing things with images, you can do a lot.

* In mail, twitter, or some other app, you may have to open web pages or documents (a PDF, for example), which takes you to the web browser, and you have to switch back to the other app when done, breaking the flow (for example, going back to the twitter notification page requires launching the events view, tapping twitter, then scrolling back to where you were reading...).

In programming, we usually use the MVC pattern: Model/View/Controller. The model is the raw data (mails, SMS, webpage contents, images...), the view is the way to show it on screen (the UI part), and the controller is the glue between them. The good point is that several views can show the data from the same model differently, and one view can show things from several models.

From this idea, we could break apps into models (one each for SMS, mails, twitter, gallery, music, ...) and views (video player, image viewer, web viewer, PDF viewer, map viewer). The OS would then have to orchestrate all that data to provide a simple way to go through the contents of the models, with a good user experience and without the flow-breaking switch between apps.
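The split described above can be sketched in a few lines of code. This is only an illustration of the commenter's idea, not anything Sailfish OS actually implements; all class names (Model, TextView, ImageView, Orchestrator) are hypothetical.

```python
class Model:
    """Owns raw content of one kind (mails, tweets, images...)."""
    def __init__(self, kind, items):
        self.kind = kind
        self.items = items

class TextView:
    """Renders any text item, regardless of which model holds it."""
    def render(self, item):
        return "[text] " + item

class ImageView:
    """Renders any image item, again independent of the source model."""
    def render(self, item):
        return "[image] " + item

class Orchestrator:
    """The OS-level glue: routes each model's items to a capable view,
    so the user moves through content instead of switching apps."""
    def __init__(self):
        self.views = {"text": TextView(), "image": ImageView()}

    def show(self, model):
        view = self.views[model.kind]
        return [view.render(item) for item in model.items]

orchestrator = Orchestrator()
tweets = Model("text", ["hello from twitter"])
gallery = Model("image", ["cat.png"])
print(orchestrator.show(tweets))
print(orchestrator.show(gallery))
```

The point of the sketch is that neither view knows which model its content came from; only the orchestrator does, which is exactly the role the comment suggests the OS should play.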

We already see some work done in this direction in low-level stuff (like Tracker producing a model of all available resources, such as music, which can then be shown in a media player app), but it is far from complete, and I don't see a real movement at the high level towards this. Maybe BlackBerry's Hub is the best example of it (I don't know it very well but have heard a lot of praise for it), yet it is still only about general messaging, not all content.

Some content would still have to be launched some other way (a game, for example), but most apps still share something at some point.

You can find a related talk from Akademy 2013 by KDE's usability team about switching from an application-centric to a task-centric approach (called "flow" there). The first 10 minutes are the ones that matter (the rest is an implementation example): http://files.kde.org/akademy/2013/videos/Thomas_Pfeiffer,_Bjorn_Balazs_-_Computer,_I%27d_like_to_write_a_summary_of_this_meeting.webm

In SFOS 2.0 I'm very sad to see cover action swipes and the swipe from the top to close apps and lock the device disappear (two of my preferred features over other OSs). I hope we could get an option to replace the profile drawer with close-app and lock-device as it is now.

We had several prototypes and let people vote on which they preferred. I can't remember the exact behavior of those prototypes, since I was doing other things, but people voted for the version demonstrated at MWC.

I think people liked it because it resembled Android more, and so it got chosen. Again, I don't know the exact reasons it was selected.

However, I'm sure the app closing can still be arranged as the work progresses. I'm also sad to see cover actions go, but let's see if the partner space customization can make up for that.

Thanks a lot for your answer. I already guessed that people voted for this new UI because it is more Android-like, and people almost always choose what is familiar, even if it is less relevant. I am planning to write a post on Jolla.together.com regarding the Androidification of Sailfish OS with the 2.0 version.