Sometimes when figuring out solutions we go from A to D instinctively. The mental cookie you get from leapfrogging a few steps is an incredible drug. It's seductive, gives off an air of mystique; you pace along for 5 minutes and come down from the mountain with a solution.

I did it in the past. It's a lottery. The team will never know whether it will take you 5 minutes or 30 to come up with the next solution. Skeptics (i.e. developers) will demand an explanation and the B and C points of the path. You need to convince your stakeholders and your team that D is where you need to go, but you cannot quite articulate why. In the past I've used charisma, persuasion and authority to get over this. The solution is so obvious to you that you roll your eyes when your team/client/users dispute your suggestion. However, "Trust me, I'm a designer" can only go so far.

People label "creatives" and keep them at a distance. Never anger them or they will attack with their arsenal of pencils. Clients use this as an excuse to never get involved in the process, hoping that the "creatives" will surprise them pleasantly. Some "creatives" use this to build an ivory tower around themselves. They feed the mystique because they are afraid they will lose their jobs once people find out how they do what they do. And so they gamble with designs and jobs, then justify losing a customer with "they just had no taste, they couldn't appreciate my work".

In the end both sides lose. I have a few theories about why this happens; putting all of them together in a documented manner might fill a book or two, but I'll take a shot at it. I think the biggest factor is education: the marginalisation of art in schools. People are not aware that drawing is a language and a skill you can learn. Here is the truth about any creative pursuit:

Success is always commitment and hard work. Talent is just a shortcut.

That is rather unpleasant to hear. People are more comfortable saying "I just don't have the talent" than "If I invest several hundred hours I might get to that level".

"I know these are bad..." you say when you're showing your photos. Create an account on Flickr. Go every weekend and browse the Explore tab for 2 to 4 hours. Favourite the ones you like. Do that for up to 40 hours, or about 2-3 months. Note that I didn't ask you to take photos for 3 months. I only asked you to look at pretty pictures every other lazy Sunday for a few hours. After those 2-3 months, go take some photos. As if through a feat of magic, your compositions will have improved. People think learning photography is about wielding a DSLR and slaving over technicalities. In reality, it's about grabbing your cellphone or setting the DSLR on auto and learning to "see". Paying attention to details. Or taking a step back, ignoring the details and focusing on an interesting subject.

"I can't draw". Take a look at the following images:

Does the cylinder on the left look like something you'd draw?

These couldn't have been made by the same person, right? I mean, it's not like someone took 3 years and dedicated himself to learning to draw from scratch? Go ahead, take a look at his progress shots. But that's some guy from the internet, right? Let me put my money where my mouth is:

Left: 7 days after starting; Right: done at lunch

Left: first color sketch in Photoshop. Right: 8 months later

I'm nowhere near where the guy above is, but seeing his progress has helped and inspired me (and others, no doubt) so much.

"The minor gift is the innate gift. [...] Game design, mathematics or playing the piano comes naturally to you. You can do it easily, almost without thinking. But you don't necessarily enjoy doing it. [...]

The major gift is love of the work. [...] How can love of using a skill be more important than the skill itself? If you have the major gift you will design using whatever limited skills you have. And you will keep doing it. And your love for the work will shine through, infusing your work with an indescribable glow that only comes from the love of doing it. And through practice your skills will grow and become more powerful until eventually your skills will be as great or greater than those of someone who only has the minor gift. And people will say: "Wow, that is one truly gifted person." They will think you have the minor gift, of course, but only you will know the secret source of your skill, which is the major gift: love of the work.

There is only one way to find out if you have the major gift. Start down the path, and see if it makes your heart sing."

People look at my drawings and say "Wow, you're talented". A year earlier they would have had no chance to say that, as none of my drawings existed yet. That has been my path, tackled bit by bit every day.

I've become convinced that there is a process to creativity. If we can describe quantum mechanics, surely creativity cannot be more complicated than that. Now, I'm not a cognitive scientist; I've certainly not studied enough on the subject to begin to approach defining this process. All I have are floating bits and pieces.

An order of magnitude easier is to define a narrower scope. And this is where I want to bridge User Experience design with process. Thankfully I've matured past my youthful rebellion against paperwork and structure to recognise the benefits of process-oriented thinking.

A clean, concise process, explained to teams, customers and bosses, helps ease people into collaboration and understanding. It also brings you, the creator, back on the path of creative flow. Listen to yourself and find out your process. This will bring you back on track when you think the "muse" has left you. If you listen carefully, you will also find out why it "left".

Sharing this process will put everyone around you at ease. They will understand that it takes several tries and failed attempts, and that this is normal. They will learn the need to explore a domain of solutions instead of trying to find the one "correct" answer. Clients will know that you require input from them, and they'll prepare to be part of the process.

The disconnect in people's heads when faced with a design-related activity is avoidable. You remove the lottery. Take out the guesswork. Dispel the mystique. Teach people the major gift and share your love of the craft. Don't hide behind the minor gift.

You will discover newfound trust among people around you. And be recognised as a professional.

As time travelers usually do, he spoke in cryptic ways, poured out a lot of knowledge and had to leave quickly. He left me with the glimmer of hope that the software industry can achieve a 100% success rate instead of the abysmal 20% of today.

Tom struck me as what I would call a software architecture scientist. He is among the few who have worked in IT since the '60s, and he has a scholarly approach and detachment. He worked for major companies and helped steer very large boats to clear waters. Despite this, both he and his son display a Norwegian humility and openness, coupled with the desire to spread their knowledge.

At a first glance, what they presented seems to be just common sense. Focus on stakeholders and end users. Make sure requirements are clear and refined. Define clearly measurable quality attributes. Be concise in expressing architecture. Use Agile, but not as a silver bullet. Start delivering value in the first week. Reassess and learn during each iterative cycle... not only in development but in project management and architecture.

Sounds like bits and pieces of every other piece of software development advice? The biggest "twist" is thinking about architecture in an engineering way. "Real" architects and civil engineers have long enjoyed the respect and trust of people from all walks of life. Houses, bridges and so on have been built for hundreds of years without fail. Yet we struggle to keep our systems up almost every day.

Parallels and highlights

"We're proud that Agile has reduced a 40% failure rate to 20% failure rate. In other words we're proud that we only kill one pedestrian at each crossing instead of two".

"One of the biggest failures is not filtering/sorting/correctly quantifying the stakeholder values BEFORE the team starts developing"

"The human body manages all kinds of systems and if one fails, we die; we should engineer our systems the same way".

Force yourself to put the architecture on one page. Tom referenced a Mozart anecdote: "There are just as many notes as there should be". I immediately thought of U2's guitarist, The Edge:

"Notes actually do mean something. They have power. I think of notes as being expensive. You don't just throw them around. I find the ones that do the best job and that's what I use."

In software, a 200-page document will never be read. Every architect should go through the exercise of expressing a complex system on one page. The bad version of this is "The more I write the wiser I will seem".

Each line in the architecture must address or refer to a quality attribute.

Last week I read Kurt Vonnegut's 8 tips on how to write a great story:

"Every sentence must do one of two things — reveal character or advance the action."

Why not try to write your architecture so it is a great story?

Architecture that never refers to necessary qualities, performance characteristics, costs, and constraints is not really architecture of any kind.
This reminded me of a Ken Levine tweet which I can't find at the moment. Paraphrasing: "Oh you have a great game idea? Come up with a great idea that gets the whole team on board, can be done within budget and fits a large enough audience to make a profit... and then we'll talk".

Few speakers imparting such wisdom are backed by experience and a proven track record. When you see projects all around suffering from a lack of goals, and platoons of developers trudging along without raising their sights, yelling "Why are we doing this?", someone telling you a 100% success rate in software is possible sounds, if not like a time traveler, like an alien from a distant planet. I thought this was impossible. Yet after seeing the diamonds in the presentations, ideas present since the '70s, I wondered: why are they not implemented all around us?

I realised after the talk that to fully understand Tom and Kai's methodology you need to: have participated in at least two projects that failed; have been in direct contact with a software quality and measurement practice such as CMMI; have worked on at least two Agile projects (these can overlap with the ones that failed); have been on the front line gathering requirements from the client; and, of course, have been in a role where you define software architecture. That is a whole lot of IT-related experience to go through.

This, coupled with the fact that software is invisible and software cities crumble all around us, makes customers and CEOs afraid of fully trusting yet another methodology.

I've been on more than one occasion in the following dialogue. "What do you do?" "Oh, I'm an architect"; "Ohhh, really?"; "Yes, a software architect"; "Ah... I see". This drives me up the wall. I've shared with Tom my frustration that we software architects are not as respected as architects, lawyers or doctors, even though we can affect larger numbers of people. Tom smiled and said: "I give us 50 to 100 years before we achieve this".

This tool/framework/suite/software has some obvious flaws that I can clearly single out after working with it for several years.

I could make something better! I'll just take out the good, rebuild it with current technology and leave all the bad/outdated parts behind! This is surely my ticket to money and fame! (Alternative for the less deluded: there must be something better than this that I can use!)

Attempt to get up to speed ("will only take a few mins") with new, competing technologies.

Find out there are four... eight... 15 new frameworks, two new languages and three new build tools for you to grasp. The questions you are asking Google stopped being answered 3-4 years ago.

Start googling these new frameworks and languages, adding useful words such as "review" or "x vs y" or the ever popular "< framework >+< current year >" so you get the current status of the project instead of the hype-filled early material.

Find out there are new paradigms in how software is being developed. Research those paradigms. Definitely stop when you reach posts on http://lambda-the-ultimate.org

Start asking Google the right questions with the right paradigms.

Reach answers from the past 3-4 months on the topics.

Congratulations! You have reached the present day. Have a beer and relax. Tomorrow you will be back at work, where you will start mentioning all the cool new things you read about. Prepare lines to whine about the technologies currently used by your poor coworkers who have not been enlightened yet.

Wonder where the last 3 hours have gone. I was supposed to build something, wasn't I? Oh well, look at all the shiny new toys I have!

Every couple of years some hotshot comes along with Ruby/Python/language du jour stapled into the "framework du jour" and demos in 5 minutes what previously used to be a day's work; YOUR day of work. People flock, the Borg resists, adapts, assimilates, moves on in pursuit of perfection.

Rails becomes Grails, Python becomes Scala and Ruby becomes Groovy. make becomes ant becomes maven becomes gradle. As the famous saying goes, "Resistance is futile". As long as you have learned stuff, don't despair! Adapt and assimilate...

crumble (verb): break or fall apart into small fragments, esp. over a period of time as part of a process of deterioration.

Software is like a huge city that can expand to 10 times its current number of residents at a moment's notice. We could move the entire city (at some expense) to run above a stretch of water. We can repaint the walls of the city at the flick of a button. Similarly, we can remove an entire quadrant of buildings on a whim. The megacities have flying routes as well as dirt roads. They become more and more complex, more and more powerful.

They also can crumble in an instant.

We've all seen it. Our favourite website is suddenly down. The airport lady smiles, noting that our delay is due to "a problem with the system". The clerk at the office is unable to serve us today, he says, because the "system" is rebooting. And every now and then your computer stops working for unfathomable reasons.

The system is everywhere. It runs our cars, our water supply, our traffic lights and our banks. People accept these systems for good reasons: they make things faster, more accurate, shinier and apparently less error-prone than pen-on-paper or human-operated mechanical systems.

The mystery of digital items is that no one ever really sees or touches them. Even the datacenters, our closest physical representation of the "system", sit in remote facilities. We compound the trouble by naming things, and moving them, to the "Cloud".

Having been inside the city walls for what is now 9 years, I have to tell you, I'm mildly surprised computers manage to start every day. This post by Jean Baptiste Queru is a very neat explanation of all the little digital cogs inside the machine that are put in motion when you try to reach google.com via your browser. See also a more humorous take on this.

This is just you, connecting to google.com. Now imagine the system that runs our water, the traffic, our banks.

Abstruse goose on computers

We accept that we don't know how the bits and pieces end up going from one computer to the other. There's a disconnect that happens in most people's heads when they try to comprehend the "system". You might as well be telling them of purple tap-dancing zebras on a giant keyboard relaying the information at the core of the "system".

My current title is that of Software Architect. Besides soliciting oh-that's-so-cute laughs from Real Architects, it entails that I create the blueprints of such a system. In order to be successful, I must gather information and get all parties to agree on a set course; find the cogs and bits that will solve the problem and think about how they can work together; future-proof the system; convince the team to follow my ideas and then stay the course; and make a solution elegant and simple enough that I can brag about it to my fellow colleagues.

The reality is fraught with multiple contributors, legacy code that few people still understand, maintenance work done just to keep things running, and the ambitions of young developers eager to show their skill and ready to ignore previous wisdom because of its age.

This is an industry born 60 years ago. We've rushed from platform to platform, from one technology to the next, improving and disrupting, dismissing entrenched principles and recycling them 10 years later. The internet is now in your pocket and the world is a virtual one.

I hope that one day we will regain our users' trust. That software will take humans seriously, so that humans will trust software. That we can be trusted the same way structural engineers are responsible for our buildings. I will probably hear back that we now have processes, methodologies, unit testing and continuous builds! That things are so much better than only 4 years ago.

It is true, somewhat. We've come out of the dark ages and started a user-centered renaissance.

If you don't know where you're going, how will you know when you're there?

Life, mentors, bills, luck and curiosity have steered me down many roads in tech. In 9 career years I've learned my way through the Web, Java, Enterprise Java (two different beasts), Liferay, Web Services, iOS programming, game development and many other assortments.

I think I have finally understood my path. It's always obvious in retrospect.

I was 8 years old when my mother bought me an HC85 computer, the Romanian clone of the Sinclair ZX Spectrum. She saved quite a bit for it, and for programming lessons. Like any child I was fascinated by new things, but as my teacher tried to impart his wisdom on subroutines, variables and other intricacies of BASIC, my eyes often glazed over. When he showed me how to draw a dot, a line and a moving circle, I was hooked.

High school piled upon me heaps of algorithms, maths and abstractions that really struggled to get along with my brain. What latched onto me was the passion for computers that my CS teacher, Doru Anastasiu Popescu, projected in his class. It was that spark in his eyes that lit a fire in mine. To this day it is one of my most important criteria when recruiting teammates. I am really grateful for his guidance and inspiration; I also hope he has forgiven me for earning 5 points out of 100 at that CS contest to which he confidently sent me all those years ago.

You see, that contest, like many other computer science contests, involved reading data from a file, doing some funky math with it and writing the result into another file. No UI, only abstractions. I don't remember what the problem description was. I don't remember how the two hours flew by, or what town I was in. I remember coming home red-faced, thinking only: what will I tell my teacher?

This would be one of the many moments of self doubt that would follow. Am I a good programmer? Am I really fit for this? Why are these people encouraging me? You want to assign a team of how many to me?

While this is cathartic for me, I am also writing as an encouragement to others. I think constantly questioning whether you are good at what you are doing is an important part of growing up as a person and professional. You also need to move past this and tackle the next challenge.

In the end I graduated with an MS-Paint-like program that had no less than 5 windows and many instrument toolbars. I was quite proud of having written this program from scratch instead of redoing something from a previous generation of students. I clearly had a lot of fun writing it, testing it, laughing at weird display bugs. I distinctly remember presenting it to the commission, not worried that it wouldn't work, but with the enthusiasm of potentially expanding the program.

I'm 32 this year and I've spent a lot of time riding technology's waves, trying to find my path. Again and again I've let my brain pull me in directions which seemed interesting to me but looked like mere distractions compared to "real work".

Two things remained constant throughout the years: my interest in user interfaces and my plethora of abandoned side projects. When I was a child, it was circles on a screen. In college, OpenGL graphics and multitouch interfaces. As a developer, "web frontend" work. As a senior developer, more frontend work, except this time with enough experience for people to ask me how I think UIs should actually be built. As a Frontend Technical Lead (talk about title inflation!), a curse for junior developers when I spotted a 1px difference looking over their shoulder.

With many projects on the path, I ran into the abstractions again. Web Services. Enterprise Service Buses. Data storage and distributed platforms. Big data and batch processing. BPM and data mining. A myriad of frameworks later, I've come to this conclusion: these are all wonderfully complex puzzles that algorithm minded programmers love to solve. I understand them, worked with them but that is not where my heart lies.

I walked a good part of it, but now I've found my path's name: User Experience design.

I don't rank Gartner content highly, but this diagram actually makes sense. Posted by Gartner in July, gamification was very much high on hype; we're probably knee-deep in the trough of disillusionment by now. I've just seen one of Gartner's presentations on the topic. Managing to stay awake, I think I've figured out why we're in the trough. I also found out why I willfully ignored the hype ramp-up, and saw a few paths up the slope of enlightenment.

"Web developer by day, wannabe game designer by night."

The above Twitter bio has been on my profile since September 2008. And yet, merging the two never occurred to me. Continuing down the list of obliviousness, I didn't pay attention to the likes of Foursquare or Gowalla until now. It didn't help that gamification sounds like a marketing-drone-inspired word.

Why is that? Foursquare's "game" involves checking-in to a location. Its basic game mechanic, so to speak, is "push a button to get one point". The additional rule is "you can't push the button for a location more than once per day". Now, other than the difficulty of physically getting to the actual location, this doesn't seem like much of a mechanic, does it? Once you get enough points, you get a badge. If you best the other "players" in clicking, you may become "Mayor" of the location. In order to preserve your "Mayor" status, you need to do exactly what you did the other day. In gaming terms, this is called "grinding": " ... describe[s] the process of engaging in repetitive and/or boring tasks not pertaining to the story line of the game."
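Stripped of branding, the mechanic described above fits in a handful of lines. This is only an illustrative sketch of the rules as stated (the class and method names are mine, not Foursquare's actual API):

```python
from datetime import date

class CheckinGame:
    """Toy model of the check-in mechanic: one point per press,
    at most one press per venue per day."""

    def __init__(self):
        self.points = 0
        self.last_checkin = {}  # venue -> date of last check-in

    def check_in(self, venue, today=None):
        today = today or date.today()
        if self.last_checkin.get(venue) == today:
            return False  # rule: no more than once per venue per day
        self.last_checkin[venue] = today
        self.points += 1  # rule: push the button, get one point
        return True
```

Grinding, in this model, is simply calling `check_in` on the same venue day after day; nothing about the "game" changes except the counter.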

Of course, Foursquare has a business plan attached to its game: enticing businesses to offer deals and discounts to users of its platform. While in theory it sounds like a neat idea, a convergence of social, location, mobile and business, and TechCrunch is reporting a $600 million valuation for them, I believe they are not sustainable on this model alone. Why? Groupon, the internet star of online deals and discounts, had a disastrous IPO just recently. Customers looking only for deals are not repeat customers. Deals may have promotional value, but they are not long-term revenue providers.

Games, and more recently video games, are defined by gameplay, types of goals, art style and so much more. People play games to have fun, to be immersed, to escape, to learn and to win. Some reward skill, some strategy; some just appeal to our emotions or have a compelling story. There is so much breadth to games, and we are a ludic race. Games are everywhere around us, helping us learn and socialize from an early stage in our lives.

Video games have now been sold for 40 years. They have spawned a multitude of genres, we are on the seventh generation of game consoles, and gaming is now a $35 billion industry.

Entrepreneurs, developers and designers have now grown with video games and have created businesses based on game concepts or adapted game concepts to existing businesses. Some of these have attracted a lot of attention and here we are where even Gartner is putting forth reports and analysis on gamification.

Fun and accounting

Games are designed to achieve what businesses yearn for: engagement, loyalty, positive emotions, word-of-mouth marketing. Good games stay in the collective memory of gamers for years, even decades. Some are still played decades later. (If we include chess and the like, we can extend this to millennia.) How many of you still use (and have fun using) Win 95? Taking a look at '95-era sites through the Wayback Machine [archive link], the only adjective you can label them with is primitive. There's an argument to be made about Google's minimalism all these years, but then again, a screwdriver is still a screwdriver. Yet people still play Starcraft 1 and Diablo 2, games which are 10 years old. Why is that?

There are (were?) many ways in which games differ from business software or websites. The ones I believe most important: games focus on the emotional experience, and they are artfully realized, polished and fun.

Emotional experience: this is the one businesses run away from the most. Corporate software is generally valued if it is configurable, adaptable, one-size-fits-all, politically correct and so on. In other words: boring, unremarkable, trying too hard. Eliciting emotion brings loyalty. It also polarizes (which is why businesses run away from it). People talk about loving/hating Apple products. Customers who love your product won't stop talking about it. They will build a fan base for your product. They will be your evangelists. People who hate your product will blast you in online forums with much vitriol. Nevertheless, I feel it is a tradeoff worth taking.

Artfully realized: in another word, beautiful. Game studios are increasingly populated with artists instead of programmers; game engine development is nowadays a small part of game development. Each game menu is designed to match the entire experience of the game [screenshots here], even if you are just adjusting the video card settings. Concept artists and game designers put a lot of soul into a game.

Polished: even though games also suffer from deadlines and their share of bugs, the polish level is through the roof compared with other types of software. Game designers worry about pacing, rhythm, discoverability, guiding the player. Besides obvious bugs, testers are also noting how much fun they had while playing a level. Because it's a game, people working on games make sure there is nothing that can frustrate the player, break immersion or feel out of place in the game world.

Fun: surely accounting software cannot be fun? How can we turn "it's not frustrating" into "it is a joy to use"? Surely those are phrases every accountant would like to say. All the above combine to turn games into fun experiences that people want to share with friends, talk about, and spend money on.

Achievements versus gameplay

Achievements in the video game universe are a relative newcomer to the party. Microsoft, Sony and Valve have introduced their own systems for their own networks.

So what are achievements? "In video gaming parlance, an achievement, also sometimes known as a trophy or challenge, is a meta-goal defined outside of a game's parameters. Unlike the systems of quests or levels that usually define the goals of a video game and have a direct effect on further gameplay, the management of achievements usually takes place outside the confines of the game environment and architecture." (Wikipedia)

As it starts to click together, you can see that the badges and points that Foursquare and other "gamified" sites are peddling are actually tacked-on parts of a game: meta-goals. Sure, you can brag about them, but think about it: when visiting the Eiffel Tower, you're not going to tell your friends how amazing it was to check in with Foursquare there (if you do, there's something seriously wrong with you); you will tell them how beautiful the view from the top was. So we come to the problem, and the conclusion. All these gamified systems are missing the actual game part. There's nothing emotional or artfully realized about them.

Since I jumped on the hate train about one year too late, here is a list of links of other people thoroughly dismantling gamification:

The slope of enlightenment

Now that I've made my case against gamification: getting closer to games is actually quite desirable. I will go through the list again: emotional experience, artfully realized, polished and fun. I think all software builders should strive for these, regardless of what kind of software they are building. Software should provide a free and safe place to play, encouraging people to try new things and not punishing them for their mistakes. Giving users appropriate and timely feedback puts them at ease. Establishing clear, achievable goals and rules, and providing a challenge, makes people engage with your software.

There is so much more that business software can do to get closer to games. Gamification in its current incarnation is not it. I'll leave concrete ways of bridging the gap to future articles.

In the meantime, let's all try to bring a bit of fun into our users' lives.

Many people don't know what Twitter is. Some people have accounts and don't know what to use it for. A few have accounts, use them but still cannot define what Twitter is.

This is my personal take on Twitter. I'm not gonna explain the concepts, just what they do for me.

The gossip

Discussions on Twitter are a lot like gossip, except they are open to the audience of the intertubes. Short messages are prone to triviality, and sometimes that's all people see when looking at Twitter, dismissing it shortly after. However, brevity is the soul of wit. I've been at times frustrated with the 140-character limit, but it proved to be a powerful creative mechanism and one of the reasons Twitter works.

The realtime

One of the most frequent uses of Twitter among the tech crowd is following conferences. Conferences that are overseas, that you cannot reach, even talks you may have missed because you were in the other hall. Where 50 years ago several weeks could pass before you read a newspaper report on a conference, you can now follow it as it unfolds. Extrapolate this to important news, sports, or any kind of event, and you will see it ripple through Twitter's streams faster than anything else.

The reach

Because of the character limit and the simplicity of Twitter's interface, I think it brings an intimate way of publishing. A celebrity needs staff to maintain a Facebook page, never mind an entire website; 140 characters can be typed in so little time that everyone, no matter how busy, can share their thoughts with the world. In the time I've followed some favorite actors of mine (and other celebrities), I've been surprised by the level of candor and openness Twitter's discussions achieve.

The "It's not Facebook" factor

I opened a Facebook account at my family's request and keep it to share links from this blog. Compared to Twitter, I feel I am lost in a sea of videos, photos, and games. Every time I log in to Facebook, I approve x number of "Friend" requests, block an ever-increasing number of idiotic games and ponder the amount of time people waste on it. While I am not sure which way the information flows on Facebook (it's all a big wall to me), I can definitely identify it on Twitter. You don't "Friend" someone on Twitter; you voluntarily "Follow" them. A quite important distinction. If you follow what someone says, it means the content produced by that person is important to you.

The echo

There are quite a few wordsmiths on Twitter who condense relevant, witty nuggets into 140 characters, but looking at my stream I see 80-90% of tweets being links passed along. An opinion in a few words, with a link, from a person you respect gives that link a lot more weight. Following some intelligent people turns Twitter into a clever filter for the internet. This is also why many Twitter clients have first-class browsers built into them. In fact, I now check my Twitter feed for news first and my Google Reader account after.
An important part of Twitter is something that evolved from the community and was later included as a platform feature: the Retweet. This is Twitter's echo, gossip and reach all in one. One person posts valuable content, which is retweeted by their followers to their followers and so on. Like an echo traveling at the speed of light (and mouse clicks), you can now gauge how important some news is by how many retweets it had. Why do retweets work better than any other sharing mechanism? Because you are actively following the person who retweets. Someone retweeting has deemed the content worthy, and there's a good chance you will pass it along to your followers, saying "Hey guys, this is worth checking out. Trust me!". I think this social graph of trust is Twitter's greatest strength.

There are many ways in which people use Twitter. This is mine. What do you get out of it?

The Iron Man returns! This time accompanied by Playstation Move, Novint Falcon and Leonar3Do

As I went to see the second installment of Iron Man, I kept wondering whether they would still have the cool visualizations and interfaces present in the first one. Maybe the audience didn't really respond to them? Was it only a geeky segment that would be cut to make room for more pursuits and lasers?

Turns out my fears were completely misplaced. Not only did they increase screen time, they went crazy with ideas! 3D scanners! Surrounding interface! Digital basketball!

So, keeping with that theme, if you thought part one had too many videos, prepare for another video-post extravaganza!

Let's run the analysis on this one. In the first movie, I referred to the AutoCAD-like interface as direct manipulation. In this video, the interface Tony Stark is using expands that concept, placing the operator in the middle of the interface and stopping a few steps short of Star Trek's Holodeck. Despite that, I'll try to describe how we would start building such a thing.

Again, center stage is accurate response to direct manipulation. But this time, Tony is surrounded by the elements: his engines, digital representations of his suits, and he is able to manipulate any of them as he pleases. Do note that he moves through the room/garage while working. So if previously Tony used a stylus/table area on which he worked (the focus is on the working area; Tony is facing the "computer"), now the entire mode of operation is focused on him. Pardon the corniness, but I'd call this type of interface a Renaissance UI, as it redefines itself around its human operator.

How would we go about building this? Skipping the topic of holograms for now, let's presume we have a way to display the 3D elements in a spatial manner. We would need to track the entire "work area", meaning the room, for accurate positioning within it. Each of the digital entities would need to have an XYZ coordinate. Once that is in place, the human operator's position and full body posture need to be digitized and accounted for. There is a large amount of hand and head tracking in place. This needs to be very accurate and responsive; in this particular instance, head tracking needs to be especially sensitive. I think Tony's work room would have to be absolutely packed with sensors, projectors and cameras, sitting above a cluster of servers to process the entire thing.

To top it off, the room is not a cube-like (or even dome-like), completely empty box like the Star Trek holodeck. There are cars! Desks! Robotic equipment! Oh, the headaches these Iron Man designers give me... As Robin Williams once said about golf, "They put shit in the way!".
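To make the bookkeeping concrete, here is a minimal sketch of that shared room-coordinate system: every digital entity carries an XYZ position, and the UI can reorient each entity toward the tracked operator. All names and numbers here are hypothetical illustrations, not anything from the film.

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    """A digital object positioned in room coordinates (meters)."""
    name: str
    x: float
    y: float
    z: float

def yaw_toward_operator(entity: Entity, op_x: float, op_z: float) -> float:
    """Yaw angle (radians) the entity must turn to face the operator,
    measured on the floor (XZ) plane."""
    return math.atan2(op_x - entity.x, op_z - entity.z)

# A holographic engine 2 m into the room; the operator stands at (1, 0).
engine = Entity("engine", 0.0, 1.0, 2.0)
yaw = yaw_toward_operator(engine, op_x=1.0, op_z=0.0)
```

In a real system this per-entity reorientation would run every frame, driven by the head-tracking data, which is why the tracking has to be so responsive.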

I would say that a dome-like, empty room, inward-projected, 3D-imaged, with full body tracking, would be an interesting middle-of-the-road solution. Sure, you might not get to run around it, crumple engines into basketballs and dunk them into previously invisible baskets. (Note to Iron Man designers: yes, sometimes programmers like to put in easter eggs, but come on, a virtual basketball forming out of nowhere?)

So what do we have here? Enough projectors to cover all viewing angles (only two depicted), a similar array of sensors for body tracking, and our main character. Now, I didn't give him glasses to make him a geek; I gave him active shutter glasses to see the 3D images (the red cube).

Hypothetically, let's say we have gathered all the needed hardware: several dozen projectors, a lot of tracking sensors and 3D glasses. From the software point of view, we would need a complicated process to render the 3D imaging. Active shutter glasses work best when the distance between the screen and the glasses is known (more on that later). Then we would need an approximation of where the objects appear when "seen" in 3D, to give each element its XYZ coordinates. We need this to achieve accurate direct manipulation.
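A rough sketch of why the screen-to-glasses distance matters: with eye separation e, viewing distance D and on-screen disparity p between the left and right images, similar triangles place the fused image at depth z = e·D / (e − p). The numbers below are illustrative, not calibration values from any real rig.

```python
def perceived_depth(eye_sep_mm: float, view_dist_mm: float,
                    disparity_mm: float) -> float:
    """Depth at which the two stereo views fuse, from similar triangles:
    z = e * D / (e - p). Positive (uncrossed) disparity pushes the image
    behind the screen; negative (crossed) pulls it in front."""
    e, D, p = eye_sep_mm, view_dist_mm, disparity_mm
    return e * D / (e - p)

print(perceived_depth(65, 2000, 0))   # 2000.0 -> on the screen plane
print(perceived_depth(65, 2000, 13))  # 2500.0 -> half a meter behind it
```

Inverting this formula gives the disparity the renderer must draw to make an object appear at a target XYZ position, which is exactly the approximation step described above.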

Leonar3Do makes 3D work

Earlier this year I found out about a 3D editing solution for *CAD software. Here is their video.

This is a very impressive demo and a really useful tool for modelers. It's available for purchase for €750. Leonar3Do is the first commercial offering I've seen that is truly a 3D sculpting tool. The setup is rather involved, with 3 hardware dongles, screen measuring, mapping distance points on the monitor and so on. However, once it's up and running, it's a very interesting solution in the field of visualization. It is not as accurate as g-speak (it doesn't detect hands) and not as setup-free as the Kinect, but it comes closest to our Renaissance UI, because it projects 3D images into a real space where they are directly manipulated and seen as real objects.

This looks like our solution, right? Well, it does have several limitations, some of which can be eliminated, some that can't be.

Accuracy on the Move

The Move works in combination with the PS Eye camera for the PS3, and it is also capable of head tracking and accurate 3D manipulation. So how does it work? In the demo, you can see that the Move controller's motion is detected back and forth in 3D space. The colored round ball on top of it is responsible for that. The PS Eye camera picks up the ball, sees it as a circle and measures its diameter. Based on that diameter, it can calculate the distance. The most impressive part of the demo is, however, the accuracy of the sensors within the controller. Even the slightest changes are picked up by the controller and rendered accordingly. Motion sensors in each Move controller feed data back to the PS3. Together with the Eye, they provide a full picture of the controller's XYZ coordinates and its orientation. I have not seen any convincing 3D (active shutter glasses variety) demos with the Move yet, but I think it has the potential to surpass Leonar3Do in 3D sculpting. I've read that some internal Sony teams have already started to use Moves as 3D sculpting tools. For more *ahem* entertaining uses, see this.
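The diameter-to-distance step is plain pinhole-camera geometry: an object of known size that appears smaller must be farther away. Here's a sketch; the focal length and ball size are made-up stand-ins, not Sony's actual calibration values.

```python
def ball_distance_mm(focal_px: float, ball_diameter_mm: float,
                     observed_diameter_px: float) -> float:
    """Pinhole model: an object of real size D seen as d pixels by a
    camera with focal length f (in pixels) is at distance Z = f * D / d."""
    return focal_px * ball_diameter_mm / observed_diameter_px

# Hypothetical calibration: 600 px focal length, 46 mm glowing ball.
print(ball_distance_mm(600, 46, 23))  # 1200.0 -> ball is ~1.2 m away
```

Since the ball's real diameter never changes, one camera is enough: the apparent size gives depth, and the ball's position in the frame gives the other two coordinates.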

So: 3D, accurate, direct manipulation? Sure, it would be nice if it worked with our hands alone, but Sony realized the hardware and software are not good enough for that at this moment. As it turns out, you need some buttons. Honestly, I think Microsoft made a big mistake when it embarked on the "You are the controller" slogan. The Kinect would have been perfect working alongside the Move controller. As it is, Sony is supplementing the lack of sensors in the PlayStation Eye with software and the Cell processor's huge number-crunching capabilities. So, while Kinect's sensitivity might increase in the future, I believe Sony delivered the better solution for the moment.

And there is the one thing extra that Move brings to the table. Feedback.

The case of the missing feedback

A topic I have not covered yet is input feedback. It's missing in Iron Man, Kinect, g-speak and Leonar3Do, and it's a huge part of the experience. It's also one of the problems that falls into the same category as accurate holograms: something for which, given the amount of freedom we would want in the Renaissance UI, we don't have a solution yet.

Why is feedback important? I've written about the benefits of having an interface that mimics physical properties. Kinetic scrolling, inertia simulation, acceleration and deceleration are all bits that enhance the user's experience and provide familiarity in a new setting. We all learn the physics of this world from the earliest age. A natural user interface would be usable by a 2-year-old, as it would mimic the world they have already learned. And if your interface is usable by a 2-year-old, it means your other users will thank you for it. But I digress.
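Kinetic scrolling itself is a tiny physics simulation: after a flick, the velocity decays by a friction factor each frame until the view coasts to a stop. A minimal sketch, with constants chosen purely for illustration:

```python
def kinetic_scroll(position: float, velocity: float,
                   friction: float = 0.95, dt: float = 1 / 60,
                   min_speed: float = 1.0) -> float:
    """Coast after a flick: advance the position each frame while the
    flick velocity (px/s) decays, stopping below min_speed."""
    while abs(velocity) >= min_speed:
        position += velocity * dt
        velocity *= friction
    return position

# A 1200 px/s flick coasts to just short of 400 px before stopping.
final = kinetic_scroll(position=0.0, velocity=1200.0)
```

Tuning the friction factor is what gives each platform's scrolling its characteristic "feel"; the point is that a handful of lines of simulated physics buys a lot of familiarity.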

The missing bit in all this is reactivity, or feedback. In game console vernacular, it's also referred to as force feedback. You grab a stone: it is heavy. You pull a string: it offers resistance. You press a key: it has a travel distance and a stop point that confirms the key has been hit.

In all the Iron Man scenes, that feedback is missing. When playing with Kinect, that feedback is again missing. To accurately throw an object and hit a target, our body has learned to apply the required amount of force based on the object's weight (sensory input), distance (visual input) and previous experience (throwing other objects). Hitting a target through thin air can be, well, hit or miss. Having typed several paragraphs of this article on an iPad, I can tell firsthand that missing feedback is the thing that kept my typing speed down the most.

The Playstation Move has a force-feedback engine inside it. So, while driving a car, hitting a wall would make the controller bounce in the opposite direction. This adds a lot to the realism of operating such an interface.

The Novint Falcon is probably the best feedback device on the market right now. Ars Technica had this to say about it:

When you fire the shotgun, the orb kicks in your hand; it's impossible to fire any fully-automatic weapon for more than short bursts while keeping your aim in one place. This is interesting, because the game suddenly becomes much more tactical—you have to think about what weapon you're using, and aiming is much more difficult. I mean that in a good way; this feels more real in many ways.[...]The Falcon makes Half-Life 2 much more engaging and immersive.

I really don't have much to add, so please check out Ars Technica's review.

In closing, there are many challenges ahead in building a real-life Renaissance UI, but with systems like Kinect, Move and g-speak, we are moving closer to that reality every day. With the exception of g-speak, you can purchase any of these devices today and have unique and novel computer interfaces in your living room.

We are still in the very infancy of interfacing with such controls. Navigation, data visualization, gestures and workspace organization are all things that will be solved in software, and that we will need to figure out in the coming years. I will address my ideas for most of these in a future series of articles.

There is something to be said about sparking imagination. Part of filmmakers' role is inspiring real life: Star Trek, Minority Report and Iron Man all raised a bar that we can, for now, only reach with our imagination. And it is that imagination that drives us forward to build a ladder to that bar.

And now, here's the Iron Man song, as performed by Black Sabbath. No, it has nothing to do with interfaces, design or innovative controllers. It's just 5 minutes of awesome rock.

What is the next step in natural user interfaces? And what do comic book heroes have to say about it?

Iron Man is one cool hero, and Robert Downey Jr.'s 2008 portrayal made him even cooler than the paper concept.

But what did this geek notice while watching the movie, sighing and wanting the same thing? No, it's not the supersonic costume with limitless energy, but the 3D direct-manipulation interface that RDJ used to build said costume.

Though present for only a very limited time on screen (after all, there are baddies to kill and damsels to save), it was an "oh wow" moment for me. Two-dimensional direct manipulation is already here, thanks to the likes of Surface, iPhone, iPad and multitouch technology. But what about accurate, responsive 3D manipulation?

A discussion I had some time ago made me think again about the interface envisioned in Iron Man two years ago. It is, I believe, the holy grail of computer interfaces: direct, accurate manipulation of digital matter in three dimensions. The software used in the movie was representative of an AutoCAD-style package, used in modeling houses, cars, furniture and electronics. Let's analyze the specifics in the video and see how far along we are in building this.

No interface elements. There aren't any buttons, sliders or window elements. The interface simply disappears and you work directly on the "document" or "material". When designing any software product, the biggest credit a UI designer can get is that the user doesn't notice the interface; he or she just gets things done.

Kinetic/physical properties. Movement of digital items has speed and acceleration (rotation of the model in the first video), mass, impact and reactivity (throwing items in the recycle bin unbalances it for a moment). All of these are simulated for one purpose: making users believe they are manipulating digital entities in the exact same way they would act on physical objects.

Advanced (3D) gestures for non-physical manipulation. There are, of course, a large number of actions a computer can perform that don't map to a physical event, such as the extrusion of an engine.

Accurate 3D holographic projections. The digital elements take physical shape and are projected into 3D space.

Accurate response to direct manipulation. There are no styluses, controllers, or other input devices used.

So how far along are we? If you think this is all silly cinema stuff, prepare to be pleasantly surprised.

Learn to g-speak

G-speak is a "spatial operating environment" developed by Oblong Industries. If the above demo didn't impress you, check out the introduction of g-speak at TED 2010 and some in-the-field videos, such as this digital pottery session at the Rhode Island School of Design. The g-speak environment speaks to me the way Jeff Han's multitouch video did back in 2006. It took four years for that technology to appear in a general-purpose computing device such as the iPad. The g-speak folks hope to bring their technology to mass commercialization within five years. That sounds pretty ambitious, even coming from the consultants behind the 2002 Minority Report movie. Right?

Bad name, awesome tech: enter Kinect

The newly released Kinect is an add-on for Microsoft's Xbox gaming console. Whereas g-speak is still some years away from commercialization, you can get a taste of it right now with the Kinect.

Combining a company purchase (3DV), tech licensing (PrimeSense) and some magic Microsoft software dust, Kinect was born. Here are a few promotional videos, if you can stomach them. And here's Ars Technica's balanced review. According to PrimeSense:

The PrimeSensor™ Reference Design is an end-to-end solution that enables a computer to perceive the world in three dimensions and to translate these perceptions into a synchronized depth image, in the same way that humans do.

The human brain has some marvelous capabilities for viewing objects in 3D, helped by an enormous capacity for parallelism. It needs only the two inputs from our eyes to tell distance. Lacking the brain's firepower, Kinect uses a neat trick: an IR light source projects a pattern of infrared rays into the room, which is picked up by a second camera (a CMOS sensor). This is how the Kinect "sees" our 3D environment.
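Once the IR pattern is picked up, depth comes from plain triangulation: the projector and the sensor sit a known baseline apart, and each dot's sideways shift (its disparity) encodes how far away it landed. A sketch with illustrative numbers, not Kinect's real calibration:

```python
def depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Triangulation: a projected dot shifted by d pixels, seen by a
    sensor with focal length f (pixels) mounted a baseline b away from
    the projector, lies at depth Z = f * b / d."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: 600 px focal length, 75 mm projector-sensor baseline.
print(depth_mm(600, 75, 20))  # 2250.0 -> that dot fell ~2.25 m away
```

Repeating this for every dot in the pattern yields the per-pixel depth image PrimeSense describes, without needing a second "eye" like our brain does.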

Besides the IR projector and IR receiver, Kinect also comes equipped with a VGA camera and no fewer than six microphones. Microsoft took the PrimeSense design and added the VGA camera for face recognition and to help with the 3D tracking algorithm. The microphones are used for speech recognition; you can now yell at your gaming console and it might actually do something. All this in a $150 package that you can buy today.

From some reports I've read on the net, it appears Microsoft spent a lot of money on Kinect R&D. The advertising campaign alone is estimated at something like $200 million. It is only natural to assume they have bigger plans for Kinect than having it remain a gaming accessory. I believe Microsoft is betting on Kinect to represent the next leap in natural user interaction. Steve Ballmer was recently asked about the riskiest bet Microsoft was taking, and he replied with "Windows 8". The optimist in me says Windows 8 will be able to use Kinect and will have a revised interface to suit 3D manipulation. The cynic in me says he was talking about a new color scheme.

So what do we get? Almost no interface elements, kinetic/physical properties, and advanced 3D gestures from the original list. They also added some really cool stuff via software: in a multiplayer game, for example, if a person with an Xbox Live account comes into the room, they are automatically signed in to the game via face recognition. Natural-language commands bring another source of input for a tiny machine that knows much more about its surroundings than previous efforts.

What do we miss? Holographic projections, accurate response and, something missing also from Iron Man's laboratory, tactile feedback. The early reviews for Kinect all mention this in one way or another: Kinect's technology, when it works, is an amazing way to interface with a computer. When it breaks down, it reminds us that there is still a lot of ground to cover.

Microsoft's push for profitability (understandable; remember this is a mass-consumer product) removed an image processor from the device. This means it needs external processing power; at the moment, up to 15% of the Xbox's capability is reserved for Kinect. The small cameras and their proximity to each other require you to stand 2-3 meters from the device in order to operate it successfully. Because of the small amount of processing power reserved for it, Kinect's developers supplied the software with a library of 200 poses, which can be mapped to your body faster and more easily than a full body scan. You cannot operate it sitting down; it's my opinion that this is a side effect of the 200 pre-inputted poses.

You can also notice in the g-speak video above that their system reacts to the tiniest change, even when they move just their fingers. How do they do that? By using six or more HD cameras and tons of processing. The 340p IR receiver and 640p video camera just don't cut it for such fine detection. This is, again, an understandable means of reducing cost.
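The pose-library shortcut can be pictured as a nearest-neighbour lookup: instead of solving a full body scan every frame, match the measured joint angles against the stored poses and take the closest one. A toy sketch with two joints and two poses; all names and values are invented for illustration.

```python
import math

def closest_pose(measured, library):
    """Return the name of the stored pose whose joint-angle vector is
    closest (Euclidean distance) to the measured skeleton."""
    def dist(a, b):
        return math.sqrt(sum((m - s) ** 2 for m, s in zip(a, b)))
    return min(library, key=lambda item: dist(measured, item[1]))[0]

# Hypothetical library: each pose is a tuple of joint angles (radians).
library = [("standing", (0.0, 0.0)), ("arms_up", (1.5, 1.5))]
print(closest_pose((1.4, 1.6), library))  # arms_up
```

Matching against a fixed library is far cheaper than a full scan, but it also explains the rigidity: a pose that isn't in the library (sitting down, say) simply has no good match.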

On the other hand, Microsoft made a great move by placing Kinect next to a gaming platform. Games are by nature experimental, innovative processes. This gives everyone huge amounts of freedom to experiment. Made a gesture-based interface and no one likes it? You can scrap it, and the next game will try something different. This will give Microsoft valuable data for improving Kinect and filtering out bad interaction paradigms.

Kinect has a chance to evolve and become the next natural way to interface with computers. As processing power increases, so will accuracy. If you want to play like Iron Man, you can do so now with Kinect.

In the next installment, I'll talk about accuracy, feedback, the PlayStation Move and the (sorry) state of holographic projections.

Dear recruiters: I am happily employed, but like anyone, I sometimes go through my spam folder. So here are some tips, free of charge; they may help you at your job. In the highly unlikely event that actual recruiters read this, here it goes.

You want to get someone to work at your company or the one you are recruiting for? Please take an extra moment to make sure that:

the position you are recruiting for is not lower than the one listed on someone's profile page. I'm sure junior developers are highly praised and looked upon as gods at your companies, but I left that position 6+ years ago.

you are actually recruiting in the general technology area my profile is listed in. PHP may be the best technology ever for you, but I won't go near it willingly.

3000+/5000+/INFINITY+ connections? You must get up really early in the morning. Here's a cookie.

I am simply dying to be kept up-to-date with market opportunities that fit my profile. I'm sure you'll leverage your social connection synergies to match me to some large-enterprise/huge opportunity/promising startup every day now.

Spam is hated for a reason. Don't be a spammer. Generic messages sent to hundreds of people don't work.

"We would be grateful if you could send us your contact data". I'm not sure how this message was even delivered to me since you clearly don't have my contact data. Besides, with my LinkedIn profile, Facebook profile, this blog, my Twitter account, there is simply NO WAY to get in touch with me. Now, where shall I fax my contact info?

Take time to read someone's profile, blog, interests, use correct spelling and common sense. I'm sure the people you work for will appreciate you getting the attention of people they will then want to work with. The keyword here is people. Not candidates, not potentials, not resources. In the end, it's always people who do the work.

Synopsis: Here's my tale of Linux over the years and why I believe Android fits the bill for this article's title. But first, a bit of history and how the desktop had to change for Linux to be on it.

Ah, Linux... champion of open-source, love of computer geeks everywhere and owner of the cutest Operating System mascot around.

Most of my colleagues label me a Mac freak. With two Apple laptops, two iPods, an iPhone and a Time Capsule in the house, I can't really blame them. However, not all of them know that before hooking up with Apple I had a three-year stint with Linux.

This was almost 8 years ago, in a land where there was no Ubuntu, Android wasn't even an idea and editing /etc/X11/XF86Config-4 was the only way to change the screen resolution.
It's been a while, so allow me to reminisce a bit.

"Damn kids, get off my lawn!"

I started off with Red Hat 7.1, in a brave attempt to have a triple-booting system together with Windows 98 and the then newly released Windows XP. I sat down on a Friday afternoon and emerged from my room Sunday around lunch. I slept about 5 hours through the whole process and, after 20-something installation attempts across the three operating systems, I conceded defeat.

I was intrigued by my failure in this realm of technology, while thinking of the new worlds I had learned of. It's funny how Microsoft's domination of the operating system market made it strange to even question the status quo or think about alternatives. Learning about open source, volunteers, Unix, the command line, kernels and distributions was as strange to me as coming out of the Matrix was to Neo.

A few months and several stubborn sessions later ("you will learn vi's commands or starve at this keyboard"), I had learned a great deal about operating systems, partitioning, package management, scripting, window managers, boot loaders and other assorted varieties of unix-y knowledge. I firmly believe that any developer should know the innards of their preferred operating system, as well as what alternatives exist. This knowledge will help them write better code in some instances, but most importantly it will help them debug software when something goes wrong in the lower parts of the technology stack.

Red Hat, Mandrake, Gentoo, Slackware and Debian all served time on my desktop. Lycoris Desktop, Linspire, Yoper, Mepis, FreeBSD and other curiosities such as Linux From Scratch had brief runs during a period of experimentation. Knoppix was making the rounds as the first usable live-CD Linux distro, a feature now common to all distributions. At the time, running an OS from a CD was nothing short of amazing, even if agonizingly slow. Debian Unstable eventually became my base OS, and Xfce my window manager of choice, after testing Blackbox, Fluxbox, IceWM, WindowMaker, Enlightenment and, of course, KDE and Gnome.

I remember clearly spending one week trying out kernel builds on the 2.4 branch, ranging from keyboard-biting frustration to enlightening exhilaration. I made some really good friends that taught me as I went along. I understood communities and open source. I joined a LUG and went to a conference. I also didn't spend more than 6 months running the same OS on a daily basis.

Three years is a long time to run such an experiment, but I don't regret doing it (I don't recommend Slackware or LFS to anyone, though). I probably learned more about computers during this time than in any other period. I even wanted to start a business in Linux consultancy.

So what happened?

University years passed by, and pretty soon I needed my computer for "real work". Eventually the thrill of discovery and learning wore off, and I became weary of spending hours configuring things just to make them work. My respect for the Unix way of doing things remained, so I couldn't go back to Windows. Ubuntu was just a blip on the radar in 2003-2004. In spring 2005 I ran across John Siracusa's excellent review of Mac OS X Tiger, and the course was set. John's reviews have been epic enterprises over the years, sometimes anticipated by the community more than the actual releases of OS X. His attention to detail, precise critique and deep Unix knowledge drew my admiration and a desire to learn more about this OS X. One typical feature of his reviews is attention to the aesthetic. All of these, I would later discover, are things highly treasured by the Mac community; I'm sad to say the last one is still absent from their Linux-minded counterparts.

Three months later I did what any self-appointed geek does at some point: buy the most capable computer he doesn't really need. I went for a dual-CPU, 2.7 GHz G5 Power Mac and put my Linux days behind me.

A modern, Unix-based operating system built on a BSD foundation meant I would have Unix strength beneath the hood while benefiting from an interface built with usability and speed in mind. Sure, I might give up some "freedoms" found in the Linux world, but really, how many times do you need to change window managers?
Which brings me to the topic at hand.

Mainstream, schmainstream

Mainstream software as a concept lives and dies by the number of people using it. Software ecosystems thrive when users drive demand that developers strive to meet. I'm not going to mince words here: where operating systems are concerned, everything outside of that is a highly specialized tool, an academic experiment or a hobby.

It so happened that during my years running Linux, and thereafter, I ran across several articles, forum posts and discussions as to which year would finally be the year of "mainstream" Linux. What drove linuxists to this goal, besides recognition and free-software ideals?

Linux developers were united by another thing: an idealistic underground current against the Microsoft "oppression". Even today, Ubuntu's Bug No. 1 stands as an example of this counter-movement.

Microsoft has a majority market share in the new desktop PC marketplace.
This is a bug, which Ubuntu is designed to fix.

Microsoft's monopolistic strategies of the past, shady business decisions and outright hostile campaigns against Linux painted a big target on its back. Flame wars ensued, parodies popped up, salvos were fired from every camp. "Microsoft is evil/no it isn't/yes it is" flame wars will eventually pop up in any tech community.

It's a known thing that humans unite easily against a common enemy and rally behind heroes in any battle. And although Microsoft has always been the "enemy" for the Linux camp, a true "hero" never quite emerged. I've often thought of Ubuntu as a pacifying unifier of the various Linux tribes, spreading a message of love and understanding for users at the same time.

The other OS company running in the mainstream race, Apple, faced the same uphill battle against the Microsoft monopoly. They had a more focused approach and a lot of money, and still, after many years, they sit somewhere between 5 and 10% market share worldwide.

The desktop wars were won by Microsoft a long time ago, and the Windows+Office+Exchange+Sharepoint combination will be hard to "beat" in the near future. Apple had a clean break with the iPod, the iPhone and pretty soon with the iPad. Google won the internet race and Linux is hard at work on servers, embedded devices and phones.

Rise of the replicants

Since November 2007, a new hero has emerged in the Linux community. Android traveled a long path from a Palo Alto startup snatched up by Google in 2005 to an alliance-backed, open-source contender for the mobile operating system crown.
Microsoft ignored the web, and Google snatched it away. Microsoft also ignored the mobile space, and Apple stole the spotlight. Nokia struggled to unify its many platforms and UI toolkits. RIM focused on email and business users, while HTC took it upon itself to graft a modern, pleasant interface on top of the aging Windows Mobile platform.
Apple had shown with the iPhone that consumers appreciate usability with a top notch media and web interface. Mobile device manufacturers needed a modern operating system with a big software developer behind it.

This is the landscape into which Android was introduced, with the two Google founders rollerblading their way through business suits at the HTC G1 phone launch.

The G1 launch didn't set the world on fire; however, slowly but surely, Android gathered a lot of momentum. An army of droids is being assembled as I write this (tip o' the hat to my friend, Mihai).

Being used to lengthy flame wars in the past, I slowly recognized a trend among comments related to Android articles on sites I frequently visit. However, it really dawned on me that Android had become "the hero" for the Linux community after reading David Pogue's amusing follow-up to his Nexus One review:

Where I had written, "The Nexus One is an excellent app phone, fast and powerful but marred by some glitches," some readers seemed to read, "You are a pathetic loser, your religion is bogus and your mother wears Army boots."[...] It's been awhile since I've seen that. Where have I seen… oh, yeah, that's right! It's like the Apple/Microsoft wars!

Yes friends, wars, passion, heroes! Being an iPhone-toting Java developer among the open-source enthusiasts in our company, I soon started to get looks and remarks like "yeah, that iPhone guy who bows to Steve Jobs". Because you see, Android managed to unite two battle fronts: both Linux developers and Java developers (but that's a topic for a future article).

As I mentioned earlier, I've always been a supporter of Linux, even if it's not apparent at first glance. That's why I always get a laugh when overhearing the above line. At the same time, I'm glad to see passion among developers for a Linux-based platform. I truly believe passion is needed to bring people to create software, develop an ecosystem, rally behind an idea and, yes, bring it into the mainstream. This guy had the right idea, if lacking a bit in style.

Despite my continual purchases from Apple, I also believe competition is good. And unfortunately, besides Android, there have been few contenders to light fires under Apple's iPhone platform and force it to address its shortcomings.

But why a phone? And surely, if we aren't "winning" on the desktop, it's not truly winning, is it? Apple and Google may have targeted phones for different reasons and from different backgrounds, but they found themselves on common territory. Here's my take on it.

Computing has been gradually shifting away from the desktop and toward mobile: laptops, smartphones, tablets, e-readers. The computing landscape has changed in recent years, a fact obvious to many. We are witnessing a mindshift, a transition from general-purpose computing to device- and activity-specific computing. Falling costs, shrinking sizes and increasing computing power made the original iPhone twice as fast as my first computer, and the Nexus One five to six times as fast. What about constraints? Memory, storage and screen real estate are all at a premium on mobile devices; you can't just plug in another hard drive. You can, however, turn that constraint into a valuable creative asset: focus.

I believe it is this focus, together with a consumer orientation, that made this class of devices a success. Why is that? Targeting a reduced platform, a niche if you will, ensures you don't get distracted or waste resources. You can fail without taking down the company; it's a relatively low-risk avenue and an excellent test-bed for new interaction and UI paradigms. And if you play your cards right and use the right development method, you can then expand your operating system onto other, more generic devices that eat away at the desktop's hegemony.

Interestingly, Apple and Google arrived here by different roads. Apple leveraged its iPod legacy of industrial design and its flexible OS X platform, with a focus on media and entertainment (much of that being games) but also a premier web experience. Google wanted to leverage its excellent infrastructure for the "data in the cloud" paradigm while bringing Linux into mainstream use.

Google made the laudable decision to keep Android open-source. As a result, with people starting to use it in ebook readers, upcoming tablets and netbooks, enterprising developers are rapidly expanding Android's reach. Still, the emphasis on portability and battery performance, and Google's focus on this area, will ensure that for some time Android remains a mobile-devices OS.

Because I care about technology, the future, and human evolution. I believe the web is guiding our path there, fueled by innovation, marking our progress.

The future of the web is the future of our civilization. It is our collective fountain of wisdom, our interconnection of services and actions, our logger of events, the keeper of our data.

We must nourish it, protect it and help it develop. Help it mature. We need to understand it, and make sure it will understand us. Because when we are old enough, it might just take care of us.

How do I think we'll get there? Sharing, standards, innovation, design. In this blog, you'll read about web technologies, web companies, user interfaces and interaction, bits of design, game development and my accompanying thoughts on these subjects.