Category Archives: digital business

While browsing for distractions on my way to the airport, I stumbled upon Kat Hagan’s post “Ways men in tech are unintentionally sexist”, hosted on Anjani Ramachandran’s One Size Fits One. The post is part of a larger debate about women’s presence and recognition in tech, a debate that at its worst exemplifies the gap between how socially relevant issues could be discussed and how they actually are: we could make smart use of the wealth of bright minds and insightful data made available by the digital age; instead, we pursue click-baiting headlines and artificially inflated scandals.

To her credit, Kat Hagan has a much more thorough and thought-through approach, referencing scientific theories and academic papers, to illustrate how men can be unintentionally sexist when approaching/designing/managing technology and its development. We need more of that.

She then makes a list of behaviours that should be avoided, some of which are very reasonable and uncontroversial, such as not using “guys” when addressing a group of mixed genders, or ignoring women’s needs (the example of the lack of period tracking functionality in Apple’s new Health app is particularly spot on).

Other recommendations, though, may sound entirely sensible at first (as confirmed by readers’ comments), yet hide a logical flaw that often recurs in discussions around sexism and other forms of discrimination:

you can’t scale linearly from individual to mass.

While there are great variations between individuals, as you get to big numbers you see statistically significant similarities between people of the same gender. We all agree that we should treat each individual on their own merit, but should we extend that to millions, or hundreds of millions, of people in the face of these similarities? Should we ignore them? Or worse, deny them?
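To sketch this tension, here is a small simulation with purely illustrative numbers (not real data): two groups whose average preference differs by a tiny amount against large individual variation. Between two random individuals the difference is nearly invisible, yet across a million people per group the gap in means is unmistakable.

```python
import random

random.seed(42)

# Hypothetical illustration: two groups whose preference scores differ
# by a tiny amount on average (0.1) against large individual variation
# (standard deviation 1.0). All numbers are assumed for the example.
def sample(mean, n):
    return [random.gauss(mean, 1.0) for _ in range(n)]

group_a = sample(0.0, 1_000_000)
group_b = sample(0.1, 1_000_000)

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)

# At the individual level the groups overlap almost completely: a random
# member of B beats a random member of A only slightly more than half
# the time, even though the group-level difference is unmistakable.
wins = sum(b > a for a, b in zip(group_a, group_b))
print(f"mean A = {mean_a:.3f}, mean B = {mean_b:.3f}")
print(f"P(random B > random A) ≈ {wins / len(group_a):.3f}")
```

At scale the 0.1 gap in means shows up reliably; between any two individuals, it is barely better than a coin flip.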

I’m going to give some increasingly uncomfortable examples to show that things are more complicated than the sexism debate seems to account for, and that there are difficult questions worth at least asking ourselves.

1. Assuming gender identity

Kat argues that using avatars that are male by default is a form of sexism that should be avoided, but the underlying issue is whether we should allow ourselves to assume that a user is of a certain gender, and at what cost. The avatar example is an easy way out of the problem, because you can always go for neutral (although when I registered to Pinterest, with its overwhelmingly female membership, I’d have had no issue being presented with a female icon). Things get trickier when it comes to design choices that don’t always have an optimal neutral solution: colour palette; sizes; font; images of a user, such as a face, or a hand. If the numbers proved that there are significant differences in preference among the genders, and our platform were skewed male or female, should we ignore it? Should we opt for a neutral solution even if it doesn’t please anyone, as long as it doesn’t displease one or the other?

2. Assuming gender differences

Kat’s point no.8 is “Stop denigrating things by comparing them to women or femininity”, like saying “you fight like a girl” or “you like chick flicks”.

This is a campaign by Always. Who couldn’t like it? Who couldn’t agree with it?

Unfortunately it’s hypocritical, because it hides an uncomfortable empirical truth. In our experience (and there may be times and places where things are different) most girls fight “like girls”; most “chick flicks” are viewed and liked by girls; just like most “jerk” acts and comments are made by stupid males, and most horrible sex comments are mouthed by male “pigs”. Is it true that “like a girl” tends to be an insult whereas “like a man” is celebratory? Yes. But we have other derogatory terms for men: jerk; pigs; a**-hole; wanker… They’re all unequivocally male.

Should we replace “fight like a girl” with “fight like a bitch”? Is this what we’re talking about?

On the other hand, we can decide that we’re better off as a society by being hypocritical and treating these uncomfortable empirical truths as if they didn’t exist, but facts tend to be stubborn things, and in the long run hypocritical conventions end up damaging the broader cause they’re supposed to serve, because they make it come across as artificial and false.

3. Assuming gender interests

Kat argues that “assuming the women they meet are in non-technical roles” is a form of sexism: this is certainly true if you meet them at a tech conference, the (once again too easy) example that she chose to illustrate her point; it’s a lot less true if you’re introduced to a new team of mixed roles, or if you’re meeting students at a grad fair. You can legitimately assume that someone interested in Computer Science is more likely to be male because the numbers prove you right, so if hypothetically you only had time to speak with one applicant with no knowledge of their background, picking a man would not be a form of sexism, it’d be weighing your odds.
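As a back-of-the-envelope illustration of this base-rate reasoning (the 80% figure is assumed for the sake of the example, not taken from the post):

```python
# Illustrative base-rate arithmetic with an assumed number: suppose 80%
# of Computer Science applicants are male. Betting on the majority
# profile is a bet on the base rate, not a judgment about any individual.
p_male = 0.80

# The chance that one randomly chosen applicant is male is simply the
# base rate:
print(f"P(random applicant is male) = {p_male:.2f}")

# But the chance that at least one of two random applicants is female is
# already substantial, which is why ruling women out entirely is a
# losing strategy:
p_at_least_one_female = 1 - p_male ** 2
print(f"P(at least one of two is female) = {p_at_least_one_female:.2f}")
```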

Of course that doesn’t mean that you should rule female applicants out:

it’s ok to prepare for the usual, as long as you welcome the unusual with open eyes and mind.

But this is an easy-to-agree principle, so let’s move on to more troubling questions: if you’re a parent of a young girl, and you have to enrol her in an extra class of either literature or coding, knowing that right now she’s interested in both (or neither), what should you do? And if you were to build a new dorm for your future Computer Science students in a country where men and women can’t share facilities, would you split the space half and half?

4. Assuming gender capability

Kat contrasts the prejudicial view that “Women just aren’t interested in programming/math/logic” with evidence that “the variation between individuals dwarfs any biological difference”. Although counterintuitive, both statements are true: there are massive variations between the capabilities of any two random individuals, and that’s why we should always be judged on our own merit; but at the same time when it comes to large numbers, men are marginally better performing and significantly more interested in mathematical and technical disciplines.
We design technology for millions, sometimes billions of users, and even a marginal difference in response can amount to a dramatic increase in adoption, revenues, and success. Should we ignore that for the sake of equality? Should we do more than that?
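A quick worked example of how a marginal difference compounds at platform scale (all figures are hypothetical, chosen only to make the arithmetic visible):

```python
# Back-of-the-envelope illustration with assumed numbers: even a small
# per-user difference in response becomes enormous at platform scale.
users = 500_000_000          # hypothetical user base
baseline_rate = 0.020        # hypothetical 2.0% feature adoption
tuned_rate = 0.022           # a "marginal" 0.2-point improvement

extra_adopters = users * (tuned_rate - baseline_rate)
print(f"extra adopters: {extra_adopters:,.0f}")
```

A difference a survey might dismiss as noise translates, at this scale, into a million additional users.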

A famous experiment from a few years back showed that what we consider an absolute (e.g. how good someone is at something) is anything but: female Korean-American students were given a math assignment after going through a process that would remind them either of their gender or of their heritage. Participants who were primed on their Asian roots (positively associated with math skills) performed statistically better than equivalent students who were primed on their female gender (often associated with being bad with numbers).

If we’re pursuing equality, should we actively design technology requiring quantitative skills in a way that makes women forget that they’re women? Are women actually better off in a “sexist” office that calls everyone “guys”?

I’m not suggesting an answer to any of these questions, but I think it’s worth asking them. Human behaviour is counterintuitive and complicated: individually, we’re very different; in groups, we influence one another and form clusters; when you have to design for large groups, you inevitably sacrifice the uniqueness of each individual.

The point is not how to avoid discrimination.

We always discriminate: when a newspaper publishes an article with a certain font size; when a supermarket places a product on an eye-level shelf and another one high up; when it was decided to use certain colours for traffic lights.

We always discriminate in technology, too: when we decide what operating system we develop apps for; what apps we preload onto a device; what features we include in those apps.

The point is how to discriminate well.

If we look at the world we live in, we follow a few principles:

Discrimination must have a purpose: newspapers were printed in only one font size because, before digital came along, it would have been economically inefficient to do otherwise

It should be optimal for a sufficient majority: traffic lights are a bad solution for the blind and the colour-blind, but because most people don’t have such problems, it is the solution we chose

It should not make things too hard for the minority: if you’re too short to reach a product on the top shelf of a supermarket, you can ask someone to help you

Sometimes, it requires people to adapt: if you move abroad you can’t expect people to learn your language, you have to learn theirs. It’s a discrimination against new immigrants, but the alternative would be so inconvenient that they just have to comply.

When it comes to technology, we need to be aware that these trade-offs are an inevitable part of the job regardless of how uncomfortable they are, and so are the questions they bring along.

If Apple didn’t include period tracking in their Health app because it would have come at the expense of another feature or of a faster performance that would have made the product better for most of their users, would it still be wrong? And would it be an ethical question or a commercial question?

Would the answer change if there were fewer alternative health apps on the market?

How much worrying about sexism is too much?

And if we say it’s never too much, let’s rephrase that: how much disregarding of statistically different behaviours among genders is too much?

How much gender-neutrality can we pursue, without being counterproductive to the success of what we do?

[This article was first published on the Singapore Business Review: http://sbr.com.sg/retail/commentary/its-time-retailers-shape-their-own-future]

There are two ways to predict the future: the first one is to turn to the experts and trust their wisdom; the second is to look at the gap between what people expect and what they’re able to do, and see in it the shape of things to come.

After 20 years of research with 284 experts producing 28,000 predictions, Philip Tetlock concluded that “the average expert was found to be only slightly more accurate than a dart-throwing chimpanzee”.

While the object of his study was Political Science, it could just as well have been retail marketing: ever since the days of dial-up modems, analysts have been anticipating the day when brick-and-mortar shops would be made obsolete by a new generation of shoppers who would buy everything, from avocados to Z4s, from their computer/smartphone/Facebook page/Twitter feed.

Of course that day hasn’t come yet, and chances are it never will, so when we decided to investigate the face and fate of retail in the digital age for Havas’ latest Prosumer Report, we focused on real habits and expectations.

The state of commerce in the digital age
Surveying over 10,000 people across 31 countries, “Digital and the new consumer” discloses what we’re getting right and wrong about digital commerce and what the challenges are for online and offline retailers. (Spoiler: this is an example of what we’re getting wrong…)

Bill Gates once said that “we always overestimate the change that will occur in the next two years”, and this seems to be the case with mobile commerce: despite all the talk of it, only 22% of mainstream respondents have used a smartphone to shop online.

Things are about to change, though, as that figure climbs up to 38% among Prosumers, a relatively small cohort of influencers who have been proven to give a good indication of what the majority will soon think and do.

Moreover, if we take geography into account, mobile is confirmed as the new frontier: in Singapore, where shopping is almost a competitive sport, 48% of people have made purchases from their smartphone and 26% from their tablets, a figure that, given the category penetration, shows that virtually every tablet user is a tablet shopper.

Having said that, before we rush to stick miniaturized versions of our stores into an app, we should be aware that we are not just talking about another screen. It’s the shopper that is mobile, and that is fragmenting the purchase process across multiple real and virtual steps: the smartphone is only the glue that keeps it all together.

A transition towards a new form of shopping
51% of Singaporeans say that for major purchase decisions their first stop is usually the internet; that’s far from being their last, though, with 66% “showrooming” (i.e. visiting stores to see/try-out a product before buying it online) and 58% checking for price and customer reviews online while in a shop.

This blend of on- and off-line has unlocked the e-commerce potential of non-commoditized goods, such as clothing, shoes and accessories, which is now the most popular category of online shopping in Singapore (65%), well ahead of books (37%).

The most important insight offered by the Prosumer Report is that we’re not in a transition from an age of brick-and-mortar to one of bits-and-bytes, but rather from one of confrontation between the two models to one where retailers can create a hybrid model to respond to what is already a fluid experience in the minds and habits of shoppers.

Singaporean retailers have been waiting for too long
Unfortunately, the local industry seems to be lagging behind: some retailers are still very hesitant to create an online presence, leading to nearly half of all Singaporeans feeling frustrated; at the same time, e-stores have problems of their own, with 70% of online shoppers feeling overwhelmed by the amount of choice and information, and 64% still preferring to buy certain products in person for the tangible benefits of touching them and trying them on.

Real innovation seems to be coming from local start-ups, who understand that the best use of new technologies is not to support old business models, but rather to invent new ones: companies such as Swiff, MOGi or ERN are determined to unleash the full potential of mobile commerce, while Tate & Tonic is suggesting that we could do away with shops altogether, replacing them with a monthly subscription to a curated, personalized fashion collection delivered to your door.

While this is certainly good news for entrepreneurs and venture capitalists, established retailers should start worrying, as they cannot expect to leave radical innovation to smaller competitors and still stand to benefit from it.
Whatever commerce will look like ten years from now, it will reflect the objectives and needs of those talented and ambitious enough to shape it, and it will surely be more disruptive than just more windows on more screens.

After saying that “we always overestimate change that will occur in the next two years”, Bill Gates went on to add that “we underestimate change that will occur in the next ten.”

It’s time for retailers to stretch their imaginations and start shaping the retail industry of 2023, to ensure that they will play a part in it.

One of the most interesting aspects of how the digital revolution is impacting media is that newspapers have found themselves in a situation where they’re not just reporting on history from a safe distance, but also feeling its weight on their very shoulders, and this is proving to be a significant test of their journalistic ethos.

The latest example of this is Helienne Lindvall’s article for the Guardian, in which she polemicizes against the likes of Cory Doctorow, Chris Anderson and Gerd Leonhard for advocating free (music, media, films…) while at the same time demanding money to speak at conferences. The article really is as shallow as my words make it sound, and praise goes to Cory Doctorow for a polite and thought-through response, which even discusses the hows and ifs of his compensation.

I have too much respect for the Guardian to believe that they don’t get the debate around “free”, but it may be worth simplifying things in three points:

“Free” is not an ideological debate, it’s a practical one. I admit that authors such as Lawrence Lessig may mislead us, but the real point is not whether it is “right or wrong” to distribute content for free, but whether you even have an alternative. You don’t. Technology allows for free copying and distribution, and that’s what’s going to happen. We can legislate against it, but technology will always find a way around the law.

Given this, I understand that the government would try to enforce the law as a matter of principle (but even then, for how long, when you can spend months mounting an operation to take BitTorrent down, and it takes a few hours to put it back up?), but the nature of business is not to defend principles, it’s to make money. And no business model was ever built on “what’s right”: it’s built on what a company is capable of doing, and what consumers are willing to pay for it. That’s exactly what the advocates of freeconomics are saying: get paid for what people are willing to pay for, give away the rest as advertising. (By the way, this is also precisely what they’re doing. Chris Anderson gave away Free in e-book format for free, and charged for the printed version. At the end of it, he made money. And so, incidentally, did Wired.)

Let’s talk about the Guardian: what’s clear by now is that very few people are willing to pay to access online the same news they can see elsewhere, presented the same way. This leaves you with three options:

Offer news that no-one else is offering, and that a certain segment of people is willing to pay for. It’s usually specialist information for affluent categories. This is not really an option for a generalist newspaper.

Offer a different experience around the same news. This is really tough, as there are already some examples of engaging information experiences available for free.

Experiment with a new model. Chris Anderson would probably suggest you stop paying your best journalists and become their agent instead: give them as much popularity as possible, help them monetize it, and then take a share of the revenues.

As always, things are complicated. But the business essence of free is not. Free is neither a business strategy nor an ideological crusade. It’s the way some things are destined to be. “That’s technology, baby. Technology! And there’s nothing you can do about it. Nothing!”

So, I indulged in a bit of Google slapping in some of my last posts, but since I’m very much aware that this seems to be one of Blog-land’s favourite activities for 2010, I thought I’d be true to the spirit of this blog and complicate things a bit. Here are some unsolicited considerations on why and how Google can still get bigger and better:

Search is a by-product

This is a fundamental point for everything that follows. Search is not a product. Not because it’s a feature: of course it is, but that’s not the point. In people’s eyes, the difference between product and feature is meaningless. Search is not a product because it carries no value in itself. It’s a by-product: it is generated by, and only makes sense with, a worthy product.

When Google launched, the web was one mammoth product that needed to be sorted out. Now, you don’t need to agree with the theory that the web is dead (it’s not; the logic and math behind that theory are flawed) to acknowledge that we now have many products within the web: Facebook is one, but there are many others, such as Amazon, Groupon, Yelp, Wikipedia (i.e. the verticals). Facebook is not trying to replicate the internet within itself because Zuckerberg is ambitious: every one of the above sites has a natural drive to do it, Facebook is just getting there sooner and better. And that’s not a problem. However, all those sites have generated their own search as a by-product of their increasing complexity. That’s the problem.

Products create scale, by-products create profits

Up until the mid-2000s, things were pretty simple. Someone else created the product (the web), and Google profited from the by-product (search). This was only possible because in an atomized web no player was big enough to generate the user value needed to create a self-sufficient ecosystem. Even Amazon struggled to achieve the scale needed to generate profitability.

When the internet finally went mainstream, things changed. Facebook, Amazon, eBay and Skype were amazing products capable of drawing millions of users, effectively creating massive ecosystems. The scale of these ecosystems generated at the same time the demand for certain by-products (ads, search, payment) and the business model that would support them. In a line, the lesson was: offer an amazing product, build a massive community, and money will come. (Ironically, that’s exactly what caused the dot-com bust, but the problem back then was that companies went public before they went popular.)

Don’t name the product after the by-product

First, after Google Wave and Google Buzz, your next Google-branded product will suffer an unnecessary PR handicap. Second, it confuses users: Google stands for search, and that’s what they know and use already. If you build a product, you shouldn’t give it the name of the by-product.

If you look at what’s working, you’re doing it already: your two most successful recent products are not called Google Browser and Google Mobile, they’re called Chrome and Android. And they’re utterly brilliant!

And by the way, this is where you have an advantage over Facebook: because of its strategy, Facebook needs to bring everything under its roof and its name, and that limits what it can do and stand for. (The only exception is the “Like” button, which has a very different nature from Facebook: it’s no coincidence that it’s not called “Share”. It’s a bookmark, a favourite, a Digg with the scale that Digg never had: half of its value is one-to-self, to keep track of what you like; the other half is one-to-everyone, to broadcast your preferences to strangers. Neither of these two propositions is consistent with Facebook. That’s why it has massive potential.)

Start from Android

It’s a great product, a great brand, and it’s doing brilliantly. The only thing that could jeopardize its success is a lack of leadership, and this is exactly what’s happening. We all like the idea of an open environment, but marketing is like physics: there can be no vacuum; when something leaves a space, something else fills it. And in this case it’s the worst substance of all: operators. They’re taking advantage of Android’s openness to pre-install junkware, restrict access to applications that are bad for them but good for users (such as Skype), and impose arbitrary limitations. As of now, they’re the single greatest danger to Android.

Just like democracy, technology needs leadership: you need to inspire, educate and, yes, regulate. And then let people vote with their fingers. Hopefully it won’t be about the middle.


Last week TechCrunch hosted Disrupt SF, one of the start-up tournaments that are really worth following because they can at once reveal and determine trends in digital entrepreneurship, given how much VCs are subject to the bandwagon effect.

There was no real groundbreaking innovation (assuming we can tell one when we see it, and I know that at least with Twitter I didn’t), but a few considerations can still be drawn.

LinkedIn is the Godot of Web 2.0. It has the scale, the resources, the talent and a clear set of use cases, but everyone’s still waiting for its true coming. It seems like some people are tired of waiting, and Namesake, Opzi, Sumazi and Gild are all trying to take advantage of some of its shortcomings and carve out their own (hopefully) profitable niches. None of them poses any threat whatsoever, but LinkedIn had better sort itself out before it makes any real plans to go public.

Virtual currency is still a tad too virtual: the technology is here and is getting better by the day (MobilePay USA doesn’t require a physical accessory like Square does), but no one seems to be able to put the right infrastructure in place to generate mass adoption. The elephant in the room is Facebook Credits: it could really be a game changer, but by becoming one it would lock almost any other player out, so it’s easy to see why some entrepreneurs are trying to cash in while they can. I’d be curious to see if mobile operators and banks will let a newcomer disrupt their business the way the music industry did, or if they’re going to do something serious about it. (By the way, I also wonder what the Fed, the ECB and other central banks think about this.)

Check-ins are the most popular answer to questions nobody asked. Inspired by Foursquare, there are now zillions of startups that ask you to check in everywhere: into websites (Badgeville, OneTrueFan), shops (Checkpoints), and even individual products, maybe in exchange for comics featuring a supposedly-witty dragon (if you don’t believe me, check out Snapdragon). The problem with all these services is that they’re meant to address commercial interests, not user needs: that’s why they have to come up with artificial rewards (aka bribery). However, bribery is not enough to get people to adopt new tools: I may tweet what I’m having for breakfast if and when I feel like it, but I’m not going to install an app and check into my cereal in exchange for a badge and a funny comic. Sorry, I have better things to waste my life on.

All in all, too many start-ups want to cash in on a business need, so they create a new product and then try to devise reasons why people should use it, when it should really be the other way around. It seems like they would benefit from a business 1.0 lesson: “Marketing is producing what you can sell, not selling what you can produce”.

Last Saturday Adam Rifkin published a long and thorough article on TechCrunch predicting that, 5 years from now, Facebook will be bigger than Google.

It goes on to list a number of different industries that Facebook can turn upside down and make huge profits in the process, and they’re all worth a read, but the first point alone (advertising) is enough to rest its case.

When Beacon launched back in 2007, I was insisting that, even though that specific execution was a bit of a fuckup, there were solid reasons why Facebook would inevitably be a better advertising platform than Google. Given the scale it has achieved since then, I find it quite easy to say that it can grow into a bigger one, too.

The problem with Google is that all its ads are driven by search: that’s what made its value proposition unique in the first place (“We show your ads to those who are in market for your product”) but that’s also its insurmountable limitation. Advertising is not always the answer to a question. More often than not, it’s aimed at people who are not even thinking about that question: because they don’t know that they have a certain need, or that a certain product even exists. And that’s ok.

Some of the most interesting things in life, we stumble upon. And then we want them. Google is not designed for this. Its focus on contextual relevance means that it’s actually engineered against this.

Facebook on the other hand does exactly that: it makes it easy to serendipitously discover new things, whether via your friends or via the brands you like. That’s why social shopping is gaining momentum and attracting so much investment: it’s not just because you tend to trust what your friends are buying or recommending; it’s first and foremost because you become aware of products you wouldn’t otherwise have known.

So here it is: Facebook was always destined to be bigger than Google. At least as an advertising platform. And isn’t that how Google makes virtually all its money? Given that advertisers have limited resources, Facebook doesn’t even need to get into search to steal a big chunk of it.


This is an article I wrote in March 2009, and it’s essentially a more elaborate version of Steve Jobs’s subsequent comment about search on the iPhone: “When people want to find a place to go out to dinner, they’re not searching, they’re going into Yelp”.

Here’s one thing that apparently is not related to this: a report on how Facebook could kill Google, based on analysis from Ross Sandler. (Just so that you know, the article doesn’t say how Facebook could kill Google, it just compares the size and growth rate of the two giants. FAIL!)

Here’s one thing worth thinking about: how Facebook could damage Google. Not as a social network in itself (in a way Google is a social network), not as a competitive ad destination (there’s still plenty of money to flow towards online advertising), but as an alternative search engine. Here’s why.

Google is great at simplifying complexity. But it’s a universal search engine, and there’s only so much it can do. So it inevitably loses some ground to its competitors. And they’re not Yahoo or MSN.

When I want to know something, I search Wikipedia. Because I’m sure that it’s where I can find the single, most relevant result. (Even Google acknowledges that, by usually ranking Wikipedia results first). And from there, I can move on to related information.

When I want to buy something, I search Amazon. Well, I don’t, because I’m old fashioned, but plenty of people do.

When I want to watch something, I search YouTube. That’s what killed Google Video, and that’s why Google bought YouTube.

When I want to find out what’s going on right now around a certain event, I search Twitter. Twitter gives me real-time results. Google doesn’t; in fact, it’s designed not to, because it privileges older results that have had time to grow relevant for its algorithm over more recent ones that are relevant for my search. (Google subsequently tried to address this.)

And when I want to find someone, I search Facebook. Not only is it a search engine for people; it’s the most relevant search engine for people. (At least in the US and most of Western Europe). With the first click, I get a list of people with pictures, so that I know at first sight if any of them is the person I’m looking for. With the second click, I can contact them, and in many cases find out a whole lot about them.

If I have more of a business interest in someone, I search LinkedIn. All the other considerations still apply.

To that same extent, every social network becomes an alternative search engine: a specialized, thus more relevant, thus better one. If I want to plan a dinner out, I’m better off running my search in a social network about restaurants in London (e.g. Yelp, Time Out…) than googling “good restaurant london” and being flooded by a number of more or less relevant results.

One could argue that Google would redirect me to that social network, and many others, but why waste time on one more unnecessary search, once a preeminent, relevant social network emerges?

This is true for simple tasks, but even more so for sophisticated ones. If I have to research a topic I know little or nothing about, for work or study, where should I start? If I google it, I can’t really tell the relevant results from the less relevant, let alone the credible results from the BS, because I have no expertise in the subject.

Thirdly, if I haven’t found enough information through my first two sources, or if I want a little more, I can Google. And hopefully by now the first two kinds of sources will have provided me with enough background expertise to tell the good from the bad.

Does it mean that social networks will kill Google? No. At least not if we look at “Google as an ad platform”.

But I can safely say that “Google as a search engine” has been steadily losing share of my time, and will keep losing more.


Two days ago I was relaxing in Brick Lane with a good friend who happens to be fairly smart and competent in the digital industry, and while discussing bits and bobs of life in the twenty-first century it suddenly seemed clear that Facebook might want to take a shot at creating its own mobile phone.

While the article suggests that such a move would be dictated by concerns about “the increasing power of iPhone and Android”, I see two more fundamental reasons (reinforced by the suggestion that Facebook would be developing the phone on the Android OS):

The second reason is less defensive and more about growth: moving out of a browser/app page into which you have to cram all your functionalities, and into a more complex platform where you can re-allocate them, allows Facebook to better exploit the potential of its many features. They already did this with “Facebook everywhere”, turning every web page into a Facebook-social property and its “Like” button into a larger Digg. Doing the same on a mobile phone would allow users to “Like” and “Share” things in real life. This could be the real trigger for “Places”, and could have gigantic implications (e.g. social commerce in the real world).

While the mobile market is extremely competitive and complicated, the option of a Facebook smartphone is made at least credible by three assets Facebook can count on: the right internal talent to give it a shot; half a billion users worldwide, including 150 (one-hundred-and-fifty) million mobile users; the necessary combination of supreme ambition and arrogance to deem it feasible.

Here are more reasons why Facebook is likely to be considering this: definitely worth a read.