This weekend (May 14th) 7on7 runs for the second time in NYC. The event brings together artists and technologists, pairing them off to conceive and often build a project over the course of a single day. Some people have referred to it as a Y Combinator for the art world – sort of, though last year it was a little more unconventional and irreverent than a YC event. Slamming an artist together with a technologist can have unexpected consequences.

Last year Matt Mullenweg and Evan Roth hacked WordPress to add a feature that would create random and unexpected experiences at points in the software that Roth described as lonely or threatening. Marc Andre Robinson & Hilary Mason created an umbrella with a homing beacon so that you could see patterns of use and rain across a region. Joshua Schachter & Monica Narula devised a concept for a guilt exchange. You can see a video of these three presentations here. The other four presentations were wonderful – the whole event from 2010 is posted here.

Why 7 on 7?

A handful of reasons: this event, and the process it represents, is something I have been fascinated by for a long time. The first site I created on the web was äda’web, back in 1994. It was a platform for artists and technologists to collaborate and create projects for the web – ones that were medium specific, i.e. it wasn’t about putting paintings on the web, rather it was about using the web to create. The site is still up and running courtesy of the Walker Art Center, to whom we (and AOL) donated äda’web in 1998. For more about what äda’web is, see this interview with my co-founder, Benjamin Weil, and / or read this piece he wrote about äda’web as a digital foundry.

Back in the late nineties it struck me that the processes an artist and a technologist apply to their craft are similar. There is much to write on this subject; rather than diving in here, see the thread we started yesterday on Quora titled Do Artists and Technologists Create Things the Same Way – it spells out similarities between creating art and creating technology.

7on7 slams technology together with art. As such it is a great platform for pranksters. Pranksters have a vital role in any society — from jesters forward, they help us gain perspective and see and say things that might otherwise be socially unacceptable. I met a group earlier this year who set up a system to randomly wardial phone boxes in London — art or hack? I’m not sure; either way, fierce fun.

Last thought. Art and technology are two communities that are well represented here in New York and yet they don’t intersect that frequently. This event was designed to be a bridge between these communities. As technology becomes more deeply ingrained in our lives and society it will become part of what we consider to be art, and vice versa. See you on Saturday – I can promise that something will surprise you.

News.me launched this morning as an iPad app and as an email service. Here is some background on why and how we built News.me:

Why News.me? For a while now at bitly and betaworks, we have been thinking about and working on applications that blend socially curated streams with great immersive reading interfaces.

Specifically we have been exploring and testing ways that the bitly data stack can be used to filter and curate social streams. The launch of the iPad last April changed everything. Finally there was a device that was both intimate and public — a device that could immerse you into a reading experience that wasn’t bound by the user experience constraints naturally embedded in 30 years of personal computing legacy. So we built News.me.

News.me is a personalized social news reading application for the Apple iPad. It’s an app that lets you browse, discover and read articles that other people are seeing in their Twitter streams. These streams are filtered and ranked using algorithms developed by the bitly team to extract a measure of social relevance from the billions of clicks and shares in the bitly data set. This is fundamentally a different kind of social news experience. I haven’t seen or used anything quite like it before. Rather than me reading what you tweet, I read the stream that you have selected to read — your inbound stream. It’s almost as if I’m leaning over your shoulder — reading what you read, or looking at your book shelves: it allows me to understand how the people I follow construct their world.

As with many innovations, we stumbled upon this idea. We started developing News.me last August, after we acquired the prototype from The New York Times Company. For the first version we wanted to simply take your Twitter stream, filter it using a bitly-based algorithm (bit-rank) and present it as an iPad app. The goal was to make an easy-to-browse, beautiful reading experience. Within weeks we had a first version working. As we sat around the table reviewing it, we started passing our iPads around saying “let me look at your stream.” And that’s how it really started. We stumbled into a new way of reading Twitter and consuming news — the reverse follow graph, wherein I get to read not only what you share but what you read as well. I get to read looking over other people’s shoulders.
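The bit-rank algorithm itself isn’t public, but the general idea described here – filtering the links in a stream by a measure of click-based social attention – can be sketched. The data shapes, field names and scoring formula below are hypothetical illustrations, not bitly’s actual implementation:

```python
from collections import Counter

def rank_stream(tweets, click_counts, top_n=10):
    """Rank the links appearing in a stream of tweets by a simple
    social-attention score: shares within the stream, weighted by
    global clicks. An illustrative stand-in for a click-based
    ranker like bit-rank, not the real algorithm."""
    shares = Counter()
    for tweet in tweets:
        for link in tweet.get("links", []):
            shares[link] += 1
    # Score each link: in-stream shares times (1 + global clicks),
    # so heavily clicked links float to the top of the reading list.
    scored = [
        (shares[link] * (1 + click_counts.get(link, 0)), link)
        for link in shares
    ]
    scored.sort(reverse=True)
    return [link for _, link in scored[:top_n]]

tweets = [
    {"user": "a", "links": ["http://bit.ly/x1"]},
    {"user": "b", "links": ["http://bit.ly/x1", "http://bit.ly/x2"]},
    {"user": "c", "links": ["http://bit.ly/x2"]},
]
clicks = {"http://bit.ly/x1": 500, "http://bit.ly/x2": 40}
print(rank_stream(tweets, clicks))  # x1 outranks x2
```

The real system would of course model recency, spam, and the shape of a user's follow graph; the point is only that a shared click dataset gives a stream an external relevance signal.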

What Others Are Reading…

On News.me you can read your filtered stream and also those of people you follow on Twitter who use news.me. When you sign into the iPad app it will give you a list of people you are already following. Additionally, we are launching with a group of recommended streams. This is a selection of people whose “reading lists” are particularly interesting. From Maria Popova (a.k.a. brainpicker), to Nicholas Kristof and Steven Johnson, from Arianna Huffington to Clay Shirky … if you are curious to see what they are reading, if you want to see the world through their eyes, News.me is for you. Many people curate their Twitter experience to reflect their own unique set of interests. News.me offers a window into their curated view of the world, filtered for realtime social relevance via the bit-rank algorithm.

Streamline Your Reading

The second thing we strove to accomplish was to make News.me into a beautiful and beautifully simple reading experience. Whether you are browsing the stream, snacking on an item (you can pinch open an item in the stream to see a bit more) or you have clicked to read a full article, News.me seeks to offer the best possible reading experience. All content that is one click from the stream is presented within the News.me application. You can read, browse and “save for later” all within the app. At any given moment, you can click the browser button to see a particular page on the web. News.me has a simple business model to offer this reading experience.

Today we are launching the iPad News.me application and a companion email product. The email service offers a daily, personalized digest of relevant content powered by the bit-rank algorithm, delivered to your inbox at 6 a.m. EST each morning. The app costs $0.99 per week, and we in turn pay publishers for the pages you read. The email product is free.

How was News.me developed? News.me grew out of an innovative relationship between The New York Times Company and bitly. The Times Company was the first in its industry to create a Research & Development group. As part of its mission, the group develops interesting and innovative prototypes based on trends in consumer media. Last May, Martin Nisenholtz and Michael Zimbalist reached out to me about a product in the Times Company’s R&D lab that they wanted to show us at betaworks. A few weeks later they showed us the following video, accompanied by an iPad-based prototype. The video was created in January 2010, a few months prior to the launch of the iPad, and it anticipated many of the device’s gestures and uses, in form and function. Here are some screenshots of the prototype.

On the R&D site there are more screenshots and background. The Times Company decided it would be best to move this product into bitly and betaworks where it could grow and thrive. We purchased the prototype from the Times Company in exchange for equity in bitly and, as part of the deal, a team of developers from R&D worked at bitly to help bring the product to market.

With Thanks … The first thank you goes to the team. I remember the first few product discussions, the dislocation the Times Company’s team felt having been airlifted overnight from The New York Times Building to our offices in the heart of the Meatpacking District. Throughout the transition they remained focused on one thing: building a great product. Michael, Justin, Ted, Alexis — the original four — thank you. And thank you to Tracy, who jumped in midstream to join the team. And thank you to the bitly team, without whom the data, the filtering, the bits, the ranking of stories would never be possible. As the web becomes a connected data platform, bitly and its API are becoming an increasingly important part of that platform. The scale at which bitly is operating today is astounding for what is still a small company: 8bn clicks last month and counting.

I would also like to thank our new partners. We are launching today with over 600 publishers participating. Some of them are listed here; most are not. Thank you to all of them – we are excited about building a business with you.

Lastly, I would like to thank The New York Times Company for coming to betaworks and bitly in the first place, and for having the audacity to do what most big companies don’t do. I ran a new product development group within a large company, and I would like to dispel the simplistic myth that big companies don’t innovate. There is innovation occurring at many big companies. What big companies really struggle to do is ship. How do you launch a new product within the context of an existing brand, an existing economic structure, an existing organizational structure? How do you avoid imposing a strategy tax on the new product? These are the challenges where, in my experience, big-company innovation so often comes apart. The Times Company did something different here. New models are required to break this pattern; maybe News.me will help lay the foundation of a new model. I hope it does, and I hope we exceed their confidence in us.

It’s been a remarkable few months in the Middle East. Most recently the events in Egypt have captured the world’s attention, and Al Jazeera’s English web site has become the place to watch many of the events unfold. Given that the channel isn’t carried by most US cable companies, the web site has been the means to view the channel live over the Internet.

Al Jazeera is also a user of Chartbeat. Chartbeat offers a real time window into what is happening on a web site right now. Watching the traffic flows over the past few weeks has been fascinating — in Al Jazeera’s case, the site broke traffic record after record. I wonder which popular TV show would compare to having 150,000 to 200,000 simultaneous users on a web site, most of them watching TV?

A lot has been and will be written about the role of social media in this revolution. Here is some data and perspective from the vantage point of traffic to the Al Jazeera web site yesterday, as seen via their Chartbeat dashboard right as Mubarak announced his resignation.

Many thanks to the Al Jazeera team and specifically Mohamed Nanabhay for letting us publish these snapshots.

———————————————————————————————————-
Just before noon yesterday, users started flooding into the Al Jazeera web site.

The screen shot below shows the traffic sources — links, social and search at noon EST.

If you zoom into the article level view you can see that 70%+ of the traffic is coming from social networks. The picture on the left is the same as the one above — the one on the right zooms into the article level dashboard for the page titled “Hosni Mubarak resigns as President”.

Mohamed Nanabhay, Head of Online for Al Jazeera’s English web site, described the experience: “As you can imagine our newsrooms and field teams have been on full throttle over the past three weeks. While Al Jazeera very quickly became the world’s window into the revolution in Egypt, Chartbeat proved invaluable as my window into our audience and website. From deploying resources to prioritizing updates, from rolling out new features to identifying technical issues on the site, we were able to make better decisions more quickly based on real-time data.”

Interesting snapshots and kind words from people who are monitoring the real time web in ways that could not have been imagined a revolution or two ago.

This is a different kind of post. I started thinking about “networked media” last August. This began in the same way my longer posts usually do: a slow process of thinking, writing, and editing that spans a few months. But the process took a left turn in October when I decided to speak about networked media at betaday. My work on the blog post ceased and I focused my attention on betaday. What I’m posting here is a compilation of the introduction that I wrote back in August, a video of the betaday talk, and my general notes.

The impact of the “socialization of the web” (i.e. the social components of the web that now pulse through every web page) is a fascinating subject that I think we are only just beginning to understand. Though “socialization” is a politically loaded word, my intent here is not political. Rather, my use of the word “socialization” is three-fold: I seek (1) to show how media is changing as it becomes integrated with social experiences, (2) to note that the economics of media production are changing, and (3) to emphasize that this shift is a process, not a product.

Social disruption

Over the past few years I have written a fair amount about how the social web will change the way people discover and distribute information online. This started with a post in the spring of 2008 on the Future of News. Then in early ’09 I outlined how “social” would change the discovery process and disrupt traditional search. And then I wrote a long piece about what this shift in discovery means for the user experience on sites. These ideas, and subsequent posts, have informed a lot of what we have built and invested in at betaworks. New modes of navigation and discovery are being developed – from Summize to Tumblr to TweetDeck, and more recently from GroupMe to Ditto. It is now generally accepted that the impact of “social” on discovery and navigation is under way, but I believe the impact goes beyond discovery.

Undoubtedly, search has changed, and continues to change, the way we write, create and lay out pages, tag content and relate to content. It has also encouraged the creation of sites with limited or distracting content that exist solely to optimize for search. But search has not driven a change in the content and user experience once a user is on a page that they value. By contrast, the “social web” is changing the web itself – “social” is altering the nature of what we find. Social experiences are becoming the backbone of many sites. A web page that is part of the “social web” transforms content into a liquid experience, giving rise to a new kind of media: networked media. In the video from betaday, I walk through this shift and show data we have at betaworks that illustrates this change.

Starting about four years ago it became clear that the social, real time web could change the way search and discovery happened online. Fast forward to today and that has certainly happened. The impact of this shift in distribution economics isn’t over, but the trend tipped to scale during 2010. Last year we saw site after site announce that the percentage of traffic it gets from the social web now exceeds, or is second only to, search. In my post two years back on how social will disrupt search I used the example of YouTube, and showed the speed at which it had become the second largest search destination on the web. Twitter, Facebook, Tumblr and other vertical social networks are driving meaningful traffic to sites around the web. Take the collection of sites in the chart below, from news to commerce, from TV-based media to sports — for many of them social is now the largest driver of traffic. Nick Denton said last month that referrals to Gawker properties from Facebook had increased sixfold since the start of the year. And this traffic is different from search traffic: it’s socially referred, it’s of higher quality, and embedded in it is the multiplier effect that the social publishing platforms drive.

The socialization of the page

The question I would like to turn to now is how web pages and applications are being changed by the social, real time web. Search changed the way we discovered the web. Web sites optimized their pages for search bots, but in most cases they didn’t actually change the content or substance of the page presented to the end user. Put another way, search brought little tangible benefit to the end user beyond discovery. Search certainly created new forms of sites — domain parking, content farms, link bait — thousands of sites that managed to game the discovery tool to gain attention, clicks and visits from users who find themselves on a site that has the metadata they were looking for but often little of the content.

But unlike search, the dynamic of a web page becoming part of the social web is transforming the experience and content of that page into something liquid, giving rise to a new kind of media. Humor sites were the one exception I found to search leaving content untouched: Fred Seibert told me last summer how humor sites changed the content of their pages, placing the punch line up front — because that is what people searched for.

(for the interested, a short primer is here on what we do at betaworks)

Three steps: how does a page become networked?

#1. An activity window opens. Somewhere between one and three hours after a story is posted, a window of social activity opens. An example, albeit a slightly unusual one: a product page on Amazon for a set of speaker wires that cost almost $7,000 — this past weekend the page all of a sudden took flight on Twitter and some of the social blogs. The page was actually posted to reddit a month ago, yet for whatever reason the insanity of a $7,000 cable didn’t mesh with the zeitgeist until November 27th, when the page was tweeted by @PaulandStorm. And off it went. Screen shot of the page here. In the video above you can see this process happen in detail. I use Chartbeat to understand the progression and dispersion that occur in this initial activity window. Take a look at the dispersion patterns of typical stories on Fred’s AVC blog and you can clearly see the window of engagement — just look at what happens as Fred puts up a new post one morning. The uptake starts about an hour after the post hits, and the peak usually occurs at the 100-minute mark. Chartbeat data from thousands of large sites around the web suggests that for a blog the peak is usually around 60 minutes after posting, and for a news site it’s 130 minutes. It’s great how open Fred is with this data — lots to learn. These are windows of meaningful, concurrent activity. Concurrent users is the key metric to track at this point. Amplification in the social web is what drives the metric, and amplification happens because of relative influence within your social groups and others. Link and discuss: It’s Betweenness That Matters, Not Your Eigenvalue: The Dark Matter Of Influence: http://sto.ly/ii40vr
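To make the activity-window idea concrete, here is a toy sketch of how you might detect that window from a minute-by-minute concurrent-user series. The numbers and the 25%-of-peak threshold are hypothetical assumptions, not Chartbeat’s actual methodology:

```python
def activity_window(concurrent, threshold_ratio=0.25):
    """Find the peak minute and the span of meaningful activity
    in a per-minute concurrent-user series: the minutes where
    concurrency exceeds a fraction of the peak. The threshold
    rule is a hypothetical stand-in, not Chartbeat's method."""
    peak = max(concurrent)
    peak_minute = concurrent.index(peak)
    threshold = peak * threshold_ratio
    active = [i for i, c in enumerate(concurrent) if c >= threshold]
    return peak_minute, active[0], active[-1]

# Simulated minutes after a blog post goes up: quiet start,
# uptake after about an hour, peak near the 100-minute mark.
series = (
    [5] * 60 + [50] * 30
    + [300, 800, 1500, 2000, 1500, 700, 300, 100]
    + [20] * 20
)
peak_minute, start, end = activity_window(series)
print(peak_minute, start, end)  # → 93 91 95
```

An alerting tool would run something like this continuously and page the editorial team when a window opens, rather than analyzing it after the fact.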

#2. Social clustering occurs. With the engagement window open and concurrent users on the page peaking, clustering starts to happen. What separates this from just an open engagement window is the level of engagement: users arrive on the site, start posting comments, and the conversation begins. “Each comment someone takes the time to leave serves as a proxy for 100 or so folks who properly echo that sentiment” (Battelle). Note the importance of the time of day that you publish into the social web. Timing relative to what your social group is talking about now is what triggers clustering. This is why SocialFlow works — it knows the right time to send the message that lights up the social web. Below is an image from some analysis that The New York Times did using bit.ly data. It shows the dispersion of a particular story — in this case a Kristof piece about the Pill — across the social web. In the image you can see the clustering occurring, this burst over time of influencers and social engagement.
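As a toy illustration of the timing problem: given historical engagement per post, pick the hour of day that has performed best. Real systems like SocialFlow model a specific audience in real time; this sketch and its data are hypothetical:

```python
def best_posting_hour(engagements):
    """Pick the hour of day with the highest average engagement
    per post, from (hour_posted, clicks_received) history.
    A toy version of the timing problem that services like
    SocialFlow solve with real-time audience models."""
    totals, counts = {}, {}
    for hour, clicks in engagements:
        totals[hour] = totals.get(hour, 0) + clicks
        counts[hour] = counts.get(hour, 0) + 1
    return max(totals, key=lambda h: totals[h] / counts[h])

# Hypothetical (hour posted, clicks received) pairs for past posts.
history = [(9, 120), (9, 140), (13, 300), (13, 260), (20, 80)]
print(best_posting_hour(history))  # → 13
```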

#3. The page becomes networked. Snap — a synchronous experience occurs. A critical mass of users is on one page at the same time and something magical happens. Think about it as a page becoming a live event or a live site. As with a concert, there is a residue of the social experience when you go back — even if it’s well after the event. If you watch the opening of this live concert you will get a visceral sense of what this looks like and what happens when media becomes connected with its audience. It’s Springsteen’s “Hungry Heart,” and as he plays the opening of the song he turns it right over to the audience to pick it up and sing the opening. Forking of content.

– Rise of agile publishing: what is it? Lean editorial teams, instrumentation of sites, getting the data feedback, adaptive CMSs, the importance of posting at the right time, the importance of tracking social engagement, how every page is becoming a front page

– Serendipity. Some of this is science, some of it isn’t. An “old” page can become networked out of nowhere — see the Amazon example above. You don’t know where or when it’s going to happen; you need tools to track and alert you when it’s happening

– We are moving into an age of networked media. danah boyd’s analysis of the shift from broadcast to networked media

– Closing of comments after the activity window – proximity references / boyd article, a couple of old ones are in close proximity to this one – structured data types to allow for debate topics

Example: Gawker. Gawker is experimenting with a new design that is both more dynamic (real time) and more immersive, without the restrictions of reverse chronological order. Users are no longer navigating from page to page across isolated sites. Rather they are experiencing a subset of sites as a liquid experience, where there is a consistent flow from site to site and the consistent aspect is social. Users flow — an ambient experience of media.

Example: Dribbble and the iTunes icon – this became a networked media event.

Example: Yahoo bloggers adapt content to the referrers and link to whatever is spiking

Example: “the quality of the dynamics of the conversation shift from one where parlor tricks can sustain themselves beyond the quality of the content to one where we can get sort of immediate tactile connection with people” (source: 4.18.09 Gillmor Gang 1.01 min).

- Advertising as the primary mode of monetization – pulling people in vs. pulling them away.

- Tension with platform owners who monetize with advertising on their own sites while trying to integrate web sites into their monetization flow

- The monolithic assumption that one social platform will rule all. How vertical use cases of social (from Tumblr to Foursquare to GroupMe to Instagram) illustrate how social is fragmenting into specific workflows and uses. Do “digital network architectures naturally incubate monopolies” (Lanier)?

- How are the economics of social media affecting networked media? Ownership of data, ownership of content – if users are creating the content, what rights do they have over it?

- Importance of the link structure of the web – it’s the most fluid form; resist the temptation to vertically integrate and build “consumption” sites

- Dimensionality reduction – too much data

- The Heisenberg principle of social media: the act of a page becoming social changes it

Access to fast, affordable and open broadband, for users and developers alike, is, I believe, the single most important driver of innovation in our business. The FCC will likely vote next week on a framework for net neutrality – we got aspects of this wrong ten years ago and we can’t afford to be wrong again. For the reasons I outline below, we are at an important juncture in the evolution of how we connect to the Internet and how services are delivered on top of the platform. The lack of basic “rules of the road” for what network providers and others can and can’t do is starting to hamper innovation and growth. The proposals aren’t perfect, but now is the time for the FCC to act.

Brad Burnham stopped by our office earlier this week to talk about his proposal for the future of net neutrality. The FCC has circulated a draft set of rules about neutrality that the Commission will likely vote on this week. Though the rules are not public, Chairman Genachowski outlined their substance last week. Through a combination of the Chairman’s talk, the Waxman Proposal, and the Google/Verizon proposal, one can derive the substance of the issue and understand its opportunities and risks. I strongly support much of what the Chairman has proposed and I support the clarifications that Burnham outlines. Before discussing this further, I have to ask – why does this matter now? Over the past few years there has been a lot of discussion, a lot of promises, and some proposals with regard to net neutrality.

I’m excited to announce that bit.ly has completed a Series B funding. The details are on the bit.ly blog. The round was led by RRE Ventures; general partner Eric Wiesen will be joining the bit.ly board. It’s been an amazing two and a half years since the founding of bit.ly. Growth has been the focus for much of that time — managing the growth while continuing to push new product out, on the site and through the API.

So far this year over 40.6 billion bit.ly links have been clicked; last month alone the number was almost 6bn (5.96bn to be exact). The chart below shows the daily click volume — what we call decodes. The blue line is daily clicks, where you can see the variance within each week (i.e. higher click volume on weekdays, lower on the weekend), and the red line is a 3-week moving average. This past Tuesday we had our biggest day ever of bit.ly links created. Over 4bn unique URLs have been shortened using bit.ly — and for every one of them, and for all the 40+bn clicks, bit.ly offers real time metrics with the simple addition of a “+” at the end of the link (i.e. for traffic to this page see: http://bit.ly/bseries+). All this growth and progress has happened because of our team and our users.
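For the curious, the red moving-average line in a chart like this is simple to reproduce. Here is a sketch with a 21-day trailing window (approximating the 3-week average) over made-up daily decode counts that follow the weekday/weekend cycle described above:

```python
def moving_average(daily, window=21):
    """Trailing moving average over daily counts; window=21 days
    approximates the 3-week average line in the chart. Early days
    average over whatever history exists so far."""
    out = []
    for i in range(len(daily)):
        lo = max(0, i - window + 1)
        chunk = daily[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical daily decode counts (in millions), Mon..Sun:
# higher on weekdays, lower on the weekend.
week = [200, 210, 205, 215, 220, 120, 110]
daily = week * 6  # six weeks of data
avg = moving_average(daily)
print(round(avg[-1], 1))  # → 182.9 (the mean of one full week)
```

The moving average smooths out the weekly cycle, which is exactly why the red line reads as the underlying growth trend while the blue line oscillates.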

Thank you — we love our users and the team at bit.ly is one of the best I have ever worked with, so thank you. We now have much more work to do as we build out what is now a cornerstone of the real time / social web.

There was a discussion on the Gillmor Gang last Friday that I wanted to flesh out a bit. The topic was the sale of TechCrunch to AOL. Much of the talk on the web, and some of it on the Gang, centered on TechCrunch as a media property. Are “content” acquisitions on the rise? What does this mean for content sites? How do old media and other content companies relate to this? Etc. I don’t think these questions are that interesting. All media is internet media today — if the so-called “content” provider doesn’t place it on the net, it gets there regardless. It’s no longer the presence of content online that makes it interesting — it’s the type of engagement that occurs. TechCrunch is, in my mind, becoming a place — a real time, or live, conversational platform.

If you look at TechCrunch articles, the number of comments that stream into the page within the first hour after an article is posted is meaningful. It’s these real time interactions — the conversations that are happening on the page, the connections that are taking place in real time or close to it — that make TechCrunch such an interesting place. Yes, a place, not a site. TechCrunch, or the Huffington Post (the other example I mentioned on the show), are becoming conversational places or platforms where the content provides context to the conversation and vice versa. A while ago I had a conversation with Bob Stein about writing, publishing and blogging. Bob told me about a test he had run at the Institute for the Future of the Book. In the test they placed comments on a blog to the right of the posts / articles. The result was meaningfully more interesting discourse. The comments weren’t placed at the bottom, hidden away like a letter to the editor; they were part of the body of the post. Think about it this way: if you took TechCrunch and placed the comments to the right of the posts and let them stream live (most recent first), wouldn’t it look like a mirror image of the new Twitter? Stream on the right, media on the left — Twitter is stream on the left, media on the right. Interesting.

TechCrunch is in my mind a conversational platform, and it’s that, plus the personalities of the team, that makes it interesting. And the “that” bit is the real time participation of the users, which provides a degree of authenticity and connection. I think when Steve Gillmor was talking about Neil Young on the show it was this type of connection he meant. Arrington, in his post “Why We Sold TechCrunch To AOL, And Where We Go From Here”, says “I don’t want to get all teary-eyed here, but the best comment I ever saw on TechCrunch was years ago in response to when I quipped something like “This is my blog and I’ll write what I want” in response to a troll. The response was “No Mike, This is OUR blog. You just work here.”” When @Auerbach pointed that comment out to me this week, this thread of thoughts came together. That is what’s different here: the active, passionate users who are participating in the conversation, live — maybe we should call the category live blogs. Places like these are emerging; most of them are in news, politics, tech or gossip, but other vertical categories are starting to appear. In a sense I see these sites as children of the old BBSs. And it’s happening the way things happen on the web — it’s somewhat chaordic, it’s messy, there is a pull from the centralized services that have the advantage of tightly coupled integration, and a more gradual, but eventually greater, pull from the edge.

If this all sounds fairly general, I do have some data to back up the thesis. I’m going to talk about this data generally since it’s not my data to publish in detail. Via Chartbeat (a company we built at betaworks) we see engagement on a variety of sites, in real time. The focus of Chartbeat is on how many people are on your site right now and what they are doing. Looking at the real time engagement dashboard on Chartbeat across a set of customers — say TechCrunch, WSJ, Gawker, Yahoo News, ChatRoulette and FoxNews — we see very different patterns of engagement.

The pace at which TechCrunch is published, the degree of engagement, the real time updating of comments, the requirement to post with your real name, the direct engagement from the authors … all of this contributes to what is much more of a live experience than most blogs. There is a public example of data around a live blog that I can point to: AVC.com. @FredWilson has made his Chartbeat dashboard open. Take a look at the engagement view as he publishes. Again, note the pace and consistency with which Fred blogs and the relationship he has to his audience. Or look at what Chamillionaire is doing … live is becoming live in a whole new way; participatory media is becoming more diverse and interesting. And for AOL this is, in a sense, a return to its roots of community and conversation. There is potential in this deal — potential for TechCrunch, AOL and the team to turn more of the web into more of a conversation. The vision of AOL as a next generation content platform might start to emerge out of this.

The Tweetdeck team have been hard at work for two years thinking about how to display and navigate streams on the web and on devices. The Android version that moved into beta yesterday is a big step forward. The tech blogs have done feature reviews and paid compliments to the user experience, the speed and the simplicity of use, but there is more going on here. It is going to take some use to settle in on why this is different and what has changed — users are starting to see it.

What’s so different here is the concept of a single unified column for all your real time feeds. Inside of the “home” column the different services are color coded and weighted to allow for the varying speed / cadence of different streams. In the screen shot below you see the beta Android client — you are looking at my “home” column. It includes updates from all my Twitter accounts, Facebook, Foursquare, Buzz etc. You can see that a checkin is included in the home stream as a simple gesture that tells me “Sam checked in at Terminal 4”. It’s formatted differently from a Twitter update — it contains only the summary information I need: “someone is checking in somewhere”.
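Under the hood, a unified home column is essentially a chronological merge of several already-sorted streams, with each entry keeping a service tag for color coding and formatting. A minimal sketch of the idea (the sample entries are made up, and this is my guess at the general approach, not Tweetdeck's actual code):

```python
import heapq
from datetime import datetime

# Each service yields (timestamp, service, summary) tuples, oldest first.
twitter = [
    (datetime(2010, 10, 4, 9, 0), "twitter", "@jack: good morning"),
    (datetime(2010, 10, 4, 9, 7), "twitter", "@ev: new feature shipping"),
]
foursquare = [
    (datetime(2010, 10, 4, 9, 3), "foursquare", "Sam checked in at Terminal 4"),
]
facebook = [
    (datetime(2010, 10, 4, 9, 5), "facebook", "Anna posted a photo"),
]

# heapq.merge lazily interleaves the already-sorted per-service lists
# into one chronological "home" column; the service tag is what a
# client would use to color-code and format each entry differently.
home = list(heapq.merge(twitter, foursquare, facebook))
for ts, service, summary in home:
    print(ts.strftime("%H:%M"), f"[{service}]", summary)
```

The interesting design work is in the weighting — letting a slow, low-volume stream (checkins) stay visible next to a fast one (Twitter) — which a plain timestamp merge like this doesn't capture.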

If you click on the “check in”, the view pivots around place, not person.

This cross stream integration is also evident in the “me” column — a single column that integrates all mentions across the various social services you have. The “me” column is the first one to the right of home — you can see it in the screenshot below. The subtle little dots on top offer a simple navigational note that you are now one column to the right of “home”. And the “me” column again integrates mentions across streams — the top one is a reply to a Facebook update (if I click through I get the context); below it are Twitter mentions.

I wrote about the importance of context in the stream a while ago. Context is more important now than ever as the pace of updates, vertical services (i.e. local, Q&A, payments) and re-syndication continues to speed up. Previously Tweetdeck ran all of these services in separate columns — one for each. The Android version still has multiple columns, but the other columns are ways to track either topics (search) or people (individual people or groups of people) — you can see how those work here. It’s in beta and there is still work to do, but this new version of Tweetdeck breaks new ground — the team have created something wonderful.

The original Tweetdeck broke new ground in how Twitter could be used. All the Twitter clients had until that time taken their DNA from the IM clients; they all sought to replicate a single column, a diminutive view of the stream. Tweetdeck on the desktop changed all of that, offering a multi column view that was immersive, intense and full on. As you move your service to different platforms (say from web to mobile) you are faced with the perplexing question of whether to re-think the service to fit the dimensions and features of the new platform (mobile) or to offer users the same familiar experience. Tweetdeck Android is a ground-up re-invention of the desktop experience — created for mobile. I have been using it for a few weeks now and it is changing the way I experience the real time web. Once again the Tweetdeck team have taken a big bold step into something new — you can get the beta here.

I have been running an experiment for the eleven weeks or so since the iPad launched. Each weekend I spend time going through directories hunting for apps that begin to expose native attributes of the device. My assumption is that the iPad opens up a new form of computing and we will see apps that are created specifically for this medium. Watching these videos of a two and a half year old and a 99 year old using the device for the first time offers a glimpse of its potential. Ease of introduction and interaction are the key points of distinction. I haven’t seen a full sized computing device that requires so little context or introduction.

When the iPad first came out much of what was published was on either end of a spectrum of opinion. On one end were the bleary-eyed evangelists who considered it game changing, and on the other people who were uninterested or unimpressed. I think invariably the people who found it wanting were expecting to port their existing workflows to the device. They were asking to do “what I do on my PC” on the iPad. These people were frustrated and disappointed. They assumed this was another form of PC, with some modifications, and that it represented a transition similar to desktop to laptop. Take this post from TechCrunch: “Why I’m Craigslisting My iPads” — three of the four reasons the author lists for dumping his iPad are about his disappointment that the iPad isn’t a replacement for his laptop or desktop. But in the comments section of the post an interesting conversation emerges: what if this device’s potential is different? Just like video has transformed the way our culture interacts with images, what if gesture based computing has the potential to transform the way we use, create, and express ourselves?

The iPad is the first full sized computing device with wide scale adoption with:

Hardware and software that requires little to no context or learning

An input screen large enough to manipulate (touch and type) with both hands

A gesture based interface that is so immersive and personal that it verges on intimate

Hardware with battery and heat management that, simply, doesn’t suck

An application metaphor that is well suited to immersive, chunky experiences. As @dbennahum says: “The ipad is the first innovation in digital media that has lengthened the basic unit of digital media”

A tightly coupled, well developed and highly controlled app development environment

For some people these attributes sum up to the promise that this will be the “consumption” device that re-kindles print and protects IP based video. That may occur, but for me that isn’t the potential. The iPad is a connected computing device that extends human gestures. If you step back from the noise and hype, after almost 15 years of web experience, we know a few things. Connected / networked devices have consistently generated use cases that center around communication and social participation vs. passive consumption. Connecting devices to a network isn’t just a more efficient means of distribution — it opens up new paths of participation and creation. The very term consumption maps to a world and a set of assumptions that I think is antithetical to the medium (for more on this see Jerry Michalski’s quote on the Cluetrain). I believe the combination of the interface on the iPad and the entry level experience I outlined above is sufficiently intuitive that this device and its applications have the potential to become an extension of us and transform computing the way the mouse did 45 years ago.

Douglas Engelbart and his mouse changed everything. Like the mouse, the multitouch interface lets you navigate the surface of the computer. But there is a key difference between this gesture based interface and the mouse. The mouse is separate from the working surface — connected to the body but separate from the actual place of interaction. With the iPad, gestures happen on the surface that you are creating on. I have this general theory that when you narrow the gap between the surface that you “create on” and the surface that you “read on” you change the ratio of readers to writers: proportionally you reduce consumption as we used to know it and increase participation.

Some examples. Images — still and video — where the tool you use to capture is increasingly the tool you use to view and edit. Remember the analog experience: shoot a roll of film on one media type (coated celluloid) and then develop / display on another (paper). The gap here was large. Digital cameras started to close the gap by eliminating the development process — by recording on a digital medium that permitted the direct transfer to a display and editing device (the PC). The incorporation of display screens on cameras shrunk the gap further. Now we are closing the gap even more — embedding cheap cameras in every display screen, so that what you see is also what you record, and embedding display screens into the front of cameras. With each closing of the gap between production and display, participation increases.

Take the web itself: the advent of wikis, blogging, comments and writable sites. Or compare Facebook, Twitter and Tumblr vs. WordPress, Posterous and Typepad. They are all CMSs of one kind or another — but the experience is radically different in the first group. Why? Because they close that gap — specifically, they don’t abstract the publishing into a dashboard. You write on the surface you are reading on.

So, as a rule of thumb, when I see this gap narrow I sit back and think. And it is for this reason that I believe the gesture based interface on this device has the potential to open up a new form of computing.

Back to that experiment. While it has been less than 12 weeks since the launch, I want to see if there are elements emerging in iPad apps that can tell us about what this new medium has to offer — what are the things we are going to be able to create on this device? My process is as follows:

(a) Hunt and peck for native apps. The discovery / search process is imperfect. I spend a fair amount of time using services like Appshopper, Appadvice and Position App. I also spend time in the limited app store that Apple offers (limited in that it sure is one crappy interface to browse, compare and find apps). I do find the “people who liked this also liked this” feature useful. But hunt and peck is the apt term — it’s a tough discovery process. While Apple has done an awful lot to open up new forms of innovation, they are simultaneously compromising others — the web isn’t a good discovery platform for a lot of these apps because many of them aren’t “visible” to basic web tools. Anyway, that is how I find things.

(b) I use the apps for a few days at least. Given how visually seductive this platform is, it’s important for me to use the apps for a bit, let them settle into my workflow and interests, and see if they mature or fade. I then create a summary of the app, on the iPad (might as well use the medium). The app that I used to write many of these summaries was OmniGraffle.

Six of the summaries are inserted below aggregated under some broad topic areas. I wanted to lay them out side by side on the table and see what I had learnt thus far. I have some commentary around most sections and then some conclusions at the end.

1. This is the first post I did — summarizing the goal:

2. Extending the iPad

In the early days I was fascinated by the Camera A and Camera B application — it lets you use your iPhone camera on your iPad, over WiFi. It’s one of those wow apps — you show it to people and you can see their eyes open as they think of the possibilities it opens up. I think the possibility set it opens up relates to the device as an extension of other connected devices. There are a small handful of other applications I found that have done interesting things integrating iPads with other devices — i.e. Scrabble, iBrainstorm and Airturn. Airturn is brilliant in its simplicity and well defined use — using a Bluetooth foot pedal to turn the iPad into a sheet music reader. Apple might well have not put a camera on v1 of the iPad for commercial reasons (i.e. upgrade path), but the business restriction has opened up an opportunity.

CameraA/B is a good example of how those design choices are driving innovation. One of the first pictures I took was the requisite recursive image.

3. Take me back …

The only physical navigation on the device is a home button — like the iPhone, there is no back button. I wish there were a back button. I find myself using the home button time and time again to go back when I’m in an application. I love how conservative Apple is with its hardware controls, but a back button is missing — it’s one of the great navigational tools that the browser brought us, and I really want one on this device.

4. Jump on in …

There are a lot of interesting immersive apps beginning to pop up on the iPad. These are good examples of the kind of experiences that are emerging:

This is another immersive application — the popular Osmos HD. I said at the outset that I avoided gaming apps, and this and the coaster are games. It’s the immersive navigation that I want to emphasize — today there aren’t many better ways to explore this than apps like these. Both of them use the high resolution display, the multitouch interface and the accelerometer to give you a visceral sense of the possibilities.

5. Writing …

I want to write on the iPad — write with my hand. I tried getting a pen but the experience was disappointing. The multitouch surface is designed for input from a finger — the pen simulates a finger. If you want to draw with a pen, or have large fingers, then a pen like this works, but it doesn’t work to actually write on the device. There also isn’t an application that lets you scale down words you have written with your finger — or at least I haven’t found one. But you can type!

I have also used a wireless keyboard — I typed most of this post using a keyboard, it works well.

6. Reading, readers and browsing …

There are a whole collection of reading related experiences coming out for the iPad — it’s one of the most active areas of development. My journey began with the book apps on the device: iBooks, the Kindle app and then a handful of dedicated reading apps (i.e. comic book apps). I don’t have much to say about any of these experiences since they all pretty much use the device as a display to read on. They all work well, and the display is easier on my eyes than I expected. I liked the Kindle’s e-ink display a lot, but unless you are reading outside, in full sun, the iPad display works very well. My favorite reading app is the Kindle app. The reading surface is clean and immersive. Navigation is simple and I love the “social highlight” feature. You can see it in the image below. Whilst you are reading there are sections with a light, dotted underline — touch one and it tells you that x number of other people have highlighted this section as well. I love stuff like this — a meaningful social gesture displayed with minimal UI.

A few weeks after the launch I started using reader apps. I define this category as apps that offer a reading experience into either a social network (Twitter, Facebook), a selection of feeds (RSS), or a scraped version of web sites. Some people are calling these clients — for me a client allows you to publish; these are readers of one kind or another. Skygrid was one of the first I used. Then came Pulse, GoodReader, Apollo and, last week, Flipboard. Most of these readers offer simple, fluid interfaces into the real time streams. Yet the degree to which we have turned the web into a mess is painfully evident in these applications. Take a look at the screen shots of web pages displayed in these applications. The highlight is mine, but the page is a mess. Less than 15% of the pixels on the first page below were actually written by the author.

It’s remarkable how the human brain can block out a visual experience in one context (the web browser), but when it’s recontextualized into another experience (the iPad) the insanity of the experience is clear. We have slow boiled so many web sites that we have turned the web into a mass of branding, redundant navigation and advertising. And some wonder why the value of these ads keeps falling. As the number of devices that access the internet increases, the possibility of forking the web, as Doc Searls calls it, increases. Remember PointCast, Sidewiki, Google News, the Digg bar — same questions. Something has to give here. Surfing the web works very well on the iPad — the surfing works; the problem is that it’s the web sites that don’t.

The issues embedded in these readers stretch back to the beginning of the web — all the way back to the moment that HTML and then RSS formed a layer, a standard, for the abstraction of underlying data from its representation. Regardless of your view of the touch based interface, it’s undeniable that the iPad represents a meaningful shift in how you can view information. Match that with the insanity of how many web sites look today and you have a rich opportunity for innovation.

Users, publishers, advertisers, browsers, aggregators, widget makers — pretty much everyone is going to try to address this issue. Some of these reader apps use the criteria that RSS established (excerpt or full text) to determine whether to re-contextualize the entire page or just a snippet of it. Some of them just scrape the entire web page, and some are emerging as potentially powerful middleware tools. PressedPad is installed on this blog — it’s somewhere between a WordPress plugin and a theme (note to users: install it as a plugin). PressedPad gives me some basic controls over how to display and manage the words on this site so that they are optimized for the iPad. Similar to WPtouch, it does a great job of addressing this issue by passing control over to the site creator. This approach makes sense but it will take time to scale. In the short term we are going to see a lot of false starts here. But ultimately the reading experience will get better because of this tension and evolution, both on the iPad and the web. And so will monetization. Now that the inanity of what we have done has been laid bare, we have to fix it.
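That excerpt-vs-full-text criteria is visible right in the feed markup: full-content feeds typically carry a content:encoded element (from the RSS content module), while excerpt-only feeds carry just a short description. A minimal sketch of how a reader app might make the call — the sample feed is made up for illustration:

```python
import xml.etree.ElementTree as ET

CONTENT_NS = "{http://purl.org/rss/1.0/modules/content/}"

def item_mode(item):
    """Decide whether an RSS <item> carries full text or just a snippet.

    Feeds that publish full posts usually include <content:encoded>;
    excerpt-only feeds carry just a short <description>. A reader app
    can use this to choose between rendering the post in place or
    fetching (and re-contextualizing) the original page.
    """
    if item.find(CONTENT_NS + "encoded") is not None:
        return "full"
    return "excerpt"

rss = """<rss xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <item><title>A</title>
      <content:encoded>&lt;p&gt;full post body...&lt;/p&gt;</content:encoded>
    </item>
    <item><title>B</title><description>Short teaser...</description></item>
  </channel>
</rss>"""

for item in ET.fromstring(rss).iter("item"):
    print(item.findtext("title"), "->", item_mode(item))
# A -> full
# B -> excerpt
```

Real feeds are messier than this (some stuff full HTML into description), which is part of why the readers that scrape the whole page exist at all.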

Back to the apps themselves. Of all these reader apps, Flipboard is the most innovative. I’m still getting used to the experience — there is a lot to think about here. There is much that I like about Flipboard — it’s visually arresting for a start, beautifully laid out and stunning. Take the image below — some apps just stop you in your tracks with their ability to show off the visual capabilities of the device, and Flipboard is certainly one of these.

Visuals aside, the thing that I find interesting is Flipboard’s approach to Twitter and Facebook. It turns Twitter and Facebook into a well formatted reading experience — it takes a dynamic real time stream and re-prints it as if it’s a magazine. I like the use of Tweets as headlines. I have often thought about Twitter’s 140 character length as headline publishing. Flipboard takes this literally — using the Tweet as the headline with excerpts of the content displayed under it. The Facebook stream works less well. Facebook isn’t a news stream, it’s more of a social stream — and I find that Flipboard randomly drops me into Facebook at a level that I’m not interested in. I flip pages and find myself browsing personal pictures from someone I barely know — something that I would have skipped by on Facebook.com.

But it is this representation of a stream as a magazine that I struggle with the most. The metaphor is overwrought in my mind. I hear the theoretical arguments that Scoble makes re: layout, but they don’t translate for me in practice. The stream of data coming from Twitter and Facebook isn’t a magazine — formatting it as such places it into a context that doesn’t fit particularly well and certainly doesn’t scale well (from a usage perspective). Because it looks like a magazine and feels like one, I tend to read it like one — and this content isn’t meant to be used like a magazine. The presentation feels too finished. I have written before about the need for unfinished media and how it opens the door for participation. This feels like it closes that door — it allows too narrow an entry path for interaction. And finally, what they are trying to do is technically hard. It’s hard to algorithmically determine which text should be large vs. small and where to place emphasis — just as it’s hard to algorithmically de-dup multiple streams, or to reliably display the images that correspond to the title.

These are my initial Flip thoughts. I am fascinated by this category and the conversations Pulse, Flipboard and others have started. The innovation here is just getting going and I can’t wait to see what comes next.

Browsers. I’m using Life Browser a lot and liking it. The Queue feature is great — enable the Q button and any links you click on the page get “queued up” behind it in a stack. I’m interested to see things like Tab Candy on Firefox come to the iPad.

Some conclusions …

1. It’s early days.

There wasn’t a single application that I found that really stood out and remained interesting after a few weeks of use. Many were recast versions of iPhone applications. I did find things that are edging in the direction of truly native — and most of those I outlined above. This conclusion isn’t surprising. It’s very hard to re-conceptualize interfaces and experiences. The launch of the Magic Trackpad demonstrates how committed Apple is to this interface. If this is truly a new form, three months is barely a teaser — we have much to do and much to learn here. And in the past few weeks the pace of launches of interesting applications has started to pick up significantly. I’m spending more time in drawing apps and in some quasi-enterprise apps. I can’t wait to see what the next six months bring.

2. The visual dominates, gesture emerging.

Visually arresting applications are the things that pop today. Many of them are just beautiful to look at. The pond is lovely, and so are the roller coaster, many of the games, and Flipboard. Have you been struck by the bookshelf in iBooks? I was. But I suspect much of what I’m responding to is the quality of the screen and the images being displayed — i.e. the candy, not the sustenance. Many of the apps that had an initial wow factor I have now deleted. Visual graphics need to be part of the quality and essence of the experience, not just eye candy. And the visual needs to be integrated into the gestural. Maybe artists will take it across this threshold — I was sorry that the Seven on Seven event happened right around the launch; I hope that for the next one some artists will opt to produce something on the iPad. Gesture based interfaces are emerging — slowly, but they are coming. I used PressedPad to “iPad”-ize this blog and the experience works well(ish) — the focus is simply on making the navigation gestures applicable. But note even here: when I showed this iPad-enabled blog to @wesissman he mailed me “looks amazing – i cant figure out how to actually read the posts – but looks great”. We are in that early part of the experience of a new device where the visual is so astounding that we, in a sense, need to get over it in order to figure out how to make it useful.

3. It’s a social device.

It’s a social device, yet many of the applications are single user and don’t think through the connected aspects of the device. While the device is highly personal, it’s also social — it caters very well to multiple users and multiple devices. I haven’t figured out why this is so, but for some reason the iPad has a highly personal, intimate feel — yet its social representation is far less personal. Try this out: leave an iPad lying around in a conference room and people will feel very comfortable using it. In the first few weeks it was fair to say that everyone simply wanted to try one — but the behaviour persists. In the same way, I have brought an iPad to meetings and passed it around the table — it’s a very sharable, social device. In this mix of personal and social — single user and multi-user / multi-device — there is, I believe, a trove of opportunity for innovation. And then add connectivity to the mix. This device is designed as a connected device (connected both to other devices and to the network) — it will open up paths of connected innovation we can only imagine today.

4. Enterprise is a-coming

I have been struck by how popular VPN and other virtualization apps are. It suggests a lot of people are starting to use the iPad in the enterprise. I heard some numbers suggesting that more than 15% of the iPads sold are linked to corporate accounts. The use cases are a little outside of what I know and think about, but I suspect there is a lot that will emerge here. The device requires very little IT overhead — the total cost of ownership of these devices has to be a fraction of a normal PC’s.

So here is an initial set of thoughts about the iPad. I’m interested to hear what you think. One of the other incidental properties of the iPad is its initial lack of focus. The iPhone is in the first instance a phone, the Kindle is a book reader — the iPad is an open tablet, for us to create on. I believe there is much to do here. The tablet has been the next great form factor for a long time now, but I think it’s finally arrived. We now have to build the experiences to suit the device.

The last post I did about real time web data mixed data with commentary and a fake headline about how data is sometimes misunderstood in regard to the real time web. This post repeats some of that data, but here the focus is the data itself. I will update the post periodically with relevant data that we see at betaworks or that others share with us. To that end, this post is in reverse order, with the newest data on top.

Tracking the real time web data

The measurement tools we have only sometimes work for counting traffic to web pages, and they certainly don’t track or measure traffic in streams, let alone aggregate up the underlying ecosystems that are emerging around these new markets. At betaworks we spend a lot of time looking at and tracking this underlying data set. It’s our business and it’s fascinating. Like many companies, each of the individual businesses at betaworks has fragments of data sets, but because betaworks acts as an ecosystem of companies we can mix and match the data to get results that are more interesting and hopefully offer greater insight.

——————————-

(i) tumblr growth for the last half of 2009

Another data point re: growth of the real time web through the second half of last year, through to Jan 18th of this year — tumblr continues to kill it. I read an interesting post yesterday about how tumblr is leading its category through innovation and simple, effective product design. The Compete numbers quoted in that post are less impressive than these directly measured Quantcast numbers.

(h) Twitter vs. the Twitter Ecosystem

Fred Wilson’s post adds some solid directional data on the question of the size of the ecosystem. “You can talk about Twitter.com and then you can talk about the Twitter ecosystem. One is a web site. The other is a fundamental part of the Internet infrastructure. And the latter is 3-5x bigger than the former and that delta is likely to grow even larger.”

(g) bit.ly: last week was the largest week ever for clicks on bit.ly links — 564m were clicked in total. On Jan 6th there was a record 98m decodes. That’s about 1,100 clicks every second.
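A quick back-of-the-envelope check on those numbers — just the arithmetic, using the figures quoted above:

```python
# Sanity-check the bit.ly numbers quoted above.
weekly_clicks = 564_000_000   # total decodes in the record week
record_day = 98_000_000       # decodes on Jan 6th

per_second_week = weekly_clicks / (7 * 24 * 3600)
per_second_peak = record_day / (24 * 3600)

print(round(per_second_week))  # ~933 clicks/sec averaged over the whole week
print(round(per_second_peak))  # ~1134 clicks/sec averaged over the record day
```

So the ~1,100 clicks per second figure corresponds to the record day; averaged over the full week it works out to a bit over 900 per second.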

(f) Comparing the real time web vs. Google for the second half of 2009

Andrew Parker commented on the last post that the chart displaying the growth trends was hard to decipher and that it may be simpler to show month over month trending. It turns out that month over month is also hard to decipher. What is easier to read is this summary chart. It shows the average month over month growth rates for the RT web sites (the average from Chart A). Note that 27.33% is the average monthly growth rate for the real time web companies in 2009 — that’s astounding. The comparable number for the second half of 2009 was 10.5% a month — significantly lower, but still a very big number for m/m growth.
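To put those monthly rates in perspective, here is what they compound to — simple arithmetic on the figures above:

```python
# Compound a monthly growth rate over a number of months.
def compound(monthly_rate, months):
    return (1 + monthly_rate) ** months

# 27.33% per month sustained for a year is roughly an 18x increase;
# 10.5% per month for the second half of the year is roughly 1.8x.
print(round(compound(0.2733, 12), 1))  # ~18.2x in a year
print(round(compound(0.105, 6), 2))    # ~1.82x in six months
```

That is why a "lower" 10.5% monthly rate is still a very big number: it nearly doubles the base in six months.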

(e) Ongoing growth of the real time stream in the second half of 2009

This is a question people have asked me repeatedly in the past few weeks: did the real time stream grow in Q4 2009? It did. Not at the pace that it grew during Q1–Q3, but our data at betaworks confirms continued growth. One of the best proxies we use for directional trending in the real time web is bit.ly decodes. This is the raw number of bit.ly links that are clicked on across the web. Many of these clicks occur within the Twitter ecosystem, but a large number happen outside of Twitter, by people and by machines — there is a surprising amount of diversity within the real time stream, as I posted about a while back.

Two charts are displayed below. On the bottom are bit.ly decodes (blue) and encodes (red) running through the second half of last year. On the top is a different but related metric. Another betaworks company is Twitterfeed, the leading platform enabling publishers to post from their sites into Twitter and Facebook. This chart graphs the total number of feeds processed (blue) and the total number of publishers using Twitterfeed, again through the second half of the year (note: if the inline charts are too small to read you can click through and see full size versions). As you can see, similar to the bit.ly chart, growth at Twitterfeed was strong for the entire second half of 2009.

Both these charts illustrate the ongoing shift that is taking place in terms of how people use the real time web for navigation, search and discovery. My preference is to look at real user interactions as strong indicators of user behavior. For example, I often find Google Trends more useful than comScore, Compete or the other “page” based measurement services. As interactions online shift to streams, we are going to have to figure out how measurement works. I feel like today we are back in the early days of the web, when people talked about “hits” — it’s hard to parse the relevant data from the noise. The indicators we see suggest that the speed at which this shift to the real time web is taking place is astounding. Yet it is happening in a fashion that I have seen a couple of times before.

(d) An illustration of the step nature of social growth. bit.ly weekly decodes for the second half of 2009.

Most social networks I have worked with have grown in a step function manner. You see this clearly when you zoom into the bit.ly data set and look at weekly decodes, illustrated above. You often have to zoom in and out of the data set to see and find the steps, but they are usually there. Sometimes they run for months — either up or sideways. You can see the steps in Facebook’s growth in 2009. I saw this effect up close with ICQ, AIM, Fotolog, Summize and now with bit.ly. Someone smarter than me has surely figured out why these steps occur. My hypothesis is that as social networks grow they jump in a sporadic fashion from one dense cluster of relationships to a new one. The upward trajectory is the adoption cycle of that new, dense cluster, and the flat part of the step is the period before the jump to the next cluster. Blended in here there are clearly issues of engagement vs. trial, but it’s hard to weed those out from this data set. As someone mentioned to me in regards to the last post, this is a property of scale-free networks.
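One crude way to surface those steps in a weekly series is to label each week-over-week change as a jump or a plateau. A sketch — the series below is made up for illustration (not bit.ly data), and the 10% threshold is arbitrary:

```python
def label_steps(weekly, jump=0.10):
    """Label each week-over-week change as a 'jump' or a 'plateau'.

    The hypothesis above is that social networks grow in steps: a
    sharp adoption jump into a new dense cluster, then a flat stretch.
    Flagging weeks whose growth exceeds a threshold is a crude way to
    make those steps visible in a weekly series.
    """
    labels = []
    for prev, cur in zip(weekly, weekly[1:]):
        growth = (cur - prev) / prev
        labels.append("jump" if growth > jump else "plateau")
    return labels

# An illustrative series of weekly decodes, in millions: two steps.
decodes = [40, 41, 42, 58, 59, 60, 61, 80, 81]
print(label_steps(decodes))
# ['plateau', 'plateau', 'jump', 'plateau', 'plateau', 'plateau', 'jump', 'plateau']
```

Zooming in and out, as described above, amounts to varying the window (weekly vs. monthly) and the threshold until the steps stand out from the noise.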

(c) Google and Amazon in 2009

Google and Amazon — this is what it looked like in 2009:

It’s basically flat. Pretty much every user in the domestic US is on Google for search and navigation and on Amazon for commerce — impressive baseline numbers, but flat for the year (source: Quantcast). So let’s turn to Twitter.

(b) Twitter – an estimate of Twitter.com and the Twitter ecosystem

Much ink has been spilt over Twitter.com’s growth in the second half of the year. During the first half of the year Twitter experienced hyper growth — and unprecedented media attention. In the second half of the year the media attention waned and the service went through what I suspect was a digestion phase — that step again? Steps aside — because I don’t in any way seek to represent Twitter Inc. — there are two questions that in my mind haven’t been answered fully:

(i) What was international growth in the second half of 2009? That was clearly a driver for Facebook in ’09, and recent data suggests Twitter’s growth there continued to be strong.

(ii) what about the ecosystem.

Unsurprisingly its the second question that interests me the most. So what about that ecosystem? We know that approx 50% of the interactions with the Twitter API occur outside of Twitter.com but many of those aren’t end user interactions. We also know that as people adopt and build a following on Twitter they often move up to use one of the client or vertical specifics applications to suit their “power” needs. At TweetDeck we did a survey of our users this past summer. The data we got suggested 92% of them then use Tweetdeck everyday — 51% use Twitter more frequently since they started using TweetDeck. So we know there is a very engaged audience on the clients. We also know that most of the clients arent web pages — they are flash, AIR, coco, iPhone app’s etc. all things that the traditional measurement companies dont track.

What I did to estimate the relative growth of the Twitter ecosystem is the following. I used Google Trends and compiled data for Twitter and the key clients. I then scaled that chart over the Twitter.com traffic. Is it correct? — no. Is it made up? — no. It’s a proxy and this is what it looks like (again, you can click the chart to see a larger version).

Similar to the Twitter.com traffic you see the flattening out of the ecosystem in the summer. But you see growth in the forth quarter that returns to the summer time levels. I suspect if you could zoom in and out of this the way I did above you would see those steps again.

(a) The Real Time Web in 2009

Add in Facebook (blue) and Meebo (green) both steaming ahead — Meebo had a very strong end of year. And then tile on top the bit.ly data and the Twitterfeed numbers (bit.ly on the right hand scale) and you have an overall picture of growth of the real time web vs. Google and Amazon. As t

For a while now I have been thinking about doing a post about some of the data we track at betaworks. Over the past few months people have written about Twitter’s traffic being up, down or sideways — the core question people are asking is: is the real time web growing or not? Is this hype or substance? Great questions — and from the data set I see, the answer to all of the above is yes. Adoption and growth are happening pretty much across the board — and in some areas at an astounding pace. But tracking this is hard. It’s hard to measure something that is still emerging. The measurement tools we have only sometimes work for counting traffic to web pages, and they certainly don’t track or measure traffic in streams, let alone aggregate up the underlying ecosystems that are emerging around these new markets. At betaworks we spend a lot of time looking at and tracking this underlying data set. It’s our business and it’s fascinating.

I was inspired to finally write something first by a good experience and then by a bad one. First the good one. Earlier this week I saw a Tweet from Marshall Kirkpatrick about Gary Hayes’s social media counter. It’s very nicely done — and an embed is available. This is what it looks like (note the three buttons on top are hot; you can see the social web, mobile and gaming):

The second thing was less fun, but I’m sure it has happened to many an entrepreneur. I was emailed earlier this week by a reporter asking about some data — I didn’t spend the time to weed through the analysis with him, and the reporter published data that was misleading. More on this incident later.

Let’s dig into some data. First — addressing the question people have asked me repeatedly in the past few weeks: did the real time stream grow in Q4 2009? It did. Not at the pace that it grew during Q1–Q3, but our data confirms continued growth. One of the best proxies we use for directional trending in the real time web is bit.ly decodes. This is the raw number of bit.ly links that are clicked on across the web. Many of these clicks occur within the Twitter ecosystem, but a large number are outside of Twitter, by people and by machines — there is a surprising amount of diversity within the real time stream, as I posted about a while back. Two charts are displayed below. On the left are bit.ly decodes (blue) and encodes (red) running through the second half of last year. On the right is a different but related metric. Another betaworks company is Twitterfeed, the leading platform enabling publishers to post from their sites into Twitter and Facebook. This chart graphs the total number of feeds processed (blue) and the total number of publishers using Twitterfeed, again through the second half of the year (note: if the charts inline are too small to read you can click through and see full size versions). As you can see, similar to the left-hand chart, at Twitterfeed the growth was strong for the entire second half of 2009.

Both these charts illustrate the ongoing shift that is taking place in terms of how people use the real time web for navigation, search and discovery. My preference is to look at real user interactions as strong indicators of user behavior. For example, I often find Google Trends more useful than comScore, Compete or the other “page”-based measurement services. As interactions online shift to streams, we are going to have to figure out how measurement works. I feel like today we are back to the early days of the web when people talked about “hits” — it’s hard to parse the relevant data from the noise. The indicators we see suggest that the speed at which this shift to the real time web is taking place is astounding. Yet it is happening in a fashion that I have seen a couple of times before.

Most social networks I have worked with have grown in a step function manner. You see this clearly when you zoom into the bit.ly data set and look at weekly decodes. This is less clear but also visible when you look at daily trending data (on the right) — but add a 3 week moving average on top of that and you can once again see the steps. You often have to zoom in and out of the data set to find the steps, but they are usually there. Sometimes they run for months — either up or sideways. I saw this with ICQ, AIM, Fotolog, Summize, through to bit.ly. Someone smarter than me has surely figured out why these steps occur. My hypothesis is that as social networks grow, they jump in a sporadic fashion to a new dense cluster or network of relationships. The upward trajectory is the adoption cycle of that new, dense cluster, and the flat part of the step is the period before the jump to the next cluster. Blended in here there are clearly issues of engagement vs. trial, but it’s hard to weed those out from this data set. I learnt a lot of this from Yossi Vardi and Adam Seifer, two people I had the privilege of working with over the years — two people whose DNA is wired right into this stuff. At Fotolog, Adam could take the historical data set and illustrate how these clusters moved — in steps — from geography to geography. It’s fascinating.
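The smoothing trick mentioned above is easy to reproduce. A minimal Python sketch (the daily counts below are invented, not bit.ly data) shows how a trailing 3-week moving average makes step-like plateaus visible in a daily series:

```python
# Sketch: smooth daily counts with a trailing 3-week (21-day) moving average.
# The numbers here are illustrative, not actual bit.ly decode data.

def moving_average(series, window=21):
    """Trailing moving average; early points use a shorter window."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Two "steps": a plateau around 100 decodes/day, then a jump to ~200.
daily = [100] * 30 + [200] * 30
smoothed = moving_average(daily)
# Early smoothed points sit near 100, late points settle at 200;
# the ramp in between is the adoption curve of the new cluster.
```

Plotting `daily` against `smoothed` on the same axes is the zoom-in/zoom-out exercise described above: the plateaus are the flat parts of the step, and the ramp between them is the adoption of a new cluster.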

TechCrunch falls off a cliff

Ok, I’m sure there are some people reading who are thinking: well, this is interesting, but I actually want to read about TechCrunch falling off a traffic cliff. I’m sorry — I don’t actually have any data to suggest that happened. After noting yesterday that a provocative headline is sometimes a substitute for data, I thought: heck, I can do this too! This section of the post is more of a cautionary tale — if you are confused by this twist, let me back up to where I started. I mentioned that there were two motivations for me sitting down and writing this post. The second one was that earlier this week a TechCrunch story ran saying that bit.ly’s market share had shifted dramatically. It hasn’t. The data was simply misunderstood by the reporter. The tale (I did promise a tale) began last August, when TechCrunch ran the following chart about the market share of URL shorteners.

The pie chart showed the top 5 URL shorteners and then calculated the market share each had — what percent each was *of* the top five. The data looks like this:

Not much news in those numbers, especially when you consider they come from the Twitter “garden hose” (a subset of all tweets) and swing by as much as +/- 5% daily. The tumblr growth into the top 5 and the ow.ly bump are nice shifts for them – but not really a story. The hitch was that the reporter didn’t consider that there are other URLs in the Twitter stream aside from these five. Some are short URLs and some aren’t. So this metric doesn’t accurately reflect overall short-URL market share — it shows the shuffling of market share amongst the top five. But media will be media. I saw a Tweet this week about how effective Twitter is at disseminating information — true and false — despite all the shifts that are going on, headlines in a sense carry even more weight than in the “read all about it” days.
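The reporter’s error is worth making concrete: a share computed over only the top five is not a share of the whole stream. A small sketch with invented counts (the numbers are hypothetical, not garden hose data):

```python
# Illustrative counts of links seen in a sample; "other" is everything
# outside the top five. All numbers are invented.
counts = {"bit.ly": 500, "tinyurl": 200, "ow.ly": 100,
          "is.gd": 60, "tumblr": 40, "other": 300}

top5 = {k: v for k, v in counts.items() if k != "other"}

def share(n, total):
    """Percentage share, rounded to one decimal place."""
    return round(100.0 * n / total, 1)

# Share *of the top five* -- what the pie chart actually showed.
share_of_top5 = share(counts["bit.ly"], sum(top5.values()))
# Share *of all links* -- what the headline implied.
share_of_all = share(counts["bit.ly"], sum(counts.values()))
# The two denominators differ, so comparing one figure from August
# with the other from January manufactures a "dramatic shift".
```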

The lesson here for me was the importance of helping reporters and analysts get access to the underlying data — data they can use effectively. We sent the reporter the data, but he saw a summary data set that included the other URLs and didn’t understand that back in August there were also “other” URLs. After the fact we worked to sort this out and he put a correction in his post. But the headline was off and running — irrespective of how dirty or clean the data was. Basic mistake — my mistake — and this was with a reporter who knows this stuff well. Given the paucity of data out there and the emergent state of the real time web, this stuff is bound to happen.

Ironically, yesterday bit.ly hit an all-time high in terms of decodes — over 90m. But back to the original question — there is a valid question the reporter was seeking to answer, namely: what is the market share of dem short thingy’s? We track this metric — using the Twitter garden hose and identifying most of the short URLs to produce a ranking (note it’s a sample, so the occurrences are a fraction of the actuals). And it’s a rolling 24-hour view — so it moves around quite a bit — but it’s nonetheless informative. This is what it looked like yesterday:
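Mechanically, a ranking like this is simple to produce: extract URLs from a window of sampled tweets, keep the known short-URL domains, and count. A toy Python sketch (the domain list and the sample tweets are hypothetical, and a real pipeline would consume the garden hose stream rather than a list of strings):

```python
import re
from collections import Counter

# A partial, hypothetical list of known short-URL domains.
SHORT_DOMAINS = {"bit.ly", "j.mp", "tinyurl.com", "ow.ly", "is.gd"}

# Capture the host portion of any http(s) URL in tweet text.
URL_RE = re.compile(r"https?://([^/\s]+)/\S+")

def rank_short_urls(tweets):
    """Count occurrences of known short-URL domains in sampled tweets."""
    hits = Counter()
    for text in tweets:
        for domain in URL_RE.findall(text):
            if domain.lower() in SHORT_DOMAINS:
                hits[domain.lower()] += 1
    return hits.most_common()

sample = [
    "reading this http://bit.ly/abc123",
    "also http://bit.ly/def456 and http://ow.ly/x1",
    "long link http://example.com/page stays out of the ranking",
]
ranking = rank_short_urls(sample)  # [('bit.ly', 2), ('ow.ly', 1)]
```

Because it is a sample over a rolling window, the counts are a fraction of the actuals and the ordering can shuffle day to day, which is exactly the caveat above.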

Over time this data set is going to become harder to use for this purpose. At bit.ly we kicked off our white label service before the holidays. Despite months of preparation, we weren’t expecting the demand. As we provision and set up the thousands of publishers, bloggers and brands who want white label services, it’s going to result in a much more diverse stream of data in the garden hose.

Real Time Web Data

Finally, I thought it would be interesting to try to get a perspective on the emergence of the real time web in 2009 — how did its growth compare and contrast with the incumbent web category leaders? Let me try to frame up some data around this. Hang in there, some of the things I’m going to do are hacks (at best) — as I said, I was inspired! Let’s start with user growth in the US among the current web leaders — Google and Amazon — this is what it looked like in 2009:

It’s basically flat. Pretty much every user in the domestic US is on Google for search and navigation and on Amazon for commerce — impressive baseline numbers but flat for the year (source: Quantcast). So then let’s turn to Twitter. Much ink has been spilt over Twitter.com’s growth in the second half of the year. During the first half of the year Twitter’s growth, I suspect, was driven to a great extent by the unprecedented media attention it received — media and celebrities were all over it. Yet in the second half of the year that waned, and traffic to the Twitter.com web site was flat for the second half of the year. That step issue again?

Placing steps aside — because I don’t in any way seek to represent Twitter Inc. — there are two questions that haven’t been answered: (a) what about international growth? That was clearly a driver for Facebook in ’09; where was Twitter internationally? (b) what about the ecosystem? Unsurprisingly, it’s the second question that interests me the most. So what about that ecosystem?

We know that approximately 50% of the interactions with the Twitter API occur outside of Twitter.com, but many of those aren’t end-user interactions. We also know that as people adopt and build a following on Twitter they often move up to one of the client or vertical-specific applications to suit their “power” needs. At TweetDeck we did a survey of our users this past summer. The data suggested 92% of them use TweetDeck every day — and 51% use Twitter more frequently since they started using TweetDeck. So we know there is a very engaged audience on the clients. We also know that most of the clients aren’t web pages — they are Flash, AIR, Cocoa, iPhone apps, etc., all things that the traditional measurement companies don’t track.

What I did to estimate the relative growth of the Twitter ecosystem is the following: I used Google Trends and compiled data for Twitter and the key clients. I then scaled that chart over the Twitter.com traffic. Is it correct? No. Is it made up? No. It’s a proxy, and this is what it looks like (again, you can click the chart to see a larger version):
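For the curious, the scaling step is just anchoring a relative index to one known measurement. A rough sketch with invented numbers (Google Trends values are a 0–100 relative index, not absolute traffic):

```python
# Google Trends returns relative interest (0-100), not visitor counts.
# If one period's traffic is known, use it as an anchor to rescale the
# whole index into an absolute estimate. All numbers are invented.

trends_index = [40, 55, 70, 100, 95, 98]   # relative interest, by month
measured_visitors = 20_000_000             # known traffic at the anchor month
anchor = 3                                 # the month we can actually measure

scale = measured_visitors / trends_index[anchor]
estimated = [round(x * scale) for x in trends_index]
# estimated[anchor] equals measured_visitors by construction;
# every other point is a proxy, not a measurement.
```

This is why the post hedges: the shape of the resulting curve is meaningful, the absolute levels are not.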

Similar to the Twitter.com traffic, you see the flattening out in the summer. But, similar to the data sets referenced above, you see growth in the fourth quarter. I suspect if you could zoom in and out of this the way I did above you would see those steps again. So let’s put it all together! It’s one heck of a busy chart. Add in Facebook (blue) and Meebo (green), both steaming ahead — Meebo had a very strong end of year. Then tile on top the bit.ly data and the Twitterfeed numbers (both on different scales) and you have an overall picture of the growth of the real time web vs. Google and Amazon.

Ok, one last snapshot, then I’m wrapping up. Chartbeat — yep, another betaworks company — had one of its best weeks ever this past week, no small thanks to Jason Calacanis’s New Year post about his top 10 favorite web products of 2009. To finish up, here is a video of the live traffic flow coming into Fred Wilson’s blog at AVC.com on the announcement of the Google Nexus One phone. Steve Gillmor mentioned the other week how sometimes interactions in the real time web just amaze one. Watching people swarm to a site is a pretty enthralling experience. We have much work to do in 2010. Some of it will be about figuring out how to measure the real time web. Much of it will be continuing to build out the real time web and learning about this fascinating shift taking place right under our feet.

random footnote:

A data point I was sent this morning by Iain that was interesting — yet it didn’t seem to fit in anywhere?! Asian Twitter clients were yesterday over 5% of the requests visible in the garden hose.

I had the good fortune of receiving an advance copy of Ken Auletta’s forthcoming book “Googled, The End of the World as We Know It“. It’s a fascinating read, one that raises a whole set of interesting dichotomies related to Google and their business practices. Contrast the fact that Google’s business drives open and free access to data and intellectual property, so that the world becomes part of their corpus of data, with how tightly they guard their own IP in regards to how to navigate that data. Contrast the fact that the users and publishers who gave Google the insights to filter and search data are the ones who are then taxed to access that data set. Contrast Google’s move into layers beyond web sites (e.g., operating systems, web browsers) with their apparent belief that they won’t have issues stemming from walled gardens and tying. In Google we have a company that believes “Don’t be evil” is a sufficient promise for their users to trust their intentions, yet it is a company that has never articulated what it thinks is evil and what is not (Google.cn, anyone?).

There is a lot to think about in Auletta’s book – it’s a great read. When I began reading, I hoped for a prescriptive approach, a message about what Google should do, but instead Auletta provides the corporate history and identifies the challenging issues but leaves it to the reader to form a position on where they lead. In my case, the issue that it got me thinking most about was antitrust.

My bet is that in the coming few years Google is going to get hauled into an antitrust episode similar to what Microsoft went through a decade ago. Google’s business has grown to dominate navigation of the Internet. Matched with their incredibly powerful and distributed monetization engine, this power over navigation is going to run headlong into a regulator. I don’t know where (US or elsewhere) or when, but my bet is that it will happen sooner rather than later. And once it does happen, the antitrust process will again raise the thorny issue of whether regulation of some form is an effective tool in the fast-moving technology sector.

I was a witness against Microsoft in the remedy phase of its antitrust trial, and I still think a lot about whether technology regulation works. I now believe the core position I advocated in the Microsoft trial was wrong. I don’t think government has a role in participating in technology design, and I believe the past ten years have adequately illustrated that the pace of innovation and change will outrun any one company’s ability to monopolize a market. There’s no question in my mind that Microsoft still has a de facto monopoly on the market for operating systems. There’s also no question that the US and EU regulatory environments have constrained the company’s actions, mostly for the better. But the primary challenges for Microsoft have come from Google and, to a lesser extent, from Apple. Microsoft feels the heat today, but it is coming from Silicon Valley, not Brussels or Washington, and it would be feeling this heat no matter what had happened in the regulatory sphere. The EU’s decisions to unbundle parts of Windows did little good for RealNetworks or Netscape (which had been harmed by the bundling in the first place), and my guess is that Adobe’s Flash/AIR and Mozilla’s Firefox would be thriving even if the EU had taken no action at all.

But if government isn’t effective at forward-looking technology regulation, what alternatives do we have? We can restrict regulation to instances where there is discernible harm (approach: compensate for past wrongs, don’t design for future ones) or stay out and let the market evolve (approach: accept the voracious appetite of these platforms because they’re temporary). But is there another path? What about a corporate statement of intent like Google’s “Don’t be evil”?

“Don’t be evil” resonated with me because it suggested that Google as a company would respect its users first and foremost and that its management would set boundaries on the naturally voracious appetite of its successful businesses.

In the famous cover letter in Google’s registration statement with the SEC before its IPO, its founders said: “Our goal is to develop services that significantly improve the lives of as many people as possible. In pursuing this goal, we may do things that we believe have a positive impact on the world, even if the near term financial returns are not obvious.” The statement suggests that there are a set of things that Google would not do. Yet as Auletta outlines, “don’t be evil” lacks forward looking intent, and most important it doesn’t outline what good might mean.

Nudge please …

Is there a third way — an alternative that places the company builders in a more active position? After almost two decades of development, I believe many of the properties of the Internet have been documented and discussed, so why not distill these and use them as guideposts? I love reading and rereading works like the Stupid Network, or the Cluetrain Manifesto, or the Cathedral and the Bazaar, or (something seasonal!) the Halloween Memos. In these works, and others, there is a mindset, an ethos or culture, that is philosophically consistent with the medium. When I first heard “Don’t be evil” my assumption was that it (and, by extension, good) referred to that very ethos. What if we could unpack these principles, so that the builders of the things that make up these internets could make explicit their intent and begin to establish a compact, vs. a loose general statement of “goodness” that is subject to the constraint that “good” can be relative to the appetite of the platform? Regulation in a world of connected data, where the network effect of one platform helps form another, has much broader potential for unintended consequences. How we address these questions is going to affect the pace and direction of technology-based innovation in our society. If forward-looking regulation isn’t the answer, can companies themselves draw some lines in the sand, unpack what “don’t be evil” suggested, and nudge the market towards an architecture in which users, companies, and other participants in the open internet signal the terms and expectations they have?

Below is a draft list of principles. It is incomplete, I’m sure — I’m hoping others will help complete it — but after reading Auletta’s book and after thinking about this for a while I thought it would be worth laying out some thoughts in advance of another regulatory mess.

1. Think users

When you start to build something online, the first thing you think about is users. You may well think about yourself — user #1 — and use your own workflow to intuit what others might find useful, but you start with users and I think you should end with users. This is less a principle and more a rule of thumb, and a foundation for the other principles. It’s something I try to remind myself of constantly. In my experience with big and small companies this rule of thumb seems to hold constant. If the person who is running the shop you are working for doesn’t think about end users and / or doesn’t use your product, it’s time to move on. As Eric Raymond says, you should treat your users as co-developers. Google is a highly user-centric company for one of its scale — they stated this in the preamble to their IPO S-1 and they have managed to stay relatively user-centric, with few exceptions (Google.cn likely the most obvious, maybe the Book deal). Other companies — e.g., Apple, Facebook — are less user-centric. Working on the Internet is like social anthropology: you learn by participant observation — the practice of doing and building is how you learn. In making decisions about services like Google Voice, Beacon, etc., users’ interests need to be where we start and where we end.

2. Respect the layers

In 2004 Richard Whitt, then at MCI, framed the argument for using the layer model to define communication policy. I find this very useful: it is consistent with the architecture of the internet, it articulates a clear separation of content from conduit, and it has the added benefit of being a useful visual representation of something that can be fairly abstract. Whitt’s key principle is that companies should respect the distinction between these layers. Whitt captures in a simple framework what is wrong with the cable companies or the cell carriers wanting to mediate or differentially price bits. It also helps to frame the potential problems that Sidewiki, or the iPhone, or Google Voice, or Chrome presents (I’m struck by the irony that “respecting the layers” in the case of a browser translates into no features from the browser provider being embedded into the chrome of the browser; calling the browser Chrome is suggestive of exactly what I don’t want, i.e., Google-specific chrome!). All these products have the potential to violate the integrity of the layers by blending the content and application layers. It would be convenient and simple to move on at this point, but it’s not that easy.

There are real user benefits to tight coupling (and the blurring of layers), in particular during the early stages of a product’s development. There were many standalone MP3 players on the market before the iPod. Yet it was the coupling of the iPod to iTunes, and the set of business agreements that Apple embedded into iTunes, that made that market take off (note that occurred eighteen months after the launch of the iPod). Same for the Kindle — coupling the device to Amazon’s store and to the wireless “Whispernet” service is what distinguishes it from countless other (mostly inferior) ebooks. But roll the movie forward: it’s now six and a half years after the launch of the coupled iTunes/iPod system. The device has evolved into a connected device that is coupled both to iTunes and AT&T, and the store has evolved way beyond music. Somewhere in that evolution Apple started to trip over the layers. The lines between the layers became blurred, and so did the lines between vendors, agents and users. Maybe it started with the DRM issue in iTunes, or maybe the network coupling, which in turn resulted in the Google Voice issue. I’m not sure when it happened, but it has happened, and unless something changes it’s going to be more of a problem, not less. Users, developers and companies need to demand clarity around the layers, and transparency into the business terms that bound the layers. As iTunes scales — becoming what it is, in essence, a media browser — I believe the pressure to clarify these layers will increase. An example of where the layers have blurred without the feature creep/conflict is the search box in, say, the Firefox browser. Google is the default, there is a transparent economic agreement that places them there, and users can adjust and pick another default if they wish. One of the unique attributes of the internet is that the platform on which we build things is the very same as the one we use to “consume” those things (remember the thrill of “view source” in the browser). Given this recursive aspect of the medium, it is especially important to respect the layers. Things built on the Internet can themselves redefine the layers.

3. Transparency of business terms

When a platform like Google, iTunes, Facebook, or Twitter gets to scale, it rapidly forms a basis on which third parties can build businesses. Clarity around the business terms for inclusion in the platform, and around what drives promotion and monetization within the platform, is vital to the long term sustainability of the underlying platform. It also reduces the cost of inclusion by standardizing the business interface into the platform. AdSense is a remarkable platform for monetization. The Google team did a masterful job of scaling a self-service (read: standardized) interface into their monetization system. The benefits of this have been written about at length, yet aspects of the platform, like “smart pricing”, aren’t transparent. See this blogpost from Google about smart pricing and some of the comments in the thread. They include: “My eCPM has tanked over the last few weeks and my earnings have dropped by more then half, yet my traffic is still steady. I’m lead to believe that I have been smart priced but with no information to tell me where or when”
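For readers unfamiliar with the metric in that comment: eCPM is effective earnings per thousand impressions, so revenue halving on steady traffic shows up directly as a halved eCPM. A trivial sketch (the figures are invented):

```python
def ecpm(earnings, impressions):
    """Effective cost per mille: earnings per 1,000 impressions."""
    return 1000.0 * earnings / impressions

# Steady traffic, revenue halves overnight: the publisher sees eCPM
# drop by half with no visibility into why -- the "smart priced"
# experience described in the comment above. Numbers are illustrative.
before = ecpm(500.0, 1_000_000)   # $0.50 eCPM
after = ecpm(250.0, 1_000_000)    # $0.25 eCPM
```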

Back in 2007 I ran a company called Fotolog. The majority of the monetization at Fotolog was via Google. One day our Google revenues fell by half. Our traffic hadn’t fallen, and up to that point our Google revenue had been pretty stable. Something was definitely wrong, but we couldn’t figure out what. We contacted our account rep at Google, who told us that there was a mistake on our revenue dashboard. After four days of revenues running at the same depressed level, we were told we had been “smart priced”. Google would not offer us visibility into how this is measured or what the competitive cluster is against which you are being tested. That opacity made it very hard for Fotolog to know what to do. If you get smart priced you can end up having to re-organize your entire base of inventory, all while groping to understand what is happening in the black box of Google. Google points out they don’t directly benefit from many of these changes in pricing (the advertisers pay less per click), but Google does benefit from the increased liquidity in the market. As with Windows, there is little transparency in regards to the pricing within the platform and the economics. This in turn leaves a meaningful constituent on the sideline, unsatisfied or unclear about the terms of their business relationship with the platform. I would argue that smart pricing, and a lack of transparency into how their monetization platform can be applied to social media, is driving advertisers to services like Facebook’s new advertising platform.

Back to Apple. iTunes is, as I outlined above, a media browser — we think about it as an application because we can only access Apple stuff through it; a simple, yet profound design decision. Apple created this amazing experience that arguably worked because it was tightly coupled end to end, i.e., the experience stretched from the media through the software to the device. Then, when the device became a phone, the coupling extended to the network (here in the US, AT&T). I remember two years ago I almost bricked my iPhone — Apple reset my iPhone to its birthstate — because I had enabled installing applications that weren’t “blessed” by Apple. My first thought was: isn’t this my phone? What right does Apple have to control what I do with it? Didn’t I buy it? A couple of months ago Apple blocked Google Voice’s iPhone application; two weeks ago Apple rejected someecards’ application from the app store while permitting access to a porn application (both were designated 17+; one was satire, the other wasn’t). The issue here isn’t monopoly control, per se — Apple certainly does not have a monopoly on cell phones, nor AT&T on cell phone networks. The trouble is that there is little to no transparency into *why* these applications weren’t admitted into the app store. (someecards’ application did eventually make it over the bar; you can find it here.) Will Google Voice get accepted? Will Spotify? Rdio? someecards? As with the Microsoft of yesteryear (which, among other ills, forbade disclosure of its relationships with PC makers), there is an opaqueness to the business principles that underlie the iTunes app store. This is a design decision that Apple has made and one that, so far anyway, users and developers have accepted. And, in my opinion, it is flawed. Ditto for Facebook. This past week the terms for application developers were modified once again. A lot of creativity, effort, and money has been invested in Facebook applications — the platform needs a degree of stability and transparency for developers and users.

4. Data in, data out?

APIs are a cornerstone of the emerging mesh of services that sit on top of and around platforms. Data flows from service providers should, where possible, be two-way: services that consume an API should publish one of their own. The data ownership issues among these services are going to become increasingly complex. I believe that users have the primary rights to their data, and that the applications users select have a proxy right, as do other users who annotate and comment on the data set. If you accept that as a reasonable proposition, then it follows that service providers should have an obligation to let users export that data and also let other service providers “plug into” that data stream. The compact I outline above is meaningfully different to what some platforms offer today. Facebook asserts ownership rights over the data you place in its domain; in most cases the data is not exportable by the user or another service provider (e.g., I cannot export my Facebook pictures to Flickr, nor wire up my feed of pictures from Facebook to Twitter). Furthermore, if I leave Facebook they still assert rights to my images. I know this is technically the easiest answer — having to delete pictures that are now embedded in other people’s feeds is a complex user experience — but I think that’s what we should expect of these platforms. The problem is far simpler if you just link to things and then promote standards for interconnections. These standards exist today in the form of RSS or Activity Streams — pick your flavor, let users move data from site to site, and let users store and save their data.
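To make the “data out” side concrete: a service that exposes a user’s items as plain RSS lets any other service, or the user themselves, consume them. A minimal sketch using Python’s standard library (the helper name, titles and links are invented for illustration):

```python
import xml.etree.ElementTree as ET

def to_rss(title, link, items):
    """Render a list of {'title', 'link'} dicts as a minimal RSS 2.0 feed."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item in items:
        node = ET.SubElement(channel, "item")
        ET.SubElement(node, "title").text = item["title"]
        ET.SubElement(node, "link").text = item["link"]
    return ET.tostring(rss, encoding="unicode")

feed = to_rss("My photos", "http://example.com/photos",
              [{"title": "Sunset", "link": "http://example.com/photos/1"}])
# Any reader or rival service can now subscribe to this feed --
# the user's data isn't locked inside one platform.
```

This is the design choice the principle argues for: publishing a standard, machine-readable feed is cheap for the platform and keeps the user, not the silo, in control of their data.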

5. Do what you do best, link to the rest

Jeff Jarvis's motto for newsrooms applies to service providers as well. I believe the next stage of the web is going to be characterized by a set of loosely coupled services — services that share data — offering end users the ability either to opt for an end-to-end solution or to roll their own in a specific domain where they have depth of interest, knowledge, or data. The first step in this process is that real identity is becoming public and separable from the underlying platform (vs. private in, say, The Facebook, or alias-based in most earlier social networks). In the case of services like Facebook Connect and Twitter OAuth, this not only simplifies the user experience, it also pre-populates a social graph into the service in question. OAuth flows identity into a user's web experience, vs. the disjointed efforts of the past. This is the starting point. We are now moving beyond identity into a whole set of services stitched together by users. Companies of yesteryear, as they grew in scale, started to co-opt vertical services of the web into their domain (remember when AOL put a browser inside its client, with the intention of "super-setting" the web?). That was an extreme case — but it is not all that different from Facebook's "integration" of email: a messaging system with no IMAP access, one that sends me an email to my IMAP account to tell me to check that I have a Facebook "email". This approach won't scale for users. Kevin Marks, Marc Canter, and Jerry Michalski are some of the people who have been talking for years about an open stack; in the latter half of this presentation Kevin outlines the emerging stack. I believe users will opt — over time — for best-in-class services vs. the walled-garden, roll-it-once approach.

6. Widen my experience – don't narrow it

Google search increasingly serves to narrow my experience on the web, rather than expand it. This is driven by a combination of the pressure inherent in Google's business model to push page views within its domain vs. outside (think Yahoo Finance, Google OneBox, etc.) and the evolution of an increasingly personalized search experience, which in turn tends to feed back and amplify my existing biases — serving to narrow my perspective vs. broaden it. Auletta talked about this at the end of his book, quoting Nick Carr: "They (Google) impose homogeneity on the Internet's wild heterogeneity. As the tools and algorithms become more sophisticated and our online profiles more refined, the Internet will act increasingly as an incredibly sensitive feedback loop, constantly playing back to us, in amplified form, our existing preferences." Features like social search will only exacerbate this problem. This point is the more subtle side of the one above. I wrote a post a year or two ago about thinking of centers vs. wholes and networks vs. destinations. As the web of pages becomes a web of flows and streams, the experience of the web is going to widen again. You can see this in the data — the charts in the Distribution Now post illustrate the shift that is taking place. As the visible, user-facing part of a web site becomes less important than the APIs and the myriad ways that users access the underlying data, the web, and our experience of it, will widen, again.

Conclusions

I have outlined six broad principles that I believe can be applied as a design methodology for companies building services online today. They are inspired by others — the list of whom would be very long, so I'm not going to attempt to document it; I would surely miss someone. Building companies on today's internet is by definition an exercise in standing on the shoulders of giants. Internet standards from TCP/IP onward are the strong foundation of an architecture of participation. As users pick and choose which services they want to stitch together into their cloud, can companies build services based on these shared data sets in a manner that is consistent with the expectations we hold for the medium? The web has a grain to it, and after 15 years of innovation we can begin to observe the outlines of that grain. We may not always be able to describe exactly what it is that makes something "web consistent," but we do know it when we see it.

The Microsoft antitrust trial is a case study in regulators acting as design architects. It didn't work. Google's "don't be evil" mantra represents an alternative approach, one that is admirable in principle but lacking in specificity. I outline a third way here, one in which we as company creators coalesce around a set of principles stating what we aspire to do and not do — principles that will be visible in our words and our deeds. We can then nudge our own markets forward instead of relying on the "helping hand" of government.

As the title suggests, the focus was on the Twitter ecosystem in London. Our conversation also touched on the overall size and health of the real-time ecosystem — a topic that didn't make it into the article. It's hard to gauge the health of a business ecosystem that is still very much under development and has yet to mature into one that produces meaningful revenues. Yet the question got me thinking — and it reminded me that it has been a while since I posted here. It was one busy summer. I have a couple of long posts I'm working on, but for now I want to do this quick post on the real-time ecosystem and offer up some metrics on its health.

Back in June I did a presentation at Jeff Pulver's 140conf, the topic of which was the real-time / Twitter ecosystem. Since then, I have been thinking about the diversity of data sources, notably the question of where people are publishing and consuming real-time data streams. At betaworks we are fairly deep into the real-time / Twitter ecosystem. In fact, every company at betaworks is a participant, in one manner or another, in this ecosystem — and that's a feature, not a bug! Of the 20 or so companies in the betaworks network, there is a subset that we operate; one of those is bit.ly.

In an attempt to answer this question about the diversity of the ecosystem, let me run through some internal data from bit.ly. bit.ly is a URL shortener that offers, among other things, real-time tracking of the clicks on each link (add "+" to any bit.ly URL to see this data stream). With a billion bit.ly links clicked on in August — 300m last week — bit.ly has become almost part of the infrastructure of the real-time cloud. Given its scale, bit.ly's data is a fair proxy for the activity of the real-time stream, at least of the links in the stream.

On Friday of this week (yesterday) there were 20,924,833 bit.ly links created across the web (we call these "encodes"). These 20.9m encodes are not unique URLs, since one popular URL might have been shortened by multiple people, but each encode represents intentionality of some form. bit.ly in turn retains a parent:child mapping, so that you can see what your sharing of a link generates vs. the population (e.g., I shared a video on Twitter the other day; my specific bit.ly link got 88 clicks, out of a total of 250 clicks on any bit.ly link to that same video — see http://bit.ly/Rmi25+).
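To make the parent:child idea concrete, here is a small Python sketch. The short links and click counts are made up (echoing the 88-of-250 example above); it shows how per-child clicks roll up to a total for the parent URL.

```python
from collections import defaultdict

# Hypothetical click log: (short_link, target_url, clicks). The short links
# here are invented child encodes pointing at a shared parent URL.
CLICKS = [
    ("bit.ly/abc", "http://example.com/video", 88),   # my share of the link
    ("bit.ly/xyz", "http://example.com/video", 162),  # everyone else's shares
    ("bit.ly/qrs", "http://example.com/other", 40),
]

def rollup(clicks):
    """Keep per-child click counts and aggregate them up to each parent URL,
    so one encode's clicks can be compared against the population total."""
    per_child, per_parent = {}, defaultdict(int)
    for child, parent, n in clicks:
        per_child[child] = (parent, n)
        per_parent[parent] += n
    return per_child, dict(per_parent)

per_child, per_parent = rollup(CLICKS)
# per_child["bit.ly/abc"] -> ("http://example.com/video", 88)
# per_parent["http://example.com/video"] -> 250
```

This two-level mapping is what lets a "+" page show both your link's clicks and the aggregate for the underlying URL.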

So where were these 20.9m encodes created? Approximately half of the encodes took place within the Twitter ecosystem. No surprise here: Twitter is clearly the leading public real-time stream, and about 20% of the updates on Twitter contain at least one link, approximately half of which are bit.ly links. But here is something surprising: less than 5% of the 20.9m came from Twitter.com (i.e., from Twitter's use of bit.ly as the default URL shortener). Over 45% of the total encodes came from other services associated in some way with Twitter — i.e., the Twitter ecosystem — a long and diverse list of services and companies that use bit.ly.

The balance of the encodes came from other areas of the real time web, outside of Twitter. Google Reader incorporated bit.ly this summer, as did Nokia, CBS, Dropbox, and some tools within Facebook. And then of course people use the bit.ly web site — which has healthy growth — to create links and then share them via instant-messaging services, MySpace, email, and countless other communications tools.

The bit.ly links that are created are also very diverse. It's hard to summarize this without offering a list of 100,000 URLs — but suffice it to say that there are a lot of pages from the major web publishers, lots of YouTube links, lots of Amazon and eBay product pages, and lots of maps. And then there is a long, long tail of other URLs. When a pile-up happens in the social web it is invariably triggered by link-sharing, and so bit.ly usually sees it in the seconds before it happens.

This data says to me that the ecosystem as a whole is becoming fairly diverse. Lots of end points are publishing (i.e., creating encodes), and many end points are offering ways to use the data streams.

In turn, this diversity of the emerging ecosystem is, I believe, an indicator of its health. Monocultures aren’t very resilient to change; ecosystems tend to be more resilient and adaptable. For me, these few data points suggest that the real-time stream is becoming more and more interesting and more and more diverse.

In February 1948, Communist leader Klement Gottwald stepped out on the balcony of a Baroque palace in Prague to address hundreds of thousands of his fellow citizens packed into Old Town Square. It was a crucial moment in Czech history – a fateful moment of the kind that occurs once or twice in a millennium.

Gottwald was flanked by his comrades, with Clementis standing next to him. There were snow flurries, it was cold, and Gottwald was bareheaded. The solicitous Clementis took off his own fur cap and set it on Gottwald’s head.

The Party propaganda section put out hundreds of thousands of copies of a photograph of that balcony with Gottwald, a fur cap on his head and comrades at his side, speaking to the nation. On that balcony the history of Communist Czechoslovakia was born. Every child knew the photograph from posters, schoolbooks, and museums.

Four years later Clementis was charged with treason and hanged. The propaganda section immediately airbrushed him out of history, and obviously, out of all the photographs as well. Ever since, Gottwald has stood on that balcony alone. Where Clementis once stood, there is only bare palace wall. All that remains of Clementis is the cap on Gottwald’s head.

— The Book of Laughter and Forgetting, Milan Kundera

The rise of social distribution networks

Over the past year there has been a rapid shift in social distribution online. I believe this evolution represents an important change in how people find and use things online. At betaworks I am seeing some of our companies get 15-20% of daily traffic via social distribution — and the percentage is growing. This post outlines some of the aspects of this shift that I think are most interesting. The post itself is somewhat of a collage of media and thinking.

Distribution is one of the oldest parts of the media business. Content is assumed to be king so long as you control the distribution flow to that content. From newspapers to News Corp, companies have understood this model well. Yet this model has never suited the Internet very well. From the closed-network ISPs to Netcenter, from Pathfinder to Active Desktop, Excite, Lycos, PointCast, and the Network Computer, from attempts to differentially price bits to preset bookmarks on your browser — these were all attempts at gatekeeping attention and navigation online. Yet the relative flatness of the internet and its hyperlinked structure has offered people the ability to route around these toll gates. Rather than client software or access, the nexus of distribution became search. Today a new distribution model seems to be emerging, one based on people's ability to publicly syndicate and distribute messages — aka content — in an open manner. This has been a part of the internet since day one — yet now it's emerging in a different form: it's not pages, it's streams; it's social, and so it's syndication. The tools serve to produce, consume, amplify and filter the stream. In the spirit of this new wave of Now Media, here is a collage of data about this shift.

Dimensions of the now web — and how it is different

Start with this constant, real-time, flowing stream of data getting published, republished, annotated and co-opted across a myriad of sites and tools. The social component is complex — consider where it's happening. The facile view is to say it's Twitter, Facebook, Tumblr or FriendFeed — pick your favorite service. But it's much more than that, because all these sites are, to varying degrees, becoming open and distributed. It's blogs, media-storage sites (i.e., TwitPic), comment boards and moderation tools (i.e., Disqus) — a whole site can emerge around an issue, become relevant for a week, and then resubmerge into the morass of the data stream. Even publishers are jumping in: only this week the Times pushed out the Times Wire. The now web — or real-time web — is still very much under construction; we are back in the dark room trying to understand the dimensions and contours of something new, or even how to map and outline its borders. It's exciting stuff.

Think streams …

First and foremost, what emerges out of this is a new metaphor — think streams vs. pages. This seems like an abstract difference, but I think it's very important. Metaphors help us shape and structure our perspective; they serve as a foundation for how we map and what patterns we observe in the world. In the initial design of the web, reading and writing (editing) were given equal consideration — yet for fifteen years the primary metaphor of the web has been pages and reading. The metaphors we used to circumscribe this possibility set were mostly drawn from books and architecture (pages, browser, sites, etc.). Most of these metaphors were static and one-way. The stream metaphor is fundamentally different. It's dynamic, it doesn't live very well within a page, and it's still very much evolving. Figuring out where the stream metaphor came from is hard — my sense is that it emerged out of RSS. RSS introduced us to the concept of web data as a stream — RSS itself became part of the delivery infrastructure, but the metaphor it introduced is becoming an important part of our everyday lives.

A stream. A real-time, flowing, dynamic stream of information — one that we as users and participants can dip in and out of; whether we participate in it or simply observe, we are a part of this flow. Stowe Boyd talks about this as the web as flow: "the first glimmers of a web that isn't about pages and browsers" (see this video interview, section 6, 7.50 mins in). This world of flow, of streams, contains a very different possibility set to the world of pages. Among other things, it changes how we perceive our needs. Overload isn't a problem anymore, since we have no choice but to acknowledge that we can't wade through all this information. This isn't an inbox we have to empty, or a page we have to get to the bottom of — it's a flow of data that we can dip into at will, but we can't attempt to gain an all-encompassing view of it. Dave Winer put it this way in a conversation over lunch about a year ago: "Think about Twitter as a rope of information — at the outset you assume you can hold on to the rope. That you can read all the posts, handle all the replies and use Twitter as a communications tool, similar to IM — then at some point, as the number of people you follow and who follow you rises, your hands begin to burn. You realize you can't hold the rope; you need to just let go and observe the rope." Over at Facebook, Zuckerberg started by framing the flow of user data as a news feed — a direct reference to RSS — but more recently he shifted to talking about it as a stream: "… a continuous stream of information that delivers a deeper understanding for everyone participating in it. As this happens, people will no longer come to Facebook to consume a particular piece or type of content, but to consume and participate in the stream itself." I have to finish up this section on the stream metaphor with a quote from Steve Gillmor. He is talking about a new version of FriendFeed, but more generally he is talking about real-time streams. 
The content and the language — this stuff is stirring souls.

We’re seeing a new Beatles emerging in this new morning of creativity, a series of devices and software constructs that empower us with both the personal meaning of our lives and the intuitive combinations of serendipity and found material and the sturdiness that only rigorous practice brings. The ideas and sculpture, the rendering of this supple brine, we’ll stand in awe of it as it is polished to a sparkling sheen. (full article here)

Now, Now, Now

The real-time aspect of these streams is essential. At betaworks we are big believers in real time as a disruptive force — it's an important aspect of many of our companies — it's why we invested a lot of money into making bit.ly real time. I remember when Jack Dorsey first saw bit.ly's plus, or info, page (the page you get to by putting a "+" at the end of any bit.ly URL) — he said this is "great, but it updates on 30 min cycles; you need to make it real time". This was August of '08 — I registered the thought, but also thought he was nuts. Here we sit in the spring of '09, having invested months in making bit.ly real time — it works, and it matters. Jack was right — people want to see the effects of how a meme is spreading, in real time. It makes sense — watching a stream on a 30-minute delay is somewhere between weird and useless. You can see an example of the real-time bit.ly traffic flow to a URL here. Another betaworks company, someecards, is getting 20% of daily traffic from Twitter. One of the founders, Brook Lundy, said the following: "real time is now vital to what we do. Take the swine flu — within minutes of the news that a pandemic level 5 had been declared, we had an ecard out on Twitter". Sardonic, ironic, edgy ecards — who would have thought they would go real time? Instead of me waxing on about real time, let me pass the baton over to Om — he summarizes the shift as well as one could:

“The web is transitioning from mere interactivity to a more dynamic, real-time web where read-write functions are heading towards balanced synchronicity. The real-time web, as I have argued in the past, is the next logical step in the Internet’s evolution. (read)

The complete disaggregation of the web in parallel with the slow decline of the destination web. (read)

More and more people are publishing more and more “social objects” and sharing them online. That data deluge is creating a new kind of search opportunity. (read)”

Only connect …

The social aspects of this real-time stream are clearly a core and emerging property. Real time gives this ambient stream a degree of connectedness that other online media types haven't had. Presence, chat, IRC and instant messaging all gave us glimmers of what was to come, but the one-to-one nature of IM meant that we could never truly experience its social value. It was thrilling to know someone else was on the network at the same time as you — and very useful to be able to message them — but it was one to one. Similarly, IRC and chat rooms were open to one-to-many and many-to-many communications, but they usually weren't public. And in the instances that they were public, the tools to moderate and manage the network of interactions were missing or crude. In contrast, the connectedness, or density, of real-time social interactions emerging today is astounding — as the examples in the collage above illustrate. Yet it's early days. There are a host of interesting questions on the social front. One of the most interesting is, I think, how the different activity streams will intersect and combine / recombine — or will they simply compete with one another? The two dominant, semi-public activity streams today are Facebook and Twitter. It is easy to think about them as similar and bound for head-on competition — yet the structure of these two networks is fairly different. Whether it's possible or desirable to combine these streams is an emerging question — I suspect the answer is that over time they will merge, but it's worth thinking about the differences when thinking about ways to bring them together. The key differences I observe between them are:

#1. Friending on Facebook is symmetrical — on Twitter it's asymmetrical. On Facebook, if I friend you, you need to friend me; not so on Twitter, where I can follow you and you may never notice or care. Similarly, I can unfollow you and again you may never notice or care. This is an important difference. When I ran Fotolog I observed the dynamics associated with an asymmetrical friend network — it is, I think, a closer approximation of the way human beings manage social relationships. And I wonder to what extent the Facebook symmetrical friend network was / is a product of the audience for which Facebook was initially created (students). When I was a student I was happy to have a symmetrical social network; today, not so much.

#2. The data on Facebook is assumed to be mostly private, or shared within private groups; Facebook itself has been mostly closed to the open web — and Facebook asserts a level of ownership over the data that passes through its network. In contrast, the data on Twitter is assumed to be public, and Twitter asserts very few rights over the underlying data. These are broad statements — worth unpacking a bit. Facebook has been called a walled garden — there are real advantages to a walled garden; AOL certainly benefited from being closed to the web for a long, long time. Yet the byproduct of a closed system is that (a) data is not accessible or searchable by the web in general — i.e., you need to be inside the garden to navigate it; (b) it assumes that the pace of innovation inside the garden will match or exceed the rate of innovation outside; and (c) the assertion of rights over the content within the garden means you have to mediate access and rights if and when those assets flow out of the garden. Twitter takes a different approach. The core of Twitter is a simple transport for the flow of data — the media associated with a post is not placed inline — so Twitter doesn't need to assert rights over it. Example: if I post a picture within Facebook, Facebook asserts ownership rights over that picture and can reuse it as it sees fit. If I leave Facebook, it still has rights to use the image I posted. In contrast, if I post a picture within Twitter, the picture is hosted on whichever service I decided to use; what appears in Twitter is a simple link to that image. I, as the creator of that image, can decide whether I want those rights to be broad or narrow.
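The structural difference behind point #1 can be sketched in a few lines. In this hypothetical Python example (the names are made up), follows are directed edges: a Facebook-style friendship requires both directions, while a Twitter-style follow needs only one.

```python
# Follows as directed edges: (follower, followed).
follows = {("alice", "bob"), ("bob", "alice"), ("alice", "carol")}

def is_mutual(a, b, edges):
    """A symmetric ('friend') tie requires edges in both directions;
    an asymmetric ('follow') tie needs only one."""
    return (a, b) in edges and (b, a) in edges

# Facebook-style friendship: only mutual edges count.
friends_alice = {b for (a, b) in follows
                 if a == "alice" and is_mutual("alice", b, follows)}
# Twitter-style following: any outgoing edge counts; reciprocation optional.
following_alice = {b for (a, b) in follows if a == "alice"}
# friends_alice -> {"bob"}; following_alice -> {"bob", "carol"}
```

The asymmetric model means carol never has to act on, or even notice, alice's follow — which is exactly the low-friction dynamic described above.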

#3. Defined use case vs. open use case. Facebook is a fantastically well-designed set of work-flows or use cases. I arrive on the site and it presents me with a myriad of possible paths I can follow to find people, share and post items, and receive / measure the associated feedback. Yet the paths are defined for the users. If Facebook is the well-organized, pre-planned town, Twitter is more like new urbanism — it's organic, and the paths are formed by the users. Twitter is dead simple, and the associated work-flows aren't defined; I can devise them for myself (@replies, RT, and hashtags all arose out of user behavior rather than a predefined UI. At Fotolog we had a similar set of emergent, user-driven features — i.e., groups formed organically, and over time the company integrated the now-defined work-flow into the system). There are people who will swear Twitter is a communications platform, like email or IM — others say it's micro-blogging — others say it's broadcast — and the answer is that it's all of the above and more. Its work-flows are open, available to be defined by users and developers alike. Form and content are separated in a way that leaves work-flows, or use cases, open to interpretation and needs.

As I write this post, Facebook is rapidly re-inventing itself on all three of the dimensions above. It is changing at a pace that is remarkable for a company with a membership of its size. I think it's changing because Facebook has understood that it can't attempt to control the stream — it needs to turn itself inside out and become part of the web stream. The next couple of years are going to be pretty interesting. Maybe E. M. Forster had it nailed in Howards End: "Only connect! That was the whole of her sermon … Live in fragments no longer."

The streams are open and distributed and context is vital

The streams of data that constitute this now web are open, distributed, often appropriated, sometimes filtered, sometimes curated, but often raw. The streams make up a composite view of communications and media — one that is almost collage-like (see composite media and wholes vs. centers). To varying degrees the streams are open to search / navigation tools, and it's very often long, long tail stuff. Let me run out some data as an example. I pulled a day of bit.ly data — all the bit.ly links that were clicked on May 6th. The 50 most popular links generated only 4.4% (647,538) of the total number of clicks. The top 10 URLs were responsible for roughly half of those 647,538 clicks (2% of the total). 50% of the total clicks (14m) went to links that received 48 clicks or fewer. A full 37% of the links that day received only one click. This is a very, very long and flat tail — it's more like a pancake. I see this as a very healthy data set.
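For readers who want to reproduce this kind of head-vs-tail measurement on their own click logs, here is a rough Python sketch. The data below is synthetic and made up for illustration (it is not the bit.ly data set): a few popular links plus a long tail of one-click links.

```python
def tail_stats(click_counts, top_n=50):
    """Given per-link click counts, measure how head-heavy the distribution
    is: the share of all clicks going to the top_n links, and the fraction
    of links that received exactly one click."""
    counts = sorted(click_counts, reverse=True)
    total = sum(counts)
    head_share = sum(counts[:top_n]) / total
    singleton_share = sum(1 for c in counts if c == 1) / len(counts)
    return head_share, singleton_share

# Synthetic data: 3 popular links and 97 one-click links.
data = [1000, 500, 200] + [1] * 97
head, single = tail_stats(data, top_n=3)
# head -> 1700/1797 ≈ 0.946 (head-heavy); single -> 0.97 (long flat tail)
```

On the real May 6th data the numbers run the other way: the head is tiny (top 50 links at 4.4% of clicks) and the tail is enormous, which is what makes the distribution "more like a pancake."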

Teasing context out of this stream of data is vital. Today context is provided mostly via social interactions and gestures. People send out a message — with some context in the message itself — and then the network picks up from there. The message is often re-tweeted, favorited, liked or re-blogged; it's appropriated, usually with attribution to the creator or the source message; sometimes it's categorized with a tag of some form, and then curation occurs around that tag — and all this time it spins around, picking up velocity and more context as it swirls. Over time, tools will emerge to provide real context to these pile-ups. Semantic extraction services like Calais, Freebase, Zemanta, Glue, Kynetx and Twine will offer windows of context into the stream — as will better trending and search tools. I believe search gets redefined in this world as it collides with navigation — I blogged at length on the subject last winter. And filtering becomes a critical part of this puzzle. FriendFeed is doing fascinating things with filters — allowing you to navigate and search in ways that a year ago could never have been imagined.

Think chunk
Traffic isn't distributed evenly in this new world. All of a sudden, crowds can show up on your site. This breaks with the stream metaphor a little — it's easy to think of flows in the stream as steady — but you have to think in bursts; this is where words like swarm become appropriate. Some data to illustrate this shift. The charts below track the number of users simultaneously on a site — in this case a political blog. You can see on the left that the daily traffic flows are fairly predictable — peaking around 40-60 users on the site on an average day, with peaks around midday. Weekends are slow — the chart tracks Monday to Monday, though Wednesday seems to be the strongest day of the week — at least it was last week. But then take a look at the chart on the right, tracking the same data for the last 30 days. You can see that on four occasions over the last 30 days, all of a sudden the traffic was more than 10x the norm. Digging into these spikes, they were either driven by a pile-up on Twitter, Facebook, or Digg, or by a feature on one of the blog-aggregation sites. What do you do when, out of nowhere, 1,000 people show up on your site?
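A crude way to spot these bursts programmatically is to compare each reading against a trailing baseline and flag anything far above the norm. This Python sketch, on made-up concurrent-user readings, is one such heuristic (it is not how Chartbeat actually works):

```python
def detect_spikes(concurrents, window=7, factor=10):
    """Flag readings more than `factor` times the trailing average of the
    previous `window` readings — a rough proxy for 'a swarm just arrived'."""
    spikes = []
    for i in range(window, len(concurrents)):
        baseline = sum(concurrents[i - window:i]) / window
        if baseline > 0 and concurrents[i] > factor * baseline:
            spikes.append(i)
    return spikes

# Made-up daily peaks for a small blog: ~50 concurrent users, then a pile-up.
readings = [45, 52, 48, 55, 60, 50, 47, 620, 58, 49]
# detect_spikes(readings) -> [7]  (the 620-user burst)
```

Detection is only the first step, of course — the harder question raised above is what to actually do with the swarm once you know it has arrived.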

The other week I was sitting in NY at 14th Street and 9th Avenue with a colleague talking about this stuff. We were across the street from the Apple store, and it struck me that there was a perfect example of a service set up to respond to chunky traffic. If 5,000 people show up at an Apple store in the next 10 minutes, they know what to do. It may not be perfect, but they manage the flow of people in and out of the store, start a line outside, bring water to the people waiting, maybe take names so people can leave and come back. I've experienced all of the above while waiting in line at that store. Apple has figured out how to manage swarms the way a museum or a public event would. Most businesses and web sites have no idea how to do this. Traffic in earlier iterations of the web was more or less smooth, but the future isn't smooth — it's chunky. So what to do when a burst takes place? I have no real idea what's going to emerge here, but cursory thoughts include making sure the author is present to manage comments, and building in a dynamic mechanism to alert the crowd to other related items. Beyond that it's not clear to me, but I think it's a question that will be answered — since users are asking it. Where we are starting at betaworks is making sure the tools are in place to at least find out if a swarm has shown up on your site. The example above was tracked using Chartbeat — a service we developed. We don't know what to do yet — but we do know that the first step is making sure you actually know that the tree fell — in real time.

Where is Clementis’s hat? Where is the history?

I love that quote from Kundera. The activity streams that are emerging online are all these shards — ambient shards of people's lives. How do we map these shards to form and retain a sense of history? Like the hat, objects exist and ebb and flow with or without context. The burden of constructing and making sense of all this information flow is placed, today, mostly on people. In contrast to an authoritarian state eliminating history, today history is disappearing under a deluge of flow, for lack of tools to navigate and provide context about the past. The cacophony of the crowd erases the past and affirms the present. It started with search, and now it has accelerated with the now web. I don't know where it leads, but I almost want a remember button — like the like or favorite button. Something that registers an item as a memory — as a salient fact that I, for one, can draw out of the stream at a later time. It's strangely comforting to know everything is out there, but with little sense of priority or ability to find it, it becomes like a mythical library — it's there but we can't access it.

Unfinished

This media is unfinished; it evolves, it doesn't get finished or completed. Take the two quotes below — both from Brian Eno, but thirteen years apart — they outline some of the boundaries of this aspect of the stream.

In a blinding flash of inspiration, the other day I realized that “interactive” anything is the wrong word. Interactive makes you imagine people sitting with their hands on controls, some kind of gamelike thing. The right word is “unfinished.” Think of cultural products, or art works, or the people who use them even, as being unfinished. Permanently unfinished. We come from a cultural heritage that says things have a “nature,” and that this nature is fixed and describable. We find more and more that this idea is insupportable – the “nature” of something is not by any means singular, and depends on where and when you find it, and what you want it for. The functional identity of things is a product of our interaction with them. And our own identities are products of our interaction with everything else. Now a lot of cultures far more “primitive” than ours take this entirely for granted – surely it is the whole basis of animism that the universe is a living, changing, changeable place. Does this make clearer why I welcome that African thing? It’s not nostalgia or admiration of the exotic – it’s saying, Here is a bundle of ideas that we would do well to learn from. (Eno, Wired interview, 1995)

In an age of digital perfectability, it takes quite a lot of courage to say, “Leave it alone” and, if you do decide to make changes, [it takes] quite a lot of judgment to know at which point you stop. A lot of technology offers you the chance to make everything completely, wonderfully perfect, and thus to take out whatever residue of human life there was in the work to start with. It would be as though someone approached Cezanne and said, “You know, if you used Photoshop you could get rid of all those annoying brush marks and just have really nice, flat color surfaces.” It’s a misunderstanding to think that the traces of human activity — brushstrokes, tuning drift, arrhythmia — are not part of the work. They are the fundamental texture of the work, the fine grain of it. (Eno, Wired interview, 2008)

This media — these messages, these streams — is clearly unfinished and constantly evolving, as this post will likely also evolve as we learn more about the now web and the emerging social distribution networks.

Addendum, some new links

First — thank you to Alley Insider for re-posting the essay, and to TechCrunch and GigaOm for extending the discussion. This piece at its heart is all about re-syndication and appropriation — as Om said, "it's all very meta to see this happen to the essay itself". There is also an article from Nova Spivack, which I read after posting but should have read in advance — he digs deep into the metaphor of the web as a stream. And Fred Wilson and I did a session at the social media bootcamp last week where he talked about shifts in distribution dynamics — he outlines his thoughts about the emerging social stack here. I do wish there were an easy way to thread all the comments from these different sites into the discussion here — the fragmentation is frustrating; the tools need to get smarter and make it easier to collate comments.