Perhaps the dress is neither white and gold nor black and blue. Perhaps the dress is a color for which we lack language and reference—a color beyond that which is known to man. It is a color as vast as space and as timeless as infinity. It is the middle ground between light and shadow, between science and superstition, and it lies between the pit of man's fears and the summit of his knowledge. This is the dimension of color. It is an area which we call the Twilight Zone.

I haven’t updated this in so long, I don’t even know where to begin. I guess the logical place to start is where I left off.

Projects I worked on since I last updated:

Relaunching the Skittles Facebook and Twitter presence

A series of Facebook games for Life Savers

The brand strategy for General Electric’s Healthymagination campaign

Building out MarkNet, an internal collaboration platform for General Electric’s global marketing team

Engineering the software platform that powers The Cassandra Report

Acquiring some 2MM+ players for the Facebook game network GameGround

Leaving New York and moving [back] to San Francisco

Getting back into the ad game

And here we are. I’ve learned a heck of a lot. I’ve gotten a lot more into the engineering side of things. I’ve started playing with node.js for a few interesting projects. I’ve gotten a pretty decent handle on HTML5 and iOS development. I should be pushing some fun stuff to GitHub pretty soon.

Twitter is still something of a mystery to those of us in advertising and marketing. Everyone thinks they need to be on top of it, but no one is completely sure how to use it. Even fewer people have an idea of how to measure whether or not they’re using it effectively. Most of the time brands think about Twitter like this: create an account, start tweeting, and then measure success by looking at how many followers we have. But that doesn’t tell you the whole story. In fact, that tells you almost nothing.

Anyone who’s used Twitter for just a few days will quickly discover that it’s a haven for spammers. But just how bad is the problem? Well, I have a dummy account I created about 18 months ago that has 300 followers, despite the fact that I’ve never sent a single tweet. That should tell you, right off the bat, that looking at the number of followers is something of a useless metric. It has no context. It’s just a number.

Almost all of the data stored about Twitter users and their tweets is public and can be pulled down from the API with relative ease. The Twitter API is easily accessible with cURL. This means that anyone who knows how to use a text editor can start pulling down heaps of data. The rest of this post isn’t about how to use cURL, per se, but rather about thinking through some of the different ways we might use the massive amount of data Twitter makes available to draw insights and set better goals.

The Experiment

To illustrate this, we need a target. I’ve chosen @bbhlabs. @bbhlabs is the Twitter account for BBH Labs, the “marketing skunkworks” division of BBH. I chose this account for two reasons: (1) I admire their work; and (2) I wanted to see what the follower data looked like for an ad agency. Who follows those whose goal it is to encourage consumers to follow others?

At the time of writing this, BBH Labs had a little over 12,500 followers. Using the statuses/followers REST API Method I was able to quickly pull down the information for almost every one of their followers. Information like this:

ID

Name

Username

Location

Profile Bio

Profile Picture

Web URL

Privacy Settings

# of Followers

# of Friends (“following”)

Account Creation Date

# of Favorites

UTC Offset

Time Zone

Per-tweet Geolocation Status

Verified User Status

# of Tweets

And more… this is just what I thought was relevant.

All of this information is public, for almost every follower (unless the account is private). Almost scary, right? It should also be noted that this is but a single API call. There are dozens of different API calls that cover everything from search to lists to retweets. Profile data is a very small subset of the total data available to play with.
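If you want to see what the pull itself looked like, here’s a minimal sketch of paging through the statuses/followers method. The endpoint URL, XML response shape, and cursor semantics are my recollection of the v1 API as it worked at the time, so treat those details as assumptions rather than gospel:

```python
# A sketch of paging through the v1 statuses/followers method.
# Endpoint, XML tags, and cursor behavior are assumptions based on
# the API as it existed at the time of writing.
import time
import urllib.request
import xml.etree.ElementTree as ET

def fetch_followers(screen_name):
    cursor = -1                       # -1 requests the first page
    users = []
    while cursor != 0:                # 0 means no more pages
        url = ("http://api.twitter.com/1/statuses/followers.xml"
               "?screen_name=%s&cursor=%d" % (screen_name, cursor))
        tree = ET.parse(urllib.request.urlopen(url))
        users.extend(tree.iter("user"))
        cursor = int(tree.findtext("next_cursor"))
        time.sleep(2)                 # stay well under the rate limit
    return users
```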

After pulling down all of that information I was left with a massive XML file that needed to be parsed and formatted into a CSV file. If you want to pull follower information for 10k users, expect to be left with an XML file some 500k lines long. While programming knowledge makes parsing XML much easier, it’s not a prerequisite for this kind of analysis. Most of this can be done using cURL, a text editor, and simple functions like VLOOKUP in Microsoft Excel.
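For the programmatically inclined, here’s a minimal sketch of that flattening step. The field names (screen_name, followers_count, and so on) are my assumptions about the v1 user element; edit the list to match whatever your dump actually contains:

```python
# Flatten the follower XML dump into a CSV. FIELDS is assumed from
# the v1 <user> element; adjust it to match your actual XML.
import csv
import xml.etree.ElementTree as ET

FIELDS = ["id", "name", "screen_name", "location", "followers_count",
          "friends_count", "statuses_count", "created_at", "protected"]

root = ET.parse("followers.xml").getroot()
with open("followers.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(FIELDS)
    for user in root.iter("user"):
        writer.writerow([user.findtext(field, default="") for field in FIELDS])
```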

With CSV file in hand, you can open it up in any number of applications and start sorting, slicing, pivoting and filtering the data until you find what you’re looking for. And we’ll get there, but before we start looking at numbers let’s have a look at something a little more visually compelling. What if you wanted to map everyone who follows you? How would you do that? Turns out it’s pretty easy. All you have to do is head on over to one of Google’s lab projects, Fusion Tables.

Fusion Tables is an incredibly powerful tool for statistics, analysis, and visualizations. Once you start to use it, and realize that it has an open API behind it, Microsoft Excel starts to look like a toy. One of the processes that Fusion Tables makes simple is geocoding massive amounts of location information. All you have to do is upload a CSV with a list of locations and Fusion Tables does most of the work for you. This is what allowed me to create the interactive map you see at the top of this post. It also allows you to create a heatmap, like this:

All the major cities are accounted for here, including some interesting finds when you look across the globe. (Note: There are a number of geocoding errors that I’ve not bothered to filter out.)

You might look at this and say, “That’s neat. Who cares?” Well, consider this: one of the key problems with ARGs (alternate reality games) is that you never know who’s actually going to participate. You have to spend buckets in media to ensure that you reach the right people, who want to participate, and then you hope that they do something like go on Twitter and tweet about it. You’re also hoping that they have a lot of followers, so the message gets seen by as many people as possible. But you don’t have to hope. You can use information from Twitter to make recommendations with incredible accuracy. Not city level, but street level.

And here’s more food for thought: Since all of this information is public, you can draw down information on your competitors’ Twitter followers and target billboards, bus stop ads, and other out-of-home media directly at them—with street-level precision.

Strictly By The Numbers

Now back to the spreadsheets. One of the things I like to do when I look at follower data is find out how many “active” users there are. Of the 12,500+ people who follow BBH Labs, how many are spam? How many are inactive? How many people have a network of zero? There are any number of ways to do this, but I like to create filters around real-world usage patterns to get a slightly better idea of how big of an audience you’re actually reaching.

I applied the following four filters to the 12,601 people in my database:

Private: = FALSE

# of Followers: > 100

# of Tweets: > 100

# of Friends: > 10

How many people was I left with? 6,619—or about 53%. And while that might not mean much without a comparison, that’s really pretty good. To be clear: I’m not suggesting that the other 5,982 people following BBH Labs are spammers; just that they probably aren’t as valuable as those who passed our test. And when you think about the filter, it’s really not that strict. All it’s saying is that each follower has at least 100 people following them (50 of which are probably spam), that they’ve been using Twitter long enough to send 100 tweets (even at one a day, you’ll hit 100 in a little over three months), and that they find at least 10 people interesting enough to follow.
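If you’d rather not click through spreadsheet filters by hand, the same four-part test is only a few lines of code. A sketch assuming the column names from the parsing step above (note that the protected flag comes through as text, not a boolean):

```python
# Apply the four "active follower" filters to the CSV from above.
# Column names match the parsing sketch; "protected" is handled as
# text since it arrives as "true"/"false" in the dump.
import pandas as pd

df = pd.read_csv("followers.csv")
protected = df["protected"].astype(str).str.lower() == "true"
active = df[~protected &
            (df["followers_count"] > 100) &
            (df["statuses_count"] > 100) &
            (df["friends_count"] > 10)]
print("%d of %d followers pass the filter (%.0f%%)"
      % (len(active), len(df), 100.0 * len(active) / len(df)))
```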

While looking through the rest of the data I pulled out a few other simple statistics that are interesting to think about:

Average # of followers: 1,746 | Median: 163

Average # of friends: 982 | Median: 206

Average # of tweets: 987 | Median: 247

6% of followers keep their tweets private

9% have per-tweet geolocation enabled

12 followers are “verified”

As you can see by the differences between means and medians, all followers are not created equal.

There are an unlimited number of ways to look at this kind of data and, depending on what you’re looking for, probably a few surprises. My goal when I sat down to write this was not to thoroughly analyze BBH Labs (I’ve already gone too far), just to jot down some thoughts that might help others think beyond the follower count. I hope I’ve succeeded. Let me know in the comments.

The data set for BBH Labs (@bbhlabs) is publicly available, in CSV format, on Google Fusion Tables here. Let me know if you do anything cool with it. :)

DISCLAIMER: I am NOT currently employed, nor have I ever been employed in any capacity, by BBH, BBH Labs, or the Publicis Groupe.

Pictodeck is just what it sounds like — a deck of pictograms. It’s a collection of over 700 vector pictograms taken from four different sets: PICOL, Android Icons, Pictoico, and Freshpixel. I have converted all of these sets into graphical assets that exist within a Keynote deck. No need to open them in a program like Adobe Illustrator and import them individually. All you have to do is open Pictodeck in Keynote and copy and paste or drag them into your own decks. You can even drag the entire series of 720p slides into your decks (although I wouldn’t recommend leaving them there, since Pictodeck is rather large at about 30MB).

I created this because I’ve found myself spending a lot of time using Keynote to tell stories. I like telling stories through creative uses of typography and pictograms. I found myself using PICOL (Pictorial Communication Language) a lot last year and decided to formalize my collection and distribute it in a way that makes the entire library more accessible to those in advertising, marketing, finance — any industry really. If you work with Keynote, Pictodeck is for you.

You may not realize it at first, but Keynote actually runs on a vector-based layout engine. When you drag a vector-based image (Adobe Illustrator, SVG, EPS, etc.) into Keynote, it converts the file into a PDF asset and the vectors are preserved. Just look at them in the Inspector — you’ll notice they all get the filename “droppedImage.pdf”.

If you have no idea what vectors are (or why you should care), I’d encourage you to look up the difference between vectors and bitmaps on Wikipedia. If you just want the short version, it’s this: There are two primary kinds of image files, bitmaps and vectors. Vector graphics can be scaled to any size without a loss in quality; bitmap images cannot. You know how sometimes you find an image and try to make it the entire size of the canvas, only to find out that it’s terribly blurry and pixelated? That’s because it’s a bitmap image, and bitmaps can’t be rescaled without a loss in image quality.

In addition to a massive collection of vector pictograms, I’ve also included a collection of 32x32 bitmap icons for popular social networking sites created by Komodo Media.

You can download Pictodeck v1.0 for Keynote ’09 here and a package for Keynote ’08 here. They are both ZIP files, now hosted on Amazon S3!

☞ Pictodeck is made possible only because the original authors have graciously chosen to license their work under Creative Commons (in one form or another).

☞ I claim no copyrights of my own and only ask that you respect theirs.

Pictodeck exists to help others tell their stories visually through Keynote. I hope you find it useful. Feel free to leave feedback in the comments or shoot me a note if you know of any other pictograms you feel might be worth including in a future version, or if you want to share something you’ve created with Pictodeck. You can also contact me on Twitter @ralphthemagi.

I’m already planning the next version which will feature a mirror set of pictograms with inverted colors so that you can make better use of them on color backgrounds. In the meantime, if you want to use the pictograms on a black background, consider matting them on top of a white rounded square.

This was something I spent quite a bit of time on. You may have seen pieces of it here. It’s [now] a little book called Thresher. It’s the first in a series of thought leadership and innovation workshops for Freestyle Interactive.

Summary:

“Thresher is a quarterly publication by Freestyle Interactive. It’s an experiment in what we are calling ‘reverse aggregation.’ Instead of a standard newsletter that gets lost in your inbox, with a dozen disconnected links, Thresher is a collection of articles that take a deeper look at a single topic, trend, or technology.”

This is a deck I built in collaboration with Freestyle Interactive, Heat, and Wieden + Kennedy on trends and innovation in mobile marketing. On a completely unrelated note, I don’t think I’ll ever get over the jealousy I feel towards W+K for having a two-letter domain name.

The purpose of this deck was to delve into advertising on the third screen (mobile) in this new fifth dimension—where data and information exist in a cloud all around you. This fifth dimension isn’t quite tangible without a device, but if you have an iPhone you can look at a restaurant through your phone and see every review people have left on Yelp, or all the tweets tagged with that location.

I made extensive use of two things: color and iconography. All of the icons used in the deck are taken from the open source PICOL icon set. I tried to keep the text to a bare minimum. There are also quite a few videos in the deck, which makes the Slideshare version feel a little static; but you get the idea.

You can download this presentation in PDF format through Slideshare here.

Note: The image above is massive at 13536x1584, and too big for Tumblr. You can find a high-resolution JPEG here.

I have colloquially chosen to call this The Skypeline.

I can’t very well cover up the client in this case. This was for a pitch I worked on. (For Skype, in case you were still wondering.) I decided to map the history and evolution of Skype as both a brand and a product.

To do this I used the Wayback Machine to pull creative and copy from Skype’s website dating back to their launch in August 2003. It’s interesting to see how Skype developed over the years, being that it’s been mostly a “free” product and service. Several of the initiatives to monetize their platform have been killed.

Skype’s business model is something of a paradox. They want more users to sign up for Skype and use their paid services, but every time a new person signs up, that’s one less person you need the paid service to call (since Skype-to-Skype calls are free). It’s not a problem most businesses have. If everyone uses Skype, Skype makes no money. The optimal scenario for them is one where every customer has exactly half of his or her contacts on Skype, so that he or she is using it all the time, but still needs to buy into the paid service to call the other half that isn’t using it.

They would have made an interesting client. I hope to work with them in the future.

Note: Due to contractual obligations, I have decided to remove any brand-identifying information.

The image above is a complete mapping of a major brand’s social media presence on the web. This was an exercise I started doing for all the brands I’ve worked with in order to better understand how the consumer uses social media. Everyone likes to talk about how social media is going to save their business, but I don’t see a lot of examples of people going out there and actually finding out how people interact with the content that brands are seeding out to them.

The colors indicate the specific property in question. Everything having to do with Facebook, for example, is in red. Simply mapping out properties and getting an idea of their size is a good start, but it doesn’t really tell you how the consumer ends up there… or where they go after. The consumer journey remains a mystery. That’s where the madness of arrows comes into play.

The arrows indicate trafficking information. They show where people end up going given any particular starting point. This is the kind of information that can be invaluable when developing strategies for content distribution, and having them actually mapped out on a 2D-plane can make that process much easier. This can also help lead to more optimal linking structures for large brands that have a distributed presence over a number of different sites and networks.

You may notice that the missing piece here is traffic coming from search. It was too difficult to map on a 2D-plane with everything else going on, but is something that was considered as part of the content distribution strategy for this brand.

One of the first projects I worked on at Freestyle Interactive was for the Sims 3 launch earlier this year.

The client wanted a “big Facebook idea” for the launch of the Sims 3, but Facebook doesn’t allow things like page takeovers, skins, or expandable ad units. The only thing Facebook would offer was the use of their new “Engagement Ads,” and there’s just nothing exciting about that. It’s the exact opposite of a big idea.

I suggested that, instead, we should use Facebook as the destination. We could create a site that existed somewhere else and would act as a proxy to a user’s web experience. And that’s exactly what we ended up doing. We created the Sim Sidekick, an in-browser overlay that users could take to any site. The widget would react to the page by scraping it for keywords and returning an appropriate animation based on the underlying content. The result was a widget that people spent over four minutes with, on average. Skittles, of course, did the same kind of thing when they relaunched Skittles.com back in March 2009.
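To make the page-reaction mechanism concrete, here’s a toy version of the idea. The shipped widget was built in Flash; the keyword sets and animation names below are invented purely for illustration:

```python
# Toy version of the Sidekick's page-reaction logic: scan the page's
# visible text for keywords and pick a matching animation.
# Keyword sets and animation names are made up for illustration.
import re

ANIMATIONS = {
    "music": {"band", "album", "concert", "playlist"},
    "sports": {"game", "score", "team", "league"},
    "food": {"recipe", "restaurant", "menu", "delicious"},
}

def pick_animation(page_text, default="idle"):
    words = set(re.findall(r"[a-z']+", page_text.lower()))
    scores = {name: len(words & keywords)
              for name, keywords in ANIMATIONS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] else default

print(pick_animation("Check out the new album from the band..."))  # music
```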

It was new territory for Freestyle though, and the client loved it. As good as it was, it could have been much better. It could have been a more seamless experience for the user, and had much more interaction with the actual page. Instead of using a simple Flash overlay, we could have injected bits of Flash into the page itself with JavaScript, similar to the kinds of effects that can be achieved with the Greasemonkey plugin for Firefox. Imagine if you went to Facebook only to have a character literally walk across the page and appear to manipulate specific elements on the page. Since Facebook’s layout is so structured, this kind of effect is completely possible.

I foresee more of these kinds of experiences in the future, where brands can create proxies that allow you to view the web through the brand’s perspective. Well, at least the interesting brands. I’m not sure this is a concept I’d want to pitch to Tampax.

This video, entitled Pwning Noobs, is a short consumer insights video I made to inspire our creative team. This was specifically for some pitch work and the client was a major game publisher. I went in one weekend and cobbled this together in iMovie and Final Cut Express. It’s made entirely from other pieces of user-generated content. The goal was to create a video that shed some light on competitive gaming culture; PC gamers in particular.

The video inspired a good deal of the creative that made it into the deck, and the video itself was how we opened the pitch. The client loved it and actually thought that it was a piece of creative.

About a year ago I was hired by a word-of-mouth marketing agency to work on a social media pilot project for a client. This agency had typically done event marketing, and this project was quite different. It was, in effect, a social media research project. This was about using social media in a fundamentally different way. It wasn’t about “brand monitoring” or using social media as some kind of new broadcast medium. This was about using social media to develop real, accurate insights and have an effect on how a company thinks about their industry, their consumer, and their marketing. This was, without a doubt, one of the most interesting projects I’ve ever worked on.

The client asked us how they should be allocating brand resources and budgets to create the most effective word-of-mouth campaigns. They wanted us to find new opportunities for them. Today, almost any effective word-of-mouth campaign will reach the Internet, so it stands to reason that one could make recommendations based on learning about the larger marketplace, where the brand fits within it, and how the consumer interacts with both the brand and the market.

The project consisted of four key elements:

A survey and audience segmentation.

Scraping tons of data from different social media outlets.

Regression analysis involving social media and sales data.

Alignment of brand messaging with particular retail locations.

I’m going to talk a little bit about each part of the project, and then talk about the outputs and results. Due to contractual obligations, I can’t mention clients or agencies by name… although a truly clever person (with way too much time on their hands) could piece it all together.

1. Survey and audience segmentation

Advertising agencies traditionally learn about their audience by using third-party research firms or hosting small focus groups. In this case, the client needed to learn about an area that was far too large to develop any kind of actionable insights through traditional methodology (the entire State of California, and its nearly 25 million persons over the age of 21). We also had a very small budget.

Instead of using a third-party firm to field surveys or fly to all the major cities in California and host focus groups, we did the entire audience segmentation through a combination of Facebook and Craigslist. Not exactly the most random sample, but it turned out to be much better than expected. We ended up hyper-targeting six major cities within the State, and the demographic information we got back from our survey was very close, proportionally, to what the U.S. Census reports for each of those cities. There were some problems with under-representation from certain demographics (the Hispanic community in particular), but not in all cities. Cities like San Francisco, Los Angeles and San Diego were almost perfectly represented. There were some problems with cities like Fresno, but that was to be expected, and we kept that top of mind throughout the rest of the project.

About 85% of the responses to our survey came from Facebook. We were able to get over 2,000 responses, with very little incentive. And we did it for less than half of what we had originally budgeted the survey for. How?

I spent about a week copy testing ads to find the most effective copy and creative. I was able to get the average CPC insanely low, despite the fairly modest incentive—a chance to win an iPod. Facebook also allowed me to cut my target into little chunks by age and gender. This is what allowed us to do such an accurate survey. I calculated how many men age 21-24 I needed to take the survey to proportionately represent the population, and ran the survey against them until I hit my quota. Then I filled that quota for women. Then I did it again for the next age chunk.
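The quota math itself is trivial. Here’s a toy sketch with made-up population shares (the real proportions came from Census data for each city):

```python
# Toy version of the quota math: split the target sample across
# age/gender cells in proportion to census shares. The shares below
# are hypothetical placeholders, not real Census numbers.
TARGET = 2000  # total survey responses wanted

census_share = {
    ("male", "21-24"): 0.045, ("female", "21-24"): 0.043,
    ("male", "25-34"): 0.088, ("female", "25-34"): 0.086,
    # ...one entry per age/gender cell
}

total = sum(census_share.values())
quotas = {cell: round(TARGET * share / total)
          for cell, share in census_share.items()}
# Run each Facebook ad segment until its cell hits its quota.
print(quotas)
```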

At first I thought this methodology would give me extremely skewed results, but I started looking at the responses as they came in and everything was matching up. I had the right percentage of people who self-identified as Asian. I had almost the exact amount of people in the $35-$50k income bracket in Los Angeles that I would expect to have. I had the right percentage of married people.

Once our survey was complete, it was very easy to look at people and put them into buckets. We actually went about it in a very simple way—segmentation by level of education, going off an insight we had gathered that people who go out drinking together tend to drink with individuals of a similar education, and thus, income level.

2. Scraping data from different social media outlets

There are dozens of “social media monitoring” solutions out there, but they aren’t a perfect science. At best, the good ones return data sets that you can use to create moderately interesting pie charts. You can learn about how the Internet perceives your brand, but you can’t use them to learn about the larger market. We needed something a little more custom, and a lot more structured, so we built it ourselves. We weren’t interested in a single brand. What we needed was a complete database of retail locations. In our case, a database of bars, restaurants, and venues.

We created a database of over 1,000 on-premise locations across California. We pulled from APIs like Yelp and Yahoo Upcoming to get an idea of ratings, reviews, and the number of events organized at different locations. We pulled all the other description, address, location, and contact information. Then we went to Google and started pulling the number of search results each location returned, to get a better sense of how well they index and how many people might be talking about them.

But we weren’t done. We needed to know which locations had a license to serve liquor, so we had to merge that database (which was available) with our own. Then we did the same with the client’s last two years of sales data for each location. Now we had a real data set.
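The merge itself was straightforward in principle. A sketch of the join, with hypothetical file and column names (real-world venue matching needed more fuzz than a simple normalized key):

```python
# Sketch of folding the liquor-license list into the venue database.
# File and column names are hypothetical; actual name matching was
# messier than a clean normalized key.
import pandas as pd

venues = pd.read_csv("venues.csv")            # Yelp / Upcoming / Google pulls
licenses = pd.read_csv("liquor_licenses.csv")  # state license records

def make_key(df):
    # normalize name + city into a crude join key
    return (df["name"].str.lower().str.strip() + "|" +
            df["city"].str.lower().str.strip())

venues["key"] = make_key(venues)
licenses["key"] = make_key(licenses)
merged = venues.merge(licenses[["key", "license_type"]],
                      on="key", how="left")
```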

3. Regression analysis

Now that we had a real data set, we could do some real analysis. We were able to use regression analysis to determine what kind of impact social media has on sales. We found that certain variables are very well correlated with sales, and others not so much. The total number of reviews on Yelp, for example, doesn’t really matter. The rating, however, does.

The reason we actually needed to run a regression was that we wanted to do three things:

Find a way to weight different social media indicators so that we could create a composite score in order to rank each venue.

Be able to justify why the client needs to seriously look at certain highly social locations where they have no distribution.

Take a good look at our outliers and figure out what’s going on.

Once this was complete, we were able to create a composite score for each venue, rank them, and then take a deeper dive into the top 100.
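To make the weighting step concrete, here’s a minimal sketch: fit an ordinary least squares regression of sales on the social indicators, then reuse the fitted coefficients as the weights behind the composite score. The column names are hypothetical and the real model was more involved:

```python
# Minimal sketch of the composite-score step: OLS of sales on the
# social indicators, coefficients reused as weights. Column names
# are hypothetical placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("venues_with_sales.csv")
features = ["yelp_rating", "yelp_review_count",
            "upcoming_events", "google_results"]

X = df[features].to_numpy(dtype=float)
y = df["sales"].to_numpy(dtype=float)

X1 = np.column_stack([np.ones(len(X)), X])   # add an intercept column
coef, _, _, _ = np.linalg.lstsq(X1, y, rcond=None)

df["composite"] = X @ coef[1:]               # weighted social score
top100 = df.nlargest(100, "composite")       # candidates for the deep dive
```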

4. Alignment of brand messaging

This was actually the most precarious part of the entire project. We wanted to align each of the top venues with a particular persona and a particular brand. We had developed our own personas as a result of the segmentation we had done, and used them in this alignment because we actually had an idea of who the consumer was and what their preferences were.

However, the client did not like our personas. They had developed personas of their own that were rooted in emotional attachment to the brand and how their brand is represented as a lifestyle product. Our personas, on the other hand, were rooted in our survey. We would make statements like, “Your consumer is twice as likely to smoke as the average California male.” The client would rebut with a, “I don’t know about that. We’ve developed our own personas, and our brand isn’t really aligned with smoking. Smoking is disgusting.”

The real problem is that the client didn’t like the consumer they actually had. The personas they had developed had more to do with aligning their brand image with a certain kind of person than with actually understanding who it is that’s out there consuming their products.

But I digress. This was a minor hiccup in communication in an otherwise good relationship. Hopefully the client will be more willing to take another look at the data and analysis we provided them, and reconsider it.

The rest of the brand messaging alignment consisted of us looking at the top locations in our database by hand, and doing a qualitative assessment of each to determine how well a particular location might fit a particular brand-sponsored word-of-mouth program. We used Google Docs to crowdsource this portion of the project, which allowed us to complete it in less than a day, when it otherwise might have taken a week.

In terms of specific deliverables, we provided the client with three things:

A 120-page book showcasing the top bars and venues by social media composite, in six major markets in California, complete with actual consumer reviews and sentiment.

A sortable database with over 1,000 locations and a ton of query-able information attached to each location.

A KML file which allowed the client to look at the database overlaid onto a real-world landscape using Google Maps or Google Earth.

I thought this was a remarkable project and an interesting way to use social media beyond just feeding an RSS feed into a Twitter stream. This is the kind of project I’d love to work on again. In fact, I’d love to build a custom, real-time engine that can track and analyze these kinds of market trends. It would be incredible—for a beer, wine, or spirits brand—to actually be able to see what people are saying about programs as they happen, and to better measure just how effective (or ineffective) word-of-mouth programs can be.

A few years ago I tried to start a company. The name of the startup was Anatomy Ads. You can see the remnants of a functional prototype at anatomyads.com, if you wish. The purpose of this company was to find a new way to monetize new media.

The idea was to create a truly social advertising network around the idea of “variable CPM.” You would pay x dollars for an ad, and the number of impressions y you’d receive would vary based on how many other people were buying into that space, and at what price. If people could enter an ad unit at whatever price they wanted, what would happen? Would you have hundreds of people offering small increments for hot properties, or a few big players trying to dominate the pool? How would people act if they didn’t know how many impressions they’d end up getting? Would you ever reach the equilibrium price? These were the kinds of questions I wanted to answer.

Unfortunately, I never really found satisfactory answers to the questions I sought. At least, not yet. This post is about how it worked, what went wrong, and how I still very much intend to find the answers to these questions.

To clarify, the system worked like this:

A person who creates content (say, a blogger) would sign up and post the widget on as many (or few) of their sites as they liked. The more sites they posted the widget on, the higher the number of aggregate impressions they’d generate, and the greater the price premium they’d be able to command.

Anyone (e.g. a brand, another blogger, your best friend) could come in and “sponsor” your content through the widget. They could pay whatever price they wanted. The minimum (for the sake of transaction costs) was $1. There was no maximum. A sponsorship was good for 30 days. All of the sponsorships within any given 30-day period were pooled together.

All ad units were 125×125px square buttons. The reasoning was that even a normal person could come up with a piece of 125×125 creative relatively easily by using an avatar or a simple logo.

The widget itself displayed four squares at any given time (horizontal, vertical, or square). The algorithm on the backend would determine what ads to display based on how many people were in that ad pool, and how much each person had paid.

We (Anatomy Ads, the name of my company that built this platform) did not take a cut of any sponsorships. 100% of sponsorships would go directly from the sponsor to the content creator. It was our goal to absorb all the transaction costs. We were going to monetize the business by: (1) selling remnant inventory; and (2) allowing larger brands and advertisers the ability to run traditional ads from time to time in a space that would be much more engaging.

The system operated in near-real time and it was actually quite fun to play with. You could see how many people were participating in any given pool and adjust your sponsorship accordingly. We did get a few hundred people to try it out and start playing with it. We would fill the system with fake money and let people spend it like poker chips, trying to feel out at what point someone would leave a particular sponsorship pool and look for another one. And if you were the first person in a pool, you might spend $1 and expect to get 10,000 impressions at the time of sponsorship, only to find out that 10 other people entered that pool by the end of the day, and now you’d only be getting about 1,000 impressions.
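For the curious, the pool math works out like this, assuming impressions were split pro rata by payment (which is consistent with the 10,000-to-roughly-1,000 example above):

```python
# The pool math, assuming impressions split pro rata by payment.
def allocate(pool_impressions, sponsorships):
    """sponsorships maps each sponsor to dollars paid this 30-day window."""
    total = sum(sponsorships.values())
    return {sponsor: int(pool_impressions * paid / total)
            for sponsor, paid in sponsorships.items()}

print(allocate(10000, {"you": 1}))
# alone in the pool: {'you': 10000}
print(allocate(10000, dict([("you", 1)] + [("s%d" % i, 1) for i in range(10)])))
# ten more $1 sponsors arrive and your share drops to ~909
```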

I still feel that it’s a great idea, but our particular implementation had a few critical flaws:

It should be obvious by now — the system was too complicated. It would have made a better thesis than a business.

We didn’t have an embeddable Flash widget (although we did start working on one). As such, our unit was limited to sites, social networks, and profiles that support JavaScript. (i.e. no Facebook, MySpace, etc.)

We deviated away from IAB standard units. Huge mistake.

We forced people to create a separate login with us. This made everything more click-heavy than it should have been to encourage the kind of behavior we wanted. We should have used more open APIs. (Although, to our credit, at the time things like Facebook Connect and Twitter OAuth didn’t even exist.)

We didn’t have the capital or clout to compete with our closest competitor, the now-defunct TipJoy.

I tried to bootstrap the company on personal credit, at the height of the worst credit crisis the world has ever seen.

I say we, but as The Dude would say, “The Royal 'we’! You know, the editorial…” While there were a few other people involved in this project, I was the one responsible for its failure as a business.

But I’m not done with this. I have mothballed it for now, but I will return to it at some point in the not-too-distant future. I believe that a system like this can work. It just needs to be simpler. A lot simpler. I also believe that this would make for an unprecedentedly powerful platform for non-profits and testimonials, especially if a future version of this system uses Facebook Connect and Twitter OAuth to actually generate the copy and creative that forms the social ads of the future.