Nova Spivack - Minding the Planet
http://www.novaspivack.com
The Future of the Web, the Nature of Reality, and the Global Brain

Why Twitter’s Engagement Has Fallen
Wed, 21 Jan 2015
http://www.novaspivack.com/uncategorized/why-twitters-engagement-has-fallen

I have been thinking about Twitter for many years. One of the interesting trends that many of us who share an interest in social networks have been tracking is the decline in engagement on Twitter.

Indeed, this decline is evident not only in Twitter’s own metrics and reporting, but also to anyone who has been an active user of Twitter since the early days of the service.

Response Per Message Has Declined

In my own experience, Twitter used to be very different from what it is today. It used to be more like an online community. In the early days of Twitter, when I Tweeted, I would get a lot of retweets, replies, and new followers.

Today, however, when I Tweet, I notice that the number of retweets, replies and new followers per message is much lower than in the early days. It’s not that my content has changed (it really hasn’t); it’s that the way people use Twitter has changed.

Twitter’s Social Design Doesn’t Scale

In thinking about why this change has occurred, I have concluded that it is mainly a symptom of Twitter’s success in acquiring users. The social network design that underlies Twitter does not scale to a large audience.

The user-interface and interaction design of the Twitter Web site and the Twitter mobile app are also, for the most part, the same as they were in the early days. But the way people use Twitter, and the volume of content on Twitter, have outgrown these paradigms.

As Twitter has scaled over the years, each user has gradually followed more people on average. This has led to social graph saturation — there is a huge amount of social overlap in the graph, meaning that people are likely to receive the same Tweet or news item many times, from multiple people they follow. This leads to a lot of noise and redundancy in the content stream.
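The saturation effect can be made concrete with a toy Monte Carlo sketch. All the numbers below are invented for illustration: a network of 10,000 accounts, 2% of which share a given item, and a reader who follows a varying number of them at random.

```python
import random

def duplicate_rate(num_users, follows_per_user, sharers, trials=1000):
    """Estimate the chance a user receives the same item more than once,
    when the first `sharers` account ids in the network all share it."""
    dupes = 0
    for _ in range(trials):
        followees = random.sample(range(num_users), follows_per_user)
        hits = sum(1 for f in followees if f < sharers)
        if hits > 1:
            dupes += 1
    return dupes / trials

random.seed(42)
# As average follow counts grow, duplicate deliveries become the norm:
for follows in (50, 200, 1000):
    print(follows, duplicate_rate(10_000, follows, sharers=200))
```

With 50 follows, seeing the same item twice is the exception; with 1,000 follows it is near-certain, which is exactly the redundancy described above.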

Social Graph Overlap Makes Discovery Harder

In addition, automated bots and content sites have begun to post more content to Twitter per unit of time. The frequency of posting has increased, and it continues to grow. But this is not necessarily good. More content also means more information overload.

These trends have collided, leading to a situation where the average daily number of messages that a typical Twitter user receives in their home timeline has grown dramatically.

Because of this growth in timeline message volume, Twitter has become virtually unusable for discovery. Nobody can possibly keep up with all the messages in their home timeline (even an hour’s worth, let alone a day’s).
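A quick back-of-the-envelope illustration of that volume, using invented but plausible figures rather than Twitter’s actual numbers:

```python
# Illustrative assumptions, not Twitter's actual numbers:
followed_accounts = 500
tweets_per_account_per_day = 10
seconds_to_scan_one_tweet = 2

daily_tweets = followed_accounts * tweets_per_account_per_day    # 5,000 tweets/day
hours_to_read = daily_tweets * seconds_to_scan_one_tweet / 3600  # roughly 2.8 hours
print(daily_tweets, round(hours_to_read, 1))
```

Even at a two-second glance per message, merely scanning a 500-follow timeline would consume nearly three hours a day.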

A Less Efficient Publishing Channel

Secondly, Twitter has become less and less effective for publishing — at least if you want attention for what you post. The probability that anyone will see or engage with anything you publish on Twitter seems to have declined dramatically (and this probability appears to fall off to near zero within an hour).
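That fall-off can be pictured as exponential decay. The 15-minute half-life below is a guess, not a measured figure, but the shape matches the experience described above:

```python
def visibility(minutes, half_life_minutes=15):
    """Fraction of a Tweet's initial audience reach remaining after
    `minutes`, assuming exponential decay with the given half-life."""
    return 0.5 ** (minutes / half_life_minutes)

# Under a 15-minute half-life, a Tweet is effectively gone within the hour:
for m in (0, 15, 30, 60):
    print(m, round(visibility(m), 3))
```

After four half-lives, barely 6% of the original visibility remains, consistent with the claim that a Tweet’s window of attention is under an hour.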

The problem is that the follower graph in Twitter has reached a saturation point where it is almost irrelevant — following people has no benefit over not following them — the information overload in either case is just overwhelming.

Filter Failure Leads to Social Overload

When you follow hundreds to thousands of people and outlets, you get too many Tweets. Too much of it is irrelevant. It moves too fast. It’s simply unmanageable. There is no filter on the firehose anymore. The only solution is to ignore it all. And that is what most people seem to be doing.

The filter used to be who we chose to follow, but that is no longer effective, because even within that set there is just too much content coming into home timelines. As a result, in my own case at least, I almost never look at my home timeline on Twitter anymore.

Solutions Waiting in the Wings

Of course there are various ways Twitter could try to solve this. For example: automatically ranking and prioritizing Tweets based on popularity, on how relevant and interesting they might be to me, or on some other metric (like how much someone paid for my attention).
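As an illustration, a ranking of this kind could blend those three signals into a single score. The weights, the popularity squashing, and the `Tweet` fields are all invented for the sketch; nothing here reflects how Twitter actually ranks:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    retweets: int
    relevance: float      # 0..1, e.g. from a model of my interests
    sponsored_bid: float  # dollars paid for my attention, if any

def score(t, w_pop=0.4, w_rel=0.5, w_paid=0.1):
    """Blend popularity, personal relevance, and paid placement into one
    ranking score. Weights and squashing are arbitrary assumptions."""
    popularity = t.retweets / (t.retweets + 100)  # squash counts into 0..1
    return w_pop * popularity + w_rel * t.relevance + w_paid * min(t.sponsored_bid, 1.0)

timeline = [
    Tweet("breaking news", 900, 0.2, 0.0),
    Tweet("niche post I care about", 5, 0.9, 0.0),
    Tweet("ad", 0, 0.1, 1.0),
]
ranked = sorted(timeline, key=score, reverse=True)
print([t.text for t in ranked])
```

With these weights, a highly relevant niche post can outrank a merely popular one, which is the point of personalized prioritization.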

Any or all of these approaches in combination could help improve timeline signal-to-noise in Twitter. But so far I haven’t seen anything that actually solves the problem.

“Fire and Forget” Behaviors

Meanwhile, because of the declining signal-to-noise ratio, most people are using Twitter in a new mode. Whereas in the early days it was truly a conversational medium where people really paid attention to people they followed and engaged in dialogue with them, today it is more of a “fire-and-forget” medium where people simply post things into the aether, hoping that someone will see and at least Retweet them (which happens less and less).

Why is this happening? In a crowded room where everyone is shouting, the only way to be heard, even when you are talking to just a few people, is to shout even louder. And that is what I see happening on Twitter. More people and more publishers, posting more stuff, more often, in order to hopefully get noticed.

This is a self-amplifying feedback loop that eventually results in total information overload. If Twitter doesn’t solve it, their engagement will continue to fall. And as long as Twitter relies on advertising dollars from eyeballs, this is a serious problem for their business model.

The Third Party App Gap

Unfortunately, the decision made years ago to stop all third-party innovation around the Twitter public API and eliminate third-party Twitter client apps has made this situation worse, not better.

Although closing down the third-party Twitter app ecosystem gave Twitter more control over the advertising dollars on their content, it eliminated many apps and services that were actually helping to filter and personalize Twitter content. Ironically, those same apps were helping to sustain and grow engagement on Twitter.

Twitter has yet to fill this gap with their own apps and services — none of which currently solve the engagement and signal-to-noise problem effectively. But the potential is there.

It’s a bit of a mystery to me why Twitter has not made solving this its top priority. There has been relatively little innovation or improvement to their core apps and services for many years now. Meanwhile Twitter has been acquiring companies that have little relevance to solving this problem.

Twitter Pivots From The Inside Out

It appears to me that Twitter may have shifted their strategy — they may have given up on improving internal engagement, and begun to just accept that they are more of a fire-and-forget medium going forward. In that reality, monetization strategies shift from internal to external opportunities.

For example, even if people no longer engage at all with each other inside of Twitter, the fact that they post teaches Twitter a lot about their interests, and this can be used to sell re-targeting and advertising intelligence outside of Twitter. In short, Twitter is sitting on an incredibly valuable personalization graph that they can monetize outside of Twitter.

It’s Not a Social Network, It’s An Ad Network!

Another way to monetize the content in Twitter, without increasing engagement, is to sell ads on manually or automatically curated subsets of the content, outside of Twitter. And we see this happening with Twitter’s recent move to enable their ads to run on their content in third-party sites.

But We Still Need a Social Network…

I continue to hope that Twitter will solve this with their own apps — with a new consumer experience designed for the reality of their much larger audience and super-saturated follower graph. I truly believe the world really needs Twitter, or something like it.

However, as the startup economy continues to show us, if Twitter does not solve it, someone or something else surely will.

Venturebeat Article on the Venture Production Studio Model
Mon, 19 Jan 2015
http://www.novaspivack.com/web-3-0/venturebeat-article-on-the-venture-production-studio-model

Interesting article in Venturebeat about the emerging Venture Production Studio model, which I wrote about in 2011 here on this blog.
Bottlenose Ranked as #13 in Top Startups in LA Listing
Tue, 06 Jan 2015
http://www.novaspivack.com/uncategorized/bottlenose-ranked-as-13-in-top-startups-in-la-listing

I’m very pleased to see Bottlenose ranked at #13 in the Top 100 Startups in LA listing this year. We’re making progress!
2014: A Turning Point for the Semantic Web
Tue, 23 Dec 2014
http://www.novaspivack.com/uncategorized/2014-a-turning-point-for-the-semantic-web

Read my article in Semanticweb.com about the significance of 2014 in Semantic Web history.

Google is moving away from hand-made ontologies — they were never a fan of them. From the early days, Google’s philosophy has been biased towards big data over manually constructed knowledge. The end of Freebase, and the rise of Knowledge Vault, are just examples of this bias. However, Schema.org’s impressive growth and adoption can’t be ignored either, and the jury is still out as to whether decentralized ecosystems can ultimately out-scale more centralized data-mining approaches like Knowledge Vault to reach Semantic Web dominance. Although Freebase is being handed off, it is not necessarily over — it is going into the Wikidata project — which could be an increasingly important repository of open knowledge in the future. The war for the Semantic Web is not over.

I Have Joined the I-COM Data Creativity Awards Board
Tue, 18 Nov 2014
http://www.novaspivack.com/uncategorized/i-have-joined-the-data-creativity-awards-board-for-i-com

I am pleased to announce that I have been added to the Board for the Data Creativity Awards, by I-COM, The Global Forum for Marketing Data and Measurement. Details here.
It’s Time for an Open Standard for Cards
Sun, 09 Nov 2014
http://www.novaspivack.com/uncategorized/its-time-for-an-open-standard-for-cards-3

Cards are fast becoming the hot new design paradigm for mobile apps, but their importance goes far beyond mobile. Cards are modular, bite-sized content containers designed for easy consumption and interaction on small screens, but they are also a new metaphor for user-interaction that is spreading across all manner of other apps and content.

The concept of cards emerged from the stream — the short content notifications layer of the Internet — which has been evolving since the early days of RSS, Atom and social media.

The Next Step for Intelligent Virtual Assistants
Tue, 02 Sep 2014
http://www.novaspivack.com/web-3-0/the-next-step-for-intelligent-virtual-assistants

When we talk about the future of artificial intelligence (AI), the discussion often focuses on the advancements and capabilities of the technology, or even the risks and opportunities inherent in the potential cultural implications. What we frequently overlook, however, is the future of AI as a business.

IBM Watson’s recent acquisition and deployment of Cognea signals an important shift in the AI and intelligent virtual assistant (IVA) market, and offers an indication both of the potential of AI as a business and of the areas where the market still needs development.

The AI business is about to be transformed by consolidation. Consolidation carries real risks, but it is generally a sign of technological maturation. And it’s about time, as AI is no longer simply a side project, or an R&D euphemism. AI is finally center stage.

Bottlenose Nerve Center 2.0 Released: Milestone for Real-Time Big Data Analytics
http://www.novaspivack.com/technology/bottlenose-nerve-center-2-0-released-milestone-for-real-time-big-data-analytics

Think about it for a moment: 3 billion messages is several times more data volume than the entire daily Twitter firehose, and we’re analyzing this much data every single hour, continuously. This level of real-time big data analytics cannot be done today with Hadoop — Hadoop is simply not capable of doing huge data aggregations this fast — so under the hood we’re using newer technologies, like ElasticSearch (our team actually contributes to the ElasticSearch codebase) and Cassandra. We’re now analyzing in under a second what would take about an hour with Hadoop.
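For readers curious what a sub-second rollup looks like in practice, here is a sketch of the kind of ElasticSearch aggregation request that computes counts inside the search engine rather than in a batch MapReduce pass. The index fields (`timestamp`, `topic`) are hypothetical, not Bottlenose’s actual schema:

```python
import json

# Aggregations run inside the search engine, so the rollup returns in
# milliseconds. "size": 0 skips the raw hits and returns only the
# aggregate buckets: message counts per minute, with top topics in each.
query = {
    "size": 0,
    "query": {"range": {"timestamp": {"gte": "now-1h"}}},
    "aggs": {
        "per_minute": {
            "date_histogram": {"field": "timestamp", "interval": "minute"},
            "aggs": {
                "top_topics": {"terms": {"field": "topic", "size": 10}}
            },
        }
    },
}
print(json.dumps(query, indent=2))
```

The same rollup in Hadoop would require reading the raw messages off disk and shuffling them through a full MapReduce job, which is where the hour-versus-second gap comes from.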

It’s an impressive technical accomplishment and I’m very proud of the incredibly talented engineering team that built this. I just want to give a shout out to my amazing co-founder Dominiek ter Heide and our product and engineering team for their work. This level of real-time analytics is truly game-changing.

My Thoughts on the Future of Artificial Intelligence (Live Panel Discussion)
Thu, 01 May 2014
http://www.novaspivack.com/science/my-thoughts-on-the-future-of-artificial-intelligence-live-panel-discussion

I had the honor of participating in a panel on the future of AI with a group of industry luminaries, led by Kevin Kelly of WIRED Magazine.

Watch the discussion here.

Bottlenose Announces Live Trend Intelligence for TV, Radio and Social
Tue, 25 Mar 2014
http://www.novaspivack.com/technology/bottlenose-announces-live-trend-intelligence-for-tv-radio-and-social

My venture, Bottlenose, has a big announcement today. We’re unveiling the first live trend intelligence system to provide analytics against real-time broadcast TV and radio (and social media). You can read more in the TechCrunch article here.
How Bitcoins Could Restructure the World
Sat, 18 Jan 2014
http://www.novaspivack.com/technology/read-my-article-on-how-bitcoins-could-restructure-the-world

Read my article in VentureBeat about how Bitcoins may restructure our civilization, and the need for advocacy to support this transition, if it is going to happen. Here’s an excerpt:

Bitcoin is a trend with all the ingredients necessary for changing the world. It spreads virally, funds its own growth, and can’t be controlled from any central point. Like the Web, it could eat the world.

Bitcoin could be the beginning of a massive transfer not only of wealth, but of power — a shift to a new social order. If you change the money system, you change the economy; that in turn changes society, government and industry. The shift to Bitcoins would be more than an economic shift, it would be a shift to a new social order — one built around a “freer market” economy.

Such a shift would be a lot more likely if a new grassroots organization were formed to accelerate, promote and protect the emerging cryptocurrency economy. By helping the cryptocurrency economy to fund its own evolution and defense, it would have a better chance of surviving the inevitable challenges it will soon face.

As this new digital economy emerges, the mysterious Bitcoin creator, Satoshi Nakamoto, could turn out to be one of the most important historical figures of our time.

Why Cognition-as-a-Service (CaaS) is the Next Operating System Battlefield
Sat, 07 Dec 2013
http://www.novaspivack.com/technology/search/why-cognition-as-a-service-caas-is-the-next-operating-system-battlefield

Read my article in Gigaom on the coming cognition-as-a-service wars. The next thing after the Semantic Web.
Did Apple Buy Topsy for Contextual Awareness?
Thu, 05 Dec 2013
http://www.novaspivack.com/uncategorized/did-apple-buy-topsy-for-contextual-awareness

The stunning news that Apple bought social search engine Topsy for more than $200M has many scratching their heads. Why would Apple want social data, and why would they pay so much for it?

There has been a lot of speculation about the reasons for this acquisition — ranging from making Siri better, to making the App Store smarter, to acquiring big data expertise to develop insights on the Apple firehose.

But I think the reason may be something else altogether: Personalization.

If you want to build the next-generation of smart personalized mobile apps and services, you need a way to know what your users are interested in.

Well, what about analyzing everything they’ve ever Tweeted? That’s a pretty good shortcut.

I know from experience how well this approach can work. Analyzing consumer Tweets provides a surprisingly good window into the interests, relationships and affinities of consumers and brands.

(Disclosure: My company, Bottlenose, which focuses on trend intelligence from social firehose data, has done quite a bit of work on deriving user interest profiles from Twitter timelines, and among our 28 pending patents we have a number of applications around this.)
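A minimal sketch of the idea (not Bottlenose’s actual method, and all example data invented): treat a user’s Tweet history as a bag of words and surface the most frequent non-stopword terms as a crude interest profile. Real systems would add entity extraction, affinity weighting, and much more.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "to", "of", "and", "is", "in", "my", "i", "then"}

def interest_profile(tweets, top_k=3):
    """Rank the most frequent non-stopword terms in a Tweet history."""
    counts = Counter()
    for tweet in tweets:
        for word in re.findall(r"[a-z']+", tweet.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return [word for word, _ in counts.most_common(top_k)]

history = [
    "Loving the new espresso machine",
    "Best espresso in Seattle?",
    "Espresso then a long bike ride",
    "New bike day!",
]
print(interest_profile(history))
```

Even this naive frequency count already surfaces plausible interests from a handful of Tweets; with years of history per user, the signal gets far richer.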

The Topsy deal is not about search. If Apple wanted to build a social search engine, then only searching Twitter history would not be a winning strategy. To win at social search a service has to be real-time, and should encompass many other social outlets — not just Twitter.

Topsy is much more focused on historical data than the present moment, and they only cover Twitter data. The Twitter-centrism and historical focus of Topsy are weaknesses, unless social search is not really the goal.

Topsy’s real strength is that they have indexed every Tweet in Twitter’s history. The only other known company that has done that is Twitter. This history is a goldmine for personalization. This is what I think Apple really bought.

The next frontier for personalization is contextual awareness. Mobile apps that know more about their present context, and the user, can provide even more relevant and timely suggestions.

But contextual awareness is not merely about knowing the device’s position. It’s about knowing everything about what a user is doing, who they are, and what they might think or want next.

The goal of contextual awareness is to create apps that understand not only where you are located and what you are looking at, but also what you are doing, why you are doing it, who and what is nearby, what your goals are, and what you are likely to think and feel about every person, building, device, object, app, product, advertisement, or bit of information you may encounter.

Google is already ahead of Apple in the contextual awareness game with Google Now and Google Glass. Facebook and Twitter both have huge advantages over Apple in this area as well because of all the data they have, and their large mobile footprints.

If Apple wants to compete in this arena, it needs to shore up its own apps and services with a way to understand each user’s present context and interests, and the history of what they have said about every location, product and piece of content they have encountered.

From Siri to iTunes, the App Store, Apple Maps, and Apple’s alleged next-gen search, augmented reality and TV initiatives — context is king.

With Topsy’s historical Twitter data, Apple can not only make Siri smarter, it can power a whole new generation of smarter, contextually-aware Apple apps and services: apps that listen to, watch, and learn from what users say and do.

Now all of this assumes that Twitter will just sit back and let Apple beat them to the contextual awareness Holy Grail on their own data. But will they? Apple and Twitter have a close relationship already. But could this new move by Apple change the tone of that relationship? It might. I would be willing to bet Twitter is starting to feel a wee bit claustrophobic right about now.

Twitter cannot tie itself only to Apple; Android matters too. Plus Twitter wants to control its own consumer experiences, and they need contextual awareness too. If Apple leapfrogs Twitter on the contextual awareness front, how long will Twitter continue to supply their full firehose to Apple’s Topsy team?

Twitter certainly has to walk an increasingly fine line with Apple, their $500B market cap frenemy.

Can Humans Fall in Love With Bots?
Thu, 21 Nov 2013
http://www.novaspivack.com/science/can-humans-fall-in-love-with-bots

I was quoted in this New Yorker article about whether relationships between humans and bots are real, along with some other AI experts. Can bots experience love? Read it and find out.
Is Twitter’s Business Model Going to Work?
Thu, 07 Nov 2013
http://www.novaspivack.com/uncategorized/is-twitters-business-model-going-to-work

Twitter could have been the exclusive seller of data to an industry of third-party apps that needed that data. But by decreeing last year that third-party consumer Twitter apps were against the rules, Twitter’s management chose not to take that route. Why did they do this?

Twitter’s management believed that they could make more money in the long run by controlling the end-to-end channel for their data and ads. They chose to make a play to be the exclusive provider of the data AND the exclusive manufacturer of all apps that convey that data to consumers.

By making sure nobody else could be involved between them and the consumer eyeballs they seek to monetize, they in theory can prevent commoditization of their service, maintain exclusivity of their experience, and thus charge more to advertisers.

But is this strategy working? And will it result in more long-term value for Twitter shareholders than the former approach of pumping data to third-party apps?

Twitter’s Declining User Engagement: Can it be Solved?

There are two primary ways that consumers engage with Twitter:

1. Posting to Twitter.

2. Consuming Twitter timeline content.

We are only concerned with (2) — because that’s where ads are displayed in Twitter today.

The success of Twitter’s strategy depends on whether Twitter can build apps that really engage users consistently, and that grow timeline engagement.

The Twitter timeline is where ads are shown: it’s where the rubber meets the road for Twitter’s current strategy.

The evidence is only anecdotal at this point, but so far it appears that engagement with the Twitter timeline is in trouble.

According to Twitter’s own reports, timeline views per user are in decline.

This is an all-important measure of engagement and a key metric to track in order to project whether Twitter’s apps will succeed in capturing user attention.

If this metric continues to fall, Twitter’s ad business faces a tough road.

At the same time as the decline in timeline engagement, Twitter’s US user base is nearing saturation, causing its growth rate to decline.

For more on these metrics, see this article that details them further.

Assuming that total user growth is nearing the saturation point for Twitter, the company has to focus on mining its existing user base for more revenue.

To accomplish this, with their current strategy, they have to find ways to increase revenue per timeline view.

Only if Twitter succeeds in finding a way to capture more timeline attention from each consumer will it really grow and monetize its advertising network in the long-term.

But so far that’s not happening. In fact, anecdotally almost everyone I speak to reports they consume less timeline content on Twitter than they did a year ago, and this is reflected in Twitter’s macro engagement metrics.

Twitter’s UX Challenge

The underlying problem is that Twitter’s user-experience has drifted away from the original elegance and simplicity that made it popular.

Twitter used to be a fast way to get a succinct list of headlines from everyone you trust.

But in the Twitter UI/UX of today, it takes a lot of mental energy — and eyestrain — to plow through the timeline for the needles in the haystack.

Getting news efficiently out of Twitter apps today takes a lot of effort — you have to read through a lot of noise — like cards, embedded videos and photos, conversation threads, sponsored posts, and ads.

Due to the large surface area of cards and expanded posts, there are fewer messages above the fold, meaning you have to scroll a lot more.

Scrolling takes more work and reduces engagement with content even further.

Ironically, the company that was able to hold an entirely irrational hard-line on the 140 character limit has not been able to hold nearly as firm a line on keeping their UI/UX simple.

In the last year we’ve witnessed increasing complexity in Twitter’s official app UI. It went from simple, understated and elegant to bloated and overbearing and confusing. It went from being Twitter to being more like Facebook.

That’s fine if Twitter is trying to emulate Facebook. But my understanding was always that Twitter was trying to be Twitter, not Facebook.

Twitter was supposed to be the place to get short and easily consumable bursts of news from the world.

Part of the equation that made Twitter work was a simple UX that made getting news really fast and efficient: timelines comprised of simple short textual headlines were efficient to read and consume. Twitter has strayed far from that ideal today.

If Twitter can find a way to grow timeline attention per user, then advertising within Twitter could become a true sustainable growth business for Twitter long-term, and Twitter could maintain tight control of their channel. But that’s not looking likely right now.

If on the other hand, attention per user in Twitter continues to decline this means that advertising inside of Twitter will become an increasingly tough business proposition for Twitter and for Twitter advertisers.

Twitter will either need to steadily lower advertising prices and stuff their timelines with even more ads, or take some other drastic action in order to push their ads to consumers and make sure they get attention. That’s a race to the bottom.

How Can Twitter Win?

What happens if Twitter cannot grow timeline attention? Is there another way they could still win?

Yes! Twitter could still win, but it would require a massive shift in orientation.

Twitter could actually make more money from ads that Twitter runs outside of Twitter than inside it.

To do this Twitter would have to shift to becoming a retargeting network. Twitter would have to focus on monetizing their audience outside of Twitter apps and the official Twitter.com site.

Retargeting is already widely used online, notably by Facebook.

Twitter is sitting on a wealth of rich user profiles that could be used to target ads to their users on any sites and apps that use Twitter for OAuth sign-in.

Interestingly, there are tens of billions of impressions outside of Twitter across around a million third-party sites. Twitter is not monetizing these yet.

Furthermore, if Twitter reversed their ban on third-party consumer apps making use of Twitter data, and built out a retargeting network across them all, they could probably double or triple this number of outside impressions, and ad impressions, fairly rapidly.

There is a hidden opportunity here for Twitter to monetize as a network rather than as an app.

Of course this would be a bit of a reversal from Twitter’s previous position that monetizing Twitter outside of official Twitter apps is against the rules of the road.

If Twitter becomes a retargeting network then they will have to open the door to third-party apps again to fully tap the potential of this strategy.

How might Twitter amplify this even more?

1. Make all their data free (with some rules) for third-party apps, sites and services to re-use.

2. Require that third-party developers and service providers NOT modify Twitter content or insert their own ads.

3. Require that third-party developers and service providers carry ONLY the ads that come with that content from Twitter.

Twitter would also share revenues on any Twitter-provided ads with third-party apps that deliver impressions on those ads.
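The revenue-share arithmetic in such a network is straightforward. The $2.00 CPM and the 70/30 split below are invented for illustration, not a proposal of actual terms:

```python
# Hypothetical terms: $2.00 CPM, 70% of revenue to the third-party app.
def payout(impressions, cpm_dollars=2.0, publisher_share=0.70):
    """Split ad revenue between a third-party app and Twitter."""
    revenue = impressions / 1000 * cpm_dollars
    app_cut = revenue * publisher_share
    return app_cut, revenue - app_cut

app_cut, twitter_cut = payout(5_000_000)  # 5M monthly ad impressions
print(round(app_cut, 2), round(twitter_cut, 2))
```

Even a modest cut of every impression served across thousands of third-party apps compounds into revenue Twitter currently leaves on the table.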

This is a killer strategy for Twitter and I’m willing to bet that someday they will return to it when they discover that monetizing only traffic inside Twitter isn’t going to generate enough growth.

Twitter-as-Network vs. Twitter-as-App

If Twitter were a network instead of an app, then in every ecological niche — every market niche — some third-party developer or Webmaster would be able to figure out how to create a particular user experience that is best-optimized to squeeze out more engagement and attention per user within that niche.

That would enable Twitter to monetize the long-tail of attention. Twitter could not possibly monetize all these opportunities with a generic solution as well as thousands of developers working, and competing, in parallel.

But instead of harnessing the power of the masses to optimize the long-tail, Twitter is making a bet they can monetize the entire channel better by themselves.

But is that even the best bet to make? The future value of owning the entire channel from end-to-end and controlling everything is not necessarily greater than the long-tail value of all the niches that Twitter has blocked by transforming from an open to a closed ecosystem.

The long-tail of Twitter advertising opportunities is worth more than owning just the head of the tail. It’s probably a better bet, if you just look at the statistics. It’s simply more likely to work in the long-run than betting that Twitter can build a generic solution that will succeed at being all things to all people.

A Better Endgame for Twitter

Twitter could in fact own both the head of the tail and the long tail opportunity. That’s the best of both worlds.

Twitter could have a strong portal and set of official apps PLUS an open content and advertising network that encourages third-party apps to drive even more attention to Twitter ads.

In this scenario, Twitter could encourage third-party consumer apps to compete to generate the most engagement and ad sales on Twitter data, in the new Twitter network.

They could even amplify that with a promise of cherry-picking the best performers of those and blessing them with massive traffic from Twitter’s central hub and maybe even investment funds or M&A for the best of them.

This would be the Twitter ecosystem 2.0, with new rules, and a built-in monetization system for everyone to partake in.

If Twitter renewed their commitment to growing a thriving third-party ecosystem, they could be in a position to harvest a huge array of apps for the ones that generate the most ad engagement, instead of farming only their official apps.

Thousands, or even millions, of apps and services generating timeline and ad impressions for Twitter is definitely a better way to grow their ad business than going it alone.

It’s not too late for Twitter to evolve their strategy around this issue, and if they don’t it’s pretty likely that Twitter advertising is going to be a tough sell in the long-run.

Bottlenose Announces Free Live Visualization of Global Social Trends
Wed, 16 Oct 2013
http://www.novaspivack.com/science/bottlenose-announces-free-live-visualization-of-global-social-trends

Bottlenose has just launched something very, very cool: a free version of its live visualization of trends in the Twitter firehose. Check it out at http://sonar.bottlenose.com and get your own embed for any topic. This is the future of real-time marketing. And by the way, it's also an awesome visualization of the global mind as it thinks collective thoughts.
Consolidate This: Quantified Self Edition
Mon, 14 Oct 2013
http://www.novaspivack.com/uncategorized/consolidate-this-quantified-self-edition

Read my article in Gigaom on the Quantified Self market and how it is developing.

There are too many choices available for consumers when it comes to devices and apps that track your steps or daily activities. What needs to happen is consolidation across the industry and a focus on storytelling, not just activity.

All of the interesting stuff happens when data collides. A voice-based interface to a single data set is a thing of the past. A voice-based interface that talks to over 400 applications, representing over 10,000 unique units of knowledge across over 3,000 discrete products? Now we’re talking.

Creating the next generation of assistance is in fact a data federation problem. It’s a brain problem. A data routing problem. A big data problem, even.

Bottlenose Series A to Bring “Trendfluence” to the Enterprise
Tue, 30 Jul 2013
http://www.novaspivack.com/business/bottlenose-series-a-to-bring-trendfluence-to-the-enterprise

Bottlenose Secures $3.6 Million Series A Round of Financing to Bring Trendfluence™ to the Enterprise

LOS ANGELES, July 23, 2013 — Bottlenose, the first application for Trendfluence™ discovery in social streams, today announced that the company has completed a $3.6 million Series A round of venture capital financing.

The round was led by ff Venture Capital, with participation from Lerer Ventures, Transmedia, Advancit, as well as other leading funds and angel investors. The Series A financing will fund new hires in engineering, sales and marketing to scale operations for the formal entry of Bottlenose into the enterprise market this autumn.

“We are excited to have the opportunity to lead the A round, as we believe that Nova and the Bottlenose team are building a truly compelling and disruptive business”, said John Frankel, Partner, ff Venture Capital. “After all, we traditionally partner with companies that are changing the way people behave, and we look forward to supporting Bottlenose with all of our internal resources as the team continues to flourish and thrive.”

An early, free alpha version of Bottlenose, released in 2012, spurred interest and demand from nearly 100,000 professional marketers seeking real-time solutions for mapping trends in social networks, in a way that allowed them to see through the fog of social media. Early enterprise partners helped shape Bottlenose for enterprise use, resulting in today’s robust system for revealing Trendfluence in firehose levels of data.

The New Science of Trendfluence™ Makes Social Listening Actionable

Bottlenose has developed a new technology for isolating Trendfluence from the noise of social streams. Trendfluence enables Bottlenose customers to proactively identify, anticipate and instigate the trends that drive their businesses.

Bottlenose applies big data cloud computing and analytics to continuously data-mine streams from social networks and enterprise data sources, to detect, visualize and monitor trends as they develop and move in real-time. As trends take shape in real-time, Bottlenose applies proprietary natural language and statistical techniques (16 pending patents) to calculate and visualize the live attention and sentiment around them.

With hundreds of millions of messages, topics, people and links analyzed to-date, and billions more being added on an ongoing basis, Bottlenose is constantly sensing the unfolding live conversation across major social networks, isolating the topics, people, issues, and content that are gathering speed, influence and momentum.

The ability to detect real-time trends enables marketers to understand the emotional energy of the crowd and how that is affecting their businesses and brands, right now. It also helps enterprises discover and monitor the “unknown unknowns” on the horizon that may grow into threats, issues, or opportunities – up to hours, days, or even weeks before they are noticed by others.

Bottlenose customers gain an unprecedented ability to find and focus on the trends that matter, as or before they materialize, to inform their real-time tactics and strategies.

Major brand Fortune 500 customers are using Bottlenose to:

Detect emerging threats and opportunities

Inform advertising keyword buying strategies

Direct real-time content creation and curation

Visualize and track activity around live events

Monitor and predict brand health and crisis management outcomes

Conduct real-time market and opinion research

Extract customer insights and competitive intelligence

Cross-correlate social activity with business outcomes like stock prices, engagement, and sales

“We are thankful to have the support of forward-thinking investors and enterprise customers who share our vision and understand the growing importance of real-time discovery analytics applied to massive data streams. We’ve seen significant traction from Fortune 500’s since the enterprise version went beta in January, both in volume of inbound and in deal size,” said Nova Spivack, CEO and cofounder of Bottlenose. “Social networks have created an environment where rumors, breaking news stories, and customer sentiment can spike and spread globally in minutes. Big brands are now in an arms race to proactively detect and respond to these emerging issues in real-time, instead of after the fact.”

Previously available as a free trial application, Bottlenose is now in limited release on a subscription basis to enterprise customers. General Availability of Bottlenose is slated for autumn.

About Bottlenose:

Bottlenose is the first application for Trendfluence discovery in social and business data streams. Bottlenose provides an enterprise-grade dashboard for discovering, monitoring and acting on influential trends, beginning with social media communications affecting brands.

Bottlenose was founded in 2010 by serial entrepreneur, Nova Spivack, and Web technologist, Dominiek ter Heide. Bottlenose has offices in Los Angeles, California, New York City, and Amsterdam, the Netherlands.

About ff Venture Capital:

ff Venture Capital is an institutional venture capital investor in seed-stage companies. Since 1999, our Partners have made over 180 investments in over 72 companies. Our exits include Cornerstone OnDemand (IPO, CSOD) and Quigo Technologies (sold to AOL for a reported $340m). ffVC has twenty employees based in New York and New Jersey and extensive resources dedicated to portfolio acceleration, including strategy consulting, recruiting assistance, in-house accounting services, communications and PR strategy, engineering assistance, a pool of preferred service providers and an executive portfolio community.

The Post-Privacy World
Fri, 26 Jul 2013
http://www.novaspivack.com/uncategorized/the-post-privacy-world

Read my article in WIRED Insights about what the post-privacy world will be like. Here’s an excerpt:

Edward Snowden’s recent allegations regarding what most of us already suspected the NSA was doing, have ignited a huge controversy around privacy and the role of the State versus the individual. And while it is tempting to have a knee-jerk reaction against government intrusion in our lives, in fact it’s not that simple.

In the post-privacy world, privacy is no longer guaranteed or expected. Given that we can’t stop this shift from happening, the question becomes, how can we turn lemons into lemonade in this situation?

It turns out that the post-privacy world may not be as dystopian as some people seem to think. In fact, despite all the negative hype about it, it’s really not that different from the world we live in today. And a more transparent world even has the potential to be better than one of excessive privacy and secrecy.

The Present IS the Future: Real-Time Marketing In the Era of the Stream – Part Two
Sat, 08 Jun 2013
http://www.novaspivack.com/technology/the-present-is-the-future-real-time-marketing-in-the-era-of-the-stream-part-two

In Part I of this article series, we looked at how the real-time Web has precipitated Nowism as a fundamental shift in how we understand and engage with information. Nowism is a cultural shift to a focus on the present, instead of the past or future.

One example of Nowism in action is Nowcasting, which attempts to make sense of the present in real-time, before all the data has been analyzed, in order to project trends sooner or even continuously. Nowcasting is quickly becoming a necessary and powerful function in our media, culture and society.

These ideas are being harnessed by savvy brands and companies not only in how they operate, but also in how they conceive of themselves.

Next we will look at how they impact social marketing, and why brands must learn to act like media companies in this new environment.

The Three Stages of Real-Time Marketing Evolution

The Stream is more real-time than the Web. And it’s even more real-time than blogging and the early days of social networking. But it’s not only faster, it’s also orders of magnitude bigger. Instead of millions of Web pages every month, we’re dealing with billions of messages every day.

There’s vastly more activity, more change, more noise, and when trends happen they are more contagious and spread more quickly. It’s therefore even more important to sense and respond to change in the present, right when it happens.

Unlike the Web, the Stream is constantly changing, everywhere, on the second timescale: It is a massively parallel real-time medium. And instead of a few channels there are billions of channels — at least one (if not many) for each person, brand, organization and media outlet on the Net — and they are all flowing with messages and data.

Keeping up with the deluge of real-time conversation across so many channels at once is a huge challenge, but making sense of so much change in real-time is even harder. Yet, even harder still is intelligently engaging with the Stream in real-time.

These three objectives represent three levels of maturity and mastery for real-time marketers.

Stage One: Week Marketing.

Today, most brands and agencies are still stuck trying to accomplish Stage One, if they are even that far along.

Stage One social marketers are focused on simply monitoring the Stream and trying to keep up with the conversations about their brand.

Some Stage One marketers are also actively trying to drive perception and optimize engagement through social media. But their ability to measure the effects of their actions and optimize their engagement are primitive at best.

The timescale of their measurement and engagement with the stream can range from hours to days, or even to the weekly timescale.

Stage Two: Day Marketing.

A smaller set of organizations have learned how to make sense of the Stream in real-time and are operating on the hour to daily timescale. They have graduated to Stage Two.

Stage Two marketers don’t merely monitor and respond, they digest and interpret. They measure and engage in sense-making and trend discovery. They generate live insights from millions of messages and incorporate these insights into their thinking and behavior on an hourly to daily basis.

Stage Two social marketers have evolved past the stage of simple reflexive response to the stage where they can interpret and reason about the Stream intelligently in near-real-time. They leverage social analytics, data mining, and visualization tools to facilitate insight, and this leads to smarter behavior, more optimal engagement and better results.

But Stage Two organization response times are still not real-time. Instead of seconds or minutes their responses often take hours or even days and that’s not fast enough anymore.

Stage Three: Now Marketing.

Stage Three marketing organizations are incredibly rare today. But there will be more of them soon.

Stage Three’s are able to monitor, make sense of, and then engage intelligently with the Stream, all in real-time, not after-the-fact.

In other words, they are consistently able to detect, measure, analyze, reason, and respond to signals in the Stream within seconds or minutes, or at most within the hour.

Stage Three organizations continuously run a real-time marketing feedback loop that works like this:

Sense. First a signal is sensed: it might be a breaking story or rumor, a complaint by an influencer, a change in customer perception or audience sentiment, a shift in engagement levels, a crisis, or a sudden new trend or opportunity. Sensing signals, and differentiating important signals from noise, in real-time, requires new approaches to determining relevance, timeliness, and importance that don’t rely on analyzing historical data. There is no time for that in the present. Sensing has to intelligently filter signal from noise by recognizing the signs of potentially interesting trends, regardless of their content. Organizations that can do this well are able to detect emerging trends early in their life cycles, giving them powerful time advantages.

Analyze. Next, the signal is analyzed in real-time to understand what drives it, and what it drives. Its underlying causes, influencers, effects, demographics, time dynamics and relationships to other entities and signals are mined, measured, visualized and interpreted. This requires new live social discovery and analytics capabilities – what’s new here is that this isn’t merely classical social analytics (measuring follower count or message volumes, or charting historical volume), it’s massive “big data” mining and discovery in real-time. It’s live prospecting around all the potentially relevant signals that are connected to each signal to get the context.

Respond. Then an intelligent response (itself a signal) is generated across one or more channels within seconds to minutes. For example a reply or an offer may be sent to a customer, an alert may be sent to a team, a new piece of content may be synthesized and published, the targeting for an ad campaign may be adjusted, or the tone of social messaging may be modified, prices may be adjusted, and even policies may be changed – all in real-time. Organizations that get good at this process are able to respond so quickly they cross the threshold from being reactive to being proactive. They are able to drive the direction of trends by being the first to detect, analyze and respond to them.
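The shape of this loop can be sketched in a few lines of code. This is only a minimal illustration, not Bottlenose's actual system: the rolling z-score "sense" step, the thresholds, and the placeholder analyze/respond handlers are all assumptions made for the example.

```python
# Illustrative sense -> analyze -> respond loop over a stream of
# per-topic message counts. Thresholds and handlers are assumptions.
from collections import deque, defaultdict
from statistics import mean, stdev

WINDOW = 30        # rolling history length per topic
Z_THRESHOLD = 3.0  # how far above baseline a count must be to be a signal

history = defaultdict(lambda: deque(maxlen=WINDOW))

def sense(topic, count):
    """Flag counts that deviate sharply from the topic's rolling baseline."""
    h = history[topic]
    signal = False
    if len(h) >= 5 and stdev(h) > 0:
        z = (count - mean(h)) / stdev(h)
        signal = z > Z_THRESHOLD
    h.append(count)
    return signal

def analyze(topic, count):
    """Placeholder: mine context around the signal (drivers, sentiment)."""
    return {"topic": topic, "count": count}

def respond(insight):
    """Placeholder: route the insight to a team, a campaign, or an alert."""
    return f"ALERT: spike on {insight['topic']} ({insight['count']} msgs)"

alerts = []
stream = [("acme", c) for c in [10, 12, 9, 11, 10, 10, 11, 9, 80]]
for topic, count in stream:
    if sense(topic, count):
        alerts.append(respond(analyze(topic, count)))
# The final count of 80 against a baseline near 10 trips the alert.
```

The point of the structure is that every stage runs on the message timescale: no batch job, no overnight report, just a standing loop from signal to action.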

This feedback loop is exemplified in real-time social advertising targeting and campaign optimization, for example. But as organizations mature to Stage Three they must learn to apply this same methodology to ALL their interactions with the Stream, not just advertising. They must apply this feedback loop across ALL their engagement with customers, the media and the marketplace.

Within a decade, all leading brands will be Stage Three marketing organizations.

Mapping The Ripple Effect

“Stage Three” agencies and brands need to master ripple effects to thrive in the next generation social environment.

Ripple effects are the key forces in the emerging real-time social Web. Information propagates through ripple effects along social relationships, across channels, communities, and media. Ripple effects are how trends emerge and rise, how rumors spread, and how ads and content are distributed. But we’re currently almost completely blind to ripple effects, we have almost no way to detect, measure or predict them.

The average Facebook user has 190 friends. The average Twitter user has 208 followers. Each group contains a number of influencers. Within each influencer’s social graph there is another set of even more powerful influencers. And so on. When you seed a branded message on Facebook, for example, it’s not a straight trajectory. A ripple starts with the above numbers, but each new impression creates a new set of ripples.

Suddenly, your branded message is being seeded across a number of platforms — even spreading to platforms you never intended — and 99% of brands and agencies have no way of mapping this ripple effect, let alone controlling it.

But what if you could track your ripple effects? What if you could guide them? What if you could measure their effectiveness, or even predict where they are going? Suddenly there would be a wealth of new insights to pull and learn from. And this is precisely what is now possible, using emerging tools.
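Even a toy model shows why ripples matter so much. In the sketch below, each hop multiplies impressions by followers-per-user times the probability of a reshare; the 208-follower figure comes from the averages above, while the 1% reshare rate and the seed size are purely hypothetical assumptions.

```python
# Toy geometric model of ripple reach: at each hop, every impression has
# some chance of being reshared to that user's followers. The reshare
# rate is an illustrative assumption; 208 is the average Twitter
# follower count cited above.

def expected_reach(seed_followers, avg_followers, reshare_rate, hops):
    """Expected cumulative impressions after a number of ripple hops."""
    reach = seed_followers
    wave = seed_followers
    for _ in range(hops):
        # Each impression reshared spawns avg_followers new impressions.
        wave = wave * reshare_rate * avg_followers
        reach += wave
    return reach

# One influencer with 10,000 followers, a 1% reshare rate, three hops.
# Each hop multiplies the wave by 0.01 * 208 = 2.08, so later hops
# dominate the total.
total = expected_reach(10_000, 208, 0.01, 3)
```

Notice that whenever reshare_rate times avg_followers exceeds 1, each successive ripple is larger than the last, which is exactly why a message can spread to platforms and audiences the brand never intended.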

Problems with Existing Tools

Stage Three marketing organizations have to keep up with ripple effects in real-time, and they have to anticipate where those ripple effects are headed, in order to react immediately.

But most social analytics and engagement tools fail to show ripple effects. They provide loads of raw data — lists of messages from various social accounts and searches. But they expect humans to do most of the work of actually figuring out what’s important in those lists of messages. That is no longer realistic. Humans can’t cope with the data — it’s overwhelming.

Furthermore, most existing social analytics tools focus on simply measuring engagement via follower counts, mentions, likes, Retweets, favorites, impressions, click-throughs, and basic sentiment. But those metrics are no longer sufficient: They aren’t the trends, they are just signals that may or may not be relevant to actual trends. Not all signals are trends. The art is in finding a way to pull out the actual trends from the rest of the signals that are not in fact trends of any value.

What existing tools fail to do is actually make sense of what’s going on for you — they show you either too little or too much information, but they fail to show you what’s actually important; they’re not smart enough to figure that out for you.

Existing tools are good at finding known topics and trends (“known unknowns”) – things you explicitly ask to know about in advance – but what we need in the era of the Stream are tools that show you what you don’t even know to ask for (“unknown unknowns”). They have to detect novelty, outliers, anomalies, the unexpected – and they have to do this automatically, without being instructed on how to find these nuggets.
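One way to surface unknown unknowns without naming them in advance is to compare each term's frequency in the current window against a trailing baseline and rank the biggest jumps. This is a deliberately minimal sketch, not any vendor's algorithm: the +1 smoothing, the ratio cutoff, and the sample messages are all assumptions.

```python
# Surface bursting terms with no predefined watch list: rank terms by the
# ratio of current-window frequency to baseline frequency. The +1
# smoothing and the ratio cutoff are illustrative assumptions.
from collections import Counter

def bursting_terms(baseline_msgs, current_msgs, min_ratio=2.0):
    base = Counter(w for m in baseline_msgs for w in m.lower().split())
    now = Counter(w for m in current_msgs for w in m.lower().split())
    bursts = {}
    for term, count in now.items():
        ratio = count / (base[term] + 1)  # +1 avoids division by zero
        if ratio >= min_ratio:
            bursts[term] = ratio
    # Highest-burst terms first.
    return sorted(bursts, key=bursts.get, reverse=True)

# Nobody asked to monitor "outage" — it surfaces anyway because it never
# appeared in the baseline window:
baseline = ["great coffee this morning"] * 50
current = ["outage outage site is down", "is the site down", "down again"]
terms = bursting_terms(baseline, current)
```

The key property is that nothing here depends on a list of topics supplied in advance: any term whose behavior suddenly departs from its own history gets flagged, which is the essence of detecting the unexpected automatically.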

Existing social media analytics tools are too retrospective in nature – they show how a brand performed on social channels from the past up to the moment a question is asked. But these reports are static. They don’t show change happening, they don’t say anything about what’s next. The minute they are generated they become obsolete. It’s interesting to look at past performance, but what is really needed is more predictive analytics.

Trendcasting

We need a new generation of tools that are designed for identifying real-time ripple effects and filtering them to figure out which ones are noise and which are actual trends we should pay attention to. Better yet, we need tools that can not only identify the trends, but that can project where they are headed in real-time. Think of this as the next evolution of Nowcasting.

We might call this Trendcasting. Where Nowcasting figures out what’s happening now in real-time, Trendcasting figures out what’s happening next in real-time.

No human can Trendcast in real-time anymore without help from software; the Stream has too much volume and velocity for the human mind to comprehend or process on its own. This is a problem that can only be solved by cloud computing against big data in real-time.
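At its core, the "where is it headed" step can be reduced to a projection over a trend's recent trajectory. A production system would use far richer models, so treat this as a bare-bones sketch of the arithmetic only: fit a line to a trend's recent counts and extrapolate one interval ahead.

```python
# Minimal trendcast: least-squares slope over a trend's recent counts,
# projected one interval forward. This only illustrates the idea; real
# Trendcasting systems would use much richer models.

def trendcast(counts):
    """Project the next value from a linear fit of the recent series."""
    n = len(counts)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(counts) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, counts))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den
    intercept = y_mean - slope * x_mean
    return intercept + slope * n  # fitted value at the next time step

# A steadily climbing mention count projects above its latest value:
projected = trendcast([100, 120, 140, 160, 180])
```

Comparing the projection against the latest observed value gives a simple "rising or fading" verdict per trend, which is the decision a real-time marketer actually needs.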

In today’s real-time Stream, marketers cannot afford to be hours or days behind the curve. They need to know and understand the present in the present, while it is still unfolding. They need tools for Trendcasting – for finding and predicting trends. Trends are not merely raw data, they are particularly meaningful and noteworthy trajectories in the data.

Trendcasting is going to become absolutely key. The next-generation real-time marketing platforms will provide automated trendcasting as a key feature. They will sift through the noise, find the signal, and then measure it to see if it actually matters. Trendcasting is about filtering for the trends that actually matter, because not all signal is important and not all trends are equal.

Traditionally, finding and forecasting trends has always been thought of as an exclusively human skill — but today we’re starting to automate this function. I believe Trendcasting can be fully automated, or at least dramatically improved, using massively parallel big data analytics approaches. This is where the cutting edge of innovation in real-time marketing will focus for the next decade. (Disclosure: My own company, Bottlenose, is focused on exactly this goal, for Fortune 500 brands).

Trendcasting tools are the next leap in a long process of measurement tools innovation that has included inventions like telescopes, microscopes, X-rays, weather satellites, MRIs, and search engines. In a sense trendcasting engines could be thought of as automated cultural measurement tools — the social equivalent of a weather satellite — social satellites. They help us to visualize, understand and project the weather of markets, cultures, industries, communities, brands and their audiences, just like satellites have helped us understand and map the weather patterns of our planet.

Every Brand is a Media Company

The shift to real-time and the advent of the Stream changes how brands must think of themselves.

Whether they are ready or not, all brands have to learn to function more like media companies – and in particular like news networks – in order to remain competitive in the era of live social media.

For the first decade of social media the emphasis was clearly on social, but now it is shifting to media. Leading brands have learned how to be social for the most part. Now they have to learn how to act like media companies.

Consider a network like CNN: They have reporters all over the world, constantly giving them text, images, video, opinion, insights and leads. They have viewers, some of whom are also contributing news tips and stories, and opinions, all over the world across many platforms and channels.

CNN’s bread and butter is finding breaking stories first, getting the best information about them, and covering them most comprehensively and creating original news content and analysis for their audience.

CNN is a good model for what every Stage 3 brand has to learn to do in order to master Now Marketing.

Brands that want to lead in the Stream era have to gather intelligence constantly, using social media. They have to create content, share it, and engage. They have to keep their fingers on the pulse of their markets and culture in general in order to remain relevant and timely. They have to respond to a huge influx of questions, opinions, complaints, suggestions, leads. And they have to do this across many platforms and channels at once, in real-time.

The distinction between content provider and audience is dissolving. It’s now a two-way live conversation with the market, a conversation among equals. Brands have to learn to share, interact, make friends, and socialize just like people do. They have to not only create content for their audiences, they have to use their audiences as the content. And they have to do it on a massive scale.

Some brands – like Nike and Red Bull – have gone very far down this path and even think of themselves as media companies to some degree. But for most brands thinking like a media company is still a completely new orientation and set of skills.

Brands need new tools in order to think and operate like media companies. They can’t work on the weekly or monthly timescale anymore. Even daily timescales are too slow: they have to go live.

They can’t just market to their customers, they have to engage them in marketing the brand and creating media, together. They can’t just analyze key metrics anymore, they have to understand the trends that are emerging, and what’s driving change.

The Stream is here, and it’s happening in real-time. Marketers who can adapt to this shift early will be the leaders of tomorrow; brands that are late in adopting these practices risk becoming nothing but historical data points.

The Present IS the Future: Real-Time Marketing In the Era of the Stream – Part One
Sat, 08 Jun 2013
http://www.novaspivack.com/uncategorized/the-present-is-the-future-real-time-marketing-in-the-era-of-the-stream-part-one

Introduction

The pulse of the Net has gotten faster. It’s not a static Web of documents anymore, it’s a new real-time messaging medium we call the Stream.

The Stream is unlike any form of live media before it: It is a completely real-time, globally distributed, two-way conversation. And it’s already changing everything we know about marketing, advertising, branding and PR.

For marketers – and particularly for brands and agencies – mastering the Stream requires a new set of approaches, new tools and new practices. Like the similar shift, over two decades ago, from traditional media to digital media, this shift is both an existential challenge and a potentially destiny-changing opportunity.

Some organizations are learning to master the Stream faster than others, and they will be the leaders of tomorrow. But even those that lag will have to adapt soon, or they will become irrelevant by 2016. It’s evolve or die, all over again.

The Clock Rate of the Net is Speeding Up

One of the things that makes the Stream different from the Web is clock rate. The clock rate of the Net is increasing.

For the past 30 years we have been trending towards immediacy – the world has been getting faster, and nowhere has this been more apparent than online.

Now we have arrived at real-time and the Net has become a live medium. This changes everything.

Before blogging the clock rate of the Web was slow. Most Web sites were updated less than once per day. News and media sites were updated daily or perhaps a few times a day. Blogging eventually increased the rate of change to the hours timescale. RSS then made it possible to keep up with this change more efficiently – the Web became a gigantic news ticker. But it was still a relatively slow one compared to today.

Starting in 2000, instant messaging and text messaging both started to gain adoption. These shifted marketing from the hourly to the intra-hour timescale.

Facebook was launched in 2004, followed by YouTube in 2005, Twitter in 2006 and Instagram in 2010. Due to the rapid message-based brand conversations they enabled, social networks sped up the timescale of digital marketing from hours to minutes, and even to seconds.

Since 2000 we have also seen a steady transition from stationary and sporadic Internet access to continuous mobile access, complemented by simultaneous increases in bandwidth and reductions of bandwidth cost.

Everyone is now connected all the time, both as a content consumer and as a content provider. These trends have democratized the Internet from a nearly static and one-way textual medium to a fully live two-way multimedia medium – the Stream of today.

Social Media Beats Mainstream Media

Social media is the new media – it is a new form of media, not just a media distribution pipe, and it is much faster than traditional media in every way.

The Stream is more live and real-time than TV and radio ever were. For example, social networks consistently beat TV and radio to the story. They sense and distribute breaking news and trends ahead of mainstream TV networks and media outlets, often by tens of minutes, and sometimes by hours.

Likewise, ambient social apps that take advantage of geofencing and silent, constant communication between devices are driving real-time content and engagement.

SoLoMo (Social + Local + Mobile) technology is still in its infancy, but it opens myriad new doors for brands and agencies to begin mining live data. Tracking consumer behavior in real-time paints an incredibly vivid picture of your fans and followers as SoLoMo becomes widely adopted.

The digital native demographic has developed an expectation that their devices seamlessly integrate with the world around them. As such, Millennials have begun hyper-tasking: operating multiple devices/consuming data from multiple sources at once.

Brands can no longer rely on a blanket broadcast strategy with their messaging. You need to know the behaviors of your target demo on each device – what they’re saying, when they’re saying it, and who they’re saying it to. And you need to know this in real-time.

Attention has Shifted to the Stream

Much of the growth of the SoLoMo movement has been driven by increased bandwidth and faster adoption of smartphones and tablets, but also by broader demographics embracing social media.

This produces a phenomenal amount of data, and it’s a challenge to manage and make sense of it, but it’s imperative that brands and agencies begin sifting through it all to execute the right kind of campaigns.

The pace and volume of social messages on Twitter and Facebook have been growing exponentially, year over year, and this trend shows no signs of slowing down. Attention is shifting from search to social.

Meanwhile, attention to the top sites on the Web is increasingly being driven by this social messaging activity, or “dark social,” rather than by Web navigation, Web search and SEO.

Among the top 50 sites on the Web, most get at least as much of their traffic from social as from search, if not more. In other words, the primary driver of digital consumer attention, brand perception, and engagement has shifted to social.

The Age of Nowism

The shift to faster timescales is altering the landscape of marketing, sales and even customer service. Consumers live in the Now, and they are demanding that brands live there too.

This new and growing obsession with the now even has its own marketing buzzword: Nowism.

NOWISM | “Consumers’ ingrained lust for instant gratification is being satisfied by a host of novel, important (offline and online) real-time products, services and experiences. Consumers are also feverishly contributing to the real-time content avalanche that’s building as we speak. As a result, expect your brand and company to have no choice but to finally mirror and join the ‘now’, in all its splendid chaos, realness and excitement.”

Nowism is a cultural shift to a focus on the present, instead of the past or future. It’s new and unprecedented: never before has a civilization on this planet lived so exclusively in the now.

In the information age, thanks to the double-edged blessing of information technology and communication networks, the present has become bigger, faster, and more consuming.

Today we are focused on immediacy and instant gratification. And with these come an expectation of instant response, instant customer service, and instant solutions. This is an era of shoot-first-ask-questions-later, where trends and rumors flare up and go global in minutes.

Now, the risk of being late is greater than the risk of being wrong. And due to this, even experienced media outlets and brands are under pressure to publish or respond as fast as possible, without even time to think or fact-check. It’s better to issue a correction than to be perceived as slow.

This is a world in which there will be more error, more confusion, more threats, more crises, but they will start and end more quickly as well. It is also a world in which there will be more leads, more opportunities, and more transactions, and they too will start and end more rapidly.

To survive and prosper in these faster cycles of activity organizations have to learn to think and respond in real-time, even if it means making mistakes and corrections more frequently.

The present contains more data, and more change, than what used to occur in months or even years of activity. And the present is therefore more difficult to understand today than it was before.

Nowcasting: Predicting the Present

Because there is so much to measure in the present, and we can measure everything in higher resolution, there is vastly more that we must pay attention to at any time. And this means we simply don’t have as much time or resources to focus on the past or the future.

Instead of predicting the future, there is a new option in the age of Nowism: predict the present, while it is still unfolding, using an approach called Nowcasting, popularized for economic data by Google.

Nowcasting attempts to make sense of the present before all the data has been analyzed, in order to project trends sooner, or even continuously. For example, using nowcasting techniques Google has been able to estimate monthly sales, economic indicators, and disease spread in near real-time, before end-of-month results are usually available.

Nowcasting has been applied by hedge funds, economists, and epidemiologists, and soon it will be a standard tool for marketers.
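As a toy illustration of the idea (all numbers and the proxy signal are invented for this sketch, not drawn from any actual Google method), a nowcast can project a month-end total from a partial month of observations, adjusted by a real-time proxy such as related search-query volume:

```python
# Toy nowcast: project a full-month total before the month ends,
# using the run rate so far plus a real-time proxy signal.
# Everything here is illustrative, not a production technique.

def nowcast_month_total(daily_so_far, days_in_month, proxy_lift=1.0):
    """Estimate the month-end total while the month is still unfolding.

    daily_so_far  -- daily values observed to date
    days_in_month -- calendar length of the month
    proxy_lift    -- ratio of the current proxy signal (e.g. related
                     search queries) to its historical average; >1.0
                     suggests the remaining days will run hot
    """
    days_observed = len(daily_so_far)
    run_rate = sum(daily_so_far) / days_observed          # avg per day so far
    remaining = (days_in_month - days_observed) * run_rate * proxy_lift
    return sum(daily_so_far) + remaining

# After 10 days of a 30-day month, with the proxy running 20% hot:
print(nowcast_month_total([100] * 10, 30, proxy_lift=1.2))  # 3400.0
```

The point is not the formula, which is deliberately naive, but the shape of the workflow: act on an estimate continuously instead of waiting for the official end-of-period number.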

Applying nowcasting effectively is more than simply measuring and predicting events. The opportunity is to use those measurements to act proactively: respond, create new content, adjust campaigns, and stay one step ahead of emerging trends. The approach is not reactive but proactive, seizing opportunities as they develop in real-time.

This of course requires sophistication and an understanding of the present, in order to sense, analyze and act in a way that is substantial and moves engagements forward.

In part II of this series, we will look at how marketing has evolved to its present state in the real-time Web, and how Nowism is necessarily changing the way that marketers work. We will also explore how leading businesses must learn to consider themselves as creators of media rather than simply forces moving within it.

Notes:

The author thanks Adam Blumenfeld and Phil Ressler for contributions, edits and suggestions for this article.

Twitter is No Longer a Village
http://www.novaspivack.com/uncategorized/twitter-is-no-longer-a-village
Tue, 26 Feb 2013 10:10:47 +0000

I’ve noticed a distinct change in how people use Twitter in the last year:

1. People are increasingly not using Twitter for actual two-way conversations or interactions. Instead it’s being used more for one-way “fire and forget” posting. People just post into the aether, without knowing or even caring if anyone actually reads their posts.

2. People are spending less time reading Twitter messages, and they are paying less attention to what other people say. This is because it’s too difficult to keep up with what your friends are up to on Twitter: we all follow too many people now, and there are just too many messages flowing by all the time.

These two shifts are going to fundamentally change what Twitter is for, and how it is used. It is gradually becoming less of a social network where people interact, and more of a place to simply express opinions.

Maybe in a way this is a return to the original intent of Twitter — a place where you could post what you were doing. That was originally a one-way activity. However soon after those early days a community formed and Twitter became conversational and highly interactive for a while. Until it got so big that it lost that village feeling.

Twitter used to be a village — it was in fact the epicenter of the global village for a while. But now it has become a gigantic industrialized urban sprawl. A megacity. It’s lost that feeling of intimacy and community it once had.

Today Twitter is a mass market backchannel for consumers to express themselves to businesses and media providers, and for businesses to market to their audiences. It is also a place where people express themselves around live events like sports games, television shows and breaking news.

But while people and businesses are increasingly expressing themselves on Twitter, they are actually doing less listening to each other there.

Listening is on the decline because message volumes on Twitter are now so high that it is just impossible to keep up. There are too many messages flowing by all the time. It’s information overload. There’s no point in even trying to pay attention to what the people you follow are saying.

Of course people still pay attention to replies, mentions and Retweets of their own posts — at least if they are not famous. Famous people get far too many mentions from strangers and so they usually just ignore them as well.

I’m willing to bet that you aren’t paying attention to Twitter. Your friends aren’t either. At least not like in years past.

So who is listening to Twitter if it’s not all of us? Businesses. They are listening, analyzing, and using this data to gauge perception, market and advertise. This is where the real value of Twitter seems to be headed: It’s a channel for people to express themselves around products, brands, events and content. And it’s a tool for businesses to learn about their audiences and market to them in real-time. Twitter is becoming our global backchannel.

As a side-effect of these shifts, Twitter is feeling less social every day. It’s no longer a place where people listen or pay attention to one another anymore. It’s certainly not a place where people have conversations beyond the occasional reply. Instead, it’s more like a giant stadium where everyone is shouting at the same time.

This probably means that as a publishing and messaging channel Twitter will become less effective over time.

As message volumes keep growing, what are the chances that your audience will be looking at the exact second that your message is actually visible above the fold, before it is buried by 1000 new Tweets? The chances are getting lower every day. And nobody scrolls down to look at older messages anymore. Why look back through the past when there are so many new Tweets arriving in the present?
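To make that intuition concrete, here is a back-of-envelope model with entirely invented numbers: a tweet stays above the fold only until enough newer tweets push it off, and a follower sees it only if one of their glances at the feed happens to land inside that window.

```python
# Toy model of organic reach on a fast-moving feed. All parameters are
# made up for illustration; the real dynamics are messier.

def expected_viewers(followers, feed_rate_per_min, fold_size, glances_per_day):
    """Rough expected number of followers who see one tweet organically."""
    visible_minutes = fold_size / feed_rate_per_min   # until it scrolls off
    minutes_per_day = 24 * 60
    # chance that one random glance lands inside the visibility window
    p_per_glance = visible_minutes / minutes_per_day
    # chance at least one of a follower's glances catches it
    p_seen = 1 - (1 - p_per_glance) ** glances_per_day
    return followers * p_seen

# 1,000 followers, feeds moving at 10 tweets/min, a 20-tweet fold,
# and followers checking Twitter 10 times a day:
print(round(expected_viewers(1000, 10, 20, 10), 1))  # 13.8
```

Under these assumptions, barely over 1% of followers ever see a given tweet, and the number falls further as the feed rate climbs, which is the trend described above.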

This means that the likelihood of your intended audience seeing anything you post to Twitter is headed towards zero.

Unless of course, you’re famous. If you’re famous you can post once and get thousands of Retweets, and that might get your post noticed. But for most of us, and even most brands, most posts are going to be missed. They are like shots in the dark.

If you’re not famous you can still get noticed, however, if you are willing to pay. You can buy visibility for your Tweets by making them into Promoted Tweets. But ads are different from conversation. And a network where people have to advertise to each other to be heard would not feel social at all.

Should this be fixed? I’m willing to bet that Twitter will probably not put much effort into reducing noise, or into adding really good personalization, precisely because such measures would compete with Promoted Tweets. Promoted Tweets make money precisely because there is increasing noise in Twitter, just as Google Ads make money because Google search is not as relevant as it could be.

These trends throw into question the value of posting anything to Twitter today, at least if your goal is to reach your followers organically and get attention. That is just increasingly unlikely.

If you really want to reach people on Twitter, the best bet will be to advertise there.

But advertise to whom? If attention to Twitter is declining because people are posting more but reading less, that would reduce attention to Twitter ads as well.

Ironically it’s the noise on Twitter that creates a need for Twitter ads, but it’s that same noise that will ultimately cause people to not pay attention to Twitter anymore. And if people pay less attention to Twitter’s content, there will be less of an audience for Twitter’s ads. It’s just too much work to find the needles you care about in all that hay.

The noise problem on Twitter is a side-effect of mass adoption. But it’s also a side-effect of a growing mismatch between how Twitter was designed as a product and the audience size, and message volumes, it supports today. Twitter was not designed for this level of audience or activity, and it shows. Twitter was designed to be a village, but it’s now a megacity.

It will be interesting to see how Twitter evolves to meet this challenge. Can they restore the balance by creating ways for consumers to filter the noise? Can they attract more attention and content consumption?

My theory is that Twitter may inevitably focus more on advertising outside of Twitter than inside, perhaps by using a retargeting approach on sites that use Twitter OAuth to register their users. Here’s how this could work:

Twitter can potentially see the interests of anyone who posts content to Twitter.

When any member of Twitter uses their Twitter credentials to log in to any site that uses Twitter OAuth for login (including Twitter.com), Twitter can place a cookie in their browser.

Then any site that uses Twitter OAuth can detect that user and associate them with their interest profile from Twitter.

With this knowledge any site in the Twitter network can target ads to Twitter users’ personalized interests when they get visits from those users.
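The steps above can be sketched as follows. Every name, profile and ad here is hypothetical, and the real OAuth token exchange and browser cookie mechanics are deliberately elided; this only shows the matching logic.

```python
# Hypothetical sketch of interest-based retargeting via a social login.
# None of these names or data structures are a real Twitter API.

INTEREST_PROFILES = {            # what the network could infer from posts
    "user123": ["photography", "travel"],
}
ADS_BY_INTEREST = {
    "photography": "Camera sale!",
    "travel": "Cheap flights!",
}

def oauth_login(user_id):
    """Step 2: on OAuth login, a cookie identifying the user is set.
    A dict stands in for the browser cookie here."""
    return {"twitter_uid": user_id}

def ad_for_visitor(cookie):
    """Steps 3-4: a partner site reads the cookie, looks up the user's
    interest profile, and targets an ad to the first matching interest."""
    uid = cookie.get("twitter_uid")
    for interest in INTEREST_PROFILES.get(uid, []):
        if interest in ADS_BY_INTEREST:
            return ADS_BY_INTEREST[interest]
    return "generic ad"          # unknown visitor: no targeting possible

cookie = oauth_login("user123")
print(ad_for_visitor(cookie))    # Camera sale!
```

The design point is that what a user says inside the network becomes the targeting signal for ads shown anywhere the login is accepted.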

This technique is already being applied by one company, LocalResponse. I wonder when Twitter will start doing it themselves. If they do this, Twitter can become an ad network that uses what people talk about inside of Twitter, to target ads to them outside of Twitter.

Ultimately this may solve the attention problem in Twitter. Don’t even bother getting people to pay attention to content inside of Twitter. Just get them to talk about their interests and then target ads to them when they pay attention to content outside of Twitter. This “retargeting” approach is working well for Facebook and it’s only a matter of time until Twitter does it. Of course I’m sure Facebook has applied for a patent on this idea by now and that will also add a wrinkle to how this plays out in the future.

Making Sense of Streams
http://www.novaspivack.com/uncategorized/making-sense-of-streams
Wed, 24 Oct 2012 23:31:06 +0000

This is a talk I’ve been giving on how we filter the Stream at Bottlenose.

Note: I recommend the webinar if you have time, as I go into a lot more detail than is in the slides – in particular some thoughts about the Global Brain, mapping collective consciousness, and what the future of social media is really all about. My talk starts at 05:38:00 in the recording.

The Twitter API Insanity – What Everyone Seems to Be Missing
http://www.novaspivack.com/uncategorized/the-twitter-api-insanity-what-everyone-seems-to-be-missing
Fri, 17 Aug 2012 06:20:28 +0000

One thing I find quite ironic about the whole Twitter API crackdown is when people try to rationalize it because “Twitter needs to be a multibillion dollar business” or “Twitter must remain an independent company” etc. These kinds of statements just don’t hold water. Twitter CAN in fact make more money, and be more independent, by opening up the APIs than by closing them.

I’ve run the numbers (below). Let’s look at this in more detail.

Alternative 1 – The Dual API Approach.

How It Works

APPS CAN USE THE API FREE WITH ADS: API is free and unlimited, but every 50th Tweet is an ad from Twitter. If you use this API in your app you MUST display the ads exactly as Twitter intends.

OR APPS CAN PAY FOR NO ADS: If you don’t want to have ads in your app, you can pay for the premium API. You can pass that charge to your users as a fee or subscription for your app. Or you can sell your own ads and give your app away for free. Or you can give your app away for free and make money in some other way (commerce, data, consulting, donations, etc.). Lots of options, but Twitter gets paid no matter what.

Note: neither of these options prevents Twitter from also having their own free apps, and their site, and monetizing with ads and/or optional subscriptions there as well. Twitter gets your eyeballs or your money, everywhere. But users get diversity of tools, innovation, and choice.
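The free-with-ads option boils down to one rule: interleave a Twitter ad after every Nth Tweet in the stream. A minimal sketch of that rule (the interval and item shapes are placeholders; the article only specifies “every 50th Tweet is an ad”):

```python
# Sketch of the free-API option: one ad is inserted after every
# AD_INTERVAL tweets. Item shapes are placeholders for illustration.

AD_INTERVAL = 50

def interleave_ads(tweets, ads, interval=AD_INTERVAL):
    """Yield the tweet stream, inserting the next ad after every
    `interval` tweets, until the ads run out."""
    ad_iter = iter(ads)
    for i, tweet in enumerate(tweets, start=1):
        yield tweet
        if i % interval == 0:
            ad = next(ad_iter, None)   # no ads left -> keep streaming
            if ad is not None:
                yield ad

stream = list(interleave_ads([f"tweet {n}" for n in range(1, 101)],
                             ["ad A", "ad B"]))
print(len(stream))   # 102  (100 tweets + 2 ads)
print(stream[50])    # ad A (right after the 50th tweet)
```

A client consuming the free API would be contractually required to render these interleaved ads exactly as delivered, which is what makes the free tier monetizable.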

What’s It Worth To Twitter?

Twitter makes more money from ads in the Free API option.

On the Free API side, Twitter can make the same CPM or CPC per ad, whether it runs on their site or alongside their data in a third-party app. There’s no loss of revenue to Twitter. But there IS an increase in revenue, because more apps being used by more people in more ways means more people see more Tweets per day, which means more ads are seen and clicked. And that equals more money for Twitter.

Twitter makes more money from the premium API option too.

For the apps that opt out of the free API, guess what? Twitter makes at least the same CPM or CPC from them as it would from the ads. Those apps are effectively buying the ad space from Twitter, and Twitter could even charge them a premium for it.

App providers that opt for this option can do what they want: sell more expensive ads to cover the cost, because perhaps they have more exclusive audiences, or show no ads and charge a fee. It’s up to them. Or they can make money in some other way. Twitter wins here too because, again, more apps showing more Tweets to more people in more ways means more data payments to Twitter.

The Power of The Long Tail (Why Twitter Should Not Cut off Its Tail to Save Its Face)

Twitter does not make less money in either case. But they can make more money because more Tweets may be looked at this way. Twitter will never succeed in getting as much attention as a million third party apps can get.

This is the power of the long tail. Twitter can never support all platforms, all contexts, all situations where people might want to interact with Tweets. By cutting off the long tail of third-party apps and clients, and restricting use, they are shutting down all these long-tail distribution opportunities, and that means they miss out on the entire long-tail of their advertising revenue. They are cutting off their tail to save their face. But in the end, the tail is much more valuable than they realize.

The Power of Being an Ad Network Instead of A Media Property

In fact, if Twitter monetizes the APIs as suggested above, they would be an ad network — and they could actually charge more to advertisers depending on which app and site the ads appear in.

For example, some sites and apps in the network could be smaller but more focused or elite, and thus have more value to advertisers. Twitter could make more money by running ads there than in their own walled-garden media property. Look at some walled gardens that failed: AOL, MySpace… I could go on.

Alternative 2 – The Option To Pay For Choice Model

How It Works

Twitter charges an OPTIONAL $3/month subscription fee (or it could be less) to any user who wants to use non-official apps. This buys them a special flag on their account that lets them OAuth it to third-party apps and services.

Users who don’t care about choice don’t have to pay this fee — they can still use Twitter’s official apps for free. But the ones who care about choice can pay.

NOTE: The optional fee is a LOT less than people pay for cable every month. The segment of people who would want this would have no problem paying for it. Think of this as a special kind of membership dues for users who want “Twitter Pro.” Power users, or anyone who wants choice can pay for it. And it’s still cheaper than a fully loaded coffee at Starbucks.

What’s it Worth To Twitter?

Twitter has around 150 million active users today. Let’s say 10% of them pay for this special feature. That’s 15 million people * $3/month. Twitter suddenly makes $45 million/month from this. That’s $540 million a year, folks! And as active users grow, they could soon make a billion or more a year this way. That’s at least as good as advertising, frankly, if not better. Getting half a billion dollars as a “tax payment” for doing nothing doesn’t suck either.
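The arithmetic checks out; here it is spelled out with the figures from the paragraph above:

```python
# Verifying the "Twitter Pro" back-of-envelope numbers from the text.

active_users  = 150_000_000   # ~150M active users
adoption      = 0.10          # 10% opt in to the paid flag
fee_per_month = 3             # $3/month

subscribers = active_users * adoption          # 15,000,000 people
monthly     = subscribers * fee_per_month      # $45,000,000 per month
annual      = monthly * 12                     # $540,000,000 per year

print(f"${monthly:,.0f}/month -> ${annual:,.0f}/year")
```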

Oh and guess what, third party apps can STILL be required to either carry ads or pay to opt out of them as per Alternative 1, above. So Twitter can double dip here – they can make money from these “Twitter Pro” fees PLUS still make money on ads or API fees from those users when they are outside Twitter.

There are probably several other alternatives beyond the two outlined above, but this should be enough to make it clear that Twitter can make more money by being open than by being closed.

Therefore it seems that the management at Twitter must have hired the wrong management consulting firm to tell them what to do. Closing the API makes no logical sense at all. It also doesn’t make business sense.

What the Money Says

Forget about what’s right for developers or whether Twitter is evil or not. It also doesn’t really matter whether Twitter’s apps are better or worse than third-party apps.

“All that matters is the money” (to quote Shark Tank).

And the money says, closing the APIs is sub-optimal. You can make more money by opening them with a good monetization model.

To me what is the most disturbing thing about Twitter’s API policies, and the press coverage of the changes, and even many Tweets by industry people who should know better, is that they don’t seem to see the overwhelming financial benefits of keeping the APIs open.

Only someone who has no understanding of power laws, the long tail, and network effects, or of how the Twitter ecosystem actually functions, would shut down the APIs completely. It simply makes no sense. It’s bad business. From a purely capitalist perspective, it’s not just leaving money on the table, it’s putting it in the shredder.

That’s why I doubt Twitter will shut down the APIs completely – they are smart people after all.

But what worries me is that they might be missing the value of the long tail here. Instead of shutting down third-party clients and trying to own all the consumer eyeballs, Twitter can make far more money leveraging them into a larger surface area that they control and monetize.

Justifying a Higher Valuation for Twitter

Twitter’s valuation is high – possibly too high. How are they going to justify that? Well for one thing, making a smaller business that is ultimately less important to the world is not going to help. But that’s actually what the result of becoming a walled garden usually is. Walled gardens are never as big or as valuable as global networks or infrastructures that everything depends on.

Instead of closing the walls, if Twitter stays open, but monetizes the openness, they could actually justify obscene valuations eventually.

Ultimately, by being a global network woven into millions of apps, sites, products and services, Twitter will be valued far more highly than any walled garden. Why? Because they would be playing a central infrastructure role in the entire economy, instead of just being a single walled garden.

The future market cap of the company will ultimately be orders of magnitude greater if they are stewards of the open nervous system of the planet (with the exclusive right to charge everyone tolls) than if they are the next MySpace trying to sell ads on their own pages and apps. It’s really that simple.

Twitter is a network, not an app. It’s all about the data. The data is the ultimate viral vector for Ads. Not pages. Not apps. It’s about the data!

Open Does Not Equal Out of Control

One more point — I keep hearing people say that somehow closing the APIs is necessary to maintain “control.” Wrong. Twitter gives up no control by opening up the APIs and monetizing them. That is a complete fallacy.

They can enforce usage of their open API and data. They can require authentication, they can even approve apps, and they can monetize everything. They can shut off apps that don’t display their ads. Having an open API doesn’t mean you can’t stop violators.

Conclusions

Twitter gives up nothing really by being open, and they make MORE money. And more money means they are MORE independent, MORE powerful. MORE liquid.

Now maybe “I am constrained by logic.” But if I were a board member of Twitter, or an investor, I would want the company to run the numbers and be logical too. I have yet to hear a cogent argument that convinces me that closing the APIs makes more money than keeping them open in the scenarios above. Instead of all the hype and fury, run the numbers. Then let’s discuss it rationally.

Bottlenose Beat Bit.ly to the First Attention Engine – But It’s Going to Get Interesting
http://www.novaspivack.com/uncategorized/bottlenose-beat-bit-ly-to-the-first-attention-engine-but-its-going-to-be-interesting
Sat, 28 Jul 2012 04:04:52 +0000

Bottlenose (disclosure: my startup) just launched the first attention engine this week.

It’s going to get interesting to watch this category develop. Clearly there is new interest in building a good real-time picture of what’s happening, and what’s trending, and providing search, discovery, and insights around that.

I believe Bottlenose has the most sophisticated map of attention today, and we have very deep intellectual property across 8 pending patents and a very advanced technology stack behind it as well. And we have some pretty compelling user-experiences on top of it all. So in short, we have a lead here on many levels. (Read more about that here)

But that might not even matter, because I think ultimately Bit.ly will be a potential partner for Bottlenose rather than a long-term competitor — at least if they stay true to their roots and DNA as a data provider rather than a user-experience provider. I doubt that Bit.ly will succeed in making a search destination that consumers will use, and I’m guessing that is not really their goal.

In testing their Realtime service, my impression is that it feels more like a Web 1.0 search engine. Static search results for advanced search style queries. I don’t see that as a consumer experience.

Bottlenose on the other hand, goes way into a consumer UX, with live photos, newspapers, topic portals, a dashboard, etc. It is also a more dynamic, always changing, realtime content consumption destination. Bottlenose feels like media, not merely search (in fact I think search, news and analytics are actually converging in the social network era).

Bottlenose has a huge emphasis on discovery, analytics, and other further actions on the content that go beyond just search.

I think in the end Bit.ly’s Realtime site will really demonstrate the power of their data, which will still mainly be consumed via their API rather than in their own destination. I’m hopeful that Bit.ly will do just that. It would be useful to everyone, including Bottlenose.

The Threat to Third-Party URL Shorteners

If I were Bit.ly, my primary fear today would be Twitter with their t.co shortener. That is a big threat to Bit.ly and will probably result in Bit.ly losing a lot of their data input over time as more Tweets have t.co links on them than Bit.ly links.

Perhaps Bit.ly is attempting to pivot their business to the user-experience side in advance of such a threat reducing their data set, and thus the value of their API. But without their data set I don’t see where they would get the data to measure the present, so as a pivot it would not work.

In other words, if people are not using as many Bit.ly links in the future, Bit.ly will see less attention. And trends point to this happening in fact — Twitter has their own shortener. So does Facebook. So does Google. Third-party shorteners will probably represent a decreasing share of messages and attention over time.

I think the core challenge for Bit.ly is to find a reason for their short URLs to be used instead of native app short URLs. Can they add more value to them somehow? Could they perhaps build in monetization opportunities for parties who use their shortener, for example? Or could they provide better analytics than Twitter or Facebook or Google will on short URL uptake (Bit.ly arguably does, today).

Bottlenose and Bit.ly Realtime: Compared and Contrasted

In any case there are a few similarities between what Bit.ly may be launching and what Bottlenose provides today.

But there are far more differences.

These products only partially intersect. Most of what Bottlenose does has no equivalent in Bit.ly Realtime. Similarly much of what Bit.ly actually does (outside of their Realtime experiment) is different from what Bottlenose does.

It is also worth mentioning that Bit.ly’s “Realtime” app is a Bit.ly “labs” project and is not their central focus, whereas at Bottlenose it is 100% of what we do. Mapping the present is our core focus.

There is also a big difference in business model. Bottlenose does map the present in high fidelity, but has no plans currently to provide a competing shortening API, or an API about short URLs, like Bit.ly presently does. So currently we are not competitors.

Also, where Bit.ly currently has a broader and larger data set, Bottlenose has created a more cutting-edge and compelling user-experience and has spent more time on a new kind of computing architecture as well.

We have actually built what I think is the most advanced engine and architecture on the planet for mapping attention in real-time today.

The deep semantics and analytics we compute in realtime are very expensive to compute centrally. Rather than compute everything in the center we compute everywhere; everyone who uses Bottlenose helps us to map the present.

Our StreamOS engine is in fact a small (just a few megabytes) Javascript and HTML5 app (the size of a photo) that runs in the browser or device of each user. Almost all the computing and analytics that Bottlenose does happens in the browser at the edge.

We have very low centralized costs. This approach scales better, faster, and more cheaply than any centralized approach can. The crowd literally IS our computer. It’s the Holy Grail of distributed real-time indexing.
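The gist of this architecture can be sketched in a few lines: each client reduces the raw messages it sees to a compact summary, and the center only merges summaries. The function names and word-count reduction below are illustrative stand-ins, not Bottlenose’s actual StreamOS code.

```python
# Hedged sketch of edge-computed trend detection: clients do the heavy
# per-message work; the server only merges cheap summaries.

from collections import Counter

def edge_summarize(messages):
    """Runs on each client: reduce raw messages to term counts."""
    counts = Counter()
    for msg in messages:
        counts.update(word.lower() for word in msg.split())
    return counts

def merge_summaries(summaries):
    """Runs centrally: merging counters is far cheaper than parsing
    every raw message in the data center."""
    total = Counter()
    for s in summaries:
        total.update(s)
    return total

client_a = edge_summarize(["olympics opening ceremony", "olympics gold"])
client_b = edge_summarize(["gold medal count"])
trends = merge_summaries([client_a, client_b])
print(trends.most_common(2))   # [('olympics', 2), ('gold', 2)]
```

Because the per-message analysis happens at the edge, the central cost grows with the number of distinct terms rather than the raw message volume, which is why this kind of design scales cheaply.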

We also see a broader set of data than Bit.ly does. We don’t only see content that has a Bit.ly URL on it. We see all kinds of messages moving through social media — with other short URLs, and even without URLs.

We see Bit.ly URLs, but we also see data that is outside of the Bit.ly universe. I think ultimately it’s more valuable to see all the trends across all data sources, and even content that contains no URLs at all (Bottlenose analyzes all kinds of messages for example, not just messages that contain URLs, let alone just Bit.ly URLs).

Finally, the use-cases for Bottlenose go far beyond just search, or just news reading and news discovery.

We have all kinds of brands and enterprises actually using our Bottlenose Dashboard product, for example, for social listening, analytics and discovery. I don’t see Bit.ly going as deeply into that as us.

For these reasons I’m optimistic that Bottlenose (and everyone else) will benefit from what Bit.ly may be launching — particularly via their API, if they make their attention data available as an additional signal.

A New Window Into the Collective Consciousness

Bottlenose offers a new window into what the world is paying attention to right now, globally and locally.

We show you a live streaming view of what the crowd is thinking, sharing and talking about. We bring you trends, as they happen. That means the photos, videos and messages that matter most. That means suggested reading, and visualizations that cut through the clutter.

The center of online attention and gravity has shifted from the Web to social networks like Twitter, Facebook and Google+. Bottlenose operates across all of them, in one place, and provides an integrated view of what’s happening.

The media also attempts to provide a reflection of what’s happening in the world, but the media is slow, and it’s not always objective. Bottlenose doesn’t replace the media — at least not the role of the writer. But it might do a better job of editing or curating in some cases, because it objectively measures the crowd — we don’t decide what to feature, we don’t decide what leads. The crowd does.

Other services in the past, like Digg for example, have helped pioneer this approach. But we’ve taken it further — in Digg people had to manually vote. In Bottlenose we simply measure what people say, and what they share, on public social networks.

Bottlenose is the best tool for people who want to be in the know, and the first to know. Bottlenose brings a new awareness of what’s trending online, and in the world, and how those trends impact us all.

We’ve made the Bottlenose home page into a simple Google-like query field, and nothing more. Results pages drop you into the app itself for further exploration and filtering. Except you don’t just get a long list of results, the way you do on Google.

Instead, you get an at-a-glance start page, a full-fledged newspaper, a beautiful photo gallery, a lean-back home theater, a visual map of the surrounding terrain, a police scanner, and Sonar — an off-road vehicle so that you can drive around and see what’s trending in networks as you please. We’ve made the conversation visual.

Each of these individual experiences is an app on top of the Bottlenose StreamOS platform, and each is a unique way of looking at sets and subsets of streams. You can switch between views effortlessly, and you can save anything for persistent use.

Discovery, we’ve found from user behavior, has been the entry point and the connective tissue for the rest of the Bottlenose experience all along. Our users have been asking for a better discovery experience, just as Twitter users have been asking for the same.

The new stuff you’ll see today has been one of the most difficult pieces for us to build computer-science-wise. It is a true technical achievement by our engineering team.

In many ways it’s also what we’ve been working towards all along. We’re really close now to the vision we held for Bottlenose at the very beginning, and the product we knew we’d achieve over time.

The Theory Behind It: How to Build a Smarter Global Brain

If Twitter, Facebook, Google+ and other social networks are the conduits for what the planet is thinking, then Bottlenose is a map of what the planet is actually paying attention to right now. Our mission is to “organize the world’s attention.” And ultimately I think that by doing this we can help make the world a smarter place. At the end of the day, that’s what gets me excited in life.

After many years of thinking about this, I’ve come to the conclusion that the key to higher levels of collective intelligence is not making each person smarter, and it’s not some kind of Queen Bee machine up in the sky that tells us all what to do and runs the human hive. It’s not some fancy kind of groupware either. And it’s not the total loss of individuality into a Borg-like collective either.

I think that better collective intelligence really comes down to enabling better collective consciousness. The more conscious we can be of who we are collectively, and what we think, and what we are doing, the smarter we can actually be together, of our own free will, as individuals. This is a bottom-up approach to collective consciousness.

So how might we make this happen?

For the moment, let’s not try to figure out what consciousness really is. We don’t know, and we probably never will, but for this adventure we don’t need to. We don’t even need to synthesize it.

Collective consciousness is not a new form of consciousness, rather, it’s a new way to channel the consciousness that’s already there – in us. All we need to do is find a better way to organize it… or rather, to enable it to self-organize emergently.

What does consciousness actually do anyway?

Consciousness senses the internal and external world, and maintains a model of what it finds — a model of the state of the internal and external world that also contains a very rich model of “self” within it.

This self construct has an identity, thoughts, beliefs, emotions, feelings, goals, priorities, and a focus of attention.

If you look for it, it turns out there isn’t actually anything there you can find except information — the “self” is really just a complex information construct.

This “self” is not really who we are, it’s just a construct, a thought really — and it’s not consciousness either. Whatever is aware is aware of the self, so the self is just a construct like any other object of thought.

So given that this “self” is a conceptual object, not some mystical thing that we can’t ever understand, we should be able to model it, and make something that simulates it. And in fact we can.

We can already do this for artificially intelligent computer programs and robots in a primitive way in fact.

But what’s really interesting to me is that we can also do it for large groups of people too. This is a big paradigm shift – a leap. Something revolutionary really. If we can do it.

But how could we provide something like a self for groups, or for the planet as a whole? What would it be like?

Actually, there is already a pretty good proxy for this and it’s been around for a long time. It’s the media.

The Media is a Mirror

The media senses who we are and what we’re doing and it builds a representation — a mirror – in the form of reports, photos, articles, and stats about the state of the world. The media reflects who we are back to us. Or at least it reflects who it thinks we are…

It turns out it’s not a very accurate mirror. But since we don’t have anything better, most of us believe what we see in the media and internalize it as truth.

Even if we try not to, it’s just impossible to avoid the media that bombards us from everywhere all the time. Nobody is really separate from this; we’re all stewing in a media soup, whether we like it or not.

And when we look at the media and we see stories – stories about the world, about people we know, people we don’t know, places we live in, and other places, and events — we can’t help but absorb them. We don’t have firsthand knowledge of those things, and so we take on faith what the media shows us.

We form our own internal stories that correspond to the stories we see in the media. And then, based on all these stories, we form beliefs about the world, ourselves and other people – and then those beliefs shape our behavior.

And there’s the rub. If the media gives us an inaccurate picture of reality, or a partially accurate one, and then we internalize it, it then conditions our actions. And so our actions are based on incomplete or incorrect information. How can we make good decisions if we don’t have good information to base them on?

The media used to be about objective reporting, and there are still those in the business who continue that tradition. But real journalists — the kind who would literally give their lives for the truth — are fewer and fewer. The noble art of journalism is falling prey, like everything else, to commercial interests.

There are still lots of great journalists and editors, but there are fewer and fewer great media companies. And fewer rules and standards too. To compete in today’s media mix it seems they have to stoop to the level of the lowest common denominator and there’s always a new low to achieve when you take that path.

Because the media is driven by profit, stories that get eyeballs get prioritized, and the less sensational but often more statistically representative stories don’t get written, or don’t make it onto the front page. There is even a saying in the TV news biz that “If it bleeds, it leads.”

Look at the news — it’s just filled with horrors. But that’s not an accurate depiction of the world. Crimes, for example, don’t happen all the time, everywhere, to everyone – they are statistically quite unlikely and rare – yet so much news is devoted to them. It’s not an accurate portrayal of what’s really happening for most people, most of the time.

I’m not saying the news shouldn’t report crime, or show scary bad things. I’m just pointing out that the news is increasingly about sensationalism, fear, doubt, uncertainty, violence, hatred, crime, and that is not the whole truth. But it sells.

The problem is not that these things are reported — I am not advocating for censorship in any way. The problem is the media game, and the profit motives that drive it. Media companies have to compete to survive, and that means they have to play hardball and get dirty.

Unfortunately the result is that the media shows us stories that do not really reflect the world we live in, or who we are, or what we think, accurately – these stories increasingly reflect the extremes, not the enormous middle of the bell curve.

But since the media functions as our de facto collective consciousness, and it’s filled with these images and stories, we cannot help but absorb them and believe them, and become like them.

But what if we could provide a new form of media, a more accurate reflection of the world, of who we are and what we are doing and thinking? A more democratic process, where anyone could participate and report on what they see.

What if in this new form of media ALL the stories are there, not just some of them, and they compete for attention on a level playing field?

And what if all the stories can compete and spread on their merits, not because some professional editor, or publisher, or advertiser says they should or should not be published?

Yes this is possible.

It’s happening now.

It’s social media in fact.

But for social media to really do a better job than the mainstream media, we need a way to organize and reflect it back to people at a higher level.

That’s where curation comes in. But manual curation is just not scalable to the vast number of messages flowing through social networks. It has to be automated, yet not lose its human element.

That’s what Bottlenose is doing, essentially.

Making a Better Mirror

To provide a better form of collective consciousness, you need a measurement system that can measure and reflect what people are REALLY thinking about and paying attention to in real-time.

It has to take a big data approach – it has to be about measurement. Let the opinions come from the people, not editors.

This new media has to be as free of bias as possible. It should simply measure and reflect collective attention. It should report the sentiment that is actually there, in people’s messages and posts.

Before the Internet and social networks, this was just not possible. But today we can actually attempt it. And that is what we’re doing with Bottlenose.

But this is just a first step. We’re dipping our toe in the water here. What we’re doing with Bottlenose today is only the beginning of this process. And I think it will look primitive compared to what we may evolve in years to come. Still it’s a start.

You can call this approach mass-scale social media listening and analytics, or trend detection, or social search and discovery. But it’s also a new form of media, or rather a new form of curating the media and reflecting the world back to people.

Bottlenose measures what the crowd is thinking, reading, looking at, feeling and doing in real-time, and coalesces what’s happening across social networks into a living map of the collective consciousness that anyone can understand. It’s a living map of the global brain.

Bottlenose wants to be the closest you can get to the Now, to being in the zone, in the moment. The Now is where everything actually happens. It’s the most important time period in fact. And our civilization is increasingly now-centric, for better or for worse.

Web search feels too much like research. It’s about the past, not the present. You’re looking for something lost, or old, or already finished — fleeting. Web search only finds Web pages, and the Web is slow… it takes time to make pages, and time for them to be found by search engines.

On the other hand, discovery in Bottlenose is about the present — it’s not research, it’s discovery. It’s not about memory, it’s about consciousness.

It’s more like media — a live, flowing view of what the world is actually paying attention to now, around any topic.

Collective intelligence is theoretically made more possible by real-time protocols like Twitter. But in practice, keeping up with existing social networks has become a chore, and not drowning is a real concern. Raw data is not consciousness. It’s noise. And that’s why we so often feel overwhelmed by social media, instead of emboldened by it.

But what if you could flip the signal-to-noise ratio? What if social media could be more like actual media … meaning it would be more digestible, curated, organized, consumable?

What if you could have an experience that is built on following your intuition, and living this large-scale world to the fullest?

What if this could make groups smarter as they get larger, instead of dumber?

Why does group IQ so often seem inversely proportional to group size? The larger groups get, the dumber and more dysfunctional they become. This has been a fundamental obstacle for humanity for millennia.

Why can’t groups (including communities, enterprises, even whole societies) get smarter as they get larger instead of dumber? Isn’t it time we evolve past this problem? Isn’t this really what the promise of the Internet and social media is all about? I think so.

And what if there was a form of media that could help you react faster, and smarter, to what is going on around you as it happens, just like in real life?

And what if it could even deliver on the compelling original vision of cyberspace as a place you could see and travel through?

What about getting back to the visceral, the physical?

Consciousness is interpretive, dynamic, and self-reflective. Social media should be too.

This is the fundamental idea I have been working on in various ways for almost a decade. As I have written many times, the global brain is about to wake up and I want to help.

By giving the world a better self-representation of what it is paying attention to right now, we are trying to increase the clock rate and resolution of collective consciousness.

By making this reflection more accurate, richer, and faster, and then making it available to everyone, we may help catalyze the evolution of higher levels of collective intelligence.

All you really need is a better mirror. A mirror big enough for large groups of people to look into and see what they are collectively paying attention to in it, together. By providing groups with a clearer picture of their own state and activity, they can adapt to themselves more intelligently.

Everyone looks in the collective mirror and adjusts their own behavior independently — there is no top-down control — but you get emergent self-organizing intelligent collective behavior as a result. The system as a whole gets smarter. So the better the mirror, the smarter we become, individually and collectively.

If the mirror is really fast, really good, really high res, and really accurate and objective – it can give groups an extremely important, missing piece: Collective consciousness that everyone can share.

We need collective consciousness that exists outside of any one person, outside of any one perspective or organization’s agenda, and is not merely in the parts (the individuals) either. Instead, this new level of collective consciousness should be coalesced into a new place, a new layer, where it exists independently of the parts.

It’s not merely the sum of the parts, it’s actually greater than the sum – it’s a new level, a new layer, with new information in it. It’s a new whole that transcends just the parts on their own. That’s the big missing piece that will make this planet smarter, I think.

We need this yesterday. Why? Because in fact collectives — groups, communities, organizations, nations — are the units of change on this planet. Not individuals.

Collectives make decisions, and usually these decisions are sub-optimal. That’s dangerous. Most of the problems we’ve faced and continue to face as a species come down to large groups doing stupid things, mainly due to not having accurate information about the world or themselves. This is, ultimately, an engineering problem.

We should fix this, if we can.

I believe that the Internet is an evolving planetary nervous system, and it’s here to make us smarter. But it’s going to take time. Today it’s not very smart. But it’s evolving fast.

Higher layers of knowledge and intelligence are emerging in this medium, like higher layers of the cerebral cortex, connecting everything together ever more intelligently.

And we want to help make it even smarter, even faster, by providing something that functions like self-consciousness to it.

Now I don’t claim that what we’re making with Bottlenose is the same as actual consciousness — real consciousness is, in my opinion a cosmic mystery like the origin of space and time. We’ll probably never understand it. I hope we never do. Because I want there to be mystery and wonder in life. I’m confident there always will be.

But I think we can enable something on a collective scale, that is at least similar, functionally, to the role of self-consciousness in the brain — something that reflects our own state back to us as a whole all the time.

After all, the brain is a massive collective of hundreds of billions of neurons and trillions of connections that themselves are not conscious or even intelligent – and yet it forms a collective self and reacts to itself intelligently.

And this feedback loop – and the quality of the reflection it is based on – is really the key to collective intelligence, in the brain, and for organizations and the planet.

Collective intelligence is an emergent phenomenon; it’s not something to program or control. All you need to do to enable it and make it smarter is give groups and communities better quality feedback about themselves. Then they get smarter on their own, simply by reacting to that feedback.

Collective intelligence and collective consciousness are, at the end of the day, a feedback loop. And we’re trying to make that feedback loop better.

Bottlenose is a new way to curate the media, a new form of media in which anyone can participate but the crowd is the editor. It’s truly social media.

This is an exciting idea to me. It’s what I think social media is for and how it could really help us.

Until now people have had only the mainstream, top-down, profit-driven media to look to. But by simply measuring everything that flows through social networks in real time, and reflecting a high-level view of that back to everyone, it’s possible to evolve a better form of media.

It’s time for a bottom-up, collectively written and curated form of media that more accurately and inclusively reflects us to ourselves.

Concluding Thoughts

I think Bottlenose has the potential to become the giant cultural mirror we need.

Instead of editors and media empires sourcing and deciding what leads, the crowd is the editor, the crowd is the camera crew, and the crowd decides what’s important. Bottlenose simply measures the crowd and reflects it back to itself.

When you look into this real-time cultural mirror that is Bottlenose, you can see what the community around any topic is actually paying attention to right now. And I believe that as we improve it, and if it becomes widely used, it could facilitate smarter collective intelligence on a broader scale.

The world now operates at a ferocious pace and search engines are not keeping up. We’re proud to be launching a truly present-tense experience. Social messages are the best indicators today of what’s actually important, on the Web, and in the world.

We hope to show you an endlessly interesting, live train of global thought. The first evolution of the Stream has run its course and now it’s time to start making sense of it on a higher level. It’s time to start making it smart.

With the new Bottlenose, you can see, and be a part of, the world’s collective mind in a new and smarter way. That is ultimately why Bottlenose is worth participating in.

Bottlenose – The Now Engine – The Web’s Collective Consciousness Just Got Smarter
http://www.novaspivack.com/uncategorized/bottlenose-the-now-engine
Tue, 24 Jul 2012 03:51:08 +0000

Recently, one of Twitter’s top search engineers tweeted that Twitter was set to “change search forever.” This proclamation sparked a hearty round of speculation and excitement about what was coming down the pipe for Twitter search.

The actual announcement featured the introduction of autocomplete and the ability to search within the subset of people on Twitter that you follow — both long-anticipated features.

However, while certainly a technical accomplishment (Twitter operates at a huge scale, and building these features must have been very difficult), this was an iterative improvement to search… an evolution, not a revolution.

Today I’m proud to announce something that I think could actually be revolutionary.

And here’s the video….

My CTO/Co-founder, Dominiek ter Heide, and I have been working for 2 years on an engine for making sense of social media. It’s called Bottlenose, and we started with a smart social dashboard.

Now we’re launching the second stage of our mission “to organize the world’s attention” — a new layer of Bottlenose that provides a live discovery portal for the social web.

This new service measures the collective consciousness in real-time and shows you what the crowd is actually paying attention to now, about any topic, person, brand, place, event… anything.

If the crowd is thinking about it, we see it. It’s a new way to see what’s important in the world, right now.

This discovery engine, combined with our existing dashboard, provides a comprehensive solution for discovering what’s happening, and then keeping up with it over time.

Together, these two tools not only help you stay current, they provide compelling and deep insights about real-time trends, influencers, and emerging conversations.

All of this goes into public beta today.

An Amazing Team

I am very proud of what we are launching today, in many ways — while still just a step on a longer journey — it is the culmination of an idea I’ve been working on, thinking about, dreaming of… for decades… and I’d love you to give it a spin.

And I’m proud of my amazing technical team — they are the most talented technical team I’ve ever worked with in my more than 20 years in this field.

I have never seen such a small team deliver so much, so well. And Bottlenose is them – it is their creation and their brilliance that has made this possible. I am really so thankful to be working with this crew.

Welcome to the Bottlenose Public Beta

So what is Bottlenose anyway?

It is a real-time view of what’s actually important across all the major social networks — the first of its kind — what you might call a “now engine.”

This new service is not about information retrieval. It’s about information awareness. It’s not search, it’s discovery.

We don’t index the past, we map the present. That’s why I think it’s better to call it a discovery engine than a search engine. Search implies research towards a specific desired answer, whereas discovery implies exploration and curiosity.

We measure what the crowd is paying attention to now, and we build a living, constantly learning and evolving, map of the present.

Twitter has always encouraged innovation around their data, and that innovation is really what has fueled their rapid growth and adoption. We’ve taken them at their word and innovated.

We think that what we have built adds tremendous value to the ecosystem and to Twitter.

But while Twitter data is certainly very important and high volume, Bottlenose is not just about Twitter… we integrate the other leading social networks too: Facebook, LinkedIn, Google+, YouTube, Flickr, and even networks whose data flows through them, like Pinterest and Instagram. And we see RSS too.

We provide a very broad view of what’s happening across the social web — a view that is not available anywhere else.

Bottlenose is what you’d build if you got the chance to start over and work on the problem from scratch — a new and comprehensive vision for how to make sense of what’s happening across and within social networks.

We think it could be for the social web what Google was for the Web. Ok that’s a bold statement – and perhaps it’s wishful thinking – but we’re at least off to a good start here and we’re pushing the envelope farther than it has ever been pushed. Try it!

Oh and one more thing, why the name? We chose it because dolphins are smart, they’re social, they hunt in pods, they have sonar. We chose the name as an homage to their bright and optimistic social intelligence. We felt it was a good metaphor for how we want to help people surf the Stream.

Thanks for reading this post, and thanks for your support. If you have a few moments to spare today, we’d love it if you gave Bottlenose a try. And remember, it’s still a beta.

Note: It’s Still a Beta!

Before I get too deep into the tech and all the possibilities and potential I see in Bottlenose, I first want to make it very clear that this is a BETA.

We’re still testing, tuning, adding stuff, fixing bugs, and most of all learning from our users.

There will be bugs and things to improve. We know. We’re listening. We’re on it. And we really appreciate your help and feedback as we continue to work on this.

A New Approach to Artificial Intelligence: Non-Computational AI
http://www.novaspivack.com/science/a-new-approach-to-artificial-intelligence-non-computational-ai
Sat, 05 May 2012 05:33:20 +0000

I was recently contacted by a computer scientist, Sergey Bulanov, who has been working quietly for 20 years on a new approach to artificial intelligence. It’s a pretty interesting and novel approach, and I would like to see what others think about it.

From what I understand, the essence of Sergey’s approach is a new form of computer reasoning that implements “non-computational” networks of logical operations to solve problems.

It is “non-computational” in the sense that it is not an expert system or traditional computer program — rather it is a network of simple operators that compute locally and interact with one another, to emergently arrive at results, reflected by an overall state of the system at the end of the process. This approach reminds me of “connectionist” approaches to AI, such as neural networks and cellular automata.

Sergey believes that his approach could be an important step towards making truly humanlike artificial intelligence in the future. His point is that the brain is a non-computational system, and might in fact use some of these principles.

Sergey calls his approach “Artificial Consciousness,” but I don’t think the word “consciousness” adds value here – and it may even distract from the core idea. But, for the moment, let’s not argue about terminology — his theory is very interesting.

I can’t explain it very well, so here is Sergey’s explanation to me, from our correspondence (please note, he is not a native English speaker, so I have added some corrections to his letter to improve readability):

1.

I consider the present version of the system, which only solves logical tasks, not to be a truly “intelligent” system. This system is only a starting point for my investigations. It only looks intelligent because it solves tasks that are hard for people. The idea for how to solve logical problems in this way came to me accidentally while thinking about the book The Lady or the Tiger? by Raymond Smullyan. In my classification of AI, a system for solving logical puzzles is a low-complexity system (according to my theory). This present version of the system is just a step along the way towards more sophisticated AI.

2.

Despite my low valuation of logic-solving systems for practical use, such systems can at least be amusing for people. And such a system can be a starting point for thinking about more sophisticated “non-computational” systems. The theory of such systems is well developed for the computational case, where it is known as SAT solving (the Boolean satisfiability problem).

The essence of the problem is as follows. Suppose we have a logical expression (in our case, one that reflects the statement of a puzzle), and we require that the expression evaluate to “TRUE” (i.e., that the formulation of the puzzle is true). Then we must find an assignment of arguments that satisfies the expression (makes it “TRUE”). This problem is NP-complete: in the worst case it requires full enumeration of all possible arguments. The SAT approach aims to reduce the number of assignments that must be enumerated. The methods of SAT are well developed, but I did not know about them at the beginning of my work. Moreover, from the beginning I set out to create a non-computational approach.
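As an illustration (mine, not Sergey’s), here is what the worst case he mentions looks like: a brute-force SAT check simply enumerates all 2^n truth assignments until one satisfies the expression. The formula and variable names are hypothetical toy examples.

```python
from itertools import product

def brute_force_sat(formula, n_vars):
    """Enumerate all 2^n truth assignments and return the first one
    that makes the formula TRUE, or None. Worst case: full enumeration."""
    for assignment in product([False, True], repeat=n_vars):
        if formula(assignment):
            return assignment
    return None

# A toy "puzzle": (a OR b) AND (NOT a OR c) AND (NOT b OR NOT c)
f = lambda v: (v[0] or v[1]) and (not v[0] or v[2]) and (not v[1] or not v[2])
print(brute_force_sat(f, 3))  # → (False, True, False)
```

Modern SAT solvers avoid much of this enumeration through propagation and learned clauses, which is closer in spirit to the local-propagation approach Sergey describes next.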

3.

My idea was very simple. Assume we have a logical function, “AND,” with two arguments. This function outputs “TRUE” only when both of its arguments are “TRUE.” So if we know the value of the function’s output, we can often (though not always) infer the values of its inputs.

The formulation of the puzzle is expressed as a logical expression, represented in the form of a (mathematical) tree. You can see this tree in the video on my website. The nodes of the tree are logical functions (AND, OR, and some other types), represented as balls in the video. Each ball has one output link and several input links. The state of a function can be TRUE (red ball), FALSE (blue ball), or UNKNOWN (grey ball). At the start, the tree has some nodes with predetermined initial values (according to the formulation of the puzzle). These values are assigned not only at the top or bottom of the tree, but also in the middle of it.

After the system starts, each ball (each logical function, i.e. each node) can set the states of adjacent nodes, and each ball continuously corrects its own state depending on the states of nearby balls. For example, if a ball carries an AND function with three inputs (three arguments) and the ball above it sends it the value “TRUE,” then this ball will assign the value “TRUE” to each of its three inputs. In this way information propagates through the tree until a steady state is reached.

This information can keep changing until a steady state is reached, asynchronously and even without clocking (though I have not proved this). By the theory of NP-completeness, a solution cannot be reached unconditionally (unlike solving linear or differential equations). After some time the system reaches an unresolvable state, and further iterations are needed to reach the complete solution. The system can be knocked out of each unresolvable state by assuming a hypothesis on one of the unresolved balls. It may then reach a global contradiction or a global solution. If it reaches neither, we must add another hypothesis on one of the remaining balls. In case of a contradiction, we must change one of the hypotheses (typically the last one).

So the system can reach the solution (or the set of solutions) through the iterations between hypothesis assignments. This solving can be achieved without an explicit algorithm, on a non-computational structure, thousands or even millions of times faster than on computational devices.
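The hypothesis-and-contradiction loop described above closely resembles the branching step of a DPLL-style satisfiability search: propagate forced values until a steady state, guess a value for an unresolved variable, and on contradiction retract the guess and try its opposite. A minimal Python sketch under that reading (the signed-literal clause representation and the function names are my own assumptions, not part of the original system):

```python
def propagate(assignment, clauses):
    """Repeatedly apply forced ('unit') assignments until a steady state.
    Literals are signed ints: +v means variable v is TRUE, -v means FALSE.
    Returns the steady assignment, or None on a global contradiction."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif val == (lit > 0):
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None               # global contradiction
            if len(unassigned) == 1:      # only one way left to satisfy it
                forced = unassigned[0]
                assignment[abs(forced)] = forced > 0
                changed = True
    return assignment

def solve(clauses, assignment=None):
    """Propagate; if still unresolved, assume a hypothesis on one variable
    and, on contradiction, retract it and try the opposite value."""
    assignment = dict(assignment or {})
    if propagate(assignment, clauses) is None:
        return None
    unknown = [abs(l) for c in clauses for l in c
               if assignment.get(abs(l)) is None]
    if not unknown:
        return assignment                 # global solution
    v = unknown[0]
    for guess in (True, False):           # the hypothesis, then its opposite
        result = solve(clauses, {**assignment, v: guess})
        if result is not None:
            return result
    return None
```

For example, for the clauses (x1 OR x2) AND (NOT x1), propagation alone forces x1 = FALSE and then x2 = TRUE, with no hypothesis needed.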

4.

These results appear to be unusual and promising for the AI domain. Their importance lies in demonstrating the possibility of non-computational solving of complicated tasks. I hope this system can attract people's attention to developing non-computational cognitive systems millions of times more powerful than the human brain.

But unfortunately, this kind of system is not yet a true AI system. Below are some explanations of why.

5.

A full AI system cannot be built on a traditional (simple) logical basis. The system presented on our website can solve some kinds of logical tasks, but it cannot discuss these tasks with humans, it cannot explain how it solved them, it cannot (and never could) understand natural written text, and it cannot perform most of the human brain's functions. One of the most fundamental reasons is that a network of logical functions, as I represent it, can only solve logical tasks; it cannot grow by its own reasoning. There are many reasons to construct an entirely different kind of AI system based on different principles. But creating a more complicated system would be hard without understanding the principles and problems of a simpler one. Logical systems such as mine can be a starting point on the way to more powerful systems that apply my non-computational approach.

6.

I came to the idea that a really powerful system must be based on the idea of mathematical sets. I found a way to create a network based on sets that can grow, and a way for such a network to solve different tasks. The range of these tasks is much greater than just solving mathematical puzzles. I am working on this presently.

7.

My idea of a chain of model tasks is not an engine of the system; it is a method of research. This idea is very close to a statement by the philosopher Bertrand Russell:

“The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it”.

That is my approach. For example, I expressed the idea of logical functions without using logical notions, and in this way I found unusual ideas for my novel system.

Here is another example of my principle. Suppose we take the simplest possible question, so simple that its resolution is almost inevitable. If that resolution is of high quality, its principles can be applied to the next, more complicated question. By moving from simple tasks to more complicated ones, we can develop our theory.

I hope Sergey’s 20 years of thinking in this direction will prove interesting, and perhaps even fruitful, for the field of artificial intelligence. It does appear to me to be a novel and potentially promising vein of innovation.

Best of luck to Sergey and his collaborators. I’m always happy to see really original thinking in the field of AI.

]]>http://www.novaspivack.com/science/a-new-approach-to-artificial-intelligence-non-computational-ai/feed2How I Got Into College (by Doing the Opposite of What I Should Have Done). An Essay.http://www.novaspivack.com/uncategorized/how-i-got-into-college
http://www.novaspivack.com/uncategorized/how-i-got-into-college#commentsFri, 13 Apr 2012 00:50:18 +0000http://www.novaspivack.com/?p=2730Today I had an interesting phone call with an alumnus of my alma mater, Oberlin College. He called me for an informational interview, asking for some career advice. It was a good conversation. At one point, on a tangent, he asked me why I went to Oberlin. It's a funny story, actually.

In fact, I didn’t want to attend Oberlin. It was my absolute last choice; I was forced to apply by my mother. She went to Oberlin and loved it. She said she knew me better than anyone and knew for sure that Oberlin was where I belonged.

But from my perspective, there was no way I was going from Boston to some tiny school in the midwest with no city, no ocean, no tech community, no anything! No frikkin way. I wanted to go to Brown, or NYU, or somewhere “cool” or at least “big.”

Never mind all of that. My mother went there, and it was in Ohio. And it wasn’t Brown University. Those three facts were enough to convince me I didn’t belong there.

I procrastinated until I had sent out all my other applications. But my mother would not leave me alone. So, at the last minute, one evening, in a very rebellious mood, I filled out my Oberlin application in a way that I thought would GUARANTEE that they would not admit me.

I wasn’t going to take my mother’s advice, no matter what. I did my best to write an essay that was the very opposite of what a college application essay should be. It was not serious, well reasoned, carefully written, or intellectually brilliant, and certainly did not demonstrate my desire or qualifications to attend Oberlin. In fact, if anything, I was hoping that Oberlin’s admission staff would read it and cross me right off their list.

But fate or destiny had other plans for me.

Brown University lost my application (I received a belated apology from their admissions department months later).

And to make matters worse, much to my dismay, Oberlin loved my essay.

They called me and told me it was one of the most creative essays they had ever received. They were convinced I really wanted to attend and that my essay was actually a serious attempt to get admitted.

They didn’t believe me when I said that no, in fact, I really didn’t want to go there and that it was my last choice and that I only applied because my mother forced me.

Nothing I said would convince them otherwise. They were sure I was playing an elaborate game with them. They were sure I really wanted to attend, and the more I denied it, the more they thought I was playing with them.

Their admissions director said I was exactly the kind of out-of-the-box thinker they look for. They called again. I said no. So they wrote, they spoke to my mother, and they even offered me a very generous scholarship. It was by far the best offer I got from any college. Ironically, in the end, I just could not say no.

It just goes to show you, everyone wants whoever doesn't want them. Even colleges.

But in hindsight, it turned out that my mother was right about me (as mothers usually are when it comes to their children). Oberlin was the best college I could possibly have gone to. It was the perfect petri dish for an interdisciplinary, intensely curious, anti-authoritarian, free-thinking creative person like myself.

And the fact that there was no city to speak of and nothing at all to do off-campus (you could barely even find coffee off-campus when I attended) contributed to the most active, vibrant, non-boring on-campus community imaginable.

It was an absolute hotbed of thinking, activism, creativity, music, literature, art, science, philosophy, and basically just about everything but sports.

I tried my best to avoid it, and when I applied I tried to disqualify myself, but there was no escaping it. And it turned out that it really was the best place for me in the end; it was where I belonged.

Sometimes life works that way. What’s best for you is sometimes the opposite of what you think or want. And sometimes, when you are stubbornly certain that you know what’s best for you — just don’t listen to yourself, listen to your mother.