Big Data Needs To Think Bigger


Editor’s note: Guest author Semil Shah is an entrepreneur interested in digital media, consumer Internet, and social networks. He is based in Palo Alto, and you can follow him on Twitter at @semilshah.

Spend enough time in Silicon Valley, and among all the buzzwords you’ll hear neatly tucked in with “graph,” “serendipity,” and “personalization” is one often uttered though, on the whole, not yet fully understood: “Big Data.” On the surface, everyone realizes the opportunity. Data is being generated at lightning speed, the cost of storing it is tiny, and new technologies are available to help manage, organize, and secure that data. Earlier this month, LinkedIn co-founder and Greylock partner Reid Hoffman delivered a presentation on this topic at SxSW, and starting next week, GigaOM’s annual big data conference, “Structure,” kicks off in NYC.

At the consumer level, while we are wowed by pretty visualizations, the real advancements in big data technologies cover (1) how data is structured and stored, (2) how it is organized and retrieved, and, most interesting to me, (3) how underlying mathematics can be written into algorithms to leverage the data and help discover entirely new things. I’ll paraphrase from one data scientist, LinkedIn’s Peter Skomoroch, who notes on Quora that cheap data storage allows users to leverage asymmetric information, larger data sets increase the likelihood that new insights can be found, and machine learning advancements can be used in entirely new, game-changing ways.
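Skomoroch’s second point, that larger datasets increase the likelihood of finding new insights, has a simple statistical core: estimates sharpen as data accumulates. A toy sketch of my own (not from Skomoroch) that illustrates this with nothing but Python’s standard library:

```python
import random

random.seed(42)

def estimate_mean(n):
    """Estimate the mean of a noisy signal from n samples."""
    samples = [random.gauss(5.0, 2.0) for _ in range(n)]
    return sum(samples) / n

# The true mean is 5.0; the estimation error shrinks roughly as 1/sqrt(n),
# so a 10,000x larger dataset buys roughly 100x more precision.
for n in (10, 1_000, 100_000):
    print(n, round(abs(estimate_mean(n) - 5.0), 3))
```

The same scaling logic is why "big" matters: patterns invisible at a thousand samples become statistically unmistakable at a billion.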

This being Silicon Valley, the obvious targets in sight are the massive amounts of data generated online through social networks, e-commerce, mobile location, and advertising technology. There are no surprises here, and some of the best data scientists happen to reside within these social networks, such as Dmitry Ryaboy from Twitter, Jeff Hammerbacher from Cloudera (formerly of Facebook), Deepak Singh from Amazon, and Skomoroch and DJ Patil from LinkedIn. Not only is the amount of data generated within social networks staggering, but the pace at which it’s generated and its complexity are both accelerating. Beyond the data visualizations captured by social network maps, the opportunities that lie hidden within those relationships are phenomenal and will feed into social commerce, context-awareness, and location-based ads.

These are the current “hot spots” for big data. There are many companies working on some angle within “big data,” some of which have a long history. Earlier this week, Aster Data Systems was acquired by Teradata, and there are plenty of firms focused on some aspect of data. Dataguise focuses on “masking” sensitive data that is regulated by law or corporate policy, protecting information from external and internal breaches. Lattice Engines uses algorithms to provide its clients with predictive analytics and learning. Cloudera develops and distributes Hadoop, which powers data processing for websites. And companies like Factual and InfoChimps provide platforms where anyone can share and manipulate data on any subject. (While there are many companies focused on big data, I’ll highlight a few and ask the crowd to help input more into the system, here, and follow up on Quora.)
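For readers unfamiliar with what Hadoop actually does: its MapReduce model splits a job into a map phase that emits key–value pairs and a reduce phase that aggregates them per key, which is what lets a computation spread across thousands of machines. A toy, single-machine sketch of the classic word-count pattern (plain Python, no Hadoop required, names mine):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in the line.
    for word in line.lower().split():
        yield (word, 1)

def reduce_phase(pairs):
    # Reducer: after the shuffle/sort step groups pairs by key,
    # sum the counts for each word.
    counts = {}
    for word, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        counts[word] = sum(count for _, count in group)
    return counts

lines = ["big data needs big tools", "big data big opportunity"]
pairs = [pair for line in lines for pair in map_phase(line)]
print(reduce_phase(pairs))  # → {'big': 4, 'data': 2, ...}
```

In a real cluster, the mappers and reducers run on different machines and the framework handles the shuffle, fault tolerance, and distributed storage, but the programming model is this simple.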

One of the big data companies to break out into the mainstream tech press is located in downtown Palo Alto: Palantir Technologies. As TechCrunch’s Leena Rao pointed out in June 2010, after the company raised Series D funding, big data companies, and especially Palantir, don’t capture much social media attention. They are instead busy selling their flagship products, Palantir Government and Palantir Finance, to government and financial institutions worldwide. Big data investors know the writing is on the wall: Palantir’s Chairman, Peter Thiel, has been on the record about big data and believes the company will not only cross the billion-dollar threshold, but shoot past it. Will it help securities regulators find the next crisis or Bernie Madoff? Will it help governments monitor potential terrorist activity and provide actionable information before it’s too late? These are big problems that affect our society and for which we don’t have the best solutions. We needed solutions yesterday, and when Palantir and other companies help us identify and head off these threats, they will be rewarded a billion times over.

Now, let’s take big data one step further. Whether we’re all data scientists or not, we understand the scale of the opportunity. We know there’s smart money to invest in data storage, masking, security, retrieval, analysis, and visualizations. But, what about leveraging data for true discovery? Can new techniques in mathematics and physics help computer scientists create a new breed of programs to analyze datasets that traditional approaches cannot? How could our world change if we better understood the underlying mathematics behind the data? If finding insights within data is like finding a needle in a haystack, will the right math-based approaches help us build better magnets to draw out those needles? The conventional wisdom to date has been to apply these new techniques to the online world, where data is generated and stored in robust and zero-cost ways, but there is much, much more to explore.

While these are certainly big problems to tackle and will generate valuable insights for web properties to exploit, I’m most intrigued by the mathematicians and physicists who are innovating within their disciplines and applying that work to big problems around big data, particularly the speed and shape of the data. Two aspects of data capture my interest as a consumer. First, what are the speed and motion characteristics of the data being generated, especially in networks that move in realtime, such as social networks and financial markets? Second, what is the shape of the data, and what can we learn from analyzing new dimensions within it that weren’t accessible even a year ago?
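To make the “speed” half concrete: realtime data often arrives faster than it can be stored and revisited, so the tools of choice are streaming statistics that update in constant time and constant memory. A minimal sketch (my own illustration, with hypothetical tick data, not anything from the article) of one such statistic, the exponentially weighted moving average:

```python
def ewma_stream(values, alpha=0.3):
    """Exponentially weighted moving average over a realtime feed.

    Each update is O(1) and keeps no history, which is what makes it
    usable on data that moves too fast to store and re-scan."""
    avg = None
    for v in values:
        avg = v if avg is None else alpha * v + (1 - alpha) * avg
        yield avg

# A hypothetical tick stream: steady prices, then a sudden move.
ticks = [10, 10, 10, 10, 30, 30, 30]
smoothed = list(ewma_stream(ticks))
print(smoothed)
```

A large gap between a fresh tick and its smoothed value is a cheap, realtime signal that something in the stream just changed, the kind of primitive that market-monitoring and trend-detection systems build on.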

It’s within this fast-moving data, and its nooks and crannies, that our next big discoveries may be hidden, waiting for new equations to unearth them. There are many public datasets (such as data.gov) available to scientists, some of which are listed here and here. There’s no shortage of opportunities to mine these resources, such as old public health studies, and to find new trends to inform the future. Perhaps just as interesting, if not more so, is old data collected by large private companies and governments that is either too sensitive or too competitively valuable to release into the wild. Today, big pharmaceutical and biotechnology companies are sitting on mountains of internal data from the trials they’ve run, energy firms hold data on mineral and resource deposits, and financial speculators use the most sophisticated programs to run hedge funds and the like, looking for the smallest holes to exploit and extract gains.

Let’s assume this data was released, or at least made available to the best mathematicians out there today—could they help us sift through life science data and harvest information that could itself lead to the formation of entirely new products and services? Could they help us find new deposits of minerals, oil, or gas buried deep in the ground or remote parts of the ocean bed? Could the data help us target geoengineering tactics high up in the clouds to combat global warming? Could the data be used in financial markets, not only to notify us of fraudulent behavior, but also to prevent market movers from profiting during bubbles while the masses get doused after the bubbles pop? And, could we analyze seismic activity to predict earthquake likelihoods and tsunami arrival times? The folks and institutions that currently sit on this data have reasonable, short-term incentives to protect it given how competitive their industries are. Yet in the long-term, we’ll need to access these and other data, and hopefully allow entrepreneurs to probe them with all these new tools so, as Hammerbacher says, we can “use the past to impact the future.”

Yes, there is still much more value to extract from social commerce and interpersonal networks—but while these are worthwhile pursuits, the real game-changing innovation and advancement in big data will only come when we’re able to apply the most cutting-edge mathematics and physical sciences to the biggest problems we collectively face.
