21 and 22 March, at Chelsea Piers on the island of Manhattan.
Organized by GigaOM, Structure:Data is an event dedicated to the phenomenon of Big Data, to “explore the technical and business opportunities spurred by the growth of big data, including storage needs, data analysis and the uncovering of new revenue opportunities”.

The volume of personal and professional data has exploded in the last few years, and the continued growth of social networking will only accelerate this.

Bime was invited to speak at the event; I think we may have been the only non-US-based speakers among many Californians. Our presentation took place on the first day. Our topic, "Smart Tools: Dissect, digest it and Deliver on Big Data", covered how the cloud allows SMEs to tame big data.

Analytics for Big Data in the cloud? Come on, dude... except now there is Google's BigQuery. BigQuery is a new generation of database, a revolutionary way to process large amounts of data in the cloud. It has not officially launched yet, but by using BigQuery with Bime we were able to show off its business potential. At the conference we ran live demos over a billion rows, on my laptop over the room's WiFi, and indeed possible anywhere there is an internet connection. In short, when it comes to Big Data, cloud analytics can now cut the mustard. (If we were 12 years old, we might even be replacing the De La Soul poster above our beds with one of BigQuery.)

The topic of Big Data in the cloud (multi-tenant cloud, it must be stressed) is very recent. Hadoop made an appearance in various sessions over the two days, and we have given our opinion of the yellow elephant elsewhere: very good for pre-computed analysis, but its latency falls short for interactive, online analysis. We are aware many companies are working to resolve this, but BigQuery already delivers response times that stay near-constant rather than growing linearly with data volume, even on very large datasets.

As for the sessions (and there were many!), two fundamental points were addressed each time: how to understand these volumes of data, and how to monetize them (business is business), whether via new hardware capabilities, via implementation shortcuts, or via analysis interfaces mixing structured and unstructured data in near real time, among other examples. In particular, the solutions from Quid and Continuuity looked very interesting.

The audience was composed of larger enterprises, and although the topic of Big Data is already ubiquitous in these circles, everyone was keen to exchange opinions and continue learning, like Jean-Luc Chatelain, CTO and EVP Strategy & Technology at DataDirect Networks, based in California. In his words:

"Big Data is not new in some circles like high-performance computing, life sciences and defense, but this year its "democratization" is in full force. The GigaOM Structure:Data conference showed that we have entered the hype cycle of Big Data, Analytics & Hadoop, as each and every vendor goes out of its way to make sure that one or more of these words is on its tagline. Hadoop is definitively a technology "de rigueur" for Big Data storage and archiving, but more interesting are the following emerging trends: "beyond Hadoop" technology and architecture focused on real-time analytics and lower TCO, advanced visualization approaches for data discovery, and in-storage computing."

Here is what I also took away from the event, articulated perfectly by the CEO of Hortonworks: Hadoop allows ‘infinite’ processing of vast data volumes; the cloud allows ‘infinite’ use of machine power… so at some point the two are bound to come together.

Obviously the world will not be black OR white (on-premise or cloud) but will continue to be black AND white for a while... there are still compliance regulations out there... and I’m sure it will continue to be a hot topic, and one we’d be happy to discuss. Feel free to contact us.

See you soon and thank you to the GigaOM team for this conference.
Rachel.