6 Things You Should be Looking for in a Big Data Platform


Isabelle Nuage is Director of Product Marketing for Big Data at Talend. Her fields of expertise include Data Integration, Big Data, and Analytics. Isabelle brings more than 19 years of experience in the software industry, holding various leadership positions in product marketing at SAP & Business Objects.

In order to become data-driven, organizations need to be able to understand the “what?”, the “why?” and the “what if?” using not only their enterprise data (a myopic view on its own), but also the Big Data surrounding them (geolocation, social, and sensor data). This is the only way to gain a 360-degree view of their customers, business, and market, and of what it means in terms of business challenges and opportunities.

One of the largest Big Data challenges for organizations is the integration or ingestion of multiple data sources to gain meaningful insight. In fact, according to IDC, gathering and preparing data for analysis typically consumes 80 percent of the time spent on any analytics project. That’s an astounding amount of time spent prepping data just so you can glean insight from it. In other words, data integration software is very likely the key to the success of your Big Data project.

Here are six simple questions you should ask to ensure you are getting the most effective data integration platform for extracting the maximum amount of insight from your data:

Is it Easy to Use?

Ask to see the User Interface. Is it simple or complex? Does the application automatically generate code or does it force you to do it by hand? Can you perform tasks using drag-and-drop actions? Does the platform offer a single, consistent workflow and UI or does it look like a mix of separate applications?

Is it Unified?

Does the platform enable the integration of all types of data (cloud, on-premises, IoT, etc.) and can you perform both batch and real-time processing within the same solution?

Does it Fully Leverage the Power of Hadoop?

Some tools require that you process and transform data before loading it into Hadoop. Not only does this extra data movement slow projects down, it also means you are not fully exploiting the processing power of Hadoop itself.
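The principle at stake here is often summarized as "move the computation to the data, not the data to the computation." A minimal, purely illustrative Python sketch (not Talend or Hadoop code; the function names are hypothetical) contrasts the two approaches:

```python
# Illustrative sketch only: contrasts transforming data outside the
# cluster (two full data movements) with shipping the transform to
# where the data already lives, as Hadoop-native tools do.

def transform_outside_cluster(rows, transform):
    """ETL style: copy every row out of the cluster, transform it
    locally, then push the result back in."""
    extracted = list(rows)                    # network copy out
    return [transform(r) for r in extracted]  # then copy back in

def transform_in_cluster(cluster_map, rows, transform):
    """Pushdown style: ship the (small) transform function to the
    cluster and run it where the data sits."""
    return cluster_map(transform, rows)

# 'map' stands in here for a distributed map over the cluster's data.
data = [1, 2, 3]
double = lambda x: x * 2
assert transform_outside_cluster(data, double) == \
       list(transform_in_cluster(map, data, double))
```

Both paths produce the same result; the difference is that the second moves only a small function across the network instead of the entire dataset, which is why Hadoop-native processing scales better.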

Is it Up to Date?

Is the software based on open source or is it proprietary? Open source solutions tend to keep pace better with the rate of Big Data innovation, helping you remain agile and more responsive to the needs of the business.

Is it Fast?

Does it utilize Spark and Spark Streaming within Hadoop to process data, or is it stuck in the era of MapReduce?
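The batch-versus-streaming distinction behind this question can be shown with a tiny, purely illustrative Python sketch (not Spark code; the function names are hypothetical): a batch job reports nothing until all the data has landed, while a streaming job keeps its answer current as each event arrives, much as Spark Streaming updates results per micro-batch.

```python
# Illustrative sketch only: batch vs. streaming aggregation.

def batch_total(events):
    """Batch: a single result, available only after all data arrives."""
    return sum(events)

def streaming_totals(events):
    """Streaming: an up-to-date running total after every event."""
    total = 0
    for e in events:
        total += e
        yield total  # fresh result per event, no waiting for the batch

events = [5, 3, 7]
assert batch_total(events) == 15
assert list(streaming_totals(events)) == [5, 8, 15]
```

The final streaming value matches the batch total; the streaming version simply makes intermediate answers available along the way, which is what enables real-time insight.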

Is it Cost Effective?

What’s the total cost of ownership? Is it reasonable and based on the number of developers or is it based on data volumes, connectors or CPUs?

Tell us your thoughts: What do you find to be the biggest challenge when it comes to mastering Big Data?