One size does not fit all
If you are a data architect or a data modeler, the first order of business when designing a system or modeling a business process is to understand the data flow. That exercise naturally turns toward understanding data volume. A second aspect: where such models have been built before, data architects and modelers periodically check the 'current applicability' of their models, because data tends to grow and change constantly.

Is this important, and if so, why? Yes, and here is why.

A data model is not set in stone
It is wrong to assume that once a business process has been modeled, the model will stand the test of time as the business develops. Even when there are no changes to the model as such – that is, business expansion and process changes have not influenced it, except in volume – data volume alone can dictate revisiting the model, and in some cases it will dictate changes as well. A fully functional data model might bring the business processes and systems to a grinding halt if it does not address changes in data volume and misses out on volumetrics.

Current state of data
Companies big and small rely more and more on collecting data about more aspects of their business, more about their customers, and more about their buying patterns. The e-commerce and mobile revolutions have made it possible for businesses to get a microscopic view of a transaction along with the profile of the buying or interested customer. Mixed with the influence of social networks, it is a data deluge.

More than ever, businesses are collecting information of all kinds – tags and identifiers, signals and readings from machine parts, location coordinates from mobile devices, transactional data, customer demographics, selling medium, buying patterns, geographical trends, effective promotions, cross-sell influences and so on. That brings in a lot of data. That puts a lot of stress on poorly designed systems and data models.

Advancements in Data Modeling
A data model reflects the business and its processes, and is constructed to efficiently manage the data that is collected. Efficiency is measured by whether the business is able to get actionable intelligence out of it. Data architects are exploring, innovating and introducing new ways to model information systems. As the business grows, the data model evolves in ways that let it address and manage the change and still serve the business analysts and data researchers of the organization. No longer is a modeler confined to a singular model or structure – ER or otherwise. Advancements in DBMS (Database Management System) technology also make it possible to harness the power of the underlying hardware – client–server clusters built out of commodity hardware, or appliances built for a specific purpose.

Solutions after modeling
When data volume grows into terabytes and petabytes, modeling demands newer approaches and solutions. The legacy models solved, and still solve, the problems of their world, but those problem statements were different. Expectations from such systems were different and mostly limited in size, so expansion using such solutions is limited too. Newer expectations are a different problem to solve, and hence they demand newer solutions.

And this is the case for Big Data solutions to deal with volume (one of the three V's). We will look at velocity and variety in detail and revisit volume to explore how Big Data solutions deal with them all.

Let us continue to look at Big Data beyond the hype by understanding two things – the types of data that we deal with, and the need for a change in the approach to dealing with them.

Types of Data

We spend more of our time than ever on our mobile devices – mobile phones, tablets – and the traditional forms – laptops, desktops and so on. Then there are other kinds of electronic devices, such as the Nest or Internet televisions, which are data driven and getting smarter by the day to give us an "integrated experience". All of them are inter-connected, consuming tons of information on the Internet – tons of different types of information. We use our phones with apps (or applications) to listen to songs, to catch up on movie trailers, to turn on our cars' air conditioners before we get in, and even to control heaters at home based on consumption and usage patterns.

Data from these can be broadly classified by the following characteristics – Volume, Velocity and Variety, or the 3 V's.

Volume refers to the sheer amount of information collected that we have to deal with – in gigabytes, terabytes or petabytes.

Velocity refers to the streaming aspect of such information – streaming data.

Variety refers to the different types of information – from comments on social networks, to readings from room heaters, to blogs.
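
As a rough illustration of the variety dimension – the bucket names and rules here are my own invention, not any standard taxonomy – incoming records could be sorted by how much structure they carry:

```python
import json

def classify_variety(record: str) -> str:
    """Bucket an incoming record by how much structure it carries."""
    # Parses cleanly as JSON -> semi-structured (e.g., a device reading)
    try:
        json.loads(record)
        return "semi-structured"
    except ValueError:
        pass
    # Fixed, delimited fields -> structured (e.g., a CSV transaction row)
    if record.count(",") >= 2:
        return "structured"
    # Everything else -> unstructured free text (e.g., a blog comment)
    return "unstructured"

samples = [
    '{"device": "thermostat", "reading": 21.5}',   # semi-structured
    "2014-01-02,ORD-1001,49.99",                   # structured
    "Loved the phone, battery could be better.",   # unstructured
]
print([classify_variety(s) for s in samples])
```

A real pipeline would of course use far richer detection, but the point stands: each bucket calls for different storage and processing machinery.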

Approach

One could think about these and ask, "Have we not been dealing with them already? What is new now?"

It is akin to the question “I have been using mobile phones for a long time, what is special and different about iPhone?”

When you have to deal with data that has the above characteristics, the approaches you have used so far – dealing with data on a small or restricted scale – are no longer applicable, or at least need to be revisited. As volume grows and as new varieties of data flow in, approaches have to change as well.

Just as one would seek an iPhone for its rich user experience and big app ecosystem, powered by unparalleled technological innovation, a CIO or Data Manager would seek solutions that are driven by innovations in managing such data and in deriving value from it. And that is the case for Big Data solutions. We will explore the 3 V's further in the next part.

It has been an interesting last few weeks, discussing and brainstorming on Big Data. Broadly, these conversations were around the following:

Should I invest in Big Data for my enterprise today?

Is this just hype that will pass?

What are the challenges?

Hype
Let us be honest and agree that there is hype. Both Forrester and Gartner agree with that. Big Data is a buzzword today. Also, the fact is that Big Data has matured among us so rapidly that it is hard to ignore. But that does not mean it is without merit.

Before we decide to keep it, throw it away, or get carried away by the "buzzwords" or "hype", let us look at a few common situations through the eyes of different personas.

If you are a CIO

Look at the amount of information that flows in and out of your enterprise

Is the data management program efficient enough?

Look at your analytical systems and processes

Are they constrained by the existing systems' capabilities?

Look at your analytical reports

Are you getting 'actionable intelligence'?

Listen to the “voice of your business”

Is your business analyst in need of more to take your business to the next level?

Look at the types of data (Structured, Semi-Structured, Quasi Structured, Unstructured) you manage

Can your systems and applications handle them all?

Look at your HA (High Availability) and Fault Tolerance thresholds

How economical is the existing setup?

Look at your existing setup

What could be virtualized?

Look at the ROI on your systems and processes

Take a strategic view of what could be introduced anew, changed, or tweaked in your ecosystem to bring in efficiency and value-add.

If you head a Services Company

Listen to your customers (clients)

What is their business problem?

Look at your current engagements

Can you identify opportunities to expand the conversation based on their business problems?

To start with, Vertica is a good product. They have carved out a niche space for themselves. Columnar databases have come a long way, and I would say they are still emerging to deliver their full benefits. Vertica has long been a proponent of columnar databases. It is a good company and product line to buy. But I am concerned that a good product could go to waste with HP, which has had questionable success in the data warehousing space.
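
To make the columnar idea concrete – this is a toy sketch of the storage layout, not how Vertica is actually implemented – an aggregate over one attribute only needs to touch that attribute's column:

```python
# Row-oriented storage: each record is stored together, so a SUM over
# one field still walks every whole record.
rows = [
    {"order_id": 1, "region": "EU", "amount": 10.0},
    {"order_id": 2, "region": "US", "amount": 20.0},
    {"order_id": 3, "region": "EU", "amount": 30.0},
]
total_row = sum(r["amount"] for r in rows)

# Column-oriented storage: each attribute is stored contiguously, so the
# same SUM reads only the 'amount' column (which also compresses better,
# since values of one type and domain sit together).
columns = {
    "order_id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [10.0, 20.0, 30.0],
}
total_col = sum(columns["amount"])

print(total_row, total_col)  # identical results, very different I/O
```

The results are identical; what differs is how much data the analytical query has to scan – the reason columnar engines shine for warehousing workloads.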

But what stands out in this move by HP, is the market for Big Data solutions.

Big Data solutions are hot commodities today. Companies big and small are trying to manage data at a new scale and trying to consume it for purposes that need a different architecture altogether. That is where Big Data solutions such as Vertica, Aster Data, ParAccel, Infobright and Xtremedata, and frameworks such as MapReduce and Hadoop, come in to help.
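
As a minimal, single-process sketch of the MapReduce pattern – Hadoop's contribution is running these same phases distributed and fault-tolerantly across a cluster – the classic word count looks like this:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc: str):
    """Map: emit a (key, value) pair for every word in a document."""
    for word in doc.lower().split():
        yield (word, 1)

def reduce_phase(pairs):
    """Shuffle/group pairs by key, then reduce each group by summing."""
    grouped = defaultdict(int)
    for key, value in pairs:
        grouped[key] += value
    return dict(grouped)

docs = ["big data big value", "data deluge"]
counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
print(counts)  # {'big': 2, 'data': 2, 'value': 1, 'deluge': 1}
```

The appeal of the model is that map and reduce are independent per key, so the framework can scatter them across thousands of commodity machines without the programmer managing any of the distribution.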

With the ever-growing need to collect more information and with storage costs crashing, companies find a lot of value in adopting Big Data solutions. Also, the type of data that companies big and small have to deal with today is going through a fundamental transformation.

Combing through the social world to mine 'comments' and 'tweets', which are unstructured; having to deal with hierarchical data structures such as JSON; and having to relate them to geospatial information – all of this needs a shift in our approach towards data warehousing and the analytics you would expect out of these systems.
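
As a small sketch of that shift – the payload fields and the store location below are invented for illustration, not any real API's schema – a hierarchical JSON comment can be parsed and related to a geospatial point of interest:

```python
import json
from math import radians, sin, cos, asin, sqrt

# Hypothetical payload: a comment with a nested geo coordinate.
payload = '''{
  "user": "jdoe",
  "text": "Great coffee here!",
  "geo": {"lat": 40.7128, "lon": -74.0060}
}'''

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

record = json.loads(payload)        # hierarchical data -> nested dicts
store = (40.7580, -73.9855)         # hypothetical store of interest
d = haversine_km(record["geo"]["lat"], record["geo"]["lon"], *store)
print(round(d, 1))                  # km between the comment and the store
```

A flat relational schema handles neither the nesting nor the spatial join naturally, which is exactly why these workloads push past classic warehousing designs.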

I will write more about these challenges and also about some of these Big Data solutions soon. Stay tuned.

The CIEL Project and Skywriting look very promising and simple to implement. Task graph execution is an important feature, and it looks to me that it would be very flexible to implement with the CIEL Project.
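
As a toy sketch of the task-graph idea – using Python's standard `graphlib`, not CIEL's actual API, and with invented task names – a task becomes runnable only after every task it depends on has completed:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
graph = {
    "load": set(),
    "clean": {"load"},
    "aggregate": {"clean"},
    "report": {"aggregate", "clean"},
}

# A valid execution order: dependencies always precede dependents.
order = list(TopologicalSorter(graph).static_order())
print(order)
```

What makes CIEL interesting is that it goes beyond static graphs like this one: Skywriting tasks can spawn new tasks at runtime, so the graph itself grows as the computation runs.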