Massively Parallel Processing Database

IBM Netezza Data Warehouse Appliances

Massively Parallel Processing (MPP) databases have been a reality for some years now, and several similar solutions are available on the market. This technology makes it possible to work with huge amounts of data and run queries on it much faster than a regular database system. How fast? That depends on the physical hardware, the number of appliances and, most importantly, the way data is structured and divided among the nodes inside the appliance. It is not easy to put a number on the speedup over traditional database systems; however, IBM states that complex queries should run around 10 to 100 times faster.

To understand how these systems work, consider a Customer table with 30 columns and 50 billion records, where the first column is the primary key and another represents the country. If there are 10 nodes and 10 distinct countries, each with roughly the same number of records, the country column is a good choice of distribution key, as it spreads the records evenly across the nodes and so ensures that all nodes finish their share of the work at the same time.

In MPP systems, data is divided among the several nodes of the appliance using those keys, and each node takes on a part of the processing, so results become available much faster. Ideally, with 10 nodes and an exactly even split, each node would hold 5 billion records. The complexity of the query is also distributed among the appliance nodes: each node executes part of the implemented logic, so no node sits idle and results are presented faster. There is also a disk-mirroring system that ensures data is not lost in case of system failure, alongside traditional backup support.
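The hash-based distribution described above can be sketched in a few lines of Python. This is a toy model, not Netezza's actual hashing scheme; the node count and column names come from the Customer example, and the choice of MD5 is an illustrative assumption:

```python
import hashlib

NUM_NODES = 10  # illustrative appliance size from the example above

def node_for(distribution_key: str, num_nodes: int = NUM_NODES) -> int:
    """Map a distribution-key value to a node via a stable hash."""
    digest = hashlib.md5(distribution_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_nodes

# Each record lands on exactly one node, decided only by its key value,
# so all rows sharing a key are guaranteed to live on the same node.
customers = [
    {"id": 1, "country": "PT"},
    {"id": 2, "country": "DE"},
    {"id": 3, "country": "PT"},
]
for row in customers:
    print(row["id"], "-> node", node_for(row["country"]))
```

Because the mapping depends only on the key value, a query that filters or joins on the distribution key can be routed to the relevant nodes without scanning the others.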

The system is fully configurable when it comes to setting distribution keys, but it also suggests the best combination based on the data the table contains, not on the most performant execution plan. In addition, using statistics on the tables and columns involved, it suggests optimizations to the distribution keys.
However, these systems do not handle very small tables well: there is no sensible way to divide such data among the appliance nodes, and doing so would greatly hurt performance. With small tables, performance is the same as in a traditional database, so it is better to keep them on a single node or use the Random distribution system.
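The Random (round-robin) distribution mentioned above can be sketched as dealing rows to nodes in turn. Again a toy model, with a made-up node count, not the appliance's actual implementation:

```python
from itertools import cycle

def round_robin_assign(rows, num_nodes):
    """Random/round-robin distribution: deal each row to the next node."""
    nodes = [[] for _ in range(num_nodes)]
    for node, row in zip(cycle(range(num_nodes)), rows):
        nodes[node].append(row)
    return nodes

# 25 rows over 3 nodes: counts stay within one row of each other.
nodes = round_robin_assign(list(range(25)), num_nodes=3)
print([len(n) for n in nodes])  # [9, 8, 8]
```

Round robin guarantees an even row count regardless of the data's values, at the cost of giving up key-based locality, which is why it suits tables too small to benefit from a hash key.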

IBM PureData System for Analytics

Hardware

Physical hardware designed by IBM:

The Netezza solution is a bit different from other commercial appliances because it uses a specific processor, a Field Programmable Gate Array (FPGA), which IBM employs to reduce data access time and to offload parts of query execution, making sure that all nodes know their tasks when a query runs. The FPGA effectively acts as a middleware layer that improves performance.

Scalability

Several appliances can be linked together to increase processing capability. There is a limit to how many appliances can be connected, but it keeps increasing as new versions are released.

The number of appliances needed is directly related to the disk space required for the databases.

Network

The nodes inside the appliance communicate over fiber optics, guaranteeing the fastest possible communication between all appliance nodes.

Disk redundancy

All disks are configured in RAID, meaning that one logical disk is backed by a set of physical disks, which ensures that data is not lost in case of system failure and can be easily recovered.

Additionally, there is a spare node that holds data from all the other nodes and is automatically brought in if a node fails.

Data distribution

As described previously, data is distributed using distribution keys, which consist of one or more columns and can use one of two algorithms: Hash or Random (round robin).

Ideally, in order to achieve the best performance, each node should contain the same amount of data from each table. This is the only way to ensure that each node has an equal workload.

When the distribution is uneven, the fastest nodes have to “wait” for the ones that take longer to finish.
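A tiny simulation makes the cost of skew concrete: the total scan time of a parallel query is governed by the busiest node, so uneven row counts waste the idle nodes' capacity. The row counts and throughput figure here are made up for illustration:

```python
# Hypothetical per-node row counts for the same 1,000-row table.
even_nodes = [100] * 10          # perfectly distributed
skewed_nodes = [55] * 9 + [505]  # one node holds half the data

def scan_time(rows_per_node, rows_per_second=100):
    """A parallel scan finishes only when the busiest node finishes."""
    return max(rows_per_node) / rows_per_second

print(scan_time(even_nodes))    # 1.0  "second"
print(scan_time(skewed_nodes))  # 5.05 "seconds": nine nodes sit waiting
```

Both layouts hold exactly the same 1,000 rows, yet the skewed one is five times slower, which is why even distribution matters more than raw node count.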

Query execution plan

Netezza uses table statistics computed in real time, rather than heuristic searches, to compile the most efficient execution plan. The optimizer also takes into consideration the join method used in the query, the dispersion of data among the nodes, and the optimal order in which to execute the joins.

Also, when small tables are involved, the software can decide to replicate them to the nodes that will use them in order to optimize performance.

The workload is divided into snippets: the parts of a query that can be broken into smaller tasks and distributed among the nodes, enabling several different tasks to run in parallel.
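The snippet idea can be sketched as splitting an aggregate into per-node partial results that run in parallel and are then merged by the host. This is a toy model of the scatter-gather pattern, not the actual Netezza scheduler; the partition layout is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy data: each "node" holds one partition of a 1,000-row table.
partitions = [list(range(i, 1000, 4)) for i in range(4)]

def snippet_sum(partition):
    """Per-node snippet: a partial aggregate over local data only."""
    return sum(partition)

# The snippets run concurrently, one per node.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(snippet_sum, partitions))

# The host merges the partial results into the final answer.
total = sum(partials)
print(total)  # same result as summing the whole table serially
```

The merge step is cheap because each node returns one number rather than its raw rows, which is the same reason MPP appliances push aggregation down to the nodes.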

Restrictions (WHERE clause) are applied by the FPGA for performance reasons.

Query history collection

Netezza can also store execution plans and information from previously executed queries in a dedicated internal database, so results can be returned even faster because no time is spent building the execution plan again. It automatically detects when a stored execution plan is no longer up to date and deletes it from the history.
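A plan store of this kind can be modeled as a cache keyed by query text, with stale entries evicted on lookup. The version counter below is an assumption standing in for Netezza's staleness detection, and the plan strings are placeholders:

```python
class PlanCache:
    """Toy query-plan cache with staleness-based eviction."""

    def __init__(self):
        self._plans = {}        # query text -> (plan, table_version)
        self.table_version = 0  # bumped whenever the table changes

    def get(self, query):
        entry = self._plans.get(query)
        if entry is None:
            return None
        plan, version = entry
        if version != self.table_version:  # plan is out of date
            del self._plans[query]         # drop it from the history
            return None
        return plan

    def put(self, query, plan):
        self._plans[query] = (plan, self.table_version)

cache = PlanCache()
cache.put("SELECT COUNT(*) FROM customer", "scan+count plan")
print(cache.get("SELECT COUNT(*) FROM customer"))  # cache hit
cache.table_version += 1                           # table changed
print(cache.get("SELECT COUNT(*) FROM customer"))  # None: plan evicted
```

The key design point is that a hit skips plan compilation entirely, while eviction on staleness prevents a changed table from being queried with an obsolete plan.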

This information is also used to improve data distribution and organization in the appliance.

BigData - MapReduce / Hadoop

Netezza systems were built to handle large volumes of data, so Big Data is an obvious use case for this type of system.

Netezza is natively prepared to work with Apache Hadoop, and IBM has developed a connector for this purpose. It can operate directly on text files or access database tables.

There are two common solutions that combine these components:

Using Hadoop as a processing layer for very large volumes of data

Using Hadoop to aggregate data in order to build an archive solution

In both solutions, Hadoop's native MapReduce component is used to process and aggregate the data. Hadoop produces text files as output that other systems can easily consume.
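The MapReduce flow Hadoop applies here can be sketched in plain Python: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group. The sales-by-country input is a made-up example, not data from the document:

```python
from collections import defaultdict

# Input lines as they might appear in a text file fed to Hadoop.
lines = ["PT,10", "DE,25", "PT,5", "FR,7", "DE,3"]

def map_phase(line):
    """Map: emit a (country, amount) pair for each input record."""
    country, amount = line.split(",")
    return country, int(amount)

# Shuffle: group mapped values by key.
groups = defaultdict(list)
for key, value in map(map_phase, lines):
    groups[key].append(value)

# Reduce: aggregate each key's values independently.
totals = {country: sum(values) for country, values in groups.items()}
print(totals)  # {'PT': 15, 'DE': 28, 'FR': 7}
```

Because each reduce group is independent, Hadoop can spread the reduce work across machines the same way an MPP appliance spreads snippets across nodes.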

How to spot a system viable for upgrade?

Technical Requirements

Lots of complex and ad hoc queries

Encountering performance challenges

500 GB to 1,000 TB of data (deployments at the lower end should be growing quickly)

Price sensitive

Old technology installations or end-of-life concerns

Mid-range Oracle customers

Business Needs

Current BI systems are slow to answer business questions, or the business has to settle for sampled data