Teradata announced several new features and product updates at its 2016 Partners Conference.

Meet the New MPP Database System -- IntelliFlex

Teradata's newest IntelliFlex system was the headliner. "IntelliFlex" is what Teradata now calls its massively parallel processing (MPP) architecture, first announced in April of this year.

It's also the name of a big, honking analytics database system, crammed chock-full of compute, memory, and storage resources. A seven-foot-tall IntelliFlex system anchored Teradata's exhibit area in the Partners expo hall this year. Arrayed with lights and sounding like an air conditioner running at full blast, IntelliFlex was a source of curiosity and, in some cases, amazement for attendees.

If you know anything about MPP, you'll understand why IntelliFlex is such a departure.

In a conventional MPP system, there's a hard and fast relationship between compute and storage. If you configure one node with 32 cores and 16 TB of storage, you must configure all of the nodes in an MPP cluster the same way. If you want to add capacity to your MPP warehouse, you must add nodes with 32 cores and 16 TB of storage. Sure, you can add larger systems, but the MPP architecture will only exploit 32 cores and 16 TB of the total available system capacity.
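The arithmetic behind that constraint can be sketched in a few lines. This is an illustrative model, not Teradata code: in a lockstep MPP cluster, every node contributes only as much compute and storage as the smallest node configuration.

```python
# Illustrative sketch (not Teradata internals): usable capacity in a
# lockstep MPP cluster is capped by the smallest node configuration.

def usable_capacity(nodes):
    """nodes: list of (cores, storage_tb) tuples, one per cluster node."""
    min_cores = min(cores for cores, _ in nodes)
    min_tb = min(tb for _, tb in nodes)
    # Each node is exploited only up to the cluster-wide minimum.
    return len(nodes) * min_cores, len(nodes) * min_tb

# Three matched 32-core/16 TB nodes, plus one larger 64-core/32 TB node:
cluster = [(32, 16), (32, 16), (32, 16), (64, 32)]
cores, tb = usable_capacity(cluster)
print(cores, tb)  # 128 64 -- the big node is treated as 32 cores / 16 TB
```

The larger node's extra 32 cores and 16 TB simply go unused, which is exactly the waste IntelliFlex's decoupled design is meant to eliminate.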

IntelliFlex Expands Capacity, Allows New Configurations

With traditional MPP, you can't selectively expand compute or storage capacity. It has to be both.

IntelliFlex breaks this relationship. Customers can optimize IntelliFlex to their liking, cramming in additional compute capacity to address workload-specific needs or, conversely, doubling down on storage. If you want to dedicate an IntelliFlex system to compute-intensive workloads, you can disproportionately populate its modular system racks with compute capacity.

If you're a Teradata customer, IntelliFlex is the upgrade path for all of your future investments in Teradata. Teradata officials have anointed it as "the foundation of all future technology advancements." If IntelliFlex's own recent evolution is an indication of what Teradata means by "technology advancement," it's poised to become one hell of a system.

For example, shortly after announcing IntelliFlex, Teradata tripled its memory capacity to 1 TB per node. At Partners, Imad Birouty, Teradata's director of technical product marketing, said the next version of IntelliFlex will see its memory capacity doubled once again.

"The next version [is an] all-SSD configuration [with] two times the performance. We're now [supporting] up to 12 nodes per cabinet, and after tripling the amount of memory a few months ago, we're doubling it again," he told analysts. "Hopefully every six months or so you're going to see new updates to this."

Optimizing the Optimizer for Hybrid Environments

Teradata's vaunted database optimizer is getting a major overhaul, too, in the form of a new Adaptive Optimizer technology. Birouty describes this as pretty much what it sounds like: a revamped database optimizer technology that's designed for hybrid (on-premises, managed cloud, platform-as-a-service cloud, virtual machines) environments. The Adaptive Optimizer is able to devise efficient query plans and optimize for fast query execution and system utilization -- regardless of context.

In other words, if you're querying against both an on-premises Teradata system and a Teradata instance running in the cloud, the new Adaptive Optimizer is smart enough to optimize for either context. "[The Adaptive Optimizer] will [on a] per-platform [basis] pick the best capabilities for that platform. The database now auto-adapts to its host environment," Birouty said.

The Adaptive Optimizer supports incremental query planning -- a capability grandfathered in from a previous release -- and a number of new enhancements, to boot.

"It supports real-time query rerun as well as [the ability to make] query plan adjustments," he said, "so if it's monitoring a query and it determines that if it changes plans right now, it can finish in four minutes rather than five minutes, it will replan the query midstream."

QueryGrid Comes into Its Own

Birouty also discussed Teradata's upcoming QueryGrid 2.0 release, due sometime in Q4 of this year.

QueryGrid is a data federation-like technology for Teradata environments. In its version 1.x incarnation, QueryGrid actually comprised three distinct technologies: connectors for Teradata-to-Hadoop, Teradata-to-Oracle, and Aster-to-Hadoop. QueryGrid 2.0, by contrast, will be a single, unified technology offering. "In QueryGrid 2.0, we've brought these three technologies together more in a logical way," Birouty noted.

"QueryGrid is another way of saying, 'It's okay, leave the data over there in another system as long as you can do some processing to reduce the amount of data coming back,'" Birouty explained.
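The "reduce the data coming back" idea Birouty describes is predicate pushdown, and a toy sketch makes it concrete. The API here is an assumption for illustration only, not QueryGrid's actual interface: the filter is shipped to the remote system so that only qualifying rows cross the wire.

```python
# Illustrative pushdown sketch (simplified, assumed API -- not the
# actual QueryGrid interface). The raw table stays on the remote system;
# the predicate runs there, shrinking the result set shipped back.

remote_rows = [
    {"region": "EMEA", "sales": 120},
    {"region": "APAC", "sales": 95},
    {"region": "EMEA", "sales": 80},
]

def remote_scan(rows, predicate):
    """Pretend this executes on the remote platform (e.g., Hive on
    Hadoop): only rows passing the predicate are returned to Teradata."""
    return [row for row in rows if predicate(row)]

# Only the two EMEA rows come back; the third never leaves the remote side.
shipped = remote_scan(remote_rows, lambda r: r["region"] == "EMEA")
print(len(shipped))  # 2
```

Without pushdown, all three rows would be transferred and filtered locally; with it, the data movement scales with the answer rather than with the table.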

QueryGrid 2.0 will knit together all of the galaxies in the Teradata universe: the Teradata Database, Teradata's Aster Discovery platform, RDBMS platforms such as Oracle, and, of course, Hadoop. Customers can use it to write queries on one platform (e.g., Teradata Database) and transparently execute them on another (e.g., Hive running on Hadoop) -- or across all platforms.

The revamped QueryGrid will make smarter decisions about how to schedule query workloads (for example, optimizing for the strengths or weaknesses of specific platforms and minimizing data movement), Birouty said. QueryGrid is unlike data federation (or its successor, data virtualization) in one key respect: it doesn't feature a distributed query optimizer and it doesn't cache data. That won't change with QueryGrid 2.0.

About the Author

Stephen Swoyer is a technology writer with 20 years of experience. His writing has focused on business intelligence, data warehousing, and analytics for almost 15 years. Swoyer has an abiding interest in tech, but he’s particularly intrigued by the thorny people and process problems technology vendors never, ever want to talk about. You can contact him at evets@alwaysbedisrupting.com.
