Understanding Teradata Elasticity

As a child in 1980 with the last name Armstrong, you were bound to be teased with the nickname “Stretch.” (Google it.) Now, many years later, it is a fortunate coincidence to have that nickname “legacy,” as Teradata delivers on a long-desired capability: greater elasticity of the database environment through its hybrid cloud offerings and Teradata Everywhere licensing models.

Scalability and elasticity

It is first important to note that elasticity can only be realized on a scalable foundation. It makes no sense to expand platform resources if the software cannot fully take advantage of that expansion.

While Teradata has been the leader in scalability for decades, that scale came with some strings attached. Due to a variety of hardware and software design constraints, growing or shrinking a Teradata environment required downtime to redistribute the data and ensure that parallelism took full advantage of all resources. Teradata was also normally deployed as a physical system in a customer’s data center.

But with the advent of cloud environments, both public and private, customers are looking for a different kind of scale and elasticity: on demand, without outages, and with much more granularity and frequency. They expect every environment to be quickly and seamlessly configured to fit the need of the day. More importantly, they want to pay only for what was actually used.

Any one of these desires presents challenges and, taken together, they look daunting. Clearly there is no single solution; rather, there needs to be a spectrum of options that provide an elastic continuum.

Elasticity comes in many flavors

The hybrid cloud environment recognizes that companies will invest in many different types of deployment, from on-premises physical systems to managed cloud offerings to public cloud providers. Providing elasticity across all of these options must cover not only the hardware aspects but the software ones as well, and each environment brings different challenges and opportunities. Teradata now offers four distinct types of elasticity, described below:

Dynamic Workload Prioritization

While not elasticity in the classic sense, one of the goals of elasticity is to manage resources to meet demand. Here, there is a defined system, and rather than adjusting the amount of resources, the resources are directed to the most critical workloads. The underlying goal is to meet service level agreements and ensure that a prioritized workload gets completed. With Dynamic Workload Prioritization, systems can be configured to ensure that the right resources are applied to the right workloads, dynamically at the right time, according to business needs.
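The idea can be sketched in a few lines of Python. This is a conceptual illustration only: the `Workload` fields, the greedy allocation, and all of the numbers are hypothetical, and none of this reflects Teradata's actual workload-management interfaces.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    priority: int      # lower number = more critical (hypothetical convention)
    demand_pct: float  # CPU share the workload would like

def allocate(workloads, capacity_pct=100.0):
    """Grant CPU share in priority order until capacity runs out."""
    grants = {}
    remaining = capacity_pct
    for w in sorted(workloads, key=lambda w: w.priority):
        grant = min(w.demand_pct, remaining)
        grants[w.name] = grant
        remaining -= grant
    return grants

workloads = [
    Workload("executive-dashboards", priority=1, demand_pct=30),
    Workload("batch-etl", priority=3, demand_pct=60),
    Workload("ad-hoc-queries", priority=2, demand_pct=40),
]
print(allocate(workloads))
# {'executive-dashboards': 30, 'ad-hoc-queries': 40, 'batch-etl': 30}
```

The total resource pool never changes; the lower-priority batch work simply absorbs whatever is left after the critical workloads are satisfied, which is the essence of prioritization as a substitute for adding capacity.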

Performance on Demand

As we move through the spectrum, being able to add more system capacity on demand accommodates elastic demand, both up and down. In this instance, there is a physical platform and, using system controls, the CPU is capped at 75 percent, with correspondingly reduced operating costs. As workloads change or peak periods are encountered, additional resources can be made available, and charged for, in 1 percent increments. After the peak volume has passed, the system can be brought back down to the 75 percent level. This requires no downtime and ensures costs are more directly related to need and usage. The benefit is that companies do not need to “over purchase” for the peak and leave “paid for” resources sitting idle during normal workloads.
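The cost argument is easy to see with a little arithmetic. The rate and billing model below are entirely hypothetical, invented for illustration; only the 75 percent cap and 1 percent increments come from the description above.

```python
BASE_CAP = 75          # percent of CPU licensed for normal operation
RATE_PER_PCT_DAY = 10  # hypothetical cost unit per percent-day

def monthly_cost(daily_caps):
    """Bill each day for the CPU cap actually in effect that day."""
    return sum(cap * RATE_PER_PCT_DAY for cap in daily_caps)

# 27 normal days at the 75% cap, 3 month-end peak days raised to 95%
caps = [BASE_CAP] * 27 + [95] * 3
peak_month = monthly_cost(caps)
flat_month = monthly_cost([95] * 30)  # cost if sized for the peak all month

print(peak_month, flat_month)  # 23100 28500
```

Under these made-up rates, raising the cap only for the three peak days costs 23,100 units versus 28,500 for a system permanently sized at 95 percent, which is the “over purchase” the text warns against.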

Expand compute power

With the new IntelliFlex platform, Teradata can add CPU and I/O independently, and it has used this capability to provide the next level of elasticity. In this option, nodes are connected to the platform and, with a small restart, additional CPU and memory resources are made available to handle peak workloads for a temporary timeframe, such as month-end processing or an annual peak like the Christmas season. This is called “unfolding” a system. After the peak need has passed, the system can be “folded” back to its original configuration, once again reducing costs.

Rapid operational expansion

In the past, a Teradata system expansion required an extensive outage to redistribute the data across the new nodes in the system. With Teradata Database 16.10 and the introduction of multiple hashmaps, that is no longer required. In this scenario, new capacity is added to the system for a longer-term capacity need. The customer then has the option to migrate tables to the larger configuration (i.e., more AMPs) at their discretion. Tables can be grouped so they are redistributed together, and expansion is no longer “all or nothing.”
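The multiple-hashmap idea can be sketched as follows. This is a conceptual model only, not Teradata internals: the hash function, the `table_maps` registry, and the AMP counts are all invented for illustration. The point is that each table carries the AMP count of the map it was distributed under, so tables on the old and new maps coexist on one system and each table can be migrated on its own schedule.

```python
import hashlib

def amp_for_row(row_key: str, amp_count: int) -> int:
    """Hash a row key to an AMP under a map with `amp_count` AMPs."""
    digest = hashlib.md5(row_key.encode()).hexdigest()
    return int(digest, 16) % amp_count

OLD_AMPS, NEW_AMPS = 8, 12   # hypothetical system expanded from 8 to 12 AMPs

# Each table remembers which hash map (AMP count) it was created under.
table_maps = {"sales": OLD_AMPS, "web_clicks": NEW_AMPS}

def route(table: str, row_key: str) -> int:
    """Send a row to the AMP dictated by its table's own hash map."""
    return amp_for_row(row_key, table_maps[table])

# Migrating a table is simply re-hashing its rows under the larger map,
# one table (or group of tables) at a time, while the rest stay put.
table_maps["sales"] = NEW_AMPS
```

Because `web_clicks` already uses the 12-AMP map while `sales` still resolves through the 8-AMP map until its entry is switched, nothing forces an all-at-once redistribution, which is exactly the flexibility the new hashmaps provide.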

This option is most applicable when customers need to expand to accommodate sustained data and processing growth, rather than a temporary surge.

Getting what you paid for and paying for what you get

Elasticity is about getting the resources you need to meet your workload while paying only for the resources you actually use at the time. By combining the elasticity options above, across the software and hardware spectrums, with the Teradata Database licensing options, customers can now match system resources to workload needs throughout the ebb and flow of their business processes.

Starting with Teradata in 1987, Rob Armstrong has contributed to virtually every aspect of the data warehouse and analytical processing arenas. Rob’s work in the computer industry has been dedicated to data-driven business improvement and more effective business decisions and execution. His roles have encompassed the design, justification, implementation and evolution of enterprise data warehouses.

In his current role, Rob continues the Teradata tradition of integrating data and enabling end-user access for true self-driven analysis and data-driven actions. Increasingly, he incorporates the world of non-traditional “big data” into the analytical process. He also has expanded the technology environment beyond the on-premises data center to include the world of public and private clouds to create a total analytic ecosystem.

Rob earned a B.A. degree in Management Science with an emphasis in mathematics and relational theory at the University of California, San Diego. He resides and works from San Diego.