Teradata is a message passing system. Messages are sent from parsing engines to AMPs, and from AMPs to AMPs, and from AMPs to parsing engines. That’s the key way that components in a shared nothing architecture pass data and work requests among themselves.

When a message arrives on an AMP and represents work that needs to be done on that AMP, the message is assigned a “work type” based on the importance of the work to be done. Sixteen different work types are supported: Work00 to Work15.

Under usual conditions, all load utility jobs and all queries run using AMP worker tasks (AWTs) from the same message work types: Work00, Work01, and Work02. However, if you increase the number of AWTs per AMP above a certain threshold, all of your utility jobs are assigned to different work types and given their own reserve pools.
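The assignment described above can be modeled with a short sketch. This is purely illustrative: Work00 through Work15 are real Teradata work-type names, but the mapping logic, the threshold value, and the reserved utility work type below are assumptions, not Teradata internals.

```python
# Illustrative model of work-type assignment for arriving messages.
# The Work00..Work15 names are real; the selection logic, threshold,
# and reserved pool chosen here are assumptions for demonstration.

DEFAULT_WORK_TYPES = ("Work00", "Work01", "Work02")

def assign_work_type(is_utility: bool, awts_per_amp: int,
                     importance: int) -> str:
    """Pick a work type for an arriving message.

    importance: 0 (new work) up to 2 (most urgent spawned work).
    Utility jobs move to an assumed reserved work type once the
    configured AWTs per AMP exceed an assumed threshold.
    """
    UTILITY_RESERVE_THRESHOLD = 80  # hypothetical, site-tunable value
    if is_utility and awts_per_amp > UTILITY_RESERVE_THRESHOLD:
        return "Work08"  # hypothetical reserved utility pool
    return DEFAULT_WORK_TYPES[min(importance, len(DEFAULT_WORK_TYPES) - 1)]

# A query at the default AWT setting stays in the usual work types:
print(assign_work_type(is_utility=False, awts_per_amp=80, importance=0))
# Work00

# A utility job above the threshold lands in its own reserved pool:
print(assign_work_type(is_utility=True, awts_per_amp=100, importance=0))
# Work08
```

The point of the sketch is the branch: below the threshold, utilities and queries share the same three work types; above it, utilities are segregated into their own pools.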

If you monitor AWTs, or are otherwise interested in how they are being used, this posting describes the changes that affect your utility jobs and the options you have for managing them.

Tactical workload exceptions are in place to prevent tactical queries from consuming unreasonable amounts of resources. It is important to have this protection because the super-priority and almost unlimited access to resources given to work running in the Tactical tier with SLES 11 is easy to abuse.

Have you ever found yourself confused when setting up classification criteria for a new workload in Teradata? You’re not alone. This posting discusses the main principles at work when combining different classification criteria. It also provides some general tips to help you do effective, clean, predictable classification, whether on your workloads, your throttles, or your filters.

Traditionally, query optimizers depend on information available at optimization time, such as statistics, cost parameters, predicate values, and resource availability, to perform query transformations and optimization. The final plan for a request, referred to as a static plan, is chosen by computing the cost of each possible plan variation and selecting the least costly one. During this process, the optimizer assumes that all the information is accurate and generates the plan for the entire request (a request can consist of multiple statements/queries).
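The static-plan selection described above, enumerate the plan variations, cost each one, keep the cheapest, can be sketched as follows. The plan names, the toy cost model, and the `choose_static_plan` helper are illustrative assumptions, not Teradata optimizer code.

```python
# Sketch of cost-based static plan selection: given all candidate
# plan variations and a cost function built from optimization-time
# information (statistics, cost parameters, etc.), pick the cheapest.
# All names and numbers here are hypothetical.

def choose_static_plan(candidate_plans, cost_fn):
    """Return the least costly plan among all variations."""
    return min(candidate_plans, key=cost_fn)

# Toy cost model: total estimated rows touched across plan steps.
plans = [
    {"name": "hash_join_first",  "step_rows": [1000, 50, 50]},
    {"name": "merge_join_first", "step_rows": [200, 200, 40]},
]
best = choose_static_plan(plans, cost_fn=lambda p: sum(p["step_rows"]))
print(best["name"])  # merge_join_first
```

Because the costs are computed once, before execution, the chosen plan is "static": if the statistics were stale or the predicate estimates wrong, the optimizer still commits to this plan for the whole request.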

Earlier this year I posited that, given its exponential rate of growth, the amount of data collected for analysis is outgrowing the capacity of current analytical staffs to examine it all, and that the cumulative brain power needed to keep up with this growth will have to come from machine learning and artificial intelligence. That is now in the works.