I was asked the other day about the request cache, specifically, how to keep plans in the cache so the query does not have to undergo repetitive parsing and optimizing. A request cache exists on each parsing engine (PE) and holds generic plans that have been identified as potentially re-usable. For very short queries, request cache hits can reduce your elapsed time and provide more consistent response time. So the longer those re-usable plans stay in the cache, the better.

The Big Flush

You’re probably familiar with the way that plans get removed from the request cache periodically. Each PE flushes both its dictionary cache and request cache every 4 hours. The dictionary cache is flushed first, and always completely; the request cache is only flushed partially. In-use plans, and plans not dependent on statistics (such as primary index queries), are retained in the request cache. This flushing event is staggered so that no two PEs undergo flushing at the same time.
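To picture the staggering, here is a toy Python sketch that spreads PE flush times evenly across one 4-hour cycle. The offset formula is purely illustrative; Teradata's actual scheduling algorithm is internal.

```python
# Toy schedule for the staggered 4-hour flush described above: each PE
# gets its own offset inside the cycle so no two PEs flush at once.
# The even-spacing formula is an assumption, not Teradata's algorithm.
FLUSH_INTERVAL_MIN = 4 * 60  # flush cycle per PE, in minutes

def flush_offsets(num_pes):
    """Spread num_pes flush times evenly across one 4-hour cycle."""
    step = FLUSH_INTERVAL_MIN / num_pes
    return [round(i * step, 2) for i in range(num_pes)]
```

With 4 PEs, flushes would land at minutes 0, 60, 120, and 180 of the cycle, so no two coincide.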

Other Factors That Push Plans out of the Cache

Some of the other things I have seen that can cause entries to be removed from the request cache include:

Operations performed on objects: Collecting statistics, ALTER or DROP statements, and INSERT/SELECTs that change more than 10% of the rows will cause the affected objects to be removed from the request cache. To lessen the impact on the request cache, try to restrict these types of operations against objects used by the tactical apps during the times those tactical queries are running.

The limit on the number of cache entries has been exceeded: If request cache entries per PE would exceed the MaxRequestsSaved setting (default is 600), then plans will be released from the cache. I’ve talked to several sites that have raised this setting to its maximum of 2000, and all of them saw more cache hits as a result.

A really big request: The caching of a new plan is not allowed to push the total request cache size above 100 MB. To get big plans in, sometimes smaller plans will have to go. There’s no option to increase this limit, but you might consider directing sessions where you expect large plans and sessions where you expect small plans to different groups of PEs.

Available swap space on the node has fallen below some minimum healthy level: When the system is tight on memory, a minimal number of entries will be automatically purged from the request cache, least-recently-used first. This is not a common experience among Teradata sites, but it’s still a good idea to keep an eye on what’s happening with your memory.

The application is spread across a large number of PEs, and other non-tactical plans are pushing out the tactical plans: Consolidating the tactical application onto a small number of PEs can help keep the plans for the tactical applications in the request cache longer. The more frequently the plans are used, the less likely they are to be aged out.
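The interaction between the entry-count limit, the size ceiling, and least-recently-used aging described above can be sketched as a small bounded cache. This is a minimal Python model, not the real implementation; the names and the 600-entry/100 MB numbers simply mirror the settings discussed above.

```python
from collections import OrderedDict

# Toy model of per-PE request-cache eviction. MAX_REQUESTS_SAVED and
# MAX_CACHE_BYTES mirror the settings described above; the actual
# Teradata internals differ and are not public.
MAX_REQUESTS_SAVED = 600       # MaxRequestsSaved default
MAX_CACHE_BYTES = 100 * 2**20  # 100 MB total-size ceiling

class RequestCache:
    def __init__(self):
        self.entries = OrderedDict()  # request-hash -> plan size (bytes)
        self.total_bytes = 0

    def add(self, request_hash, plan_bytes):
        # Evict least-recently-used plans until the new plan fits
        # under both the entry-count and total-size limits.
        while self.entries and (len(self.entries) >= MAX_REQUESTS_SAVED
                                or self.total_bytes + plan_bytes > MAX_CACHE_BYTES):
            _, evicted = self.entries.popitem(last=False)
            self.total_bytes -= evicted
        self.entries[request_hash] = plan_bytes
        self.total_bytes += plan_bytes

    def hit(self, request_hash):
        # A cache hit moves the plan to the most-recently-used end,
        # which is why frequently used plans are less likely to age out.
        if request_hash in self.entries:
            self.entries.move_to_end(request_hash)
            return True
        return False
```

The `hit` method is the key point for tactical work: every reuse refreshes a plan's position, so concentrating a tactical app on fewer PEs raises each plan's reuse frequency on those PEs.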

A Surprise Factor

In our exploratory tactical query testing last year we discovered that a MultiLoad job can cause plans to be flushed from the request cache when the tactical query references the table undergoing loading. (Yes, the tactical queries were using access locking.) This happens because a utility lock must be set and unset as part of the MultiLoad job. Utility locks cause the table header to change, and when the table header changes, all plans related to that table get flushed from the request cache. We were running lots of short mini-batch loads against the same table we were issuing tactical queries against, so the effect was easy to see. Increasing MaxRequestsSaved reduced this impact significantly, which is the first thing I’d suggest you do if you find this happening to you.

How to Monitor Success

You can see how consistently your cached plans are being used by examining the CacheFlag value in DBQLogTbl. If the plan used was from the request cache, you’ll see a ‘T’ in that field. For more background on how cached plans are managed, see my article posted on Dev X last August called “To Peek or Not to Peek”.
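If you pull DBQLogTbl rows out for trending, a quick tally of CacheFlag gives you a hit rate over time. This is a minimal sketch: only the meaning of ‘T’ (plan came from the request cache) is from the text above; the function name and the row format are assumptions for illustration.

```python
# Tally request-cache hit rate from exported DBQLogTbl rows.
# Assumes each row is a dict with a 'CacheFlag' key; only the 'T'
# value (plan used from the request cache) is documented above.
def cache_hit_rate(rows):
    total = hits = 0
    for row in rows:
        total += 1
        if row.get("CacheFlag") == "T":
            hits += 1
    return hits / total if total else 0.0
```

Tracking this rate before and after a MaxRequestsSaved change is a simple way to verify the increase actually bought you more hits.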

You always want to be concerned about memory if you are seeing any swapping at all. See Woody's blog on how to assess memory issues; it may provide some tips. If you're not experiencing any memory issues now, then increasing MaxRequestsSaved may not have negative side effects. However, you need to keep an eye on memory after the increase.

There are some checks in place internally.

The MaxRequestsSaved setting can be increased, but that does not necessarily mean the specified number of cached plans will always be accommodated. If MaxRequestsSaved is greater than 300, a check is made against the size of the cache before additional plans are cached, to ensure that the cache size would not exceed an internal threshold.

In addition, if available swap space on the node has fallen below a minimum level needed for node health, then the least recently used cached plans are purged from the cache, to keep it smaller.

I don't know the exact process for doing this. It involves defining a host group that links sessions coming from specific IP addresses to specific PEs. Your CS person should be able to help you set this up, or you can call the support center for assistance.

OK, thanks for the direction. I was thinking along the same lines too. The only way to bind an application to a specific PE (or set of PEs) is to ensure that the application connects to the same node every time it runs. Generally there is a load balancer that lets a query run on a random node each time it comes in.

For tactical apps, if required, we would need to ensure that they do not go through the load balancer at all and instead use a specific node IP.

I come back to the discussion to ask for possible updates (for V15 and UDA too ...):

About those modifications of dictionary objects (table, column, table header, statistics) that have the effect of purging the cache: should we expect a purge of the request cache when tactical queries (sent via JDBC as macros or as Execute SQL, in our case) and TPT Load jobs run concurrently on the same table?

About the conditions for placing a query in the cache: will a query coming from two sessions (same host, client, connection method and text) be eligible too, whatever the session?

Executing queries by themselves will not cause the request cache to be spoiled. (By "spoiling" I mean selectively removing entries, in contrast to "purging" which removes all or most entries.)

However, if the table header is changed for any reason, that will cause request cache spoiling. When a load utility runs (such as TPT Load or TPT Update) it updates the table header of the table being loaded into at two different times: 1.) When the utility begins; and 2.) When the utility ends. At both those points in time the request cache will be flushed for any plans that come from queries that reference the table being loaded.
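The spoiling-versus-purging distinction drawn above can be sketched in a few lines of Python. This is only a conceptual model, assuming the cache tracks which tables each plan references; function and variable names are mine, not Teradata's.

```python
# Sketch of "spoiling" vs "purging" as defined above: spoiling removes
# only the cached plans that reference a given table (e.g. when its
# table header changes at load start and end); purging clears all or
# most entries. The dict-of-sets cache shape is an assumption.
def spoil(cache, table):
    """cache: dict mapping request-hash -> set of referenced tables."""
    doomed = [h for h, tables in cache.items() if table in tables]
    for h in doomed:
        del cache[h]
    return len(doomed)

def purge(cache):
    n = len(cache)
    cache.clear()
    return n
```

In the TPT Load scenario above, `spoil` would fire twice per load (utility begin and utility end), each time taking out every plan whose query touches the loaded table.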

In terms of your second question, queries coming from different sessions can use the same cached plan from the request cache. It is not a requirement that queries have to come from the same session in order to make use of a cached plan.

YOUR QUESTION: How does the optimizer recognize a "query" as being already in cache from another query being not ?

Strictly speaking, it is a request (a request can consist of multiple queries), not a query, that is being processed.

The parser hashes the request text and checks whether there is a hit in the cache. If there isn’t a hit, the request is not in the cache. If there is a hit, it verifies that the request text matches the cached request text and also checks some other attributes for a match. If everything matches, the request is in the cache; otherwise it is not. If it is cached, it may or may not have a plan: with a specific plan it won’t, with a generic plan it will, and so on.
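The lookup just described (hash the text, probe, then verify the full text and other criteria before declaring a hit) can be sketched like this. A minimal model: the hash function choice, the `context` field, and the entry layout are all assumptions for illustration.

```python
import hashlib

# Minimal sketch of the lookup described above: hash the request text,
# probe the cache, then verify the full text (hashes can collide) and
# the other match criteria before declaring a hit. Names illustrative.
def lookup(cache, request_text, context):
    key = hashlib.sha256(request_text.encode()).hexdigest()
    entry = cache.get(key)
    if entry is None:
        return None                       # no hit: not in cache
    if entry["text"] != request_text:     # guard against hash collision
        return None
    if entry["context"] != context:       # the "other things" must match too
        return None
    return entry                          # cached; may or may not carry a plan
```

The full-text comparison after the hash hit is the important step: a hash match alone is not enough to prove the request is the same one that was cached.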

YOUR QUESTION: I understand that every query is marked as an entry in the query cache, even before the execution plan is kept (as "generic") or not, right?

In general, every request (that is, its request text and a limited number of other items, but not the plan) is put in the cache to indicate the request has been seen, if it is not already there. There may be some exceptions where we don’t think that kind of request needs to be cached. These entries are relatively small since they don’t carry a plan. If a plan is decided to be cached, it is cached. If the request is not seen again soon, its entry will probably be pushed out of the cache to hold info about other requests.
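The lifecycle described above — a small text-only entry on first sighting, with a generic plan attached when the request comes back — can be sketched as follows. This is a simplified model under my own assumptions (caching the plan on the second sighting); the real decision about when to cache a generic plan is more involved.

```python
# Sketch of the entry lifecycle described above: first sighting stores
# a small, text-only entry (no plan); on a repeat sighting a generic
# plan is attached and reused thereafter. Simplified: the real rules
# for when a generic plan gets cached are more involved than "seen twice".
def process(cache, request_text, build_plan):
    entry = cache.get(request_text)
    if entry is None:
        cache[request_text] = {"seen": 1, "plan": None}  # text-only marker
        return build_plan(request_text)  # plan built but not cached yet
    entry["seen"] += 1
    if entry["plan"] is None:
        entry["plan"] = build_plan(request_text)  # now cache a generic plan
    return entry["plan"]
```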

Regards, -Carrie
