Hi,
We are facing long parsing issues. Most of the sessions reporting high delays are also reporting high parser time, due to the workload chosen for session classification. One option I can think of is to introduce a separate high-priority workload for the PARSING phase of all sessions which are critical and suffering from high delays. I am seeing both ParsingTime and Parsing CPU Time greater than 30 seconds, and even three- and four-digit seconds for some of the sessions. As per my understanding, parsing time (both ParsingCPUTime and ParsingTime) is abnormal if it is greater than double-digit seconds. Please correct me if I am wrong.

Large CPU Time for parsing versus elongated elapsed Parsing Time (for relatively modest Parsing CPU) are two different things.

When the elapsed time is long relative to the CPU time, and particularly if the issue correlates with a specific SessionWDID, then a parsing-only workload may help. See Carrie's blog for details (which depend on the release of TD you are running): http://developer.teradata.com/blog/carrie

On the other hand, large Parsing CPU time typically indicates a complex query. If the request itself looks simple, perhaps there are multiple layers of complex views involved, or many joins, etc. A separate workload won't reduce the Parsing CPU. Sometimes large parsing CPU is necessary. If there are specific queries that seem to consume far more parsing CPU than other similar queries, try to determine what makes the problem queries different.

Same point as Fred mentioned. In the case of large CPU times, check the complexity of the SQL by running "show select...".
Depending on the DB release, there have been issues in the past with the existence of JIs in complex queries.
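For what it's worth, a minimal sketch of that check (the object name below is a placeholder; substitute the view or table from the problem request). The QUALIFIED option returns the DDL of every object the request references, which makes layered/nested views visible:

```sql
-- Show the CREATE text of all tables, views, and indexes referenced
-- by the request, so nested view layers and JIs become visible.
-- MyDb.MyView is a placeholder for the actual object in the slow query.
SHOW QUALIFIED SELECT * FROM MyDb.MyView;
```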

Yes, we are seeing elapsed time that is very long relative to ParserCPUTime (specific to session WDID), but not all the time, as shown below:

Row Labels   Sum of ParserCPUTime   Sum of QueryPrepSec   Sum of AMPSec   Sum of TotDelaytime
 8                 18,372.36            2,445,888.68         180,361.62          95,826.36
12                  9,903.17            2,022,695.26          66,527.88             602.53
13                  5,394.01            1,946,405.29          41,457.32             259.95
 3                  8,040.07            1,899,677.38         125,235.63          74,250.37
15                  5,747.61            1,353,337.37          47,052.16             380.46
 7                 12,750.79            1,176,524.13         207,700.98         128,698.83
 9                 12,324.01              638,505.84         132,456.17          39,696.64
11                  6,889.21              411,856.95          61,908.05             664.29
10                  8,806.21              390,505.97         102,365.60          13,492.49
14                  6,341.00               92,785.41          45,897.98              61.51

It is not specific to one complex query; it occurs across the whole environment during multiple periods.
We are having two issues here: one is the long parsing issue and the other is the session delay issue (ParserCPUTime vs. QryPrepSecs are relatively low).
As part of the solution, we are planning to follow the steps below to nail down the issue:
1) Clearing skew in the AccessRights table (currently we are having 76) by running checktable
2) Looking at options to implement a separate parser workload
3) Cache effectiveness
4) Whether the ETL window and reporting window are confined to their defined windows or not
Am I missing any other factors to consider in digging further?

Checktable has no effect on skew. Skew in AccessRights is usually addressed by implementing Roles and database-level GRANTs and then revoking redundant user / table level access rights; in particular you may want to revoke "automatic" rights granted to the creator of an object.
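A minimal sketch of that role-based approach (all object and user names below are hypothetical placeholders):

```sql
-- Consolidate per-user rights into a role with a database-level GRANT,
-- then revoke the now-redundant user/table-level rights so the
-- corresponding AccessRights rows disappear.
CREATE ROLE Reporting_R;
GRANT SELECT ON Sales_Db TO Reporting_R;          -- database-level grant
GRANT Reporting_R TO app_user1, app_user2;

-- Remove the redundant table-level right previously granted directly:
REVOKE SELECT ON Sales_Db.Orders FROM app_user1;
```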

Review DBQL (or use Viewpoint Rewind) to understand concurrency and throttle delays during the periods in question.
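A sketch of such a DBQL review, assuming the DBC.QryLogV view and its DelayTime/ParserCPUTime columns (verify the column names against your TD release; the date range is a placeholder):

```sql
-- Summarize throttle delay vs. parser CPU per workload and hour,
-- to see which WDIDs are delayed and when.
SELECT WDID,
       CAST(StartTime AS DATE)           AS LogDate,
       EXTRACT(HOUR FROM StartTime)      AS LogHour,
       COUNT(*)                          AS RequestCnt,
       SUM(DelayTime)                    AS TotDelaySec,
       SUM(ParserCPUTime)                AS TotParserCPUSec
FROM DBC.QryLogV
WHERE CAST(StartTime AS DATE) = DATE '2014-01-15'   -- placeholder date
GROUP BY 1, 2, 3
ORDER BY TotDelaySec DESC;
```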

I would also look at ResUsage (particularly ResSawtView) to see if Flow Control is a factor.

There is no ideal parsing time. A very complex query accessing a complex data model can take a while to parse in exchange for a shorter run time from finding the best plan. E.g., if spending a minute on parsing yields a plan that runs quickly, then the parsing time is well spent.

Would it be possible for you to describe the type and complexity of the queries and the data model being accessed? How many tables? Views? Nested views? Views with joins/unions/...? Indexes/JIs on tables? Complexity in the queries: complex expressions, CASE expressions, subqueries,...?

Note that parsing times can be affected by workload priority of the session/query, especially if the system is highly CPU busy. The overall CPU busy of the system should be examined during the periods when the queries are exhibiting the long parse times. If the workload has a critical response time, then the workload should be given a high priority which will also apply to the parsing of that workload.

Parsing time can be affected by dictionary locks. Review the query log to see if lots of DDL is being done at the times long parsing times are being experienced. DDL locks on the dictionary can make the parser wait for dictionary information to be retrieved.
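A sketch of that query-log check, assuming DBC.QryLogV and its StatementType column (statement-type values and column names may differ by release; the time window below is a placeholder):

```sql
-- Look for DDL logged during the window when long parse times occurred;
-- DDL takes dictionary locks that can make the parser wait.
SELECT StartTime, UserName, StatementType, QueryText
FROM DBC.QryLogV
WHERE StatementType IN ('Create Table', 'Drop Table', 'Alter Table',
                        'Create View', 'Replace View')
  AND StartTime BETWEEN TIMESTAMP '2014-01-15 08:00:00'   -- placeholder
                    AND TIMESTAMP '2014-01-15 09:00:00'   -- window
ORDER BY StartTime;
```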

Delays are mentioned. Make sure to segregate the delay time from the parsing time. The delay time comes from workload throttling rules and is completely independent of parse time. A critical response-time query should be in a workload that is not throttled.

Anything you can provide in the way of details or findings will be appreciated. Parse time is an area we are actively looking at, but there are many contributors, and more data would be valuable for understanding each variation of the issue.