They have gone again! Forgive the trip down memory lane, but I think it is worth reviewing their history.

Prior to PeopleTools 8.14, if you specified a key or alternate search field in a record as descending, then wherever that column appeared in the automatically generated key and alternate search key indexes it would be descending. PeopleTools would add the DESC keyword after the column in the CREATE INDEX DDL. Similarly, columns can be specified as descending in user indexes (that is to say, ones created by the developer with index IDs A to Z).

In PeopleTools 8.14 to 8.47, descending indexes were not built by Application Designer because of a problem with the descending key indexes in some versions of Oracle 8i.

PeopleSoft had previously recommended setting an initialisation parameter on 8i to prevent descending indexes from being created even if the DESC keyword was specified:

_IGNORE_DESC_IN_INDEX=TRUE

From PeopleTools 8.48 the descending keyword came back, because this was the first version of PeopleTools that was only certified from Oracle 9i, in which the descending index bug never occurred (see the October 2007 blog posting 'Descending Indexes are Back!').

In PeopleTools 8.54, there are again no descending indexes because the descending keyword has been omitted from the column list in the CREATE INDEX DDL. You can still specify descending keys in Application Designer because that controls the order in which rows are queried into scrolls in the PIA. You can also still specify descending order on user indexes, but it has no effect upon either the application or the index DDL.

I haven’t found any documentation that explains why this change has been made. This time there is no suggestion of a database bug. However, I think that there are good database design reasons behind it.

Normally, creation of a primary key constraint automatically creates a unique index to police it. It is possible to create a primary key constraint using a pre-existing index. That index does not have to be unique, but it might as well be. However, there are some limitations.

You cannot create a primary key on nullable columns - that is a fundamental part of the relational model. This is rarely a problem in PeopleSoft, where only dates that are not marked 'required' in Application Designer are created nullable in the database.

You can create a unique index on nullable columns, which is probably why PeopleSoft has always used unique indexes.

You cannot use a descending index in a primary key constraint because it is implemented as a function-based index.
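A minimal demonstration of that restriction (table and index names are illustrative, and the exact error may vary by version):

CREATE TABLE t (a NUMBER NOT NULL, b VARCHAR2(10));
CREATE UNIQUE INDEX t_desc ON t (a DESC);
ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY (a) USING INDEX t_desc;
-- expected to fail, e.g. ORA-14196: Specified index cannot be used to enforce the constraint.
CREATE UNIQUE INDEX t_asc ON t (a);
ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY (a) USING INDEX t_asc;  -- succeeds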

Non-PeopleSoft digression: you can create a primary key on a virtual column. An index on the virtual column is not function-based. So you can achieve the same effect if you move the function from the index into a virtual column, and then you can have a primary key on the function. However, PeopleTools Application Designer doesn't support virtual columns.
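A sketch of that digression, with illustrative names (the virtual column carries the function, and the primary key is created on it):

CREATE TABLE t2
( a      NUMBER NOT NULL
, a_desc NUMBER GENERATED ALWAYS AS (-a) VIRTUAL   -- the function, moved into a virtual column
);
CREATE UNIQUE INDEX t2_pk ON t2 (a_desc);          -- a regular index, not function-based
ALTER TABLE t2 ADD CONSTRAINT t2_pk PRIMARY KEY (a_desc) USING INDEX t2_pk;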

I think that descending keys have been removed from PeopleTools because:

It permits the creation of primary key constraints using the unique indexes.

It does not pose any performance threat. In Oracle, index leaf blocks are chained in both directions so it is possible to use an ascending index for a descending scan and vice versa.

Update 20.4.2016: There is an optimisation in Oracle 11.2.0.4 that improves the performance of the MAX() function in correlated sub-queries on ascending indexes only. This will benefit all PeopleSoft applications, but especially HCM.

What are the advantages of having a primary key rather than just a unique constraint?

The optimizer can only consider certain SQL transformations if there is a primary key. That mainly affects star transformation.

It allows the optimizer to rewrite a query to use a materialized view.

It allows a materialized view refresh to be based on the primary key rather than the rowid (the physical address of the row in a table). This can save you from performing a full refresh of the materialized view if you rebuild the table (I will come back to materialized views in another posting).

If you are using logical standby, you need to be able to uniquely identify a row of data, otherwise Oracle will perform supplemental logging. Oracle will additionally log all bounded-size columns (in PeopleSoft, that means everything except LOBs). Oracle can use a non-null unique constraint, but it cannot use a unique function-based index.

Logical standby can be a useful way to minimise downtime during a migration, for example, when migrating the database from a proprietary Unix platform to Linux, where there is an endian change. Minimising supplemental logging would definitely be of interest in this case.

The descending index change is not something that can be configured by the developer or administrator. It is just something that is hard coded in Application Designer and Data Mover that changes the DDL that they generate.

There are some considerations on migration to PeopleTools 8.54:

It will be necessary to rebuild all indexes with descending keys to remove the descending keys.

There are a lot of descending indexes in a typical PeopleSoft application (I counted the number of function-based indexes in a typical system: HR ~11,000, Financials ~8,500). If you choose to do this during the migration it may take considerable time.

In Application Designer, if your build script settings are set to only recreate an index if modified, Application Designer will not detect that the index has a descending key and rebuild it. So you will have to work out for yourself which indexes have descending key columns and handle the rebuild manually (see the query after this list).

With migration to PeopleTools 8.54 in mind, you might choose to prevent Oracle from building descending indexes by setting _IGNORE_DESC_IN_INDEX=TRUE. Then you can handle the rebuild in stages in advance of the migration.
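A query along these lines (assuming SYSADM as the PeopleSoft owner) should list the indexes with descending key columns that will need rebuilding:

SELECT i.table_name, i.index_name
FROM   dba_indexes i
WHERE  i.owner = 'SYSADM'            -- assumed PeopleSoft owner
AND    EXISTS (
         SELECT 'x'
         FROM   dba_ind_columns ic
         WHERE  ic.index_owner = i.owner
         AND    ic.index_name  = i.index_name
         AND    ic.descend     = 'DESC')
ORDER BY i.table_name, i.index_name
/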

Conclusion

I think the removal of descending indexes from PeopleSoft is a sensible change that enables a number of Oracle database features, while doing no harm.

The UKOUG PeopleSoft Roadshow 2015 comes to London on 31st March 2015. In a moment of enthusiasm, I offered to talk about new and interesting features of PeopleTools 8.54 from the perspective of an Oracle DBA.

I have been doing some research, and have even read the release notes! As a result, I have picked out some topics that I want to talk about. For each, I will discuss how the feature has been implemented, and what I think are its benefits and drawbacks.

I have been doing some on-line performance tuning on a PeopleSoft Financials system using PeopleSoft Performance Monitor (PPM). End-users have collected verbose PPM traces. Usually, when I use PPM in a production system, all the components are fully cached by the normal activity of the users (except when the application server caches have recently been cleared). However, when working in a user test environment it is common to find that the components are not fully cached. This presents two problems.

The application servers spend quite a lot of time executing queries on the PeopleTools tables to load the components, pages and PeopleCode into their caches. We can see in the screenshot of the component trace that there is a warning message that component objects are not fully cached, and that these cache misses skew timings.

Figure 1. Component trace as collected by PPM

If I go further down the same page and look in the SQL Summary, I can see SQL operations against PeopleTools tables (they are easily identifiable in that they generally do not have an underscore in the third character). Not only are 5 of the top 8 SQL operations related to PeopleTools tables, we can also see that they account for over 13,000 executions, which means there are at least 13,000 rows of additional data to be read from PSPMTRANSHIST.

Figure 2. SQL Summary of PPM trace with PeopleTools SQL

When I open the longest running server round trip (this is also referred to as a Performance Monitoring Unit or PMU), I can only load 1001 rows before I get a message warning that the maximum row limit has been reached. The duration summary and the number of executions and fetches cannot be calculated, and hence 0 is displayed.

Figure 3: Details of longest PMU with PeopleTools SQL

Another consequence of the PeopleTools data is that it can take a long time to open the PMU tree. There is no screenshot of the PMU tree here because in this case I had so much data that I couldn't open it before the transaction timed out!

Solution

My solution to this problem is to delete the transactions that relate to PeopleTools SQL and correct the durations, and the number of executions and fetches held in summary transactions. The rationale is that these transactions would not normally occur in significant quantities in a real production system, and there is not much I can do about them when they do.

The first step is to clone the trace. I could work on the trace directly, but I want to preserve the original data.

PPM transactions are held in the table PSPMTRANSHIST. They have a unique identifier, PM_INSTANCE_ID. A single server round trip, also called a Performance Monitoring Unit (PMU), will consist of many transactions. They can be shown as a tree, and each transaction has another field, PM_PARENT_INST_ID, which holds the instance of its parent. This links the data together, and we can use hierarchical queries in Oracle SQL to walk the tree. Another field, PM_TOP_INST_ID, identifies the root transaction in the tree.

Cloning a PPM trace is simply a matter of inserting data into PSPMTRANSHIST. However, when I clone a PPM trace I have to make sure that the instance numbers are distinct but still link correctly. In my system I can take a very simple approach. All the instance numbers actually collected by PPM are greater than 10^16. So, I will simply use the modulus function to consistently alter the instances to be different. This approach may break down in future, but it will do for now.

On an Oracle database, PL/SQL is a simple and effective way to write simple procedural processes. I have written two anonymous blocks of code. Note that the cloned trace will be purged from PPM like any other data by the delivered PPM archive process.
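A sketch of the cloning block might look like this; fetching into a record keeps it independent of the full PSPMTRANSHIST column list, and the top instance ID shown is a hypothetical placeholder:

DECLARE
  -- assumption: collected instance IDs are all greater than 1e16, so MOD()
  -- yields new, non-colliding IDs while preserving the parent/child links
  l_mod CONSTANT NUMBER := 1e16;
BEGIN
  FOR r IN (
    SELECT *
    FROM   pspmtranshist
    WHERE  pm_top_inst_id = 1234567890123456789  -- hypothetical trace to clone
  ) LOOP
    r.pm_instance_id    := MOD(r.pm_instance_id,    l_mod);
    r.pm_parent_inst_id := MOD(r.pm_parent_inst_id, l_mod);
    r.pm_top_inst_id    := MOD(r.pm_top_inst_id,    l_mod);
    INSERT INTO pspmtranshist VALUES r;  -- record insert: all columns copied
  END LOOP;
  COMMIT;
END;
/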

Now I will work on the cloned trace. I want to remove certain transactions:

PeopleTools SQL. Metric value 7 reports the SQL operation and SQL table name. So if the first word is SELECT and the second word is a PeopleTools table name then it is a PeopleTools SQL operation. A list of PeopleTools tables can be obtained from the object security table PSOBJGROUP.

Implicit Commit transactions. This is easy - it is just transaction type 425.

Having deleted the PeopleTools transactions, I must also do the following (a sketch of the hierarchical correction appears after this list):

Correct the transaction duration for any parents of a deleted transaction. I work up the hierarchy of transactions and deduct the duration of the transaction that I am deleting from all of its parents.

Transaction types 400, 427 and 428 all record PeopleTools SQL time (metric 66). When I come to such a transaction, I also deduct the duration of the deleted transaction from the PeopleTools SQL time metric in any parent transaction.

Delete any children of the transactions that I delete.

I must also count each PeopleTools SQL Execution transaction (type 408) and each PeopleTools SQL Fetch transaction (type 414) that I delete. These counts are also deducted from the summaries on the parent transaction 400.
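A sketch of the hierarchical correction and deletion (the duration column name, PM_TRANS_DURATION, is recalled from memory, and the bind variables are placeholders, so verify against your PSPMTRANSHIST record):

-- deduct a deleted transaction's duration from every ancestor,
-- walking up the tree via PM_PARENT_INST_ID
UPDATE pspmtranshist
SET    pm_trans_duration = pm_trans_duration - :deleted_duration
WHERE  pm_instance_id IN (
       SELECT pm_parent_inst_id
       FROM   pspmtranshist
       START WITH pm_instance_id = :deleted_instance_id
       CONNECT BY PRIOR pm_parent_inst_id = pm_instance_id);

-- delete the whole subtree beneath the deleted transaction
DELETE FROM pspmtranshist
WHERE  pm_instance_id IN (
       SELECT pm_instance_id
       FROM   pspmtranshist
       START WITH pm_instance_id = :deleted_instance_id
       CONNECT BY PRIOR pm_instance_id = pm_parent_inst_id);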

The summaries in transaction 400 are used on the 'Round Trip Details' components, and if they are not adjusted you can get misleading results. Without the adjustments, I have encountered PMUs where more than 100% of the total duration is spent in SQL - which is obviously impossible.

Although this technique of first cloning the whole trace and then deleting the PeopleTools operations can be quite slow, it is not something that you are going to do very often.

Now, I have a second PPM trace that I can open in the analytic component.

Figure 4: Original and Cloned PPM traces

When I open the cloned trace, both timings in the duration summary have reduced, as have the number of executions and fetches. The durations of the individual server round trips have also reduced.

Figure 5: Component Trace without PeopleTools transactions

All of the PeopleTools SQL operations have disappeared from the SQL summary.

Figure 6: SQL Summary of PPM trace after removing PeopleTools SQL transactions

The SQL summary now only has 125 rows of data.

Figure 7: SQL Summary of PMU without PeopleTools SQL

Now, the PPM tree component opens quickly and without error.

Figure 8: PMU Tree after removing PeopleTools SQL

There may still be more transactions in a PMU than I can show in a screenshot, but I can now find the statement that took the most time quite quickly.

Figure 9: Long SQL transaction further down same PMU tree

Conclusions

I think that it is reasonable and useful to remove PeopleTools SQL operations from a PPM trace. In normal production operation, components will mostly be cached, and this approach renders traces collected in non-production environments both usable in the PPM analytic components and more realistic for performance tuning. However, when deleting some transactions from a PMU, it is essential that summary data held in other transactions in the same PMU is also corrected, so that the metrics remain consistent.

This article explains how to determine the overhead of using literal values rather than bind variables in SQL submitted by PeopleSoft Application Engine programs. Using a combination of PeopleSoft Application Engine Batch Timings and Oracle Active Session History (ASH), it is possible to determine where it is most effective to alter this behaviour by setting the ReUse attribute.

ReUse Statement Flag

I originally wrote about the Performance Benefits of ReUseStatement in Application Engine over 5 years ago, and the problem is no less significant today than it was then. There are still many places in the delivered PeopleSoft application that would benefit from it. However, it is not possible to simply set the attribute across all Application Engine programs, because it will cause errors in steps where SQL is dynamically generated, so each case must be considered. Where the ReUse attribute is changed, it must be tested and migrated like any other customisation.

Recently, I have been encountering problems in a number of places on a Financials system which were resolved by enabling ReUseStatement. So I started by calculating how much time was lost by not setting it.

Application Engine Batch Timings

If an Application Engine step is not reused, then it is compiled prior to every execution, during which the Application Engine bind variables are resolved to literals. The number and duration of compilations and executions are reported in the batch timings, the production of which is controlled with the TraceAE flag in the Process Scheduler configuration file (psprcs.cfg).

Whatever other TraceAE flags you set, I would always recommend that you set 128 and 1024 so that batch timings are always emitted to both file and database. The overhead of enabling them is negligible because they are managed in memory at run time and physically written once when the Application Engine program ends.

NB: The settings in the configuration file can be overridden by command line options. If you specify the -TRACE parameter in the Process Scheduler definitions, remember to also set these flags.
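For example (the flag values are as described above; 128 writes batch timings to file and 1024 to the database):

; psprcs.cfg - Process Scheduler configuration file, Trace section
; 128 = batch timings to file, 1024 = batch timings to database tables
TraceAE=1152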

Batch timings are written to the AE trace file at the end of an Application Engine program, and to the PS_BAT_TIMINGS PeopleTools tables at the successful end of an Application Engine program. It can be useful to have batch timings in the trace file of an Application Engine program that failed, because they are not written to the database. As your system runs, you will build up batch timings for all of your successful Application Engine processes (which will be most of them). This is a useful source of performance metrics.

Compilations, Execution and ReUse

In this example, the number of compilations is equal to the number of executions because ReUseStatement is not set.

So we can use the batch timings to identify steps where ReUseStatement is not set, because they have as many compilations as executions, and then we can profile the top statements.

Profile Compilations

This query produces a simple profile of batch timings for statements (a simplified sketch follows the description below).

Sub-query x extracts batch timings for statements with more than one compilation in processes that ended in the last 7 days.

There is a long-standing bug in batch timings where negative timings can be returned, because the clock that returns milliseconds recycles back to zero every 2^16 milliseconds. Sub-query y calculates an adjustment that is applied in sub-query z if the timing is negative.

Arbitrarily, I am only looking at statements with more than a total of 10000 compilations.
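A simplified sketch of such a query, omitting the negative-timing adjustment described above (the PS_BAT_TIMINGS_LOG and PS_BAT_TIMINGS_DTL column names are recalled from memory, so verify them on your system):

SELECT   d.bat_program_name, d.detail_id
,        SUM(d.compile_count)                  compile_count
,        SUM(d.compile_time)/1000              compile_secs
,        SUM(d.execute_time+d.fetch_time)/1000 exec_fetch_secs
FROM     ps_bat_timings_log l
,        ps_bat_timings_dtl d
WHERE    d.process_instance = l.process_instance
AND      l.enddttm >= SYSDATE-7          -- processes that ended in the last 7 days
AND      d.compile_count > 1             -- statements compiled more than once
GROUP BY d.bat_program_name, d.detail_id
HAVING   SUM(d.compile_count) >= 10000   -- the arbitrary threshold mentioned above
ORDER BY compile_secs DESC;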

So now I have a list of steps with lots of compilations. I know how long they took, but I don't know how much time I might save by enabling ReUseStatement. That will save some time in Application Engine, but it will also cut down database parse time. So now I need to determine the parse time from ASH data.

This query is based on the previous one, but includes scalar queries on the ASH data for each step.

WARNING: This query can run for a long period because it has to scan ASH data for each entry in BAT_TIMINGS_DTL.

This time in sub-query x I am looking for a rolling 7-day period up to the last hour, because ASH data will have been flushed to the ASH repository.

In sub-query y there are two scalar queries that retrieve the DB time spent on hard parse, and all DB time for each batch timing entry. These queries count 10 seconds for each distinct sample ID to estimate elapsed time rather than total DB time.

The query does not group by process name because one step can be called from many Application Engine programs and I want to aggregate the total time across all of them.
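The scalar queries were along these lines; this sketch estimates hard parse time for a single step, and how you match the ASH rows back to the step (module and action here) is an assumption that will be system-specific:

SELECT 10*COUNT(DISTINCT h.sample_id) hard_parse_secs  -- 10s per distinct sample
FROM   dba_hist_active_sess_history h
WHERE  h.in_hard_parse = 'Y'
AND    h.sample_time >= TRUNC(SYSDATE,'HH24')-7   -- rolling 7-day period...
AND    h.sample_time <  TRUNC(SYSDATE,'HH24')     -- ...up to the last whole hour
AND    h.module = 'PSAE'                          -- assumption: matching ASH rows
AND    h.action LIKE '%uMarkInv%'                 -- to a step is system-specific
;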

Now we can also see how much time the database is spending on hard parse for each step, and it is this time that is likely to be saved by enabling ReUseStatement.

However, before we can enable the ReUseStatement attribute we must first manually inspect each step in Application Designer and determine whether doing so would break anything. The Comment column in this profile was added manually as I did that. Some statements I can change; some I have to accept the overhead.

More DB Time is reported for FS_CEDT_ECMB.uMarkInv.uMarkInv.S than is reported by batch timings. This is a consequence of ASH sampling, where we count 10 seconds for each sample.

Conclusion

Setting ReUseStatement is very simple because it doesn't involve changing SQL, but there are lots of places where it could be set. This technique picks out the relatively few places where doing so could potentially have a significant effect.

Or, to put it another way, I want to change or drop this index, who and what will I impact?

The Challenge

The problem that I am going to outline is certainly not exclusive to PeopleSoft, but I am going to illustrate it with examples from PeopleSoft. I often find tables with far more indexes than are good for them.

The Application Designer tool makes it very easy for developers to add indexes to tables. Sometimes, too easy!

Sometimes, DBAs are too quick to unquestioningly follow the advice of the Oracle tuning advisor to add indexes.

Recently, I have been working on 3 different PeopleSoft Financials systems where I have found major tables with a host of indexes.

There are several concerns:

Indexes are maintained during data modification. The more indexes you have, the greater the overhead.

The more indexes you have, particularly if they lead on the same columns, the more likely Oracle is to use the wrong one, resulting in poorer performance.

There is of course also a space overhead for each index, but this is often of less concern.

If you can get rid of an index, Oracle doesn't store, maintain or use it.

In some cases, I have wanted to remove unnecessary indexes, and in others to adjust indexes. However, this immediately raises the question of where are these indexes used, and who will be impacted by the change. Naturally, I turn to the Active Session History (ASH) to help me find the answers.

Index Maintenance Overhead during DDL

ASH reports the object number, file number, block number and (from 11g) row number within the block being accessed by physical file operations. However, the values reported in v$active_session_history (and the other ASH views) are not reliable for other events, because they are merely left over from the previous file event that reported them. So, we can profile the amount of time spent on physical I/O on different tables and indexes, but not for other forms of DB time, such as CPU time, spent accessing the blocks in the buffer cache.

Let me take an extreme example from PeopleSoft Global Payroll. The table PS_GP_RSLT_ACUM is one of the principal result tables. It has only a single unique index (with the same name). The table is populated with the simplest of insert statements.

I can profile the ASH data for just this statement over the last week on a production system. Note that DBA_OBJECTS and DBA_DATA_FILES are outer joined to the ASH data and only matched for events like 'db file%'.

A full payroll calculation inserts over 3 million rows on this particular system. The calculation is run incrementally several times per week during which old rows are deleted and newly recalculated rows inserted. Looking at just this insert statement:

30% of the time is spent on CPU operations, which we cannot profile further with ASH.

38% of the time is spent reading from the table and index, yet this is a simple INSERT … VALUES statement.

More time is spent reading the index than the table. That is not a surprise. When you insert a row into a table, you also insert it into the index. Rows in index leaf blocks are ordered by the key columns, and the new entry must go into the right place, so you have to read down the index from the root block, through the branch blocks, to find the correct leaf block for the new entry.

[Digression: counter-intuitively, index compression can improve DML performance. It does for this index. The overhead of the compression processing can be outweighed by the saving in physical I/O. It depends.]

Profile Physical I/O by Object

I can twist this query around and profile DB time by object for 'db file%' events:
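A sketch of that profile; the object join is only meaningful for 'db file%' events, for the reason given above:

SELECT o.object_type, o.object_name
,      SUM(10) ash_secs                 -- 10 seconds of DB time per AWR ASH sample
FROM   dba_hist_active_sess_history h
       LEFT OUTER JOIN dba_objects o ON o.object_id = h.current_obj#
WHERE  h.event LIKE 'db file%'          -- obj# is only reliable for physical I/O events
AND    h.sample_time >= SYSDATE-7
GROUP BY o.object_type, o.object_name
ORDER BY ash_secs DESC;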

This is a worthwhile exercise: it shows the sources of physical I/O in an application.

However, if you want to find where an index is used, then this query will also identify SQL_IDs where the index is either used in the query or maintained by DML. If I am interested in looking for places where changing or deleting an index could have an impact then I am only interested in SQL query activity. ASH samples which relate to index maintenance are a false positive. Yet, I cannot simply eliminate ASH samples where the SQL_OPNAME is not SELECT because the index may be used in a query within the DML statement.

Another problem with this method is that it matches SQL to ASH by object ID. If someone has rebuilt an index, then its object number changes.

A different approach is required.

Index Use from SQL Plans Captured by AWR

During an AWR snapshot, the top-n SQL statements by each SQL criterion in the AWR report (Elapsed Time, CPU Time, Parse Calls, Shareable Memory, Version Count) are captured (see dbms_workload_repository). The SQL plans are exposed by the view DBA_HIST_SQL_PLAN.

On PeopleSoft systems, I generally recommend decreasing the snapshot interval from the default of 60 minutes to 15. The main reason is that SQL gets aged out of the library cache very quickly in PeopleSoft systems. They generate lots of dynamic code, often with literal values rather than bind variables. Cursor sharing is not recommended for PeopleSoft, so different bind variables result in different SQL_IDs. The dynamic code also results in different SQL IDs even with cursor sharing. Therefore, increasing the snapshot frequency means that you will capture more SQL statements. This will increase the total volume of the AWR repository simply because there are more snapshots. However, the overall volume of ASH data captured does not change; it just gets copied to the repository earlier.
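That is a one-line change, for example:

EXEC dbms_workload_repository.modify_snapshot_settings(interval => 15)  -- minutes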

On DBA_HIST_SQL_PLAN the object ID, owner, type and name are recorded, so I can find the plans which referenced a particular object. I am going to carry on with the example from a PeopleSoft Financials system, and look at indexes on the PS_PROJ_RESOURCE table.

These are some of the indexes on PS_PROJ_RESOURCE. We have 4 indexes that all lead on PROCESS_INSTANCE. I suspect that not all are essential, but I need to work out what is using them, and which one I should retain.

I find it easier to extract the ASH data to my own working storage table. For each index on PS_PROJ_RESOURCE, I am going to extract a distinct list of plan hash values. I will then extract all ASH data for those plans. Note that I have not joined on SQL_ID to DBA_HIST_SQL_PLAN. That is because different SQL_IDs can produce the same execution plan. The plan is equally valid for all SQL_IDs that produce it, not just the one where the SQL_ID also matches.
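For example, for one of the indexes (SYSADM is the assumed owner, and the working-storage table name is mine):

CREATE TABLE my_ash_psl AS
SELECT h.*
FROM   dba_hist_active_sess_history h
WHERE  h.sql_plan_hash_value IN (
       SELECT DISTINCT p.plan_hash_value
       FROM   dba_hist_sql_plan p
       WHERE  p.object_owner = 'SYSADM'            -- assumed PeopleSoft owner
       AND    p.object_name  = 'PSLPROJ_RESOURCE'  -- the index of interest
       );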

I am fortunate that PeopleSoft is a well instrumented application. Module and Action are set to fairly sensible values that tell me whereabouts in the application the ASH sample originated. In the following query, I have omitted any ASH data generated by SQL*Plus, Toad, or SQL Developer, and also any generated by Oracle processes, to prevent statistics collection jobs being included.

The next stage is to look at individual SQL statements. This query looks for the SQL statements that are using a particular index on PS_PROJ_RESOURCE. If I can't find the SQL statement that cost the most time, then I just choose another SQL statement with the same plan.

I have found that sometimes a plan is captured by AWR, but the SQL statement is not. Personally, I think that is a bug. Working around it has made the following query quite complicated.

SELECT * FROM table(dbms_xplan.display_awr('ga2x2u4jw9p0x',2282068749,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:6760*/;
SELECT * FROM table(dbms_xplan.display_awr('9z5qsq6wrr7zp',3665912247,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:3500*/;
SELECT * FROM table(dbms_xplan.display_awr('b28btd6k3x8jt',1288409804,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:3060*/;
SELECT * FROM table(dbms_xplan.display_awr('avs70c19khxmw',2276811880,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:2660*/;
SELECT * FROM table(dbms_xplan.display_awr('b78qhsch85g4a',1019599680,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:1960*/;
SELECT * FROM table(dbms_xplan.display_awr('65kq2v1ubybps',3138703971,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:1820*/;
SELECT * FROM table(dbms_xplan.display_awr('1dj17ra70c801',1175874548,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:1460*/;
SELECT * FROM table(dbms_xplan.display_awr('3w71v896s7m5d',3207074729,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:500*/;
SELECT * FROM table(dbms_xplan.display_awr('35mz5bw7p5ubw',2447377432,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:170*/;

Ultimately, I have needed to look through the SQL plans that use an index to decide whether I need to keep that index, or whether the statement would perform adequately using another index. In this case, on this particular system, I think the index PSMPROJ_RESOURCE would be adequate for this statement, and I would consider dropping PSLPROJ_RESOURCE.

I carried on with the examination of SQL statements and execution plans to determine whether each index is really needed, or whether another index (or even no index at all) would do as well. This decision also requires some background knowledge about the application. Eventually, I decided that I want to drop the J, L and N indexes on PS_PROJ_RESOURCE and just keep M.

Limitations of Method

AWR does not capture all SQLs, nor all SQL plans. First the SQL has to be in the library cache, and then it must be one of the top-n. A SQL statement that is efficient because it uses an appropriate index may not be captured, and so will not be detected by this approach. This might lead you to erroneously believe that the index could be dropped.

ASH data is purged after a period of time, by default 31 days. If an index is only used by a process that has not run within the retention period, then it will not be detected by this approach. This is another reason to retain ASH and AWR in a repository for a longer period. I have heard 400 days suggested, so that you have ASH for a year and a month.

However, this also causes the SYSAUX tablespace to become very large, so I would suggest regularly moving the data to a separate database. I know one customer who has built a central AWR repository for all their production and test databases and automated the regular transfer of data. That repository has been of immense diagnostic value.

[Update] This analysis will not detect index use in support of constraint validation (PeopleSoft doesn't use database referential integrity constraints). As Mark Farnham points out below, that may be a reason for retaining a particular index.

Getting Rid of Indexes

Obviously, any index changes need to be tested carefully in all the places that reference the index, but on the other hand it is not viable to do a full regression test every time you want to change an index.

Therefore, if all the testing is successful and you decide to go ahead and drop the index in production, you might prefer to make it invisible first for a while before actually dropping it. It is likely that the indexes you choose to examine are large and will take time to rebuild. An invisible index will not be used by the optimizer, but it will continue to be maintained during DML. If there are any unfortunate consequences, you can immediately make the index visible without having to rebuild it.
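For example:

ALTER INDEX sysadm.pslproj_resource INVISIBLE;
-- the optimizer will ignore it, but DML still maintains it; if anything regresses:
ALTER INDEX sysadm.pslproj_resource VISIBLE;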

Over the years Oracle has provided a number of plan stability technologies to control how SQL statements are executed. At the risk of over-simplification, Outlines (deprecated in 11g), Profiles, Baselines and Patches work by injecting a set of hints into a SQL statement at parse time. There is quite a lot of advice from Oracle to use these technologies to fix errant execution plans rather than hint the application. I think it is generally good advice; however, there are times when this approach does not work well with PeopleSoft, and that is due to the behaviour and structure of PeopleSoft rather than the Oracle database.

It is possible to produce a SQL profile from a plan captured by AWR. Part of the distribution for the SQLT Diagnostic Tool (Doc ID 215187.1) is a script called coe_xfr_sql_profile.sql, written by Carlos Sierra. The only thing I would change in the delivered script (for use with PeopleSoft, and as suggested in a comment) is to create the profile with FORCE_MATCHING so that similar statements with different literal values still match.

The Slings and Arrows of outrageous execution plans

Let's take an example of a statement (from the vanilla Financials product that has not been customised) that performed poorly because it didn't generate a good execution plan (although I have cut out most of the statement for readability). Note that it references instance 5 of the PeopleTools temporary record CA_SUM_TAO.

However, below is the plan we get on instance 4. We get this plan because a profile was applied to this statement in the past. Now we are on a different non-shared instance of the temporary table, so the profile cannot match because we are on different objects, and we get the same problem, but on a different non-shared instance of the temporary record. Different literal values, such as those for process instance, can be handled by FORCE_MATCHING, but not different tables: as far as the database is concerned, this is a totally different SQL statement.

Note
-----
   - SQL profile "coe_gn3n77gs6xj2a_3552771247" used for this statement

Of course, the statement on instance 4 had a profile because it was added as a short-term fix and then left in situ long term. It worked fine until a process errored, leaving the non-shared instance of the temporary record allocated to that process instance, and so PeopleSoft allocated instance 5 on the next execution.

So we could just create another profile using the coe_xfr_sql_profile.sql script.
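The generated script ultimately calls DBMS_SQLTUNE.IMPORT_SQL_PROFILE, so the edit amounts to setting FORCE_MATCH, roughly as in this sketch (the statement text and the hint lines from the captured plan's outline are elided here):

DECLARE
  l_sql_text CLOB := 'SELECT ...';  -- full text of the problem statement (elided)
BEGIN
  sys.dbms_sqltune.import_sql_profile(
    sql_text    => l_sql_text,
    profile     => sys.sqlprof_attr(
                     'BEGIN_OUTLINE_DATA',
                     /* ... hint lines extracted from the captured plan ... */
                     'END_OUTLINE_DATA'),
    name        => 'coe_gn3n77gs6xj2a_3552771247',
    replace     => TRUE,
    force_match => TRUE);  -- the change: match statements differing only in literals
END;
/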

The profile has the advantage that it can be applied quickly without a code change. It is the perfect tool for the DBA with a production performance problem. However, there are some other considerations.

If applying a profile to a statement that references a PeopleSoft temporary record, then we need to apply the profile to all instances of the record (both the non-shared instances and the shared instance).

We were lucky that we referenced instance 5 of two temporary records. However, you could get a situation where a statement references different instances of different temporary records. So perhaps instance 5 of one table and instance 6 of another. In which case, you might also get instance 6 of the first table and instance 5 of the other. A SQL profile could be needed for each permutation.

Bear in mind also that some areas of PeopleSoft use dynamically generated SQL. So you get similar SQL statements which are sufficiently different for the profile not to match.

Any changes to the expansion of Application Engine and PeopleCode MetaSQL on upgrading PeopleTools, or potentially even patching, will also prevent matching.

There is also the challenge of dealing with code changes as the system is upgraded, patched and customised. A small code change, perhaps just an extra field in the select clause, can result in a performance regression because the profile stops matching. Of course, this challenge is not limited to PeopleSoft systems!

Profiles are likely to be effective if there are no PeopleSoft temporary records present. So you can generally use them in COBOL and SQR processes and the on-line application (other than in on-line Application Engine processes).

Aye, there's the rub,

I would use a profile (or a set of profiles) as a short-term temporary fix that is easier to introduce into production, and then add hints to the source code and so fix all instances of the code, not just the ones that have been profiled. Of course, that does entail a code change, and everything that goes with that. One strong argument against making a code change is that you have to change the code again to remove or change the hint if it becomes unnecessary at some time in future after a significant change, such as an Oracle upgrade. However, on balance, I think it is better than the scenario where the profile stops working one day without warning.

Those of us who work with PeopleSoft, and especially the HCM product, are all too familiar with the concept of effective-dated data, and the need to find the data that was effective at a particular date. PeopleSoft products have always made extensive use of correlated sub-queries to determine the required rows from an effective-dated record.

The JOB record is at the heart of HCM. It is both effective-dated and effective-sequenced. I will use it for the demonstrations in this article. I am going to suggest an alternative, although Oracle-specific, SQL construction.

Let's start by looking at the job data for an employee in the demo database. Employee KF0018 has 17 rows of data across two concurrent jobs. The question I am going to ask is "What was the annual salary for this employee on 11 February 1995?". Therefore, I am interested in the rows marked below with the asterisks.
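The classic PeopleSoft construction for answering that question uses a pair of correlated sub-queries, one for the effective date and one for the effective sequence; a minimal sketch, using the columns that appear in the later examples:

SELECT j.emplid, j.empl_rcd, j.effdt, j.effseq
,      j.action, j.deptid, j.currency_cd, j.annual_rt
FROM   ps_job j
WHERE  j.emplid = 'KF0018'
AND    j.effdt = (
         SELECT MAX(j1.effdt)
         FROM   ps_job j1
         WHERE  j1.emplid = j.emplid
         AND    j1.empl_rcd = j.empl_rcd
         AND    j1.effdt <= TO_DATE('19950211','YYYYMMDD'))
AND    j.effseq = (
         SELECT MAX(j2.effseq)
         FROM   ps_job j2
         WHERE  j2.emplid = j.emplid
         AND    j2.empl_rcd = j.empl_rcd
         AND    j2.effdt = j.effdt)
/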

Analytic Function and In-Line View/Sub-query Factor

I have seen people use a combination of analytic functions and in-line views to avoid having to use the correlated sub-query construction. This has been possible since Oracle 9i.
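A minimal sketch of that construction, using ROW_NUMBER() to generate the sequence number field (the choice of window function is my assumption):

SELECT emplid, empl_rcd, effdt, effseq
,      action, deptid, currency_cd, annual_rt
FROM   (
        SELECT j.*
        ,      ROW_NUMBER() OVER (PARTITION BY j.emplid, j.empl_rcd
                                  ORDER BY j.effdt DESC, j.effseq DESC) row_num
        FROM   ps_job j
        WHERE  j.emplid = 'KF0018'
        AND    j.effdt <= TO_DATE('19950211','YYYYMMDD'))
WHERE  row_num = 1
/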

We get the same result, but now the index is scanned just once and we only need 14 consistent reads, so it produces a significant improvement. However, it still includes a sort operation in addition to the window function. We have to create a sequence number field in the in-line view and filter by that in the final query.

Analytic Function Keep Clause

This form of the analytic functions is documented for the first time in 12c, but is available in 10g (my thanks to Tony Hasler for introducing me to it). It works by effectively keeping a running maximum value of the columns in the order-by clause within each group.

SELECT emplid, empl_rcd
,      MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effdt
,      MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effseq
,      MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS action
,      MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS deptid
,      MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS currency_cd
,      MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS annual_rt
FROM   ps_job j
WHERE  j.effdt <= TO_DATE('19950211','YYYYMMDD')
AND    j.emplid = 'KF0018'
GROUP BY emplid, empl_rcd
/

I think this construction could be useful in PeopleSoft. At first glance the SQL appears more complicated, but in this example it removed two correlated sub-queries.

Using Analytic Functions in PS/Query

Of course, you can code it anywhere you can simply enter SQL as text. However, it also has the advantage over the other analytic function construction that it can be coded in the PS/Query tool. The analytic functions in the select clause should be created in PS/Query expressions with the aggregate expression checkbox ticked.

Analytic Function in Aggregated Expression in Windows Client version of PS/Query

The analytic functions can be selected in PS/Query, and their lengths and titles can be tidied up.

PS/Query with Analytic 'Keep' Functions

This is the resulting SQL, which is the same as before (with row-level security added by PS/Query) and produces the same results.

SELECT A.EMPLID, A.EMPL_RCD
,      MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
,      MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
,      MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
,      MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
,      MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
,      MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
FROM   PS_JOB A, PS_EMPLMT_SRCH_QRY A1
WHERE  A.EMPLID = A1.EMPLID
AND    A.EMPL_RCD = A1.EMPL_RCD
AND    A1.OPRID = 'PS'
AND    A.EFFDT <= TO_DATE('1995-02-11','YYYY-MM-DD')
AND    A.EMPLID = 'KF0018'
GROUP BY A.EMPLID, A.EMPL_RCD

I have written previously about Deferred Segment Creation. Each empty table or index segment uses only 64Kb (assuming an 8Kb block size and locally managed tablespaces), but in a PeopleSoft system there can be tens of thousands of such segments, and that adds up to a saving worth making.

If you are upgrading your database to 11gR2, you might want to make sure that you are using it. Deferred segment creation was introduced in Oracle 11.2.0.2 and became the default in 11.2.0.3. However, any table created in a previous version will have a physical segment.

This problem could affect any system, but it also manifests itself in PeopleSoft in a particular way.

When you run the alter scripts in PeopleTools a table may be recreated. If it is a regular table (record type 0) then the CREATE TABLE command will not specify a segment creation clause and so the segment creation will be deferred until rows are inserted.

However, from PeopleTools 8.51, Application Designer uses the Oracle-delivered DBMS_METADATA package to extract the DDL to recreate the object from the actual object, although this behaviour only occurs for temporary working storage tables (record type 7). Yes, these are exactly the tables that would benefit most from deferred segment creation, because in many systems there are many unused temporary table instances. If the table was created under a version of the database prior to 11.2.0.2, then the segment will exist and DBMS_METADATA will generate the DDL with the SEGMENT CREATION IMMEDIATE clause.

You can use DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS to remove the segments for any empty tables (and their indexes) for which the segment has previously been created. There would be no harm in simply running this procedure for every table in the system. If a table has rows, then DBMS_SPACE_ADMIN will take no action.
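For example (SYSADM is the assumed PeopleSoft owner, and the specific table name is hypothetical; the procedure can be run for one table or swept across a whole schema):

-- one specific table:
EXEC dbms_space_admin.drop_empty_segments(schema_name=>'SYSADM', table_name=>'PS_EXAMPLE_TAO')

-- or every table in the schema; tables that contain rows are left alone:
EXEC dbms_space_admin.drop_empty_segments(schema_name=>'SYSADM')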

As this package drops the empty segments, the SEGMENT_CREATED column on USER_TABLES changes to NO and if you were to extract the DDL with DBMS_METADATA the SEGMENT CREATION clause would have changed to DEFERRED.

As soon as any data is inserted, the segment is created, SEGMENT_CREATED changes to YES and the DDL generated by DBMS_METADATA would have SEGMENT CREATION IMMEDIATE.

The result is that 64Kb of space (assuming a block size of 8Kb) will be freed up for each empty table and index segment that is dropped. Your mileage may vary, but in my demo HR database that is over 20000 tables and 25000 indexes. 2.7Gb isn't a vast amount these days, but it is an easy win.

To answer Noons' question below: so long as the table or partition doesn't have any rows, the segment will be dropped, and it will be as if the segment creation had been deferred. You don't have to do anything special to the table. There is no problem applying this to any empty tables created with their segments. Here is a simple test with my results on 11.2.0.3.

I will create a table with the segment explicitly created immediately, then insert a row, commit the insert, and delete the row. I haven't even bothered to commit the delete.
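The test was along these lines (the commented results are those described in the text):

CREATE TABLE t (a NUMBER) SEGMENT CREATION IMMEDIATE;
INSERT INTO t VALUES (42);
COMMIT;
DELETE FROM t;                                                    -- deliberately not committed

SELECT segment_created FROM user_tables WHERE table_name = 'T';  -- YES

EXEC dbms_space_admin.drop_empty_segments(schema_name=>user, table_name=>'T')

SELECT segment_created FROM user_tables WHERE table_name = 'T';  -- NO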

This blog posting describes a script to convert Oracle date columns to timestamps, as used from PeopleTools 8.50, but rebuilding only those indexes that reference those columns, rather than dropping and recreating every index in the system, thus producing a significant saving of time during the upgrade.

I am working on a PeopleSoft upgrade project. We are going from PeopleTools 8.49 to 8.53. One of the things that happens is that some date columns in the Oracle database become timestamps.

Timestamps were introduced by Oracle in version 9i of the database, and provide the ability to store times accurate to the nanosecond (although the default precision is the microsecond). Dates are accurate to the whole second.

There are 3 types of temporal column in PeopleSoft, as defined on PSDBFIELD. Prior to PeopleTools 8.50 they all became Oracle date columns in the database. However, from PeopleTools 8.50, Time and DateTime fields are built as TIMESTAMP columns if bit 5 of PSSTATUS.DATABASE_OPTIONS (value 32) is set.

PeopleTools Field Type       Database Column Type
                             PeopleTools <= 8.49    PeopleTools >= 8.50
4  Date                      DATE                   DATE
5  Time                      DATE                   TIMESTAMP
6  DateTime                  DATE                   TIMESTAMP

Timestamps must be handled differently to dates in SQL. Some date arithmetic must be done differently; in particular, the difference between two timestamps is an interval rather than a number of days. Therefore this setting also controls how PeopleCode date macros expand on Oracle.

During the upgrade, PeopleSoft Change Assistant simply alters all the Time and DateTime columns from dates to timestamps. This generally works well. The data value doesn't appear to get longer, so the block doesn't run out of room leading to row migration, and so it isn't necessary to rebuild every table that is affected.

However, there are some limitations. If the column being converted to a timestamp falls into one of the following categories you will get an error.

The column is a key column in a function-based index.

The table or index is partitioned by the column.

The function-based index issue has not affected many customers, because the upgrade template drops all the indexes before altering the tables and recreates them afterwards.

However, dropping and recreating all these indexes can be very time consuming and increases the duration of the outage required to perform the upgrade. This has been my incentive to find a better way.

Function-Based Indexes

PeopleSoft Application Designer defines some key and search fields as descending. The rows in components and the results of search dialogues are sorted on the key fields in the order specified. Application Designer then indexes these columns in descending order (prior to PeopleTools 8.14 and since PeopleTools 8.48). If any column in an Oracle index is in descending order, the index is created as a function-based index. Consequently, there can be a lot of descending indexes in a PeopleSoft system! HR systems are particularly affected, because many tables are effective-dated, and the field EFFDT is usually a descending key field.

It is not possible to alter a column to a timestamp if it appears anywhere in a function-based index. You get the following error:

ORA-30556: functional index is defined on the column to be modified
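Because COLUMN_EXPRESSION is a LONG, it is easier to do the matching in PL/SQL; this sketch (assuming SYSADM as owner, and field types 5 and 6 from the table above) lists index expressions that reference Time or DateTime fields, and hence the indexes that must be rebuilt:

SET SERVEROUTPUT ON
DECLARE
  l_count INTEGER := 0;
BEGIN
  FOR i IN (SELECT table_name, index_name, column_expression
            FROM   dba_ind_expressions
            WHERE  index_owner = 'SYSADM') LOOP   -- assumed PeopleSoft owner
    FOR f IN (SELECT fieldname
              FROM   sysadm.psdbfield
              WHERE  fieldtype IN (5,6)) LOOP     -- 5 = Time, 6 = DateTime
      IF i.column_expression LIKE '%"'||f.fieldname||'"%' THEN
        dbms_output.put_line(i.table_name||'.'||i.index_name);
        l_count := l_count + 1;
      END IF;
    END LOOP;
  END LOOP;
  dbms_output.put_line(l_count||' index expressions reference Time/DateTime fields');
END;
/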

Partitioning

Partitioning is not something that you usually encounter in a vanilla PeopleSoft system, but it can be added by customisation. You have to generate the necessary DDL yourself if you want to use it. However, from PeopleTools 8.51, Application Designer will preserve existing partitioning.

In the system on which I am working, we partitioned the audit tables by AUDIT_STAMP, which is a DateTime field. Attempting to alter that column produces this error:

ORA-14060: data type or length of a table partitioning column may not be changed

We have had no alternative but to rebuild these tables and repopulate the data. This has also dealt with all locally partitioned indexes.

We have also found that we have one global index partitioned on a timestamp.

ORA-14061: data type or length of an index partitioning column may not be changed

We have also had to drop this index in order to alter the table.

My Approach

We have had no alternative but to rebuild and repopulate our partitioned audit tables, which are partitioned by a DateTime field. However, that is what we did when we first partitioned them. The scripts are very similar to those generated by Application Designer: the table is renamed, a new one is built, and the data is copied. In our case these scripts are built with a PL/SQL utility. This also addresses the need to rebuild the locally partitioned indexes.