Slow Performance When Upgrading to Siebel 8 on Oracle 10g - Part 2

I recently wrote about having performance problems after upgrading from Siebel 7.7 (Oracle 9i) to Siebel 8 (Oracle 10g):
http://it.toolbox.com/blogs/siebel-answers/slow-performance-when-upgrading-to-siebel-8-on-oracle-10g-33128 I have been able to improve performance, primarily by changing the way statistics are generated, setting certain db parameters and ensuring there is sufficient amount of memory allocated to the environment (SGA).
Gathering Statistics

After much trial and error we found that the best statistics method for our implementation was as follows:

1. Delete all schema stats.
2. Gather dictionary stats, fixed objects stats and system stats.
3. Gather schema stats for all tables for all columns using method_opt => 'FOR ALL COLUMNS SIZE 1'. Doing this first ensures that every column in the schema has statistics, but with no histograms generated yet.
   Note: During this step we intermittently received the following error:
   ERROR at line 1: ORA-01476: divisor is equal to zero
   This is a known Oracle bug (note 464440.1). To work around this problem, execute the following before this step:
   connect / as sysdba
   alter system set events '38041 trace name context forever, level 16';
4. Then gather schema stats using method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO' (this is the Siebel recommendation). This allows Oracle to determine which indexed columns should have histogram statistics.
5. Then gather schema stats using method_opt => 'FOR ALL COLUMNS SIZE 254' for the tables S_POSTN_CON, S_ORG_BU and S_ORG_GROUP (also a Siebel recommendation). This generates histograms for all columns in these tables; I think this is because the data in these tables is known to be heavily skewed. That said, I did not see any performance impact from skipping this step.
6. Delete all stats on tables that have fewer than 16 records. Oracle can have performance problems if stats exist on very small tables.

The commands to do all of the above are listed at the bottom of this article.
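To sanity-check the result of steps 3-5, you can ask the data dictionary which columns ended up with histograms. This is just a sketch, assuming the standard SIEBEL schema owner; after step 3 every column should show HISTOGRAM = 'NONE', and after steps 4-5 the histogram counts should rise:

```sql
-- Count histogram columns per table in the SIEBEL schema.
-- HISTOGRAM is 'NONE' when only basic column stats exist.
SELECT table_name,
       COUNT(CASE WHEN histogram <> 'NONE' THEN 1 END) AS histogram_cols,
       COUNT(*)                                        AS total_cols
FROM   dba_tab_col_statistics
WHERE  owner = 'SIEBEL'
GROUP  BY table_name
ORDER  BY histogram_cols DESC;
```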
SGA_TARGET Parameter

The db parameter sga_target sets the amount of memory allocated to the System Global Area (SGA). It is very important that the environment has enough memory to handle the "big" multi-join Siebel queries. Especially in a test or development setup, many Siebel environments are layered on the same hardware and share the same server RAM. On a production-sized data set such as ours we needed at least SGA_TARGET = 4GB, but when we performed the upgrade only 1GB was available to allocate because other environments were taking up large amounts of SGA memory. We had a server hosting 4 Siebel environments with SGA_TARGET set as follows:

UpgradedEnv - 1GB
OtherEnv1 - 4GB
OtherEnv2 - 4GB
OtherEnv3 - 8GB

When it came time to upgrade "UpgradedEnv" the DBA could only allocate 1GB to SGA_TARGET, and the environment performed very badly as a result. Once 4GB was taken from OtherEnv3 and allocated to UpgradedEnv, performance was much better.
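As a sketch, the DBA can check the current setting and resize it; the change works online only if sga_max_size is already at least as large as the new target, otherwise it needs SCOPE=SPFILE and an instance restart:

```sql
-- Check the current SGA target
SHOW PARAMETER sga_target

-- Raise SGA_TARGET to 4GB. This succeeds online only when
-- sga_max_size >= 4GB; otherwise use SCOPE=SPFILE and bounce.
ALTER SYSTEM SET sga_target = 4G SCOPE=BOTH;
```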
Other DB Parameters

Disclaimer: This is what worked for us; reach out to Oracle support before changing these settings!

It is important to ensure that all Oracle db parameters are set according to the Siebel recommendations, as many of these parameters differ from the 10g defaults. Also, when upgrading from 9i to 10g, any parameter that was previously set to an explicit value remains set. For example, if a parameter was set to a particular value in 9i but you want it defaulted in 10g, it must be specifically removed so that 10g can default it.

Here are the recommended parameters that we applied:

optimizer_mode = ALL_ROWS
_optim_peek_user_binds = FALSE
OPTIMIZER_INDEX_CACHING = 0
OPTIMIZER_FEATURES_ENABLE = (your Oracle version - eg. 10.1.0.3)
QUERY_REWRITE_INTEGRITY = ENFORCED
STAR_TRANSFORMATION_ENABLED = FALSE
OPTIMIZER_DYNAMIC_SAMPLING = 1
OPTIMIZER_MAX_PERMUTATIONS = 100
OPTIMIZER_INDEX_COST_ADJ = 1
QUERY_REWRITE_ENABLED = FALSE
_always_semi_join = OFF
_b_tree_bitmap_plans = FALSE
_no_or_expansion = FALSE
_optimizer_max_permutations = 100
_partition_view_enabled = FALSE
_optimizer_cost_based_transformation = OFF
DB_FILE_MULTIBLOCK_READ_COUNT = 32
NLS_SORT = BINARY

These last parameters were the big factor that helped performance in our situation:

_complex_view_merging = false
_new_initial_join_orders = false
_gby_hash_aggregation_enabled = false
_always_anti_join = OFF
_first_k_rows_dynamic_proration = FALSE
_optimizer_cartesian_enabled = FALSE
_optimizer_skip_scan_enabled = FALSE
_optimizer_sortmerge_join_enabled = FALSE
optimizer_secure_view_merging = FALSE

Here are the commands used to gather the stats:

begin
  DBMS_STATS.GATHER_DICTIONARY_STATS;
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
  DBMS_STATS.GATHER_SYSTEM_STATS;
  DBMS_STATS.DELETE_SCHEMA_STATS('SIEBEL');
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname     => 'SIEBEL',
    method_opt  => 'FOR ALL COLUMNS SIZE 1',
    granularity => 'ALL',
    cascade     => TRUE,
    degree      => DBMS_STATS.DEFAULT_DEGREE );
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname     => 'SIEBEL',
    method_opt  => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    granularity => 'ALL',
    cascade     => TRUE,
    degree      => DBMS_STATS.DEFAULT_DEGREE );
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'SIEBEL',
    tabname     => 'S_POSTN_CON',
    method_opt  => 'FOR ALL COLUMNS SIZE 254',
    granularity => 'ALL',
    cascade     => TRUE,
    degree      => DBMS_STATS.DEFAULT_DEGREE );
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'SIEBEL',
    tabname     => 'S_ORG_GROUP',
    method_opt  => 'FOR ALL COLUMNS SIZE 254',
    granularity => 'ALL',
    cascade     => TRUE,
    degree      => DBMS_STATS.DEFAULT_DEGREE );
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'SIEBEL',
    tabname     => 'S_ORG_BU',
    method_opt  => 'FOR ALL COLUMNS SIZE 254',
    granularity => 'ALL',
    cascade     => TRUE,
    degree      => DBMS_STATS.DEFAULT_DEGREE );
  FOR r IN ( SELECT table_name
             FROM   all_tables
             WHERE  owner = 'SIEBEL'
             AND    NVL(num_rows, 0) < 16 ) LOOP
    DBMS_STATS.DELETE_TABLE_STATS(
      ownname => 'SIEBEL',
      tabname => r.table_name,
      force   => TRUE );
  END LOOP;
end;
/
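On the point about 9i leftovers: a parameter explicitly set in the spfile can be returned to its 10g default with ALTER SYSTEM RESET. A sketch (the parameter named here is only an illustration; check with Oracle support before resetting anything):

```sql
-- List parameters that are explicitly set, i.e. not at their 10g default
SELECT name, value
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER  BY name;

-- Remove an explicit setting so 10g can use its own default.
-- Takes effect on the next instance restart.
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';
```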

Thanks for the quick and detailed reply. Yes, the existing 7.5.3 system is on Oracle 9i. Since I posted my question, I have completed the repository merge using UNDO_MANAGEMENT = AUTO and didn't see any issues; I also didn't notice any of the errors mentioned in the metalink doc ID 477025.1.

Our test database is pretty small and the repository merge took around 10 hrs, plus 2 hrs for the ICL migration.

Overall the merge results look OK; there are no unacceptable errors.

The database is configured with Automatic Memory Management (AMM) by setting the following two parameters:

memory_max_target=9663676416
memory_target=9663676416

and that's why PGA_AGGREGATE_TARGET was set to 0, to give Oracle full control over managing the PGA and SGA.

My understanding is that Siebel doesn't have any materialized views, so setting QUERY_REWRITE_ENABLED to true or false really doesn't matter. (Please correct me if I am making a wrong assumption.)

Thanks again for your reply.
Regards
Asheesh



Joshua Weir

October 13, 2009 07:29 PM


The Siebel document suggests using UNDO_MANAGEMENT = MANUAL with a single large rollback segment, but our DBA insists on using UNDO_MANAGEMENT = AUTO and does not want to use a rollback segment.

I assume you are referring to the metalink3 doc "Undo Management in Oracle9i and Higher Would Cause Failures During Siebel Upgrades" [ID 477025.1] as your reference.

I would suggest you go with the DBA's recommendation to use UNDO_MANAGEMENT = AUTO. We used this parameter and did not have any issues; however, all Siebel custom implementations are different and you may hit the problem described in the above metalink doc. I would suggest that you have the DBA read the doc before the repository merge process commences, so that a plan is in place to make these changes if any of the specified errors do occur.

I understand that you want to focus on making the repository merge process as quick as possible. Have you already tried running the process? How long does it take? What are the required timings for that process? We did not make any parameter changes specific to the repository merge process beyond the parameters I have listed in my article above. Our merge process took around 12 hours in the test environment.

Looking at your parameters, the value for PGA_AGGREGATE_TARGET concerns me. This is the amount of memory assigned to the PGA, and we set this parameter to over 4GB. I'm not sure what you mean about Siebel suggesting a value of 1 for this parameter; where did you see this? I believe you want to set this value much higher. Ask your DBA about this.
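One way to judge whether PGA_AGGREGATE_TARGET is sized adequately is to look at the PGA statistics view. A sketch, to be run as a DBA user:

```sql
-- Key PGA figures: a non-zero, growing 'over allocation count' or a low
-- cache hit percentage suggests PGA_AGGREGATE_TARGET is too small.
SELECT name, value, unit
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'over allocation count',
                'cache hit percentage');
```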

We used STATISTICS_LEVEL = TYPICAL

We used QUERY_REWRITE_ENABLED = False

We used OPTIMIZER_INDEX_COST_ADJ = 1

We used OPTIMIZER_DYNAMIC_SAMPLING = 1

So you are upgrading from Siebel 7.5; that would be on Oracle 9i, right? When upgrading to Oracle 11g the statistics generation process changes dramatically, and statistics are very important for performance. See my advice in the article above as well.