Abstract - DB2 10 for z/OS - Where we are today, and where we are going. This session will take you through the latest with DB2 10: which functions customers are finding most valuable, what the latest enhancements are, and the current status of DB2 10 in the marketplace. We will also take you through the latest on DB2 11, the status of the ESP, and touch on some industry trends that are influencing the enhancements we are planning for DB2 in the future.

Keys to customer migration success
• Plan for a regular scheduled program of preventative service which is implemented
  – Need to stay more current on HIPERs at this stage in the release take-up cycle
  – Apply preventative service every 3 months
    • Two “major” and two “minor” releases
    • Refresh of the base every 6 months (“major”)
    • Each base should be based on the latest quarterly RSU, as opposed to use of PUT
    • In addition, two ‘minor’ packages covering HIPERs and PEs in between times
  – Augment by exploiting Enhanced HOLDDATA on at least a weekly basis before production cutover, and continue thereafter
    • Identify and pull all applicable HIPERs and PE fixes
    • Exploit Fix Category HOLDDATA (FIXCAT keywords)
    • Expedite the most critical PTFs into production

CST and RSU example
• RSU1012 (CST4Q10) – available at the beginning of January 2011
  – All service through end Sept 2010 not already marked RSU, plus H&PE through end Nov 2010
  – Base: Sep 2010; H&PE: Nov 2010
• RSU1101 – available at the beginning of February 2011
  – H&PE through end Dec 2010
  – Base: Sep 2010; H&PE: Dec 2010
• RSU1102 – available at the beginning of March 2011
  – H&PE through end Jan 2011
  – Base: Sep 2010; H&PE: Jan 2011
• RSU1103 (CST1Q11) – available at the beginning of April 2011
  – All service through end Dec 2010 not already marked RSU, plus H&PE through end Feb 2011
  – Base: Dec 2010; H&PE: Feb 2011
H&PE = HIPER/Security/Integrity/Pervasive PTFs + PE resolution (and associated requisites and supersedes)

Keys to customer migration success …
• Build a realistic project plan
  – Avoid a crash project
  – Allow time at each stage for ‘soak testing’
  – Allow contingency for some ‘bumps in the road’
  – Involve application teams early
    • Investigation of incompatible changes and fix-up
    • Testing
• Performing application regression and stress testing is the best way to keep ‘fires’ away from production
  – Use a realistic production-like workload to certify readiness
• Perform systematic testing of release fallback toleration
• Monitor and control CPU, virtual and real storage consumption

Performance and Scalability
• Many opportunities for price/performance (cost reduction) improvements
  – Major theme of this release
  – Most welcome to our customers
• Question is: how much?
  – Some workloads are not seeing improvements in CPU and elapsed time
  – Conversely, big improvements are seen for certain workloads
  – Small workloads can skew expectations on savings
  – Some measurements and quotes are insanely positive
    • Should be ignored
  – How to extrapolate and estimate for a production mixed workload?
    • Estimation with accuracy and high confidence is not practical
    • A benchmarking effort would be required
• Very important to correctly level set performance expectations
• Do not spend any performance benefits until they are seen

Performance and Scalability …
• Assumes no major access path regressions
• On Day 1 in production in CM, without any changes (e.g., no rebind, no use of 1MB page size), some customers may see zero % improvement and some will even see degradation
  – Why? SPROCs disabled, puffing of run time structures for packages migrated from V8 or V9, etc.
• To maximise the performance improvements you must:
  – REBIND static SQL packages
  – Use PGFIX=YES bufferpools with sufficient 1MB real storage page frames to 100% fully back the requirement from PGFIX=YES bufferpools
• Seeing 0-10% improvement after REBIND and use of 1MB real storage frames
• Need to look at the total CPU resource consumption picture across
  – Acctg Class 2 TCB Time (Accounting Trace)
  – DB2 System Address Spaces (Statistics Trace)

Performance and Scalability …
• The 0-10% CPU reduction is based on the DB2 portion of a given application workload
• Customer value is driven by how sub-capacity workload licensing works
  – Based on 4-hour rolling average MSU utilisation
  – The highest rolling average figure for each month is used to calculate software charges for all MLC products (IBM and non-IBM)
  – Provided DB2 forms a significant component of the total MSU usage during the peak period, any MSU savings will translate directly to MLC savings
  – Typically this is the online day: mid morning and mid afternoon
  – For example, this may be driven by a CICS-DB2 workload where the DB2 portion of the workload represents only 40-60% of the total path length
  – So the 0-10% may represent only 0 to 6% (i.e., needs to be discounted)
  – Investigate how much CPU is used in the 4-hour period for DB2 work (SQL)
  – Evaluate V10 price bands under WLC pricing vs. V10 MSU savings
  – Factor in the impact on overall z/OS software stack cost reduction: z/OS, CICS, MQ
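The discounting arithmetic above can be sketched as follows. The figures are the slide's illustrative upper bounds (10% DB2-portion saving, 60% DB2 share of path length), not measurements:

```python
# Illustrative only: how a CPU saving on the DB2 portion of a CICS-DB2
# transaction discounts down to a whole-workload (MLC-relevant) saving.
db2_cpu_saving = 0.10        # up to 10% CPU saving on the DB2 portion
db2_share_of_path = 0.60     # DB2 is 40-60% of the total path length
workload_saving = db2_cpu_saving * db2_share_of_path
print(f"Whole-workload saving: {workload_saving:.0%}")
```

With the 40% share instead of 60%, the same 10% DB2 saving shrinks to 4% of the workload, which is why the slide says the headline figure "needs to be discounted".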

Performance and Scalability …
• Opportunities for additional price/performance improvements driven by DBM1 31-bit VSCR, supported by additional real storage, include
  – More use of persistent threads with selective use of RELEASE(DEALLOCATE)
    • High Performance DBATs
    • CICS Protected ENTRY Threads
    • CICS Unprotected ENTRY Threads with queuing
    • Typical savings 0-10%, may be more
  – Increasing MAXKEEPD to improve the Local Dynamic Statement Cache hit ratio and reduce the number of short prepares
  – Sysplex/data sharing group consolidation
    • For example, 8-way to 4-way
    • Reduced cost of data sharing
• Very important to correctly level set customer performance expectations
• Do not spend any performance benefits until they are seen

Performance and Scalability …
• Customers should expect to see some increase in real storage resource consumption with V10 (10-30%)
• How to make a gross level estimate for V10
  – Make sure the V8/V9 system is warmed up, as the memory needs to be allocated to support normal operations
  – Collect QW0225RL and QW0225AX from IFCID 225
  – Subtract out the VPSIZEs for all bufferpools
  – Add 30% to this remaining number
  – Also add in the extra space needed for increased defaults for sort and RID pools
  – Must also factor in an increase in the MAXSPACE requirement for DB2 dumps
    • Typical size needed to avoid a partial dump (approx 16GB)
    • Avoid very long dump capture times and bad system performance
    • Critical for V10 serviceability
  – Add back in the VPSIZEs for all bufferpools
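The estimating steps above can be sketched as a small function. This is a rough sketch only: the field names follow IFCID 225, and all inputs are assumed to have already been converted from frames/slots to MB:

```python
def estimate_v10_real_storage_mb(qw0225rl_mb: float, qw0225ax_mb: float,
                                 vpsize_total_mb: float,
                                 sort_rid_growth_mb: float,
                                 maxspace_mb: float = 16 * 1024) -> float:
    """Gross-level V10 real storage estimate following the slide's recipe.
    Inputs are assumed already converted to MB from a warmed-up V8/V9 system."""
    non_bp = qw0225rl_mb + qw0225ax_mb - vpsize_total_mb  # take bufferpools out
    estimate = non_bp * 1.30             # add 30% to the remaining number
    estimate += sort_rid_growth_mb       # increased sort/RID pool defaults
    estimate += maxspace_mb              # MAXSPACE for DB2 dumps (~16GB typical)
    estimate += vpsize_total_mb          # add the bufferpool VPSIZEs back in
    return estimate

# e.g. 20GB in use, no AUX, 10GB of bufferpools, 512MB sort/RID growth
mb = estimate_v10_real_storage_mb(20480, 0, 10240, 512)
```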

Performance and Scalability …
• Monitoring virtual and real storage
  – SPREADSHEETDD support in OMPE has not been enhanced to support V10
    • OMPE are working on a generic spreadsheet generator
    • Outstanding requirement to also include serviceability fields
  – MEMU2 and MEMUSAGE are already enhanced for V10 and available on the DB2 for z/OS Exchange community website on IBM My developerWorks
    • From IBM My developerWorks My Home (sign in with your IBM login at https://www.ibm.com/developerworks/mydeveloperworks/homepage), search memu2 in All My developerWorks
    • From DB2 for z/OS Exchange (http://www.ibm.com/developerworks/software/exchange/db2zos), click on View and download examples; the file is tagged with memu2
    • To access MEMU2 directly (but note that if you want to be kept informed of updates and new versions, you need to log on to developerWorks rather than download the file anonymously):
      – V8/V9: https://www.ibm.com/developerworks/mydeveloperworks/files/app/file/3af12254-4781-43f3-b4a8-3336e09c36df?lang=en
      – V10: https://www.ibm.com/developerworks/mydeveloperworks/files/app/file/e2736ed5-0c73-4c59-b291-9da08255b941?lang=en

Performance and Scalability …
• DB2 10, z196, EC12 synergy
  – Taking the general case, the performance improvement from V9 to V10 observed on a z10 processor should be in the same range on a z196 processor, as long as they are measured on the same number of processors
    • Expectation is still in the 5-10% range
  – Apart from the MIPS improvement, z196 provides
    • Higher cache hit ratio, thus better scalability as the number of processors per LPAR increases (more than 16 processors per LPAR)
  – V10 performance on z196/EC12
    • Scales better with more processors per LPAR than z10
    • Can run with a higher number of concurrent threads
  – IBM measurement shows a 20% ITR improvement for V10 compared with V9 (with a few benchmark specials) on a z196 80-way with an IRWW-like workload
    • Measurement is an extreme case
    • Will only apply to very high end customers
    • Not a general message
  – Why does V10 run better on z196?
    • Latch contention reductions, 1MB real storage page frame size, general path length

Performance and Scalability …
• Use of 1MB size real storage page frames on z10, z196 and EC12
  – Useful commands
    • DB2 -DISPLAY BUFFERPOOL(BP1) SERVICE=4
      – Useful command to find out how many 1MB size page frames are being used
      – Especially useful when running multiple DB2 subsystems on the same LPAR
      – See DSNB999I message
    • MVS DISPLAY VIRTSTOR,LFAREA
      – Shows total LFAREA, the allocation split across 4KB and 1MB size frames, and what is available
      – See IAR019I message

Performance and Scalability …
• Use of 1MB size real storage page frames on z10, z196, and EC12
• Potential for reduced CPU through fewer TLB misses
  – CPU reduction based on customer experience: 0 to 6%
  – Buffer pools must be defined as PGFIX=YES to use 1MB size page frames
  – Must have sufficient total real storage to fully back the total DB2 requirement
  – Involves partitioning real storage into 4KB and 1MB size page frames
    • Specified by LFAREA xx% or n in the IEASYSnn parmlib member, and only changeable by IPL
    • 1MB size page frames are non-pageable
    • If 1MB size page frames are overcommitted, DB2 will use 4KB size page frames
    • Recommendation: add 5-10% to the size to allow for some growth and tuning
  – Must have both enough 4KB and enough 1MB size page frames
  – Do not use 1MB size real storage frames until running smoothly on V10
  – Make sure any critical z/OS maintenance is applied before using 1MB size real storage page frames
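The LFAREA sizing rule above can be sketched as a one-liner; this assumes the total of the PGFIX=YES bufferpools is known in MB (so one 1MB frame per MB), and applies the slide's 5-10% headroom:

```python
def lfarea_1mb_frames(pgfix_bp_total_mb: float, headroom: float = 0.10) -> int:
    """Number of 1MB frames to ask for in LFAREA: the PGFIX=YES bufferpool
    total plus the slide's recommended 5-10% growth/tuning headroom,
    rounded to the nearest frame."""
    if not 0.05 <= headroom <= 0.10:
        raise ValueError("slide recommends adding 5-10% headroom")
    return round(pgfix_bp_total_mb * (1 + headroom))

# e.g. 8,000 MB of PGFIX=YES bufferpools at 10% headroom
frames = lfarea_1mb_frames(8000)
```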

Performance and Scalability …
• Exceptions: CPU regression possible for very light OLTP transactions
  – Skinny packages with a few simple SQL statements
  – Package allocation cost overrides the benefit from SQL optimizations in V10
  – APAR PM31614 may solve this by improving package allocation performance
  – Good candidates for the use of persistent threads with RELEASE(DEALLOCATE), which will help compensate

Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
  – Major customer opportunities here for 31-bit VSCR and improved price/performance
    • Potential to reduce legacy OLTP transaction CPU cost through use of
      – More CICS protected ENTRY (persistent) threads
      – More use of RELEASE(DEALLOCATE) with new/existing persistent threads
    • Potential to reduce CPU for DRDA transactions by using High Performance DBATs
      – Must be using CMTSTAT=INACTIVE so that threads can be pooled and reused
      – Packages must be bound with RELEASE(DEALLOCATE) to get reuse for the same connection
      – MODIFY DDF PKGREL(BNDOPT) must also be in effect
      – Do not overuse RELEASE(DEALLOCATE) on packages
        • Will drive up the MAXDBAT requirement
    • Potential to reduce CPU when using KEEPDYNAMIC(YES), e.g., SAP
      – Increase MAXKEEPD to improve the Local Dynamic Statement Cache hit ratio and reduce the number of short prepares
    • Must provision additional real storage to back the requirement for each opportunity

Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
  – More persistent threads with RELEASE(DEALLOCATE) is also a trade-off with BIND/REBIND and DDL concurrency
  – For RELEASE(DEALLOCATE), some locks are held beyond commit until thread termination
    • Mass delete locks (SQL DELETE without WHERE clause)
    • Gross level lock acquired on behalf of a SQL LOCK TABLE
    • Note: no longer a problem for a gross level lock acquired by lock escalation
  – CICS-DB2 accounting for the cost of thread create and terminate, or avoidance thereof
    • CICS uses the L8 TCB to access DB2 irrespective of whether the application is threadsafe or not
    • Thread create and terminate cost will clock against the L8 TCB and will be in the CICS SMF Type 110 record
    • Note: prior to OTE, the thread create was not captured in the SMF Type 110

Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
  – High Performance DBATs (Hi-Perf DBATs) are a new type of distributed thread
    • Must be using CMTSTAT=INACTIVE so that threads can be pooled and reused
    • Packages must be bound with RELEASE(DEALLOCATE) to get reuse for the same connection, and MODIFY DDF PKGREL(BNDOPT) must also be in effect
    • Where a DBAT could previously be pooled after the end of a client's UOW, now the DBAT and client connection will remain active together
      – Still cut an accounting record and end the enclave
    • After the Hi-Perf DBAT has been reused 200 times
      – The DBAT will be purged and the client connection will then go inactive
    • All the interactions with the client will still be the same, in that if the client is part of a sysplex workload balancing setup, it will still receive indications that the connection can be multiplexed amongst many client connections
    • IDTHTOIN will not apply if the Hi-Perf DBAT is waiting for the next client UOW
    • If a Hi-Perf DBAT has not received new work for POOLINAC time
      – The DBAT will be purged and the connection will go inactive
    • If the number of Hi-Perf DBATs exceeds the 50% of MAXDBAT threshold
      – DBATs will be pooled at commit and package resources copied/allocated as RELEASE(COMMIT)
    • Hi-Perf DBATs can be purged to allow DDL, BIND, and utilities to break in
      – Via -MODIFY DDF PKGREL(COMMIT)
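The lifecycle rules above can be summarised in a rough decision sketch. This is only an illustration of the triggers the slide lists (the 200-reuse limit and the 50% MAXDBAT threshold); the actual DDF logic is internal to DB2 and more involved:

```python
def hi_perf_dbat_action(reuse_count: int, hi_perf_dbats: int,
                        maxdbat: int, pkgrel_bndopt: bool = True) -> str:
    """Sketch of what happens to a Hi-Perf DBAT at commit, per the slide."""
    if not pkgrel_bndopt or hi_perf_dbats > 0.5 * maxdbat:
        # Over the 50% MAXDBAT threshold (or PKGREL(COMMIT) in effect):
        # pool at commit, treating packages as RELEASE(COMMIT)
        return "pool at commit"
    if reuse_count >= 200:
        # Reuse limit reached: purge, client connection goes inactive
        return "purge DBAT, connection goes inactive"
    return "stay active with the connection"
```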

Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
  – High Performance DBATs (Hi-Perf DBATs) should be used selectively
    • You may want some high performance applications running on LUW application servers connected to DB2 10 for z/OS with High Performance DBATs, and others not
    • The standard ODBC and JDBC packages supplied with the drivers/connect packages should be bound twice, into two different package collections, e.g.,
      – The CS package in collection 1 (e.g., NULLID) would be bound with RELEASE(COMMIT) and would not use High Performance DBATs
      – The CS package in collection 2 (e.g., NULLID2) would be bound with RELEASE(DEALLOCATE) so that the applications using that package will be eligible to use High Performance DBATs
    • For JDBC applications
      – Set the currentPackageSet property in the respective datasource
    • For .NET and ODBC/CLI applications
      – Set the CurrentPackageSet parameter in the db2dsdriver.cfg configuration file

Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
  – Potential to reduce the number of DB2 subsystems in a data sharing group
    • First step is to collapse multiple DB2 members running on the same LPAR
    • May then be able to reduce the number of LPARs/DB2 members
    • Consider the increase in logging rate per DB2 member
      – Possible aggravation of LC19 contention despite the V10 improvement
    • Consider the increase in SMF data volume per LPAR
      – Can enable DB2 compression of SMF data to reduce SMF data volume
      – Experience is that accounting records compress ~70%
      – Tiny CPU overhead at ~1%
    • Re-consider use of accounting roll up for DDF and RRSAF workload (default)
      – Compromises performance PD/PSI, as information on outlying transactions is lost
      – Significant enhancements to package level accounting mean it is now useful
    • Consider the increased DUMPSRV and MAXSPACE requirement
  – Re-emphasise the continued value of data sharing to differentiate the platform
    • Support avoidance of planned outages
    • Avoid humongous single points of failure
    • Recommended minimum of 4-way as the very best solution for true continuous availability

Performance and Scalability …
• 31-bit and 64-bit virtual storage contraction
  – CONTSTOR=YES and MINSTOR=YES
    • These existing system parameters drive the contraction of 31-bit storage pools and the best-fit allocation of 31-bit storage respectively
    • Not applicable to 64-bit storage
    • Not as critical as before V10
    • Assuming generous DBM1 31-bit VSCR is achieved in V10, set CONTSTOR=MINSTOR=NO
  – 64-bit thread pools are contracted under control of
    • Commit count
    • New Real Storage Management DISCARD function (see follow-on slides)

Performance and Scalability …
• Real storage
  – Need to carefully plan, provision and monitor real storage consumption
  – Prior to V10, a hidden zparm SPRMRSMX (‘real storage kill switch’) existed
    • SPRMRSMX prevents a runaway DB2 subsystem from taking the LPAR down
      – Should be used when there is more than one DB2 subsystem running on the same LPAR
      – Aim is to prevent multiple outages being caused by a single DB2 subsystem outage
      – Should be set to ~1.2 to 2x normal DB2 subsystem usage, depending on the contribution from the total bufferpool storage requirement
      – DB2 subsystem usage is DBM1 ASID usage (as reported in IFCID 225) plus MSTR ASID usage plus DIST ASID usage
      – Kills the DB2 subsystem when the SPRMRSMX value is reached
    • With V10, will need to factor in 64-bit shared and common use to establish the new footprint
  – Problems with the introduction of V10
    • Unable to monitor the REAL and AUX storage frames used for 64-bit shared storage
      – Not really an issue in V9, as it made limited use of 64-bit shared
      – But V10 now makes extensive use of 64-bit shared
    • LPAR level instrumentation buckets for REAL and AUX storage use
      – If more than one DB2 subsystem is on the same LPAR, the numbers reported are inaccurate
      – Only able to get reliable numbers if only one subsystem like DB2 on the LPAR uses 64-bit shared
    • Lack of ENF 55 condition monitoring
      – 50% of AUX used
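The kill-switch sizing guidance above can be sketched as follows; the usage figures in the example are hypothetical:

```python
def kill_switch_threshold_gb(dbm1_gb: float, mstr_gb: float, dist_gb: float,
                             factor: float = 1.5) -> float:
    """SPRMRSMX-style threshold per the slide: normal subsystem usage
    (DBM1 + MSTR + DIST ASIDs, as reported via IFCID 225) times a
    1.2x to 2x factor chosen by bufferpool contribution."""
    if not 1.2 <= factor <= 2.0:
        raise ValueError("slide suggests ~1.2x to 2x normal usage")
    return (dbm1_gb + mstr_gb + dist_gb) * factor

# e.g. 30 + 2 + 4 GB of normal usage at the midpoint 1.5x factor
threshold = kill_switch_threshold_gb(30, 2, 4)
```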

Performance and Scalability …
• Real storage …
  – DB2 APAR PM24723 is very important
    • The monitoring issue is addressed and new extensions to IFCID 225 are provided
      – Pre-req is new MVS APAR OA35885, which provides a new RSM callable service to report the REAL and AUX used for the addressing range for shared objects
    • The SPRMRSMX hidden zparm now becomes an opaque parameter, REALSTORAGE_MAX
    • Introduces DISCARD mode to contract storage usage, to protect against excessive paging and use of AUX
    • New zparm REALSTORAGE_MANAGEMENT controls when DB2 frees storage frames back to z/OS
      – ON -> Discard unused frames all the time: discard stack and thread storage, keep the footprint small
      – OFF -> Do not discard unused frames unless things are getting out of hand
      – AUTO (default) -> Detect whether paging is imminent and reduce the frame counts to avoid system paging
    • DB2 monitors paging rates, switches between ON/OFF, and decides when to start discarding unused frames based on
      – 80% of REALSTORAGE_MAX reached
      – 50% of AUX used (ENF55 condition)
      – Hitting AVQLOW (available real storage frames) when REALSTORAGE_MANAGEMENT=AUTO
    • New messages (DSNV516I, 517I) for when paging rate thresholds cause DB2 to free real frames
    • Strong recommendation to apply the PTF for APAR PM24723 before going into business production, and to run with REALSTORAGE_MANAGEMENT=AUTO
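The triggers listed above can be summarised in a loose sketch. This is only an illustration of the slide's bullet points; the real decision logic is internal to DB2 and considers paging rates dynamically:

```python
def should_discard_frames(mode: str, real_used: float, realstorage_max: float,
                          aux_used_pct: float, avqlow: bool,
                          paging_imminent: bool) -> bool:
    """Loose sketch of the REALSTORAGE_MANAGEMENT discard triggers."""
    out_of_hand = (real_used >= 0.80 * realstorage_max) or (aux_used_pct >= 50)
    if mode == "ON":
        return True          # discard unused frames all the time
    if mode == "OFF":
        return out_of_hand   # only when things are getting out of hand
    # AUTO (default): also react to imminent paging and AVQLOW
    return out_of_hand or paging_imminent or avqlow
```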

Performance and Scalability …
• High INSERT performance …
  – Reduced LRSN spin for inserts to the same page
    • Works well for MRI and INSERT within a loop in a data sharing environment
  – Optimization for ‘pocket’ sequential insert works well
    • Index manager picks the candidate RID during sequential insert (next lowest key RID)
    • Higher chance of finding the space, avoiding a space search
  – Parallel index IO works very well when activated for random key inserts
    • >= 3 indexes
    • Prefetch and deferred write offload to zIIP to compensate
• Compress on INSERT
  – Compression ratios almost as good as those achieved by running REORG later
• Active log writes
  – Prior to V10, log writes were done serially when re-writing partially filled CIs
  – It was determined that destructive writes due to IO errors no longer occur
  – Now all log write IOs are done in parallel
  – Elapsed time improvements

Performance and Scalability …
• Hash access vs. index only access and index lookaside
  – Competes against index only access and index lookaside
    • Advantage that index only access still provides for clustered data access
    • Can now have a unique index with INCLUDE columns
      – Reduce the number of indexes required for performance reasons
      – Improve insert, update and delete performance
  – Need to find the sweet spot
    • High NLEVELS in index (>=3)
    • Purely direct row access by primary key
    • Truly random access
    • Read intensive, not volatile
    • No range queries
    • Many rows per page, etc.
  – Space allocation of the fixed hash space is key to controlling overflow
    • Too small will lead to rows in overflow
    • Too large will lead to random IO
    • REORG AUTOESTSPACE(YES), but still some rows in overflow
  – Degraded LOAD and REORG utility performance
  – See ftp://public.dhe.ibm.com/software/data/sw-library/db2/zos/value_DB2_hash_access_wp.pdf

Performance and Scalability …
• Improved index space search when an index leaf page splits
  – In V8/V9, Index Manager
    • First searches the space map page covering the splitting page
    • If there is no free entry, searches the space map pages starting from the first space map page up to the highest allocated page
    • If all the space map pages are full, has to extend
    • If the index is huge and all the space map pages having free entries are toward the end of the index, this process can take a very long time
  – In V10, Index Manager
    • After searching the space map page covering the splitting page, and if it is full, will start searching from the page number it last remembered having a free entry (page A)
    • When it reaches the highest allocated page, it starts from the beginning and searches forward until it reaches page A
    • Only then does it have to extend, since the entire index is full
    • The page number of the space map page having a free entry is stored in an in-memory control block
      – When Index Manager finds a space map page with a free entry, this value is updated to the page number of that space map page
      – Index Manager also updates this value when an index page is deleted or when the index is mass deleted
  – Retrofitted back to V9 via APAR PM15474
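The V10 behaviour described above is essentially a circular search starting from a remembered hint. A minimal sketch (free-entry counts per space map page stand in for the real structures):

```python
def find_free_space_map(free_entries: list[int], last_free_hint: int):
    """Sketch of the V10 circular search: start at the space map page last
    remembered to have a free entry, wrap past the end, and only extend
    (return None) once a full cycle has found nothing."""
    n = len(free_entries)
    for step in range(n):
        page = (last_free_hint + step) % n
        if free_entries[page] > 0:   # this space map page has a free entry
            return page              # remember this page as the new hint
    return None                      # entire index is full -> must extend
```

Compared with the V8/V9 approach of always restarting from the first space map page, the remembered hint means a huge index whose free entries cluster near the end no longer forces a long scan on every leaf page split.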

Migration and Planning …
• Use of V10 Early Code with V8
  – It will take an IPL to originally install the V10 Early Code
    • V8 Early Code does not understand -REFRESH
  – However, subsequent maintenance to the V10 Early Code can be applied with a -REFRESH command
• If coming from V8
  – BSDS must be reformatted for larger active/archive tracking
• IPL amounts need to be adjusted based on the number of DB2 members
  – 64-bit Private (1TB)
  – 64-bit Shared (128GB)
  – 64-bit Common (6GB)

Migration and Planning …
• DBRMs bound directly into plans are no longer supported
  – If found in V10, will trigger an auto bind into packages
  – For V8 and V9
    • APARs PK62876/PK79925 add new syntax to convert from DBRMs to packages
      – REBIND PLAN option COLLID
      – Could result in access path change
    • APARs PM01821 (Version) and PM30382 (Location from * to blank) should be on
  – Best to migrate DBRMs to packages before migrating to V10
  – Old plans and packages bound prior to V6 will require REBIND
  – Catalog and Directory must be SMS managed (EF, EA) ahead of CM
  – PDSEs required for SDSNLOAD, SDSNLOD2, ADSNLOAD
  – DSNHDECP NEWFUN=V10|V9|V8

BIND, REBIND and EXPLAIN …
• RUNSTATS/REBIND recommendations based upon scenario
  – V8 preparation
    • If RUNSTATS will be difficult on a large number of objects immediately after migration to V9/V10, then REORG and/or RUNSTATS (V8) immediately prior to migration can reduce the RUNSTATS need on V9/V10, as RUNSTATS INDEX under V10 can be sufficient to capture the new CR/DRF
  – V9 migration
    • RUNSTATS objects as soon as possible after migration
    • Target dynamic applications first, as these are exposed to new access paths immediately
    • Delay static REBINDs until the associated objects have had RUNSTATS run
  – V8->V10 migration
    • RUNSTATS objects as soon as possible after migration
    • Target dynamic applications first, as these are exposed to new access paths immediately
    • Equal priority: target static parallelism packages to REBIND, to avoid an incremental bind at each execution
    • Delay non-parallelism REBINDs until the associated objects have had RUNSTATS run
  – V9->V10 migration
    • REBIND static parallelism packages as soon as possible to avoid an incremental bind at each execution
    • Delay non-parallelism REBINDs until the associated objects have had RUNSTATS run
    • BIND/REBIND options APREUSE/APCOMPARE are available on V10 for packages bound on V9

BIND, REBIND and EXPLAIN …
• RUNSTATS/REBIND recommendations based upon scenario …
  – V9/V10 co-existence
    • Set ABIND=COEXIST while in co-existence
    • What to do with static parallel queries?
      – Accept an incremental bind whenever executed on a V10 member
      – OR, REBIND with DEGREE(1) to disable parallelism while in co-existence
    • Follow the V8->V10 migration steps after all members are on V10, including resetting the following zparm
      – Set ABIND=YES

BIND, REBIND and EXPLAIN …
• Single thread BIND/REBIND performance
  – Degraded CPU and elapsed time performance on entry to CM
    • PLANMGMT=EXTENDED is now the default
    • New indexes defined for post-ENFM, when hash links are eliminated
    • Change in access path (index access) on entry to CM
    • No concurrency improvement until after the Catalog restructure in ENFM
• Concurrent BIND/REBIND performance
  – Problems addressed
    • Performance problems related to the DELETE/INSERT process
    • Space growth in SPT01 for both LOB space and base table
  – Now working well
    • Inefficient space search for out-of-line LOB in data sharing (APAR PM24721)
    • Inline LOB with compression for SPT01 to address SPT01 growth (APAR PM27073)
    • More efficient space reuse for LOB tablespace (APAR PM64226)
  – Recommendations
    • Customers need to change existing procedures to go parallel
    • But cannot do this until post-ENFM
    • Benefit from reducing application down time to implement new application releases

BIND, REBIND and EXPLAIN …
• EXPLAIN tables
  – Format and CCSID from previous releases is deprecated in V10
    • Cannot use pre-V8 format
      – SQLCODE -20008
    • V8 or V9 format
      – Warning SQLCODE +20520, regardless of CCSID EBCDIC or UNICODE
    • Must not use CCSID EBCDIC with the V10 format
      – EXPLAIN fails with RC=8 DSNT408I SQLCODE = -878
      – BIND with EXPLAIN fails with RC=8 DSNX200I
  – Recommendations
    • Use CCSID UNICODE in all supported releases (V8, V9, V10) due to problems with character truncation and conversion, etc.
    • Use the V10 extended column format with CCSID UNICODE when
      – Applications access EXPLAIN tables and can only tolerate SQLCODE 0 or +100
    • V10 column format is supported under V8 and V9 with the SPE fallback APAR PK85956 applied, with the exception of
      – DSN_STATEMENT_CACHE_TABLE, due to the BIGINT columns
    • APAR PK85068 can help migrate the V8 or V9 format to the new V10 format with CCSID UNICODE

Availability …
• Restart Light enhancement
  – LBACKOUT will now be honoured
    • LBACKOUT=YES|AUTO will cause postponed abort (PA) URs to be created
    • Restart will complete
    • DB2 will shut down
    • Retained locks will be kept on behalf of PA URs
  – Controlled via a new system parameter
  – Also retrofitted back to V9 via APAR
• Online REORG with FORCE
  – Only running threads which are blocking are cancelled
  – Threads which are suspended/inactive will still cause REORG to fail
• Online REORG LOB with DISCARD
  – Cannot handle LOB columns greater than 32KB

Online Migration in 24*7 environment
• Technically possible to run DSNTIJTC and DSNTIJEN alongside well behaved online workloads
  – Jobs use SQL DDL with frequent commits and REORG SHRLEVEL(REFERENCE)
  – Designed to fail gracefully, leaving the DB2 catalog fully operational
  – After problem determination is complete, the respective job can be corrected and resubmitted
  – The respective job will restart from where it left off

Online Migration in 24*7 environment …
• But there are some ‘rules of the game’ and you must be prepared to play
  – DSNTIJTC and DSNTIJEN jobs should be scheduled during a relatively quiet period
  – If non data sharing
    • Must stop all application workload when the DSNTIJTC job is running
  – If data sharing
    • Must route work away from the DB2 member where the DSNTIJTC job is running
    • Must temporarily change the workload balancing and sysplex routing scheme
  – Should temporarily stop all of the following workload types from running
    • SQL DDL, grants and revokes, BIND/REBIND, utilities, monitors
  – All essential business critical workloads that are running should commit frequently
  – Must be prepared to watch and intervene if needed
  – Strong recommendation to perform Pre-Migration Catalog Migration Testing
  – Must be prepared for the DSNTIJTC and/or DSNTIJEN jobs to possibly fail, or for some business transactions to fail

Online Migration in 24*7 environment …
• Some critical maintenance
  – APAR PM62572
    • Undetected lock contention failure during the switch phase of the ENFM REORG step
  – APAR PM58575
    • Autobind triggers deadlock with RTS
• If not prepared to play by the ‘rules of the game’ then take the outage
  – Quiesce all applications
  – Run the DSNTIJTC or DSNTIJEN job with DB2 started ACCESS(MAINT)

Removal of DDF Private Protocol
• Must absolutely eliminate all use of DDF Private Protocol before migrating
  – No longer supported in V10
  – Any local packages mis-tagged with DDF Private Protocol will be tolerated
  – Otherwise the package must exist at both local and remote sites
  – A lot of packages and plans are bound with DBPROTOCOL(PRIVATE) because this was the default (zparm DBPROTCL) when introduced in DB2 V6
    • DSNT226I is issued if DBPROTOCOL(PRIVATE) is used during REBIND
• See Reference Material for additional information on removing Private Protocol

Steps to investigate CPU performance regression
• Comparing CPU performance on V10 relative to V8 or V9
  – More difficult to do in a real customer production environment
    • Uncertainty caused by application changes
    • Fluctuation in the daily application profile, especially the batch flow
  – Must try to normalise things out to ensure workloads are broadly comparable
    • Broadly similar in terms of SQL and getpage profile
    • Usually have to exclude the batch flow
    • Factor out extreme variation
    • Need to look at multiple data points

Steps to investigate CPU performance regression …
• Check that you have the same pattern across releases from a DB2 perspective, based on a combined view of DB2 Statistics and Accounting Traces
• Validate that there has been no access path regression after migration, or from application changes going on at the same time as the migration
• As a starting point, look at
  – Statistics Trace
    • MSTR TCB & SRB, DBM1 TCB, SRB & IIP SRB, IRLM TCB & SRB CPU times
    • Split of CP vs. zIIP for DBM1 is likely to be very different between V9 and V10
  – Accounting
    • For each CONNTYPE
      – Class 2 CPU times on CP and zIIP, numbers of occurrences and commits/rollbacks
    • Workload indicators:
      – DML (split by type: select, insert, update, fetch, etc.)
      – Commits, rollbacks, getpages, buffer updates
      – Read and write activity (#IOs, #pages)
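One simple way to normalise the accounting figures above is CPU per commit: total class 2 CPU (CP plus zIIP) divided by commit count, so intervals with different throughput become comparable across releases. A sketch with hypothetical interval figures:

```python
def cpu_ms_per_commit(cp_secs: float, ziip_secs: float, commits: int) -> float:
    """Normalise class 2 CPU (CP + zIIP) by commit count, in ms/commit.
    Comparing the sum matters because the CP vs. zIIP split shifts in V10."""
    return 1000.0 * (cp_secs + ziip_secs) / commits

# Hypothetical interval figures, for illustration only:
v9 = cpu_ms_per_commit(1200.0, 0.0, 600_000)    # CP only
v10 = cpu_ms_per_commit(1080.0, 60.0, 600_000)  # some work shifted to zIIP
saving = 1.0 - v10 / v9
```

Note that comparing CP time alone would overstate the saving here, since part of the V10 figure has simply moved to the zIIP.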

Steps to investigate CPU performance regression …
• Need to check the workload indicators for the chosen periods
• Similarities between data points for a given version, but big variations between V8/V9 and V10
  – Sign that something has changed from an application or access path perspective
  – More granular analysis of accounting data will be required to pinpoint the specific plan/package

Other
• Ability to create classic partitioned table space (PTS)
  – Classic PTS deprecated in V10
  – By default will be created as UTS PBR
  – UTS will only support table-controlled partitioning syntax
  – Options to be able to explicitly create classic PTS
    • Specify SEGSIZE 0 on CREATE TABLESPACE
    • Set new zparm DPSEGSZ=0 (default 32)
• Fast Log Apply storage
  – System parameter LOGAPSTG is eliminated
  – Total FLA storage is now 510MB
• Old COBOL and PL/I
  – V7 lookalike precompiler (DSNHPC7) for older COBOL and PL/I is still provided
• DDL concurrency after Catalog restructure
  – Some help provided, but concurrency issues not absolutely solved
  – Still deadlocks with parallel heavy DDL against different databases

63.
Other …
• SPT01 compression is back
  – Via system parameter COMPRESS_SPT01=YES (default is NO)
  – Followed by a REORG of SPT01 as part of the ENFM process or later
• Statistics interval
  – IFCIDs 2, 202, 217, 225, 230 are always cut at a fixed 1-minute interval
  – Remember to normalise the data when comparing V8/V9 vs. V10
  – Only the frequency of IFCIDs 105, 106, 199 is controlled via system parameter STATIME
  – Consider setting STATIME to 5 minutes now to reduce SMF data volume
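Because V10 cuts these IFCIDs at a fixed 1-minute interval while V8/V9 used the STATIME interval, raw interval counters are not directly comparable; converting to per-second rates normalises them. The counter values below are invented:

```python
# Sketch: convert statistics-trace interval deltas to per-second rates
# before comparing releases with different cut intervals.
def per_second(delta_count, interval_minutes):
    return delta_count / (interval_minutes * 60)

v9_getpages_per_interval  = 1_500_000   # cut with STATIME=5 (minutes)
v10_getpages_per_interval = 300_000     # cut at the fixed 1-minute interval

print(per_second(v9_getpages_per_interval, 5))   # 5000.0
print(per_second(v10_getpages_per_interval, 1))  # 5000.0 -> same real rate
```

Here the raw counters differ by 5x, yet the underlying getpage rate is identical once the interval length is factored out.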

66.
Other …
• CHAR(dec), VARCHAR(dec), CAST(dec AS CHAR), CAST(dec AS VARCHAR)
  – APAR PM29124 restores compatible behavior for CHAR(dec)
    • Set new system parameter BIF_COMPATIBILITY=V9 (default)
    • Gets the old behavior for the CHAR(dec) function
    • But the new V10 behavior for VARCHAR/CAST
    • New IFCID 366 to identify potentially exposed applications
  – APARs PM66095 and PM65722
    • BIF_COMPATIBILITY=V9 will continue to provide the old behavior for CHAR(dec)
    • BIF_COMPATIBILITY=V9_DECIMAL_VARCHAR will get the old behavior for VARCHAR and CAST also
    • BIF_COMPATIBILITY=CURRENT provides the new behavior for all the built-in functions
  – APAR PM70455 will retrofit IFCID 366 to Version 9
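To see why this matters to applications, here is a rough Python illustration of the kind of difference involved: the old CHAR(decimal) result carried leading zeros, while the new behavior drops them, which can break programs that parse fixed character positions. This is an approximation for illustration only, not the exact DB2 formatting rules (see the SQL Reference for those):

```python
# Illustrative only: mimic old-style (zero-padded) vs. new-style output
# for CHAR of a DECIMAL(precision, scale) value.
def char_dec_v9_style(value, precision, scale):
    digits = f"{abs(value):0{precision + 1}.{scale}f}"   # zero-padded width
    return ("-" if value < 0 else "") + digits

def char_dec_v10_style(value, precision, scale):
    return f"{value:.{scale}f}"                          # no leading zeros

print(char_dec_v9_style(7.7, 6, 2))   # '0007.70'
print(char_dec_v10_style(7.7, 6, 2))  # '7.70'
```

IFCID 366 exists precisely to find applications whose SQL would return different strings under the two settings.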

67.
Summary
• Very good release in terms of the opportunities for price/performance and scalability improvement
  – Significant DBM1 31-bit VSCR after rebind
  – Use long-term page-fixed buffer pools
    • Exploit 1MB real storage page frames on z10 and z196
  – Reduced latch contention, log manager improvements, etc.
  – Opportunity for further price/performance improvements
    • More use of persistent threads
      – CICS, IMS/TM, High Performance DBATs
    • More use of RELEASE(DEALLOCATE) with persistent threads
    • More use of RELEASE(DEALLOCATE) is a trade-off
      – Increased storage consumption
        • Need to plan on additional real memory
      – Reduced concurrency
        • BIND/REBIND and DDL
    • Increase MAXKEEPD to reduce short prepares for dynamic SQL
  – Opportunity for scale-up and LPAR/DB2 consolidation

68.
Summary …
• Carefully plan, provision and monitor real storage consumption
• Any customer migrating from either V8 or V9 to V10 should make a solid plan, take extra care to mitigate the risks, and set themselves up for success
  – Regular full ‘major’ maintenance drops
  – Exploitation of CST/RSU recommended maintenance
  – Augment by regular use of Enhanced HOLDDATA
  – Perform application regression and stress testing to keep ‘fires’ away from production
  – The plan should allow some contingency for ‘bumps in the road’

72.
Security considerations when removing DDF Private Protocol
• This section also applies to customers using DRDA exclusively
• There are fundamental differences in how authorization is performed, based on the distributed protocol used
• Private Protocol (DB2 for z/OS requester)
  – Supports static SQL statements only
  – Plan owner must have authorization to execute all SQL executed on the DB2 server
  – Plan owner is authenticated on the DB2 requester and not on the DB2 server
• DRDA Protocol
  – Supports both static and dynamic SQL statements
  – Primary auth ID and associated secondary auth IDs must have authorization to execute the package and dynamic SQL on the DB2 server
  – Primary auth ID is authenticated, and secondary auth IDs are associated, on the DB2 server
• Prior to V10, Private Protocol and DRDA Protocol could be used by the same application
  – Private Protocol security semantics were used, due to possible inconsistent behavior depending on how programs are coded and executed

73.
Security considerations when removing DDF Private Protocol …
• But prior to V10 there is also a difference in the authorizations required by an incoming DRDA connection at the DB2 for z/OS server, depending on where the connection comes from:
  – Dynamic SQL DRDA connection from DB2 Connect and/or a DB2 client direct connection
    • Connecting userid needs authority to run the appropriate DB2 package and authority to access the DB2 table
  – Dynamic SQL DRDA connection from a DB2 for z/OS requester
    • Connecting userid needs authority to access the DB2 table
    • Originating plan owner needs authority to run the appropriate DB2 package
• It is different for DB2 for z/OS requester to DB2 for z/OS server because connections were designed to use Private Protocol (PP) semantics, to avoid changing authids when switching between PP and DRDA Protocol
• With the disappearance of PP in V10, DB2 decided to bring the DRDA connection from a DB2 for z/OS requester to a DB2 for z/OS server in line with other DRDA requesters, and to change the authorizations required
  – This was retrofitted back into V8 and V9 with APAR PM17665
  – It is very important to distinguish clearly between the behavior of DRDA before and after APAR PM17665

74.
Security considerations when removing DDF Private Protocol …
• APAR PK92339 introduced new zparm PRIVATE_PROTOCOL=YES|NO
  – To prevent future introduction of PP, set PRIVATE_PROTOCOL=NO
• Migrating to V10, or applying APAR PM17665 on V8 or V9, while running with PRIVATE_PROTOCOL=NO introduces the authorization changes at the DB2 for z/OS server for DRDA connections coming from a DB2 for z/OS requester
  – PP security semantics are no longer used by default for access from a DB2 for z/OS requester
  – The plan owner value is ignored, and the connecting userid must be granted authority to execute the package at the remote site
  – Otherwise the connection will fail with SQLCODE -551
• As a result of customer complaints, APAR PM37300 introduces PRIVATE_PROTOCOL=AUTH, which allows an installation to
  – Disable PP but keep the plan owner authorization check (the “private protocol semantics”)
• Migration to V10, or application of the PTF for APAR PM17665, does affect you even if you have everything already bound as DRDA
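The server-side check described above can be modeled conceptually as follows. The function and names are invented for illustration; this is a model of the slide's logic, not actual DB2 internals:

```python
# Conceptual model: who must hold EXECUTE on the package at the server
# for a DRDA connection from a DB2 for z/OS requester, depending on the
# PRIVATE_PROTOCOL setting.
def package_auth_ok(private_protocol, connecting_ids, plan_owner,
                    execute_grants):
    """execute_grants: set of authids granted EXECUTE on the package."""
    if private_protocol == "AUTH":
        # Private-protocol semantics kept: the plan owner is checked.
        return plan_owner in execute_grants
    # PRIVATE_PROTOCOL=NO: plan owner is ignored; the connecting primary
    # or secondary authids must hold EXECUTE, else SQLCODE -551.
    return any(authid in execute_grants for authid in connecting_ids)

grants = {"PLANOWN"}  # only the plan owner was ever granted EXECUTE
print(package_auth_ok("AUTH", {"USER1"}, "PLANOWN", grants))  # True
print(package_auth_ok("NO",   {"USER1"}, "PLANOWN", grants))  # False (-551)
```

This is why sites that only ever granted EXECUTE to plan owners see -551 failures after the change, even with everything already bound as DRDA.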

75.
Security considerations when removing DDF Private Protocol …
• In summary
  – Before disabling private protocol, ensure all appropriate grants are performed
    • Grant the execute privilege to any user who plans to run a package or stored procedure package from a DB2 for z/OS requester, just like other DRDA clients
  – DB2 V8 and V9 can disable private protocol but still maintain private protocol authorization checks by
    • Setting system parameter PRIVATE_PROTOCOL=AUTH
  – DB2 10 does not support private protocol, but can allow private protocol authorization checks for use of DRDA protocol by DB2 for z/OS requesters, by
    • Setting system parameter PRIVATE_PROTOCOL=AUTH
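Preparing those grants can be scripted. A minimal sketch that builds the GRANT EXECUTE statements for a list of connecting users; the collection, package, and user names are placeholders:

```python
# Sketch: generate the GRANT statements needed before disabling private
# protocol, so each connecting userid can execute the package at the
# server. Names below are hypothetical.
def grant_execute_stmts(collection, package, users):
    return [f"GRANT EXECUTE ON PACKAGE {collection}.{package} TO {u};"
            for u in sorted(users)]

for stmt in grant_execute_stmts("MYCOLL", "MYPKG", {"USER2", "USER1"}):
    print(stmt)
```

In practice the user list would come from auditing DDF workload (e.g. via IFCID data) to find every authid that reaches the server from a DB2 for z/OS requester.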