Archive

Query processing performance is at the heart of any RDBMS. SAP ASE uses a cost-based optimizer for query processing: it generates different access paths (query plans) for a query and picks the plan with the lowest estimated cost. That cost is calculated from the data-distribution and statistics information held for a table and its columns.

There are two types of statistics –

Object Level Stats: The object-level statistics describe a table and its indexes and include values such as the number of rows and pages in the table and/or indexes, the number of empty pages, and the cluster ratios, among others. Some object-level statistics are updated automatically by ASE; others only when ‘update statistics’ is run.

Column Level Stats: The column-level statistics describe the distribution of values in a column; they consist of the column’s histogram and density values, and are updated when an index is created or an ‘update statistics’ command is run.
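As a sketch, statistics are refreshed with the `update statistics` family of commands (the table and column names here are placeholders):

```sql
-- Refresh statistics for the leading columns of the table's indexes
update statistics mytable

-- Create/refresh column-level statistics for a specific column,
-- even if that column is not indexed
update statistics mytable (mycolumn)

-- Refresh statistics for all columns of all indexes on the table
update index statistics mytable
```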

System Tables which store these stats:

The systabstats and sysstatistics tables store statistics for all tables, indexes, and any un-indexed columns for which you have explicitly created statistics.

systabstats stores information about the table or index as an object, and is updated by query processing, data definition language, and update statistics commands. Put simply, it stores object-level statistics. These statistics include:

Number of data pages for a table, or the number of leaf level pages for an index.

Number of rows in the table

Height of the index

Average length of data rows and leaf rows

Number of forwarded and deleted rows

Number of empty pages

Statistics to increase the accuracy of I/O cost estimates, including cluster ratios, the number of pages that share an extent with an allocation page, and the number of OAM and allocation pages used for the object

Stopping points for the reorg command so that it can resume processing

systabstats contains one row for each clustered index, one row for each nonclustered index, one row for each table without a clustered index, and one row for each partition.
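A minimal sketch of inspecting these rows directly (the column names used here, such as rowcnt and pagecnt, should be verified against the systabstats catalog for your ASE version):

```sql
-- Row counts, page counts, and empty-page counts for a table
-- and its indexes (indid distinguishes table vs. index rows)
select o.name, s.indid, s.rowcnt, s.pagecnt, s.emptypgcnt
from systabstats s, sysobjects o
where s.id = o.id
  and o.name = "mytable"
```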

The storage for clustered index information depends on the locking scheme for the table:

For data-only-locked tables, systabstats stores an additional row for a clustered index.

For allpages-locked tables, the data pages are treated as the leaf level of the index, so the systabstats entry for a clustered index is stored in the same row as the table data. The indid column for clustered indexes on allpages-locked tables is always 1.

sysstatistics table stores one or more rows for each indexed column on a user table; it also stores statistics for unindexed columns.

The first row for each column stores basic statistics about the column, such as the density for joins and search arguments, the selectivity for some operators, and the number of steps stored in the histogram for the column.

If the index has multiple columns, or if you specify multiple columns when you generate statistics for unindexed columns, there is a row for each prefix subset of columns.

Additional rows store histogram data for the leading column. Histograms do not exist if indexes were created before any data was inserted into a table. To generate a histogram, run update statistics after inserting data.
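The usual way to read these stored statistics (including histograms and densities) in human-readable form is the optdiag utility; the server, login, database, and table names below are placeholders:

```shell
# Dump the statistics for one table to the terminal
optdiag statistics mydb..mytable -Usa -Ppassword -SMYSERVER
```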

Transaction T1 (Session 1)modifies a data item. Another transaction T2 (Session 2)then reads that data item before T1 performs a COMMIT or ROLLBACK. If T1 then performs a ROLLBACK, T2 has read a data item that was never committed and so never really existed.

Now with isolation level 0, session 2 can read the uncommitted data: dirty reads are allowed.

If session 1 then rolls back, session 2 is left with an inconsistent view of the data.
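A minimal two-session sketch of the above (the table and values are illustrative):

```sql
-- Session 1: modify a row inside a transaction, but do not commit yet
begin tran
update accounts set balance = balance - 100 where acct_id = 1

-- Session 2: at isolation level 0, this read does not block and
-- returns the uncommitted balance (a dirty read)
set transaction isolation level 0
select balance from accounts where acct_id = 1

-- Session 1: undo the change; session 2 has now read data that
-- was never committed
rollback tran
```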

Understanding Isolation Level “2”: Avoiding Non-Repeatable Reads

What is a Non-Repeatable Read?

Transaction T1 (session 1) reads a data item. Another transaction T2 (session 2) then modifies or deletes that data item and commits. If T1 then attempts to reread the data item, it receives a modified value or discovers that the data item has been deleted.

Transaction T1 reads a set of data items satisfying some search condition. Transaction T2 then creates data items that satisfy T1’s search condition and commits. If T1 then repeats its read with the same search condition, it gets a set of data items different from the first read.
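A sketch of how isolation level 2 prevents a non-repeatable read (the table and values are illustrative):

```sql
-- Session 1: at level 2, shared locks on rows read are held
-- until the end of the transaction
set transaction isolation level 2
begin tran
select balance from accounts where acct_id = 1

-- Session 2: this update blocks until session 1 ends its
-- transaction, so the value cannot change underneath session 1
update accounts set balance = 0 where acct_id = 1

-- Session 1: re-reading returns the same value as the first read
select balance from accounts where acct_id = 1
commit tran
```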

SAP

Located in Walldorf, Germany, SAP (www.sap.com) has several DBMS products that are used for transaction systems: SAP Sybase Adaptive Server Enterprise (ASE), SAP Sybase iAnywhere, and SAP HANA. Both ASE and iAnywhere are available as software only, while SAP HANA is marketed as an appliance.

In less than 18 months since the offering’s release in April 2012, more than 1,000 customers have chosen to run SAP Business Suite on SAP Sybase ASE and there are more than 2,000 customer installations. Both new and existing SAP customers can run a high-performance relational database management system (RDBMS) optimized for SAP Business Suite that helps improve operational efficiency and significantly reduce overall costs. The announcement was made at the SAP Database and Technology Partner Summit in Barcelona.

Businesses and people want fast, exact, and correct answers to all their questions from this massive amount of data.

Challenge 3: Current technologies cannot keep up with this massive data growth.

Historical DBMS :

Historically, database systems were designed to perform well on computers with limited RAM, which meant that slow disk I/O was the main bottleneck in data throughput. Consequently, the architecture of these systems focused on optimizing disk access, e.g. by minimizing the number of disk blocks (or pages) to be read into main memory when processing a query.

New Hardware Architecture (up to 128 or more CPU cores and 2 TB of RAM)

Computer architecture has changed in recent years. Multi-core CPUs (multiple CPUs on one chip or in one package) are now standard, with fast communication between processor cores enabling parallel processing. Main memory is no longer a limited resource: modern servers can have 1 TB of system memory, which allows complete databases to be held in RAM. Current server processors have up to 80 cores, and 128 cores will soon be available. With the increasing number of cores, CPUs can process more data per time interval, which shifts the performance bottleneck from disk I/O to the data transfer between the CPU cache and main memory.

The Need for In-Memory Technology: SAP HANA

From the discussion above it is clear that traditional databases may not use current hardware efficiently, and may not be able to fulfill current and future business needs.

The SAP HANA database is a relational database that has been optimized to leverage state of the art hardware. It provides all of the SQL features of a standard relational database along with a feature rich set of analytical capabilities.

Using groundbreaking in-memory hardware and software, HANA can manage data at massive scale, analyze it at amazing speed, and give the business not only instant access to real-time transactional information and analysis but also more flexibility: flexibility to analyze new types of data in different ways without creating custom data warehouses and data marts, and even the flexibility to build new applications that were not possible before.

Always migrate the development environment first, then UAT. Before moving to production, perform regression testing in the UAT environment.

Consider creating a script that performs update statistics and xp_postload (or drops and re-creates indexes) for each and every database.

Steps for an ASE database (repeat the same steps for the other databases):

Step 1: Run consistency checks on the ASE database in the source (AIX) environment to make sure everything is fine.

Step 2: Put the database in single-user mode.

Step 3: Make sure there is no user activity on the source database.

Step 4: Run sp_flushstats in the database.

Step 5: Take a backup (dump) of the database in the source (AIX) environment.

Step 6: FTP the dump files to the target environment (AIX to Linux).

Step 7: Create and build the dataserver and databases in the target Linux environment with exactly the same configuration. You may need to change some configuration parameters in the Linux environment for performance reasons (out of scope here).

Step 8: Migrate the logins and roles from the source server to the target server.

Step 9: Load the database in the Linux environment. (If there was user activity during the dump, the load will fail.)

Step 10: Online the database. If the target ASE version is newer than the source, the upgrade is also performed in this step.

Step 11: Fix the corrupt indexes using xp_postload. If the database is larger than about 20 GB, drop and re-create the indexes instead; at that size xp_postload is not effective.

Step 12: Update statistics on all tables.

Step 13: If replication is set up in your environment, re-establish it after these steps.
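The dump/load portion of the steps above can be sketched as follows (the database name and paths are placeholders, and the exact dump-device syntax depends on your setup; the index-fix procedure is documented as sp_post_xpload, referred to as xp_postload in this post):

```sql
-- Source (AIX) server: quiesce the database, flush in-memory
-- statistics, and take the dump
sp_dboption mydb, "single user", true
use mydb
sp_flushstats
dump database mydb to "/backups/mydb.dmp"

-- Target (Linux) server: load and bring online; the cross-platform
-- conversion (and any version upgrade) happens during online
load database mydb from "/backups/mydb.dmp"
online database mydb

-- Rebuild indexes invalidated by the byte-order change
sp_post_xpload
```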

Issue Faced:

1. If any user was active during the backup process, the load will fail (at the cross-platform conversion step).

2. After onlining the database, we saw negative values in the sp_helpdb output for a few databases. There are two ways to fix this:

i) Try to run dbcc

dbcc usedextents(<DB name or DB ID>, 0, 1, 1)

ii) Add trace flags 7408 and 7409 to the RUN server file and reboot the instance. This takes much less time than the first option.

By default, ASE does not allow you to lock the last unlocked login that has the sa_role/sso_role. However, such a login can be locked indirectly if a role has an explicit password set that must be supplied to enable the role at login.

If a role is altered to have a limitation on failed attempts, and a login attempts to enable the role and fails the required number of times, the role is locked for all holders of the role. Likewise, since we can’t explicitly lock the last unlocked login with sa_role or sso_role, it is possible for failed login attempts to indirectly lock that login.
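A sketch of the commands involved (the role, login, and password values are illustrative, and the exact `alter role` options should be verified against your ASE version's documentation):

```sql
-- Attach a password to sso_role: holders must supply it to enable the role
alter role sso_role add passwd "S3cret!"

-- Limit failed attempts to enable the role; exceeding the limit
-- locks the role for all of its holders (syntax may vary by version)
alter role sso_role set max failed_logins 3

-- Enabling the role with a wrong password counts as a failed attempt
set role sso_role with passwd "wrong_guess" on

-- Explicitly locking a login; ASE refuses this for the last
-- unlocked login holding sa_role/sso_role
sp_locklogin "some_login", "lock"
```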

Case: A user X owns a stored procedure that accesses a table owned by another user Y, and that table has a trigger that updates another table owned by user Z.

Clumsy Solution –> Grant explicit permissions to user X on table1 and table2, and similarly grant execute permissions and table access to users Y and Z. In a production environment, with many users in a database and thousands of objects, this does not scale.

Good Solution –> Create all objects as the dbo user and grant execute permission ONLY on the stored procedures to user X. When user X executes the stored procedure, the procedure can modify table1, and the trigger on table1 can modify table2, even though explicit access on the tables has not been granted to user X. This is how stored procedures maintain security: user X cannot access table1 and table2 directly, but a stored procedure run by user X can modify those tables.
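A minimal sketch of this dbo-owned ownership chain (the object and user names are illustrative):

```sql
-- All objects owned by dbo, so the ownership chain is unbroken
create table dbo.table1 (id int, val int)
go
create procedure dbo.update_table1 @id int, @val int
as
    update dbo.table1 set val = @val where id = @id
go

-- User X gets execute on the procedure only, no table permissions
grant execute on dbo.update_table1 to userX
go

-- As userX: this succeeds via the ownership chain...
exec dbo.update_table1 1, 42

-- ...while direct access to the table is denied
update dbo.table1 set val = 0 where id = 1  -- fails with a permission error
```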


Contributors

Anurag has more than 6 years of experience in Sybase database development. His areas of expertise include performance, query optimization, cost optimization, and T-SQL development. He also consults for financial firms on database implementation and maintenance, and has supported many global financial firms. AnVa (founder) has more than 6 years of experience in Sybase ASE/Replication Server database administration. His areas of interest are ASE implementation and maintenance, performance tuning, Sybase HA, shared cluster, and replication; he is also exploring in-memory databases (HANA), Big Data, Hadoop, and Java. He supports the Sybase team forum and started sybaseblog.com in late 2009. Andrew is a guest blogger on sybaseblog.com and has been working with Sybase for more than a decade, mostly ASE and RS.
He has written many tools that help manage and monitor ASE and Replication systems, with more on the way.
He is also a poet and loves to be creative all the way.


Disclaimer

This blog is neither sponsored by nor in any way affiliated with SAP or Sybase, Inc. In any case, neither the authors nor SAP or Sybase, Inc. assume any responsibility for errors in the blog postings, nor do they have any liability for damages resulting from the use of blog postings. If you use any supported or unsupported commands/features listed in this blog, you do so entirely at your own risk; sybaseblog.com does not accept any liability.

This blog is for sharing purposes only. The opinions expressed here are the bloggers' own and not those of any past or present employer.