Archive

We see various kinds of enterprise-level application software around us, used across industries such as technology, business, research, finance and many more. But all of these applications run on a core infrastructure: hardware, storage (internal/external) and one specialized piece of software, the operating system (OS). We all know the OS is specialized software that provides the environment, or you can say facilitates other application software, so that applications can run on shared resources with optimal performance. In a nutshell, all application software depends on one specialized piece of software – the operating system. And finally, the operating system is a layer on top of the hardware.

If you look closely, these three form a tightly coupled system. Why tightly coupled? Suppose there is an error in the hardware; that error can impact the operating system, and the operating system in turn will affect the application software. Any error in the operating system can lead to a shutdown of the application software. In other words, an error in a hardware component will propagate to the OS and finally to the application package, and similarly an error in the OS will propagate to the application software. We come across these situations due to a hardware error or a bug in some library of the operating system – in the end the application software is hit, and we have to take downtime for the application. The application package is highly exposed to OS as well as hardware problems.

Now the question is: can I think of a system where these three components are loosely coupled? Can we build a system where we can reboot the operating system and the processes running on top of it stay intact? Is this really possible? Could we replace a piece of hardware without any interruption to the operating system and application software? Looks like a weird thought, na? But can we really imagine a system like this? Does the cloud provide this kind of infrastructure to us? We definitely need this kind of system in this fast, demanding world, and we need to come up with something in this direction.

Now the question is – is that really possible? I am not sure whether it is possible or not, but we definitely have a problem with this tightly coupled architecture, and we definitely need some brainstorming on it. One potential thought: can we transport a process from one system to another before taking downtime on the first system? And this was the title of the discussion. Can we transport processes from one system to another without much impact on the application package? Can we make the top layer – the application software layer – independent of the OS and hardware? This is one possible pointer; we can have many more like this, isn't it? What do you think?

Query processing and its performance are at the heart of any RDBMS. SAP ASE uses a cost-based optimizer for query processing and optimization: it creates different access paths (query plans) for a query and picks the plan with the least cost. This query plan cost is calculated from the data distribution and statistics information of a table and its columns.
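To see the plan the optimizer actually picked, you can ask ASE to print it with showplan; a minimal sketch, assuming a hypothetical table `titles`:

```sql
-- Display the access path the optimizer chose for the next queries
set showplan on
go

-- The plan output shows whether an index or a table scan was picked
select title, price from titles where price > 20
go

set showplan off
go
```

Comparing the showplan output before and after refreshing statistics is a quick way to see how statistics drive plan choice.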

There are two types of statistics –

Object Level Stats: Object level statistics describe a table and its indexes and include values such as the number of rows and pages in the table and/or index(es), the number of empty pages, and the cluster ratios, among others. Some of the object level statistics are updated automatically by ASE; others are updated when 'update statistics' is run.

Column Level Stats: Column level statistics describe the distribution of values in the column; they consist of the column's histogram and density values, and are updated when an index is created or an 'update statistics' command is run.
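Both kinds are refreshed with the `update statistics` family of commands; a minimal sketch, assuming a hypothetical table `titles` with an unindexed column `pub_date`:

```sql
-- Refresh statistics for the table and the leading columns of its indexes
update statistics titles
go

-- Create/refresh column level statistics for an unindexed column
update statistics titles (pub_date)
go

-- Refresh statistics for all columns of every index on the table
update index statistics titles
go
```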

System Tables which store these stats:

The systabstats and sysstatistics tables store statistics for all tables, indexes, and any un-indexed columns for which you have explicitly created statistics.

systabstats stores information about the table or index as an object, and is updated by query processing, data definition language, and update statistics commands. Simply put, it stores object level statistics. These statistics include:

Number of data pages for a table, or the number of leaf level pages for an index.

Number of rows in the table

Height of the index

Average length of data rows and leaf rows

Number of forwarded and deleted rows

Number of empty pages

Statistics to increase the accuracy of I/O cost estimates, including cluster ratios, the number of pages that share an extent with an allocation page, and the number of OAM and allocation pages used for the object

Stopping points for the reorg command so that it can resume processing

systabstats contains one row for each clustered index, one row for each nonclustered index, one row for each table without a clustered index, and one row for each partition.

The storage for clustered index information depends on the locking scheme for the table:

For data-only-locked tables, systabstats stores an additional row for a clustered index.

For allpages-locked tables, the data pages are treated as the leaf level of the index, so the systabstats entry for a clustered index is stored in the same row as the table data. The indid column for clustered indexes on allpages-locked tables is always 1.

sysstatistics table stores one or more rows for each indexed column on a user table; it also stores statistics for unindexed columns.

The first row for each column stores basic statistics about the column, such as the density for joins and search arguments, the selectivity for some operators, and the number of steps stored in the histogram for the column.

If the index has multiple columns, or if you specify multiple columns when you generate statistics for unindexed columns, there is a row for each prefix subset of columns.

Additional rows store histogram data for the leading column. Histograms do not exist if indexes were created before any data was inserted into a table. To generate a histogram, run update statistics after inserting data.
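You can peek at the object level statistics directly (optdiag is the supported tool for viewing and loading them); a hedged sketch, assuming a hypothetical table `titles` — verify the column names against your ASE version:

```sql
-- Object level statistics for one table: one row per index/partition
-- (column names taken from ASE 15.x systabstats; check your version)
select object_name(id) as tab_name,
       indid,
       rowcnt,        -- number of rows
       emptypgcnt,    -- number of empty pages
       forwrowcnt,    -- forwarded rows (data-only-locked tables)
       delrowcnt      -- deleted rows
from systabstats
where id = object_id('titles')
go
```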

Scalability, speed, security, simplicity… these were the guiding principles for our engineers. We've increased scalability and speed with extensive optimization in transaction concurrency management, query plan execution, data compression, and the utilization of computing resources in large SMP servers. Security enforcement and system auditability have been augmented to give customers more flexibility to adapt to their specific regulatory compliance needs. And SAP Control Center delivers simplified database management, helping to reduce the overall cost of ownership.

Memory management internally uses the TLB cache to map virtual addresses to physical addresses. If the translation is not in the TLB cache (a TLB miss, which is more likely when the page size is small), the hardware must refer to the page table. Page table lookups are costly compared to TLB cache lookups. That is the reason applications with heavy memory demands (like databases) can be configured to use huge TLB pages, so that page table accesses are reduced and overall application performance is increased. Linux has had support for huge pages since around 2003, where it was mainly used for large shared memory segments in database servers. ASE database performance can be increased by roughly 2-7% by using huge pages on the Linux platform. You can check huge page support on Linux:
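A minimal check, assuming a Linux system with /proc mounted:

```shell
# List the kernel's huge page counters and the configured huge page size
grep -i '^huge' /proc/meminfo
# HugePages_Total / HugePages_Free describe the reserved huge page pool;
# Hugepagesize is the size of one huge page (typically 2048 kB on x86_64)
```

If the `HugePages_*` lines are missing entirely, the running kernel was built without HugeTLB support.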

From a memory management perspective, the entire physical memory is divided into "frames" and the virtual memory is divided into "pages". The memory management unit performs a translation of virtual memory address to physical memory address. The information regarding which virtual memory page maps to which physical frame is kept in a data structure called the "Page Table". Page table lookups are costly. In order to avoid performance hits due to this lookup, a fast lookup cache called the Translation Lookaside Buffer (TLB) is maintained by most architectures. This lookup cache contains the virtual memory address to physical memory address mapping. So any virtual memory address which requires translation to the physical memory address is first compared with the translation lookaside buffer for a valid mapping. When a valid address translation is not present in the TLB, it is called a "TLB miss". If a TLB miss occurs, the memory management unit will have to refer to the page tables to get the translation. This brings additional performance costs, hence it is important that we try to reduce the TLB misses.

On normal configurations of x86 based machines, the page size is 4K, but the hardware offers support for pages which are larger in size. For example, on x86 32-bit machines (Pentiums and later) there is support for 2MB and 4MB pages. Other architectures such as IA64 support multiple page sizes. In the past Linux did not support large pages, but with the advent of the HugeTLB feature in the Linux kernel, applications can now benefit from large pages. By using large pages, the TLB misses are reduced, because when the page size is large, a single TLB entry can span a larger memory area. Applications which have heavy memory demands, such as database applications, HPC applications, etc., can potentially benefit from this.

Memory management uses the Translation Lookaside Buffer (TLB) cache to map virtual to physical addresses. The amount of memory that can be translated by this cache is referred to as the "TLB reach" and depends on the size of the page and the number of TLB entries.

If the TLB miss time is a large percentage of overall program execution, then time should be invested to reduce the miss rate and achieve better performance. Using more than one page size (huge pages) was identified in the 1990s as one means of reducing the time spent servicing TLB misses by increasing TLB reach. Broadly speaking, database workloads gain about 2-7% performance using huge pages, whereas scientific workloads can range between 1% and 45%. Huge pages are not a universal gain, so transparent support for huge pages is limited in mainstream operating systems; it is possible that huge pages will be slower if the workload's reference pattern is very sparse, making only a small number of references per huge page. Many modern operating systems, including Linux, support huge pages in a more explicit fashion, although this does not necessarily mandate application change. Linux has had support for huge pages since around 2003, where it was mainly used for large shared memory segments in database servers such as Oracle and DB2.
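Sizing the huge page pool is simple arithmetic: divide the shared memory the server needs by the huge page size. A minimal sketch, assuming a server that needs 4 GB of shared memory and a 2 MB huge page size (check Hugepagesize in /proc/meminfo on your system):

```shell
# Shared memory the database server needs, in MB (assumed value)
SHMEM_MB=4096
# Huge page size in MB (2 MB is typical on x86_64; verify on your host)
HUGEPAGE_MB=2

# Number of huge pages to reserve for the pool
NR_HUGEPAGES=$((SHMEM_MB / HUGEPAGE_MB))
echo "$NR_HUGEPAGES"

# As root you would then reserve the pool with:
#   sysctl -w vm.nr_hugepages=$NR_HUGEPAGES
# (or set vm.nr_hugepages in /etc/sysctl.conf to make it persistent)
```

Reserving the pool early, before memory fragments, makes it far more likely the kernel can actually allocate the contiguous pages.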

Sqsh (pronounced skwish) is short for SQshelL (pronounced s-q-shell); it is intended as a replacement for the venerable 'isql' program supplied by Sybase.

Sqsh is much more than a nice prompt; it is intended to provide much of the functionality provided by a good shell, such as variables, aliasing, redirection, pipes, backgrounding, job control, history, command substitution, and dynamic configuration. Also, as a by-product of the design, it is remarkably easy to extend and add functionality to.

Sqsh was developed by Scott C. Gray, and is currently maintained by Michael Peppler (mpeppler@peppler.org) and also by Martin Wesdorp (mwesdorp@users.sourceforge.net).

Transaction T1 (Session 1) modifies a data item. Another transaction T2 (Session 2) then reads that data item before T1 performs a COMMIT or ROLLBACK. If T1 then performs a ROLLBACK, T2 has read a data item that was never committed and so never really existed.

Now with isolation level 0, I am trying to read data, and it allows dirty reads. If session 1 then issues a rollback, session 2 has read inconsistent data that was never committed.
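The scenario can be sketched with two isql sessions; a minimal sketch, assuming a hypothetical table `accounts`:

```sql
-- Session 1: modify a row inside an open transaction (no commit yet)
begin tran
update accounts set balance = balance - 100 where acct_id = 1
go

-- Session 2: at isolation level 0 this read does not block on session 1's
-- exclusive lock and sees the uncommitted value (a dirty read)
set transaction isolation level 0
select balance from accounts where acct_id = 1
go

-- Session 1: roll back; the value session 2 just read never really existed
rollback
go
```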

Understanding Isolation Level "2": Preventing Non-Repeatable Reads

What is a Non-Repeatable Read?

Transaction T1 (session 1) reads a data item. Another transaction T2 (session 2) then modifies or deletes that data item and commits. If T1 then attempts to reread the data item, it receives a modified value or discovers that the data item has been deleted.

Transaction T1 reads a set of data items satisfying some search condition. Transaction T2 then creates data items that satisfy T1's search condition and commits. If T1 then repeats its read with the same search condition, it gets a set of data items different from the first read (a phantom read).
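At isolation level 2, the reader's share locks are held until the end of the transaction, so a second transaction's update blocks and the first transaction can re-read the same value. A hedged sketch, assuming the same hypothetical `accounts` table:

```sql
-- Session 1: read at isolation level 2; share locks are held to end of tran
set transaction isolation level 2
begin tran
select balance from accounts where acct_id = 1
go

-- Session 2: this update now blocks until session 1 commits or rolls back
update accounts set balance = 0 where acct_id = 1
go

-- Session 1: the re-read returns the same value; commit releases the locks
select balance from accounts where acct_id = 1
commit
go
```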

SAP

Located in Walldorf, Germany, SAP (www.sap.com) has several DBMS products that are used for transaction systems: SAP Sybase Adaptive Server Enterprise (ASE), SAP Sybase iAnywhere and SAP HANA. Both ASE and iAnywhere are available as software only, while SAP HANA is marketed as an appliance.

One fine morning you want to see the code for a stored procedure, but what happens if the text of the proc is hidden through sp_hidetext?
I am sure you guys know that once a stored proc is hidden through sp_hidetext, this cannot be reversed.

This gets even worse if you do not have an OS-level file backup of that proc anywhere on your server machines or local machines. Even your DBA friends do not have a backup on their desktops/laptops. This seems to be a very rare scenario, but sometimes it happens.
I was in this challenging scenario … and was able to find a crack to unhide the code for a hidden proc. Sharing on this forum.
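For context, this is how the text gets hidden in the first place; a minimal sketch with a hypothetical procedure name — remember that sp_hidetext is irreversible, so try it only on a throwaway object:

```sql
-- Before hiding, the source is visible
exec sp_helptext 'my_proc'
go

-- Hide the source text (IRREVERSIBLE -- keep a file backup first!)
exec sp_hidetext 'my_proc'
go

-- Now sp_helptext reports that the text is hidden
exec sp_helptext 'my_proc'
go
```

The moral of the story: always keep the DDL for your procs under version control before anyone runs sp_hidetext.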


Contributors

Anurag has more than 6 years of experience in Sybase database development. His areas of expertise include performance, query optimization, cost optimization, and T-SQL development. He is also involved in consultancy to financial firms for database implementation and maintenance, and has supported many global financial firms.

AnVa (Founder) has more than 6 years of experience in Sybase ASE/REP database administration. His areas of interest are ASE implementation and maintenance, performance tuning, Sybase HA, shared cluster and replication. He is also exploring in-memory databases (HANA), Big Data, Hadoop and Java. He also supports the Forum Sybase Team and started sybaseblog.com in late 2009.

Andrew is a guest blogger on sybaseblog.com. He has been working with Sybase for more than a decade, ASE & RS mostly.
He has written a lot of tools that help to manage/monitor ASE and Replication systems, and some tools are on the way.
He is also a poet and loves to be creative all the way.

Admin

Disclaimer

This blog is neither sponsored by nor in any way affiliated with SAP & Sybase, Inc. In any case, neither the authors nor SAP & Sybase, Inc. assume any responsibility for errors in the blog postings, nor do they have any liability for damages resulting from the use of blog postings. If you use any supported or unsupported commands/features listed in the blog, this is completely at your own risk; sybaseblog.com does not carry any liability.

This blog is for sharing purposes only. The opinions expressed here represent the bloggers' own and not those of any past or present employer.