That is 24GB. It is not the biggest one I have seen, but it is up there in the top 3.

Not so many versions ago, it was not unusual to have a system tablespace of less than 128MB in size. In fact, if you did not use stored PL/SQL this was probably the case up to and through 8i and into 9i. So – it’s PL/SQL causing the bloat? No, probably not. Ahhh, what about all that AWR automatic real-time monitoring gubbins you hear so much about? Nope, that goes into SYSAUX.

No, it’s probably down to one or both of two internal tables, SYS.AUD$ and SYS.HISTGRM$.

Let’s check out the biggest objects in this particular SYSTEM tablespace:
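The exact listing I ran is not reproduced here, but a query along these lines against DBA_SEGMENTS {run as a suitably privileged user} gives the same information:

```sql
-- Top 10 segments in the SYSTEM tablespace, largest first
SELECT owner, segment_name, segment_type,
       ROUND(bytes/1024/1024) AS size_mb
FROM  (SELECT owner, segment_name, segment_type, bytes
       FROM   dba_segments
       WHERE  tablespace_name = 'SYSTEM'
       ORDER  BY bytes DESC)
WHERE  ROWNUM <= 10;
```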

To be a little obtuse, I’ll start with the second largest object, SYS.AUD$, which you can see is getting close to 6GB in size {Oh, and SYS.I_AUD1, the fourth largest object, is the PK index on AUD$}.

AUD$ is the underlying table that holds all of the system auditing information that is turned on and off with the SQL AUDIT command. You can use AUDIT to monitor particular users, changes to system privileges, access to key tables or particular activity like table changes. Go check the SQL manual if you want to know more.
The records for this captured activity all go into this table. On this particular system a few key privileges and logons by certain users are the only things being audited, but the data has built up over 3 or 4 years.
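For illustration, auditing of the kinds mentioned above is turned on with statements like these {SCOTT and HR.EMPLOYEES are stand-in names, not from the system in question}:

```sql
-- Record logons by one particular user
AUDIT CREATE SESSION BY scott;

-- Record use of a system privilege
AUDIT ALTER SYSTEM;

-- Record each access to a key table
AUDIT SELECT ON hr.employees BY ACCESS;
```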

On version 9 and 10 the options for managing this table are…poor. Basically, you have to do it yourself. I think I am right to say that it is the only SYS-owned table that Oracle Corp say it is OK for you to move into another tablespace yourself and that you are allowed to delete and truncate data from. If you AUDIT a lot of things or activities you probably need to delete some data. {If you AUDIT a lot of things or activities and you don’t delete any of the data, you probably did not turn on AUDIT very long ago or have not checked the size of your SYSTEM tablespace for a while…}.
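The housekeeping itself is just a delete – something like this, though the 12-month retention is only an example and TIMESTAMP# is a DATE in these versions:

```sql
-- Remove audit records older than roughly a year
DELETE FROM sys.aud$
WHERE  timestamp# < SYSDATE - 365;

COMMIT;
```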

Oh, and one final thing on the AUD$ table. It is lacking indexes – it has only one {ignoring LOB indexes}. This is a good thing as indexes slow down inserts quite significantly {I use a rule of thumb that if it takes “N”ms to insert a record into a table, it will take “2.2*N”ms if there is an index on the table, “3.4*N”ms if there are two indexes, “4.6*N”ms if there are three… Get the idea? However, I have not checked this for real in years. Treat this statement as a potential myth}. I would recommend you do not add further indexes to AUD$. If you need to investigate the information in there, I would strongly suggest you extract what you want into a new table with a CTAS statement, then index that copy table and do your work on that. You do not want to further slow down everything that is already slowed down a little by being audited.
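A sketch of that CTAS approach {table and index names are made up, and note that on versions where AUD$ still has LONG columns you would need a column list rather than SELECT *}:

```sql
-- Copy out just the period you want to investigate
CREATE TABLE aud_work AS
SELECT *
FROM   sys.aud$
WHERE  timestamp# > SYSDATE - 7;

-- Index the copy, not AUD$ itself
CREATE INDEX aud_work_usr_ts ON aud_work (userid, timestamp#);
```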

What about the largest thing in the SYSTEM tablespace? It’s SYS.C_OBJ#_INTCOL# and is over 13GB in size. I said it would be SYS.HISTGRM$ didn’t I?

Well, SYS.C_OBJ#_INTCOL# is a cluster. Clusters are structures, segments, that hold one or more tables. All records for the same key across all those tables are held together, in the same block or blocks. The key is either a normal index or a hash. The idea is that it speeds up selecting records from different tables with the same key – they are all in the same block or blocks. It does work too; clusters seem to have fallen out of favour with Oracle designers, but they are worth considering.

So, SYS.HISTGRM$ is the only table in the cluster. It {and its cluster index, on columns OBJ# and INTCOL#} are hiding in this cluster segment. It IS the biggest thing in this particular SYSTEM tablespace. The only index, other than the cluster index, on the table SYS.HISTGRM$ is I_H_OBJ#_COL# and is the third largest object in the SYSTEM tablespace.
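You can confirm which indexes sit on the cluster and the table with something like this {for the cluster index, DBA_INDEXES reports the cluster name in TABLE_NAME}:

```sql
SELECT index_name, table_name, index_type
FROM   dba_indexes
WHERE  owner = 'SYS'
AND    table_name IN ('C_OBJ#_INTCOL#', 'HISTGRM$');
```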

What is HISTGRM$? Do you think it looks like it could be something to do with column Histograms? Well it is; it’s the underlying data dictionary table that holds column histograms. It started to become common to see a large HISTGRM$ table with Oracle 9 systems with many partitions {and of course subpartitions, if used}, where statistics were gathered on these partitioned tables aggressively and included a METHOD_OPT clause that causes column statistics to be collected, e.g. FOR ALL INDEXED COLUMNS SIZE 200 or FOR ALL COLUMNS SIZE 100.
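That sort of aggressive gathering looks something like this {schema and table names are invented for the example}:

```sql
BEGIN
  dbms_stats.gather_table_stats(
    ownname     => 'APP_OWNER',                         -- hypothetical schema
    tabname     => 'BIG_PARTITIONED_TABLE',             -- hypothetical table
    granularity => 'ALL',                               -- partitions and subpartitions too
    method_opt  => 'FOR ALL INDEXED COLUMNS SIZE 200'); -- lots of histogram buckets
END;
/
```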

Where most people have seen it balloon in size though is with Oracle 10 and the introduction of…the Automated Stats gathering job {so, there is a connection to previous recent posts :-) }.
Every night, every weekend, any partition or sub-partition that is 10% or more different to when stats were last gathered gets stats gathered. The automatic job gathers stats with METHOD_OPT=AUTO, which means your Oracle software decides which columns to gather histogram stats on and how many buckets it collects.

Whoever wrote that part of the stats gathering feature really liked to gather information.

If you have lots of partitions {or sub-partitions} and gather stats with the METHOD_OPT=AUTO, the two factors added together result in a massive HISTGRM$ table and thus a massive cluster.
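If you want to gauge how big the histogram data has become on your own system, a couple of quick checks {each histogram bucket endpoint is a row}:

```sql
-- Raw row count in the underlying dictionary table
SELECT COUNT(*) FROM sys.histgrm$;

-- Which tables account for the most buckets
SELECT owner, table_name, COUNT(*) AS buckets
FROM   dba_tab_histograms
GROUP  BY owner, table_name
ORDER  BY buckets DESC;
```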

Is space important? No, of course not, disc acreage is cheap, just buy bigger discs ;-). {That, of course, was a highly questionable suggestion}.
Being serious, I’m personally not too concerned about my SYSTEM tablespace being that big; after all, compared to the size of databases that use thousands of partitions, 32GB is usually not a major concern. But I know of people who did not expect their SYSTEM tablespace to get that big and have had issues with simply running out of storage available to them.

I am more concerned about the impact on parse times and general data dictionary performance of such large objects. I’ve found the data dictionary can be very slow for some queries when there are large numbers of extents, objects, columns…Histograms are just in there with the mix. I’ve had to work around such slow data dictionary access several times in the past and I suspect {though I have not had time to support this with evidence in any way} that parsing is slowed down by unnecessary histograms. As an example of this, which actually has nothing to do with SYS.HISTGRM$, if you look back up this post at the code listing the indexes on the cluster and table, the code took between a second and two seconds to run. Normally it takes less than 1/10 of a second.

Can you do anything about the size of these objects? Well, with SYS.AUD$ Oracle allow you to delete from it and move the table to a new tablespace. If you use the SQL AUDIT command, you should include housekeeping the SYS.AUD$ table in your overall implementation.
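The relocation is a one-off, along these lines {AUDIT_TS is a made-up tablespace name; if your version has LOB columns on AUD$ you will need LOB storage clauses on the move as well}:

```sql
-- Move the table out of SYSTEM, then rebuild its index there too
ALTER TABLE sys.aud$ MOVE TABLESPACE audit_ts;
ALTER INDEX sys.i_aud1 REBUILD TABLESPACE audit_ts;
```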

SYS.C_OBJ#_INTCOL# and the SYS.HISTGRM$ table in it are slightly trickier. It is not easy to reduce the space used by these objects and thus the SYSTEM tablespace once they have grown large, but you can stop them getting bigger by collecting fewer column histogram statistics on your partitioned tables. Obviously that is not as simple as it sounds, but many sites with large numbers of objects do turn off the automated stats gathering and run their own code.

I’m not so sure Tanel looks my way very often, but I’d love to have some input from him. I figure I should spend a few days checking out slow data dictionary speed but it’s just going to have to be added to my TO DO list for now!

I believe your environment has partitions being added regularly and, as a result, the cluster SYS.C_OBJ#_INTCOL# is increasing in size. With respect to stats gathering, instead of stopping GATHER_STATS_JOB, I think we can disable the histogram collection by modifying the job.

Well, yes, an active system with thousands of partitions is going to be gaining partitions as time goes by and new partitions are created or split off to hold the new data. That is part of why the HISTGRM$ table and its cluster increase in size over time.

You can change the default value for METHOD_OPT with the DBMS_STATS.SET_PARAM procedure, which will change it for every segment analysed by the automated stats job and for any DBMS_STATS.GATHER_XXX call where you do not state it explicitly. It’s hard to see a single other fixed value that will work for the whole database, especially if it is large and complex, which is why “FOR ALL COLUMNS SIZE AUTO” is the default – but it is not a good solution for these big and/or complex databases.
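For example, in SQL*Plus {the new value shown is just one possibility – SIZE 1 means no histograms at all, which is too blunt for many systems}:

```sql
-- Check the current database-wide default first
SELECT dbms_stats.get_param('METHOD_OPT') FROM dual;

-- Then change it
EXEC dbms_stats.set_param('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
```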

Sadly, the only good option is to look at the database system as a whole and decide on a stats gathering methodology that works for your system, which I admit is not a trivial job. It’s just one of the long list of tasks that a modern DBA has to fit in :-)

Thanks Amit. Yes, Oracle say you have to do the little task of recreating the database! One wonders why you can’t copy the table, drop the original and rename the copy. Maybe the cluster makes that tricky, or Oracle makes use of internal pointers (even rowids) somewhere inside, or some other internal trick means it just does not work. But it doesn’t matter: if Oracle don’t support it you should not be doing it!

In order to avoid the block checksum of the SYSTEM tablespace, we intend to move AUD$ and FGA_LOG$ onto SYSAUX. We would like to partition AUD$ as it seems normal to manage big tables with partition drops rather than huge deletes. However FGA_LOG$ cannot be partitioned due to a LONG column (col PLHOL). Thanks Oracle Corp, now we are forced into millions of row deletes.

Funny you should mention partitioning AUD$ Bernard. Yesterday, I was talking to a client about relocating AUD$ and putting in place some housekeeping and they suggested partitioning AUD$. I said I would rather not partition it as I don’t like the idea of doing anything to an internal table that I don’t have to do.
But as all that Oracle does is write to AUD$, would partitioning on TIMESTAMP# {which is a DATE, thanks Larry E} be a sensible thing to do? It should not slow down the inserts that much.
You know, I think FUD {Fear, Uncertainty and Doubt} will probably result in me suggesting what I have done before.

Move SYS.AUD$ to a different tablespace
Create a new table called something like “CORPNAME_AUDIT” in a suitable schema, partitioned by date
Pull all the AUD$ data into the new table (which has a couple of indexes on it) on a regular basis
Trim the AUD$ table on a regular but less frequent basis
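The steps above could be sketched like this {all names, partition boundaries and retention periods are invented, and the first step – the tablespace move – is a one-off covered earlier}:

```sql
-- A date-partitioned copy table, created empty
CREATE TABLE corpname_audit
PARTITION BY RANGE (timestamp#)
  (PARTITION p2010_01 VALUES LESS THAN (TO_DATE('2010-02-01','YYYY-MM-DD')),
   PARTITION p2010_02 VALUES LESS THAN (TO_DATE('2010-03-01','YYYY-MM-DD')))
AS SELECT * FROM sys.aud$ WHERE 1 = 0;

-- Regularly: pull over the captured data
INSERT INTO corpname_audit
SELECT * FROM sys.aud$ WHERE timestamp# < TRUNC(SYSDATE);
COMMIT;

-- Less frequently: trim AUD$ itself
DELETE FROM sys.aud$ WHERE timestamp# < TRUNC(SYSDATE) - 31;
COMMIT;
```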

If you do decide to partition AUD$ Bernard, I’d love to know how you get on.

BTW “Block checksum of system tablespace”. Can you expand, I don’t know what you mean by this.

I had the same problem last year, when my SYSAUX tablespace grew from 3 GB to 30 GB in just one day!!!

In my case, I collected histograms on a few big partitioned tables (each was over 50 GB). Later I tried to remove unnecessary data from WRH$ and other tables with info about histograms, so I could resize the SYSAUX datafile, but I got an error as there are some objects which belong to the XDB component. So I needed to clean up everything (and to relocate the XDB component) to be able to resize the SYSAUX datafile …

Collecting histogram data could have bloated the SYSTEM tablespace (you might want to check it if you have not already), whereas gathering stats would probably have bloated the SYSAUX tablespace, especially if you gathered stats several times on the same object. Each time you gather stats, Oracle automatically puts the old values into some of the WRI$ tables – WRI$_OPTSTAT_* to be more accurate. I thought I’d blogged about them but I think I’ve only presented on them. I’ll add it to my list of “blogs to write”!
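For the WRI$_OPTSTAT_* history specifically, DBMS_STATS provides a supported way to shorten the retention and purge old versions {the 7-day figure is just an example}:

```sql
-- See how far back the saved statistics history currently goes
SELECT dbms_stats.get_stats_history_availability FROM dual;

-- Keep only a week of history, then purge anything older
EXEC dbms_stats.alter_stats_history_retention(7);
EXEC dbms_stats.purge_stats(SYSDATE - 7);
```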

We had this issue when the system tablespace got full. We had raw devices and could not add new raw devices right away, so this definitely helped because we needed to clear up some space in the SYSTEM tablespace.

At a guess I would say you need to either add a new datafile to the SYSTEM tablespace as the current one (or ones) are all at maximum size, or the datafiles are not set to autoextend and you need to make them autoextend. Check out the admin guide about datafiles; it will tell you everything you need to know.

Being more helpful, if you look at the view text for dba_source you can see that it is based on SYS.SOURCE$. I think Saroj that you might have a lot of old versions of code in your UAT system or just a huge amount of PL/SQL – but that is an enormous volume. Maybe as an alternative reason you have had some process that has bloated the PL/SQL store. Here is the view text: