It totally depends on the volume of reads and writes on the table, how it is being used, the size of the database, the performance of the hardware, etc. If the database is read-heavy, a low percentage of free space means fewer pages to read when full table scans are performed; if it is write-heavy, plenty of free space means fewer page splits, so writes can be quicker.

It's almost a case of "suck it and see": you have to monitor and adjust as required depending on how your database is used.

For small, low-use databases, 90% is a good starting point, which is why the SQL Server 2000 maintenance plans defaulted to it.

It also depends on the updates. Updates by themselves don't force page splits; only updates that increase the amount of data beyond what fits on the page do.

If my tables contain integers, dates, or fixed-length CHAR columns, I can update all day long without causing page splits, because the data size doesn't change. If I insert a single character into a varchar column and later update it to 1,000 characters, I may well be causing splits.
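To illustrate, here is a rough sketch of that scenario. The table and column names are hypothetical, and leaf_allocation_count in sys.dm_db_index_operational_stats is only an approximation of leaf-level page splits (it counts page allocations, which include splits):

```sql
-- Hypothetical demo table: rows start with a 1-character value,
-- then grow to 1,000 characters.
CREATE TABLE dbo.SplitDemo (
    Id    INT IDENTITY PRIMARY KEY CLUSTERED,
    Notes VARCHAR(1000)
);

INSERT INTO dbo.SplitDemo (Notes)
SELECT 'x'              -- tiny value: rows pack tightly onto each page
FROM sys.objects;       -- just a convenient row source

-- Growing every row to 1,000 characters can overflow pages and force splits.
UPDATE dbo.SplitDemo
SET Notes = REPLICATE('y', 1000);

-- leaf_allocation_count gives a rough view of leaf-page allocations (splits).
SELECT leaf_allocation_count
FROM sys.dm_db_index_operational_stats(
         DB_ID(), OBJECT_ID('dbo.SplitDemo'), NULL, NULL);
```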

You can rebuild the indexes on the table, then check the table fragmentation regularly (daily?) with sys.dm_db_index_physical_stats. If the table becomes fragmented quickly after reindexing, you should probably lower the fill factor to leave more space for updates.
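A daily check along those lines might look like the following (dbo.YourTable is a placeholder; LIMITED mode is the cheapest scan level and is enough for fragmentation figures):

```sql
-- Check fragmentation of one table's indexes.
SELECT i.name,
       ips.index_id,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```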

If your identity primary key is clustered (the default), all inserts occur at the end of the table and cannot cause fragmentation. It would only be caused by updates that increase the size of variable-length columns (varchar, varbinary).

Keep in mind that fill factor only takes effect when an index is created or rebuilt. If you don't have a maintenance plan that periodically rebuilds indexes, it doesn't matter what the fill factor setting is.
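For example, a rebuild that applies a new fill factor (table name is a placeholder; the FILLFACTOR value specified here is stored in the index metadata and reused by subsequent rebuilds):

```sql
-- Rebuild all indexes on the table with an explicit fill factor.
ALTER INDEX ALL ON dbo.YourTable
REBUILD WITH (FILLFACTOR = 80);
```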

Hi. Kindly let me know whether my understanding is incorrect. A word regarding the environment: the database is only a few GB, and a daily reindex process is run.

If we have an identity column as the primary key (clustered), all inserts happen at the end of the last inserted data and index. Of course, I admit we are not doing forceful insertion (IDENTITY_INSERT is off). So, if frequent inserts happen, they will continue from the last index page. (We can leave the update process aside, as it will not make any changes to the pages.) Hence, if we keep the fill factor at 75 or 80, we are reserving extra free space on every index page unnecessarily (it is never used) and wasting space (I admit space is not a constraint right now). Let the page splits happen, so that there is enough free space for further inserts.

What do you need further suggestions on? It sounds like you have a pretty good handle on things...

The one suggestion I can make (which was made by one of the indexing and storage gurus on here, Paul Randal) is to figure out whether you really need to reindex daily. His point was that most databases see no advantage in something that frequent, and that the incurred overhead, wasted log space, etc., are not worth it. Just like hard drives, some amount of fragmentation is assumed, and the system doesn't suffer until you get into lots and lots of fragmentation. Unless you're on a system where a substantial portion of a table is changed each day, there's really no rationale for rebuilding each day. At best, perhaps look at being more selective, and rebuild the one or two indexes in a database that get changed/fragmented a lot.


Matt Miller (2/15/2008) At best -perhaps look at being more selective, and rebuild the one or two indexes in a database that get changed/fragmented a lot.

Absolutely true! Do not re-index the entire database blindly; target only those indexes that need to be rebuilt, based on a pre-determined threshold. Otherwise you will be placing an unnecessary amount of load on your system.

There is no hard and fast rule on threshold values. In SQL 2000 environments I have been targeting indexes with under 40% scan density and over 10% logical fragmentation (from DBCC SHOWCONTIG). In SQL 2005, the measures of interest are avg_fragmentation_in_percent and avg_page_space_used_in_percent from the sys.dm_db_index_physical_stats dynamic management function.
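A selective rebuild on SQL 2005 can be scripted along these lines. The 30% fragmentation threshold and 1,000-page floor are illustrative starting points, not fixed rules:

```sql
-- Rebuild only indexes above a fragmentation threshold (here 30%).
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
             + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id))
             + N'.' + QUOTENAME(OBJECT_NAME(ips.object_id))
             + N' REBUILD;' + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000      -- skip tiny indexes: not worth the churn
  AND i.name IS NOT NULL;        -- skip heaps

EXEC sys.sp_executesql @sql;
```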

To get a feel for whether a 90% fill factor is adequate for your index, you can monitor sys.dm_db_index_physical_stats.avg_page_space_used_in_percent over a period of time. If you find that it quickly and significantly deviates from the 90%-space-used value originally enforced by the fill factor setting, then the 90% fill factor is probably too high.
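One way to take that measurement; note that avg_page_space_used_in_percent is only populated in SAMPLED or DETAILED mode, not LIMITED (table name is a placeholder):

```sql
-- Page density requires SAMPLED or DETAILED mode.
SELECT index_id,
       avg_page_space_used_in_percent,
       avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'SAMPLED');
```

Capturing this query's output into a logging table on a schedule would give the trend over time that the post describes.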