In response to: Found prove that SQL Server is the Devils Engine - ERROR 666
http://www.insidesql.org/blogs/tgrohser/2011/08/05/found-prove-that-sql-server#c1185
2011-08-09T05:53:17Z admin
Brilliant! I like that developer's humour... :-)

In response to: increasing the performance of count(*)
http://www.insidesql.org/blogs/tgrohser/2011/01/26/maximum-count-performance#c1161
2011-02-24T18:25:46Z Christoph Ingenhaag
An indexed view is another choice.
With 1,100,000 rows in MyTable, a select makes 957 logical reads on my system using the IX_ID index.
A select on MyView (code follows) makes 2 logical reads:
create view dbo.MyView
with schemabinding
as
select
    count_big(*) as cnt
from dbo.MyTable
go
create unique clustered index cuidx on dbo.MyView(cnt)
go
select cnt from dbo.MyView with (noexpand)
It is interesting that the noexpand hint is necessary with more than 1,000,000 rows in MyTable on my system... (with Express Edition you need this hint anyway)
And the inserts are faster without the IX_ID index. The update of the indexed view costs almost nothing. To check this I used the numbers function from Steve Kass (http://stevekass.com/2006/06/03/how-to-generate-a-sequence-on-the-fly/) and this statement:
insert into MyTable(Payload)
select replicate('ABC', 100)
from dbo.numbers(1, 100000)
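To compare the two access paths yourself, here is a sketch using SET STATISTICS IO against the MyTable/MyView objects defined above (the exact read counts are of course specific to my system):

```sql
set statistics io on
go
-- counts by scanning an index on the base table (957 logical reads here)
select count_big(*) from dbo.MyTable
go
-- reads the single pre-aggregated row from the indexed view (2 logical reads)
select cnt from dbo.MyView with (noexpand)
go
set statistics io off
```

The "logical reads" figure printed on the Messages tab is the number of 8 KB pages touched in the buffer pool, which is what makes the difference between the two plans visible.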
Please check the plan. Maybe I have overlooked something.

In response to: A little bit more information on multi location backups
http://www.insidesql.org/blogs/tgrohser/2011/01/26/a-little-bit-more-information-on-multi-location-backups#c1150
2011-01-28T09:59:41Z tgrohser
Log shipping is a great way to do it. Unfortunately, with the built-in log shipping you can't use multiple locations, but you can write your own log shipping to do so.

In response to: A little bit more information on multi location backups
http://www.insidesql.org/blogs/tgrohser/2011/01/26/a-little-bit-more-information-on-multi-location-backups#c1149
2011-01-28T09:02:38Z cmu
Good idea! Additionally, you can set up log shipping to prove the consistency of your log backups.

In response to: increasing the performance of count(*)
http://www.insidesql.org/blogs/tgrohser/2011/01/26/maximum-count-performance#c1145
2011-01-26T19:33:36Z admin
LOL.
Agreed, at those table dimensions I guess it is really not worth departing from the "standard" way of doing things. :-)

In response to: increasing the performance of count(*)
http://www.insidesql.org/blogs/tgrohser/2011/01/26/maximum-count-performance#c1143
2011-01-26T14:53:15Z tgrohser
The actual problem was at a size of about 0 to 2000 rows, so not a huge table, and the exact count was not 100% relevant. We thought about querying the system tables too, but we found that for a table of this size the actual work for SQL Server was smaller when letting it count than when finding the right object and the corresponding partitions and then reading the result.
Sure, there is overhead for the extra index, but the counting was done much more often than the inserts.
For large tables I totally agree the system tables are the much better way to go.

In response to: increasing the performance of count(*)
http://www.insidesql.org/blogs/tgrohser/2011/01/26/maximum-count-performance#c1142
2011-01-26T13:22:49Z admin
If I were to perform a COUNT(*) constantly on a large table, I would maybe revise this strategy and question the requirement altogether. Even with a tailored index just to support that query, the actual work still has to be carried out by SQL Server, and I wouldn't be surprised if the query still ran like a dog.
However, if accuracy of the COUNT(*) isn't important at all, say for example if you want to use it for some kind of paging, or to track growth over time, or other cases where you can live with a more or less "good approximation", it might be an option to get the row count from the system tables such as sys.partitions.
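A minimal sketch of that approximate count from sys.partitions (dbo.MyTable stands in for whatever table you care about; index_id 0 is the heap and 1 is the clustered index, so only those rows are summed to avoid double counting nonclustered indexes):

```sql
select sum(p.rows) as approx_row_count
from sys.partitions as p
where p.object_id = object_id(N'dbo.MyTable')
  and p.index_id in (0, 1)  -- heap or clustered index only
```

Unlike COUNT(*), this reads only catalog metadata, so it returns almost instantly even on very large tables; the trade-off is that the reported value can lag behind the true row count.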
Of course, with the usual caveat that system tables can change over time...

In response to: Hello World
http://www.insidesql.org/blogs/tgrohser/2011/01/24/hello-world#c1138
2011-01-24T12:04:19Z admin
Hi Thomas,
a warm welcome from me again as well. Good to have you here!
--
Cheers,
Frank