The issue with the query above is that I lose precision. For instance, this query against the table above will be 1 second off, and the error grows with more values, to the point that I actually lose tens of minutes on a large dataset.
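To illustrate the precision loss (with made-up values, not the actual table from the question): truncating each duration to whole seconds before averaging discards every fractional second, while dividing only once at the end keeps them. A minimal Python sketch:

```python
# Hypothetical durations in milliseconds (invented for illustration).
durations_ms = [1500, 2500, 3500, 4500]

# Per-row truncation to whole seconds before averaging
# (roughly what averaging DATEDIFF(second, ...) values does):
avg_truncated = sum(d // 1000 for d in durations_ms) / len(durations_ms)

# Exact average: sum the raw values, divide only once at the end.
avg_exact = sum(durations_ms) / len(durations_ms) / 1000

print(avg_truncated)  # 2.5
print(avg_exact)      # 3.0 -- half a second lost to truncation
```

Even with four rows the truncated average is off by half a second; the error accumulates with row count.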

I was thinking of replacing it with one of two alternatives, but I'm not sure which one is more efficient to use.

1 Answer

Between your two queries, there really isn't much difference. Integer maths is marginally faster than floating point, so you could prefer the latter if reporting in SECONDS is acceptable.

Casting from int (int32) to bigint isn't as expensive as you might imagine, so yes, that also weighs in favour of the latter.
– RichardTheKiwi, Sep 30 '12 at 3:24

Regarding your comment "reporting by SECONDS is acceptable": it is not merely "acceptable", it is a necessity! I lose a lot of precision if I don't do it. Here's an example: stackoverflow.com/questions/12657680/…
– c00000fd, Sep 30 '12 at 3:25

You can always do a SINGLE division at the very end against the SUM(bigint)...
– RichardTheKiwi, Sep 30 '12 at 3:26
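The sum-first, divide-once approach can be sketched in Python (the sample values are invented; Python's arbitrary-precision int stands in for SQL's bigint accumulator):

```python
# Hypothetical durations in milliseconds.
durations_ms = [1500, 2500, 3500, 4500]

# Accumulate the raw millisecond values first
# (in SQL this would be SUM over values cast to bigint),
# then perform the single division at the very end.
total_ms = sum(durations_ms)
avg_seconds = total_ms / len(durations_ms) / 1000.0

print(avg_seconds)  # 3.0 -- no per-row truncation error

# Why a bigint accumulator matters: an int32 total of milliseconds
# overflows after only about 24 days' worth of duration.
INT32_MAX = 2**31 - 1
print(INT32_MAX // (1000 * 60 * 60 * 24))  # 24 (days)
```

This keeps full millisecond precision because truncation can only happen once, at the final division, rather than once per row.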

Yes, that's what the latter example will do behind the scenes, in C# code (not SQL).
– c00000fd, Sep 30 '12 at 3:27