Sean Chittenden <sean(at)chittenden(dot)org> writes:
> Now, there are some obvious problems:
You missed the real reason why this will never happen: it completely
kills any prospect of concurrent updates. If transaction A has issued
an update on some row, and gone and modified the relevant aggregate
cache entries, what happens when transaction B wants to update another
row? It has to wait until A either commits or aborts, so that it knows
whether to believe A's changes to the aggregate cache entries.
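To make the blocking concrete, here is a minimal sketch (all names invented
for illustration) of why B cannot proceed: the value B should build on
depends entirely on whether A's uncommitted delta survives.

```python
# Hypothetical aggregate cache entry holding SUM(col) for some group.
cache_sum = 100          # last committed aggregate value
a_delta = +5             # transaction A's uncommitted change to the entry
b_delta = -3             # transaction B's intended change

# B cannot know which base to use until A resolves:
if_a_commits = (cache_sum + a_delta) + b_delta   # A's change counts
if_a_aborts = cache_sum + b_delta                # A's change vanishes

# Two different answers, so B has no choice but to block on A.
print(if_a_commits, if_a_aborts)
```

The ambiguity above is exactly the wait-for-commit dependency: any scheme
that lets B apply its delta eagerly must later be able to reconcile both
outcomes.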
For some aggregates you could imagine an 'undo' operator to allow
A's updates to be retroactively removed even after B has applied its
changes. But that doesn't work very well in general. And in any case,
you'd have to provide serialization interlocks on physical access to
each of the aggregate cache entries. That bottleneck applied to every
update would be likely to negate any possible benefit from using the
cached values.
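A rough sketch of the 'undo' idea, with hypothetical function names: for an
invertible aggregate like SUM, an aborted transaction's contribution can be
subtracted back out even after later updates, but an aggregate like MAX has
no inverse, so undoing it forces a rescan.

```python
# SUM is invertible: the forward transition has an exact inverse.
def apply_sum(cache, value):
    return cache + value

def undo_sum(cache, value):
    return cache - value          # subtraction undoes addition

cache = apply_sum(100, 5)         # A's update
cache = apply_sum(cache, -3)      # B's update, applied on top of A's
cache = undo_sum(cache, 5)        # A aborts: retroactively remove its delta
assert cache == 100 - 3           # correct, despite out-of-order removal

# MAX is not invertible: if the cached maximum came from A's aborted
# change, the new maximum of the remaining rows is unknown without
# rescanning them all.
remaining_rows = [10, 7]
cache_max = max(remaining_rows + [42])   # 42 was A's aborted contribution
recomputed = max(remaining_rows)         # only recoverable by full rescan
```

This is why the post says undo "doesn't work very well in general": it
covers SUM/COUNT-style aggregates but not MIN/MAX, and even where it works,
each cache entry still needs a serialization interlock.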
regards, tom lane