Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
> Brandon Craig Rhodes <brandon(at)oit(dot)gatech(dot)edu> writes:
> > The `working' test case omits the
> > AND (account, policy) NOT IN
> > (SELECT account, policy FROM policy_accounts_active)
> > condition from the end of executor_active, which magically makes the
> > executor_hamlet rule start firing as it should.
>
> I don't think this is a bug. The executor_hamlet rule fires after the
> executor_active rule does; therefore the (account, policy) pair *has
> already been inserted into policy_accounts_active*, and will be found
> when executor_hamlet re-executes the select to look for it.
This was, for me, neither intuitive nor clear from the documentation; I
had assumed that the NEW relation of a rule always contained exactly the
rows that were inserted, updated, or deleted by the query that wound up
invoking the rule.
If I understand your assertion, NEW does *not* in fact refer strictly
to the rows that (in this case) were INSERTed by the upstream query;
rather, a reference to NEW causes a re-invocation - a copy or
re-execution - of the query that produced the NEW rows in the first
place. So if the query that produced the NEW rows has side effects
which affect its own outcome when it is re-evaluated, then the rule
will not be working on the same rows at all?
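A minimal sketch of the behavior in question - the table definitions and the
hamlet_log target are hypothetical stand-ins, not the schema from our actual
report, but the rule names match the ones discussed above:

```sql
-- Hypothetical tables standing in for our real schema.
CREATE TABLE accounts (account integer, policy integer);
CREATE TABLE policy_accounts_active (account integer, policy integer);
CREATE TABLE hamlet_log (account integer, policy integer);

-- First rule: copy each inserted pair into policy_accounts_active.
CREATE RULE executor_active AS ON INSERT TO accounts
    DO ALSO INSERT INTO policy_accounts_active
        VALUES (NEW.account, NEW.policy);

-- Second rule: fires after executor_active. Its NEW reference is not a
-- snapshot of the inserted rows; it re-executes the original query, so
-- the NOT IN subselect now sees the rows the first rule just inserted
-- into policy_accounts_active, and the condition filters them out.
CREATE RULE executor_hamlet AS ON INSERT TO accounts
    WHERE (NEW.account, NEW.policy) NOT IN
          (SELECT account, policy FROM policy_accounts_active)
    DO ALSO INSERT INTO hamlet_log VALUES (NEW.account, NEW.policy);
```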
This is also troubling because it may mean that rules are less
efficient than I had imagined. If rules following (say) an INSERT
used as NEW exactly the same set of rows that were inserted, then
PostgreSQL could get away with executing that query only once and
using the resulting rows for every rule fired by the INSERT. But if
the original query is entirely re-executed for each rule, with the
possibility of a different outcome each time, then several rules
cannot share the results of the original query, which vastly reduces
the efficiency of several rules cascaded from one another.
Forgive me for complaining about something that has probably been a
fundamental part of the design since the first Postgres, but in my
misunderstanding I had been looking forward to a cascade of rules
cleanly and efficiently sharing the same set of, say, one thousand
INSERTed rows as they propagated those results through our tables.
> My advice to you is to use triggers, not rules, for pushing data
> from one table to another; especially when you need logic this
> complex to decide what to do.
We had hoped, by cascading about a dozen rules through tables of this
sort, to allow PostgreSQL to optimize our entire dataflow as a whole
rather than making dozens or hundreds of little trigger invocations
when sets of accounts are created or deleted.
Thanks so much for spending some time helping us to think straight. :-)
--
Brandon Craig Rhodes http://www.rhodesmill.org/brandon
Georgia Tech brandon(at)oit(dot)gatech(dot)edu