According to the docs (no, I have not tested this!), it's expected behaviour that a query against a read-only mirror server will be rolled back when it interferes with transactions being applied from the primary server. To cite:

Queries that are executed against the mirror database can place locks, depending on the isolation level specified. If locks interfere with operations being applied from the primary server, then the connections holding the locks have their transactions rolled back and any open cursors for those connections are closed. Applications running at isolation level 0 do not add row locks, but still acquire schema locks. If the schema locks interfere with operations being applied from the primary server, the transaction on the mirror database is rolled back.

So it seems that the internal connection applying transactions from the primary always has higher priority. That's the Watcom way of how it should be, methinks.

And I guess the same doc page also names the remedy - snapshot isolation:

Applications that require a consistent view of the database (and so cannot use isolation level 0) should consider using snapshot isolation. To do so, the allow_snapshot_isolation option must be set to On. This option takes effect on both the primary server and the mirror server, so the costs associated with snapshot isolation need to be considered.
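To make the quoted remedy concrete, here is a minimal sketch of how the option and isolation level might be set in SQL Anywhere. The option and isolation-level names come from the quoted docs; the table name is hypothetical, and the exact syntax should be checked against your server version:

```sql
-- Enable snapshot isolation database-wide. Per the docs, this takes
-- effect on both the primary and the mirror, so the version-store
-- overhead applies on both servers.
SET OPTION PUBLIC.allow_snapshot_isolation = 'On';

-- On the read-only connection to the mirror, request a snapshot view
-- instead of acquiring row locks:
SET TEMPORARY OPTION isolation_level = 'snapshot';

-- Reads now see a consistent point-in-time view without holding locks
-- that could collide with the recovery thread ('orders' is hypothetical):
SELECT COUNT(*) FROM orders;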

You are correct, Volker. The idea is that we don't want to block the recovery thread from doing its work on the mirror (or copy node): if we allowed recovery to block, that could adversely affect the time it takes to fail over when the primary goes down (and we want failover to be as fast as possible).

Breck: The exception handler does not fire for the same reason stated above - we don't want to delay the recovery thread in any way.

Having said the above, we are not sure why a DEADLOCK error is being thrown (though I may answer that later in this very paragraph). What really happens is that the recovery thread sends a CANCEL to the connection holding the lock (the lock that the recovery thread needs to get past), and a ROLLBACK is queued on that same connection so the locks are released. FWIW, the same sequence occurs when a thread deadlock is detected, so this is likely why the error is being reported as a DEADLOCK. We should probably change the server to throw a different error in the case where it is the recovery thread that is cancelling the connection's transaction.
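Given the behaviour described above, a read-only application on the mirror has to assume its transaction can be rolled back at any moment and simply retry. A minimal Watcom SQL sketch of such a retry wrapper follows; the procedure and table names are hypothetical, and since the server currently reports the cancellation as a deadlock, the sketch catches SQLSTATE '40001' (the deadlock state) on the assumption that this is what the rolled-back reader sees:

```sql
-- Hypothetical retry wrapper for a mirror-side report query.
CREATE PROCEDURE report_with_retry()
BEGIN
    -- Assumption: the recovery-thread cancellation surfaces as a
    -- deadlock, i.e. SQLSTATE '40001'.
    DECLARE recovery_rollback EXCEPTION FOR SQLSTATE '40001';
    DECLARE row_total INTEGER;
    retry: LOOP
        BEGIN
            -- Hypothetical read-only work on the mirror.
            SELECT COUNT(*) INTO row_total FROM orders;
            LEAVE retry;
        EXCEPTION
            WHEN recovery_rollback THEN
                -- Our transaction was rolled back so the recovery
                -- thread could proceed; pause briefly and try again.
                WAITFOR DELAY '00:00:01';
        END;
    END LOOP retry;
END;
```

Note that there is nothing to clean up on the application side: the server has already rolled the transaction back and closed any open cursors, so the handler only needs to decide when to retry.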

In your particular case the prefetch operation was simply the 'unlucky' operation in flight when your connection was cancelled. Disabling prefetch will have no effect; the cancel will simply be noticed and reported by some other operation.

The screenshot actually shows what happens when SET TEMPORARY OPTION PREFETCH = 'OFF' is in effect... I'm guessing the description of SQLCODE -684 needs work :)

IMO the intentions are noble, but the effects on the read-only application are too extreme. It's one thing to ram through the recovery operations; it's another to suppress the TYPE DEADLOCK event. What possible motivation can there be for preventing an independent connection from running? It is just a diagnostic tool, and it cannot have any bad effect on the recovery process.

I'm also not convinced the EXCEPTION should be suppressed. Why can't it be fired after the deadlock has been processed and the offending transaction rolled back?