It turns out there was simply not enough space on the destination. The copy was going to take 20GB, and after the crash there was still 1GB free on the destination, so I had assumed this was a protocol bug rather than the disk running out of space.
Perhaps this is a situation where the remote end should have sent back an indication that it was out of space and then shut down gracefully.
But there is another problem here: the two ends should start out by negotiating whether there is enough space for the copy. In a better world, the destination OS would let the rsync process atomically reserve disk space up front for the files and folders it creates, and if that reservation failed, the remote rsync would tell the client-side rsync no dice. Or how about this: writing the destination files could be transactional in the OS file system! Nah.
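Short of OS-level transactions, a rough approximation of both ideas is possible on the destination side: a free-space check before the transfer, and a best-effort per-file reservation via preallocation. This is only a sketch of the idea, not anything rsync actually does; the function names and the 5% margin are invented for illustration, and the check is inherently racy (another process can eat the space afterwards):

```python
import os
import shutil

def preflight_space_check(dest_dir, required_bytes, margin=0.05):
    """Return True if dest_dir's filesystem currently has enough free
    space, with a small safety margin for metadata overhead. This is
    only a heuristic: the space can vanish between check and use."""
    free = shutil.disk_usage(dest_dir).free
    return free >= required_bytes * (1 + margin)

def reserve_file_space(path, size):
    """Best-effort reservation: preallocate the file's blocks up front
    so an out-of-space error surfaces before any data is transferred.
    posix_fallocate is POSIX-only and not supported by every
    filesystem, so callers must still handle write-time ENOSPC."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        os.posix_fallocate(fd, 0, size)
    finally:
        os.close(fd)
```

Even with both checks in place, the write path still needs graceful error reporting, since neither call can truly pin the space the way an atomic OS-level reservation would.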

Yes, the error reporting for some failures can indeed be lacking. However, the pipelined nature of the protocol can make this hard to overcome: the error can be queued behind so much checksum data that it doesn't make it back before the connection gets torn down. In 3.1.0, I have a new option, --msgs2stderr, that can often be used to debug such situations (for non-daemon transfers).
It would be good to investigate a reliable way to drain (and discard) the pending data so that all the relevant messages get through. For instance, a new "fatal exit in progress" message could be sent and circle the 3 processes before the connection is torn down. E.g. a write error on the receiver would send the error message (text) to the generator, then send the fatal message, and then just discard file-change data until the fatal message comes back around from the sender.
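The circulating-token idea above can be modeled with a toy ring of three "processes" connected by queues. Everything here is invented for the sketch (the token name, the message tuples, threads standing in for processes); the real rsync processes and wire protocol are of course different. The point is just that the error text is guaranteed to have passed through every peer before the erroring end tears anything down:

```python
import queue
import threading

FATAL = "FATAL_EXIT_IN_PROGRESS"  # hypothetical control token

def drain_after_error(inbox, outbox, error_text):
    """Model of the receiver after a write error: forward the error
    text and the fatal token, then discard ordinary file-change data
    until the token circles back, proving the error was delivered."""
    outbox.put(("error", error_text))
    outbox.put(("ctrl", FATAL))
    while True:
        kind, payload = inbox.get()
        if kind == "ctrl" and payload == FATAL:
            return  # safe to tear down the connection now
        # ordinary data is silently discarded while draining

def forwarder(inbox, outbox, seen):
    """Model of the generator/sender: pass everything along, recording
    any error text so it reaches the user before shutdown."""
    while True:
        kind, payload = inbox.get()
        if kind == "error":
            seen.append(payload)
        outbox.put((kind, payload))
        if kind == "ctrl" and payload == FATAL:
            return

def simulate(error_text):
    """Wire receiver -> generator -> sender -> receiver into a ring
    and inject a write error on the receiver."""
    r2g, g2s, s2r = queue.Queue(), queue.Queue(), queue.Queue()
    seen = []
    # stale file-change data still in flight toward the receiver
    s2r.put(("data", b"checksum block"))
    gen = threading.Thread(target=forwarder, args=(r2g, g2s, seen))
    snd = threading.Thread(target=forwarder, args=(g2s, s2r, seen))
    gen.start()
    snd.start()
    drain_after_error(s2r, r2g, error_text)
    gen.join()
    snd.join()
    return seen  # error text was seen by both intermediate peers
```

In the model, the receiver keeps reading (and discarding) stale data instead of closing the socket, so the error message can never be lost behind queued checksum traffic; the token's return is the signal that every peer has processed it.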