Wouldn't a much simpler design be two substantial Coda clients each running a web server?
The web pages would then be a shared pool stored on N Coda servers -- N being determined
by capacity and load. You could experiment with singly-replicated Coda servers to decide if it
works well -- server replication is needed only to provide additional resiliency to failures. Note
that the Coda clients (i.e., the web servers) can handle brief periods of disconnection (a few minutes to
tens of minutes, perhaps) without any server replication.
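To make the shared-pool idea concrete, here is a minimal sketch of the mapping each web server would perform, assuming (hypothetically) that the shared Coda volume holding the page pool is mounted at /coda/www on every Coda client; the path is a made-up example, not a prescribed layout.

```python
import os.path

# Assumption: the shared Coda volume with the page pool is mounted at
# /coda/www on every Coda-client web server (an illustrative path only).
CODA_DOCROOT = "/coda/www"

def url_to_file(url_path: str) -> str:
    """Map a request path into the shared Coda namespace.

    Every web server runs this same mapping, so all of them serve one
    common pool of pages; Coda's client cache keeps hot files local.
    """
    rel = os.path.normpath(url_path.lstrip("/"))
    # Reject paths that climb out of the docroot after normalization.
    if rel == ".." or rel.startswith(".." + os.sep):
        raise ValueError("request path escapes the docroot")
    return os.path.join(CODA_DOCROOT, rel)
```

Because both servers resolve URLs against the same Coda namespace, adding or removing a web server requires no copying of content.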
The policy for directing web accesses to specific web servers is orthogonal to the above. You
could simply have any web server service any request. Or, you could partition the Coda namespace
statically and redirect requests to the other web server if needed. This would have the
advantage of increasing the locality seen by the cache on each Coda client. Fancier load
balancing schemes are easier to imagine, but simplicity usually wins the day.
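The static-partitioning scheme could be sketched as below; this is only an illustration, and the server names are invented. Each top-level directory of the namespace is "owned" by one server, so that server's Coda client cache sees all the locality for that subtree.

```python
from zlib import crc32

# Hypothetical pair of Coda-client web servers; names are made up.
SERVERS = ["http://www1.example.org", "http://www2.example.org"]

def owner(url_path: str) -> str:
    """Pick the owning server by hashing the top-level directory name,
    so all requests for one subtree land on the same client cache."""
    top = url_path.lstrip("/").split("/", 1)[0]
    return SERVERS[crc32(top.encode()) % len(SERVERS)]

def redirect_target(url_path: str, myself: str):
    """Return None to serve locally, or the URL of the owning server to
    redirect to (e.g. with an HTTP 302)."""
    target = owner(url_path)
    return None if target == myself else target
```

Note that any server can still answer any request correctly, since the pages are one shared pool; the redirect exists only to improve cache locality, which keeps the scheme simple and failure-tolerant.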
It is worth going back to the dawn of the Web and seeing how AFS was used very successfully at NCSA.
(See Thomas Kwan, Robert McGrath, and Daniel Reed, "NCSA's World Wide Web Server: Design and Performance," IEEE Computer, 28(11):68-74, 1995 -- pdf attached.)
Replace "AFS" with "Coda" and you have a simple starting point to explore.