Some noncompliant HTTP servers close the client connection before reading
all MIME headers!

DNS "negative caching" timeouts were too long: a transient lookup failure
stayed cached, so users reported that DNS lookups worked fine until Harvest
was used.
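The fix is to bound how long a failed lookup is remembered. A minimal sketch of such a negative cache with a short TTL (class name, TTL value, and the injectable clock are illustrative assumptions, not Harvest's actual code):

```python
import time

class NegativeDNSCache:
    """Remember failed lookups only briefly, so a transient DNS
    failure does not make a name appear dead for a long time."""

    def __init__(self, ttl_seconds=30.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._failures = {}         # hostname -> expiry time

    def record_failure(self, hostname):
        self._failures[hostname] = self.clock() + self.ttl

    def is_known_bad(self, hostname):
        expiry = self._failures.get(hostname)
        if expiry is None:
            return False
        if self.clock() >= expiry:
            del self._failures[hostname]   # expired; allow a retry
            return False
        return True
```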

Browser-specific, dynamically generated Web pages hurt hit rates and
require MIME headers to be included in the comparison for correctness.
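Concretely, the cache key must cover not just the URL but also the request headers that can select a browser-specific variant. A minimal sketch, where the particular header list is an illustrative assumption (a modern HTTP/1.1 proxy would instead derive it from the server's Vary response header):

```python
import hashlib

# Request headers that may change which entity the server returns
# for the same URL (hypothetical subset, for illustration).
VARIANT_HEADERS = ("user-agent", "accept", "accept-language")

def cache_key(url, request_headers):
    """Build a cache key from the URL plus variant-selecting headers,
    so distinct browser-specific variants never collide in the cache.
    Assumes request_headers uses lowercase header names."""
    parts = [url]
    for name in VARIANT_HEADERS:
        parts.append(f"{name}:{request_headers.get(name, '')}")
    return hashlib.sha1("\n".join(parts).encode("utf-8")).hexdigest()
```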

Client and server implementation differences, noncompliance with standards,
and vendor interoperability problems in general have forced tradeoffs among
efficiency/performance, design cleanliness, and operational transparency.

Keeping metadata in memory and limiting the VM image size to avoid
page faults was an important win.
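One way to keep the metadata resident while bounding the process's memory footprint is to cap the number of in-memory entries and evict the least recently used ones. A minimal sketch under that assumption (class name and the cap are illustrative, not Harvest's actual structure):

```python
from collections import OrderedDict

class MetadataIndex:
    """Hold per-object metadata entirely in memory, capped at a fixed
    number of entries so the resident set stays small enough that
    lookups do not take page faults."""

    def __init__(self, max_entries=100_000):
        self.max_entries = max_entries
        self._entries = OrderedDict()   # key -> metadata dict, LRU order

    def put(self, key, metadata):
        if key in self._entries:
            self._entries.move_to_end(key)
        self._entries[key] = metadata
        while len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)   # evict least recently used

    def get(self, key):
        meta = self._entries.get(key)
        if meta is not None:
            self._entries.move_to_end(key)      # mark as recently used
        return meta
```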

Monolithic filesystems are the wrong model for the evolving Internet: the
feature set is overkill for many applications, implementations are complex
and nonmodular, and vendor interoperability is more difficult since components
are "larger" and more tightly coupled to the rest of the OS.