There are two reasons you can't rely on TCP checksums. The first is that the TCP checksum is weak: it is only a 16-bit ones'-complement sum, so it can easily fail to detect errors in packets (offsetting bit flips or swapped 16-bit words, for example, pass unnoticed). This means a packet can be corrupted somewhere between the sender and the receiver without the receiver ever noticing. If you build a large system, this is almost guaranteed to happen occasionally. The second problem is that the TCP checksum happens too late, and only protects the contents of a single packet while it is in flight, which is not sufficient. This is the classic end-to-end argument. If the corruption happens before your data reaches TCP, because of a memory or software error, for example, the TCP checksum can't help. If the corruption happens while reassembling a message from multiple packets, again TCP is no help. Intelligent use of higher-level checksums, however, can detect both kinds of error.
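A minimal sketch of an end-to-end checksum, assuming hypothetical `frame_message` and `verify_message` helpers: the sender computes a digest over the entire message before it is handed to TCP, and the receiver verifies it after reassembly, so corruption anywhere along the path, including in buffers and during reassembly, is caught.

```python
import hashlib

def frame_message(payload: bytes) -> bytes:
    """Append a SHA-256 digest computed over the whole message.

    Because the digest is computed at the application layer, it covers
    the data end to end, not just one packet on one hop.
    """
    return payload + hashlib.sha256(payload).digest()

def verify_message(framed: bytes) -> bytes:
    """Recompute the digest after reassembly; raise if it does not match."""
    payload, digest = framed[:-32], framed[-32:]
    if hashlib.sha256(payload).digest() != digest:
        raise ValueError("end-to-end checksum mismatch")
    return payload
```

A strong hash like SHA-256 is overkill for random bit flips (a CRC would do), but it is cheap enough for most message sizes and also catches larger-scale corruption such as truncation or reordering of fragments.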

Amazon's problem was likely more insidious than simply relying on TCP checksums, but it still provides a good example. In their failure analysis, they mention that they are going to add checksums to system state messages. Google uses in-memory checksums to protect against software bugs in their Paxos implementation, which is part of their Chubby distributed lock service. In conclusion, if you have state that is critical for a system's correctness, it needs some form of redundancy in order to detect and recover from errors.
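The in-memory technique can be sketched like this. This is not Google's actual implementation, just an illustration using a hypothetical `CheckedState` wrapper: critical state is stored alongside a checksum, and every read re-verifies it, so a stray write or software bug that silently corrupts the state is detected instead of being acted upon.

```python
import pickle
import zlib

class CheckedState:
    """Hold critical state together with a CRC32 of its serialized form.

    Any in-memory corruption of the stored bytes is detected on the next
    read, rather than silently propagating into the system's decisions.
    """

    def __init__(self, state: object) -> None:
        self._blob = pickle.dumps(state)
        self._crc = zlib.crc32(self._blob)

    def get(self) -> object:
        # Re-verify on every access; fail loudly rather than use bad state.
        if zlib.crc32(self._blob) != self._crc:
            raise RuntimeError("critical state corrupted in memory")
        return pickle.loads(self._blob)
```

Detection alone is only half of the conclusion above; recovery still requires redundancy, such as a replica or a durable log from which the corrupted state can be rebuilt.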