How tolerant should we be when machines communicate?

HTML is a language, just like English, Klingon or ASL. It is used to represent data in a hypertext format to be interpreted by user-agents (e.g., browsers, screen readers, handhelds).

Just like most languages, HTML consists of a set of agreed-upon syntax and grammar rules. As you read these words, you've committed to rules that allow you to understand the information I'm outlining. If I were to speak like this: "Go U tody what?" it would be difficult to communicate, and what I said would be subject to many interpretations. What if I had instead said: "How are you doing today?" We could certainly agree that is a lot clearer - not to mention far less of a hassle.

Of course, human communication (e.g., verbal, body, internal) is advanced enough to be forgiving and redundant in parts. When we communicate verbally in person, for instance, we also use supplemental methods to stress, emphasise or transmit our thoughts: our hands, body posture and facial expressions. All of this combined minimises misinterpretation.

User-agents also have similar mechanisms, outlined in specifications (e.g., by the W3C), so that machines can communicate properly. But these user-agents are nowhere near as advanced. A human language (like English) can't be fully mapped onto HTML - the mapping isn't one-to-one. Human language is simply too complex to be represented within the narrower set of HTML rules. HTML essentially covers a subset (the most common uses) of language, in order to represent information as a Web document.

This brings me to my final thought: if HTML can't represent our thoughts as accurately as we would ideally like, and is subject to natural human errors (both syntactic and grammatical), how forgiving can and should we be towards the user-agents?

The point I'm trying to make is that humans do a lot of error handling when they communicate. We should expect error handling to be an equally important part of user-agent conformance to the specs. The draconian approach is fundamentally wrong.
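To make the contrast concrete, here's a minimal sketch using Python's standard library (my own illustration, including the malformed sample markup - not from any spec): `html.parser` stands in for a forgiving user-agent, recovering what it can from broken markup, while `xml.etree.ElementTree`, an XML parser, takes the draconian route and rejects the input outright.

```python
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

# Markup with unclosed elements - a very common authoring error.
malformed = "<p>Unclosed paragraph<p>Another one"

# Forgiving: the HTML parser emits events for everything it can recover.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(malformed)
print(collector.tags)  # both <p> start tags are still seen: ['p', 'p']

# Draconian: the XML parser halts at the first well-formedness error.
try:
    ET.fromstring(malformed)
except ET.ParseError as e:
    print("rejected:", e)
```

The same input yields usable structure from the forgiving parser and a hard failure from the strict one - exactly the trade-off at stake in the draconian debate.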

Interactions

It's only logical to be forgiving in proportion to how well HTML can represent the English language. If the subset it represents covers only 0.5% of the entire language, then we can only expect 0.5% performance, and should therefore be 99.5% forgiving as far as the constructs it can't represent are concerned.