Goddamn natural language processing. Sure, in a few specific applications you can apply some clever NLP algorithms to save some human processing time, but the general problem of AI is probably unsolvable. Specifically, Goedel's famous theorem forces us to accept one of two possibilities:

(1) Human Intelligence is nothing more than the shuffling around of symbols and logical rules, devoid of any underlying meaning — in which case Goedel's theorem applies to it, or

(2) Human Intelligence transcends any formal system of symbols and rules — in which case no purely computational approach will ever reproduce it
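For context, here is a rough paraphrase of the result this dichotomy leans on (Goedel's first incompleteness theorem — a sketch, not a precise statement):

```latex
\text{For any consistent, effectively axiomatized theory } T
\text{ that includes basic arithmetic,} \\
\text{there exists a sentence } G_T \text{ such that }
T \nvdash G_T \;\text{ and }\; T \nvdash \neg G_T, \\
\text{even though } G_T \text{ is true in the standard model of arithmetic.}
```

The Lucas-style reading of this is: if the mind is such a formal system $T$, there are arithmetic truths it can never prove; if instead the mind can always "see" the truth of its own $G_T$, it cannot be any such system.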

We don't know enough about Human Intelligence to say whether (1) is true. There's a nugget of information in that sentence that should have clued you in: "We don't know enough about Human Intelligence...". How in the hell does the AI crowd go about simulating something we don't even remotely understand!?

Ego. Many CS whiz-kids grew up thinking they could solve any technical problem in the book. They liked being the people who could solve the problems nobody else could. The natural endpoint of hunting for problems other people can't solve seems to be AI. Because AI is unsolvable.

It's been almost a century, and the meaning of Goedel's theorem is still lost on most academics.