We address the robustness of conversational interfaces that are flexible enough to allow natural ``multithreaded'' conversational flow. Our main advance is the general use of context-sensitive speech recognition, with a representation of dialogue context rich and flexible enough to support conversation about multiple interleaved topics, as well as the interpretation of corrective fragments. Using worked examples, we explain how our ``Conversational Intelligence Architecture'' (CIA) represents conversational threads, and how each thread can be associated with a language model (LM) for more robust speech recognition. The CIA uses fine-grained dynamic representations of dialogue context, which supersede those used in finite-state or form-based dialogue managers. In an evaluation of a dialogue system built with this architecture, 87.9% of utterances were recognised using a context-specific language model, yielding an 11.5% reduction in the overall utterance recognition error rate and a 13.4% reduction in the concept error rate. Thus, by using context-sensitive recognition based on the predicted type of the user's next dialogue move, a more flexible dialogue system can also improve speech recognition performance.
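To illustrate the core mechanism described above, the following is a minimal sketch (not code from the paper) of selecting a context-specific language model for the next user utterance based on the predicted type of the user's next dialogue move, with a fallback to a general LM when no context-specific model applies. All names here (`MOVE_TO_LM`, the move-type labels, and the LM identifiers) are hypothetical illustrations, not identifiers from the actual CIA implementation.

```python
# Hypothetical sketch of context-sensitive LM selection: each predicted
# dialogue-move type is mapped to a specialised language model, and the
# recogniser falls back to a general LM when no prediction matches.

GENERAL_LM = "general_lm"

# Hypothetical mapping from predicted dialogue-move types to LM names.
MOVE_TO_LM = {
    "yn-answer": "yes_no_lm",
    "specify-parameter": "parameter_values_lm",
    "correction": "correction_fragments_lm",
}

def select_lm(predicted_move):
    """Return the LM identifier to load for the next user utterance.

    predicted_move: the dialogue manager's prediction of the type of the
    user's next dialogue move, or None if no prediction is available.
    """
    return MOVE_TO_LM.get(predicted_move, GENERAL_LM)

print(select_lm("correction"))  # correction_fragments_lm
print(select_lm(None))          # general_lm
```

In such a design, the dialogue manager's thread representation drives recogniser configuration per turn, which is what allows a context-specific LM to be used for most utterances while the general LM guarantees coverage for unpredicted moves.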