If you happen to be around TPAC on Tuesday afternoon, you might be
interested in a discussion we're having during the Multimodal Interaction
Working Group on multimodal use cases for standardizing statistical language
models and possible updates to SRGS. A multimodal environment makes it
possible to do things like display speech recognition results, and it makes
getting user corrections much easier than in a voice-only environment.
Consequently, I think that statistical recognition has some interesting use
cases in a multimodal setting. Please let me know if you'd like to join the
discussion and aren't already in the MMI Working Group. I plan to schedule
this for after the break on Tuesday afternoon.