> A true LR algorithm does not merge left-contexts (that affect
> reductions--the canonical LR algorithm never merges, and minimal
> state ones merge and then re-split (ala Spector) or use lane tracing
> (ala Pager) before merging to avoid problematic merges) and thus
> each state has only the lookaheads that are appropriate to it.
> Thus, an LR algorithm doesn't have the issue unless one "optimizes"
> the states as described above.

I'm not sure how that could be correct for any minimal LR algorithm. My
understanding is that, if every state has only the lookaheads that are
appropriate to it (or, more precisely, to all of its viable prefixes),
then the tables are canonical LR. If there are minimal LR algorithms that
somehow avoid the merging of different lookahead sets, I'd be interested in
hearing how they accomplish that.
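To make the merging issue concrete, here is a small sketch (not from either parser generator, just the textbook example of an LR(1) grammar that is not LALR(1)) showing how unioning the lookahead sets of two canonical LR(1) states with the same core manufactures a reduce/reduce conflict:

```python
from collections import Counter

# Grammar: S -> a A d | b B d | a B e | b A e ;  A -> c ;  B -> c
# The canonical LR(1) states reached after "a c" and after "b c" share
# the same core items {A -> c., B -> c.} but have disjoint lookaheads,
# so each state is conflict-free on its own.
state_after_ac = {"A -> c.": {"d"}, "B -> c.": {"e"}}
state_after_bc = {"A -> c.": {"e"}, "B -> c.": {"d"}}

def conflicts(state):
    """Lookahead tokens on which more than one reduction applies."""
    counts = Counter(tok for las in state.values() for tok in las)
    return {tok for tok, n in counts.items() if n > 1}

# Canonical LR keeps the two states separate: no conflicts.
assert conflicts(state_after_ac) == set()
assert conflicts(state_after_bc) == set()

# LALR merges same-core states by unioning their lookahead sets.
merged = {item: state_after_ac[item] | state_after_bc[item]
          for item in state_after_ac}

# The merge creates a reduce/reduce conflict on both "d" and "e".
assert conflicts(merged) == {"d", "e"}
```

This is exactly the sense in which a merged state ends up with lookaheads that are not appropriate to all of its viable prefixes.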

If, after parser table generation, some actions are converted to error
actions because of, for example, Yacc's %nonassoc, then even states in
canonical LR tables can contain lookaheads that are not appropriate. Of
course, Yacc's %nonassoc requires a non-LR(1) grammar in order to be
useful.
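A toy sketch of the point (a hypothetical action-table row, not Yacc's or Bison's actual data structures): once %nonassoc rewrites a conflicting entry as an error action, the token is still a lookahead of the state's items, so any expected-token report computed naively from the row will list a token the parser is about to reject.

```python
# Hypothetical action row for the state after parsing "expr '<' expr",
# where '<' is declared %nonassoc. Before conflict resolution the row
# has a shift/reduce conflict on '<'; %nonassoc replaces it with error.
row_before = {"<": ("shift", 7),
              "+": ("reduce", "expr: expr '<' expr"),
              ")": ("reduce", "expr: expr '<' expr")}
row_after = dict(row_before, **{"<": ("error",)})

# Naive report: every token with a table entry is "expected".
naive_expected = set(row_after)

# Corrected report: skip entries that resolve to error actions.
correct_expected = {tok for tok, act in row_after.items()
                    if act[0] != "error"}

assert "<" in naive_expected          # spurious: the parser rejects '<'
assert "<" not in correct_expected
```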

According to another of your emails, you've done some work with Yacc++ to
correct the expected tokens reported upon syntax errors. I've also worked
on this for Bison. Do you know of any papers that discuss this topic?