However, a context-sensitive grammar is generally overkill for that
problem, or more correctly, it is a lousy formalism for solving that
problem. The Algol-68 grammar was a disaster because it tried to
encode the typing decisions in a 2-level grammar. And, while I have
great respect in general for Quinn-Tyler Jackson, I think that his
Meta-S/GrammarForge attempts too much when it attempts to solve
similar problems, effectively making his tool Turing Machine powerful.

Grammars work best when they are kept simple. Encoding type
compatibility in them is not keeping them simple. The formalism of
context free grammars handles most syntax concerns well. It rarely
handles any semantic concern well.

The notation of attributes is a relatively good formalism for
expressing many semantic concerns. In particular, it is usually a
good notation for expressing typing decisions.
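As a minimal sketch of what "typing as attributes" can look like in plain code (the node shapes, type names, and widening rule here are all invented for illustration, not from any particular compiler):

```python
# Hypothetical sketch: a synthesized "type" attribute computed bottom-up
# over a parse tree, in the spirit of an attribute grammar. The node
# encoding and the int/real widening rule are invented for illustration.

def type_of(node, env):
    """Synthesize the type attribute of a tree node; env maps names to types."""
    kind = node[0]
    if kind == "num":                  # ("num", 3)
        return "int"
    if kind == "var":                  # ("var", "x")
        return env[node[1]]
    if kind == "add":                  # ("add", left, right)
        lt = type_of(node[1], env)
        rt = type_of(node[2], env)
        if lt == rt == "int":
            return "int"
        if "real" in (lt, rt):
            return "real"              # invented widening rule
        raise TypeError(f"cannot add {lt} and {rt}")
    raise ValueError(f"unknown node kind {kind!r}")

env = {"x": "int", "y": "real"}
tree = ("add", ("var", "x"), ("add", ("num", 1), ("var", "y")))
print(type_of(tree, env))   # real
```

Note that nothing here touches the grammar: the parser builds the tree from a simple context-free grammar, and the typing rules live entirely in this walk.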

Now, technically, there is an equivalence between [some] attribute
grammars and 2-level grammars and other context-sensitive formalisms
as well. However, that does not mean that one should express the
semantic concerns which have a natural attribute grammar in the
equivalent 2-level grammar.

If you do, the person who picks up the grammar after you is likely to
curse you, and be justified in doing so. Attribute grammars are
generally readable by anyone with enough programming experience to
understand a grammar at all. 2-level grammars (and most other
context-sensitive formalisms I have seen) are not as easily
understood, especially when used extensively or subtly.

And that was my point: most people (including me) do not think of a
grammar with actions attached as defining a context-sensitive
language, even though it does at a technical level. It is quite
natural to think of the attribute evaluations as happening after (or
concurrent with) the parsing (and often it is implemented this way
also), and simply being normal "programming" steps that happen to know
about the tree structure of the input. Don't burden these people with
extra formalism, unless it makes the problem simpler.

> [Has there ever been a useful truly context free parser? All the
> ones I've written have cheated by using symbol table info to decide
> what symbol to return for a variable name and the like. I presume
> that's what Chris is suggesting. -John]

Yes, John, you understood exactly what I meant.
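The trick John describes can be sketched in a few lines; the token names and the toy symbol table below are invented for illustration, but the shape is the classic one: the lexer consults the symbol table to decide which token kind to return for a name, so the grammar itself stays context free.

```python
# Hypothetical sketch of the lexer/symbol-table trick: names that the
# symbol table knows to be type names lex as TYPE_NAME, everything else
# lexes as IDENT. Token kinds and table contents are invented.

symbol_table = {"T": "TYPE_NAME", "x": "IDENT"}

def classify(name):
    """Pick a token kind for a name, using symbol-table feedback."""
    return symbol_table.get(name, "IDENT")

def tokenize(words):
    for w in words:
        if w.isidentifier():
            yield (classify(w), w)
        else:
            yield ("PUNCT", w)

# "T * x" now lexes as a declaration-shaped token stream,
# not as a multiplication of two identifiers:
print(list(tokenize("T * x".split())))
```

With the lexer doing this small amount of "cheating," the parser can disambiguate the classic declaration-versus-expression cases without any context-sensitive machinery in the grammar.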

Moreover, I was agreeing with Bob Duff, when he writes:

> And I think encoding type information in the grammar is a losing
> proposition. I say, let semantic analysis deal with types, and avoid
> feedback from semantic analysis into the parser or lexer. "Separation
> of concerns" is the appropriate buzzword here.

In other words, it would be better to take the type information out of
the BNF and make a simpler BNF and then compute the types using code.
That may technically be an attribute grammar, but you don't have to
think of all the technical implications of that to make the solution
work.

However, even if you are going to leave the types in the grammar, use
the symbol table and lexer to help you, and don't try to make a
grammar that keeps track of the types of variables through extra
productions. Going that direction is making a bad problem even worse.

(To bring it back to the point of the previous posting, I suspect if
you just use the lexer/symbol table trick, the Algol 60 grammar won't
become significantly more complicated, and you can get away with
leaving the types in the grammar. However, if you try to make a
2-level grammar that tracks variable types, I think you will make
something which is borderline unreadable at best. Now, perhaps Quinn
will take that as a challenge, write down a Meta-S solution, and show
that it is simple enough to follow.)

(Note that I don't fault the Algol 60 writers for leaving the type
decisions in their grammar. We have 40+ years of hindsight to say
that we don't think such choices are optimal. Moreover, they didn't
even have formal attribute grammars to consider as a choice of
expression.)

The situation is similar to the fact that DFAs, NFAs, and regular
expressions are all equivalent--and one can transform any one into
another. However, some problems are most nicely expressed as a
regular expression, some as a DFA, and some as an NFA. It can take
some work to tease what may be a very simple regular expression out
of what initially appears to be a rat's nest of a DFA, and vice versa.
Moreover, I wouldn't use any of the notations to express divisibility
by 73 even though I could--I would write the problem down in
arithmetic notation and say that it can be mechanically translated
into the right DFA (NFA/RE) if needed that way.
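That mechanical translation is short enough to write down; here is a sketch (the function names are mine) of turning the arithmetic statement "divisible by k" directly into a DFA, where the states are the remainders 0..k-1 and reading decimal digit d sends remainder r to (10*r + d) mod k:

```python
# Build the divisibility-by-k DFA mechanically from the arithmetic:
# state = current remainder, transition on digit d is (10*r + d) % k,
# start state 0, accepting state 0.

def divisibility_dfa(k):
    """Transition table for 'decimal number divisible by k'."""
    return {(r, d): (10 * r + d) % k
            for r in range(k) for d in range(10)}

def accepts(dfa, digits):
    state = 0
    for d in digits:
        state = dfa[(state, d)]
    return state == 0          # accept iff the remainder is 0

dfa = divisibility_dfa(73)
print(accepts(dfa, [1, 4, 6]))   # 146 = 2 * 73 -> True
print(accepts(dfa, [1, 4, 7]))   # 147           -> False
```

The 73-state table would be hopeless to read or maintain by hand, which is exactly the point: the arithmetic statement is the right representation, and the DFA is something you generate from it.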

If you pick the right representation, the problem/solution seems
obvious. If you pick the wrong one, you've just made things harder.

Don't use formalisms to solve inappropriate problems. Find the right
formalism and use it instead.