I see you're in favor of the "new" transformation idea of which I was
unaware. But have you heard about the coding idea? (I forget what exactly
the name of the idea was...) I found this one a much, much more plausible
solution. The idea is that every single word has a code that goes with it
and can predict or delimit what comes next, depending on what
semantic/pragmatic context it carries. I don't know the specifics, but an
example that
ordinarily would have been explained by transformation would run as follows:
"The hamburger", noun, should be followed by a verb (or an adverb phrase
which would be followed by a verb) if it comes initially. However, if it's
followed by something other than a verb, it emphasizes it. (This is
ridiculously simplified, and ignoring relative clauses.) So:
1.) "The hamburger is good."
2.) "The hamburger I gave him." (In response to, "What did you give him?"
in, say, Yiddish American English.)
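The two examples above can be sketched as a toy checker, assuming a word's "code" is just a category plus a table of which categories may follow it. Everything here (the lexicon, the categories, the transition table) is invented for illustration, not any actual proposal:

```python
# Toy sketch of the "coding" idea: each word carries a category,
# and each category delimits which categories may follow it.
# Lexicon and rules are hypothetical and drastically simplified.

LEXICON = {
    "the": "Det",
    "hamburger": "N",
    "is": "V",
    "good": "Adj",
    "i": "Pron",
    "gave": "V",
    "him": "Pron",
}

# Allowed next categories; None marks a permitted sentence end.
NEXT = {
    "Det": {"N"},
    "N": {"V", "Pron"},   # N followed by a non-verb = emphatic reading
    "Pron": {"V", None},
    "V": {"Adj", "Pron"},
    "Adj": {None},
}

def parse(sentence):
    """Return the category sequence if every transition is licensed, else None."""
    cats = [LEXICON[w.strip('."').lower()] for w in sentence.split()]
    for cur, nxt in zip(cats, cats[1:] + [None]):
        if nxt not in NEXT[cur]:
            return None
    return cats
```

Both example sentences come out licensed, while a scrambled order does not: `parse("The hamburger is good.")` and `parse("The hamburger I gave him.")` each return a category list, whereas `parse("Good the hamburger.")` returns `None`.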
The point is that every word delimits what can come next and, therefore,
what it means. (By the by, this obviously doesn't matter for languages like
Latin, which can take on nearly any word order and convey the same meaning.)
But, say, if you had an OSV language in which word order is important, then
the first noun of the sentence would just encode the idea of being the object
without having to be near the verb and without having to undergo some sort
of transformation. If this doesn't make sense, can you at least see the idea?
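The OSV point can be sketched the same way, assuming roles are read straight off linear position; the template and example words below are made up:

```python
# Hypothetical OSV language: grammatical role is encoded by linear
# position alone, so no movement/transformation step is needed.
ROLE_BY_POSITION = ["object", "subject", "verb"]  # assumed OSV template

def assign_roles(words):
    """Pair each position's role with the word occupying it."""
    return list(zip(ROLE_BY_POSITION, words))
```

So `assign_roles(["hamburger", "I", "ate"])` yields the object, subject, and verb pairings directly, with the first noun marked as object purely by where it sits.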
-David