
A recent question on this site led to some discussion which provoked the following comment by one of our community members:

UG is "controversial in the latter interpretation" only insofar as
there are linguists who are sure there must be domain-independent
explanations for all the phenomena of language. I've actually never
seen an actual domain-independent explanation for any problem explored
by generative grammarians. Usually these non-generative linguists
either discuss different phenomena entirely or else deal with relevant
issues so superficially (e.g. word order) that they don't even engage
the key data

In the current question I'd like to solicit examples of problems explored by generative grammarians which are considered to rely most heavily on an assumption of FLN (the faculty of language in the narrow sense, i.e. a domain-specific human endowment which permits the learning and computation of natural languages), and for which the correctness of the analysis is generally accepted within the tradition.

If community members can provide good examples, then a worthwhile follow-up question (that I or anyone else could ask) might explore whether there are good candidates for actual, relevant, non-superficial, alternative analyses which do not rely on FLN.

2 Answers

First there's the phrase "and for which the correctness of the analysis is generally accepted within the tradition". So if someone comes up with an example, you can always dip into the vast literature of linguists exploring alternatives to each other's ideas in order to characterize the example as not "generally accepted".

Then there's the reference to "FLN", which will allow you to reject all sorts of examples for which there isn't an analysis compatible with Chomsky's recent hyper-minimalist speculation that the only component of UG is Merge (i.e. FLN). Chomsky may be famous, but these recent speculations do not drive the field, precisely because it is so hard to explain anything real within their strictures. (Some people might find this a bracing challenge, but the other possibility is that it's a blind alley.)

Then there's your reference to "domain-specific". No one's going to be able to prove that a principle relevant to grammar might not turn out to be domain-general. Maybe the Case Filter reflects some abstract property of the mind that is also used in vision. That would actually be a very exciting result for everyone, generative grammarians included. My point (I'm the author of the quote) is that I've never seen any demonstration of this sort that's real, not that we know for a fact that the principles underlying human language are all domain-specific.

That said, have a go at:

the existence of obligatory verb-initial and obligatory verb-second languages, and the non-existence of obligatory verb-third, verb-fourth, etc. languages

the Cinque hierarchy

the complement/non-complement asymmetry in Incorporation and compounding (Baker's standard example)

island conditions as they apply to wh-phrases like "why" and "how", both overtly displaced and in situ

The Person-Case Constraint

These are the examples that come to mind. I imagine others can supply further relevant examples from different areas of linguistics.

Edit: In response to the request for references, a first stab:

"the existence of obligatory verb-initial and obligatory verb-second languages, and the non-existence of obligatory verb-third, verb-fourth, etc. languages" Maybe start with this survey article by Holmberg and follow the references he gives.

"the complement/non-complement asymmetry in Incorporation and compounding (Baker's standard example)" Discussed for lay audiences in Baker's Atoms of Language, based on research first reported in his book Incorporation.

"island conditions as they apply to wh-phrases like "why" and "how", both overtly displaced and in situ". Lots of sources. Probably the references in this article will be a good start.

"The Person-Case Constraint". Simple Googling is probably good enough to show the nature of the condition, proposed accounts, and the languages across the globe that show its effects.

If you think it's possible to rephrase the question in a less tendentious way, would you consider editing the question? Also, could you provide representative citations for the phenomena you've cited?
– jlovegren, Mar 2 '12 at 14:13

Perhaps you might edit your own question. I would feel more comfortable about that way of proceeding. I will try to come back with references as you ask -- but we're talking about topics that one needs some background in the field to understand, so there isn't always one single place to go (and this is a website that we visit casually -- I don't have the time to give you a carefully constructed syllabus as I would in planning a class). Googling will bring up lots of relevant citations, and anyone in the field will be familiar with these topics.
– pensator, Mar 2 '12 at 14:52

I am sensing that you don't find the question appropriate, so I'll go ahead and vote to close it. Though I asked it in good faith, I could have approached it better.
– jlovegren, Mar 2 '12 at 15:38

I think this argument might be misconstrued. The latest incarnation of generative syntax, the Minimalist Program, doesn't purport to explain cross-linguistic variation. Its sole goal is to find out how this optimal system works, from the point of view of cognitive science (see more on biolinguistics). UG turned out to be primarily recursion and nothing else. That is why Cedric Boeckx has consistently argued that there is no such thing as parametric syntax.
– Alex B., Mar 2 '12 at 17:10

FLN is a retrenchment of a much stronger claim, the claim of universal grammar. Since you asked about Universal Grammar, not just FLN, I will try to answer the UG question.

The claim of universal grammar is that any sentence, no matter how complex or embedded, can be translated between any two languages with essentially the same kind of embedding structure. This would mean that, aside from superficial differences in vocabulary and word order, all languages share a deeply isomorphic stack-parsed tree grammar at their core.

To show you that this makes nontrivial predictions, here is a super-embedded English sentence:

I cut the tree which held the girl which put the frog which swallowed the fly in its throat in her pocket down quickly.

The Hebrew translation is word for word, with the exact same recursive structure, except that the final "down" in English is omitted (since "cut ... down" is a peculiarly English construction). The embedded clause structure and the general sentence pattern are exactly the same, even though the two languages are not particularly related. The recursion works exactly the same way.

Hebrew is a revived language, but you can do this with Chinese, with French, with basically any old-world language, and you will produce a sentence with essentially the same nesting structure. There may be some word-order or clause-order differences, but the result can be recognized as basically the same sentence, because the parse-tree structure is largely isomorphic, with the same levels of nesting.

That's the key prediction of UG.
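
To make the "isomorphic parse tree" claim concrete, here is a minimal sketch in Python (mine, not part of the original answer; the translation's glosses are hypothetical placeholders, not real data from any language). Each clause is represented as its words plus a list of embedded clauses, and a shape function strips away the words so that only the nesting structure is compared:

    # A sketch only: represent each clause as (words, [embedded clauses]).
    def shape(tree):
        """Strip the words, keep only the embedding structure."""
        words, embedded = tree
        return [shape(clause) for clause in embedded]

    # "I cut the tree [which held the girl [which put the frog
    #  [which swallowed the fly in its throat] in her pocket]] down quickly."
    english = ("I cut the tree ... down quickly",
               [("which held the girl ...",
                 [("which put the frog ... in her pocket",
                   [("which swallowed the fly in its throat", [])])])])

    # Hypothetical word-for-word translation, with placeholder glosses.
    translation = ("GLOSS: I cut the tree ... quickly",
                   [("GLOSS: which held the girl ...",
                     [("GLOSS: which put the frog ... in her pocket",
                       [("GLOSS: which swallowed the fly ...", [])])])])

    # UG's prediction: a faithful translation has the same nesting shape.
    print(shape(english) == shape(translation))  # True (three levels deep)

The comparison throws away every word and keeps only the tree, so it comes out True for any faithful translation; that is the word-order-independent sense in which the two sentences count as "the same".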

Made-up constructions that violate UG

The statement that all languages recurse in this way is to be contrasted with made-up recursive structures which never occur in natural languages.

Consider a made-up language with the same vocabulary as English, but in which the following sentence is grammatical:

"I walked to the store is green."

The intended semantics of this construction are

(I walked to the store) and (the store is green).

Since "the store" repeats in both sentences, one could imagine a culture where you could say:

"I walked to the store is green is my favorite color looks better than Nancy's favorite color is yellow."

"I walked to the store which is green which is my favorite color which looks better than Nancy's favorite color which is yellow."

Those "which"es are necessary, and something like it is necessary in all the world's languages. The reason is that you are not allowed to make overlapping constructions

(I walked to the store) is green

I walked to (the store is green)

You can see that without the "which", the parentheses overlap. When parentheses overlap, the language is not parsable by a stack automaton, meaning that it isn't the type of context-free grammar that Chomsky identified as central to grammatical embedding in human language.

The "which" makes the stuff that follows subordinate to the preceding stuff, so you get

(I walked to (the store which is (green which is (my favorite color which looks better than (Nancy's favorite color which is yellow)))))

Notice that the units don't overlap, so this is fine. That's a major nontrivial prediction of UG: all language embedding must be non-overlapping.
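
The non-overlap condition is easy to state mechanically. Here is a minimal sketch (again mine, with word-index spans chosen purely for illustration) that treats each clause as a half-open span of word positions and checks that every pair of spans either nests or is disjoint, which is exactly the property a single stack can parse:

    # A sketch only: a bracketing is stack-parsable iff no two spans
    # partially overlap (each pair is either nested or disjoint).
    def is_nested(spans):
        for i, (a1, b1) in enumerate(spans):
            for a2, b2 in spans[i + 1:]:
                disjoint = b1 <= a2 or b2 <= a1
                nested = (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)
                if not (disjoint or nested):
                    return False
        return True

    # "I walked to the store which is green"
    # words: I(0) walked(1) to(2) the(3) store(4) which(5) is(6) green(7)
    print(is_nested([(0, 8), (3, 8)]))  # True: the relative clause nests

    # The made-up "I walked to the store is green" needs two units that
    # share only "the store": (I walked to the store), (the store is green).
    # words: I(0) walked(1) to(2) the(3) store(4) is(5) green(6)
    print(is_nested([(0, 5), (3, 7)]))  # False: the spans partially overlap

The False case is precisely the overlapping-parentheses situation above: no sequence of a stack's pushes and pops can produce two constituents that share material without one containing the other.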

Exceptions

When languages have complex case systems, they sometimes have freer word order, which allows units that are grouped together to slide apart, since the case markings rule out any ambiguity. Such things might lead structures to partially overlap, which contradicts UG. This is not so terrible, because you can usually figure out how to translate any such thing back and forth with the same "deep structure", essentially the same parse-tree description.

The more serious challenge comes from Pirahã and Warlpiri, which do not allow complex embedding at all. You couldn't even translate the tree-girl-frog sentence above into Pirahã; you would need many separate sentences. These are counterexamples to the claims of UG.

Why stacks?

So UG is a powerful, predictive statement. It just happens to be wrong, despite the fact that you have to go to the far corners of the Earth for a true counterexample. That's weird. Why should it be true of almost all languages, with a few isolated exceptions?

One hypothesis you could make is that bilingual speakers and the need for translation homogenized the grammars of neighboring languages over time, leading the world's grammars to standardize on essentially the same tree-recursive form.

But from my experience with mathematical formalisms and artificial languages, I have noticed that human beings are just "hard-wired" to parse stack languages naturally. This is most notable in computer languages, where PASCAL (which is rigidly structured and very hierarchical) is considered super-elegant, LISP (which is essentially all nested parentheses) is considered the king of elegance, while "C" (which has side effects) is considered ugly.

Despite its ugliness, C wins.

The same is true in mathematics, where index notation for tensors is considered "ugly", and function notation is preferred, with nested parentheses. This is despite the obvious fact that index notation is a language that perfectly fits the domain, and parenthesis notation does not. The same problems occur in Fregean semantics, in biochemical modeling languages, in Aristotelian philosophy (where categories are not allowed to overlap), in object-oriented languages (no multiple inheritance), and basically everywhere humans try to design rules for natural things: they almost always make non-overlapping structures, even when these are wrong for the intended domain.

I speculate that the human brain is just hard-wired for tree grammars because of some biochemistry. If memory is encoded in specific RNA chains in the brain, as I am sure it is, it is possible that sliding certain RNA chains in one direction performs a "push" operation on a stack, while sliding them in the other direction performs a "pop". Then the one-dimensional linear stack structure of language would reflect the biochemistry of the memory-storing RNA in the brain as the ears respond to the input. The existence of a stack in the brain, an actual biochemical stack, would explain this peculiarity. But this is just speculation at this point.