I met Maurice Gross a few times. He had long-standing connections with Penn, and he was a charming host when I visited his lab for a thesis defense. After the event, we had a memorable dinner at a local restaurant, where Maurice amused us with stories about his country house, wine-making, and I'm sure many other topics I don't remember now. He recommended that I try clafoutis for dessert; I had never had it before, and it was superb. This experience agrees well with Eric Laporte's account.

Maurice Gross was right and ahead of his time in his focus on the local grammar of lexical items, and in recognizing the combinatorial uniqueness of individual lexical items, in contrast with the very impoverished tag sets of standard generative grammar. The use in his laboratory of finite-state transducers for local grammars was highly original. However, I'm less sure that their specific approach is sufficient. The local grammar approach imposes extreme constraints on the interactions between lexical items, and seems too brittle to handle natural variation. Local grammars are better as compact summaries of observations than as models of the interactions and variations that may occur. Lexicalized TAG has a similar flavor of local grammar, but it allows greater combinatorial flexibility and generalization. Still, both the overall view of language and the specific methods that Maurice Gross pioneered deserve continued study. In our rush to build theories and systems, we keep forgetting that language is much more an assemblage of particulars than the neat result of a few general principles. We need to savor it slowly, as if we were sitting down to dinner with Maurice.


About Me

I am VP and Engineering Fellow at Google, where I lead work on natural-language understanding and machine learning. My previous positions include chair of the Computer and Information Science department of the University of Pennsylvania, head of the Machine Learning and Information Retrieval department at AT&T Labs, and research and management positions at SRI International. I received a Ph.D. in Artificial Intelligence from the University of Edinburgh in 1982, and I have over 120 research publications on computational linguistics, machine learning, bioinformatics, speech recognition, and logic programming, as well as several patents. I was elected AAAI Fellow in 1991 for contributions to computational linguistics and logic programming, ACM Fellow in 2010 for contributions to machine-learning models of natural language and biological sequences, and ACL Fellow in 2017 for contributions to sequence modeling, finite-state methods, and dependency and deductive parsing. I was president of the Association for Computational Linguistics in 1993.