Monday, November 1, 2010

Thoughts on Bod (2009)

I'm very fond of the general idea underlying this approach, where the structure of sentences is used explicitly to generalize appropriately during acquisition. The way Bod's learner has access to all possible tree structures reminds me very much of work by Janet Fodor and William Sakas, who published papers in the early 2000s on their Structural Triggers Learner, which also has access to all possible tree structures. I think the interesting addition in Bod's work is that tree structures can be discontiguous in ways that don't necessarily have to do with dependencies (e.g., Fig 13, p.768, with the discontiguous subtree that involves the subject and part of the object). That being said, I don't know how plausible it really is for a child to keep track of these kinds of strange discontinuities. Nor do I know how plausible it is to track statistics on all possible subtrees. I know Bod offers some techniques for making this tractable, but it still seems trickier than the Bayesian approach, where the hypothesis space of structures is very large but, importantly, implicit - the learner never actually has to deal with all the items in the hypothesis space. Bod's learner, on the other hand, seems to need to track all those possible subtrees explicitly.
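To get a feel for why the explicit tracking worries me, here's a minimal sketch (my own illustration, not anything from Bod's paper) of how fast just the number of possible binary tree structures grows with sentence length - and the number of subtrees across all those trees grows faster still:

```python
from math import comb

def catalan(n):
    """Catalan number C(n): the number of distinct binary
    bracketings (unlabeled binary trees) over n + 1 words."""
    return comb(2 * n, n) // (n + 1)

# Possible binary tree structures per sentence length (in words).
for words in range(2, 11):
    print(words, catalan(words - 1))
# A 10-word sentence already has 4862 possible binary trees.
```

Bod's tractability techniques compress this space rather than enumerating it, but the contrast with a Bayesian learner's merely implicit hypothesis space still seems real to me.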

More specific comments:

  • I admit my feathers got a little ruffled in section 6 with the poverty of the stimulus discussion. On p.777, Bod cites Crain (1991), who claimed that (at the time - it's been 20 years since then) complex yes/no questions were the "parade case of an innate constraint". Bod then goes on to show that the U-DOP learner can learn complex yes/no questions. The catch is that the "innate constraint" the nativists claimed was needed to learn this is precisely what Bod's U-DOP learner builds in: structure-dependence. So it would actually be really bad (and strange) if the U-DOP learner, with all its knowledge of language structure, couldn't learn how to form complex yes/no questions properly. What Bod has shown, it seems to me, is a method that uses structure-dependence to learn complex yes/no questions from the input. Since his learner assumes exactly the knowledge the nativists say children need to assume, I don't think he can claim to have shown anything that should change nativists' views on the problem.

  • It seems like this learner is actually tackling a harder problem than necessary, since children will likely have some grammatical category knowledge (even if not yet for all words). Given this, children may also be able to use simple probability information between grammatical categories to form initial groupings (constituents) - so the U-DOP learner is considering a wider hypothesis space of possible tree structures than it needs to when it allows any fragment of a sentence to form a productive unit (e.g., "the book", which is a constituent, vs. "in the", which is not).

  • I found it interesting that binarity plays such an integral role for this learner. That property seems similar to the binary-branching character of "Merge" in current generative linguistics.

  • It also seems like the overall process behind U-DOP is a formalization of the chunking or "algebraic learning" process that gets talked about a lot in the acquisition literature - in this case, chunking over tree structures. This struck me particularly in section 5.2, on p.774, with the "blow it up" example.

  • Smaller note: why does U-DOP do so poorly on Chinese, compared to the German and English data in section 4? It makes me wonder whether there's something language-specific about approaching the learning problem this way, or perhaps about using this particular structural representation.
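As a concrete toy version of my point above about category probabilities: a learner with even partial grammatical categories could compute transitional probabilities between adjacent categories and use high-probability pairs (like determiner-noun) as seeds for initial groupings, rather than entertaining every possible fragment the way U-DOP does. The corpus, tags, and numbers below are entirely made up for illustration; nothing here is from Bod's paper:

```python
from collections import Counter

# Toy category-tagged corpus (hypothetical tags; real
# child-directed input would be messier and larger).
sentences = [
    ["DET", "N", "V", "DET", "N"],
    ["DET", "N", "V", "P", "DET", "N"],
    ["PRO", "V", "DET", "N"],
    ["DET", "N", "V", "PRO"],
]

pair_counts = Counter()
tag_counts = Counter()
for sent in sentences:
    for a, b in zip(sent, sent[1:]):
        pair_counts[(a, b)] += 1
        tag_counts[a] += 1

def trans_prob(a, b):
    """P(b | a): transitional probability between adjacent category tags."""
    return pair_counts[(a, b)] / tag_counts[a] if tag_counts[a] else 0.0

# In this toy corpus, DET->N is maximally predictable (a good
# grouping candidate, like "the book"), while V->DET is not.
print(trans_prob("DET", "N"))  # 1.0 in this toy corpus
print(trans_prob("V", "DET"))  # 0.5 in this toy corpus
```

This kind of cue obviously wouldn't bracket whole sentences on its own, but it could prune the space of candidate constituents before anything like U-DOP's all-subtrees machinery gets involved.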
