Monday, November 29, 2010

Thoughts on Parisien & Stevenson (2010)

So I very much like that they're combining aspects of previous models (one of which we looked at last time: Perfors, Tenenbaum, & Wonnacott (2010)) into a model that tackles a more realistic learning problem: not only grouping verbs into classes based on their usage in various constructions, but also identifying the relevant constructions themselves from data containing many different verbs and construction types. A couple of more targeted thoughts:
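
To make the clustering half of that problem concrete, here's a minimal sketch of how verbs might get grouped into classes by their construction-usage profiles under a Chinese restaurant process prior. Everything here is invented for illustration - the verbs, the counts, the parameters, and the greedy one-pass seating - and it only does the verb-class half (the paper's actual model is a hierarchical Dirichlet process that also discovers the constructions themselves):

```python
import math

# Invented toy usage counts per verb: [double-object uses, prep-dative uses].
VERBS = {
    "give":    [9, 1],
    "tell":    [8, 2],
    "hand":    [9, 1],
    "donate":  [0, 10],
    "explain": [1, 9],
}

ALPHA = 1.0  # CRP concentration: willingness to posit a new class
BETA = 0.5   # symmetric Dirichlet prior over the two constructions

def dirmult_loglik(counts, class_counts):
    """Log predictive probability of a verb's construction counts given a
    class's accumulated counts (collapsed Dirichlet-multinomial)."""
    k = len(counts)
    total = sum(class_counts)
    ll, seen = 0.0, 0
    for j, c in enumerate(counts):
        for i in range(c):
            ll += math.log((class_counts[j] + i + BETA) /
                           (total + seen + k * BETA))
            seen += 1
    return ll

def crp_cluster(verbs):
    """Greedy sequential CRP seating: each verb joins the existing class
    that best predicts its usage profile, or opens a new class."""
    classes = []      # each class: [num_verbs, accumulated counts]
    assignment = {}
    n_seated = 0
    for verb, counts in verbs.items():
        # Score each existing class: CRP prior (proportional to class size)
        # times the likelihood of this verb's counts under that class.
        options = [(math.log(n / (n_seated + ALPHA)) +
                    dirmult_loglik(counts, cc), idx)
                   for idx, (n, cc) in enumerate(classes)]
        # Score opening a brand-new class.
        options.append((math.log(ALPHA / (n_seated + ALPHA)) +
                        dirmult_loglik(counts, [0] * len(counts)),
                        len(classes)))
        _, best = max(options)
        if best == len(classes):
            classes.append([0, [0] * len(counts)])
        classes[best][0] += 1
        classes[best][1] = [a + b for a, b in zip(classes[best][1], counts)]
        assignment[verb] = best
        n_seated += 1
    return assignment
```

With these toy counts, the DO-heavy verbs (give, tell, hand) end up seated together and the PD-heavy verbs (donate, explain) form a second class - the rich-get-richer CRP prior plus the Dirichlet-multinomial likelihood is enough to recover an alternating-vs-PD split without fixing the number of classes in advance.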

  • I think this is the first time I've seen "competency model" used this way. As far as I can tell, it's basically the same as what we've been calling a computational-level model, since this model is interested in whether the statistical information is present in the environment, rather than in how humans could actually extract that information.

  • Practical note: I didn't realize a MATLAB package (NPBayes) was available that does this kind of Bayesian inference, courtesy of Teh at UCL. This seems like a very nice option if WinBUGS isn't your thing.

  • Figure 3, generalization of novel dative verbs: The difference between the model with verb classes (Model 2) and the model without verb classes (Model 1) doesn't seem that great to me. While there's a small change in the right direction for the PD-only and DO-only verbs, it's unclear to me that this amounts to a real advantage. Implication: knowing about verb classes isn't a big advantage at this stage of acquisition? (Which seems not quite right, given what we saw in Perfors, Tenenbaum, & Wonnacott (2010), where having those classes was a key feature of the best-performing model.)

  • The comparison of the model's behavior to three-year-olds' generalization behavior, and why the two differ: It's entirely possible that they're right and the difference is due to the model learning from too small (and biased) a corpus. But isn't the idea to use the data children have access to in order to make the kinds of judgments children do? Whether or not the sample is biased compared to normal (adult?) corpora like the ICE-GB, the Manchester corpus is fairly large (1.5 million words of child-directed speech) and presumed to be a reasonable sample of the kind of data children hear - shouldn't a competency model generalize from these data the way children do? I suppose the Manchester corpus could be particularly biased for some reason, but it's made up of data from multiple children, so it would be surprising if all the data sets happened to be biased the same way by chance.
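
Going back to the Figure 3 point for a moment, a toy calculation helps show what verb classes are even supposed to buy you for a novel PD-only verb. The numbers and the two "models" below are my own deliberately simplified stand-ins, not the paper's actual Models 1 and 2 (which are hierarchical Dirichlet processes): the class-free version just smooths the novel verb's own counts, while the class version infers which of two hypothesized classes the verb belongs to and mixes their predictions:

```python
# Invented counts for two hypothesized verb classes (not from the paper):
CLASSES = [
    {"n_verbs": 8, "do": 40, "pd": 40},   # alternating class
    {"n_verbs": 2, "do": 0,  "pd": 30},   # PD-only class
]
BETA = 0.5  # symmetric Dirichlet smoothing over the two constructions

def no_class_p_do(pd_uses):
    """Class-free predictive P(next use is DO) for a novel verb heard
    pd_uses times, always in the PD construction: own counts, smoothed."""
    return BETA / (pd_uses + 2 * BETA)

def class_p_do(pd_uses):
    """Class-based predictive: infer a posterior over classes for the
    novel verb, then mix each class's smoothed DO rate accordingly."""
    post = []
    for cl in CLASSES:
        # Prior proportional to class size, times the likelihood of
        # pd_uses PD tokens (collapsed Dirichlet-multinomial predictive).
        p = cl["n_verbs"]
        for i in range(pd_uses):
            p *= (cl["pd"] + i + BETA) / (cl["do"] + cl["pd"] + i + 2 * BETA)
        post.append(p)
    z = sum(post)
    p_do = 0.0
    for cl, w in zip(CLASSES, post):
        p_do += ((w / z) * (cl["do"] + BETA) /
                 (cl["do"] + cl["pd"] + pd_uses + 2 * BETA))
    return p_do
```

With these numbers, after a single PD use the class model actually predicts *more* DO generalization than the class-free one (the big alternating class is still a live hypothesis), but after ten PD uses it has essentially decided the verb belongs to the PD-only class and predicts DO much less than the class-free model does. So on this toy setup the classes mainly matter for how fast the learner sharpens up - which makes the small Model 1 vs. Model 2 gap in Figure 3 a little less mysterious, if the test verbs are frequent enough.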
