Tuesday, November 23, 2021

Some thoughts on Bohn et al. 2021

I think it’s really nice to see a developmental RSA model, along with explicit model comparisons. To me, this approach highlights how you can capture specific theories/hypotheses about what exactly is developing via computational cognitive modeling “snapshots” of observable behavior at different ages. Also, we get to see the model-evaluation pipeline often used in adult RSA modeling now used with kids (i.e., the model makes testable predictions that are in fact tested on kids). I also appreciate how careful B&al2021 are in the general discussion about how model parameters link to psychological processes (they emphasize that their model necessarily made idealizations to be able to get anywhere).
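(To anchor the discussion for myself, here’s a minimal sketch of the vanilla RSA recursion this kind of model builds on. It’s my own toy illustration with a made-up two-word, two-referent lexicon and a uniform prior, not B&al2021’s actual developmental model; as I read it, the developmental part of their story is that ingredients of this recursion are what change with age.)

    import numpy as np

    # Toy lexicon: rows = utterances, columns = referents; 1 = the word applies.
    lexicon = np.array([[1.0, 1.0],   # word A applies to both referents
                        [0.0, 1.0]])  # word B applies only to referent 2
    prior = np.array([0.5, 0.5])      # listener's prior over referents
    alpha = 1.0                       # speaker rationality parameter

    def row_normalize(m):
        return m / m.sum(axis=1, keepdims=True)

    L0 = row_normalize(lexicon * prior)  # literal listener: P(referent | utterance)
    S1 = row_normalize(L0.T ** alpha)    # speaker: P(utterance | referent), sharpened by alpha
    L1 = row_normalize(S1.T * prior)     # pragmatic listener: P(referent | utterance)
    print(L1)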


Some other thoughts:

(1) It’s interesting to me that B&al2021 talk about children integrating all available information, in contrast to alternative models that ignore some information (and don’t do as well). I’m assuming “all” is relative, because a major part of language development is learning which parts of the input signal are relevant. For instance, speaker voice pitch is presumably available information, but I don’t think B&al2021 would consider it relevant for the inference process they’re interested in. So I take it that they’re really contrasting the winning model with ones that ignore some available relevant information.
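(Just to make that contrast concrete for myself, here’s a toy version of what integrating vs. ignoring a source could look like, assuming the sources get multiplied together and renormalized. The particular sources and numbers are invented for illustration, not taken from the paper.)

    import numpy as np

    def listener(semantic_fit, informativeness, prior):
        # Combine the information sources multiplicatively, then renormalize over referents.
        scores = semantic_fit * informativeness * prior
        return scores / scores.sum()

    semantic_fit    = np.array([0.7, 0.3])  # how well the heard word fits each referent
    informativeness = np.array([0.4, 0.6])  # how informative the word would be for each referent
    common_ground   = np.array([0.2, 0.8])  # prior from the shared discourse context

    full     = listener(semantic_fit, informativeness, common_ground)
    lesioned = listener(semantic_fit, informativeness, np.array([0.5, 0.5]))  # ignores common ground
    print(full, lesioned)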


(2) I feel like the way B&al2021 talk about informativity differs at points. In one sense, they talk about an informative and cooperative speaker, which seems to link with the general RSA framework of speaker utility as maximizing correct listener inference. In another sense, they connect informativity to alpha specifically, which seems like a narrower sense of “informativity”, maybe tied to how far above 1 alpha is (and therefore how deterministic the probabilities are that the speaker uses).
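(For that narrower sense, the toy calculation I have in mind: alpha is the exponent on the speaker’s choice probabilities, so pushing it above 1 sharpens those probabilities toward the more informative utterance. The numbers below are invented.)

    import numpy as np

    def speaker(listener_probs, alpha):
        # P(utterance | referent) proportional to P_listener(referent | utterance) ** alpha
        scores = listener_probs ** alpha
        return scores / scores.sum()

    listener_probs = np.array([0.6, 0.4])  # how well each candidate utterance picks out the referent
    for alpha in [1, 2, 5]:
        print(alpha, speaker(listener_probs, alpha))  # choices get more deterministic as alpha grows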


(3) Methodology, no-word-knowledge variant: Even after reading the methods section, I was still a little fuzzy on how general vocabulary size is estimated and used in place of word-specific familiarity, beyond the fact that it’s of course the same value for all objects (rather than differing by word familiarity).
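(The contrast as I understand it, in toy form. This is my paraphrase rather than the paper’s actual estimation procedure: the full model gets a familiarity value per word/object, while the no-word-knowledge variant substitutes a single vocabulary-derived value for every object.)

    import numpy as np

    # Full model: word familiarity differs by object (values invented).
    word_specific_familiarity = np.array([0.9, 0.4, 0.1])

    # No-word-knowledge variant: one vocabulary-size-based value stands in for all objects.
    vocab_based_stand_in = np.full(3, 0.45)

    print(word_specific_familiarity)
    print(vocab_based_stand_in)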

