Friday, February 10, 2023

Some thoughts on Hahn et al. (2022)

I love the way Hahn et al. (2022) set up the two approaches they’re combining – it seems like the most natural thing in the world to combine them and reap the benefits of both. Hats off to the authors for some masterful narrative there.


In general, I’d love to think about how to apply the resource-rational lossy-context surprisal approach to models of acquisition. It seems like this approach to input representation could be applied to child input for any given existing model (say, of syntactic learning, but really for learning anything), so that we get a better sense of what (skewed) input children might actually be working from when they’re trying to infer properties of their native language. 


A first pass might just be to use this adult-like version to skew children’s input (maybe a neural model trained on child-directed speech to get appropriate retention probabilities, etc.). That said, I can also imagine that the retention rate might generally be lower for kids (and for kids of different ages) than for adults, because of lower thresholds on the parts that go into calculating that retention rate (e.g., the delta parameter that modulates how much context goes into calculating next-word probabilities). Still, the exciting thing for me is the idea that this is a way to formally implement “developing processing” (or even just “more realistic processing”) in a model that’s meant to capture developing representations.
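To make the intuition concrete, here’s a minimal sketch of what “lossy context” could look like computationally. This is a hypothetical toy parameterization, not the actual Hahn et al. (2022) formulation: I’m assuming retention probability simply decays with distance from the current word, with a `delta` knob controlling the decay, so a smaller `delta` (as I’m imagining for younger kids) would yield a sparser retained context.

```python
import random

def lossy_context(context, delta=0.5):
    """Return a lossy version of the context: each word is kept with a
    probability that decays with its distance from the current word.

    Hypothetical sketch only -- delta ** distance is an illustrative
    decay, not the retention function from Hahn et al. (2022).
    """
    n = len(context)
    kept = []
    for i, word in enumerate(context):
        distance = n - i          # most recent word has distance 1
        p_retain = delta ** distance
        if random.random() < p_retain:
            kept.append(word)
    return kept
```

On this toy version, `delta=1.0` retains everything (adult-like, fully faithful context) and `delta=0.0` retains nothing; intermediate values give the skewed, recency-weighted input that a learning model could then be trained on.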