Tuesday, November 3, 2020

Some thoughts on Fourtassi et al. 2020

It’s really nice to see a computational cognitive model both (i) capture previously-observed human behavior (here, very young children in a specific word-learning experimental task), and (ii) make new, testable predictions that the authors then test in order to validate the developmental theory implemented in the model. What’s particularly nice (in my opinion) about the specific new prediction made here is that it seems so intuitive in hindsight -- of *course* noisiness in the representation of the referent (here: how distinct the objects are from each other) could impact the downstream behavior being measured, since that representation matters for generating the behavior. But it sure wasn’t obvious to me before seeing the model, and I was fairly familiar with this particular debate and set of studies. That’s the thing about good insights, though -- they’re often obvious in hindsight, but you don’t notice them until someone explicitly points them out. So, this computational cognitive model, by concretely implementing the different factors that lead to the behavior being measured, highlighted that there’s a new factor that should be considered to explain children’s non-adult-like behavior. (Yay, modeling!)
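
To make that intuition concrete for myself, here’s a toy sketch (mine, not F&al2020’s actual model): a learner gets a noisy percept of the label and a noisy percept of the referent, and an ideal observer combines the two to decide which label-referent pair occurred. Even with sound noise held fixed, making the referents less distinct (more visual noise) drags accuracy down on its own. The 1-D setup and all the numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def referent_choice_accuracy(sound_noise, visual_noise, n_trials=50_000):
    """Two labels sit at -1/+1 on a 1-D 'sound' axis, two referents at -1/+1
    on a 1-D 'visual' axis, and label i goes with referent i. The learner
    combines both noisy percepts, weighted by their reliability, and picks
    the pair with the stronger joint evidence."""
    target = rng.integers(0, 2, n_trials)        # which label/referent pair occurred
    mu = 2.0 * target - 1.0                      # its true position: -1 or +1
    heard = mu + rng.normal(0.0, sound_noise, n_trials)
    seen = mu + rng.normal(0.0, visual_noise, n_trials)
    # Ideal-observer cue combination: each percept weighted by 1/variance
    evidence = heard / sound_noise**2 + seen / visual_noise**2
    return ((evidence > 0).astype(int) == target).mean()

# Sound noise fixed at 1.0; only referent distinctiveness changes:
for vis in (0.5, 1.0, 2.0, 4.0):
    print(f"visual noise {vis}: accuracy {referent_choice_accuracy(1.0, vis):.2f}")
```

Accuracy falls from about 0.99 to about 0.85 as the referents get fuzzier, even though nothing about the sounds changed -- which is exactly the “of course, in hindsight” point.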


Other thoughts:

(1) Qualitative vs. quantitative developmental change: It certainly seems difficult (currently) to capture qualitative change in computational cognitive models. One of the biggest issues is how to capture qualitative “conceptual” change in, say, a Bayesian model of development. At the moment, the best approach I’m aware of is to implement models that differ qualitatively from each other and then do model comparison to see which best captures child behavior. But that gives us snapshots of the child’s state, not an account of how qualitative change happens. Ideally, what we’d like is a way to define building blocks that allow us to construct “novel” hypotheses from their combination...but then qualitative change is about adding a completely new building block. And where does that come from?
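
A toy way to see the worry (my illustration, nothing from the paper): if word-meaning hypotheses are built compositionally from a fixed set of primitive features, then development inside the space -- re-weighting or re-combining blocks -- is quantitative. A genuinely new primitive enlarges the hypothesis space itself, and nothing inside the system generates it.

```python
from itertools import combinations

def hypothesis_space(primitives):
    """All single-feature and two-feature conjunctive word-meaning
    hypotheses constructible from a fixed set of primitive features."""
    singles = [{p} for p in primitives]
    pairs = [set(c) for c in combinations(primitives, 2)]
    return singles + pairs

print(len(hypothesis_space({"shape", "color", "texture"})))              # 6 hypotheses
print(len(hypothesis_space({"shape", "color", "texture", "function"})))  # 10 -- but where did "function" come from?
```

The model comparison move lets us ask which space best fits the child at a given moment; it just doesn’t tell us how the child got a fourth primitive.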


Relatedly, having continuous change (“quantitative development”) is certainly in line with the Continuity Hypothesis in developmental linguistics. Under that hypothesis, kids are just navigating through pre-defined options (that adult languages happen to use), rather than positing completely new options (which would be a discontinuous, qualitative change). 



(2) Model implementation: F&al2020 assume an unambiguous 1-1 mapping between concepts and labels, meaning that the child has learned these mappings completely correctly in the experimental setup. Given the age of the original children (14 months, and actually 8 months too), this seems like a simplification. But it’s not an unreasonable one -- importantly, if the behavioral effects can be captured without making the model more complicated, then that’s good to know. It means that this assumption about how well children learn the labels and mappings in the experimental setup isn’t among the main things that matter.
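
In code form, the simplification is just a deterministic lookup (the labels and objects below are made up, not the actual stimuli):

```python
# The assumed lexicon: a fully learned, unambiguous 1-1 map.
lexicon = {"lif": "objectA", "neem": "objectB"}

# A richer model might instead track P(object | label), learned imperfectly
# during habituation, with its own uncertainty parameter -- exactly the kind
# of added complexity the results suggest isn't needed here.
```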


(3) Model validation with kids and adults: Of course we can quibble with the developmental difference between a 4-year-old and a 14-month-old when it comes to their perception of the sounds that make up words and referent distinctiveness. But as a starting proof of concept to show that visual salience matters, I think this is a reasonable first step. A great follow-up is to actually run the experiment with 14-month-olds and vary the visual salience in just the same way, as alluded to in the general discussion.


(4) Figure 6: Model 2 (sound fuzziness = visual referent fuzziness) is pretty good at matching kids and adults, but Model 3 (sound fuzziness isn’t the same amount as visual referent fuzziness) is a little better. I wonder, though: is Model 3 enough better to justify its additional model complexity? Model 2 accounting for 0.96 of the variance seems pretty darned good.
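
One standard way to ask the “enough better?” question is an information criterion that charges each extra free parameter. Here’s a hedged sketch, assuming Gaussian fit errors; the only number from the paper is Model 2’s 0.96, and everything else (the n, Model 3’s R², even whether AIC is the right criterion here) is invented for illustration:

```python
import numpy as np

def aic_from_r2(r2, n, k):
    """AIC (up to an additive constant) for a least-squares fit with Gaussian
    errors. n = number of fitted data points, k = number of free parameters.
    RSS is computed assuming unit total variance; rescaling RSS by a common
    factor shifts both models' AICs equally, so comparisons are unaffected."""
    rss = (1.0 - r2) * n
    return n * np.log(rss / n) + 2 * k

n = 24                                 # hypothetical number of fitted data points
print(aic_from_r2(0.96, n, k=1))       # Model 2: one shared fuzziness parameter
print(aic_from_r2(0.97, n, k=2))       # Model 3: separate parameters (0.97 is invented)
```

Lower AIC wins. With these invented numbers Model 3 happens to come out ahead, but the point is the 2-points-per-parameter penalty: Model 3’s fit has to improve appreciably, not just at all -- which is exactly the quibble.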


So, suppose we say that Model 2 is actually the best, once we take model complexity into account. The implication is interesting -- perceptual fuzziness, broadly construed, is what’s going on, whether that fuzziness is over auditory stimuli or visual stimuli (or over categorizations based on those auditory and visual stimuli, like phonetic categories and object categories). This contrasts with domain-specific fuzziness, where auditory stimuli have their fuzziness and visual stimuli have a different fuzziness (i.e., Model 3). So, if this is what’s happening, would this be more in line with some common underlying factor that feeds into perception, like memory or attention?
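
In terms of the little sketch from earlier (reusing referent_choice_accuracy from above), the contrast is just a parameter-tying choice:

```python
# Model 2-style: one perceptual noise value, tied across modalities --
# suggestive of a common upstream factor (memory? attention?).
acc_shared = referent_choice_accuracy(sound_noise=1.5, visual_noise=1.5)

# Model 3-style: each modality gets its own noise value, free to differ.
acc_separate = referent_choice_accuracy(sound_noise=1.0, visual_noise=2.0)
```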


F&al2020 are very careful to note that their model doesn’t say why the fuzziness goes away, just that it goes away as kids get older. But I wonder...


(5) On minimal pairs for learning: I think another takeaway of this paper is that minimal pairs in visual stimuli -- just like minimal pairs in auditory stimuli -- are unlikely to be helpful for young learners. This is because young kids may miss that there are two distinct things (word forms or visual referents) that need to be discriminated -- that is, mapped to different meanings (for the word forms) or different labels (for the visual referents). Potential practical advice with babies: Don’t try to point out tiny contrasts (auditory or visual) to make your point that two things are different. That’ll work better for adults (and older children).


(6) A subtle point that I really appreciated being walked through: F&al2020 note that just because their model predicts that kids have higher sound uncertainty than adults doesn’t mean their model goes against previous accounts showing that children are good at encoding fine phonetic detail. Instead, the issue may be about what kids think is a categorical distinction (i.e., how kids choose to view that fine phonetic detail) -- so, the sound uncertainty could be due to downstream processing of phonetic detail that’s been encoded just fine.
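
A toy version of that distinction (mine, not the paper’s): hold the encoded acoustic value fixed and perfectly detailed, and only widen the listener’s phonetic categories. The categorical judgment changes anyway.

```python
from scipy.stats import norm

percept = 0.4                     # fine phonetic detail, encoded perfectly
for category_sd in (0.2, 1.0):    # narrow adult-like vs. wide child-like categories
    p_b = norm.pdf(percept, loc=0.0, scale=category_sd)   # category /b/ centered at 0
    p_p = norm.pdf(percept, loc=1.0, scale=category_sd)   # category /p/ centered at 1
    print(f"category sd {category_sd}: P(/b/ | percept) = {p_b / (p_b + p_p):.2f}")
```

With narrow categories, P(/b/) comes out around 0.92; with wide ones it’s near chance (about 0.52) -- same encoding, different downstream categorization.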

