I’m very sympathetic to the difficulties of creating experimental stimuli (like artificial languages) that don’t idealize away from important aspects of actual language data. So, LR&A2019’s main point about the importance of ecologically valid stimuli is certainly one I can get behind. That said, the trick is figuring out what we want the experiment to tell us -- if we’re interested in children’s ability to use, say, statistical cues alone for segmentation (in the absence of any other information) just to show children have this ability, then we specifically don’t want ecologically valid stimuli.
LR&A2019’s main point about the utility of higher entropy for language acquisition tasks like segmentation and object-label mapping is also one I’m sympathetic to. I’m just less clear on how this relates to what (I thought) we already knew about children’s language acquisition abilities. For instance, if children are sensitive to entropy, doesn’t this just mean that children can tell the difference between probability distributions of different types, like uniform vs. somewhat skewed vs. highly skewed? (So, I thought we already knew that.) For example, I’m thinking of some of the work on how children (vs. adults) respond to inconsistent input (work by Hudson Kam and by Newport), where what varies is the exact probability distribution. It’s possible I’m missing something more subtle about entropy and information rates, which is touched on in the discussion near the end.
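To make that concrete (a quick sketch of my own, not anything from LR&A2019), Shannon entropy is a single number that separates exactly these distribution types:

    from math import log2

    def shannon_entropy(probs):
        # Shannon entropy in bits: -sum(p * log2(p)) over outcomes with p > 0
        return -sum(p * log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # uniform: 2.0 bits
    print(shannon_entropy([0.55, 0.15, 0.15, 0.15]))  # somewhat skewed: ~1.71 bits
    print(shannon_entropy([0.85, 0.05, 0.05, 0.05]))  # highly skewed: ~0.85 bits

So sensitivity to entropy, at least in this coarse sense, looks a lot like sensitivity to how skewed the distribution is.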
Some other thoughts:
(1) What we can conclude about early native language acquisition from studies with 10-year-olds: I’m always hesitant to conclude anything about early stages of acquisition (here, tasks that start happening before the child is a year old) from studies conducted on older participants. Often it’s a good way to start, in order to get a developmental trajectory of whatever it is we’re studying or provide a proof of experimental concept. But, for example, it’s tricky to conclude something about infant abilities from the performance of 10-year-olds. LR&A2019 do note that they intend to test younger children (7-year-olds, I believe, given their previous work). But even then, I don’t quite know how to extrapolate from 7-year-olds to infants.
(2) Something that comes to mind when considering the specific stimuli setup LR&A2019 went with: the work on how children of different ages vs. adults respond to input with highly skewed vs. less skewed distributions seems really important for comparison purposes. I’m thinking of work by Hudson Kam and Newport, where the generalizations participants make differ when the input is something like 90-5-5 vs. 60-30-10 vs. other splits. So, the fact that LR&A2019 have a super-frequent option and the rest evenly infrequent (80-7-7-7) might yield different results than other sorts of skews would.
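Just to put numbers on that (my own quick arithmetic, not from any of these papers):

    # Same entropy helper as in the sketch above, repeated so this runs on its own:
    from math import log2

    def shannon_entropy(probs):
        return -sum(p * log2(p) for p in probs if p > 0)

    # Normalize each split's counts to probabilities, then compare entropies:
    for name, counts in [("90-5-5", [90, 5, 5]),
                         ("60-30-10", [60, 30, 10]),
                         ("80-7-7-7", [80, 7, 7, 7])]:
        total = sum(counts)
        print(name, round(shannon_entropy([c / total for c in counts]), 2))

    # 90-5-5:   ~0.57 bits
    # 60-30-10: ~1.3 bits
    # 80-7-7-7: ~1.07 bits

Interestingly (by my arithmetic, anyway), 80-7-7-7 sits between the two Hudson Kam/Newport-style splits in entropy terms, even though it has one more option -- so “more skewed” and “lower entropy” don’t line up in a simple way across splits with different numbers of options.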
Related materials question: Not that I have particular expectations about this mattering, but why not make the exposure in minutes the same for the two different entropy conditions? One could reasonably argue that better performance happened for the condition kids heard for longer (even if they heard certain word forms less frequently -- they still had more time on the task). And it doesn’t seem that difficult to create an 80-7-7-7 split for the low entropy condition that lasts the same amount of time as the high entropy condition.
(3) The general scaffolding story that LR&A2019 put forth in the discussion about why higher entropy is helpful makes good sense to me. There’s a bunch of infant segmentation work showing that anchor words (e.g., already-familiar words) facilitate segmentation of other words. So, if the kids in the high entropy condition can segment the frequent word, they then have a familiar word they can use to segment the other words. And once segmentation is off to a good start, they have a solid set of labels available for object-label mapping. So, this study would be additional supportive evidence for scaffolding in these two particular tasks.