This paper really highlights to me the impact of pragmatic factors on children’s interpretations, something that I think we have a variety of theories about but maybe not as many formal implementations of (hello, RSA framework potential!). Also, I’m a fan of the idea of the Semantic Subset, though not as a linguistic principle, per se. I think it could just as easily be the consequence of Bayesian reasoning applied over a linguistically-derived hypothesis space. But the idea that information strength matters is one that seems right to me, given what we know about children’s sensitivities to how we use language to communicate.
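(To make that Bayesian alternative a bit more concrete, here’s a toy sketch, entirely my own and not anything from M&C2014: if candidate interpretations are hypotheses whose extensions are nested sets of situations, a size-principle likelihood already favors the stronger, subset interpretation whenever the observed uses are compatible with both. The situation labels and numbers below are made up purely for illustration.)

# Toy sketch (mine, not from M&C2014): a Semantic Subset-style preference
# falling out of Bayesian reasoning with a size-principle likelihood.
# Hypotheses are candidate interpretations, modeled as the sets of situations
# in which the ambiguous utterance would be true.

hypotheses = {
    "strong (subset) reading": {"s1", "s2"},
    "weak (superset) reading": {"s1", "s2", "s3", "s4", "s5", "s6"},
}

def posterior(observed_situations, hypotheses):
    """P(reading | observed uses): uniform prior, size-principle likelihood
    (each observed situation sampled uniformly from the reading's extension)."""
    prior = 1.0 / len(hypotheses)
    unnormalized = {}
    for reading, extension in hypotheses.items():
        if all(s in extension for s in observed_situations):
            likelihood = (1.0 / len(extension)) ** len(observed_situations)
        else:
            likelihood = 0.0
        unnormalized[reading] = prior * likelihood
    total = sum(unnormalized.values()) or 1.0
    return {reading: p / total for reading, p in unnormalized.items()}

# Uses compatible with BOTH readings still favor the stronger one,
# and the preference sharpens with more data.
print(posterior(["s1", "s2"], hypotheses))
print(posterior(["s1", "s2", "s1", "s2"], hypotheses))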
That being said, I’m not quite sure how to interpret the specific results here (more details on this below). Something that becomes immediately clear from the learnability discussions in M&C2014 is the need for corpus analysis to get an accurate assessment of what children’s input looks like for all of these semantic elements and their various combinations.
Specific thoughts:
(1) Epistemic modals and scope
John might not come vs. John cannot come: I get that might not is less certain than cannot, and so the entailment relations hold. But the scope assignment ambiguity within a single utterance seems super subtle.
(a) Ex: John might not come.
surface: might >> not: It might be the case that John doesn’t come. (Translation: There’s some non-zero probability of John not coming.)
inverse: not >> might: It’s not the case that John might come. (Translation: There’s 0% probability that John might come. = John’s definitely not coming.)
Even though the inverse scope option is technically available, do we actually ever entertain that interpretation in English? It feels more to me like “not all” utterances (ex: “Not all horses jumped over the fence”) — technically the inverse scope reading is there (all >> not = “none”), but in practice it’s effectively unambiguous in use (always interpreted as “not all”).
(b) Ex: John cannot come.
surface: can >> not: It can be the case that John doesn’t come. (Translation: There’s some non-zero probability of John not coming.)
inverse: not >> can: It’s not the case that John can come. (Translation: There’s 0% probability that John can come. = John’s definitely not coming.)
Here, we get the opposite feeling about how can is used. It seems like the inverse scope is the only interpretation entertained. (And I think M&C2014 effectively say this in the “Modality and Negation in Child Language” section, when they’re discussing how can’t and might not are used in English.)
I guess the point for M&C2014 is that this is the salient difference between might not and cannot. It’s not surface word order, since that’s the same. Instead, the strongly preferred interpretation differs depending on the modal, and it’s not always the surface scope reading. This is what they discuss as a polarity restriction in the introduction, I think. (Though they talk about might allowing both readings, and I just can’t get the inverse scope one.)
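Just to convince myself about the strength relations in (a) and (b), here’s a throwaway possible-worlds check. This is my own sketch, not the paper’s formalism, and I’m collapsing might and can into one existential epistemic modal (true iff some accessible world makes the embedded proposition true):

# Throwaway check (my own sketch, not the paper's formalism).

def possibly(p, accessible_worlds):
    # existential epistemic modal: some accessible world makes p true
    return any(p(w) for w in accessible_worlds)

def john_comes(world):
    return world["john_comes"]

def surface_reading(worlds):
    # modal >> not: "It might/can be the case that John doesn't come."
    return possibly(lambda w: not john_comes(w), worlds)

def inverse_reading(worlds):
    # not >> modal: "It's not the case that John might/can come."
    return not possibly(john_comes, worlds)

# Two epistemic states: one where John's coming is still open,
# one where he's definitely not coming.
states = {
    "open": [{"john_comes": True}, {"john_comes": False}],
    "settled: not coming": [{"john_comes": False}],
}

for label, worlds in states.items():
    print(label, "| surface (modal >> not):", surface_reading(worlds),
          "| inverse (not >> modal):", inverse_reading(worlds))
# The inverse reading is true only in the "settled" state, while the surface
# reading is true in both: the inverse (cannot-style) reading is strictly stronger.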
(2) Epistemic modals, negation, and input: Just from an input perspective, I wonder how often English children hear can’t vs. cannot (and then we can compare that to mightn’t vs. might not). My sense is that can’t is much more frequent than cannot, and might not is much more frequent than mightn’t. One possible learning story component: The reason we have a different favored interpretation for cannot is that we first encounter it as a single lexical item can’t, and so treat it differently than an item like might where we overtly recognize two distinct lexical elements, might and not. Beyond this, assuming children are sensitive to meaning (especially by five years old), I wonder how often they hear can’t (or cannot) used to effectively mean “definitely not” (favored/only interpretation for cannot) vs. might not used to mean “possibly not” (favored/only interpretation for might not).
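If anyone wants to actually run this input check, the first pass I have in mind is something like the sketch below. The file name and toy sentence are hypothetical; a real version would pull caregiver utterances from CHILDES and be more careful about tokenization:

# Back-of-the-envelope frequency check (sketch only; "transcripts.txt" is a
# hypothetical file of caregiver utterances, e.g., pulled from CHILDES).
import re
from collections import Counter

FORMS = ["can't", "cannot", "can not", "mightn't", "might not"]

def count_modal_negation_forms(text):
    """Count each modal+negation form, case-insensitively, on word boundaries."""
    text = text.lower().replace("\u2019", "'")  # normalize curly apostrophes
    return Counter({form: len(re.findall(r"\b" + re.escape(form) + r"\b", text))
                    for form in FORMS})

# Hypothetical usage:
# with open("transcripts.txt") as f:
#     print(count_modal_negation_forms(f.read()))
print(count_modal_negation_forms("You can't do that! It might not fit. She cannot reach it."))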
(3) Conversational usage:
(a) Byrnes & Duff 1989: Five-year-olds don’t seem to distinguish between “The peanut can’t be under the cup” and “The peanut might not be under the box” when determining the location of the peanut. I wonder how adults did on this task. Basically, it’s a bit odd information-wise to get both statements in a single conversation. As an adult, I had to do a bit of meta-linguistic reasoning to interpret this: “Well, if it might not be under the box, that’s better than ‘can’t’ be under the cup, so it’s more likely to be under the box than the cup. But maybe it’s not under the box at all, because the speaker is expressing doubt that it’s under there.” In a way, it reminds me of some of the findings of Lewis et al. (2012) on children’s interpretations of false belief task utterances as literal statements of belief vs. parenthetical endorsements. (Ex: “Hoggle thinks Sarah is the thief”: literal statement of belief = this is about whether Hoggle is thinking something; parenthetical endorsement: there’s some probability (according to Hoggle) that Sarah is the thief.) Kids hear these kind of statements as parenthetical endorsements way more than they hear them as literal statements of belief in day-to-day conversation, and so interpret them as parenthetical endorsements in false belief tasks. That is, kids are assuming this is a normal conversation and interpreting the statements as they would be used in normal conversation.
Lewis, S., Lidz, J., & Hacquard, V. (2012). The semantics and pragmatics of belief reports in preschoolers. Semantics and Linguistic Theory, 22, 247–267.
(b) Similarly, in Experiment 1, I wonder again about conversational usage. In the discussion of children’s responses to the Negative Weak true items like “There might not be a cow in the box” (might >> not: It’s possible there isn’t a cow), many children apparently responded False because “A cow might be in the box.” Conversationally, this seems like a perfectly legitimate response. The tricky part is whether the original assertion is false, per se, rather than simply not the best utterance to have selected for this scenario.
(4) The hidden “only” hypothesis:
In Experiment 1, M&C2014 found on the Positive True statements (“There is a cow in the box” with the child peeking to see if it’s true) that children were only at ~51.5% accuracy for being right. This is weirdly low, as M&C2014 note. They discuss this as having to do with the particle “also”, suggesting a link to the “only” interpretation, i.e., children were interpreting this as “There is only a cow in the box.” (Side note: M&C2014 talk about this as “There might only be a cow in the box.”, which is odd. I thought the Positive and Negative sentences were just the bare “There is/isn’t an X in the box.”) Anyway, they designed Experiment 2 to address this specific weirdness, which is nice.
In Experiment 2 though, there seems to me to be a potential weirdness with statements like “There might not be only a cow in the box”. Only has its own scopal impacts, doesn’t it? Even if might takes scope over the rest, we still have might >> not >> only (= “It’s possible that it’s not the case there’s only a cow.” = There may be a cow and something else (as discussed later on in examples 44 and 45) = infelicitous in this setup where you can only have one animal = unpredictable behavior from kids). Another interpretation option is might >> only >> not (= “It’s possible that it’s only the case that it’s not a cow.” = may be not-a-cow (and instead be something else) = must be a horse in this setup = desired behavior from kids).
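Here’s the toy check I did to keep those two orderings straight. This is my own formalization, not the paper’s: I let the presupposition of only project under negation, so “not only a cow” requires a cow plus something else (per the 44/45-style glosses), and I build in the task’s one-animal-per-box constraint:

# Toy check (my own formalization, not M&C2014's) of the two scope orderings
# for "There might not be only a cow in the box".

# Possible box contents before peeking: exactly one animal, cow or horse.
accessible_worlds = [{"cow"}, {"horse"}]

def might(p, worlds):
    # epistemic possibility: true in some accessible world
    return any(p(w) for w in worlds)

def not_only_cow(box):
    # "not only a cow" with the presupposition projecting:
    # there's a cow AND something else.
    return "cow" in box and box != {"cow"}

def only_not_cow(box):
    # "only [not a cow]": the box holds something, and none of it is a cow.
    return len(box) > 0 and "cow" not in box

# might >> not >> only: possible that there's a cow plus something else.
print("might >> not >> only:", might(not_only_cow, accessible_worlds))  # False: impossible with one animal
# might >> only >> not: possible that the box holds something that isn't a cow.
print("might >> only >> not:", might(only_not_cow, accessible_worlds))  # True: it could be the horse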
We then find that children in Experiment 2 decrease acceptance of Negative Weak True statements like “There might not be a cow in the box” to 33.3%. So, going with the hidden only story, they’re interpreting this as “It’s not the case that there might be (only) a cow in the box.” Again, we get infelicity if not >> only since there can only be one animal in the box at a time. But this could either be because of the interpretation above (not >> might >> only) or because of the interpretation might >> not >> only (which is the interpretation that follows surface scope, i.e., not reconstructed). So it’s not clear to me what this rejection by children means.
(5) Discussion clarification: What’s the difference between example 46 = “It is not possible that a cow is in the box” and example 48 = “It is not possible that there is a cow [is] in the box”? Do these not mean the same thing? And I’m afraid I didn’t follow the paragraph after these examples at all, in terms of its discussion of how many situations one vs. the other is true in.
(6) Semantic Subset Principle (SSP) selectivity: It’s interesting to note that M&C2014 say the SSP is only invoked when there are polarity restrictions due to a lexical parameter. So, this is why M&C2014 say it doesn’t apply when the quantifier every is involved (in response to Musolino 2006). This then presupposes that children need to know which words have a lexical parameter related to polarity restrictions and which don’t. How would they know this? Is the idea that they just know that some meanings (like quantifier every) don’t get them while others (like quantifier some) do? Is this triggered/inferrable from the input in some way?