Monday, November 2, 2015

Some thoughts on Pietroski (2015, in press)

One of the things that stood out most to me from this article is the importance of the link between structured sequences and intended meanings (e.g., with the eager/easy to please examples). Pietroski is very clear about this point (which makes sense, as it was one of the main criticisms of the Perfors et al. 2011 work that attempted to investigate poverty of the stimulus for the canonical example of complex yes/no questions). Anyway, the idea that comes through is that it’s not enough to deal with surface strings alone. Presumably it becomes more acceptable if the strings also include latent structure, though, like traces? (Ex: John is easy to please __(John) vs. John is eager __(John) to please.) At that point, some of the meaning is represented in the string directly.

I’m not sure how many syntactic acquisition models deal with the integration of this kind of meaning information, though. For example, my islands model with Jon Sprouse (Pearl & Sprouse 2013) used latent phrasal structure (IP, VP, CP, etc.) to augment the learner’s representation of the input, but was still just trying to assign acceptability (= probability) to structures irrespective of the meanings they had. That is, no meaning component was included. Of course, this is why we focused on islands that were supposed to be solely “syntactic”, unlike, for instance, factive islands, which are thought to incorporate semantic components. (Quickie factive island example: *Who do you forget likes this book? vs. Who do you believe likes this book?) Is our approach an exceptional case, though? That is, is it never appropriate to worry only about the “formatives” (i.e., the structures in the absence of interpretation)? For instance, what if we think of the learning problem as trying to decide which formative is the appropriate way to express a particular interpretation — isn’t identifying the correct formative alone sufficient in this case? Concrete example: preferring “Was the hiker who was lost killed in the fire?” over “*Was the hiker who lost was killed in the fire?” with the interpretation “The hiker who was lost was killed in the fire [ask this]”.
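The formatives-only flavor of that kind of model can be sketched roughly as follows. This is a toy illustration, not the actual Pearl & Sprouse 2013 implementation: the container-node paths, the smoothing constant, and the "CP-null" label are all invented for the example. The point is just that acceptability falls out of probabilities over structural sequences, with no meaning component anywhere.

```python
from collections import defaultdict

# Toy "corpus" of container-node paths for wh-dependencies
# (sequences invented for illustration).
corpus = [
    ["start", "IP", "end"],
    ["start", "IP", "VP", "end"],
    ["start", "IP", "VP", "CP", "IP", "end"],
    ["start", "IP", "VP", "IP", "end"],
]

def trigram_counts(paths):
    """Count container-node trigrams and their two-node contexts."""
    counts = defaultdict(int)
    context = defaultdict(int)
    for path in paths:
        for i in range(len(path) - 2):
            tri = tuple(path[i:i + 3])
            counts[tri] += 1
            context[tri[:2]] += 1
    return counts, context

def score(path, counts, context, alpha=0.5, vocab=10):
    """Acceptability (= probability) of a dependency: the product of
    smoothed trigram probabilities over its container-node path.
    Note: structures only -- no interpretation is consulted."""
    p = 1.0
    for i in range(len(path) - 2):
        tri = tuple(path[i:i + 3])
        p *= (counts[tri] + alpha) / (context[tri[:2]] + alpha * vocab)
    return p

counts, context = trigram_counts(corpus)
# An attested path outscores an unattested (island-crossing) one.
good = score(["start", "IP", "VP", "end"], counts, context)
bad = score(["start", "IP", "VP", "CP-null", "IP", "end"], counts, context)
```

Here the "bad" path gets a low score purely because its trigrams are unattested in the corpus, which is exactly the sense in which such a learner never needs to know what the dependency means.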

Some other thoughts:

(1) My interpretation of the opening quote is that acquisition models (as theories of language learning and/or grammar construction) matter for theories of language representation because they facilitate the clear formulation of deeper representational questions. (Presumably by highlighting more concretely what works and doesn’t work from a learning perspective?) As an acquisition chick who cares about representation, this makes me happy.


(2) For me, the discussion about children’s “vocabulary” that allows them to go from “parochial courses of human experience to particular languages” is another way of talking about the filters children have on how they perceive the input and the inductive biases they have on their hypothesis spaces. This makes perfect sense to me, though I wouldn’t have made the link to the term “vocabulary” before this. Relatedly, the gruesome example walkthrough really highlights for me the importance of inductive biases in the hypothesis space. For example, take the assumption of constancy w.r.t. time for what (most) words mean (so we never get a meaning like “green before time t and blue after time t”, even though it’s logically possible given the bits we build meanings out of). So we get that more exotic example, which gets followed up with more familiar linguistic examples that help drive the point home.

References:
Pearl, L., & Sprouse, J. (2013). Syntactic islands and learning biases: Combining experimental syntax and computational modeling to investigate the language acquisition problem. Language Acquisition, 20(1), 23-68.

Perfors, A., Tenenbaum, J. B., & Regier, T. (2011). The learnability of abstract syntactic principles. Cognition, 118(3), 306-338.


