Wednesday, January 26, 2011

Next time: Hruschka et al. (2009) & Lightfoot (2010)

Thanks to all who were able to join us this time for a vigorous discussion of Lidz (2010)! Next time on Feb 9, we'll be discussing two shorter articles dealing with models of language change, and how they (might) inform us about properties of language acquisition:

Hruschka et al. (2009)

Lightfoot (2010)

Monday, January 24, 2011

Thoughts on Lidz (2010)

Something that was extremely interesting to me were the linguistic phenomena that Lidz talked about in the later sections of the review - these are things that really look hard to learn from the data, and yet which children (and sometimes extremely young children) seem to have very clear intuitions about. These are far more complex than the standard poverty-of-the-stimulus examples that are often tackled in the current modeling literature, like structure dependence and anaphoric one. They also make clear what domain-specific information that doesn't seem derivable from the input looks like, e.g., the connections between the observable features in the data and the underlying linguistic objects that express these features. Lidz clearly believes that there's a place for statistical learning (and in fact, a great need for statistical learning) even in a world that has Universal Grammar, since the learner has to track various input features. What Universal Grammar does is provide something like an attentional focus - which features are significant, etc. Moreover, UG also tells the learner how these features connect to the abstract entities, rather than just saying "hey, look at these features - they're probably important for something" (unless we think it's obvious how some of the features map to the abstract structural knowledge).

Some more targeted thoughts:

  • The way Lidz used analysis by synthesis on p.203 seemed much more in line with what I thought it was than some of the definitions of AxS we saw in last time's reading.

  • I thought the 18-month-old constituent knowledge was interesting, but it wasn't as convincing to me as some of the later examples, since infants at that age do have experience with constituents moving in their native language - so it's not like they've never seen constituents moving before. In contrast, the interpretations connected with the Spanish & Kannada dative alternations are not something children seemed to encounter very much at all in the input. The same goes for the generic vs. existential interpretations of bare nouns.

Wednesday, January 12, 2011

Next time: Lidz (2010)

Thanks to all who were able to join our vigorous discussion of analysis-by-synthesis at this past meeting of the reading group. Next time, we'll have a look at a recent article by Lidz (2010) (which can be found here) that provides a nativist perspective on statistical learning in language acquisition.

Monday, January 10, 2011

Thoughts on Bever & Poeppel (2010)

So this is definitely a bit of a topic shift from our previous readings, but I think it's very useful to see how the analysis-by-synthesis (AxS in Bever & Poeppel's terms) idea is being thought about by people who care very much about the biological underpinnings of language (they are writing for Biolinguistics, after all). The part that spoke most to me was the syntactic section - for one thing, the demonstration of how difficult the mapping can be from syntactic form to meaning (example (4)) was very clear. A main question that arose was whether grammatical derivations are computed as part of the processes of comprehension - this has strong connections to syntactic theory, which views grammatical derivation as the basic operation that has to happen.

Something else of note, which I'd like to think more about, is how to connect the description of AxS that they have here to how we've been thinking about it in the Bayesian models we've looked at already. It seemed like some of the distinctions B&P were making were almost orthogonal to how I normally think of a Bayesian model. For instance, in example 9, they deal with the distinction between habits accumulated via induction over experiences (which they seem to liken to pattern recognition) and novel computation (which is presumably the generative part). Mapping this to the Bayesian reasoning we've been looking at, the habits accumulated from induction map pretty well onto general inference from data; would the novel computation then be what happens when the Bayesian learner tries to deal with novel data? But B&P seem to be thinking of this as a process (or processes) that apply during language comprehension, so is all of it happening when the novel data is encountered during comprehension?

Later, in section 7, where they move to AxS in visual perception (and in particular visual object recognition), they again seem to have the division into the quick heuristic/pattern recognition and the slower computation. Would we view this as a two-step inference process when the novel data point is encountered? Step 1: update prior based on quick heuristic ==> new posterior; Step 2: update new posterior based on slower computation ==> newer posterior. The same sort of question would apply when we talk about AxS for sentential meaning (section 8.3): the quick heuristic is the literal meaning, while the slower computation is the pragmatic knowledge.
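This two-step picture can be made concrete with a toy sequential Bayesian update. The sketch below is purely illustrative - the hypotheses, likelihood values, and the "heuristic vs. slower computation" split are my assumptions, not anything B&P specify - but it shows the mechanics of updating a posterior twice in a row, once per evidence source:

```python
# Toy two-stage Bayesian update (hypothetical example, not from B&P):
# the same update rule applied twice in sequence - first with a "quick
# heuristic" likelihood, then with a "slower computation" likelihood.

def update(prior, likelihood):
    """Return the normalized posterior: P(h|d) proportional to P(d|h) * P(h)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two competing hypotheses about an ambiguous input (labels are invented).
prior = {"canonical": 0.5, "noncanonical": 0.5}

# Step 1: the quick heuristic (e.g., surface pattern match) favors "canonical".
heuristic_likelihood = {"canonical": 0.8, "noncanonical": 0.2}
posterior1 = update(prior, heuristic_likelihood)

# Step 2: the slower computation (e.g., a full derivation) favors "noncanonical",
# and it updates the intermediate posterior, not the original prior.
slow_likelihood = {"canonical": 0.3, "noncanonical": 0.7}
posterior2 = update(posterior1, slow_likelihood)

print(posterior1)  # intermediate posterior after the heuristic
print(posterior2)  # final posterior after the slower computation
```

One design point worth noting: because Bayesian updating is just repeated multiplication and renormalization, the two-step process gives the same final posterior as a single update with both likelihoods combined - so the interesting empirical question is about timing (the heuristic answer being available first), not about the final answer.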

They do touch on applying this to acquisition at the very end of the article - how the child builds up statistical generalizations over time, with the example of learning the canonical sentence form. The interesting part is that the novel computation is triggered by noticing that there's not a one-to-one mapping from form to meaning all the time. Does this make any clear acquisition trajectory predictions? It seems like it might have something to say about children's understanding of this mapping, and how many non-canonical forms they've seen. But maybe something more concrete can be said?

Monday, January 3, 2011

Meetings for Winter 2011 Quarter

After looking at people's availability, it looks like the best time to meet for this quarter is, somewhat surprisingly, the same time as last quarter: Wednesdays, from 1:15pm to 2:30pm in SBSG 2341. My apologies to anyone who is unable to make it at this time this quarter. Hopefully you'll be able to make it for next quarter. But, in the meantime, always remember that you're welcome to post on the discussion board here.

This quarter, we'll be looking at a couple of different topics, with the idea of getting a view of some of the current perspectives in the field:

Jan 12: Analysis by Synthesis (a common assumption in generative models)
Jan 26: Nativist views on statistical learning
Feb 9 & Feb 23: Language acquisition + Language change
Mar 9: Language evolution, with respect to syntax

The selected readings are posted on the home website's schedule page.

As always, feel free to let me know of any topics or particular readings you would be interested in for future CoLa reading group meetings.

Looking forward to seeing you on January 12!