Wednesday, October 23, 2013

Next time on 11/6/13 @ 2:30pm in SBSG 2221 = Marcus & Davis 2013

Thanks to everyone who was able to join us for our lively and informative discussion of Ambridge et al. (in press)! Next time on November 6 at 2:30pm in SBSG 2221, we'll be looking at an article that discusses how probabilistic models of higher-level cognition (including language) are used in cognitive science:

Marcus, G. & Davis, E. 2013. How Robust Are Probabilistic Models of Higher-Level Cognition? Psychological Science, published online Oct 1, 2013, doi:10.1177/095679761349541.

I would also strongly recommend a target article and commentary related to this topic that were written fairly recently:

Jones, M. & Love, B. 2011. Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34 (4), 169-188.

Chater, N., Goodman, N., Griffiths, T., Kemp, C., Oaksford, M., & Tenenbaum, J. 2011. The imaginary fundamentalists: The unshocking truth about Bayesian cognitive science. Behavioral and Brain Sciences, 34 (4), 194-196.

(Both target article and commentary are included in the pdf file linked above.)

See you then!

Monday, October 21, 2013

Some thoughts on Ambridge et al. in press

This article really hit home for me, since it talks about things I worry about a fair bit with respect to Universal Grammar and language learning in general -- so much so that I ended up writing a lot more about it than I typically do for the articles we read. Conveniently, this is a target article that's asking for commentaries, so I'm going to put some of my current thoughts here as a sort of teaser for the commentary I plan to submit.


The basic issue that the authors (AP&L) highlight about proposed learning strategies seems exactly right: What will actually work, and what exactly makes it work? They note that "…nothing is gained by positing components of innate knowledge that do not simplify the problem faced by language learners" (p.56, section 7.0), and this is absolutely true. To examine how well several current learning strategy proposals involving innate, linguistic knowledge actually work, AP&L present evidence from a commendable range of linguistic phenomena, from what might be considered fairly fundamental knowledge (e.g., grammatical categories) to fairly sophisticated knowledge (e.g., subjacency and binding). In each case, AP&L identify the shortcomings of some existing Universal Grammar (UG) proposals, and observe that these proposals don't seem to fare very well in realistic scenarios. The challenge at the very end underscores this -- AP&L contend (and I completely agree) that a learning strategy proposal involving innate knowledge needs to show "precisely how a particular type of innate knowledge would help children acquire X" (p.56, section 7.0).

More importantly, I believe this should be a metric that any component of a learning strategy is measured by. Namely, for any component (whether innate or derived, whether language-specific or domain-general), we need to not only propose that this component could help children learn some piece of linguistic knowledge but also demonstrate at least "one way that a child could do so" (p.57, section 7.0). To this end, I think it's important to highlight how computational modeling is well suited for doing precisely this: for any proposed component embedded in a learning strategy, modeling allows us to empirically test that strategy in a realistic learning scenario. It's my view that we should test all potential learning strategies, including the ones AP&L themselves propose as alternatives to the UG-based ones they find lacking. An additional and highly useful benefit of the computational modeling methodology is that it forces us to recognize hidden assumptions within our proposed learning strategies, a problem that AP&L rightly recognize with many existing proposals.
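To give a flavor of what "empirically testing a strategy" can look like in a model, here's a deliberately toy sketch -- entirely my own construction for this post, not a model from AP&L or from any specific proposal. A modeled learner weighs a narrow hypothesis against a broader one about which items a dependency allows, using the domain-general "size principle": data consistent with both hypotheses favor the narrower one, because the narrower one assigns each observed item higher probability. The hypothesis sizes and data here are made up purely for illustration.

```python
import random

# Toy sketch (hypothetical setup): the learner entertains two hypotheses
# about which items (coded 0-9) a linguistic generalization covers.
# H_narrow covers items 0..narrow-1; H_broad covers items 0..broad-1.

def likelihood(data, hypothesis_size):
    # Assume each observed item is sampled uniformly from the hypothesis's
    # set, so P(data | H) = (1/size)^n if every item falls inside H, else 0.
    if any(item >= hypothesis_size for item in data):
        return 0.0
    return (1.0 / hypothesis_size) ** len(data)

def posterior_narrow(data, narrow=5, broad=10):
    # With equal priors, P(H_narrow | data) is the narrow hypothesis's
    # share of the total likelihood (Bayes' rule, two hypotheses).
    l_narrow = likelihood(data, narrow)
    l_broad = likelihood(data, broad)
    total = l_narrow + l_broad
    return 0.0 if total == 0 else l_narrow / total

# Simulated input consistent with the narrow hypothesis: as more such data
# come in, the learner's belief in H_narrow climbs toward 1.
random.seed(1)
data = [random.randrange(5) for _ in range(10)]
print(round(posterior_narrow(data), 3))
```

The point of even a toy like this is that it forces every assumption into the open: what the hypothesis space is, how input is sampled, and what counts as "converging" -- exactly the hidden assumptions AP&L worry about in verbally stated proposals.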

This leads me to suggest certain criteria that any learning strategy should satisfy, relating to its utility in principle and practice, as well as its usability by children. Once we have a promising learning strategy that satisfies these criteria, we can then concern ourselves with the components comprising that strategy. With respect to this, I want to briefly discuss the type of components AP&L find unhelpful, since several of the components they would prefer might still be reasonably classified as UG components. The main issue they have is not with components that are innate and language-specific, but rather components of this kind that in addition involve very precise knowledge. This therefore does not rule out UG components that involve more general knowledge, including (again) the components AP&L themselves propose. In addition, AP&L ask for explicit examples of UG components that actually do work. I think a potentially UG component that's part of a successful learning strategy for syntactic islands (described in Pearl & Sprouse 2013) is a nice example of this: the bias to characterize wh-dependencies at a specific level of granularity. It's not obvious where this bias would come from (i.e., how it would be derived or what innate knowledge would lead to it), but it's crucial for the learning strategy it's a part of to work. As a bonus, that learning strategy also satisfies the criteria I suggest for evaluating learning strategies more generally (utility and usability).


Tuesday, October 1, 2013

Next time on 10/23/13 @ 2:30pm in SBSG 2221 = Ambridge et al. in press

It looks like the best collective time to meet will be Wednesdays at 2:30pm for this quarter, so that's what we'll plan on. Due to some of my own scheduling conflicts, our first meeting will be in a few weeks on October 23. Our complete schedule is available on the webpage.

On Oct 23, we'll be looking at an article that examines the utility of Universal Grammar based learning strategies in several different linguistic domains, arguing that they're not all that helpful at the moment:

Ambridge, B., Pine, J., & Lieven, E. In press. Child language acquisition: Why Universal Grammar doesn't help. Language.

See you then!