Wednesday, February 23, 2011

Next time: Progovac (2010)

Thanks to everyone who was able to join us for a spirited discussion of the linguistic evolution perspective in Chater & Christiansen (2010)! Next time on March 9th, we'll be reading a paper on linguistic evolution by Progovac (2010) that is likely to have a more nativist perspective:

Progovac (2010)

Note: We'll be meeting slightly earlier on March 9th - starting at 1pm, instead of 1:15pm. We'll use the same location as this past meeting: SSL 427.

Monday, February 21, 2011

Thoughts on Chater & Christiansen (2010)

I think one of the main things that stuck with me throughout this paper was the assumption made about UG: namely, that whatever is in it is, by definition, arbitrary. I'll certainly grant that this is one way the knowledge in UG has sometimes been characterized, but I'm not sure I agree that it's a necessary property. So, if we find out that innate, language-specific biases have their origins in, say, pragmatic/communication biases, does that make them not UG? To my mind, it doesn't. But I think it does for the authors - and that seems to be one of their main arguments for saying UG couldn't possibly have evolved. So they strike out arbitrary, innate, domain-specific knowledge - does that mean that all innate, domain-specific knowledge is ruled out? It seems like they want to claim that (and instead attribute everything to innate, domain-general processes), but I don't think I agree.

That being said, I do sympathize with the position that it seems more likely that language adapted to the brain, rather than the brain adapting to language. It seems reasonable to me that language change is too fast for the processes of genetic evolution to catch up with, i.e., language is too much of a "moving target".

A few things that also stuck with me:

p. 1135, where they say that aspects of language that are difficult to learn or process will be rapidly stamped out: Given this, should we expect all persistent gaps (e.g., gaps in Russian inflectional paradigms, or why you can't say "Who did you see who did that?" to mean "Who did you see do that?" in English, even though you can do the equivalent in German) to be more difficult in some way? I suppose it's possible, but it doesn't seem all that plausible to me.

p. 1143: I love that they're looking at binding phenomena, because this has traditionally been an example held up by UG proponents as something that is very likely to be a part of UG. I'm not quite sure that their story about dependency resolution (how clauses get "closed off") would work for all environments where we use regular pronouns instead of reflexive ones, though. However, I think they're satisfied to show at least a few connections between these syntactic principles and other, non-syntactic constraints - they say something to the effect of "This doesn't account for everything, but since no one can account for everything, this is as good as anything else." While it's true that no story accounts for everything yet, I suspect the syntactic accounts might go a bit further than the account the authors have sketched here.

Wednesday, February 9, 2011

Next time: Chater & Christiansen (2010)

Thanks to all who were able to join us for our exciting discussion of the articles on language change and language acquisition! Next time on Feb 23, we'll be continuing to look at the intersection of these two fields, with a recent article by Chater & Christiansen:

Language Acquisition Meets Language Evolution

Note: We will be changing our location to SSL 427.

Monday, February 7, 2011

Thoughts on Hruschka et al. (2009) & Lightfoot (2010)

So the Hruschka et al. (2009) article seems to be targeting a wider audience, and is obviously a collaborative effort of many researchers from different backgrounds. One of the aspects I really thought was interesting was the stuff in Box 1, where they talk about how acquisition biases that are too weak to appear in psycholinguistic experiments might show up when learning persists over generations. This is the iterated learning paradigm, which I think is very interesting indeed, and I would love to think about how to apply this to more sophisticated linguistic phenomena (like the ones Lightfoot (2010) talks about, as opposed to something like verb-object vs. object-verb word order). They do mention a study by Daland et al. that examines the persistence of lexical gaps, and this starts to get into more complex territory, I think. But can something like that be applied to the persistence of island constraints in syntax, for example?
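
Just to make that Box 1 idea concrete for myself, here's a minimal sketch (entirely my own toy example - the grammar names, probabilities, and 51/49 prior are made up for illustration, not anything from Hruschka et al.) of how a bias far too weak to detect in any single learner can still get amplified when learning is iterated over generations. Each learner in the chain picks whichever of two hypothetical grammars has the higher posterior given a small sample of utterances from the previous learner, then produces the data the next learner sees:

```python
import math
import random

# Two competing toy "grammars": G_A produces variant A 70% of the time, G_B 30%.
P_A = {"G_A": 0.7, "G_B": 0.3}
# A very weak innate bias toward G_A - far too small to see in one learner.
PRIOR = {"G_A": 0.51, "G_B": 0.49}

def learn(count_A, n):
    """MAP learner: choose the grammar with the higher posterior probability,
    given count_A occurrences of variant A out of n utterances."""
    def log_posterior(g):
        return (math.log(PRIOR[g])
                + count_A * math.log(P_A[g])
                + (n - count_A) * math.log(1 - P_A[g]))
    return max(PRIOR, key=log_posterior)

def produce(grammar, n):
    """Generate n utterances from the chosen grammar; return the count of A."""
    return sum(random.random() < P_A[grammar] for _ in range(n))

def simulate_chain(generations=100, n=10):
    count_A = n // 2                  # the first learner sees ambiguous data
    grammar = None
    for _ in range(generations):
        grammar = learn(count_A, n)
        count_A = produce(grammar, n)
    return grammar

if __name__ == "__main__":
    chains = [simulate_chain() for _ in range(500)]
    # With only a 51/49 prior, roughly three quarters of chains end up on G_A.
    print("Proportion of chains ending on G_A:",
          chains.count("G_A") / len(chains))
```

The interesting bit is that the prior only ever matters when a learner's data are exactly ambiguous, but because each generation re-learns from a small, noisy sample, those rare tie-breaking moments accumulate over the chain, and the population ends up on the weakly favored grammar much more often than 51% of the time.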

Something that struck me as a little funny in the Hruschka et al. article: they claim to present a framework for understanding language change, as exemplified in Figure 1. But to me, all it looks like they're saying is "Yes, all of these things are important." Maybe that's in response to someone with a perspective like Lightfoot's, which I guess Hruschka et al. might call variationist?

Another part of the Hruschka et al. article discussed agent-based approaches that aren't explicitly tied to empirical data. Granted, empirical data for language change (especially historical language change) isn't that easy to come by, but I do feel a bit skeptical about models that aren't grounded that way. On the other hand, this kind of agent-based modeling is very common in mathematical and game-theoretic approaches to cognitive phenomena.

Turning to Lightfoot (2010), clearly Lightfoot is coming from one very specific perspective that endorses innate, domain-specific knowledge in the form of cues. One thing that was very interesting to me was the possibility that cues could lead a child to a grammar that isn't actually the optimal grammar for the current data set. This reminds me very much of current theories of English metrical phonology - it turns out that child-directed and adult-directed speech input are more compatible with some non-English grammar variants than they are with the official English grammar. This could mean either that the ideas about the English grammar are wrong, or that something like a cue-based learner is causing these mismatches. (On the other hand, you would expect these mismatches not to persist over time, since cue-based learning is the instigator of language change in Lightfoot's view.)

Something else that occurred to me when reading Lightfoot: let's suppose it's true that children need something like principle (4) [Something can be deleted if it is (in) the complement of an adjacent, overt word.]. This has wide coverage and explains some of the puzzling phenomena that Lightfoot brings up. Could this principle be selected from the hypothesis space (or maybe inferred from the data somehow) precisely because it applies to lots of data, including these phenomena? That seems to jibe with the idea of indirect evidence a la Perfors, Tenenbaum, & Regier, as well as with classic ideas of how parameters are supposed to be set (one parameter is connected to lots of different data types).