Friday, January 25, 2013

Some thoughts on Stabler (2009b)

One of the things I really appreciated about this article was the clear intention to connect the kinds of computational models & problems learnability researchers typically worry about with the kinds of realistic language acquisition and language use problems that linguistics & psychology researchers typically worry about. A nice example of this was the connection to syntactic bootstrapping, which shows up in some of the later sections. I also found myself thinking a few times about the connection between some of these ideas and the issue of language evolution (more on this below), though I suspect this often comes up whenever language universals are discussed.

More targeted thoughts:

The connection with language evolution: I first thought about this in the introduction, where Stabler talks about the "special restrictions on the range of structural options" and the idea that some of the language universals "may guarantee that the whole class of languages with such properties is 'learnable' in a relevant sense." The basic thought was that if the universals didn't make languages easier to learn, they probably wouldn't have survived through the generations of language speakers. This could be because those universals take advantage of pre-existing cognitive biases humans have for learning, for example.

In section 1, Stabler mentions that it would be useful to care about the universals that apply before more complex abstract notions like "subject" are available. I can see the value of this, but most ideas about Universal Grammar (UG) that I'm aware of involve exactly these kinds of abstract concepts/symbols. And this makes a little more sense once we remember that UG is meant to be a set of (innate) language-specific learning biases, which would therefore involve symbols that only exist when we're talking about language. So maybe Stabler's point is more that language universals that apply to less abstract (and more perceptible) symbols are not necessarily based on UG biases. They just happen to be useful for language learning (and, again, contributed to how languages evolved to take the shape that they do).

I'm very sympathetic to the view Stabler mentions at the end of section 1, which is concerned with how to connect computational description results to human languages, given the idealized/simplified languages for which those results are shown.

I like Stabler's point in section 2 about the utility of learnability results, specifically the discussion of how a learner realizes that finite data does not mean that the language itself is finite. This connects very well to what I know about the human brain's tendency towards generalization (especially in young brains).
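
To make that concrete for myself, here's a tiny toy example (my own, not from the paper) of how a finite description, induced from finite data, can still pick out an infinite language:

```python
# A toy illustration (mine, not from the paper): a finite, two-rule grammar
#   ADJP -> "good" | "very " + ADJP
# has a finite description but generates an unbounded language. A learner
# who infers these rules from a handful of examples has thereby generalized
# to infinitely many unseen strings.
def adjp(depth):
    """Return the string with `depth` repetitions of 'very'."""
    return "very " * depth + "good"

print([adjp(d) for d in range(4)])
# ['good', 'very good', 'very very good', 'very very very good']
```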

Later on in section 2, I think Stabler does a nice job of explaining why we should care about results that deal with properties of languages like reversibility (e.g., if it's known that the language has that property, the hypothesis space of possible languages is constrained; coupled with a bias for compact representations, this can really winnow the hypothesis space). My takeaway from that was that these kinds of results can tell us what kind of knowledge is necessary to converge on one answer/representation, which is good. (The downside, of course, is that we can only use this new information if human languages actually have the properties that were explored.) However, it seems like languages might have some of these properties, if we look in the domain of phonotactics. And that makes this feel much more relevant to researchers interested in human language learning.
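
As a way of thinking about that winnowing, here's a small toy sketch (my own illustration, not Stabler's formalism) in which consistency with the data, a known structural property, and a compactness bias together narrow a handful of hypothetical candidate languages down to one. The candidate descriptions, their "sizes", and the property flags are all invented placeholders:

```python
import re

# Toy candidate "languages" over {a, b}: (description, regex, size, has_property).
# `size` stands in for how compact the representation is; `has_property`
# stands in for a known structural property like reversibility. All invented.
candidates = [
    ("exactly the observed strings", r"ab|aabb", 10, False),
    ("one or more a's then b's",     r"a+b+",     5, True),
    ("any string of a's and b's",    r"[ab]+",    4, False),
    ("strings ending in b",          r"[ab]*b",   7, True),
]

observed = ["ab", "aabb"]  # the finite positive data the learner has seen

# Step 1: keep only hypotheses consistent with every observed string.
consistent = [c for c in candidates
              if all(re.fullmatch(c[1], s) for s in observed)]

# Step 2: keep only hypotheses with the known structural property.
with_property = [c for c in consistent if c[3]]

# Step 3: the compactness bias picks the smallest surviving representation.
best = min(with_property, key=lambda c: c[2])
print(best[0])  # -> one or more a's then b's
```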

In section 3, where Stabler is discussing PAC learning, there's some mention of the time taken to converge on a language (i.e., whether the learner is "efficient"). One formal measure of this that's mentioned is polynomial time. I'm wondering how this connects to notions of a reasonable learning period for human language acquisition. (Maybe it doesn't, but it's a first-pass attempt to distinguish "wow, totally beyond human capability" from "not".)
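
For what it's worth, the PAC framework also bounds how much data an efficient learner needs, not just how much computation. The textbook bound for a finite hypothesis class and a consistent learner (this is the standard version, not anything specific to Stabler's discussion) gives a feel for how the numbers scale:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Textbook PAC bound for a finite hypothesis class and a consistent
    learner: m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples suffice for
    error below epsilon with probability at least 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# Sample requirements grow only logarithmically in the size of the
# hypothesis space, which is part of what makes "efficient" learning
# plausible even when the space of candidate grammars is huge.
for h in (10, 10**3, 10**6):
    print(h, pac_sample_bound(h, epsilon=0.05, delta=0.01))
```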

I really liked the exploration of the link between syntax and semantics in section 4. One takeaway point for me was evidence in the formal learnability domain for the utility of multiple sources of information (multiple cues). I wonder if there's any analog for solving multiple problems (i.e., learning multiple aspects of language) simultaneously (e.g., identifying individual words and grammatical categories at the same time, etc.). The potential existence of universal links between syntax and semantics again got me thinking about language evolution, too. Basically, if certain links are known, learning both syntax and semantics is much easier, so maybe these links take advantage of existing cognitive biases. That would then be why languages evolved to capitalize on these links, and how languages with these links got transmitted through the generations.
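
As a toy way of seeing why multiple cues help (my own invented example with made-up features, not something from the paper): a novel verb heard in a transitive frame while a causal scene is observed is ambiguous under either cue alone, but the intersection of the two can pin down a single meaning.

```python
# Invented candidate meanings for a novel verb, each tagged with the
# syntactic frame it would occur in and the kind of scene it would describe.
candidate_meanings = {
    "chase": {"frame": "transitive",   "scene": "causal"},
    "flee":  {"frame": "intransitive", "scene": "causal"},
    "sleep": {"frame": "intransitive", "scene": "state"},
    "push":  {"frame": "transitive",   "scene": "contact"},
}

def compatible(cue, value):
    """Return the meanings compatible with one observed cue."""
    return {m for m, feats in candidate_meanings.items() if feats[cue] == value}

syntactic = compatible("frame", "transitive")  # {'chase', 'push'}
semantic = compatible("scene", "causal")       # {'chase', 'flee'}
print(syntactic & semantic)                    # {'chase'}: jointly, one analysis survives
```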

I also liked the discussion of syntactic bootstrapping in section 4, and the sort of "top-down" approach of inferring semantics, instead of always using the compositional bottom-up approach where you know the pieces before you understand the thing they make up. This seems right, given what we know about children's chunking and initial language productions.

