Friday, January 23, 2015

Next time on 2/13/15 @ 11:30am in SBSG 2221 = Yurovsky & Frank 2014 Ms

Thanks to everyone who was able to join us for our educational discussion of Johnson 2013!  For our next CoLa reading group meeting on Friday February 13 at 11:30am in SBSG 2221, we'll be looking at a manuscript that explores a model of word learning, integrating non-linguistic aspects such as memory and attention.

Yurovsky, D. & Frank, M. 2014. An Integrative Account of Constraints on Cross-Situational Learning. Manuscript, Stanford University.


See you then!

Wednesday, January 21, 2015

Some thoughts on Johnson 2013

Something I really liked about this paper was Johnson’s sensitivity to the problems that occur during actual acquisition even as he gave an intuitive overview of the different approximation algorithms used in machine learning. He also made a point of connecting with linguistic theory related to acquisition (e.g., Universal Grammar, the uniqueness constraint, etc.). This makes it much easier for acquisition people who aren’t necessarily modelers to understand why they should care about these approaches, especially when the particular structures Johnson uses for his demonstrations (PCFGs) are known to be not quite right (which Johnson himself helpfully points out right at the beginning).

Some more targeted thoughts:

(1) Johnson makes a point at the very beginning about the utility of joint inference of syntactic structure and grammatical categories (which he calls lexical categories), and how better performance is obtained that way (as opposed to solving one problem after another). This seems to be another example of this joint-inference-is-better thing, which is getting a fair amount of play in the acquisition modeling literature. Bigger point: Information from one problem can help usefully constrain another. Smaller quibble: I think grammatical categories may be learned earlier than syntactic structure, so we may want something like an informed prior when it comes to the grammatical categories if we still want syntactic structure and grammatical categorization to be solved simultaneously.
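Just to make that quibble concrete, here’s a minimal sketch (mine, not Johnson’s) of what an informed prior over grammatical categories could look like if it were handed to a joint inference model: category knowledge learned earlier becomes asymmetric Dirichlet pseudocounts that bias later categorization. The category set, the word, and all the counts below are hypothetical.

from collections import Counter

CATEGORIES = ["NOUN", "VERB", "DET"]

def informed_prior(earlier_counts, strength=10.0, smoothing=0.5):
    # Turn earlier-learned category counts for a word into Dirichlet pseudocounts.
    total = sum(earlier_counts.values()) or 1
    return {c: smoothing + strength * earlier_counts.get(c, 0) / total
            for c in CATEGORIES}

def posterior_category_probs(prior_pseudocounts, observed_counts):
    # Dirichlet-multinomial update: combine the informed prior with new evidence
    # (here, counts that would come from the joint syntax + category inference).
    post = {c: prior_pseudocounts[c] + observed_counts.get(c, 0) for c in CATEGORIES}
    norm = sum(post.values())
    return {c: v / norm for c, v in post.items()}

# Hypothetical example: "kick" was mostly treated as a VERB earlier on,
# so the prior favors VERB even before the joint model sees any new parses.
earlier = Counter({"VERB": 8, "NOUN": 2})
prior = informed_prior(earlier)
print(posterior_category_probs(prior, Counter({"NOUN": 1})))

The design choice here is just that the earlier-learned knowledge enters as a prior rather than as a hard constraint, so the joint inference can still override it given enough evidence.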

(2) This comment in section 3: “…suggesting the attractive possibility that at least some aspects of language acquisition may be an almost cost-free by-product of parsing. That is, the child’s efforts at language comprehension may supply the information they need for language acquisition.” This reminds me very strongly of Fodor’s (1998) “Parsing to Learn” approach, which talks about exactly this idea. (A number of follow-up papers with William Sakas also tackle this issue.) Fodor’s learner was using parsing to help figure out Universal Grammar parameter settings, but the idea is exactly the same — because parsing is already happening, the learner can leverage the information from that process to learn about the structure of her language.

Fodor, J. D. 1998. Parsing to learn. Journal of Psycholinguistic Research, 27(3), 339-374.

(3) Related to the smaller quibble above in (1): Johnson notes later on in section 3 that “it’s hard to see how any ‘staged’ learner (which attempted to learn lexical entries before learning syntax, or vice versa) could succeed on this data”. The important unspoken part is “using just this strategy”, I’m assuming — because certainly it’s possible to learn grammatical categories using other strategies just fine. In fact, most of the grammatical categorization models I’m aware of do just this (though some do incorporate aspects of syntactic structure in the grammatical category inference).

(4) This point in section 5 seems spot on to me: “…language learning may require additional information beyond that contained in a set of strings of surface forms.” Johnson jumps straight to non-linguistic information, but I’m imagining that semantics would still be counted as linguistic, and that seems super-important for a number of aspects of syntactic structure (e.g., animacy for learning about the appropriate meanings for tough-constructions: The apple was easy to eat. vs. The girl was eager to eat (the apple)).

(5) The production model by Johnson & Riezler (2002) discussed later on in that section was interesting, where the input is the intended logical form (hierarchical semantic structure…which presumably maps to syntactic structure?) and the output is the observed string. Presumably this is how you could design a generative learning model, where the goal is to infer the syntactic structure that corresponds to the observed string, with the idea that the syntactic structure was used to generate the string.
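To make the generative idea a little more concrete, here’s a minimal sketch (my illustration, not Johnson & Riezler’s actual model): a latent syntactic structure generates the observed string, so comprehension/learning amounts to running the model backwards to get a posterior over structures. The toy grammar rules, probabilities, and candidate parses below are all made up.

# Toy PCFG rule probabilities (hypothetical numbers).
RULE_PROB = {
    ("VP", "V NP PP"): 0.3,   # attach the PP to the verb phrase
    ("VP", "V NP"):    0.7,
    ("NP", "NP PP"):   0.2,   # attach the PP inside the noun phrase
    ("NP", "D N"):     0.8,
    # lexical rules omitted; assume they contribute equally to both parses
}

def tree_prob(rules_used):
    # Probability of a tree under the generative model = product of its rule probabilities.
    p = 1.0
    for r in rules_used:
        p *= RULE_PROB[r]
    return p

# Two candidate structures for an ambiguous string like "saw the dog with the telescope".
parses = {
    "PP attaches to VP": [("VP", "V NP PP"), ("NP", "D N")],
    "PP attaches to NP": [("VP", "V NP"), ("NP", "NP PP"), ("NP", "D N")],
}

# Each candidate tree yields the same observed string, so P(string | tree) = 1 for both,
# and the posterior over structures is just each tree's prior probability, renormalized.
scores = {name: tree_prob(rules) for name, rules in parses.items()}
norm = sum(scores.values())
for name, s in scores.items():
    print(f"{name}: posterior = {s / norm:.2f}")

The point of the sketch is just the direction of the arrows: generation goes from structure to string, and inference (whether for comprehension or for learning) inverts that.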

(6) This in the conclusion: “…in principle it should be possible for Bayesian priors to express the kinds of rich linguistic knowledge that linguists posit for Universal Grammar. It would be extremely interesting to investigate just what a statistical estimator using linguistically plausible parameters might be able to learn.” — Exactly this! I’ve long (vaguely) pondered how to connect the sorts of parameters in, say, a parametric representation of metrical phonology to the kinds of precise mathematical priors Bayesian models use. Somehow, somehow it seems possible…and then perhaps the two uses of “parameter” could be reconciled more precisely.
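For what it’s worth, here’s one minimal way that reconciliation might look (a sketch of mine, not anything from the chapter): each binary parameter in the linguistic sense gets a Beta prior in the statistical sense, and the prior’s pseudocounts are where UG-style default biases would live. The parameter names and evidence counts below are hypothetical.

def beta_posterior_mean(prior_yes, prior_no, data_yes, data_no):
    # Beta-Bernoulli update: posterior probability that the parameter is set to "yes".
    return (prior_yes + data_yes) / (prior_yes + prior_no + data_yes + data_no)

# Each linguistic parameter gets a Beta prior: (pseudo-successes, pseudo-failures).
# A (1, 1) prior is agnostic; a skewed prior builds in a default bias.
priors = {
    "quantity_sensitive": (1.0, 1.0),   # no default bias
    "extrametricality":   (1.0, 3.0),   # hypothetical default bias toward "off"
}

# Hypothetical evidence: counts of words whose stress pattern supports each setting.
evidence = {
    "quantity_sensitive": (12, 3),
    "extrametricality":   (2, 2),
}

for param, (a, b) in priors.items():
    yes, no = evidence[param]
    print(param, round(beta_posterior_mean(a, b, yes, no), 2))

Obviously real metrical parameters interact with each other (the data relevant to one depend on how the others are set), so anything serious would need a joint model rather than independent Beta priors, but even this toy version makes the two senses of “parameter” sit in the same equation.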

Friday, January 9, 2015

Next time on 1/23/15 @ 11:30am in SBSG 2221 = Johnson 2013

Hi everyone,

It looks like a good collective time to meet will be Fridays at 11:30am for this quarter, so that's what we'll plan on.  Our first meeting will be on January 23, and our complete schedule is available on the webpage at 


On January 23, we'll be discussing a book chapter that looks closely at the idea that language acquisition is a statistical inference problem, and examines how to translate current machine learning statistical inference approaches to implementations that would work for acquisition.

Johnson, M. 2013. Language acquisition as statistical inference. In Stephen R. Anderson, Jacques Moeschler, and Fabienne Reboul (eds.), The Language-Cognition Interface, Librairie Droz, Geneva, 109-134.

http://www.socsci.uci.edu/~lpearl/colareadinggroup/readings/Johnson2013_LangAcqStatInf.pdf

See you on January 23!