Wednesday, April 15, 2015

Some thoughts on Heinz 2015 book chapter, parts 1-5

For me, this was a very accessible introduction to a lot of the formal terminology and distinctions that computational learnability research trades in. (For instance, I think this may be the first time I really understood why we would be excited that generalizations would be strictly local or strictly piecewise.) From an acquisition point of view, I was very into some particular ideas/approaches:

(1) the distinction between an intensional description (i.e., theoretical constructs that compactly capture the data) and an extension (i.e., the actual pattern of data), along with the analogy to the finite means (intensional description) that accounts for the infinite use (extension). If there’s a reasonable way to talk about the extension, we get a truly atheoretical description of the empirical data, which seems like an excellent jumping-off point for describing the target state of acquisition.
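To make that finite-means/infinite-use split concrete for myself, here’s a toy sketch (mine, not the chapter’s, with a made-up constraint): the intensional description is a small, finite object, like a strictly local (SL-2) grammar stated as forbidden bigrams, while the extension is the infinite set of strings that grammar admits, which we can only ever probe one string at a time.

```python
# Toy sketch (hypothetical example, not from the chapter): the intensional
# description is the finite grammar below; its extension is the infinite set
# of strings it admits, which we can only check string by string.

FORBIDDEN_BIGRAMS = {("a", "a")}  # made-up SL-2 constraint: *aa (no adjacent a's)

def in_extension(string):
    """True iff the string contains none of the forbidden bigrams."""
    symbols = list(string)
    return all((x, y) not in FORBIDDEN_BIGRAMS for x, y in zip(symbols, symbols[1:]))

print(in_extension("abab"))  # True: in the (infinite) extension
print(in_extension("abaa"))  # False: contains the forbidden aa
```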

(2) the approach of defining the implicit hypothesis space, i.e., the fundamental pieces that explicit hypothesis spaces (or generalizations) are built from. This feels very similar to the old-school Principles & Parameters approach to language acquisition (specifically, the Principles part, if we’re talking about the things that don’t vary). It also jibes well with some recent thoughts in the Bayesian inference sphere (e.g., see Perfors 2012 for implicit vs. explicit hypothesis spaces); a toy illustration of the implicit/explicit split is sketched below.

**Perfors, A. 2012. Bayesian Models of Cognition: What's Built in After All? Philosophy Compass, 7(2), 127-138.
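Here’s one way I find it helpful to picture that implicit/explicit distinction (my own toy sketch, with a made-up two-symbol alphabet): the implicit hypothesis space is the fixed inventory of building blocks the learner has to work with (here, possible bigrams), while the explicit hypothesis space is every particular grammar that can be assembled from those blocks.

```python
from itertools import chain, combinations, product

ALPHABET = ["a", "b"]  # hypothetical toy alphabet

# Implicit hypothesis space: the fixed building blocks -- here, all possible bigrams.
possible_bigrams = list(product(ALPHABET, repeat=2))

# Explicit hypothesis space: every particular grammar those pieces can build --
# here, every subset of bigrams a learner might choose to forbid.
def all_sl2_grammars(bigrams):
    return chain.from_iterable(combinations(bigrams, k) for k in range(len(bigrams) + 1))

print(len(possible_bigrams))                           # 4 building blocks
print(len(list(all_sl2_grammars(possible_bigrams))))   # 2**4 = 16 explicit hypotheses
```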

(3) the tie-in between the nature of phonological generalizations and the algorithms that can learn those generalizations, and why this connection might support those generalizations as actual human mental representations. In particular, “Constraints on phonological well-formedness are SL and SP because people learn phonology in the way suggested by these algorithms.” (End of section 5.2.1)
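As a reminder to myself of the flavor of learner at issue, here’s a much-simplified toy sketch (mine, not the chapter’s actual algorithm): an SL-2 learner that just records which bigrams occur in its positive data and permits exactly those. Generalization falls out of the representation, since any new string built entirely from attested bigrams is accepted, no matter how long.

```python
# Much-simplified toy learner (my sketch): record attested bigrams, permit exactly those.

def learn_sl2(positive_data):
    """Collect the set of bigrams attested in the data (with # as a word boundary)."""
    attested = set()
    for word in positive_data:
        padded = ["#"] + list(word) + ["#"]
        attested.update(zip(padded, padded[1:]))
    return attested

def accepts(grammar, word):
    """A word is well-formed iff every one of its bigrams was attested."""
    padded = ["#"] + list(word) + ["#"]
    return all(bigram in grammar for bigram in zip(padded, padded[1:]))

grammar = learn_sl2(["ab", "abab"])  # hypothetical positive data
print(accepts(grammar, "ababab"))    # True: built entirely from attested bigrams
print(accepts(grammar, "aab"))       # False: (a, a) was never observed
```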

When I first read this, it seemed odd to me. We’re saying something like: “Look! Human language makes only these kinds of generalizations, because there are constraints! And hey, these are the algorithms that can learn those constrained generalizations! Therefore, the reason these constraints exist is that these algorithms are the ones people use!” At first glance, it felt as if a step were missing: we use the constrained generalizations as a basis for positing certain learning algorithms, and then we immediately turn that on its head and say that those algorithms *are* the ones humans use and that this is the basis for the constrained generalizations we see.

But when I looked at it again (and again), I realized that this did actually make sense to me. The way we got to that story may have been a little roundabout, but the basic story of “these constraints on representations exist because human brains learn things in a specific way” is very sensible (and is picked up again in 5.4: “…human learners generalize in particular ways—and the ways they generalize yield exactly these classes”). And what this does is provide a concrete example of exactly which constraints and which specific learning procedures we’re talking about for phonology.

(4) There’s a nice little typology connection at the end of section 5.1, based on these formal complexity hierarchies: “…the widely-attested constraints are the formally simple ones, where the measure of complexity is determined according to these hierarchies”. Thinking back to links with acquisition, would this be because the human brain is sensitive to these complexity levels (however that sensitivity might be instantiated)? If so, the prevalence of less complex constraints would be due to how easy they are to learn with a human brain. (Or any brain?)


