Wednesday, February 26, 2014

Some thoughts on Levinson 2013 + Legate, Pesetsky, & Yang 2013 reply (manuscript)

One of the take-away points I had from Levinson 2013 [L13] was the idea that center-embedding is not a structural option specific to syntax, since there are examples of this same structural option in dialogue. My impression was that L13 wanted to use this to argue that this particular type of recursion is not language-specific: dialogue uses language to communicate information, and it's the information communicated (via the speech acts) that's center-embedded. (At least, that's how I'm interpreting "speech acts" as "actions in linguistic clothing".) I'm not quite sure I believe that, since I would classify speech acts as a type of linguistic knowledge (specifically, knowledge of how to translate an intention into the specific linguistic form required to convey it). But suppose we classify this kind of knowledge as not really linguistic, per se -- then wouldn't the interesting question be how unique this type of structural option is to human communication systems, since that relates to questions about the Faculty of Language (broad or narrow)? And presumably, this then links back to whether non-human animals can learn these syntactic structures (which doesn't seem to be true, as far as we know) or these types of embedded interactions (which also doesn't seem to be true, I think).

As a general caveat, I should note that while I followed the simpler examples of center embedding in dialogue, I was much less clear about the more complex examples involving multiple center-embeddings and cross-serial dependencies (for example, deciding whether something was an embedded question or a serial follow-up, as in example (14), the middle of (16), and some of the embeddings in (17)). This may be due to my very light background in pragmatics and dialogue analysis, however. Still, it seemed that Legate, Pesetsky, & Yang 2013 [LPY13] had similar reservations about some of these dialogue dependencies.
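Just to keep the structural distinction straight for myself, here's a toy sketch in Python (entirely my own construction, not anything from L13 or LPY13) of what separates a center-embedded exchange from a serial follow-up or a cross-serial pattern: center-embedding means the most recently opened question gets answered first, which is a stack-like, last-in-first-out dependency.

def is_well_nested(moves):
    # moves is a list of ("Q", label) and ("A", label) pairs
    stack = []
    for move, label in moves:
        if move == "Q":
            stack.append(label)                  # a new question opens a dependency
        else:                                    # an answer
            if not stack or stack[-1] != label:
                return False                     # it doesn't resolve the innermost open question
            stack.pop()                          # innermost dependency resolved
    return not stack                             # every question eventually got its answer

nested  = [("Q", 1), ("Q", 2), ("A", 2), ("A", 1)]   # Q1 [Q2 A2] A1: center-embedded
serial  = [("Q", 1), ("A", 1), ("Q", 2), ("A", 2)]   # Q1 A1, then Q2 A2: a follow-up
crossed = [("Q", 1), ("Q", 2), ("A", 1), ("A", 2)]   # Q1 Q2 A1 A2: cross-serial

print(is_well_nested(nested), is_well_nested(serial), is_well_nested(crossed))
# True True False

Notice that the serial follow-up comes out as trivially well-nested, which is part of why I found it hard to tell from a transcript alone whether a later exchange is really "inside" an earlier one or just after it.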

LPY13 also had very strong reactions to both the syntactic and computational claims in L13, in addition to these issues about how to assign structure to discourse. I was quite sympathetic to (and convinced by) LPY13's syntactic and computational objections as a whole, from the cross-linguistic frequency of embedding, to the non-centrality of center embedding for recursion, to the non-debate about whether natural languages are regular. They also brought out a very interesting point about the restrictions on center embedding in speech acts (example (13)), which seemed to match some of the restrictions observed in syntax. If these restrictions really are there, and we see them in both linguistic (syntax) and potentially non-linguistic (speech act) domains, then maybe this is nice evidence for a domain-general restriction on processing this kind of structure. (And so maybe we should be looking for it seriously elsewhere too.)
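Here's one toy way I have of thinking about that domain-general restriction (again my own sketch, with made-up assumptions, not anything from LPY13): unbounded center-embedding needs unbounded memory for the open dependencies, but the depth people actually seem to tolerate (roughly two or three) fits in a small fixed memory, and a processor with that fixed bound chokes on exactly the deeper cases.

def tolerates(moves, max_depth=None):
    # Count the currently open dependencies; optionally give up past a fixed
    # depth, the way a memory-bounded processor plausibly would.
    depth = 0
    for move in moves:
        if move == "open":
            depth += 1
            if max_depth is not None and depth > max_depth:
                return False                     # more nesting than the bound allows
        else:                                    # "close"
            if depth == 0:
                return False                     # nothing left to close
            depth -= 1
    return depth == 0

two_deep   = ["open", "open", "close", "close"]
three_deep = ["open"] * 3 + ["close"] * 3

print(tolerates(two_deep, max_depth=2))     # True: within the bound
print(tolerates(three_deep, max_depth=2))   # False: the classic "too hard" case
print(tolerates(three_deep))                # True: no bound, the competence story

If the same kind of depth bound shows up for nested speech acts, that looks more like a fact about the processor than a fact about syntax per se.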



More specific comments:

L13: There's a comment in section 4 about whether it's more complex to treat English as a large system of simple rules or a small system of complex rules. Isn't this exactly the kind of thing that rational inference gets at (e.g., Perfors, Tenenbaum, & Regier 2011 find that a context-free grammar works better than a regular or linear grammar on child-directed English speech -- as LPY13 note)? With respect to recursion, L13 cites the Perfors et al. 2010 study, which LPY13 correctly note isn't about regular vs. non-regular languages. Instead, that study finds that a mixture of recursive and non-recursive context-free rules (surprisingly) works best, rather than all recursive or all non-recursive rules, even though the mixture seems to duplicate a number of rules.
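To keep that many-simple-rules vs. few-complex-rules trade-off concrete for myself, here's a toy description-length comparison in the spirit of rational inference (my own illustration with made-up numbers, not Perfors et al.'s actual model): a grammar pays for the cost of stating its rules and for the cost of encoding the data given those rules, and a small set of reusable (e.g., recursive) rules only wins if the data actually reuse that structure enough.

def total_cost(n_rules, bits_per_rule, data_cost_bits):
    # MDL-style score: cost of stating the grammar (playing the role of the prior)
    # plus cost of encoding the corpus given that grammar (the likelihood).
    grammar_cost = n_rules * bits_per_rule
    return grammar_cost + data_cost_bits

# Hypothetical numbers, chosen only to show the shape of the comparison:
many_simple_rules = total_cost(n_rules=200, bits_per_rule=8,  data_cost_bits=500)   # 2100
few_complex_rules = total_cost(n_rules=30,  bits_per_rule=12, data_cost_bits=650)   # 1010

print(many_simple_rules, few_complex_rules)   # the smaller total is the better hypothesis

On numbers like these the compact grammar wins, but change how much the data reuse that structure and it doesn't, which is the sense in which the question seems empirical rather than terminological.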

L13: Section 6, using the transformation from pidgin to creole as evidence that syntactic embedding comes from other capacities like joint action abilities: It's true that one of the hallmarks distinguishing creoles from pidgins is the added syntactic complexity, which (broadly speaking) seems to come from children learning the pidgin, adding regular and predictable syntactic structure to it, and ending up with something that has the same syntactic complexity as any other language. I'm not sure I understand why this tells us anything about where the syntactic complexity comes from, other than from something internal to the children (since they obviously aren't getting it from the pidgin in any direct way). Is the idea that these children are talking to each other, and it's the dialogue that provides a model for the embedded structures, for example?

LPY13: I'm not quite sure I agree with the objection LPY13 raise about whether dialogue embeddings represent structures (p.9). I agree that there don't seem to be very many restrictions, certainly when compared to syntactic structure. But just because there are multiple licit options doesn't mean there isn't a structure corresponding to each of them. It may just be exactly this: there are multiple possible structures that we allow in dialogue. So maybe this is really more of an issue about how we tell structure is present (as opposed to a linear "beads on a string" arrangement, for example).
