I think Bouchard (2012) actually takes a similar approach to Perfors et al. (2011) with respect to solving the structure-dependence problem, in the sense of redefining what the problem is and then stating that the solution does not involve UG learning biases. It's at this point that the two studies part ways, but the fundamental similarity is there. Bouchard does believe that meaning is inextricably tied to the problem, but he rejects the transformational approach traditionally assumed by Chomsky and colleagues; instead, meaning is more foundational in how the structures are generated. One thing that isn't clear to me at all is whether the UG problem is actually solved, as the title would suggest. It seems to me that the components Bouchard assumes involve a lot of knowledge about interpretation (ISSUE and its structural relationship to Tense, incompleteness relating to a non-tensed utterance, etc.), and it's unclear where this knowledge comes from if it's not meant to be innate. Maybe "solving the UG problem" is just supposed to mean providing a complete specification of what's in UG?
Some more targeted thoughts:
- One of Bouchard's issues with the current ideas about UG is that the components of UG seem hard to explain evolutionarily. That is, if we accept the current formulation of UG, it's hard to explain why it would have arisen for any kind of adaptive reason. This is a fair point, but I'm not sure the UG Bouchard proposes gets around this either.
- I think Bouchard does a nice review of the current approach to UG that's motivated by efficient computation. In particular, it's fair to ask if "efficiency" is really the crucial factor - maybe "effectiveness" would be better, if we're trying to relate this to some kind of evolutionary story.
- I'm not sure it's fair to criticize the transformational account by pointing out that children may not encounter declarative utterances before they encounter interrogative utterances. It should be enough that children recognize the common semantics between the two and assume they're related.
- I appreciate Bouchard's effort to specify the exact form of the rule that relates declarative and interrogative utterances (the four constraints on the rule). This would be useful if we were ever interested in constructing a hypothesis space of rules and having the child learn which one is right (it reminds me a bit of Dillon, Dunbar, & Idsardi (2011), with their rule-learner). Anyway, the main point is clear: the actual rule is one of many that could be posited, even given the four constraints Bouchard describes, and we either need the right rule to fall out from other constraints or we need it to be learnable from among the available possibilities.
- I agree with the basic point that "with a different order comes different meaning", but the key point is that it's a related meaning. Even in example (21), the utterances are still about the event of seeing and still involve the actors Mary and John.
- "Question formation is not structure dependent, it is meaning dependent" - Well, sure, but "meaning dependent", especially as it's described here, is all about the structure. So isn't "meaning dependent" just another way of saying "structure dependent"?
- The Coherence Condition of Coindexation (example 30): This sounds great, but don't we then need to specify what "coherent" means? This seems to be a description of what's going on, rather than an explanation of it. For example, in (29), why do those two elements get coindexed, out of all the elements in the utterance? Presumably, this has to do with the structure of the utterance... This relates to a point slightly later on: "...due to the lexical specifications that determine which actant of the event mediates the link between the event and a point in time" - Where do these lexical specifications come from? Are they learned? Again, this seems more a description than an explanation.
- p.25: "Whatever Learning Machine enables them to learn signs also enables them to learn combinatorial signs such as dedicated orders of signs" - This seems like a real oversimplification. The whole enterprise of syntax is based on the idea that meaning is not the only thing determining syntactic form (otherwise, how would you get ungrammatical utterances that are still intelligible, like "Where did Jack think the necklace from was expensive?"). So the Learning Machine needs to have something explicit in it about how combinatorial meaning links to form.