So the Hruschka et al. (2009) article seems to be targeting a wider audience, and is clearly a collaborative effort of many researchers from different backgrounds. One aspect I found really interesting was Box 1, where they discuss how acquisition biases that are too weak to appear in psycholinguistic experiments might show up when learning persists over generations. This is the iterated learning paradigm, which I find very interesting indeed, and I would love to think about how to apply it to more sophisticated linguistic phenomena (like those Lightfoot (2010) discusses, as opposed to something like verb-object vs. object-verb word order). They do mention a study by Daland et al. that examines the persistence of lexical gaps, which starts to get into more complex territory, I think. But can something like that be applied to the persistence of island constraints in syntax, for example?
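The Box 1 idea can be illustrated with a toy simulation. This is just a minimal sketch of a generic iterated learning chain, not any model from the article, and every parameter name and value below is my own assumption: each generation estimates the rate of some variant A from a finite sample of the previous generation's output, then nudges that estimate slightly toward a weakly preferred variant. The per-generation nudge would be hard to detect experimentally, but it compounds across the chain.

```python
import random

def iterate_chain(generations=300, sample_size=50,
                  bias_weight=0.05, bias_target=1.0,
                  p0=0.5, seed=0):
    """Toy iterated learning chain (all parameters are illustrative).

    Each generation: (1) sample the parent's productions, (2) estimate
    the rate of variant A, (3) mix in a weak pull toward the favored
    variant. Returns the final generation's production probability.
    """
    rng = random.Random(seed)
    p = p0  # probability of producing variant A
    for _ in range(generations):
        # the child estimates p from a finite sample of parental speech
        count_a = sum(rng.random() < p for _ in range(sample_size))
        p_hat = count_a / sample_size
        # weak acquisition bias: a small pull toward the favored variant
        p = (1 - bias_weight) * p_hat + bias_weight * bias_target
    return p
```

Starting from an even 50/50 mix, a 5% per-generation pull that would look like noise in a single-generation experiment drives the chain toward near-categorical use of the favored variant, which is the amplification effect the Box 1 discussion points to.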
Something that struck me as a little funny in the Hruschka et al. article: they claim to present a framework for understanding language change, as exemplified in figure 1. But to me, all they seem to be saying is "Yes, all these things are important." Maybe that's in response to someone with a perspective like Lightfoot's, which I guess Hruschka et al. might call variationist?
Another part of the Hruschka et al. article discussed agent-based approaches that aren't explicitly tied to empirical data. Granted, empirical data on language change (especially historical change) isn't easy to come by, but I'm still a bit skeptical of models that aren't grounded that way. On the other hand, this kind of agent-based modeling is very common in mathematical and game-theoretic approaches to cognitive phenomena.
Turning to Lightfoot (2010), Lightfoot is clearly coming from one very specific perspective, one that endorses innate domain-specific knowledge in the form of cues. One thing I found very interesting was the possibility that cues could lead a child to a grammar that isn't actually the optimal grammar for the current data set. This reminds me very much of current theories of English metrical phonology - it turns out that both child-directed and adult-directed speech input are more compatible with some non-English grammar variants than with the official English grammar. This could mean either that the ideas about the English grammar are wrong, or that something like a cue-based learner is causing these mismatches. (On the other hand, you would expect such mismatches not to persist over time, since cue-based learning is the instigator of language change in Lightfoot's view.)
Something else that occurred to me when reading Lightfoot: suppose it's true that children need something like principle (4) [Something can be deleted if it is (in) the complement of an adjacent, overt word]. This principle has wide coverage and explains some of the puzzling phenomena Lightfoot brings up. Could it be selected from the hypothesis space (or maybe inferred from the data somehow) precisely because it applies to lots of data, including these phenomena? That seems to jibe with the idea of indirect evidence à la Perfors, Tenenbaum, & Regier, as well as classic ideas about how parameters are set (one parameter is connected to many different data types).