The basic issue that the authors (AP&L) highlight about proposed learning strategies seems exactly right: What will actually work, and what exactly makes it work? They note that “…nothing is gained by positing components of innate knowledge that do not simplify the problem faced by language learners” (p.56, section 7.0), and this is absolutely true. To examine how well several current proposals involving innate linguistic knowledge actually work, AP&L present evidence from a commendable range of linguistic phenomena, from what might be considered fairly fundamental knowledge (e.g., grammatical categories) to fairly sophisticated knowledge (e.g., subjacency and binding). In each case, AP&L identify the shortcomings of some existing Universal Grammar (UG) proposals, observing that these proposals don’t seem to fare very well in realistic learning scenarios. The challenge at the very end underscores this point: AP&L contend (and I completely agree) that a learning strategy proposal involving innate knowledge needs to show “precisely how a particular type of innate knowledge would help children acquire X” (p.56, section 7.0).
More importantly, I believe this should be a metric that any component of a learning strategy is measured by: for any component (whether innate or derived, whether language-specific or domain-general), we need not only to propose that the component could help children learn some piece of linguistic knowledge but also to demonstrate at least “one way that a child could do so” (p.57, section 7.0). To this end, I think it's important to highlight how well suited computational modeling is for doing precisely this: for any proposed component embedded in a learning strategy, modeling allows us to empirically test that strategy in a realistic learning scenario. In my view, we should test all potential learning strategies this way, including the ones AP&L themselves propose as alternatives to the UG-based strategies they find lacking. An additional and highly useful benefit of the computational modeling methodology is that it forces us to recognize hidden assumptions within our proposed learning strategies, a problem that AP&L rightly identify in many existing proposals.
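The modeling logic described above can be made concrete with a minimal sketch: a proposed learning strategy is written out as code, run on input data, and scored against the target knowledge. Everything here is invented for illustration (the toy "strategy," the toy corpus, the threshold parameter); the point is only that writing the strategy down forces every hidden assumption into the open.

```python
from collections import Counter

def frequency_learner(utterances, threshold=2):
    """A toy 'strategy': treat any word seen at least `threshold` times
    as acquired. Both the counting and the threshold are assumptions
    that the code makes explicit."""
    counts = Counter(w for utt in utterances for w in utt.split())
    return {w for w, c in counts.items() if c >= threshold}

def evaluate(strategy, input_data, target_knowledge):
    """Empirical test: what proportion of the target knowledge does
    the strategy actually acquire from this input?"""
    acquired = strategy(input_data)
    return len(acquired & target_knowledge) / len(target_knowledge)

# Toy 'corpus' and target lexicon standing in for realistic input.
corpus = ["the cat sat", "the cat ran", "a dog ran"]
target = {"the", "cat", "ran"}

print(evaluate(frequency_learner, corpus, target))  # 1.0 on this toy input
```

Swapping in a different strategy function (or more realistic input) tests a different proposal under the same evaluation, which is what allows competing strategies to be compared on equal footing.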
This leads me to suggest certain criteria that any learning strategy should satisfy, relating to its utility in principle and in practice, as well as its usability by children. Once we have a promising learning strategy that satisfies these criteria, we can then concern ourselves with the components comprising that strategy. On this point, I want to briefly discuss the type of component AP&L find unhelpful, since several of the components they would prefer might still be reasonably classified as UG components. Their main objection is not to components that are innate and language-specific, but to components of this kind that additionally involve very precise knowledge. This therefore does not rule out UG components that involve more general knowledge, including (again) the components AP&L themselves propose. In addition, AP&L ask for explicit examples of UG components that actually do work. I think one potential UG component that’s part of a successful learning strategy for syntactic islands (described in Pearl & Sprouse 2013) is a nice example: the bias to characterize wh-dependencies at a specific level of granularity. It's not obvious where this bias would come from (i.e., how it would be derived or what innate knowledge would lead to it), but it's crucial for the learning strategy it's part of to work. As a bonus, that learning strategy also satisfies the criteria I suggest for evaluating learning strategies more generally (utility and usability).
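A schematic sketch may help make the granularity bias concrete. Suppose a wh-dependency is represented as the path of phrase-structure nodes it crosses (the node labels and the specific representation below are invented for illustration, not Pearl & Sprouse's actual implementation). The same path can be characterized as one whole unit, as local subsequences, or as individual nodes; the bias is the commitment to one particular choice among these.

```python
def characterize(path, granularity):
    """Break a container-node path into pieces of size `granularity`.
    With granularity=len(path)+2 the (padded) path is a single unit;
    smaller values make the learner track local subsequences instead."""
    padded = ["start"] + path + ["end"]  # mark dependency endpoints
    n = granularity
    return [tuple(padded[i:i + n]) for i in range(len(padded) - n + 1)]

# Toy path of container nodes for a hypothetical wh-dependency.
path = ["CP", "IP", "VP", "CP", "IP", "VP"]
print(characterize(path, 3))  # trigram-level characterization
```

Nothing in the input data itself dictates the value of `granularity`; a learner must bring that commitment to the task, which is why it functions as a bias rather than something read off the input.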
Pearl, L., & Sprouse, J. 2013. Syntactic islands and learning biases: Combining experimental syntax and computational modeling to investigate the language acquisition problem. Language Acquisition, 20, 19–64.