One thing I really liked about this paper was the use of a formal model of a theory (in this case, a theory of pragmatic implicature cancellation) to predict specific usages that may not have originally been considered by previous researchers working on the theory. Here, it's a certain combination of theory of mind and implicature cancellation that leads to specific predictions when the listener knows the speaker has partial vs. complete knowledge.
One thing I was curious about was G&S's use of the term "partial implicature". They used it specifically when, for example, the speaker had knowledge of two of the three apples and said "one apple is red". The listeners inferred that either one or two of the three apples were red, but not all three. If I'm understanding G&S correctly, the partial implicature is that the listeners didn't infer all three apples were red, but did allow for more than one apple to be red. To me, this is the listener thinking, "Ah, the speaker used one instead of two, which means that one of the apples the speaker saw definitely isn't red. But the third one, which the speaker didn't see, might be red, so either one or two apples total are red." If this is true, then it seems to me like the "partial implicature" is really a regular implicature (i.e., one = one or two) over the restricted domain of two apples (the one the speaker saw that was definitely red and the one the speaker didn't see yet). If so, I'm not sure this result says something particularly different from the rest of the results (i.e., partial implicature, under this interpretation, doesn't seem different from regular implicature). More specifically, there are cases where the implicature is cancelled and cases where it isn't, and this happens to be one where it isn't. The magic comes in with the domain restriction the listener imposes, rather than anything about the implicature itself.
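To make this reasoning concrete for myself, here's a minimal sketch of the kind of speaker-knowledge RSA model G&S describe, in Python. This is my own toy reconstruction, not the paper's implementation: I'm assuming a uniform prior over how many of the 3 apples are red, lower-bound ("at least") semantics for number words, and a trivially true "null" utterance so the speaker always has something safe to say.

```python
from math import comb, log, exp

STATES = [0, 1, 2, 3]                 # number of red apples out of 3
PRIOR = {s: 0.25 for s in STATES}     # uniform prior over states (my assumption)

# Lower-bound semantics for number words plus a trivially true "null"
# utterance are modeling conveniences on my part, not the paper's exact setup.
UTTERANCES = ["null", "none", "one", "two", "three"]

def meaning(u, s):
    if u == "null":
        return True
    if u == "none":
        return s == 0
    return s >= {"one": 1, "two": 2, "three": 3}[u]

def obs_prob(o, s, access):
    """P(o red among `access` inspected apples | s red among 3): hypergeometric."""
    if o > s or access - o > 3 - s:
        return 0.0
    return comb(s, o) * comb(3 - s, access - o) / comb(3, access)

def literal_listener(u):
    scores = {s: PRIOR[s] * meaning(u, s) for s in STATES}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

def speaker(o, access, alpha=1.0):
    # Speaker's belief over full states, given a partial observation o
    belief = {s: PRIOR[s] * obs_prob(o, s, access) for s in STATES}
    z = sum(belief.values())
    belief = {s: v / z for s, v in belief.items()}
    scores = {}
    for u in UTTERANCES:
        lit = literal_listener(u)
        if any(b > 0 and lit[s] == 0 for s, b in belief.items()):
            scores[u] = 0.0   # u could be false for all the speaker knows
        else:
            util = sum(b * log(lit[s]) for s, b in belief.items() if b > 0)
            scores[u] = exp(alpha * util)
    z = sum(scores.values())
    return {u: v / z for u, v in scores.items()}

def listener(u, access):
    """Pragmatic listener who knows how many apples the speaker inspected."""
    scores = {
        s: PRIOR[s] * sum(obs_prob(o, s, access) * speaker(o, access)[u]
                          for o in range(access + 1))
        for s in STATES
    }
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

full = listener("one", access=3)      # speaker inspected all three apples
part = listener("one", access=2)      # speaker inspected only two
```

Even in this toy version, the qualitative pattern comes out: with full access, hearing "one" concentrates the listener's belief on exactly one red apple, while with partial access the all-three state keeps noticeably more probability than it does under full access -- the implicature is only partially cancelled, and the cancellation comes from marginalizing over what the speaker could have observed, much like the domain restriction I described above.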
Some other thoughts:
(1) Intro, p.174: I was glad G&S were explicit about who argues for a purely modularized form of implicature, because when I first read that option, my thought was, "Really? That sounds a bit like a straw man." To me, pragmatics is definitely the aspect of language knowledge that seems most amenable to incorporating non-linguistic knowledge, because it's about how we use language to communicate. From what I gathered from G&S's summary of the strongly modular theories, I assume those theories haven't yet tried to incorporate this issue of incomplete knowledge on the part of the speaker?
(2) Experiment 1, p.178: I was trying to work this through for the "one" case, with "some" being used. If Laura looks at one of three letters and then says "Some of the letters have checks inside", my intuition about what this means becomes a little wonky. For me, "some" means more than one, but of course, Laura can't know about more than one of the letters. So how do I interpret what she's saying here? My first inclination is to think she's got some kind of magic knowledge about the other letters and so knows that at least one of the other two has a check, and I wonder if some of the respondents indicated this when G&S checked how much knowledge the listeners believed Laura to have. For the remaining respondents who believed Laura only knew about one letter, we can see in Figure 2 that they basically wibbled between interpreting "some" as two or three.
(3) Figure 2 results for exact number words, p.182: When we look at the "one" row (bottom of (B) and (D)), it seems like the model prefers "one" to mean two when only one object is known (1st panel of B). However, people prefer "one" to mean three (1st panel of D). Notably, if we move up to the "two" row (middle of (B) and (D)), the model happily prefers "two" to mean three (1st panel of B), as do people (1st panel of D). I'm not sure how to interpret this exactly -- these preferences certainly seem qualitatively different at the very least, as one matches human preferences (interpreting "two") and the other doesn't (interpreting "one"). I'm also not sure why the model mechanics would yield different predictions in these two cases. Is it something about the number word meaning "one more" somehow? If so, that would explain why the model prefers "one" to mean two and "two" to mean three. However, it's not immediately obvious to me why this would fall out of the model mechanics.
(4) Conclusion, p.183: As a follow-up to our discussion last time, it seems like there's some evidence from the pragmatic inference modeling literature that there's a boundary of two on the recursion depth for theory of mind: "...we assume only one such level of reasoning...The quantitative fits we have shown suggest that limited recursion and optimization are psychologically realistic assumptions." This provides another bit of evidence that the center-embedding limit for structural recursion probably isn't specific to syntax. (Though then it becomes interesting why other kinds of recursion in syntax don't seem to have this same limit, e.g., right-branching: "This is the dog who chased the cat who ate the rat who stole the cheese." We clearly can process these three embeddings without a problem.)