Richi Jennings breaks it down: Peter Brockman, and open questions about the methodology used to determine C/R success rates. As Richi puts it:
"Statistics aside, asking C/R users if they're happy isn't the be-all and end-all of anti-spam research. C/R users may indeed be happy -- happily unaware that their spam filter is sending spam by replying to innocent third parties who's addresses have been forged by spammers."
Justin Mason's take on it is accurate and insightful, as well:
"Now, here’s the first problem. The “Spam Index” therefore considers a false negative as about as important as a false positive. However, in real terms, if a user’s legit mail is lost by a spam filter, that’s a much bigger failure than letting some more spam through. When measuring filters, you have to consider false positives as much more serious! (In fact, when we test SpamAssassin, we consider FPs to be 50 times more costly than a false negative.)"
Justin hits the nail on the head. A problem common to a number of anti-spam "researchers" is discounting the damage done by false positives (or miscounting FPs entirely): for example, tallying the number of "hits" a blacklist or spam filter gets and assuming that more hits means a better filter.
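To make the point concrete, here's a minimal sketch of a cost-weighted comparison, using the 50:1 FP-to-FN weighting Justin cites for SpamAssassin testing. The filter names and error counts are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: comparing two spam filters when a false positive
# (lost legit mail) is weighted 50x more costly than a false negative
# (spam let through). All counts below are made up.

FP_COST = 50  # one lost legit mail counts as much as 50 missed spams
FN_COST = 1

def weighted_cost(false_positives, false_negatives):
    """Total cost of a filter's mistakes under the 50:1 weighting."""
    return false_positives * FP_COST + false_negatives * FN_COST

# "Filter A" blocks aggressively: little spam gets through, but it loses legit mail.
filter_a = weighted_cost(false_positives=10, false_negatives=20)   # 520
# "Filter B" is conservative: more spam leaks through, but no legit mail is lost.
filter_b = weighted_cost(false_positives=0, false_negatives=200)   # 200

# A naive "hit count" metric would favor Filter A (it catches more mail);
# the weighted cost correctly favors Filter B.
assert filter_b < filter_a
```

The exact weight is debatable; the point is that any metric treating an FP and an FN as equally bad will systematically reward over-aggressive filters.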
Then add in the, um, awesomeness of C/R: you're bouncing unwanted spam back to unrelated third parties whose addresses were forged into the From: lines. C/R is a great way to "block" spam, by bouncing it off your bad filter and into somebody else's inbox. That's like keeping criminals away from you by helping them break into your neighbor's home. Yuck.