The paper notes drawbacks in both approaches. Depending on the resources available at each site, the trust and overhead required to maintain consistent counts may not be cost-effective. For something like AdSense, where the publisher (the participating web site) takes a cut of the proceeds paid by the advertiser, the scheme imposes additional trust and overhead requirements, some of which the publisher may not understand or be able to meet. At any rate, while this technique might (if implemented carefully) ease some tensions between publishers and advertisers when discrepancies occur, I don't think it resolves enough discrepancies to be adopted as a standard. Hypothetically speaking, even if this is the best that can be done to limit discrepancies, it doesn't change my view that PPC is still a poor business model.
Since the paper was published in 1998, I think it is solid evidence that click fraud was a well-understood problem by the time PPC was implemented. That being the case, the plaintiffs in the lawsuit against the search engines can argue that the search engines were aware of the risks involved (or should have been). I don't think the search engines are guilty of collusion, but I could see the plaintiffs successfully arguing that many, perhaps most, customers experienced some click fraud over the course of their campaigns. If those customers didn't receive discounts or refunds, the plaintiffs could argue they are entitled to them.