# Why The Patent System Does A Terrible Job Evaluating Obviousness

Fri, Aug 14th 2009 06:38pm - Mike Masnick

I've been meaning to publish a series of posts on the problems with the current attempts at patent reform that I hope to get to soon, but the punchline is that the real problem with the patent system is that it does a terrible job evaluating "obviousness." The various attempts at reform don't deal with this issue at all, and thus the problems will continue.

Patents are only supposed to be awarded on things that are new and non-obvious to those skilled in the art. But, for years, the "non-obvious" part has basically been ignored in favor of the "new." That's because all the Patent Office looks at is "prior art." I've had discussions with people in the comments who insist this makes perfect sense (most of these people are lawyers). The problem, though, is that just because something is new doesn't mean it's not obvious. It could just be a natural progression, or maybe it's just an implementation that someone finally got around to doing. While things have become a little better due to the Supreme Court's KSR v. Teleflex ruling, which changed the standard for "obviousness" on certain patents, it's still a major problem.

However, Tim Lee and Julian Sanchez got into a discussion about the recent injunction against Microsoft Word over a blatantly obvious patent, and Julian did a great job explaining why obviousness and newness are different and why explaining obviousness can be so difficult. The problem is that if an applicant wants to appeal, the examiner, who may well be a programmer, has to defend his subjective judgment of what's "obvious" with some kind of explicit argument. The argument is that since it's so difficult to explain obviousness, patent examiners just don't bother, and instead focus on the "newness" part:

> I think the source of the problem in the patent system may be linked to a point Friedrich Hayek made long ago about our tendency to overrate the economic importance of theoretical knowledge and vastly underestimate the importance of tacit or practical knowledge. The non-obviousness requirement, tied to the standard of an observer skilled in the appropriate art, is supposed to make the patent system sensitive to this kind of knowledge. But if examiners have to defend their judgments of obviousness, they're essentially being required to translate their tacit knowledge into explicit knowledge–to turn an inarticulate knack into a formal set of rules or steps. And Hayek's point was that this is often going to be difficult, if not impossible. Just as a loose analogy, consider the Principia Mathematica, Bertrand Russell and A.N. Whitehead's attempt to provide a rigorous, formalized basis for ordinary arithmetic: it takes several hundred pages to strictly establish the proposition "1+1=2." It takes a fairly advanced mathematical education to understand the explicit elaboration of a practice (counting, adding) that we expect most children to master.
>
> And the result (says Tim) is that in practice the "non-obviousness" requirement has been largely conflated with a review of the "prior art" or previous related inventions. The upshot is that unless someone else has done almost exactly the same thing before, you've got a good shot at getting the patent. Maybe this is motivated by a version of the no-five-dollar-bills-on-the-sidewalk fallacy in economics: If nobody has done it before, it can't have been all that obvious. But, of course, in a rapidly evolving area of technology, someone's always going to be the first to do something obvious.
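As an aside, the Principia Mathematica analogy quoted above can be made concrete with a modern proof assistant. Here is a toy Peano-style sketch in Lean 4 (the names `N`, `add`, and `one_plus_one` are invented for illustration, not anything from the discussion): even this tiny formalization of "1+1=2" requires machinery (inductive definitions, recursion, definitional equality) well beyond what a child needs in order to add.

```lean
-- A toy Peano encoding: numbers built from zero and successor.
inductive N where
  | zero : N
  | succ : N → N

-- Addition defined by recursion on the second argument.
def add : N → N → N
  | n, .zero   => n
  | n, .succ m => .succ (add n m)

-- "1 + 1 = 2": both sides reduce to the same term by unfolding add,
-- so reflexivity (rfl) closes the proof.
theorem one_plus_one :
    add (N.succ N.zero) (N.succ N.zero) = N.succ (N.succ N.zero) := rfl
```

Russell and Whitehead, working decades before such tools, needed hundreds of pages to reach the same proposition from first principles, which is exactly the gap between tacit practice and explicit justification that Julian is pointing at.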