Patent examiner specialization

After reading their recent article, I asked Cesare Righi and Timothy Simcoe to write this explanatory guest-post for Patently-O. My basic take-away is that USPTO Tech-Centers differ substantially in their internal levels of tech-homogeneity — making the classification system useful for Chemistry, but not really useful for Computer-related combination inventions. –Dennis

Guest Post by Cesare Righi and Timothy Simcoe

A growing body of research shows that individual examiners at the USPTO produce systematically different outcomes along important dimensions such as the grant rate, claim narrowing, and time to disposal of an application.[1] This creates a difficult trade-off for the USPTO. One approach to “fairness” would be to give each application, and therefore each applicant, the same chance of getting a particular examiner. On the other hand, the quality and efficiency of the examination process are likely to benefit when examiners specialize in particular areas.

In a new working paper, we measure technological specialization by patent examiners within different art-units at the USPTO, and study the impacts of specialization on the examination process. Our research was partly motivated by several recent studies that assume applications are randomly assigned to examiners (within art-unit-years) in order to estimate the causal impact of granting a patent. We show that a core premise behind the statistical approach used in those papers is wrong.

We find significant heterogeneity across technology centers in terms of specialization, and show that more specialized examiners are also “tougher”. The data for our analysis come from all published original utility patent applications filed between November 29, 2000 (enactment of the American Inventors Protection Act of 1999) and the end of 2012. Our main results are illustrated in the figure below.

The blue bars show how often we can reject the hypothesis that the primary USPC subclasses of applications filed in a given year and examined by a given art unit are randomly allocated across examiners within the given technology center. This measure of specialization reveals far more clustering by technology in the Life Sciences and Chemistry areas than in Computers and Communications. For technology centers 2100 and 2400, the random matching assumption appears plausible. One explanation of the pattern in Figure 1 is that computer examiners are generalists, at least relative to the rest of the examiner corps. An alternative story is that the USPC classification does a better job of measuring real technical distinctions outside of the computing technology centers.

For each technology center, the graph reports the percentage of Multinomial Tests of Agglomeration and Dispersion (MTAD) that reject the null hypothesis of random allocation within art-unit-year of applications to examiners in favor of agglomeration at the 1% significance level. Please see the paper for details on these tests.
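The MTAD tests in the paper are more involved, but the logic of testing a random-allocation null can be illustrated with a simple permutation test. The sketch below is purely illustrative and is not the authors’ procedure: it uses a hypothetical Herfindahl-style agglomeration statistic and shuffles subclass labels across examiners within one art-unit-year to ask how often random assignment would look as clustered as the observed data.

```python
import random
from collections import Counter

def agglomeration_stat(assignments):
    """Sum over examiners of the Herfindahl-style concentration of
    subclasses in their docket (higher = more specialized).
    `assignments` is a list of (examiner, subclass) pairs."""
    by_examiner = {}
    for examiner, subclass in assignments:
        by_examiner.setdefault(examiner, []).append(subclass)
    stat = 0.0
    for docket in by_examiner.values():
        counts = Counter(docket)
        n = len(docket)
        stat += sum((c / n) ** 2 for c in counts.values())
    return stat

def permutation_pvalue(assignments, n_perm=2000, seed=0):
    """One-sided p-value for the null that subclasses are randomly
    allocated across examiners within the art-unit-year: shuffle the
    subclass labels and count how often the shuffled data look at
    least as agglomerated as the observed data."""
    rng = random.Random(seed)
    examiners = [e for e, _ in assignments]
    subclasses = [s for _, s in assignments]
    observed = agglomeration_stat(assignments)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(subclasses)
        if agglomeration_stat(list(zip(examiners, subclasses))) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

A docket where each examiner sees only one subclass yields a small p-value (the null of random allocation is rejected), while a well-mixed docket does not.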

The red bars in Figure 1 show a similar pattern of specialization for individual assignees: applicants are not randomly distributed across examiners, particularly in the Chemistry area. This pattern may be driven by technical specialization. When we examine assignee agglomeration within technological subclasses, the evidence of examiner specialization becomes weaker, although we still find some in the Chemistry and Biotechnology technology centers. Reassuringly, our analysis found no evidence that some examiners disproportionately handle applications that are particularly important (with large “families”) or broad (having a short first independent claim).

To study how examiner specialization is related to examination outcomes, we created a measure based on the share of an examiner’s applications (in a particular year) having the same primary USPC subclass. The table below shows that as examiners’ workloads become more concentrated within a few subclasses, they have a lower allowance rate and produce larger increases in the length of the first claim (although the latter effect is not large). More specialized examiners also take slightly longer to process an application. The natural interpretation of these findings is that specialized examiners can more easily identify relevant prior art.

The table reports the coefficients and the standard errors (in parentheses) of ordinary least squares regressions on a sample of observations at the art-unit-year-examiner level. For each application, “share apps in same subclass” is the share of other applications examined by the same examiner within the same art-unit-year that have the same subclass as the focal application. To produce these estimates, we standardize “share apps in same subclass”, as well as “words added to 1st ind. claim” and “days from docketing to disposal”, and average all variables at the art-unit-year-examiner level. Please see the paper for details on sample and analysis.
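The regressor described above can be reconstructed directly from its definition. The sketch below is a minimal illustration under the stated definition (the function names are hypothetical, and the paper’s actual construction may differ in details): for each application in an examiner’s art-unit-year docket, it computes the share of the examiner’s *other* applications in the same primary subclass, then averages to the examiner level.

```python
from collections import Counter

def share_same_subclass(subclasses):
    """For each application in one examiner's art-unit-year docket,
    return the share of the examiner's *other* applications that fall
    in the same primary subclass (0 when the docket has one app)."""
    n = len(subclasses)
    if n < 2:
        return [0.0] * n
    counts = Counter(subclasses)
    # Exclude the focal application from both numerator and denominator.
    return [(counts[s] - 1) / (n - 1) for s in subclasses]

def examiner_mean_share(subclasses):
    """Average the per-application shares to the art-unit-year-examiner
    level, as done (before standardization) for the regressions."""
    shares = share_same_subclass(subclasses)
    return sum(shares) / len(shares) if shares else 0.0
```

For a docket of three applications in subclass X and one in subclass Y, the per-application shares are 2/3, 2/3, 2/3, and 0, so the examiner-level measure is 0.5; a fully specialized docket scores 1.0.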

To summarize, we find a significant degree of technological specialization among patent examiners working in the same art-unit. This specialization is less pronounced in some of the computer-related technology centers. We found no evidence that examiners specialize in handling important or controversial applications. And it seems that specialization is associated with a more stringent examination process, perhaps because it allows examiners to more easily identify relevant prior art.

In closing, we note that all of our research is enabled by the increased transparency and data availability made possible through the efforts of the Office of the Chief Economist of the USPTO. We thank them and hope these efforts will lead to more research on the production and the impact of patents.

About Cesare Righi: Cesare is a Postdoctoral Associate at Boston University, School of Law, Technology & Policy Research Initiative.

About Timothy Simcoe: Timothy is an Associate Professor of Strategy & Innovation at Boston University, Questrom School of Business, and a Research Associate with the Productivity, Innovation, and Entrepreneurship Program of the National Bureau of Economic Research.

21 thoughts on “Patent examiner specialization”

Give me the Examiner with subject matter expertise every time. More efficient prosecution. Better assurance of the claims navigating around the prior art. Better understanding of the technology leads to better appreciation of the benefits (sometimes subtle) of quality inventions.

When I was a patent examiner (which was quite a while ago), almost all I examined was liquid crystal display cells. When I first started, anything related to the optics of liquid crystal display cells was examined by either me or one other Examiner who also examined only liquid crystal display cells. Applications that were highly chemical or highly electrical in nature went elsewhere. However, anything liquid crystal, with no other home, went to one of the two of us. The only thing random about the assignment was whether the case went to me or to him. When I left the patent office, there were four of us working on the liquid crystal display cell applications, instead of just two.

This statement from MPEP 2141.01(a)II that “Patent Office classification of references and the cross-references in the official search notes of the class definitions are some evidence of ‘nonanalogy’ or ‘analogy’ respectively” is pure nonsense.

No person, real or hypothetical, of any skill (below ordinary, ordinary, extraordinary) would consider the Patent Office classification as “evidence” of whether a reference was analogous or non-analogous.

On the other hand, a former partner of mine who did patent litigation told me that juries eat that nonsense up with a spoon. “Well, the Patent Office classified the patent in class/subclass JKL/MNO, but reference 1 is in class/subclass ABC/DEF and reference 2 is in class/subclass UVW/XYZ. Clearly the references are not analogous!” (Jurors nod their head in agreement.)

Of course if you made that argument to an examiner, they would find it completely unpersuasive. And rightly so.

So what’s it doing in the MPEP? I can only guess that it is/was somebody’s pet contribution that has survived untold edits since time immemorial.

“So what’s it doing in the MPEP? I can only guess that it is/was somebody’s pet contribution that has survived untold edits since time immemorial.”

It’s like you understand how bureaucracy and middle management works :) Nothing more dangerous to productivity or morale than a new manager who wants to “make their mark.” They are usually followed by another new manager who makes their mark by undoing everything the previous manager did, and the circle of life continues.

When I worked at the Office, the office of classification was staffed with examiners who were too incompetent to examine. Then the Office had contractors classifying the incoming applications. I have no idea how they do it now, but it wouldn’t surprise me to find that they’ve assigned a random number generator to do the classification.

Some applications I pick up have just 1 classification, and it is very spot-on, with only a few hundred publications in the classification to search. Others have just 1, and it is so broad as to be nearly useless.

Then there are applications classified with 10+ different classifications, and the total number of publications in those classifications can exceed 30,000-50,000, again making a classification search useless as anything other than a very coarse filter.

Another factor is that applications with claims to software manipulation of data but indicating its use in a particular product line may get classified by the latter and thus examined by whatever examiners normally handle those particular products.
A further factor is that examiners in art units that run low on pending applications will get re-assigned to another art unit with larger application backlogs even if they have no expertise in the technology of that other art unit.

“A further factor is that examiners in art units that run low on pending applications will get re-assigned to another art unit with larger application backlogs even if they have no expertise in the technology of that other art unit.”

Oddly enough, certain regulars here might view that as a good thing regardless of any actual impact on examination quality.

Of course, they do so out of an untoward (and decidedly unhealthy) “feeling” about innovation that deals with software.

In my experience, the sub-class is often misleading, but the same Examiners are assigned similar technologies. So, to me, the lack of correlation between sub-class and Examiners is more a result of the weakness of the classification than of non-specialization of Examiners.

“show how often we can reject the hypothesis that the primary USPC subclass of applications filed in a given year and examined by a given art unit are randomly allocated across examiners within the given technology center.”

Instead of taking the low road with your snark, Ben, you could have realized that one may BE “completely familiar” with general statistics and still want to know how the “Multinomial Tests of Agglomeration and Dispersion (MTAD)” actually go about “reject[ing] the null hypothesis of random allocation.”

Can you explain that here, or are you just being smarmy?

What are the baseline assumptions that go into the model? Were any choices made in the modeling effort, and why were those choices made?