In recent years much has been written and said about the relative decline of the US patent system in the rankings recorded by the Global Innovation Policy Center's "IP Index". While the Index states that the US remains in number one place overall in IP generally, its patent score has slipped. Last year the US lost its number one patent ranking, falling to 10th, level with Hungary. This year, while the US patent score went up, the country still slipped two places.

The centre is an affiliate of the US Chamber of Commerce and its analysis of the American patent system has been seized on by various members of the patent community who are highly critical of recent changes introduced by both the courts and Congress. But should the index be relied on as an accurate measure of relative global standing?

We have been highly critical of how data has been used and abused with regard to patents in both the US and elsewhere over the years. For example, decidedly dodgy claims have been made about the activities of NPEs to justify patent reform in the US - something that is now spreading to Europe. Patents are a subject that legislators know little about and, because of that, the way data is presented really matters. Given how important innovation policy is, decisions have to be based on reality, not on spin. And that applies to all sides in the argument.

According to Unified Patents’ Shawn Ambwani and Jonathan Stroud there are serious question marks over the methodology used by those who compile the rankings. These, they say, should be addressed before the index is used by anyone to advocate for a particular position.

It's a hard-hitting piece and, given some of the strong criticisms it contains, IAM has given the GIPC a right to reply. We will publish this as and when we receive it.

In the meantime, here’s what Ambwani and Stroud have to say:

The Chamber of Commerce-affiliated Global Innovation Policy Center (GIPC) IP Index, which ranks the United States number one in the world overall and for enforcement, but 12th in the world on patent rights, has in recent weeks been paraded around by op-ed writers, Congressmen and lobbyists as justification for new proposals to roll back recent reforms to the US patent system. It has been used as nationalist evidence that the US has fallen behind other countries in patent policy, ostensibly putting us at a strategic or economic disadvantage. Many believe the index is endorsed by or conducted in partnership with the US Government and the US Chamber. Neither is true; rather, it is the creature of policy lobbyists openly advocating for desired legislative outcomes.

Simply reading the study and speaking with GIPC’s staff make clear the index has serious biases, flaws and admitted lobbying goals that would first need to be addressed if it is to regain credibility, and before it should be relied upon by lawmakers. Some are outlined below:

1. Arbitrarily assigned scores

The scores the index assigns - which amount to eight points for patents - are generally qualitative; it is difficult to compare or analyse them. The US score actually rose 0.25 points between 2017 and 2018, when a new factor was introduced; other countries simply rose more after rescoring. Singapore saw the biggest rise (+0.5 points), and another 10 jurisdictions scored just 0.25 points higher than the US.

At minimum, the scores relating to Patent Opposition lack consistency between countries. Many countries, despite lacking any transparency, public information or measure of patent oppositions, received scores higher than or equal to the US's. For example, Saudi Arabia scored higher than the US (0.75), though it is unclear why, as there are no records of any opposition activity there. Contrast that with the Philippines, which received a score equal to the US's (0.5) for Patent Opposition, even though, unlike in the US, success rates, usage statistics and even the volume of oppositions in the Philippines are entirely unknown. Conversely, Singapore's post-grant revocation proceedings are faster, over 100 times cheaper to file and arguably less transparent; yet Singapore ranks higher (0.75). The discrepancies are hard to square. It seems the index awarded scores based on subjective belief rather than objective measurement.

2. Misrepresentation of easily verifiable data

The report relies on just one source for its conclusion that the US score for patent opposition should be 0.5 out of a possible 1: “A third-party analysis of PTAB data in 2017 suggests that only about 5–15% of cases end with all claims being considered patentable” (Index at 157). When asked, GIPC revealed that the third-party “analysis” was a 2017 IPWatchdog blog post critical of the government’s own regularly published data.

Putting aside the odd choice of relying on a blogged editorial criticising the government’s data rather than the government’s data itself, the source itself misrepresents and reframes data based on transparent policy goals - again, a curious choice for a (we assume) supposedly objective ranking. IPWatchdog is well-known to be critical of the PTAB and the PTO, and is an unabashed advocate for increased IP enforcement - ie, an editorialising source entitled to its views but generally not the first stop when searching for objective data. Plus, the post itself contradicts the PTO’s own easily verifiable data, and that of other independent third-party providers, such as Docket Navigator.

But even if the numbers relied on were accurate, perhaps most substantially the index ignores the 30% to 40% of petitions that are not instituted - ie, trials that are never even held. Without knowing more, we must infer either that the index did not fully research these respective IPR and opposition processes, or that it knowingly misrepresented even the limited data it chose to rely on to generate its score. At a minimum, the study should look to other, non-editorial sources of data. Either way, using biased or misrepresented data produces inaccurate, unreliable results.
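The institution-rate point is ultimately one of denominators. A minimal sketch, using hypothetical round numbers (a 65% institution rate and a 10% all-claims-upheld figure, neither taken from official PTO data), shows how excluding non-instituted petitions makes patents appear to survive challenge far less often than they do:

```python
# Illustrative arithmetic only: the figures below are hypothetical round
# numbers chosen for the example, not official PTO statistics.
petitions = 1000          # IPR petitions filed
instituted = 650          # ie, a 65% institution rate; 35% never go to trial
survived_trial = 65       # ~10% of instituted trials end with all claims upheld
not_instituted = petitions - instituted  # patents never put to trial at all

# Counting only completed trials, patents appear to emerge intact ~10% of the time...
trial_only_rate = survived_trial / instituted

# ...but counting every petition filed, the patent emerges untouched far more often.
overall_rate = (survived_trial + not_instituted) / petitions

print(f"Survival rate, trials only:   {trial_only_rate:.1%}")
print(f"Survival rate, all petitions: {overall_rate:.1%}")
```

Under these assumed inputs, the same underlying outcomes yield a roughly 10% survival rate if non-instituted petitions are dropped from the denominator, but over 40% if they are counted.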

3. Obscure policy-oriented baselines

The index uses baselines comprising its own stated policy goals of promoting enforced patent rights. The scores are generally "of a binary nature", dropping below one point only where a country's policy diverges from an undisclosed baseline. The baselines are "based on best practices regarding terms of protection, enforcement mechanisms (de jure and de facto), and/or model pieces of primary or secondary legislation", and "[w]here no adequate baselines are found . . . , the baselines and values used are based on what rights holders view as an appropriate environment and level of protection"; ie, the policy goals of the study's funders.

The index here defines patent opposition vaguely as “[m]easured by the availability of mechanisms for opposing patents in a manner that does not delay the granting of a patent (in contrast to a right of opposition before the patent is granted) and ensures fair and transparent opposition proceedings”. The USPTO’s post-grant proceedings fit the bill.

The USPTO’s PTAB is one of the only venues in the world that routinely provides transparent updates, rules revisions, notice-and-comment rulemaking, due process protections and appellate review of its proceedings. As for fairness, studies have shown that PTAB claim outcomes are very similar to those for claims found unpatentable in the district courts and under the previous inter partes and ex parte reexaminations. Indeed, the newer proceedings are instituted far less often than ex parte reexamination requests were granted, and they compare favourably with other countries’ opposition systems. The major differences are the cost to challenge (versus district court) and the shorter time-to-decision, which together deliver quicker resolution at lower cost than the alternatives.

Without provable, demonstrable data demonstrating that US proceedings are unfair or opaque, it is unclear how the authors justified the point penalisation. The definition should be improved and the baselines revealed if the report is to score patent opposition, and any such score should be put in historical context.

4. No link between scores and economic benefit

The index also lacks context. The GIPC starts its analysis by asking: “Does a given economy’s intellectual property system provide a reliable basis for investment in the innovation and creativity lifecycle?” But it never addresses or defines those terms; if innovation, investment and economic benefit are not defined in the index, how are they measured?

The report ostensibly spends substantial effort developing a methodology and scoring to compare countries against self-selected “baselines” reflecting the pro-enforcement policies of the GIPC, but it does not show how its scoring correlates to or reflects “investment in the innovation and creativity lifecycle”. There is no suggestion, much less proof, that the US economy has suffered at all, much less at the hands of patent policies. There is also no evidence connecting the eight indicators to historical “investment”. The correlation between IP enforcement and economic benefit for US companies or the US economy (absent equal detriment to other US companies) has yet to be shown by any quantitative measurement.

Until innovation can be demonstrably related to the scores in the index, it remains little more than a transparent lobbyist’s tool. The first step toward repairing credibility would be to demonstrate that the scores bear some relationship to innovation before relying on them as a measure going forward; that is, unless the purpose of the index is really just to provide talking points to lobbyists and policy hawks to support the ease of patent monetisation against other US companies.