First, [my administration would restore competition to the tech sector] by passing legislation that requires large tech platforms to be designated as “Platform Utilities” and broken apart from any participant on that platform.

* * *

For smaller companies…, their platform utilities would be required to meet the same standard of fair, reasonable, and nondiscriminatory dealing with users, but would not be required to structurally separate….

Elizabeth Warren’s brave new world

Let’s consider for a moment what this brave new world will look like — not the nirvana imagined by regulators and legislators who believe that decimating a company’s business model will deter only the “bad” aspects of the model while preserving the “good,” as if by magic, but the inevitable reality of antitrust populism.

Utilities? Are you kidding? For an overview of what the future of tech would look like under Warren's "Platform Utility" policy, take a look at your water, electricity, and sewage service. Have you noticed any improvement (or reduction in cost) in those services over the past 10 or 15 years? How about the roads? Amtrak? Platform businesses operating under a similar regulatory regime would similarly stagnate. Enforcing platform "neutrality" necessarily requires meddling in the most minute of business decisions, inevitably creating unintended and costly consequences along the way.

Network companies, like all businesses, differentiate themselves by offering unique bundles of services to customers. By definition, this means vertically integrating with some product markets and not others. Why are digital assistants like Siri bundled into mobile operating systems? Why aren’t the vast majority of third-party apps also bundled into the OS? If you want utilities regulators instead of Google or Apple engineers and designers making these decisions on the margin, then Warren’s “Platform Utility” policy is the way to go.

Grocery Stores. To take one specific case cited by Warren, how much innovation was there in the grocery store industry before Amazon bought Whole Foods? Since the acquisition, large grocery retailers, like Walmart and Kroger, have increased their investment in online services to better compete with the e-commerce champion. Many industry analysts expect grocery stores to use computer vision technology and artificial intelligence to improve the efficiency of check-out in the near future.

Smartphones. Imagine how forced neutrality would play out in the context of iPhones. If Apple can't sell its own apps, it also can't pre-install its own apps. A brand new iPhone with no apps — and even more importantly, no App Store — would be, well, just a phone, out of the box. How would users even access a site or app store from which to download independent apps? Would Apple be allowed to pre-install someone else's apps? That's discriminatory, too. Maybe it would be forced to offer a menu of all available apps in all categories (like the famously useless browser ballot screen demanded by the European Commission in its Microsoft antitrust case)? It's hard to see how that benefits consumers — or even app developers.

Internet Search. Or take search. Calls for “search neutrality” have been bandied about for years. But most proponents of search neutrality fail to recognize that all Google’s search results entail bias in favor of its own offerings. As Geoff Manne and Josh Wright noted in 2011 at the height of the search neutrality debate:

[S]earch engines offer up results in the form not only of typical text results, but also maps, travel information, product pages, books, social media and more. To the extent that alleged bias turns on a search engine favoring its own maps, for example, over another firm’s, the allegation fails to appreciate that text results and maps are variants of the same thing, and efforts to restrain a search engine from offering its own maps is no different than preventing it from offering its own search results.

Never mind that Google with forced non-discrimination likely means Google offering only the antiquated "ten blue links" search results page it started with in 1998 instead of the far more useful "rich" results it offers today; logically it would also mean Google somehow offering the set of links produced by any and all other search engines' algorithms, in lieu of its own. If you think Google will continue to invest in and maintain the wealth of services it offers today on the strength of the profits derived from those search results, well, Elizabeth Warren is probably already your favorite politician.

And regulatory oversight of algorithmic content won’t just result in an impoverished digital experience; it will inevitably lead to an authoritarian one, as well:

Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access…. This sort of control is deeply problematic… [because it saddles users] with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Digital Assistants. Consider also the veritable cage match among the tech giants to offer “digital assistants” and “smart home” devices with ever-more features at ever-lower prices. Today the allegedly non-existent competition among these companies is played out most visibly in this multi-featured market, comprising advanced devices tightly integrated with artificial intelligence, voice recognition, advanced algorithms, and a host of services. Under Warren’s nondiscrimination principle this market disappears. Each device can offer only a connectivity platform (if such a service is even permitted to be bundled with a physical device…) — and nothing more.

But such a world entails not only the end of an entire, promising avenue of consumer-benefiting innovation, it also entails the end of a promising avenue of consumer-benefiting competition. It beggars belief that anyone thinks consumers would benefit by forcing technology companies into their own silos, ensuring that the most powerful sources of competition for each other are confined to their own fiefdoms by order of law.

Breaking business models

Beyond the product-feature dimension, Sen. Warren’s proposal would be devastating for innovative business models. Why is Amazon Prime Video bundled with free shipping? Because the marginal cost of distribution for video is close to zero and bundling it with Amazon Prime increases the value proposition for customers. Why is almost every Google service free to users? Because Google’s business model is supported by ads, not monthly subscription fees. Each of the tech giants has carefully constructed an ecosystem in which every component reinforces the others. Sen. Warren’s plan would not only break up the companies, it would prohibit their business models — the ones that both created and continue to sustain these products. Such an outcome would manifestly harm consumers.

Both of Warren's policy "solutions" are misguided and will lead to higher prices and less innovation. Her cause for alarm is built on a multitude of mistaken assumptions, but let's address just a few (Warren's claims quoted below):

“Nearly half of all e-commerce goes through Amazon.” Yes, but it has only 5% of total retail in the United States. As my colleague Kristian Stout says, “the Internet is not a market; it’s a distribution channel.”
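The arithmetic behind this point is worth making explicit: if roughly half of e-commerce flows through Amazon, yet Amazon accounts for only about 5% of total retail, then e-commerce as a whole is only about a tenth of retail. A minimal sketch (the 50% and 5% figures are the ones cited above; the implied e-commerce share is derived from them, not a separately sourced statistic):

```python
# Derive the implied size of e-commerce from the two cited figures.
amazon_share_of_ecommerce = 0.50   # "nearly half of all e-commerce"
amazon_share_of_retail = 0.05      # "only 5% of total retail"

# Amazon's retail sales equal 50% of e-commerce AND 5% of total retail,
# so e-commerce's share of retail is the ratio of the two.
ecommerce_share_of_retail = amazon_share_of_retail / amazon_share_of_ecommerce

print(f"Implied e-commerce share of total retail: {ecommerce_share_of_retail:.0%}")
```

Which is exactly the point of the "distribution channel" quip: dominance of one channel says little about dominance of the market.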

“Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate.” The real story, as the founders of Diapers.com freely admitted, is that they sold diapers as what they hoped would be a loss leader, intending to build out sales of other products once they had a base of loyal customers:

And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

Like all entrepreneurs, Diapers.com’s founders took a calculated risk that didn’t pay off as hoped. Amazon subsequently acquired the company (after it had declined a similar buyout offer from Walmart). (Antitrust laws protect consumers, not inefficient competitors). And no, this was not a case of predatory pricing. After many years of trying to make the business profitable as a subsidiary, Amazon shut it down in 2017.

“In the 1990s, Microsoft — the tech giant of its time — was trying to parlay its dominance in computer operating systems into dominance in the new area of web browsing. The federal government sued Microsoft for violating anti-monopoly laws and eventually reached a settlement. The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge.” The government’s settlement with Microsoft is not the reason Google and Facebook were able to emerge. Neither company entered the browser market at launch. Instead, they leapfrogged the browser entirely and created new platforms for the web (only later did Google create Chrome).

Furthermore, if the Microsoft case is responsible for "clearing a path" for Google, is it not also responsible for clearing a path for Google's alleged depredations? If the answer is that antitrust enforcement should be consistently more aggressive in order to rein in Google, too, when it gets out of line, then how can we be sure that the same more-aggressive enforcement standard wouldn't have curtailed the extent of the Microsoft ecosystem in which it was profitable for Google to become Google? Warren implicitly assumes that only the enforcement decision in Microsoft was relevant to Google's rise. But Microsoft doesn't exist in a vacuum. If Microsoft cleared a path for Google, so did every decision not to intervene, which, all combined, created the legal, business, and economic environment in which Google operates.

Warren characterizes Big Tech as a weight on the American economy. In fact, nothing could be further from the truth. These superstar companies are the drivers of productivity growth, all ranking at or near the top in spending on research and development. And while data may not be the new oil, extracting value from it may require similar levels of capital expenditure. Last year, Big Tech spent as much as or more than the world's largest oil companies on capex.

The exact causes of the decline in business dynamism are still uncertain, but recent research points to a much more mundane explanation: demographics. Labor force growth has been declining, which has led to an increase in average firm age as fewer workers start their own businesses.

Furthermore, it’s not at all clear whether this is actually a decline in business dynamism, or merely a change in business model. We would expect to see the same pattern, for example, if would-be startup founders were designing their software for acquisition and further development within larger, better-funded enterprises.

Will Rinehart recently looked at the literature to determine whether there is indeed a “kill zone” for startups around Big Tech incumbents. One paper finds that “an increase in fixed costs explains most of the decline in the aggregate entrepreneurship rate.” Another shows an inverse correlation across 50 countries between GDP and entrepreneurship rates. Robert Lucas predicted these trends back in 1978, pointing out that productivity increases would lead to wage increases, pushing marginal entrepreneurs out of startups and into big companies.

It’s notable that many in the venture capital community would rather not have Sen. Warren’s “help”:

If big companies like Google, Facebook, and Amazon are prevented from acquiring startups, that actually reduces competition. The reason is that if there is less M&A due to legal uncertainty, there is a reduced incentive for angels & VCs to fund those startups in the first place.

just to sustain constant growth in GDP per person, the U.S. must double the amount of research effort searching for new ideas every 13 years to offset the increased difficulty of finding new ideas.

If this assessment is correct, it may well be that coming up with productive and profitable innovations is simply becoming more expensive, and thus, at the margin, each dollar of venture capital can fund less of it. Ironically, this also implies that larger firms, which can better afford the additional resources required to sustain exponential growth, are a crucial part of the solution, not the problem.
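The quantitative implication of the passage quoted above can be checked with a bit of arithmetic: a 13-year doubling time corresponds to compound growth of roughly 5.3% per year, so over, say, a 50-year horizon, research effort would have to grow more than 14-fold just to hold per-person GDP growth constant. A quick sketch (the 13-year doubling time comes from the quote; the 50-year horizon is purely illustrative):

```python
import math

DOUBLING_TIME_YEARS = 13  # from the quoted estimate

# Continuous-compounding growth rate implied by a 13-year doubling time:
# effort(t) = effort(0) * exp(r * t), with exp(r * 13) = 2  =>  r = ln(2) / 13.
annual_growth = math.log(2) / DOUBLING_TIME_YEARS

# Cumulative multiple of research effort required over an illustrative horizon.
horizon = 50  # years (illustrative)
multiple = 2 ** (horizon / DOUBLING_TIME_YEARS)

print(f"Implied annual growth in research effort: {annual_growth:.1%}")
print(f"Research effort multiple over {horizon} years: {multiple:.1f}x")
```

That compounding requirement is what makes the scale advantages of large firms relevant to the argument.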

Warren believes that Big Tech is the cause of our social ills. But Americans have more trust in Amazon, Facebook, and Google than in the political institutions that would break them up. It would be wise for her to reflect on why that might be the case. By punishing our most valuable companies for past successes, Warren would chill competition and decrease returns to innovation.

Finally, in what can only be described as tragic irony, the most prominent political figure who shares Warren’s feelings on Big Tech is President Trump. Confirming the horseshoe theory of politics, far-left populism and far-right populism seem less distinguishable by the day. As our colleague Gus Hurwitz put it, with this proposal Warren is explicitly endorsing the unitary executive theory and implicitly endorsing Trump’s authority to direct his DOJ to “investigate specific cases and reach specific outcomes.” Which cases will he want to have investigated and what outcomes will he be seeking? More good questions that Senator Warren should be asking. The notion that competition, consumer welfare, and growth are likely to increase in such an environment is farcical.

Longtime TOTM blogger Paul Rubin has a new book now available for preorder on Amazon.

The book’s description reads:

In spite of its numerous obvious failures, many presidential candidates and voters are in favor of a socialist system for the United States. Socialism is consistent with our primitive evolved preferences, but not with a modern complex economy. One reason for the desire for socialism is the misinterpretation of capitalism.

The standard definition of free market capitalism is that it’s a system based on unbridled competition. But this oversimplification is incredibly misleading—capitalism exists because human beings have organically developed an elaborate system based on trust and collaboration that allows consumers, producers, distributors, financiers, and the rest of the players in the capitalist system to thrive.

Paul Rubin, the world’s leading expert on cooperative capitalism, explains simply and powerfully how we should think about markets, economics, and business—making this book an indispensable tool for understanding and communicating the vast benefits the free market bestows upon societies and individuals.

On March 14, the Federal Circuit will hear oral arguments in the case of BTG International v. Amneal Pharmaceuticals that could dramatically influence the future of duplicative patent litigation in the pharmaceutical industry. The court will determine whether the America Invents Act (AIA) bars patent challengers that succeed in invalidating patents in inter partes review (IPR) proceedings from repeating their winning arguments in district court. Courts and litigants had previously assumed that the AIA's estoppel provision only prevented unsuccessful challengers from reusing failed arguments. However, in an amicus brief filed in the case last month, the U.S. Patent and Trademark Office (USPTO) argued that, although it seems counterintuitive, under the AIA even parties that succeed in getting patents invalidated in IPR cannot reuse their arguments.

If the Federal Circuit agrees with the USPTO, patent challengers could be strongly deterred from bringing IPR proceedings because it would mean they couldn't reuse any arguments in district court. This deterrent effect would be especially strong for generic drug makers, who must prevail in district court in order to get approval for their Abbreviated New Drug Application from the FDA.

Critics of the USPTO's position assert that it will frustrate the AIA's purpose of facilitating generic competition. However, if the Federal Circuit adopts the position, it would also reduce the amount of duplicative litigation that plagues the pharmaceutical industry and threatens new drug innovation. According to a 2017 analysis of over 6,500 IPR challenges filed between 2012 and 2017, approximately 80% of IPR challenges were filed during an ongoing district court case challenging the patent. This duplicative litigation can increase costs for both challengers and patent holders; the median cost for an IPR proceeding that results in a final decision is $500,000, and the median cost for just filing an IPR petition is $100,000. Moreover, because of duplicative litigation, pharmaceutical patent holders face persistent uncertainty about the validity of their patents. Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain whether the patents for that drug can withstand IPR proceedings that are clearly stacked against them. And if IPR causes drug innovation to decline, a significant body of research predicts that patients' health outcomes will suffer as a result.

In addition, deterring IPR challenges would help to reestablish balance between drug patent owners and patent challengers. As I've previously discussed here and here, the pro-challenger bias in IPR proceedings has led to significant deviation in patent invalidation rates under the two pathways; compared to district court challenges, patents are twice as likely to be found invalid in IPR challenges. The challenger is more likely to prevail in IPR proceedings because the Patent Trial and Appeal Board (PTAB) applies a lower standard of proof for invalidity in IPR proceedings than do federal courts. Furthermore, if the challenger prevails in the IPR proceedings, the PTAB's decision to invalidate a patent can often "undo" a prior district court decision in favor of the patent holder. Finally, although both district court judgments and PTAB decisions are appealable to the Federal Circuit, the court applies a more deferential standard of review to PTAB decisions, increasing the likelihood that they will be upheld compared to the district court decision.

However, the USPTO acknowledges that its position is counterintuitive because it means that a court could not consider invalidity arguments that the PTAB found persuasive. It is unclear whether the Federal Circuit will refuse to adopt this counterintuitive position or whether Congress will amend the AIA to limit estoppel to failed invalidity claims. As a result, a better and more permanent way to eliminate duplicative litigation would be for Congress to enact the Hatch-Waxman Integrity Act of 2019 (HWIA). The HWIA was introduced by Senator Thom Tillis in the Senate and Congressman Bill Flores in the House, and was proposed in the last Congress by Senator Orrin Hatch. The HWIA eliminates the ability of drug patent challengers to file duplicative claims in both federal court and IPR proceedings. Instead, they must choose either district court litigation (which saves considerable costs by allowing generics to rely on the brand company's safety and efficacy studies for FDA approval) or IPR proceedings (which are faster and provide certain pro-challenger provisions).

Thus, the HWIA would reduce duplicative litigation that increases costs and uncertainty for drug patent owners. This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and ensure that consumers continue to have access to life-improving drugs.

In my fifteen years as a law professor, I’ve become convinced that there’s a hole in the law school curriculum. When it comes to regulation, we focus intently on the process of regulating and the interpretation of rules (see, e.g., typical administrative law and “leg/reg” courses), but we rarely teach students what, as a matter of substance, distinguishes a good regulation from a bad one. That’s unfortunate, because lawyers often take the lead in crafting regulatory approaches.

In the fall of 2017, I published a book seeking to fill this hole. That book, How to Regulate: A Guide for Policymakers, is the inspiration for a symposium that will occur this Friday (Feb. 8) at the University of Missouri Law School.

(1) How, as a substantive matter, should regulation be structured in particular areas? (Specifically, what regulatory approaches would be most likely to forbid the bad while chilling as little of the good as possible and while keeping administrative costs in check? In other words, what rules would minimize the sum of error and decision costs?), and

(2) What procedures would be most likely to generate such optimal rules?

The symposium webpage includes the schedule for the day (along with a button to Livestream the event), but here’s a quick overview.

I’ll set the stage by discussing the challenge policymakers face in trying to accomplish three goals simultaneously: ban bad instances of behavior, refrain from chilling good ones, and keep rules simple enough to be administrable.

We’ll then hear from a panel of experts about the principles that would best balance those competing concerns in their areas of expertise. Specifically:

Hopefully, we can identify some common threads among the substantive principles that should guide effective regulation in these disparate areas.

Before we turn to consider regulatory procedures, we will hear from our keynote speaker, Commissioner Hester Peirce of the SEC. As The Economist recently reported, Commissioner Peirce has been making waves with her speeches, many of which have gone back to basics and asked why the government is intervening and whether it’s doing so in an optimal fashion.

Following Commissioner Peirce’s address, we will hear from the following panelists about how regulatory procedures should be structured in order to generate substantively optimal rules:

Bridget Dooling (George Washington University; former official in the White House Office of Information and Regulatory Affairs);

Ken Davis (former Deputy Attorney General of Virginia and member of the Federalist Society’s Regulatory Transparency Project);

James Broughel (Senior Fellow at the Mercatus Center; expert on state-level regulatory review procedures); and

Justin Smith (former counsel to Missouri governor; led the effort to streamline the Missouri regulatory code).

As you can see, this Friday is going to be a great day at Mizzou Law. If you’re close enough to join us in person, please come. Otherwise, please join us via Livestream.

In the opening seconds of what was surely one of the worst oral arguments in a high-profile case that I have ever heard, Pantelis Michalopoulos, arguing for petitioners against the FCC’s 2018 Restoring Internet Freedom Order (RIFO) expertly captured both why the side he was representing should lose and the overall absurdity of the entire net neutrality debate: “This order is a stab in the heart of the Communications Act. It would literally write ‘telecommunications’ out of the law. It would end the communications agency’s oversight over the main communications service of our time.”

The main communications service of our time is the Internet. The Communications and Telecommunications Acts were written before the advent of the modern Internet, for an era when the telephone was the main communications service of our time. The reality is that technological evolution has written “telecommunications” out of these Acts – the “telecommunications services” they were written to regulate are no longer the important communications services of the day.

The basic question of the net neutrality debate is whether we expect Congress to weigh in on how regulators should respond when an industry undergoes fundamental change, or whether we should instead allow those regulators to redefine the scope of their own authority. In the RIFO case, petitioners (and, more generally, net neutrality proponents) argue that agencies should get to define their own authority. Those on the other side of the issue (including me) argue that that it is up to Congress to provide agencies with guidance in response to changing circumstances – and worry that allowing independent and executive branch agencies broad authority to act without Congressional direction is a recipe for unfettered, unchecked, and fundamentally abusive concentrations of power in the hands of the executive branch.

These arguments were central to the DC Circuit's evaluation of the prior FCC net neutrality order — the Open Internet Order. But rather than consider the core issue of the case, the four hours of oral arguments this past Friday were instead a relitigation of long-ago addressed ephemeral distinctions, padded out with irrelevance and esoterica, and argued with a passion available only to those who believe in faerie tales and monsters under their bed. Perhaps some reveled in hearing counsel for both sides clumsily fumble through strained explanations of the difference between standalone telecommunications services and information services that are by definition integrated with them, or awkward discussions about how ISPs may implement hypothetical prioritization technologies that have not even been developed. These well-worn arguments successfully demonstrated, once again, how many angels can dance upon the head of a single pin — only never before have so many angels been so irrelevant.

This time around, petitioners challenging the order were able to scare up some intervenors to make novel arguments on their behalf. Most notably, they were able to scare up a group of public safety officials to argue that the FCC had failed to consider arguments that the RIFO would jeopardize public safety services that rely on communications networks. I keep using the word “scare” because these arguments are based upon incoherent fears peddled by net neutrality advocates in order to find unsophisticated parties to sign on to their policy adventures. The public safety fears are about as legitimate as concerns that the Easter Bunny might one day win the Preakness – and merited as much response from the FCC as a petition from the Racehorse Association of America demanding the FCC regulate rabbits.

In the end, I have no idea how the DC Circuit is going to come down in this case. Public Safety concerns — like declarations of national emergencies — are often given undue and unwise weight. And there is a legitimately puzzling, if fundamentally academic, argument about a provision of the Communications Act (47 USC 257(c)) that Congress repealed after the Order was adopted, and that was a noteworthy part of the notice the FCC gave when the Order was proposed; that argument could lead the Court to remand the Order back to the Commission.

In the end, however, this case is unlikely to address the fundamental question of whether the FCC has any business regulating Internet access services. If the FCC loses, we’ll be back here in another year or two; if the FCC wins, we’ll be back here the next time a Democrat is in the White House. And the real tragedy is that every minute the FCC spends on the interminable net neutrality non-debate is a minute not spent on issues like closing the rural digital divide or promoting competitive entry into markets by next generation services.

So much wasted time. So many billable hours. So many angels dancing on the head of a pin. If only they were the better angels of our nature.

Postscript: If I sound angry about the endless fights over net neutrality, it’s because I am. I live in one of the highest-cost, lowest-connectivity states in the country. A state where much of the territory is covered by small rural carriers for whom the cost of just following these debates can mean delaying the replacement of an old switch, upgrading a circuit to fiber, or wiring a street. A state in which if prioritization were to be deployed it would be so that emergency services would be able to work over older infrastructure or so that someone in a rural community could remotely attend classes at the University or consult with a primary care physician (because forget high speed Internet – we have counties without doctors in them). A state in which if paid prioritization were to be developed it would be to help raise capital to build out service to communities that have never had high-speed Internet access.

So yes: the fact that we might be in for another year of rule making followed by more litigation because some firefighters signed up for the wrong wireless service plan and then were duped into believing a technological, economic, and political absurdity about net neutrality ensuring they get free Internet access does make me angry. Worse, unlike the hypothetical harms net neutrality advocates are worried about, the endless discussion of net neutrality causes real, actual, concrete harm to the people net neutrality advocates like to pat themselves on the back as advocating for. We should all be angry about this, and demanding that Congress put this debate out of our misery.

The US Senate Subcommittee on Antitrust, Competition Policy, and Consumer Rights recently held hearings to see what, if anything, the U.S. might learn from the approaches of other countries regarding antitrust and consumer protection. US lawmakers would do well, however, to be wary of examples from other jurisdictions rooted in different legal and cultural traditions. Shortly before the hearing, for example, the Australian Competition and Consumer Commission (ACCC) announced that it was exploring broad new regulations, predicated on theoretical harms, that would threaten both consumer welfare and individuals' rights to free expression in ways completely at odds with American norms.

The ACCC seeks vast discretion to shape the way that online platforms operate — a regulatory venture that threatens to undermine the value which companies provide to consumers. Even more troubling are its plans to regulate free expression on the Internet, which if implemented in the US, would contravene Americans’ First Amendment guarantees to free speech.

The ACCC’s errors are fundamental, starting with the contradictory assertion that:

Australian law does not prohibit a business from possessing significant market power or using its efficiencies or skills to “out compete” its rivals. But when their dominant position is at risk of creating competitive or consumer harm, governments should stay ahead of the game and act to protect consumers and businesses through regulation.

Thus, the ACCC recognizes that businesses may work to beat out their rivals and thus gain in market share. However, this is immediately followed by the caveat that the state may prevent such activity, when such market gains are merely “at risk” of coming at the expense of consumers or business rivals. Thus, the ACCC does not need to show that harm has been done, merely that it might take place — even if the products and services being provided otherwise benefit the public.

The ACCC report then uses this fundamental error as the basis for recommending content regulation of digital platforms like Facebook and Google (who have apparently been identified by Australia’s clairvoyant PreCrime Antitrust unit as being guilty of future violations). It argues that the lack of transparency and oversight in the algorithms these companies employ could result in a range of possible social and economic damages, despite the fact that consumers continue to rely on these products. These potential issues include prioritization of the content and products of the host company, under-serving of ads within their products, and creation of “filter bubbles” that conceal content from particular users thereby limiting their full range of choice.

The focus of these concerns is the kind and quality of information that users are receiving as a result of the “media market” that results from the “ranking and display of news and journalistic content.” As a remedy for its hypothesized concerns, the ACCC has proposed a new regulatory authority tasked with overseeing the operation of the platforms’ algorithms. The ACCC claims this would ensure that search and newsfeed results are balanced and of high quality. This policy would undermine consumer welfare in pursuit of remedying speculative harms.

Rather than the search results or news feeds being determined by the interaction between the algorithm and the user, the results would instead be altered to comply with criteria established by the ACCC. Yet, this would substantially undermine the value of these services. The competitive differentiation between, say, Google and Bing lies in their unique, proprietary search algorithms. The ACCC’s intervention would necessarily remove some of this differentiation between online providers, notionally to improve the “quality” of results. But such second-guessing by regulators would quickly undermine the actual quality and utility of these services to users.

A second, more troubling prospect is the threat of censorship that emerges from this kind of regime. Any agency granted a mandate to undertake such algorithmic oversight, and to override or reconfigure the product of online services, thereby controls the content consumers may access. Such regulatory power thus affects not only what users can read, but what media outlets might be able to say in order to successfully offer curated content. This sort of control is deeply problematic since users are no longer merely faced with a potential “filter bubble” based on their own preferences interacting with a single provider, but with a pervasive set of speech controls promulgated by the government. The history of state censorship demonstrates serious harms to both social welfare and the rule of law, and it should not be emulated.

Undoubtedly antitrust and consumer protection laws should be continually reviewed and revised. However, if we wish to uphold the principles upon which the US was founded and continue to protect consumer welfare, the US should avoid following the path Australia proposes to take.

A recent working paper by Hashmat Khan and Matthew Strathearn attempts to empirically link anticompetitive collusion to the boom and bust cycles of the economy.

The level of collusion is higher during a boom relative to a recession as collusion occurs more frequently when demand is increasing (entering into a collusive arrangement is more profitable and deviating from an existing cartel is less profitable). The model predicts that the number of discovered cartels and hence antitrust filings should be procyclical because the level of collusion is procyclical.

The first sentence—a hypothesis that collusion is more likely during a “boom” than in recession—seems reasonable. At the same time, a case can be made that collusion would be more likely during recession. For example, a reduced risk of entry from competitors would reduce the cost of collusion.

The second sentence, however, seems a stretch, mainly because it doesn’t recognize the time delay between the collusive activity, the date the collusion is discovered by authorities, and the date the case is filed.

Perhaps more importantly, it doesn’t acknowledge that many collusive arrangements span months, if not years. That span of time could include periods of “boom” and periods of recession. Thus, it can be argued that the date of the filing has little (or nothing) to do with the span over which the collusive activity occurred.

I did a very lazy man’s test of my criticisms. I looked at six of the filings cited by Khan and Strathearn for the year 2011, a “boom” year with a high number of horizontal price fixing cases filed.

My first suspicion was correct. In these six cases, an average of more than three years passed between the date of the last collusive activity and the date the case was filed. Thus, whether the economy is in boom or bust when the case is filed provides no useful information about the state of the economy when the collusion occurred.

From July 2001 through September 2009, 24 of the 99 months were in recession. In other words, during this period, there was a 24 percent chance the economy was in recession in any given month.

Five of the six collusive arrangements began when the economy was in recovery. Only one began during a recession. This may seem to support their conclusion that collusive activity is more likely during a recovery. However, even if the arrangements began randomly, there would be a 55 percent chance that five or more began during a recovery. So, you can’t read too much into the observation that most of the collusive agreements began during a “boom.”

In two of the cases, the collusive activity occurred during a span of time that included no recession. The chance of this happening randomly is less than 1 in 20,000, supporting their conclusion regarding collusive activity and the business cycle.
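These back-of-envelope probabilities are easy to check with the standard library. The sketch below reproduces only the binomial-tail figure; the 1-in-20,000 number depends on the exact spans of the two cartels, which aren’t given here.

```python
# Sanity check on the binomial figure above, using only the standard library.
from math import comb

# July 2001 through September 2009: 99 months, 24 of them in recession,
# so a randomly chosen month falls in a recovery with probability 75/99.
p_recovery = 75 / 99

# If the six cartels' start dates were random months, the chance that five
# or more of them land in a recovery month is the binomial tail P(X >= 5).
p_five_or_more = sum(
    comb(6, k) * p_recovery**k * (1 - p_recovery)**(6 - k)
    for k in (5, 6)
)
print(round(100 * p_five_or_more))  # ~55 percent
```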

Khan and Strathearn fall short in linking collusive activity to the business cycle but do a good job of linking antitrust enforcement activities to the business cycle. The information they use from the DOJ website is sufficient to determine when the collusive activity occurred—but it’ll take more vigorous “scrubbing” (their word) of the site to get the relevant data.

The bigger question, however, is the relevance of this research. Naturally, one could argue this line of research indicates that competition authorities should be extra vigilant during a booming economy. Yet, Adam Smith famously noted, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.” This suggests that collusive activity—or the temptation to engage in such activity—is always and everywhere present, regardless of the business cycle.


Last week, Senator Orrin Hatch, Senator Thom Tillis, and Representative Bill Flores introduced the Hatch-Waxman Integrity Act of 2018 (HWIA) in both the Senate and the House of Representatives. If enacted, the HWIA would help to ensure that the unbalanced inter partes review (IPR) process does not stifle innovation in the drug industry and jeopardize patients’ access to life-improving drugs.

Created under the America Invents Act of 2011, IPR is a new administrative pathway for challenging patents. It was, in large part, created to fix the problem of patent trolls in the IT industry; the trolls allegedly used questionable or “low quality” patents to extort profits from innovating companies. IPR created an expedited pathway to challenge patents of dubious quality, thus making it easier for IT companies to invalidate low quality patents.

However, IPR is available for patents in any industry, not just the IT industry. In the market for drugs, IPR offers an alternative to the litigation pathway that Congress created over three decades ago in the Hatch-Waxman Act. Although IPR seemingly fixed a problem that threatened innovation in the IT industry, it created a new problem that directly threatened innovation in the drug industry. I’ve previously published an article explaining why IPR jeopardizes drug innovation and consumers’ access to life-improving drugs. With Hatch-Waxman, Congress sought to achieve a delicate balance between stimulating innovation from brand drug companies, who hold patents, and facilitating market entry from generic drug companies, who challenge the patents. However, IPR disrupts this balance as critical differences between IPR proceedings and Hatch-Waxman litigation clearly tilt the balance in the patent challengers’ favor. In fact, IPR has produced noticeably anti-patent results; patents are twice as likely to be found invalid in IPR challenges as they are in Hatch-Waxman litigation.

The Patent Trial and Appeal Board (PTAB) applies a lower standard of proof for invalidity in IPR proceedings than do federal courts in Hatch-Waxman proceedings. In federal court, patents are presumed valid and challengers must prove each patent claim invalid by “clear and convincing evidence.” In IPR proceedings, no such presumption of validity applies and challengers must only prove patent claims invalid by the “preponderance of the evidence.”

Moreover, whereas patent challengers in district court must establish sufficient Article III standing, IPR proceedings do not have a standing requirement. This has given rise to “reverse patent trolling,” in which entities that are not litigation targets, or even participants in the same industry, threaten to file an IPR petition challenging the validity of a patent unless the patent holder agrees to specific pre-filing settlement demands. The lack of a standing requirement has also led to the exploitation of the IPR process by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet.

Finally, patent owners are often forced into duplicative litigation in both IPR proceedings and federal court litigation, leading to persistent uncertainty about the validity of their patents. Many patent challengers that are unsuccessful in invalidating a patent in district court may pursue subsequent IPR proceedings challenging the same patent, essentially giving patent challengers “two bites at the apple.” And if the challenger prevails in the IPR proceedings (which is easier to do given the lower standard of proof), the PTAB’s decision to invalidate a patent can often “undo” a prior district court decision. Further, although both district court judgments and PTAB decisions are appealable to the Federal Circuit, the court applies a more deferential standard of review to PTAB decisions, increasing the likelihood that they will be upheld compared to the district court decision.

The pro-challenger bias in IPR creates significant uncertainty for patent rights in the drug industry. As an example, just last week patent claims for drugs generating $6.5 billion for drug company Sanofi were invalidated in an IPR proceeding. Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them. And, if IPR causes drug innovation to decline, a significant body of research predicts that patients’ health outcomes will suffer as a result.

The HWIA, which applies only to the drug industry, is designed to restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It eliminates challengers’ ability to file duplicative claims in both federal court and through the IPR process. Instead, they must choose between Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) and IPR (which is faster and provides certain pro-challenger provisions). In addition to eliminating generic challengers’ “second bite at the apple,” the HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.

Thus, if enacted, the HWIA would create incentives that reestablish Hatch-Waxman litigation as the standard pathway for generic challenges to brand patents. Yet, it would preserve IPR proceedings as an option when speed of resolution is a primary concern. Ultimately, it will restore balance to the drug industry to safeguard competition, innovation, and patients’ access to life-improving drugs.

“Our City has become a cesspool,” according to Portland police union president Daryl Turner. He was describing efforts to address the city’s large and growing homelessness crisis.

Portland Mayor Ted Wheeler defended the city’s approach, noting that every major city, “all the way up and down the west coast, in the Midwest, on the East Coast, and frankly, in virtually every large city in the world” has a problem with homelessness. Nevertheless, according to the Seattle Times, Portland is ranked among the 10 worst major cities in the U.S. for homelessness. Wheeler acknowledged, “the problem is getting worse.”

This week, the city’s Budget Office released a “performance report” for some of the city’s bureaus. One of the more eyepopping statistics is the number of homeless camps the city has cleaned up over the years.

Keep in mind, Multnomah County reports there are 4,177 homeless residents in the entire county. But the city reports clearing more than 3,100 camps in one year. Clearly, the number of homeless in the city is much larger than reflected in the annual homeless counts.

The report makes a special note that, “As the number of clean‐ups has increased and program operations have stabilized, the total cost per clean‐up has decreased substantially as well.” Sounds like economies of scale.

Turns out, the Budget Office’s simple graphic gives enough information to estimate the economies of scale in homeless camp cleanups. Yes, it’s kinda crappy data. (Could it really be the case that, two years in a row, the city cleaned up exactly the same number of camps at exactly the same cost?) Anyway, data is data.

First we plot the total annual costs for cleanups. Of course it’s an awesome fit (R-squared of 0.97), but that’s what happens when you have three observations and two independent variables.

Now that we have an estimate of the total cost function, we can plot the marginal cost curve (blue) and average cost curve (orange).

That looks like a textbook example of economies of scale: decreasing average cost. It also looks like a textbook example of natural monopoly: marginal cost lower than average cost over the relevant range of output.
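The exercise can be sketched in a few lines. The post doesn’t spell out the exact regression specification, so the fitted coefficients below (from a simple quadratic in camps cleaned) are illustrative rather than a reproduction of the original fit; the average-cost figures, though, follow directly from the reported data.

```python
# Sketch of the cost-curve exercise using the Budget Office figures quoted
# below. The quadratic specification here is an assumption for illustration.
import numpy as np

camps = np.array([139, 139, 571, 3122], dtype=float)        # cleanups per FY
total_cost = np.array([171109, 171109, 578994, 1576610.0])  # reported $ per FY

# Average cost is just TC / Q; this matches the table exactly.
avg_cost = total_cost / camps
print(np.round(avg_cost))  # [1231. 1231. 1014.  505.]

# Fit a quadratic total cost function TC(q) = c0 + c1*q + c2*q^2.
c2, c1, c0 = np.polyfit(camps, total_cost, 2)

# Marginal cost is the derivative: MC(q) = c1 + 2*c2*q.
def marginal_cost(q):
    return c1 + 2 * c2 * q

# c2 < 0 here, so MC falls as output rises, and MC sits below AC over this
# range of output: the textbook signature of economies of scale.
print(c2 < 0, marginal_cost(571) < avg_cost[2])
```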

What strikes me as curious is how low is the implied marginal cost of a homeless camp cleanup, as shown in the table below.

| FY      | Camps | TC         | AC     | MC     |
|---------|------:|-----------:|-------:|-------:|
| 2014-15 |   139 |   $171,109 | $1,231 | $3,178 |
| 2015-16 |   139 |   $171,109 | $1,231 | $3,178 |
| 2016-17 |   571 |   $578,994 | $1,014 |   $774 |
| 2017-18 | 3,122 | $1,576,610 |   $505 |   $142 |

It is somewhat shocking that the marginal cost of an additional camp cleanup is only $142. The hourly wages for the cleanup crew alone would be way more than $142. Something seems fishy with the numbers the city is reporting.

My guess: The city is shifting some of the cleanup costs to other agencies, such as Multnomah County and/or the Oregon Department of Transportation. I also suspect the city is not fully accounting for the costs of the cleanups. And I am almost certain the city is significantly underreporting how many homeless people are living on Portland’s streets.

The Food and Drug Administration has spoken, and its words have, once again, ruffled many feathers. Coinciding with the deadline for companies to lay out their plans to prevent youth access to e-cigarettes, the agency has announced new regulatory strategies that are sure to not only make it more difficult for young people to access e-cigarettes, but for adults who benefit from vaping to access them as well.

More surprising than the FDA’s paradoxical strategy of preventing teen smoking by banning not combustible cigarettes, but their distant cousins, e-cigarettes, is that the biggest support for establishing barriers to accessing e-cigarettes seems to come from the tobacco industry itself.

Going above and beyond the FDA’s proposals, both Altria and JUUL are self-restricting flavor sales, creating more — not fewer — barriers to purchasing their products. And both companies now publicly support a 21-to-purchase mandate. Unfortunately, these barriers extend beyond restricting underage access and will no doubt affect adult smokers seeking access to reduced-risk products.

To say there are no benefits to self-regulation by e-cigarette companies would be misguided. Perhaps the biggest benefit is to increase the credibility of these companies in an industry where it has historically been lacking. Proposals to decrease underage use of their product show that these companies are committed to improving the lives of smokers. Going above and beyond the FDA’s regulations also allows them to demonstrate that they take underage use seriously.

Yet regulation, whether imposed by the government or as part of a business plan, comes at a price. This is particularly true in the field of public health. In other health areas, the FDA is beginning to recognize that it needs to balance regulatory prudence with the risks of delaying innovation. For example, by decreasing red tape in medical product development, the FDA aims to help people access novel treatments for conditions that are notoriously difficult to treat. Unfortunately, this mindset has not expanded to smoking.

Good policy, whether imposed by government or voluntarily adopted by private actors, should not help one group while harming another. Perhaps the question that should be asked, then, is not whether these new FDA regulations and self-imposed restrictions will decrease underage use of e-cigarettes, but whether they decrease underage use enough to offset the harm caused by creating barriers to access for adult smokers.

The FDA’s new point-of-sale policy restricts sales of flavored products (not including tobacco flavors or menthol/mint flavors) to either specialty, age-restricted, in-person locations or to online retailers with heightened age-verification systems. JUUL, Reynolds and Altria have also included parts of this strategy in their proposed self-regulations, sometimes going even further by limiting sales of flavored products to their company websites.

To many people, these measures may not seem like a significant barrier to purchasing e-cigarettes, but in fact, online retail is a luxury that many cannot access. Heightened online age-verification processes are likely to require most of the following: a credit or debit card, a Social Security number, a government-issued ID, a cellphone to complete two-factor authorization, and a physical address that matches the user’s billing address. According to a 2017 Federal Deposit Insurance Corp. survey, one in four U.S. households are unbanked or underbanked, which is an indicator of not having a debit or credit card. That factor alone excludes a quarter of the population, including many adults, from purchasing online. It’s also important to note that the demographic characteristics of people who lack the items required to make online purchases are also the characteristics most associated with smoking.

Additionally, it’s likely that these new point-of-sale restrictions won’t have much of an effect at all on the target demographic — those who are underage. According to a 2017 Centers for Disease Control and Prevention study, of the 9 percent of high school students who currently use electronic nicotine delivery systems (ENDS), only 13 percent reported purchasing the device for themselves from a store. This suggests that 87 percent of underage users won’t be deterred by prohibitive measures to move sales to specialty stores or online. Moreover, Reynolds estimates that only 20 percent of its VUSE sales happen online, indicating that more than three-quarters of users — consisting mainly of adults — purchase products in brick-and-mortar retail locations.

Existing enforcement techniques, if properly applied at the point of sale, could have a bigger impact on youth access. Interestingly, a recent analysis by Baker White of FDA inspection reports suggests that the agency’s existing approaches to prevent youth access may be lacking — meaning that there is much room for improvement. Overall, selling to minors is extremely low-risk for stores. The likelihood of a store receiving a fine for violation of the minimum age of sale is once for every 36.7 years of operation, the financial risk is about 2 cents per day, and the risk of receiving a no sales order (the most severe consequence) is 1 for every 2,825 years of operation. Furthermore, for every $279 the FDA receives in fines, it spends over $11,800. With odds like those, it’s no wonder some stores are willing to sell to minors: Their risk is minimal.
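The “2 cents per day” figure is consistent with the other two numbers, assuming the expected fine is roughly the $279 cited above:

```python
# Per-day enforcement risk implied by a fine of ~$279 expected once every
# 36.7 years of operation (figures cited above; treating $279 as the
# typical fine amount is an assumption made for this back-of-envelope check).
fine = 279.0
years_per_fine = 36.7

cost_per_day = fine / (years_per_fine * 365)
print(round(cost_per_day, 2))  # 0.02 -- about 2 cents per day
```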

Eliminating access to flavored products is the other arm of the FDA’s restrictions. Many people have suggested that flavors are designed to appeal to youth, yet fewer talk about the proportion of adults who use flavored e-cigarettes. In reality, flavors are an important factor for adults who switch from combustible cigarettes to e-cigarettes. A 2018 survey of 20,676 US adults who frequently use e-cigarettes showed that “since 2013, fruit-flavored e-liquids have replaced tobacco-flavored e-liquids as the most popular flavors with which participants had initiated e-cigarette use.” By relegating flavored products to specialty retailers and online sales, the FDA has forced adult smokers, who may switch from combustible cigarettes to e-cigarettes, to go out of their way to initiate use.

It remains to be seen if new regulations, either self- or FDA-imposed, will decrease underage use. However, we already know who is most at risk for negative outcomes from these new regulations: people who are geographically disadvantaged (for instance, people who live far away from adult-only retailers), people who might not have credit to go through an online retailer, and people who rely on new flavors as an incentive to stay away from combustible cigarettes. It’s not surprising or ironic that these are also the people who are most at risk for using combustible cigarettes in the first place.

Given the likelihood that the new way of doing business will have minimal positive effects on youth use but negative effects on adult access, we must question what the benefits of these policies are. Fortunately, we know the answer already: The FDA gets political capital and regulatory clout; industry gets credibility; governments get more excise tax revenue from cigarette sales. And smokers get left behind.