from the don't-mess-this-up-please dept

This is, of course, no surprise at all, but Google has officially asked the Supreme Court to fix the Federal Circuit's ridiculously bad ruling concerning copyright of APIs. Remember, this was the Federal Circuit's second awful ruling in this same case, both regarding the copyright status of APIs. The first bad ruling is still a travesty, in that a technically illiterate court couldn't comprehend that an API is like a recipe or instruction set that is not subject to copyright under Section 102(b) of the Copyright Act, which explicitly states:

In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work.

However, when you get a bunch of technically illiterate judges together, and show them snippets of an API, which makes no sense to them, they assume it's the same thing as software code -- which clearly is covered by copyright. On the second trip through the courts, the Federal Circuit messed things up again, insisting that Google's reuse of certain Java APIs could not be fair use.
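The distinction the courts glossed over can be made concrete with a minimal sketch (in Python rather than Java, with invented class names): the "API" is just the agreed-upon method name and parameters that callers rely on, while the method bodies are independently written implementing code.

```python
# Hypothetical sketch: the shared "API" here is only the method name
# "max" and its two parameters -- a method of operation, like a recipe.
# The two bodies below are independently written implementing code.

class VendorMath:
    """One vendor's implementation behind the interface."""
    def max(self, a, b):
        # this vendor's own expression of how to compute a maximum
        if a >= b:
            return a
        return b

class CleanRoomMath:
    """An independent reimplementation of the very same interface."""
    def max(self, a, b):
        # different implementing code, identical declaration
        return sorted([a, b])[-1]

def pick_larger(math_impl, a, b):
    # Callers depend only on the interface (the name "max" and its two
    # arguments), so either implementation can be dropped in.
    return math_impl.max(a, b)

print(pick_larger(VendorMath(), 3, 7), pick_larger(CleanRoomMath(), 3, 7))  # → 7 7
```

The interoperability argument in the case turns on exactly this: reusing the declarations so existing callers and programmer skills keep working, while writing the bodies from scratch.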

Google is asking the Supreme Court to hear this issue and overturn the Federal Circuit -- something that the Supreme Court has done with some regularity over the past dozen years or so (though, mainly on patent issues, where the Supreme Court has been quite good, and not on copyright issues, where the Supreme Court has been mostly bad). While Google's cert petition officially clocks in at 343 pages, much of that is just the appendices, which include the various lower court rulings. What's key is that Google is asking the Supreme Court to review both of the Federal Circuit's awful rulings in this case:

The questions presented are:

1. Whether copyright protection extends to a software interface.

2. Whether, as the jury found, petitioner’s use of a software interface in the context of creating a new computer program constitutes fair use.

I'd still argue that the first question is the more important one here, though if that goes sideways, a good ruling on the second question (as the jury in the district court found) would at least be some level of relief from insanity.

The opening to the cert petition sums everything up nicely:

If allowed to stand, the Federal Circuit’s approach will upend the longstanding expectation of software developers that they are free to use existing software interfaces to build new computer programs. Developers who have invested in learning free and open programming languages such as Java will be unable to use those skills to create programs for new platforms—a result that will undermine both competition and innovation. Because this case is an optimal vehicle for addressing the exceptionally important questions presented, the petition for a writ of certiorari should be granted.

There is, of course, no guarantee the Supreme Court will hear the case (indeed, as everyone likes to point out, the Supreme Court denies most such petitions). However, allowing this ruling to stand would do serious harm to software development, and would be a real shame.

from the anti-competitive,-anti-consumer-issues dept

As expected, UK Parliament Member Damian Collins released a bunch of documents that he had previously seized under questionable circumstances. While he had revealed some details in a blatantly misleading way during the public hearing he held, he's now released a bunch more. Collins tees up the 250-page release with a few of his own notes, which also tend to exaggerate and misrepresent what's in the docs, and many people are running with a few of those misrepresentations.

However, that doesn't mean that all of these documents have been misrepresented. Indeed, there are multiple things in here that look pretty bad for Facebook, and could be very damaging for it on questions around the privacy protections it had promised the FTC it would put in place, as well as in any potential anti-trust fight. It's not hard to understand how Facebook got to the various decisions it made, but the "move fast and break things" attitude also seems to involve the potential of breaking both the law and the company's own promises to its users. And that's bad.

First, the things that really aren't that big a deal: a lot of the reporting has focused on the idea that Facebook would give greater access to data to partners who signed up to give Facebook money via its advertising or other platforms. There doesn't seem to be much of a bombshell there. Lots of companies with APIs charge for access. This is kind of a standard business model question, and some of the emails in the data dump show what actually appears to be a pretty thoughtful discussion of various business model options and their tradeoffs. This was a company that recognized it had valuable information and was trying to figure out the best way to monetize it. There isn't much of a scandal there, though some people seem to think there is. Perhaps you could argue that allowing some third parties to have greater access shows Facebook has a cavalier attitude towards that data, since it's willing to trade access to it for money, but there's no evidence presented that this data was used in an abusive way (indeed, by putting a "price" on the access, Facebook likely limited the access to companies who had every reason not to abuse the data).

Similarly, there is a lot of discussion about the API change, which Facebook implemented to actually start to limit how much data app developers had access to. And the documentation here shows that part of the motivation to do this was to (rightfully) improve user trust of Facebook. It's difficult to see how that's a scandal. In addition, some of the discussions involve moving specific whitelisted partners to a special version of the API that gives them access to more data... but with the data hashed, providing better privacy and security for that data, while still making it useful. Again, this approach seems to actually be beneficial to end users, rather than harmful, so the attempts to attack it seem misplaced -- and yet take up the vast majority of the 250 pages.
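The documents don't spell out Facebook's hashing scheme, but the general technique is straightforward. As a hedged sketch (the function names, salt, and identifiers are all invented for illustration), sharing salted digests instead of raw identifiers lets a partner match records it already holds without being handed the identifiers themselves:

```python
import hashlib

def hash_id(identifier: str, salt: str = "shared-secret-salt") -> str:
    """Salted SHA-256 digest of an identifier (illustrative only)."""
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

# What the platform shares: digests rather than raw identifiers.
shared_digests = {hash_id(uid) for uid in ["alice@example.com", "bob@example.com"]}

# The partner hashes its own contact list the same way and intersects.
partner_contacts = ["bob@example.com", "carol@example.com"]
matches = [uid for uid in partner_contacts if hash_id(uid) in shared_digests]
print(matches)  # → ['bob@example.com']
```

Note the limits of this kind of protection: when the identifier space is small (email addresses, phone numbers), anyone holding the salt can dictionary-attack the digests, so hashed sharing is better than raw sharing but well short of anonymization.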

The bigger issues involve specific actions that certainly appear to at least raise antitrust questions. That includes cutting off apps that recreate Facebook's own features, or that are suddenly getting a lot of traction (and using the access they had to users' phones to figure out which apps were getting lots of traction). While not definitively violating antitrust laws, that's certainly the kind of evidence that any antitrust investigator would likely explore -- looking to see if Facebook held a dominant position at the time of those actions, and if those actions were designed to deliberately harm competitors, rather than for any useful purpose for end-users. At least from the partial details released in the documents, the focus on competitors does seem to be a driving force. That could create a pretty big antitrust headache for Facebook.

Of course, the details on this... are still a bit vague from the released documents. There are a number of charts from Onavo included, showing the popularity of various apps, such as this:

Onavo was a data analytics company that Facebook bought in 2013 for over $100 million. Last year, the Wall Street Journal broke the story that Facebook was using Onavo to understand how well competing apps were doing, and potentially using that data to target acquisitions... or potentially to try to diminish those competing apps' access. The potential "smoking gun" evidence is buried in these files, but there's a short email on the day that Twitter launched Vine, its app for 6-second videos, where Facebook decides to cut off Twitter's access to its friend API in response to this move, and Zuckerberg himself says "Yup, go for it."

Now... it's entirely possible that there's more to this than is shown in the documents. But at least on its face, it seems like the kind of thing that deserves more scrutiny. If Facebook truly shut down access to the API because it feared competition from Vine... that is certainly the kind of thing that will raise eyebrows from antitrust folks. If there were more reasons for cutting off Vine, that should come out. But if the only reason was "ooh, that's a potential competitor to our own service," and if Facebook was seen as the dominant way of distribution or access at the time, it could be a real issue.

Separately, if the name Onavo sounds familiar to you, that might be because earlier this year, Facebook launched what it called a VPN under the brand name Onavo... and there was reasonable anger over it because people realized (as per the above discussion) that Onavo was really a form of analytics spyware that charted what applications you were using and for what. It was so bad that Apple pulled it from its App Store.

The other big thing that comes out in the released documents is all the way at the end, when Facebook is getting ready to roll out a Facebook app update on Android that will snoop on your SMS and call logs and use that information for trying to get you to add more friends and for determining what kinds of content it promotes to you. Facebook clearly recognized that this could be a PR nightmare if it got out, and they were worried that Android would seek permission from users, which would alert them to this kind of snooping:

That is bad. That's Facebook knowing that its latest snooping move will look bad and trying to figure out a way to sneak it through. Later on, the team is relieved when they realize, after testing, that they can roll this out without alerting users with a permission dialog screen:

As reporter Kashmir Hill points out, it's notable that this "phew, we don't really have to alert users to our sketchy plan to get access to their logs" came from Yul Kwon, who was designated as Facebook's "privacy sherpa" and put in charge of making sure that Facebook didn't do anything creepy with user data. From an article that Hill wrote back in 2015:

The face of the new, privacy-conscious Facebook is Yul Kwon, a Yale Law grad who heads the team responsible for ensuring that every new product, feature, proposed study and code change gets scrutinized for privacy problems. His job is to try to make sure that Facebook’s 9,199 employees and the people they partner with don’t set off any privacy dynamite. Facebook employees refer to his group as the XFN team, which stands for “cross-functional,” because its job is to ensure that anyone at Facebook who might spot a problem with a new app — from the PR team to the lawyers to the security guys — has a chance to raise their concerns before that app gets on your phone. “We refer to ourselves as the privacy sherpas,” says Kwon. Instead of helping Facebook employees scale Everest safely, Kwon’s team tries to guide them safely past the potential peril of pissing off users.

And yet, here, he seems to be guiding them past those perils by helping the team hide what's really going on.

This is also doubly notable for Kashmir Hill, who has been perhaps the most dogged reporter on just how creepily Facebook's "People You May Know" feature works. Facebook has a history of giving Hill totally conflicting information about how that feature worked, and these documents reveal, at least, the desire to secretly slurp up your call and SMS records in order to find more "people you might know" (shown as PYMK in the documents).

One final note on all of this. I recently pointed out that Silicon Valley really should stop treating fundamental structural issues as political issues, in which they just focus on what's best for the short-term bottom line, and should focus on the larger goals of doing what's right overall. In a long email included in the documents from Mark Zuckerberg, musing thoughtfully on various business model ideas for the platform, one line stands out. Honestly, the entire email (starting on page 49 of the document) is worth reading, because it really does carefully weigh the various options in front of them. But there's also this line:

If you can't read that, it's a discussion of how it's important to enable people to share what they want, and how enabling other apps to help users do that is a good thing, but then he says:

The answer I came to is that we’re trying to enable people to share everything they want, and to do it on Facebook. Sometimes the best way to enable people to share something is to have a developer build a special purpose app or network for that type of content and to make that app social by having Facebook plug into it. However, that may be good for the world but it’s not good for us unless people also share back to Facebook and that content increases the value of our network. So ultimately, I think the purpose of platform – even the read side – is to increase sharing back into Facebook.

I should note that in Damian Collins' summary of this, he carefully cuts out some of the text of that email to frame it in a manner that makes it look worse, but the "that may be good for the world, but it's not good for us" line really stands out to me. That's exactly the kind of political decision I was talking about in that earlier post. Taking the short term view of "do what's good for us, rather than what's good for the world" may be typical, and even understandable, in business, but it's the root of many, many long term and structural problems for not just Facebook, but tons of other companies as well.

I wish that we could move to a world where companies finally understood that "doing good for the world" leads to a situation in which the long term result is also "good for us," rather than focusing on the "good for us" at the expense of "good for the world."

from the mostly-smoke,-minimal-fire dept

Earlier this week, UK politicians conveniently pounced on a US businessman to force him to turn over documents possibly containing info Parliament members had been unable to extract from Mark Zuckerberg about Facebook's data sharing. An obscure law was used to detain the visiting Six4Three executive, drag him to Parliament, and threaten him with imprisonment unless he handed over the documents MPs requested.

The executive happened to have on him some inside info produced by Facebook in response to discovery requests. Six4Three is currently suing Facebook over unfair business practices in a California court. The documents carried by the executive had been sealed by the court, which means the executive wasn't allowed to share them with anyone… in the United States. But he wasn't in the United States, as gleeful MPs pointed out while forcing him to produce information they wanted from another tech company whose CEO was unwilling to set foot in London.

It was all very strange, more than a little frightening, and completely bizarre. A lot of coincidences lined up very conveniently for UK legislators. The frightening part is it worked. This will only encourage Parliament to pull the same stunt the next time it thinks it can get information others have refused to hand over. Targeting third parties is an ugly way to do government business, especially when the UK government is attempting to obtain information from US companies. All bets are off once they're on UK soil, so traveling execs may want to leave sensitive info on their other laptop before landing at Heathrow.

But there's also a chance Six4Three wanted to put this information in the hands of UK legislators. Call it "plausible deniability" or "parallel construction" (why not both?!), but the ridiculousness of the entire incident lends it an air of theater that probably isn't entirely unearned.

Now there's more fuel for that conspiratorial bonfire. Court documents filed by Six4Three containing sensitive info about Facebook's API terms and the possible sale of user info made their way into the public domain. They were redacted to keep this sensitive information from being made public.

Well, let me rephrase that: they were "redacted" in such a way that all sensitive info could easily be read by anyone who opened the PDF. Sure, the black bars are there, but selecting the "redacted" text and pasting it anywhere that can handle text allows this information to be read.
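Real PDFs are considerably more involved, but a toy sketch shows the failure mode: an overlay "redaction" paints a black rectangle as a separate drawing instruction, while the text-showing operator still carries the characters, so any text extractor (or select-and-copy in a viewer) recovers them. The content stream below is a drastically simplified stand-in, not real PDF parsing:

```python
import re

# Simplified stand-in for a PDF page content stream. The "redaction"
# is just a filled black rectangle (re f) painted over the text; the
# text-showing operator (Tj) still contains the characters.
page_stream = """
BT /F1 12 Tf 72 700 Td (Facebook considered selling API access) Tj ET
0 0 0 rg 70 690 300 16 re f
"""

def extract_text(stream: str) -> list:
    # A text extractor reads the text operators and never checks
    # whether anything was painted on top of them.
    return re.findall(r"\((.*?)\) Tj", stream)

print(extract_text(page_stream))  # → ['Facebook considered selling API access']
```

Proper redaction removes the text operator itself (or rasterizes the page) before publishing, rather than drawing a box over it.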

Facebook filed its removal petition on the eve of its deadline to serve its motions for summary judgment and mere days before the Superior Court’s ruling on Plaintiff’s discovery motions to obtain information from key Facebook executives, including Chief Executive Zuckerberg, regarding the decision to close Graph API that shut down Plaintiff’s business and many others. Plaintiff’s discovery to date provides evidence suggesting that the decision to shut down Graph API was made: (1) for anticompetitive reasons; (2) in concert with other large companies; (3) prior to October 2012 (even though Facebook waited to announce the decision until April 2014); (4) by Mr. Zuckerberg; and (5) with the active participation of at least six other individuals who reported directly to Mr. Zuckerberg. See Godkin Reply Decl. Exhibit 3, at 1-4. Plaintiff has yet to receive information regarding this decision that shut down its business. Rather, Facebook has produced documents only from low-level employees that Facebook unilaterally selected as custodians and who clearly had no involvement in the decision that shut down Plaintiff’s business.

Another fully-redacted paragraph points to a pay-to-play API offering, gleaned from emails obtained through discovery.

On October 30, 2012, Facebook Vice President of Engineering, Michael Vernal, sent a note to certain employees stating that after discussing with Mr. Zuckerberg, Facebook has decided to “limit the ability for competitive networks to use our platform without a formal deal in place” and that Facebook is going to “require that all platform partners agree to data reciprocity.” Mr. Vernal then describes a whitelisting system Facebook will implement, and did in fact implement, to determine data access based on this “reciprocity principle.” See Godkin Reply Decl., Exhibit 5 at FB-00423235-FB-00423236. The reciprocity principle is subsequently defined and discussed among Facebook employees on numerous occasions as shutting down access “in one-go to all apps that don’t spend…at least $250k a year to maintain access to the data.” See Godkin Reply Decl., Exhibit 6 at FB-00061251. Facebook then embarks on a campaign to reach out to large companies and extract significant payments from them with the threat that they will otherwise turn off the company’s data access. However, if a company were to agree to provide significant payments to Facebook, then Facebook would offer it an enormous advantage relative to its competitors. Facebook employees routinely discuss this fact in their email exchanges: “Removing access to all_friends lists seems more like an indirect way to drive NEKO adoption.” See Godkin Reply Decl., Exhibit 7 at FB00061439. In other words, Facebook’s decision to close access to data in its operating system (“removing access to all_friends_lists”), which shut down Plaintiff’s business, was designed to generate increased revenues on Facebook’s advertising platform (“drive NEKO adoption”) by offering an unfair competitive advantage to companies from which Facebook could extract large payments.

Now, the only thing holding this back from being a Six4Three effort to expose Facebook without running afoul of the court is the filing date. This redaction failure was filed nearly 10 months ago -- long before UK politicians talked a Six4Three exec out of potentially-damaging documents.

That being said, the London incident still smells super-fishy. And the information seen here doesn't indicate much more than that Facebook considered selling access to Facebook user info. It appears Facebook never followed through with the plan. The lack of pay-to-play doesn't excuse its larger sins, but it does kind of put a dent in Six4Three's claims that Facebook unfairly locked it out of API access when it kicked its shady bikini-photo-searching app to the curb.

More intrigue is sure to develop as Facebook attempts to have Six4Three held in contempt of court following its seemingly involuntary production of sealed documents during its exec's recent London trip.

from the good-for-show,-not-good-for-truth dept

As you may have heard, the UK Parliament put on quite a show on Tuesday in what it claimed was an attempt to go after Facebook for its "fake news" problem. Of course, in the process, the hearings themselves created some fake news that undermined the entire point. To be clear, upfront, Facebook does have many issues that should be taken seriously. But this hearing did not get at those, and actually showed how, when political grandstanding is the focus, it's quite easy to create "fake news" in the process. Still, boy, was that hearing theatrical. It was apparently the first time since 1933 that the UK Parliament had representatives from other countries participate in a hearing, and so there were nine other countries present, including Canada, France, Belgium, Brazil, Ireland, Latvia, Argentina and Singapore. On top of that, Facebook CEO Mark Zuckerberg made the bad decision of refusing to participate in the hearings, giving the Committee the opportunity for this classic photo op:

9 countries. 24 official representatives. 447 million people represented.

Facebook's VP of policy, Richard Allan, appeared instead, and while even he admitted that it didn't look great that Zuckerberg wasn't in attendance, he is actually someone who would probably be better positioned to answer actual substantive questions about Facebook's policies in these areas.

But that would only matter if the inquisitors were interested in discussing substantive policy matters. And it did not appear they were. They were there for the grandstanding, repeatedly blaming Facebook for reflecting back human nature and all its foibles. There were questions about what Facebook was doing to protect democracy -- which I don't think is actually Facebook's job (indeed, it seems like that's the government's job, no?). But, of course, the main highlight of the show was the organizer of the hearing, MP Damian Collins, who you'll recall seized a bunch of documents, under questionable circumstances, from a US business exec who was visiting the UK.

The "documents" were supposed to be the star of the show, and Collins dropped the apparent big bombshell during the hearing: Facebook had, he claimed, actually been alerted to an attempt by Russians to mess with the site all the way back in October of 2014. As summarized by Wired:

Collins cut right to the heart of the documents during the hearing. In October of 2014, he said, a Facebook engineer notified the company that entities with Russian IP addresses had been using a Pinterest API to pull out three billion data points a day from the Facebook friends API. Collins wanted to know what happened after that information was brought forward.

"Was that reported or was it kept, as so often seems to be the case, kept within the family?" he asked.

Ooooh. Intrigue.

Except... it was bullshit. Facebook revealed the redacted emails in question, and they showed that while an engineer had initially raised concerns that Russian IP addresses appeared to be using the Pinterest API access to get lots of data, further investigation showed that he was wrong. The initial email says that the person is seeing calls from "Russian IPs" and is having the Site Integrity team investigate, though it's quickly followed up with a note that "those might not have been Russian IPs after all, we are digging."

Then, on the very same day -- indeed, just a little over two hours after the initial alarm of Russian IPs -- the person emails that it was a false alarm.

If you can't see that, it says:

Ok, thinks are not as bad as they seemed, apologies for the trash. There was a series of unfortunate coincidences that made me think the worse.

1/ We verified that the endpoint has not been "leaked" and calls seem to be coming all from Pinterest servers.
2/ We verified that the volume of calls per day is actually around 6M successful and 40M failed due to invalid access.

In short, it wasn't 3 billion data points and it wasn't Russia. Also, it wasn't abuse of the system. And yet, the way Collins raised the issue, he suggested that Facebook was aware that Russians had abused the API to access 3 billion data points and then kept it secret. In other words, Collins' explanation of what happened was 100% incorrect and misleading. It was misinformation. Or, as some like to call it: fake news.
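The gap between the claim and the follow-up numbers is worth making concrete. A rough back-of-envelope check, using the figures from the email itself (and generously treating every successful call as a "data point"):

```python
claimed_per_day = 3_000_000_000  # Collins' framing: "3 billion data points a day"
successful_calls = 6_000_000     # the follow-up email: ~6M successful calls per day
failed_calls = 40_000_000        # ~40M failed calls (invalid access), returning no data

# Even counting every successful call as one "data point", the claim
# overstates the actual volume by a factor of:
print(claimed_per_day / successful_calls)  # → 500.0
```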

And while it will not be, this should be the lesson that the folks who held this hearing should learn: there are all sorts of ways to make incorrect claims. Some of them on purpose. Some of them by accident. Some of them because of confirmation bias of what you want to be true. And expecting Facebook to magically understand what is what... is insanity.

So, not that MP Damian Collins will respond to me (perhaps I should set up a dramatic photoshoot of an empty chair with his nameplate), but I'm wondering. Does he think Facebook should block all the stories reporting on his false claim about them supposedly "hiding" news of Russians abusing the API to extract 3 billion data points? Or would that, you know, be crazy?

But first a little background: Six4Three, developer of a scuzzy app that scanned profiles for bikini photos, is currently suing Facebook for yanking its API access. The lawsuit has traveled from the federal court system to a California state court, where Six4Three is hoping for a ruling declaring Facebook's actions to be a violation of various state-level competitive business laws.

During the course of this suit -- which was filed in January 2017 -- Six4Three has obtained internal Facebook documents through discovery. These documents may contain info related to Facebook's data-sharing and data-selling practices, which could possibly include its deals with Cambridge Analytica.

Somehow, members of Parliament found out one of Six4Three's execs was in London. So, this happened:

Damian Collins, the chair of the culture, media and sport select committee, invoked a rare parliamentary mechanism to compel the founder of a US software company, Six4Three, to hand over the documents during a business trip to London. In another exceptional move, parliament sent a serjeant at arms to his hotel with a final warning and a two-hour deadline to comply with its order. When the software firm founder failed to do so, it’s understood he was escorted to parliament. He was told he risked fines and even imprisonment if he didn’t hand over the documents.

Let's break this down: the UK government wants answers from Mark Zuckerberg and Facebook. Since Facebook hasn't been compliant, the UK government feels justified in taking documents obtained through discovery in a US lawsuit from the exec of an American company currently suing Facebook… just because he happened to roam into its jurisdiction.

This is insane.

As an added twist, the documents the exec was forced to turn over are currently under seal. That means no one in the US other than the litigants and the judge has access to them. At least that was the case until Parliament's bizarre, heavy-handed move.

Facebook has responded with some fluff about Six4Three's creepy app (not really relevant) and a reminder that the documents seized from its opponent's exec are, at this point, privileged information. MP Collins has responded with a shrug, reminding Facebook's legal rep that the UK is not California, so who cares what a local court has to say about who can see what documents.

It's unlikely the California court will find Six4Three's exec in contempt for being pretty much arrested and threatened with indefinite imprisonment if he didn't hand over documents it has ordered sealed. Facebook has asked that no members of Parliament view the documents until it has heard back from the California court. This has been greeted with a different kind of contempt:

Facebook said: “The materials obtained by the DCMS committee are subject to a protective order of the San Mateo Superior Court restricting their disclosure. We have asked the DCMS committee to refrain from reviewing them and to return them to counsel or to Facebook.” Too late.

Parliament may now have some of the answers Facebook has refused to provide. But was it worth it? The UK government acted more like an authoritarian dictatorship than a free country with this move. It detained an exec who didn't even work for Facebook and threatened him with jail time if he didn't turn over documents a judge in his home country had ordered sealed. The next few days should see some interesting iterations of "the ends justify the means" pontificating from every Parliament member supportive of this damaging move.

from the this-is-bad dept

Oh, CAFC. The Court of Appeals for the Federal Circuit has spent decades fucking up patent law, and now they're doing their damnedest to fuck up copyright law as well. In case you'd forgotten, the big case between Oracle and Google over whether or not Google infringed on Oracle's copyrights is still going on -- and it appears it will still be going on for quite a while longer, as CAFC this morning came down with a laughably stupid opinion, overturning the district court's jury verdict, which had said that Google's use of a few parts of Java's API was protected by fair use. That jury verdict was kind of silly in the first place, because the whole trial (the second one in the case) made little sense, as basically everyone outside of Oracle and the CAFC had previously understood (correctly) that APIs are simply not covered by copyright.

In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work.

And an API is pretty clearly a procedure, process, system or method of operation -- it's just instructions on how to access certain elements, similar to a recipe. But CAFC (which shouldn't be hearing this case in the first place) simply couldn't be bothered to comprehend what an API is, and insisted that because, to the judges' non-technical brains, an API looks the same as software, it must be copyrightable as software. That was after Judge Alsup in the district court (following trial #1) had spent quite a lot of time explaining to CAFC why APIs are not copyrightable.

Thus, we had the second trial, which was weird, because all of the arguments about fair use were couched in this weird "um, well, it's not really copyrightable at all, but CAFC says it's copyrightable, so let's just say it's fair use" argument. And the jury then said, yes, it's fair use.

But CAFC has now rejected that and sent the case back to the lower court for a third trial, this time on damages. And, while we normally expect bad reasoning from CAFC decisions, this one is particularly stupid. In short, CAFC's reasoning is basically "we think this is infringement, and thus we're going to handwave around the law to make sure that it's infringement." It's bad. And, again, CAFC shouldn't even be hearing the case. CAFC hears appeals on patent cases, and originally there were a few patent claims in this case, but they all got dumped at a very early stage. So this case should have gone to the 9th Circuit (which also might have messed it up, but it has at least a marginally better record than CAFC).

Anyway, the CAFC does an awful lot of handwaving around historical precedent to justify its decision to basically start from scratch in going through the four fair use factors. As we've discussed multiple times, one of the problems with the four factors test is that it allows a court to choose who it likes better and then twist the four factors to reach the outcome it wants. That appears to be what is happening here. On the first factor (the purpose and character of the use), CAFC basically says "we think the jury is stupid, this is obviously commercial use, and thus it goes against Google." Google had argued (and the jury had implicitly agreed) that because Google doesn't charge for Android, and because interoperability and the innovation enabled by using similar API setups are not inherently commercial motives, this was fair use. The court basically says "but Google has so much money!" -- which is not how a fair use analysis works. But... CAFC.

That Google might also have non-commercial motives is irrelevant as a matter of law. As the Supreme Court made clear when The Nation magazine published excerpts from Harper & Row’s book, partly for the purpose of providing the public newsworthy information, the question “is not whether the sole motive of the use is monetary gain but whether the user stands to profit from exploitation of the copyrighted material without paying the customary price.” Harper & Row, 471 U.S. at 562. Second, although Google maintains that its revenue flows from advertisements, not from Android, commerciality does not depend on how Google earns its money. Indeed, “[d]irect economic benefit is not required to demonstrate a commercial use.” A&M Records, 239 F.3d at 1015. We find, therefore, that, to the extent we must assume the jury found Google’s use of the API packages to be anything other than overwhelmingly commercial, that conclusion finds no substantial evidentiary support in the record. Accordingly, Google’s commercial use of the API packages weighs against a finding of fair use.

On the question of transformative use, the lower court had suggested that using the APIs in a "fresh context" (such as for smartphones) could be seen as transformative. But, again, CAFC says "nope."

Google’s arguments are without merit. As explained below, Google’s use of the API packages is not transformative as a matter of law because: (1) it does not fit within the uses listed in the preamble to § 107; (2) the purpose of the API packages in Android is the same as the purpose of the packages in the Java platform; (3) Google made no alteration to the expressive content or message of the copyrighted material; and (4) smartphones were not a new context.

CAFC also, once again, shows that it still doesn't understand why APIs and code are not the same thing:

That Google wrote its own implementing code is irrelevant to the question of whether use of the APIs was transformative. As we noted in the prior appeal, “no plagiarist can excuse the wrong by showing how much of his work he did not pirate.” Oracle, 750 F.3d at 1375 (quoting Harper & Row, 471 U.S. at 565). The relevant question is whether Google altered “the expressive content or message of the original work” that it copied—not whether it rewrote the portions it did not copy.

But that's not the relevant question. The relevant question is how the copied text is being used, and in Google's case it's being used in a completely different context -- which is why Google had to write its own implementing code. Again, this is fallout from CAFC's earlier wrong decision: the court still does not understand that API instructions are not the same as the implementing code itself.

The court also more or less laughs at the idea that Google had "good faith" reasons for doing what it did:

Ultimately, we find that, even assuming the jury was unpersuaded that Google acted in bad faith, the highly commercial and non-transformative nature of the use strongly support the conclusion that the first factor weighs against a finding of fair use.

On factor two of the four factors analysis -- the nature of the copyrighted work -- CAFC actually gives that one to Google, and says the jury could have found that it weighed towards fair use. But the court immediately announces that it's going to give this factor less weight, because... it still doesn't understand the difference between APIs and software, and insists that giving this factor any real weight would destroy the idea of copyright for software. Of course, that's only true if you can't tell the difference between an API and software.

We note, moreover, that allowing this one factor to dictate a conclusion of fair use in all cases involving copying of software could effectively negate Congress’s express declaration—continuing unchanged for some forty years—that software is copyrightable. Accordingly, though the jury’s assumed view of the nature of the copyrighted work weighs in favor of finding fair use, it has less significance to the overall analysis.

On factor three, concerning the amount and substantiality of the use, the court is again so incredibly confused it's frustrating. Google has long held (and the jury and the lower court appeared to agree) that it used the bare minimum of the Java API needed to enable basic interoperability in building apps for Android. But CAFC is so tied up in its own made-up reality that it insists no reasonable jury could agree with this:

Even assuming the jury accepted Google’s argument that it copied only a small portion of Java, no reasonable jury could conclude that what was copied was qualitatively insignificant, particularly when the material copied was important to the creation of the Android platform.

Eventually, though, CAFC says factor three could be either "neutral" or "arguably weighs against" fair use.
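For what it's worth, the interoperability argument CAFC waves away is easy to illustrate. This sketch is my own illustration (assuming a standard Java toolchain), not anything from the trial record: code written against the Java API declarations compiles and runs unchanged on any platform that exposes those same declarations, no matter who wrote the implementing code underneath.

```java
// Interoperability in miniature: this program depends only on the
// *shape* of the java.lang.Math API (class name, method names,
// signatures). It runs identically on Oracle's JVM or on any
// clean-room reimplementation exposing the same declarations,
// because calls resolve against the declarations, not against any
// particular implementing code.
public class InteropDemo {
    public static void main(String[] args) {
        System.out.println(Math.max(2, 5)); // prints 5
        System.out.println(Math.abs(-9));   // prints 9
    }
}
```

That compatibility -- existing programmer knowledge and existing code carrying over -- is what Google says it was preserving by reusing the bare minimum of the declarations.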

On to factor four -- the impact on the market. Factors one and four tend to be the ones that courts focus on the most. And here, Oracle has spent plenty of time whining that its own total failure to capitalize on Java in the mobile market should be blamed on Google's copying of pieces of its API, as opposed to the reality, which is that Oracle completely botched its own efforts in the space, because it's Oracle. The court more or less ignores various points about how Oracle wasn't really in the mobile market, and instead says "Google was super successful, so clearly it was at the expense of Oracle."

Given the record evidence of actual and potential harm, we conclude that “unrestricted and widespread conduct of the sort engaged in by” Google would result in “a substantially adverse impact on the potential market for the original” and its derivatives.

And, thus, the court weighs factors one and four most heavily, says it's not fair use, and back to Northern California we go for trial number three, specifically on the damages question. I imagine Google will appeal this, either by asking for an en banc rehearing or by petitioning the Supreme Court. The Supreme Court refused to hear the previous request on the question of copyrightability of APIs, so this is probably a long shot. However, there are a few oddities within the ruling that might catch someone's attention at the Supreme Court (and the Supreme Court has spent years dunking on CAFC and mocking its dumb decisions, so maybe it would like to do that again).

Honestly, the most concerning part of the whole thing is how much of a mess CAFC has made of the process. The district court originally ruled, correctly, that APIs are not subject to copyright. CAFC threw that out and ordered the court to have a jury determine the fair use question. The jury found it to be fair use -- and even though CAFC had ordered the issue be heard by a jury, it now says "meh, we disagree with the jury." That's... bizarre.

Anyway, considering the importance of interoperability in software, this case is yet another potential disaster, limited only by the fact that CAFC doesn't cover any particular region, and most copyright cases aren't influenced by CAFC precedent since they flow up through the other appeals courts. However, you can see how someone wishing to go after others for API infringement might just throw in a silly patent claim simply to get the case into CAFC. Hopefully the Supreme Court steps in and fixes this, but if not, it would be nice (I know, I know...) if Congress actually stepped up and pointed out that APIs are not covered by copyright at all. And while they're at it, they should burn CAFC to the ground. It was a dumb idea in the first place and should be shut down.

from the officers:-are-you-tired-of-adhering-to-your-country's-Constitution? dept

Twitter has cut off another social media "surveillance" company from using its API. To date, the platform has forced third-party Dataminr to cut off connections to the CIA, DHS/law enforcement "fusion centers," and Geofeedia. All of these denials of service were the result of the company's policy against use of its API for surveillance.

Very little of what was being done could truly be considered "surveillance," since Dataminr's access to basically every tweet produced did nothing but cull data from public accounts. What Twitter seemed to have more of a problem with was the marketing tactics of companies like Geofeedia, which insinuated their products were perfectly suited for keeping tabs on First Amendment-protected activity, like protests.

As for the CIA and DHS, Twitter apparently felt these government agencies were far more involved in surveillance than the FBI, which just signed a contract with Dataminr for access to its every-tweet-ever API.

Media Sonar touts its social media monitoring software and algorithms as ideal tools for police and corporations to aggregate and filter data to improve safety and protect corporate assets.

But a U.S.-based investigation turned up marketing language that ran afoul of Twitter's policies, which state that posts on the popular social network should not be mined for surveillance purposes.

Media Sonar's emails to past clients explicitly stated that the software, which allows officers to comb through publicly available posts on the likes of Twitter and Instagram, could help police search for "criminal activity" and "avoid the warrant process" when flagging people who have come under scrutiny.

I'm sure Media Sonar never expected the contents of these marketing emails to be made public, but that's a risk you take every time you send something out inviting law enforcement to use your product to avoid complying with Canadians' rights. Of course, most of what's viewed by law enforcement with tools like these wouldn't require a warrant to obtain.

The move by Twitter may be seen as noble, but it does very little to curb government agencies' monitoring of publicly available posts. If Twitter users want to remain off the government's radar, it's on them to take more control of the visibility of their tweets. For most users, this isn't a concern, and while some may express dismay at law enforcement's use of their posts against them, there's nothing about this outcome that isn't preventable, even without Twitter's periodic announcements that it's cutting off another third party.

The problem isn't with the use of the API so much as with the interpretation of the obtained data. While hashtags may make it easy to track protests and other activity deeply tied to social media interaction, more nebulous data may show correlations that aren't actually there. Overreliance on monitoring tools could result in a lot of false positives, as Canadian Internet Policy and Public Interest Clinic staff lawyer Tamir Israel points out.

Israel said most social media monitoring companies rely on algorithms to parse the vast amount of data and pull out meaningful information for clients. Those algorithms, he cautioned, can be misleading.

Israel said they often analyze posts out of context and are unable to account for slang, cultural norms or other factors that give a post meaning.

He cited a recent example of a British tourist who tweeted about his intention to "destroy the United States" on an upcoming trip. His post raised alarm bells with U.S. security, but the tourist had been trying to express his plans to party while abroad using common British slang.
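The failure mode Israel describes is easy to reproduce. Here's a deliberately naive, hypothetical sketch of context-blind keyword flagging -- my own illustration, not any vendor's actual algorithm -- which trips on exactly the tourist's tweet:

```java
import java.util.List;

// Hypothetical, deliberately naive keyword flagger. It has no notion
// of slang, irony, or cultural context, so the British tourist's
// party tweet trips it just as a genuine threat would.
public class NaiveFlagger {
    // Illustrative watch list, invented for this example.
    private static final List<String> WATCHLIST =
            List.of("destroy", "bomb", "attack");

    // Flags any post containing a watch-list word, regardless of meaning.
    public static boolean flag(String post) {
        String lower = post.toLowerCase();
        return WATCHLIST.stream().anyMatch(lower::contains);
    }

    public static void main(String[] args) {
        // "Destroy" here is British slang for partying hard: a false positive.
        System.out.println(flag("Free this week before I go and destroy America"));
    }
}
```

Everything downstream of a matcher like this -- the alerts, the watchlisting, the knock on the door -- inherits its inability to tell a party plan from a plot.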

In addition to these concerns, there are the privacy protections granted by Canadian law, which actually gives publicly available social media posts more protection than they receive under US law.

[Citizen Lab's Chris Parsons] said law enforcement and federal agencies must demonstrate a need for mining online data, adding that they cannot look through material indiscriminately.

"Just because I say something on Twitter doesn't mean the RCMP can hoover it up," he said. "There has to be a reason, and they have to be able to articulate it."

This may be why the company stealthily sold its product on its warrant-dodging merits. Allowing a third party to sort and shape the data may let Canadian law enforcement agencies wash their hands of any "indiscriminate" hoovering/searching accusations. In any event, Media Sonar's product has suddenly become a lot less useful, and that's going to keep it from being a heavy hitter in the social media monitoring field.

from the off-to-the-races dept

It was only a matter of time until this happened, but Oracle has officially appealed its fair use Java API loss to the Federal Circuit (CAFC). As you'll recall, after a years-long process -- including CAFC ridiculously overturning the (correct) ruling that APIs are not covered by copyright -- a new trial found that even if APIs are copyright-eligible, Google's use was covered by fair use. Oracle then tried multiple times to get Judge William Alsup to throw out the jury's verdict, but failed. In fact, on Oracle's second attempt, citing "game changing" evidence that Google had supposedly failed to hand over important information in discovery, it turned out that Oracle's lawyers had simply failed to read what Google had, in fact, handed over.

And now the case will finally move up a level, as it was always going to do. There should be lots of fireworks here. CAFC is notoriously bad on a variety of issues, but it would take a pretty impressive level of confusion to mess this one up. Going against a jury's findings on fair use is a big ask, and Oracle is likely to try some silly games, whining about jury instructions and the like. Hopefully CAFC doesn't fall for it. If it does, hopefully it doesn't muck things up as badly as it did with its first ruling in this case, which simply got confused over what an API actually is.

from the you-wouldn't-reimplement-an-api dept

Yikes. A month ago, we wrote about how Oracle was asking Judge Alsup to agree to yet another new trial in the Oracle v. Google API copyright case. I joked that it basically had no chance, as Judge Alsup had already rejected Oracle's attempts to overturn the jury verdict. But... I may have spoken too soon. In a hearing on the matter earlier this week, Oracle insisted that there needed to be a new trial because Google had withheld information on plans to offer Android on Chromebooks -- something Google announced at this year's Google I/O, which happened (awkward!) while the trial was going on.

And this morning, Alsup issued an order telling both sides to provide sworn statements: Google must explain why it had not updated its discovery responses to include the plans for Android on Chromebooks, and Oracle must explain whether it, too, had neglected to update its discovery responses (the order also specifically calls out a misrepresentation by Oracle):

By THURSDAY AUGUST 25, AT NOON, Christa Anderson, counsel for Google, shall submit a sworn statement explaining why the discovery responses referenced in Court yesterday were not updated, including the full extent to which counsel knew Google’s intention to launch a full version of Marshmallow, including the Google Play Store, for Chrome OS.

By the same date and time, Annette Hurst, counsel for Oracle, shall submit a sworn statement setting forth, after full inquiry, the full extent to which Oracle neglected to update its discovery responses by reason, in whole or in part, of one or more rulings by the judge. The same statement shall explain why counsel repeatedly represented that the Jones v. Aero/chem decision required an “evidentiary hearing” when that decision, as it turns out, made no mention of an “evidentiary hearing” and instead remanded because no “hearing” or other consideration at all had been given to the issue of discovery conduct by the district judge.

This does not mean that there absolutely will be a third trial, but it's at least more of a possibility than most observers thought likely. I honestly don't see how Android on Chromebooks really matters for the fair use analysis. Oracle argues that since most of the testimony on market impact was limited to phones and tablets, the omission may have swayed the jury, but that's kind of laughable. The reality is that Oracle just wants another crack at a decision it disagrees with.

Of course, all of this is really a stupid sideshow. The real underlying problem was the Federal Circuit's decision that APIs are covered by copyright, despite almost no one actually thinking that's true. The whole fair use trial was an awkward mess, mainly because the arguments weren't so much about "fair use" as about whether anyone actually considered APIs to be covered by copyright in the first place. Going through another such trial would just be a mess.

But the other way a new case may come about is that Judge Alsup made clear that Oracle is also free to bring new cases over new uses of Android to see if they are also fair use -- meaning that any time Google does anything new with Android, it may face a new fair use trial. None of this would be necessary if CAFC had just recognized that APIs are not covered by copyright, but it didn't, and here we are in a big heaping mess. From Alsup's order:

By the same date, counsel shall meet and confer and advise the Court whether the form of judgment should be amended to reflect that it is not a final judgment but a Rule 52(c) judgment on partial findings, given that Oracle is entitled to challenge further uses of Android herein.