from the I-went-to-the-Trade-War-and-all-I-got-was-this-lousy-reporting-requirement dept

American tech companies don't want to give up their cut of a $20 billion Russian software/hardware market, so they've been allowing purchasers to examine devices and vet source code before shelling out for new products. This isn't exactly ideal for American companies, but Russia is as concerned as anyone else that products might be shipping with adversaries' backdoors pre-installed. These companies don't necessarily like having entities linked to Russia's government vetting their source code, but the market is too big to ignore.

Russia has every right to suspect government backdoors may be unlisted features. Checking products and source code before purchase just makes sense, what with leaked documents showing the NSA intercepts foreign-bound hardware to install backdoors and other leaks exposing a fair bit of the agency's exploit collection. But now that Russia appears to have engaged in cyberwarfare efforts during the 2016 election, legislators are demanding US companies let the US government know who's been poking around in their products.

The U.S. Congress is sending President Donald Trump legislation that would force technology companies to disclose if they allowed countries like China and Russia to examine the inner workings of software sold to the U.S. military.

To help ease its passage, the law isn't being made to stand on its own. It's attached to a Pentagon spending bill, which has helped it avoid scrutiny or heated argument. Not that a bill like this wouldn't be popular at this time. It doesn't forbid companies from selling to Russia and China. It only asks that the government be informed if these purchasers do anything other than grab boxed product off the shelves. China and Russia likely aren't going to be happy with this new development. If customers in these lucrative markets decide they're no longer interested in buying American because their vetting will be made public, American companies may only have America to sell to.

What makes it an even harder pill to swallow is the reporting requirements, which could result in tech companies' secrets being publicly outed.

The legislation also creates a database, searchable by other government agencies, of which software was examined by foreign states that the Pentagon considers a cyber security risk.

It makes the database available to public records requests, an unusual step for a system likely to include proprietary company secrets.

The Business Software Alliance notes that the law is pretty much a ban, even if there's no ban on sales. The reporting requirements won't affect sales to American purchasers, just certain foreign countries. The path of least resistance would be pulling out of foreign markets targeted by this bill.

And, of course, there's a chance retaliatory legislation will be enacted in other countries in response. Some equivalent process may already be in place in countries where governments have more of a hand in every business transaction (not just the import/export business). But where nothing similar is in place, it may well be soon. This could result in US companies informing foreign governments about the US government's demands for source code and device access. The US government already does this -- repeatedly -- with court orders obtained from federal courts, including the NSA's home turf, the FISA court.

This may also force the US government to do a bit more due diligence before buying foreign goods. Incredibly, the US military does not currently engage in pre-vetting when purchasing from foreign companies, meaning it could be importing artisanal backdoors created by, or for, foreign governments.

What this looks like is a bit more wintry air blowing across international relations, bringing us closer to a full-blown cyber Cold War. Markets are going to become increasingly siloed as world powers demand other governments open up their cloaks and present their daggers for inspection. Meanwhile, the world's exploit/malware dealers will continue to rake in the cash, cutting both governments and tech companies out of the loop.

from the legit-if-true dept

As the silly copyright lawsuit between PUBG and Epic Games has now come to a fortunate end, with the former dropping the lawsuit it filed over similarities in game genre and broad gameplay aspects that are absolutely not afforded copyright protection, it's probably worth highlighting a lawsuit that is the polar opposite in terms of its merits. Now, I want to stress at the outset that I have no idea yet whether the allegations that spurred this lawsuit are true, but it's the actual claims that are important. If adjudicated as true, those claims are absolutely valid from a copyright law standpoint.

Bethesda, makers of the Fallout franchise in its current iteration, has filed a lawsuit against Warner Bros. and Behaviour Interactive, which together have released Westworld, a mobile park management simulation based on the hit HBO series. Bethesda has its own simulation of this kind, called Fallout Shelter. While Bethesda's filing does indeed make much of the clear similarities between the games' animations and aesthetics, as well as some of the folks behind the Westworld game openly saying they drew inspiration from Fallout Shelter, the important difference here is that this ultimately comes down to specific reused code. How this code got reused is also part of the breach of contract allegations in the suit, as it turns out that Behaviour Interactive was involved in creating Bethesda's original product.

Bethesda has stated that Behaviour Interactive was involved in the creation of Fallout Shelter, before going on to make the Westworld game a few years after. Court documents reportedly state that Bethesda believes Behaviour Interactive has stolen its designs, artwork, and code, going on to use them again in this latest project in conjunction with Warner.

Bethesda's filing goes into great detail showing not only aesthetic similarities in the overall game design and character illustration, but specifically in the animations involved in the game as well as how the game screen reacts when players interact. Reading through the filing, it's fairly clear that this was more than a game merely inspired by Fallout Shelter in terms of gameplay; instead, it looks to be a pretty faithful recreation of it, except themed to Westworld. Still, despite all of that, Bethesda focused on the code it alleges was reused to achieve this similarity, which is important.

And, while Warner Bros. has responded claiming all of this is false, and that Behaviour Interactive has assured it that no code was reused, there is some additional evidence that sure points to that not being the case.

Aside from these mostly aesthetic similarities, it turns out that there's one other pretty suspicious thing that Bethesda has noticed, potentially giving the game away even more. Apparently, the same bugs that were originally present in an early version of Fallout Shelter have also been found in Westworld. Oh dear..

We talk a great deal about the idea/expression dichotomy in copyright law specifically, but it should be acknowledged when a content creator gets this question right in its lawsuit allegations. Again, we don't yet know if the allegations of code reuse are true. But someone should wave this filing in front of the folks at PUBG to show them what a legitimate copyright lawsuit in gaming looks like.

from the another-reason-not-to-ratify dept

It seems incredible, but the TPP trade deal is still staggering on, zombie-like. Its official name is now the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), but even the Australian government just calls it TPP-11. The "11" refers to the fact that TPP originally involved 12 nations, but the US pulled out after Donald Trump's election. The Australian Senate Standing Committee on Foreign Affairs, Defence & Trade is currently conducting an inquiry into TPP-11 as a step towards ratification by Australia. However, in its submission to the committee (pdf), Open Source Industry Australia (OSIA) warns that provisions in TPP-11's Electronic Commerce Chapter "have the potential to destroy the Australian free & open source software (FOSS) sector altogether", and calls on the Australian government not to ratify the deal. The problem lies in Article 14.17 of the TPP-11 text (pdf):

No Party shall require the transfer of, or access to, source code of software owned by a person of another Party, as a condition for the import, distribution, sale or use of such software, or of products containing such software, in its territory.

In its submission to the committee, the OSIA writes:

Article 14.17 of CPTPP prohibits requirements for transfer or access to the source code of computer software. Whilst it does contain some exceptions, those are very narrow and appear rather carelessly worded in places. The exception that has OSIA up in arms covers "the inclusion of terms and conditions related to the provision of source code in commercially negotiated contracts". If Australia ratifies CPTPP, much will turn on whether the Courts interpret the term "commercially negotiated contracts" as including FOSS licences all the time, some of the time or none of the time.

If the Australian courts rule that open source licenses are not "commercially negotiated contracts", those licences will no longer be enforceable in Australia, and free software as we know it will probably no longer exist there. Even if the courts rule that free software licenses are indeed "commercially negotiated contracts", there is another problem, the OSIA says:

The wording of Art. 14.17 makes it unclear whether authors could still seek injunctions to enforce compliance with licence terms requiring transfer of source code in cases where their copyright has been infringed.

Without the ability to enforce compliance through the use of injunctions, open source licenses would once again be pointless. Although the OSIA is concerned about free software in Australia, the same logic would apply to any TPP-11 country. It would also impact other nations that joined the Pacific pact later, as the UK is considering (the UK government seems not to have heard of the gravity theory for trade). It would presumably apply to the US if it did indeed rejoin the pact, as has been mooted. In other words, the impact of this section on open source globally could be significant.

It's worth remembering why this particular article is present in TPP. It grew out of concerns that nations like China and Russia were demanding access to source code as a pre-requisite of allowing Western software companies to operate in their countries. Article 14.17 was designed as a bulwark against such demands. It's unlikely that it was intended to destroy open source licensing too, although some spotted early on that this was a risk. And doubtless a few big software companies will be only too happy to see free software undermined in this way. Unfortunately, it's probably too much to hope that the Australian Senate Standing Committee on Foreign Affairs, Defence & Trade will care about or even understand this subtle software licensing issue. The fate of free software in Australia will therefore depend on whether TPP-11 comes into force, and if so, what judges think Article 14.17 means.

from the may-as-well-just-go-back-to-hunches dept

Good news for motorists: law enforcement is using something just as unreliable as $2 field drug tests to justify arrests and searches. Field drug tests have been known to declare donut crumbs meth and drywall dust cocaine. Yet they're still in use, thanks to their low price point. A costlier apparatus, used to determine blood alcohol levels during sobriety tests, appears to be just as broken as cheap drug tests.

Alcotest, made by German medical tech company Draeger, is used by a large number of US law enforcement agencies. Challenges to test results led to Draeger turning code over to defense attorneys, who soon discovered a lot of variables affected breath tests -- many of which weren't addressed by the device's software or default settings used by officers. Zack Whittaker at ZDNet has the full report:

One attorney, who read the report, said they believed the report showed the breathalyzer "tipped the scales" in favor of prosecutors, and against drivers.

One section in the report raised issue with a lack of adjustment of a person's breath temperature.

Breath temperature can fluctuate throughout the day, but, according to the report, can also wildly change the results of an alcohol breath test. Without correction, a single degree over a normal breath temperature of 34 degrees centigrade can inflate the results by six percent -- enough to push a person over the limit.

The quadratic formula set by the Washington State Patrol should correct the breath temperature to prevent false results. The quadratic formula corrects warmer breath downward, said the report, but the code doesn't explain how the corrections are made. The corrections "may be insufficient" if the formula is faulty, the report added.
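The report's figures sketch out why this correction matters. Draeger's actual formula is quadratic and not public, so the linear model below is purely hypothetical, built only from the numbers quoted above: a 34°C reference temperature and roughly six percent inflation per degree over it.

```python
def corrected_bac(measured_bac, breath_temp_c, reference_temp_c=34.0,
                  inflation_per_degree=0.06):
    """Correct a raw breath-alcohol reading for breath temperature.

    Hypothetical linear model based on the report's figures: each degree
    above the 34 C reference inflates the raw reading by roughly six
    percent, so warmer breath is corrected downward. The real Alcotest
    formula is quadratic and unpublished; this sketch only illustrates
    the size of the swing at stake.
    """
    excess = breath_temp_c - reference_temp_c
    return measured_bac / (1.0 + inflation_per_degree * excess)

# A raw 0.08 reading from breath just one degree warm corrects to
# below a 0.08 limit.
print(round(corrected_bac(0.08, 35.0), 4))  # → 0.0755
```

Under those assumed numbers, skipping the correction is the difference between a reading over the legal limit and one under it, which is exactly the margin the report is worried about.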

The Washington State Patrol, whose device/software was being examined in this case, said it did not install the breath temp component. That eliminates one questionable variable in this case. Other law enforcement agencies may have installed the component without realizing it could result in false positives. But it's far from the only variable affecting test results the examination of Draeger's software uncovered. The Washington State Patrol also disabled another feature that might have prevented false positives.

The code is also meant to check to ensure the device is operating within a certain temperature range set by Draeger, because the device can produce incorrect results if it's too hot or too cold.

But the report said a check meant to measure the ambient temperature was disabled in the state configuration.

"The unit could record a result even when outside of its operational requirements," said the report. If the breathalyzer was too warm, the printed-out results would give no indication the test might be invalid, the report said.

The State Patrol was more equivocal in its repudiation of this finding. It said it had been "tested and validated in various ambient temperatures." Draeger itself insisted the unit will not produce readings if the device is operating outside of recommended temperature ranges.

The report also noted there appeared to be no steps taken to counteract normal wear and tear. The fuel cell used to measure alcohol levels decays over time -- a process accelerated by frequent use (sobriety checkpoints, for instance). This can also affect test results if the decay isn't factored in. Draeger says its devices should be recalibrated every year. The Washington State Patrol only requires one recalibration, six months into the device's lifespan.
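To see why the recalibration interval matters, here's a deliberately toy model: assume the fuel cell's sensitivity decays by some fixed fraction per month after the last calibration mapped its signal to a BAC number. Both the rate and the direction of the drift here are invented for illustration; the report only says that unaccounted-for decay skews results.

```python
def reported_bac(true_bac, months_since_calibration, decay_per_month=0.01):
    """Reading from a sensor whose sensitivity has decayed since its
    signal was last calibrated against known alcohol levels.

    The 1%-per-month decay rate is an invented illustration, not a
    figure from the report or from Draeger.
    """
    sensitivity = (1.0 - decay_per_month) ** months_since_calibration
    return true_bac * sensitivity

# Six months after the sole calibration, the toy sensor has drifted
# about six percent from the truth -- and keeps drifting.
print(round(reported_bac(0.08, 6), 4))  # → 0.0753
```

Whatever the real decay curve looks like, the point stands: a device calibrated once, halfway through its lifespan, spends the rest of that lifespan drifting further from its calibration point.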

Challenges against the device's test results have occurred in other states. Massachusetts -- a state where substance abuse-related evidence has never been more unreliable -- hosted one legal battle over the devices' reliability. A ruling in 2014 declared test results obtained over the previous two years "presumptively unreliable" after it was discovered that only two of the state's 392 breathalyzers had ever been properly calibrated.

This battle between critics of the devices and their deployment methods (untested, uncalibrated) and a judicial system that still insists the devices are reliable enough has gone on for most of a decade. Added to the mix is Draeger's own legal action. This preliminary report, distributed to defense lawyers at a conference last year, was the subject of a cease-and-desist letter from Draeger, which claimed the report violated a protective order it had obtained from a US court prohibiting the distribution of its source code. But no source code was distributed, and the C&D appears to be Draeger attempting to prevent questions about its device's reliability from spreading further than a handful of court cases. And in those legal challenges, Draeger has been able to keep discussion of its devices and software under wraps via injunctions.

While the report's authors claim the report is still in its preliminary stages and should not be considered the final word on breathalyzer reliability, this initial examination doesn't suggest deeper digging will find a more reliable machine underneath the surface-layer flaws.

from the backdoors-for-all dept

Nobody trusts anybody, and it's probably going to end up affecting end users the most. The Snowden leaks showed the NSA's Tailored Access Operations routinely intercepted network hardware to insert backdoors. The exploits leaked by the Shadow Brokers indicated the NSA was very active on the software exploit front as well.

Russian authorities are asking Western tech companies to allow them to review source code for security products such as firewalls, anti-virus applications and software containing encryption before permitting the products to be imported and sold in the country. The requests, which have increased since 2014, are ostensibly done to ensure foreign spy agencies have not hidden any "backdoors" that would allow them to burrow into Russian systems.

According to the article, multiple US officials and company executives are tracing the uptick in review demands to a downturn in US-Russian relations following Russia's 2014 annexation of Crimea. But the NSA's hardware operations were exposed in mid-2014, so it's hard to believe the Snowden effect isn't in play.

[Some] reviews are… conducted by the Federal Service for Technical and Export Control (FSTEC), a Russian defense agency tasked with countering cyber espionage and protecting state secrets. Records published by FSTEC and reviewed by Reuters show that from 1996 to 2013, it conducted source code reviews as part of approvals for 13 technology products from Western companies. In the past three years alone it carried out 28 reviews.

Since these companies aren't willing to give up their share of an $18.4 billion market, compromises are being made. Examinations of code are being done in "clean rooms," with conditions somewhat controlled by the companies being vetted. But this isn't always the case. Nor are these precautions necessarily enough to prevent those doing the vetting -- some linked to the Russian government -- from finding undiscovered security holes and flaws. The vetting may help keep Russian government agencies and private companies from being spied on by the US, but it's not going to do much to keep the Russian government from spying on Russian companies and Russian computer users.

So far, only one company has publicly announced its refusal to submit its software for vetting. Symantec has rejected testing by Echelon, a Moscow-based lab with some tenuous ties to the Russian military.

But for Symantec, the lab "didn't meet our bar" for independence, said spokeswoman Kristen Batch.

“In the case of Russia, we decided the protection of our customer base through the deployment of uncompromised security products was more important than pursuing an increase in market share in Russia,” said Batch, who added that the company did not believe Russia had tried to hack into its products.

Echelon also provides testing for the Russian Ministry of Defense and multiple law enforcement agencies. It claims it's wholly independent of the Russian government, but those assertions haven't been enough to overcome Symantec's objections. Other companies (the article lists HP and IBM) have allowed their products to be tested by Echelon, but neither was willing to comment on this story.

The Russians are checking for US backdoors while potentially seeking to install their own. US companies are given the choice of possibly aiding in Russian domestic surveillance or being locked out of the market. Any lost sales here can at least be partially chalked up to the Snowden leaks. If so, the fallout from the leaks is still causing harm to US companies, years down the road.

The exception was the plaintiff’s expert that said Oculus’s implementations of the techniques at issue were “non-literally copied” from the source code I wrote while at Id Software.

This is just not true. The authors at Oculus never had access to the Id C++ VR code, only a tiny bit of plaintext shader code from the demo. I was genuinely interested in hearing how the paid expert would spin a web of code DNA between completely unrelated codebases.

Early on in his testimony, I wanted to stand up say “Sir! As a man of (computer) science, I challenge you to defend the efficacy of your methodology with data, including false positive and negative rates.” After he had said he was “Absolutely certain there was non-literal copying” in several cases, I just wanted to shout “You lie!”. By the end, after seven cases of “absolutely certain”, I was wondering if gangsters had kidnapped his grandchildren and were holding them for ransom.

If he had said “this supports a determination of”, or dozens of other possible phrases, then it would have fit in with everything else, but I am offended that a distinguished academic would say that his ad-hoc textual analysis makes him “absolutely certain” of anything. That isn’t the language of scientific inquiry.

Now, ZeniMax was quick to hit back with its own statement pointing out that some of the code at issue was literally copied (though the jury seems to have found that little or none of that code was actually used), but this question of "non-literal copying" is far more important. This whole notion of experts doing textual analysis to find recurring patterns is a worrying one: for all the real science behind such methods, it's not at all hard to see how easily they could be manipulated to support a chosen result, or how difficult it would be to ensure a jury properly understands the arguments and affords them the appropriate weight. Indeed, Carmack goes on to explain how the expert's presentation was... lacking:

There are objective measures of code similarity that can be quoted, like the edit distance between abstract syntax trees, but here the expert hand identified the abstract steps that the code fragments were performing, made slides that nobody in the courtroom could actually read, filled with colored boxes outlining the purportedly analogous code in each case. In some cases, the abstractions he came up with were longer than the actual code they were supposed to be abstracting.

It was ridiculous. Even without being able to read the code on the slides, you could tell the steps varied widely in operation count, were often split up and in different order, and just looked different.

The following week, our side’s code expert basically just took the same slides their expert produced (the judge had to order them to be turned over) and blew each of them up across several slides so you could actually read them. I had hoped that would have demolished the credibility of the testimony, but I guess I overestimated the impact.

The notion of "non-literal copying" as applied to code is a weird one, and casts a light on how weird code copyright is to begin with. If copyright isn't supposed to cover functional choices, how can it be infringing to create new code that accomplishes the same function in a slightly different way? Are juries supposed to determine which "non-literally copied" aspects of the code were aesthetic, and which were purely functional? This sort of idea-expression divide question is muddy in the worlds of art and literature, but it should be simple in the world of code: what a program does is not covered by copyright, nor are any purely functional elements of how it achieves that.
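Carmack's complaint is easy to illustrate. The two functions below compute the same thing -- the integer square root -- by entirely different routes; abstract each into "steps" vaguely enough and they look "non-literally copied," even though neither shares a line with the other. (Both algorithms are textbook material, not code from the case.)

```python
def isqrt_newton(n):
    """Integer square root via Newton's method."""
    if n < 2:
        return n
    x = n
    y = (x + n // x) // 2
    while y < x:  # iterate until the estimate stops shrinking
        x, y = y, (y + n // y) // 2
    return x

def isqrt_search(n):
    """Integer square root via binary search.

    Invariant: lo * lo <= n < hi * hi.
    """
    lo, hi = 0, n + 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid
    return lo

# Identical behavior, independently derived structure.
print(isqrt_newton(1000000), isqrt_search(1000000))  # → 1000 1000
```

At a high enough level of abstraction both are "initialize a guess, refine it in a loop, return the result" -- which is precisely the level at which everything resembles everything else, and precisely where copyright is supposed to stop reaching.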

But instead, we've got experts applying what more or less amounts to literary analysis to computer code, and even using that analogy (to which Carmack has an excellent response):

The notion of non-literal copying is probably delicious to many lawyers, since a sufficient application of abstraction and filtering can show that just about everything is related. There are certainly some cases where it is true, such as when you translate a book into another language, but copyright explicitly does not apply to concepts or algorithms, so you can’t abstract very far from literal copying before comparing. As with many legal questions, there isn’t a bright clear line where you need to stop.

The analogy that the expert gave to the jury was that if someone wrote a book that was basically Harry Potter with the names changed, it would still be copyright infringement. I agree; that is the literary equivalent of changing the variable names when you copy source code. However, if you abstract Harry Potter up a notch or two, you get Campbell’s Hero’s Journey, which also maps well onto Star Wars and hundreds of other stories. These are not copyright infringement.

After all this, you might be thinking that you want to go find out more about just what that expert had to say, and get more detail on how he reached his conclusion about copying. Too bad! Even the defendants didn't get to see the full report, and we get even less:

Notably, I wasn’t allowed to read the full expert report, only listen to him in trial, and even his expert testimony in trial is under seal, rather than in the public record. This is surely intentional -- if the code examples were released publicly, the internet would have viciously mocked the analysis. I still have a level of morbid curiosity about the several hundred-page report.

Several hundred pages to "prove" that software was "non-literally copied" because it does the same thing in similar ways, all by abstracting chunks of code into their platonic forms and comparing them? Well, I guess those experts have to earn their paycheques somehow.

from the after-all,-it's-only-democracy-that's-at-stake dept

The fact that Techdirt has been writing about e-voting problems for sixteen years, and that the very first post on the topic had the headline "E-voting is Not Safe," gives an indication of what a troubled area this is. Despite the evidence that stringent controls are still needed to avoid the risk of electoral fraud, some people seem naively to assume that e-voting is now a mature and safe technology that can be deployed without further thought.

In Australia, for example, e-voting is being used for the elections to the country's Senate, but the Australian Electoral Commission (AEC) has refused to release the relevant software, despite a Senate motion and a freedom of information request. Being able to examine the code is a fundamental requirement, since there is no way of knowing what "black box" e-voting systems are doing with the votes that are entered. A story by the Australian Associated Press (AAP) explains why AEC is resisting:

The Australian Electoral Commission referred AAP to a decision by the Administrative Appeals Tribunal [AAT] in December 2015.

In that decision, relating to a freedom of information request, the tribunal found the release of the source code for the software known as Easycount would have the potential to diminish its commercial value.

"The tribunal is satisfied that the Easycount source code is a trade secret and is exempt from disclosure," the AAT said.

Placing trade secrets above the public interest is a curious choice, to say the least. It seems particularly questionable given Australia's recent experience with e-voting software problems:

When the ACT Electoral Commission released its counting code, researchers at Australian National University found three bugs which were subsequently fixed before an election.

When the Victorian Electoral Commission made its electronic voting protocol available to researchers in 2010, University of Melbourne researchers identified a security weakness which was then rectified before the state election.

As Techdirt readers well know, bugs are commonplace, and there's no particular shame if some are found in a complex piece of software. But refusing to allow independent researchers to look for those bugs so that they can be fixed is inexcusable when the integrity of the democratic selection process is at stake.

from the but-that's-not-true dept

When the Federal Circuit Appeals Court (CAFC) initially made its nutty ruling saying that APIs are copyright-eligible subject matter, many in the copyright and tech world were not only shocked, but were tremendously worried about how the ruling would impact innovation and software development going forward -- while supporters on the other side brushed off such concerns.

Now that the second trial has found that, even if APIs are covered by copyright, Google's use of the Java APIs in Android was fair use, perhaps it's only fair that people on the losing side are lashing out in the same manner as people on the other side did after the CAFC ruling.

Annette Hurst, the lawyer who led the case on the Oracle side, posted her thoughts to LinkedIn, claiming that the ruling represents the "death of free software," and, more specifically, saying that the ruling "killed" the GPL (General Public License, even though at the trial one witness insisted it was the Gnu Public License). From reading her post, it appears that she either doesn't understand that software and APIs are not the same thing, or that she just doesn't care. The whole argument is strange, and starts off with a bizarre, and simply wrong, assertion that "no copyright expert" would have predicted this result:

The developer community may be celebrating today what it perceives as a victory in Oracle v. Google. Google won a verdict that an unauthorized, commercial, competitive, harmful use of software in billions of products is fair use. No copyright expert would have ever predicted such a use would be considered fair. Before celebrating, developers should take a closer look. Not only will creators everywhere suffer from this decision if it remains intact, but the free software movement itself now faces substantial jeopardy.

Except, of course, tons of copyright experts predicted exactly this result (and many more argued that APIs should not be subject to copyright at all). Famed copyright scholar Pam Samuelson has been writing extensively about the case, focusing both on why APIs should not be covered by copyright (and, why basically every other court has agreed) as well as why, even if it is covered, it's fair use. Hell, she even wrote a response to the Hurst piece, explaining why Hurst was wrong. It's weird for Hurst to take a position that actually seems at odds with a huge number of copyright experts, and then state that none would take the position that many did.

From there, she appears to misunderstand the point made by the other side in the very case she led:

While we don't know what ultimately swayed the jury, Google's narrative boiled down to this: because the Java APIs have been open, any use of them was justified and all licensing restrictions should be disregarded. In other words, if you offer your software on an open and free basis, any use is fair use.

If that narrative becomes the law of the land, you can kiss GPL goodbye.

Except she's exaggerating here and misrepresenting the key issues in the case. No one was arguing, as she implies, that any software that is described as "free and open" or that is using the GPL means that any use is fair. Again, she's conflating APIs with actual software. The ruling doesn't impact software the way she thinks it does because she doesn't seem to want to acknowledge that APIs are not software. They're just a structure -- a table of contents effectively.

No business trying to commercialize software with any element of open software can afford to ignore this verdict. Dual licensing models are very common and have long depended upon a delicate balance between free use and commercial use. Royalties from licensed commercial distribution fuel continued development and innovation of an open and free option. The balance depends upon adherence to the license restrictions in the open and free option. This jury's verdict suggests that such restrictions are now meaningless, since disregarding them is simply a matter of claiming "fair use."

This is simply not true. The case revolved around the fact that the API and its "declaring code" are fundamentally different from the actual source code within the operating system. They serve an entirely different purpose. Part of the reason the reuse of an API is considered fair use is that very nature: the API is functional -- it's like a pointer or a reference, rather than an actual bit of code. It's only if you don't understand that the two things are different that this ruling leads to the problems Hurst describes. A case with the same facts, but where straight-up source code was copied, would face a much tougher uphill battle on the fair use front.
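To make that distinction concrete, here's a simplified, hypothetical sketch of what "declaring code" versus "implementing code" looks like in Java. This is an invented example (the class and method names are made up), not the actual Java source at issue in the case:

```java
// Hypothetical example -- not the real java.lang source code.
public class StringUtils {
    // "Declaring code": the method's name, parameter types, and return
    // type. This is the part a compatible platform must reproduce so
    // existing programs keep working -- the "table of contents" entry.
    public static boolean isEmpty(String s) {
        // "Implementing code": the actual expression of how the work
        // gets done. A reimplementation writes its own version of this
        // body; only the declaration above needs to match.
        return s == null || s.length() == 0;
    }
}
```

The fair use question turned on copying declarations like the signature above, not on copying method bodies.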

Developers beware. You may think you got a win yesterday. But it's time to think about more than your desires to copy freely when you sit down at a keyboard.

Once again, this shows a rather unfortunate ignorance of how coding works. It's not about a desire to "copy freely." It's about building amazing and innovative services, and making use of APIs to increase interoperability, which increases value. Copying an API structure is also largely about making developers comfortable in new environments. You know, like how Oracle copied SQL from IBM: lots of people understood SELECT-FROM-WHERE, and it made little sense to create a relational database that didn't use that structure. It's not about copying freely. It's about interoperability.

And, really, the idea that an Oracle lawyer is "concerned" about the future of the GPL is fairly laughable. Thankfully, many people have weighed in in the comments -- including plenty who are quite familiar with the GPL and software development -- to explain to Hurst why she's wrong. Somehow, I think she has some fairly strong reasons to ignore those responses.

The US government has made numerous attempts to obtain source code from tech companies in an effort to find security flaws that could be used for surveillance or investigations.

The government has demanded source code in civil cases filed under seal but also by seeking clandestine rulings authorized under the secretive Foreign Intelligence Surveillance Act (FISA), a person with direct knowledge of these demands told ZDNet. We're not naming the person as they relayed information that is likely classified.

With these hearings held in secret and away from the public gaze, the person said that the tech companies hit by these demands are losing "most of the time."

That's hardly heartening. The DOJ would only go so far as to confirm this has happened before, likely because there's no way to deny it. The documents from the Lavabit case have been made public -- with the DOJ using a formerly-sealed document to hint at what could be in store for Apple if it refuses to write FBiOS for it.

Unfortunately, because of the secrecy surrounding the government's requests for source code -- and the court where those requests have been made -- it's extremely difficult to obtain outside confirmation. Whittaker contacted more than a dozen Fortune 500 companies about the unnamed official's claims and received zero comments.

A few, however, flatly denied ever having handed over source code to the US government.

Cisco said in an emailed statement: "We have not and we will not hand over source code to any customers, especially governments."

IBM referred to a 2014 statement saying that the company does not provide "software source code or encryption keys to the NSA or any other government agency for the purpose of accessing client data." A spokesperson confirmed that the statement is still valid, but did not comment further on whether source code had been handed over to a government agency for any other reason.

Cisco is likely still stinging from leaked documents showing its unwitting participation in an NSA unboxing photo shoot and has undoubtedly decided to take a stronger stance against government meddling since that point. As for IBM, its statement is a couple of years old and contains a major qualifying statement.

Previously-leaked documents somewhat confirm the existence of court orders allowing the NSA to perform its own hardware/software surgery. Presumably, the introduction of backdoors and exploits is made much easier with access to source code. Whittaker points to Kaspersky Lab's apparent discovery of evidence that the NSA possesses source code from "several hard drive manufacturers" -- another indication that the government's history of demanding source code from manufacturers and software creators didn't begin (or end) with Lavabit.

The government may be able to talk the FISA court into granting these requests, given that its purview generally only covers foreign surveillance (except for all the domestic dragnets and "inadvertent" collections) and national security issues. The FBI's open air battle with Apple has already proceeded far past the point that any quasi-hearing in front of the FISC would have. That's the sort of thing an actually adversarial system -- unlike the mostly-closed loop of the FISA court -- tends to result in: a give-and-take played out (mostly) in public, rather than one party saying "we need this" and the other applying ink to the stamp.

from the new-goal:-a-leaker-a-year-for-the-next-decade! dept

The recent leak of the XKeyscore source code has raised an interesting question. Is there a second leaker? The report written by Jacob Appelbaum and others for DasErste.de detailed the NSA's targeting of Tor users (and even those who just read about Tor) and the harvesting of their communications, but very explicitly did not state that Snowden was the source of this code snippet.

Another expert said that s/he believed that this leak may come from a second source, not Edward Snowden, as s/he had not seen this in the original Snowden docs; and had seen other revelations that also appeared independent of the Snowden materials.

And, since Cory said it, I do not believe that this came from the Snowden documents. I also don't believe the TAO catalog came from the Snowden documents. I think there's a second leaker out there.

The TAO catalog was originally revealed by Der Spiegel with reporting by (again) Jacob Appelbaum and Greenwald/Snowden partner Laura Poitras. Nothing in the story explicitly states its origin, although the inclusion of Poitras at least suggests the documents can be traced back to Snowden's stash.

If so, then that's two people who have seen Snowden's documents, including one with ongoing access, claiming there's a second leaker. And if they're right, the NSA's problem, instead of gradually disappearing from the public eye, will only become more severe. Coupled with the recent leak published by the Washington Post, which shows the agency harvests and stores plenty of unminimized non-terrorist communications with its 702 collections (the same collection the Privacy and Civil Liberties Oversight Board recently found to be more law-abiding and less Constitutionally unsound than the bulk metadata program), the agency now looks worse than ever. It was completely unprepared for the Snowden revelations, but at least by this point, it has a general feel for the leak release process. Now, it possibly has another leaker offering new data and info to journalists, one who is a totally unknown quantity.

At this point, all anyone has is speculation. If there's another leaker, it's doubtful he or she will make his/her identity known any time soon. Snowden revealed himself as a leaker and that hasn't exactly worked out well for him.

But there are also some indications that this snippet of code came from Snowden's leaks. Errata Security (the group of bloggers that exposed the fakery behind NBC's pre-Winter Olympics "report" claiming all visitors to Sochi would be instantly hacked) has done its own fisking of the code snippet and come to the following conclusions.

1. The signatures are old (2011 to 2012), so it fits within the Snowden timeframe, and is unlikely to be a recent leak.

2. The code is weird, as if they are snippets combined from training manuals rather than operational code. That would mean it is “fake”.

3. The story makes claims about the source that are verifiably false, leading us to believe that they may have falsified the origin of this source code.

4. The code is so domain specific that it probably is, in some fashion, related to real XKeyScore code – if fake, it's not completely so.

Errata Security notes some of the oddities of the code, pointing out that it looks more like something pulled from a training exercise or manual rather than directly from XKeyscore itself. More investigation by Errata Security and The Grugq (another security expert) apparently uncovered the fact that the text was pulled from a document (pdf, docx, etc.) rather than an actual source file. But the aspect that seems to indicate this is part of Snowden's stash is the timeline.

As this post to the Tor developer mailing list describes, the signatures in the code are old. The earliest date this file can be valid is 2011-08-08, when the Linux journal reported on TAILS. The latest date might be 2012-09-21, just before a new server was added to Tor that isn't in the XKeyScore list. Since this is shortly before Snowden first tried to contact Greenwald, the dates sync up.

If the code is unrecognizable by those who've had access to the documents, that's probably due to it being compiled from various pages and mocked up into a short code excerpt. Rob Graham at Errata Security doesn't feel it's necessarily fake, but believes the origin of the quoted source code may have been obscured -- hence, no citation of Snowden's leaks or any acknowledgment of existing NSA files.

Of course, this could mean another leaker is simply hiding behind Snowden, pulling files from roughly the same date range in order to deliver new leaks while remaining undetected. If there is another leaker, my guess is he/she will be discovered rather than coming out publicly.

New leaker or no, the one-two punch of published leaks by Jacob Appelbaum and Barton Gellman (of the Washington Post) shows that the NSA is doing everything it's been accused of -- namely, hoovering up and holding onto incidental communications (even those originating from "untargeted" American citizens) and viewing anyone with even a passing interest in anonymity or encryption as "suspicious."