FACEBOOK HAS MADE A BIG DEAL of its transparency drive in the political advertising arena – political ads (and certain other categories) are supposed to be accompanied by a clear indication of who paid for them, in order to stop the spread of disinformation.

But when Vice News tested the system by applying to run ads falsely marked as “Paid for by” all 100 sitting U.S. senators, every single one was approved. “All 100 sailed through the system, indicating that just about anyone can buy an ad identified as ‘Paid for by’ a major U.S. politician,” the publication said (it didn’t follow through on buying the ads, incidentally). “What’s more, all of these approvals were granted to be shared from pages for fake political groups such as ‘Cookies for Political Transparency’ and ‘Ninja Turtles PAC’.”

The British journalist Carole Cadwalladr also spotted an ad that claimed to have been paid for by… Cambridge Analytica, a (supposedly) now-defunct operation that was hurled off Facebook’s platform with extreme prejudice over its abuse of the system. The ad used an image from the law-breaking pro-Brexit campaign BeLeave and linked to Leave.eu, though as Cadwalladr noted, “anybody can write anything here”.

As Vice News noted, the ineffectiveness of Facebook’s screening has the effect of giving fake ads an extra sheen of legitimacy. It certainly makes a mockery of the company’s efforts to stop being used as a political disinformation platform.

Dying to know what Facebook’s new lobbying chief, Nick “Time to Build Bridges Between Politics and Tech” Clegg, thinks of all this.

FACEBOOK FAILED TO CONVINCE THE BRITISH privacy regulator to reduce its fine over the Cambridge Analytica affair. In the event, the ICO levied the maximum fine it could: £500,000. Not exactly meaningful for a company such as Facebook, but as a statement of intent it got the message across – Information Commissioner Elizabeth Denham said plainly that she wished she could penalise the company more, but the offence took place in the pre-GDPR age.

THERE WAS A GOOD PIECE IN THE FT the other day, regarding big tech’s obsession with targeted advertising. The author, Rana Foroohar, noted that “it is a business model that causes endless collateral damage, as evidenced by the weekly drumbeat of scandal. But it is one that they will never walk away from voluntarily. It is simply too profitable.”

She continued: “I suspect the answer is transparency. Regulators need to force platforms to make their algorithms public. Only then will we fully understand the way in which the targeted advertising model is undermining liberal democracy, and begin to garner enough public support to force the attention merchants to shift their models.”

I’m not sure the algorithms necessarily need to be made public – not least because, if the people who designed them don’t fully understand them (as is increasingly the case with machine-learning tech), the public won’t grok them either.

Perhaps the regulators can audit the algorithms behind closed doors, which is a much more likely approach to get the firms on board – it’s not for nothing that they worry about rivals nicking their ideas. What is certainly needed, though, is for the companies to explain what the algorithms are trying to achieve and how they do this.

This requirement is already included in the GDPR, to a degree, in the bit about automated decision-making (Article 13), which says people can demand “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” Looking forward to seeing how data protection authorities and the courts interpret this one in practice.

AMERICANS NOW HAVE THE RIGHT TO REPAIR their cars, phones and smart home equipment even if this means breaking the “locks” on the software that’s embedded in them. The change comes courtesy of new rules issued by the Copyright Office, and returns control of such devices to people who would always have had this freedom to tinker, if it weren’t for that pesky software (I’m not including smart home gear in this statement, obviously, as it didn’t exist before software).

On the other hand, it’s bad news for manufacturers who want to be able to charge fat fees for repairs carried out through their preferred technicians, or who want to sell you a new gadget every time the one you’ve already bought gets buggy or broken.

If you’d like me to speak about digital rights at your event or provide advice for your business, drop me an email at david@dmeyer.eu.

SOME A-GRADE TROLLING FROM AUSTRALIA’S NSA equivalent, the Australian Signals Directorate, which celebrated the creation of its Twitter account with this first tweet: “Hi internet, ASD here. Long time listener, first time caller.”

THE GERMAN GOVERNMENT’S ATTEMPT TO BLOCK publication of leaked military reports about Afghanistan on the basis of copyright infringement is basically nonsense, according to the advocate general of the Court of Justice of the European Union.

“Such ‘raw’ information, that is to say, information presented in an unaltered state, is excluded from copyright, which protects only the manner in which ideas have been articulated in a work,” opined AG Maciej Szpunar. And even if copyright did apply: “Although the State is entitled to benefit from the civil right of ownership, such as the right to intellectual property, it cannot rely on the fundamental right to property as a means of restricting another fundamental right such as freedom of expression. The State is not a beneficiary of fundamental rights, but is rather under an obligation to safeguard fundamental rights.”

About the author

I’m David Meyer, a journalist with more than a decade’s experience writing about technology. I’ve covered many topics in that time, though I’m most interested in the policy decisions and technological breakthroughs that will shape our world. You can find me on Twitter as @superglaze and on Facebook as @davidmeyerwrites.