Yoshitaka Sakurada, 68, the deputy chief of the government’s cybersecurity strategy office, has provoked astonishment by admitting he has never used a computer in his professional life.

In a stunning admission, Japan’s new minister in charge of cybersecurity has revealed that he has never used a computer.
During a question-and-answer session, Yoshitaka Sakurada told a Lower House cabinet committee meeting that he had never found the need to use one during his career.
“I don’t use computers because since I was 25 I have been in a position of authority where secretaries and employees handle such tasks for me,” he said, according to the Japan Times.
The Associated Press added that the 68-year-old was equally uncertain when asked about cybersecurity at nuclear power plants, appearing to not know what a USB drive is.
Lawmakers reportedly laughed at his replies, which were broadcast live on national TV.
Sakurada has been in office just over a month after being appointed by Prime Minister Shinzo Abe as part of a cabinet reshuffle.

How The Wall Street Journal is preparing its journalists to detect deepfakes


Artificial intelligence is fueling the next phase of misinformation. The new type of synthetic media known as deepfakes poses major challenges for newsrooms when it comes to verification. This content is indeed difficult to track: Can you tell which of the images below is a fake?

(Check the bottom of this story for the answer.)
We at The Wall Street Journal are taking this threat seriously and have launched an internal deepfakes task force led by the Ethics & Standards and the Research & Development teams. This group, the WSJ Media Forensics Committee, comprises video, photo, visuals, research, platform, and news editors who have been trained in deepfake detection. Beyond this core effort, we’re hosting training seminars with reporters, developing newsroom guides, and collaborating with academic institutions such as Cornell Tech to identify ways technology can be used to combat this problem.
“Raising awareness in the newsroom about the latest technology is critical,” said Christine Glancey, a deputy editor on the Ethics & Standards team who spearheaded the forensics committee. “We don’t know where future deepfakes might surface so we want all eyes watching out for disinformation.”
Here’s an overview for journalists of the insights we’ve gained and the practices we’re using around deepfakes.
How are most deepfakes created?
The production of most deepfakes is based on a machine learning technique called “generative adversarial networks,” or GANs. Forgers can use this approach to swap the faces of two people — for example, those of a politician and an actor. The algorithm looks for instances where both individuals display similar expressions and facial positioning; in the background, artificial intelligence algorithms search for the best match so the two faces can be blended convincingly.
Because research about GANs and other approaches to machine learning is publicly available, the ability to generate deepfakes is spreading. Open source software already enables anyone with some technical knowledge and a powerful-enough graphics card to create a deepfake.
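To make the adversarial setup concrete, here is a minimal, illustrative sketch of a GAN training loop written in PyTorch. It is not the code behind any particular deepfake tool: the tiny fully connected networks, the random placeholder data, the layer sizes, and the step count are all assumptions chosen so the example runs quickly, whereas real face-swapping systems train much larger convolutional models on thousands of face images.

```python
# Minimal GAN sketch (illustrative only). Toy fully connected networks and
# random placeholder data stand in for the large convolutional models and
# face datasets that real deepfake tools use. Requires PyTorch.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28           # toy sizes, not production values

# Generator: maps random noise to a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image vector looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):                       # real systems train far longer
    real = torch.rand(32, image_dim) * 2 - 1  # placeholder for real face crops
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The two networks improve in tandem: as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones, which is the dynamic that makes GAN-generated forgeries increasingly hard to detect.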
Some academic institutions such as New York University are taking unique approaches to media literacy. One class at the Interactive Telecommunications Program (ITP) at NYU Tisch — “Faking the News” — exposes students to the dangers of deepfakes by teaching them how to forge content using AI techniques. “Studying this technology helps us not only understand the potential implications but also the limitations,” said Chloe Marten, a product manager at Dow Jones and master’s candidate who enrolled in the NYU class.
Techniques used to create deepfakes
Deepfake creators can use a variety of techniques. Here are a few:
Faceswap: An algorithm can seamlessly insert the face of a person into a target video. This technique could be used to place a person’s face on an actor’s body and put them in situations that they were never really in.
Lip sync: Forgers can graft a lip-syncing mouth onto someone else’s face. Combining the footage with new audio could make it look like they are saying things they are not.
Facial reenactment: Forgers can transfer facial expressions from one person into another video. With this technique, researchers can toy with a person’s appearance and make them seem disgusted, angry, or surprised.

Motion transfer: Researchers have also discovered how to transfer the body movements of a person in a source video to a person in a target video. For instance, they can capture the motions of a dancer and make target actors move in the same way. In collaboration with researchers at the University of California, Berkeley, Journal correspondent Jason Bellini tried this technique out for himself and ended up dancing like Bruno Mars.

Journalists have an important role in informing the public about the dangers and challenges of artificial intelligence technology; reporting on these issues is one way to raise awareness.

From “Deepfake Videos Are Getting Real and That’s a Problem,” The Wall Street Journal, October 15, 2018.

How can you detect deepfakes?
We’re working on solutions and testing new tools that can help detect or prevent forged media. Across the industry, news organizations can consider multiple approaches to help authenticate media if they suspect alterations.
“There are technical ways to check if the footage has been altered, such as going through it frame by frame in a video editing program to look for any unnatural shapes and added elements, or doing a reverse image search,” said Natalia V. Osipova, a senior video journalist at the Journal. But the best option is often traditional reporting: “Reach out to the source and the subject directly, and use your editorial judgment.”
Examining the source
If someone has sent in suspicious footage, a good first step is to try to contact the source. How did that person obtain it? Where and when was it filmed? Getting as much information as possible, asking for further proof of the claims, and then verifying is key.
If the video is online and the uploader is unknown, other questions are worth exploring: Who allegedly filmed the footage? Who published and shared it, and with whom? Checking the metadata of the video or image with tools like InVID or other metadata viewers can provide answers.
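InVID is a browser-based toolkit rather than a script, but the underlying idea of dumping whatever metadata survives can be approximated in a few lines. The sketch below, which assumes the Pillow library and a hypothetical file name, prints the EXIF tags of a still image; many devices record the capture time and camera model there, although social platforms typically strip this data on upload, so an empty result proves nothing.

```python
# Minimal sketch: dumping EXIF metadata from a still image with Pillow.
# The file name is hypothetical; field availability varies by device, and
# most social platforms strip metadata on upload, so absence proves nothing.
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)   # map numeric tag IDs to readable names
        print(f"{name}: {value}")

print_exif("suspicious_photo.jpg")        # hypothetical file name
```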
In addition to going through this process internally, we collaborate with content verification organizations such as Storyful and the Associated Press. This is a fast-moving landscape with emerging solutions appearing regularly in the market. For example, new tools including TruePic and Serelay use blockchain to authenticate photos. Regardless of the technology used, the humans in the newsroom are at the center of the process.
“Technology alone will not solve the problem,” said Rajiv Pant, chief technology officer at the Journal. “The way to combat deepfakes is to augment humans with artificial intelligence tools.”
Finding older versions of the footage
Deepfakes are often based on footage that is already available online. Reverse image search engines like Tineye or Google Image Search are useful for finding possible older versions of the video and sussing out whether any aspect of it was manipulated.
Examining the footage
Editing programs like Final Cut enable journalists to slow footage down, zoom in on the image, and step through it frame by frame or pause it repeatedly. This helps reveal obvious glitches: glimmering and fuzziness around the mouth or face, unnatural lighting or movements, and differences between skin tones are telltale signs of a deepfake.
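The same frame-by-frame pass can be roughed out in code. The sketch below, assuming OpenCV and an invented file name and threshold, saves any frame that differs sharply from the one before it, a crude way to surface abrupt, localized edits worth inspecting by hand.

```python
# Minimal sketch: stepping through a video frame by frame with OpenCV and
# saving frames that differ sharply from the previous one. The file name and
# threshold are illustrative; flagged frames still need human review.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")   # hypothetical file name
prev_gray = None
index = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        diff = cv2.absdiff(gray, prev_gray)
        if diff.mean() > 20:                 # arbitrary "big jump" threshold
            cv2.imwrite(f"flagged_frame_{index:05d}.png", frame)
    prev_gray = gray
    index += 1

cap.release()
```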
As an experiment, here are some glitches the Journal’s forensics team found during a training session using footage of Barack Obama created by video producers at BuzzFeed.

The box-like shapes around the teeth reveal that this is a picture stitched onto the original footage.

Unnatural movements like a shifting chin and growing neck show that the footage is faked.

In addition to these facial details, there might also be small edits in the foreground or background of the footage. Does it seem like an object was inserted into or deleted from a scene in a way that might change the context of the video (e.g., a weapon, a symbol, a person)? Again, glimmering, fuzziness, and unnatural light can be indicators of faked footage.
In the case of audio, watch out for unnatural intonation, irregular breathing, metallic sounding voices, and obvious edits. These are all hints that the audio may have been generated by artificial intelligence. However, it’s important to note that image artifacts, glitches, and imperfections can also be introduced by video compression. That’s why it is sometimes hard to conclusively determine whether a video has been forged or not.
The democratization of deepfake creation adds to the challenge
A number of companies are creating technologies — often for innocuous reasons — that nonetheless could eventually end up being used to create deepfakes. Some examples:
Object extraction
Adobe is working on Project Cloak, an experimental tool for object removal in video, which makes it easy for users to take people or other details out of the footage. The product could be helpful in motion picture editing. But some experts think that micro-edits like these — the removal of small details in a video — might be even more dangerous than blatant fakes since they are harder to spot.

Weather alteration
There are algorithms for image translation that enable users to alter the weather or time of day in a video, like this example developed by chip manufacturer Nvidia using generative adversarial networks. These algorithms could be used for post-production of movie scenes shot during days with different weather. But this could be problematic for newsrooms and others, because in order to verify footage and narrow down when videos were filmed, it is common to examine the time of day, weather, position of the sun, and other indicators for clues to inconsistencies.
Both Adobe and Nvidia declined to comment.
Artificial voices
Audio files can also be manipulated automatically: One company, Lyrebird, creates artificial voices based on audio samples of real people. One minute of audio recordings is enough to generate an entire digital replica that can say any sentence the user types into the system. Applications of this technology include allowing video game developers to add voices to characters.
Off-the-shelf consumer tools that make video and audio manipulation easier may hasten the proliferation of deepfakes. Some of the companies behind these tools are already considering safeguards to prevent misuse of their tech. “We are exploring different directions including crypto-watermarking techniques, new communication protocols, as well as developing partnerships with academia to work on security and authentication,” said Alexandre de Brébisson, CEO and cofounder of Lyrebird.
Deepfakes’ ramifications for society
While these techniques can be used to significantly lower costs of movie, gaming, and entertainment production, they represent a risk for news media as well as society more broadly. For example, fake videos could place politicians in meetings with foreign agents or even show soldiers committing crimes against civilians. False audio could make it seem like government officials are privately planning attacks against other nations.
“We know deepfakes and other image manipulations are effective — this kind of fakery can have immediate repercussions,” said Roy Azoulay, founder and CEO of Serelay, a platform that enables publishers to protect their content against forgeries. “The point we need to really watch is when they become cheap, because cheap and effective drives diffusion.”
Lawmakers like senators Mark Warner and Marco Rubio are already warning of scenarios like these and working on possible strategies to avoid them. What’s more, deepfakes could be used to deceive news organizations and undermine their trustworthiness. Publishing an unverified fake video in a news story could stain a newsroom’s reputation and ultimately lead to citizens further losing trust in media institutions. Another danger for journalists: personal deepfake attacks showing news professionals in compromising situations or altering facts — again aimed at discrediting or intimidating them.
As deepfakes make their way into social media, their spread will likely follow the same pattern as other fake news stories. In an MIT study investigating the diffusion of false content on Twitter published between 2006 and 2017, researchers found that “falsehood diffused significantly farther, faster, deeper, and more broadly than truth in all categories of information.” False stories were 70 percent more likely to be retweeted than the truth and reached 1,500 people six times more quickly than accurate articles.
What’s next
Deepfakes are not going away anytime soon. It’s safe to say that these elaborate forgeries will make verifying media harder, and this challenge could become more difficult over time.
“We have seen this rapid rise in deep learning technology and the question is: Is that going to keep going, or is it plateauing? What’s going to happen next?” said Hany Farid, a photo-forensics expert, who will join the University of California, Berkeley faculty next year. He said the next 18 months will be critical: “I do think that the issues are coming to a head,” adding that he expects researchers will have made advances before the 2020 election cycle.
Despite the current uncertainty, newsrooms can and should follow the evolution of this threat by conducting research, by partnering with academic institutions, and by training their journalists how to leverage new tools.
And here’s the solution to our deepfake quiz above: The footage on the left was altered with the help of AI.

A team of researchers used a form of facial reenactment called “Deep Video Portraits” to transfer the facial movements of Barack Obama to Ronald Reagan’s face. Here’s what that looks like:

Francesco Marconi is R&D chief at The Wall Street Journal. Till Daldrup is a research fellow at the Journal and a master’s candidate at NYU’s Studio 20 journalism program.

Workers of the world, log on!
Technology may help to revive organised labour

Trade unions are harnessing the same force that caused their decline

“IF THEY STALL, we will hit them where it hurts.” Jörg Sprave is a jovial German with a winning smile but he leaves no doubt that he is serious. If Google, YouTube’s owner, does not budge, he will call a strike. Mr Sprave runs “The Slingshot Channel”, dedicated to rubber-powered weapons, which boasts over 2m subscribers. He is also the founder of the YouTubers Union, which counts over 16,000 members. He launched the organisation in March after YouTube stopped showing adverts alongside many of his and others’ clips, following pressure from advertisers. It caused his income to drop from $6,500 to $1,500 a month. The group’s main demand is to stop such “demonetisation”.

It is easy to dismiss Mr Sprave as a crank. His channel walks a fine line between pranksterism and gun-nuttery. Membership of his union is simply a matter of signing up to a Facebook group and it is unlikely that other members would follow his call to take their content off YouTube if it failed to bend to his wishes. But the YouTubers Union does symbolise a new stage in the interplay between technological progress and union power. Unions have been in long-term decline across the rich world for decades—not least because of technological change. Now tech, from social media to artificial intelligence (AI), may help organised labour make a comeback.
Union gap
A revival of trade unionism would seem unlikely. Before the mid-19th century, almost no workers were unionised. Then industrialisation and urbanisation brought workers into closer proximity, providing both an opportunity to organise and a reason—to negotiate pay and conditions. America’s union-membership rate hit 10% of employees by 1915 before peaking at 30% by about 1950. Sweden reached around 40% in 1930, as did Britain by the 1950s, when 10m workers belonged to a union. The rapid decline that then set in took almost everyone by surprise. Across rich countries unionisation has fallen sharply (see chart 1). Only one in ten American employees is in a union today. The median membership rate in the OECD is about 18%, down from a peak of more than 50% in the early 1980s.

There are many explanations for the rise and fall of unions. Some theories stress the role of restrictive laws. The earliest legal judgments on unions in America followed English law in holding them to be criminal conspiracies, whose intent was to raise prices and to inhibit trade. The legal environment for unions gradually became friendlier until, towards the end of the 20th century, the law turned again. In the 1980s, following the lead of Margaret Thatcher in Britain and Ronald Reagan in America, governments sought to combat strikes, restrictive working practices and inflationary wage demands with laws that greatly restricted union powers.
But robust research showing a strong link between legal changes and membership is scarce. A paper by William Brown of Cambridge University and his colleagues, which looked at the period between 1979 and 1997, supported the notion that “British legislative change has not exerted a major influence on union membership.” Indeed, unions only started to flourish decades after they were decriminalised. And their power began to wither long before the stricter laws of the 1980s.
A flowering and fading of “class consciousness” is another explanation offered by historians for unionism’s ups and downs, though this is hard to measure. A more convincing theory, for which there is some empirical support, is that the state has obviated the need for unions by doing their job for them. Most rich countries now have guaranteed minimum wages. In many places workers’ rights have been enshrined in law and extended to include things such as parental leave and sick pay. What is left for unions to bargain over?
Yet the rise and fall of union membership has followed such a similar pattern in so many countries that a structural explanation, with technological change at its heart, is the most compelling of all. This interpretation also shows why a resuscitation of unionisation will be difficult.
Technology drove the ascent of industrial capitalism in the mid-19th century and changed patterns of employment. Under the “putting out” system of pre-industrial capitalism, workers were often sole traders who laboured at home. That made organising impractical. As a more formal system of employment in factories or mines became the norm, workers were lumped together, making it easier to organise. It also became more obvious to workers who was exploited and who was doing the exploiting.
Factors of production
Changing patterns of investment during the Industrial Revolution handed more power to organised labour, helping unions to grow. In the 19th century bosses began to spend vast sums on factories, mines and railways (see chart 2). As the amount of fixed capital grew, workers could exert greater power. Tim Mitchell argues in his book, “Carbon Democracy”, that coalminers could exploit new choke-points in an economy. Getting coal out of the ground required small groups of workers at the pit face who were not easy to replace. This gave them huge leverage because such was the dependence on coal throughout the economy, from power stations to railways, that a strike could soon bring a country to a grinding halt.
Over the past 30 years technological change has caused unions to fall away. The cost of collecting and processing information has fallen, making it easier to assess the output of individual workers. In America the share of jobs with some element of performance-related pay rose from 30% in the late 1970s to more than 40% in the 1990s. If pay corresponds to personal output, employees may feel that their energies are better directed towards working harder than to organising with others.
In the rich world capital-intensive industries such as manufacturing and mining, the base of unionisation, have shrunk. They have been replaced by the services sector, which is intrinsically less welcoming to unions. Rich economies now rely more on “intangibles”, such as software and patents. It is easier to move a call-centre to a different location, including to a new country, than a shipyard. Workers who are happy that their jobs exist at all are unlikely to bargain for more.
The decline of unions has revived arguments over the benefits they offer to workers and to the economy as a whole. The number of working days lost to strikes in the rich world has been dropping alongside declining union power. That boosts annual output. No longer is there much risk of runaway inflation as unions and employers battle over wage rises. Weaker unions can lower entry barriers to a labour market, making it easier for the young, women and ethnic minorities to find employment.
Left-leaning wonks counter that the decline of unions is responsible for the drop in the “labour share”—the proportion of GDP accruing to workers in the form of pay and benefits. The evidence is mixed. Research on the British economy from Andy Haldane, the chief economist of the Bank of England, finds that a rise of ten percentage points in the rate of unionisation raises wage growth by around 0.25 percentage points a year. But a paper for the Brookings Institution, a think-tank, that looks at American data finds a “statistically imprecise relation between cross-industry changes in unionisation rates and sectoral declines in payroll shares”.

Even if the advantages to workers are not clear cut, support for organised labour is rising again (see chart 3). And technology may again play a central role in helping a revival—particularly in America, where activists are trying inventive new ways to organise workers.
Use of social media is taking the place of the shopfloor meeting in what is called “connective action”. Facebook, Reddit and WhatsApp, as well as tools such as Hustle, a texting service, allow labour groups to do three things: collect information, co-ordinate workers and get the word on campaigns out to the wider world.
Start with information. Although they work independently, many Uber drivers are active in chat groups and other online forums. The ride-hailing firm often tests new features of its app on a small group of drivers—without telling them what is going on. Online communications are an attempt to overcome this “information disadvantage”, says Alex Rosenblat, author of “Uberland”, a new book about the firm.
Comparing notes is also widespread among users of global crowdsourcing platforms such as Mechanical Turk and Freelancer, where digital labour is traded. Of 658 online workers in sub-Saharan Africa and South-East Asia interviewed by Mark Graham and his colleagues at Oxford University, 58% said that they are in digital contact with other workers at least once a week, mostly on social media. They usually talk about how to build a career online and avoid scams, but also about prices for jobs and how to divvy them up.
The logic of connective action
As for the second objective, co-ordination, without digital tools teachers’ strikes in West Virginia and other American states earlier this year would not have been as successful as they were, explains Jane McAlevey, a longtime organiser and author of several books on unions in America. In West Virginia teachers set up a Facebook group that was open only to invited colleagues. Nearly 70% of the state’s 35,000 teachers joined. The group became the hub of discussions on what to demand and how to organise protests.
The West Virginia strike is a good example of the third objective: getting the word out. The Facebook group turned into a factory for hashtags and “memes”, memorable images or video clips that spread virally online. The same sort of thing happened when Starbucks, a chain of coffee shops, refused to let baristas show their tattoos. Management caved in after employees took pictures of their body art and uploaded them to social media.
However, services such as Facebook and WhatsApp are not designed for mass activism. That means they have limitations. They lack tools to move beyond discussion to more involved forms of organising. WhatsApp, which is used by many Uber drivers, limits the size of texting groups. They are also prone to misinformation and trolling. “On Facebook, if you ask about your rights when you are pregnant, only a few comments may be helpful,” says Andrea Dehlendorf of the Organisation United for Respect (OUR), which supports retail workers at Walmart and elsewhere.
As a result, activists have started to develop digital services specifically for labour groups. Coworker.org is an early instance. Founded in 2013, the website helps workers condense their demands in a petition and spread them on social media. Starbucks employees have launched several successful campaigns, and not only about tattoos. They pushed the firm to minimise “clopening”, for example—where the same person closes a store late in the evening and opens it at the crack of dawn the next day.
Reorganised labour
Coworker.org was long an isolated example. Recently similar services have flourished by mimicking the startup approach and “unbundling” the roles of official unions. These startups are parcelling the various functions of unions into a series of discrete digital alternatives. In this way a new breed of activists is changing the way that workers can organise.
Some startups aim to fulfil the role of informing workers and recruiting members. Two years ago OUR launched WorkIT, a smartphone app for Walmart workers. After signing up, users are presented with a simple chat interface where they can ask questions about the retail chain’s complex workplace regulations. Volunteers, often Walmart employees themselves, answer.
Others concentrate on helping workers voice their opinions. Union bosses have often been criticised for not paying much heed to the rank-and-file’s demands. Workership is a platform that attempts to bring structure to often freewheeling discussions online and to enable employees to pipe up without fear of repercussions (posts are anonymous). Collective-bargaining agreements, for instance, are broken down into small segments which members can discuss.
Then comes finding ways to make money to finance activities. The Independent Workers Union of Great Britain has resorted to crowdfunding its legal actions against Deliveroo, an online-delivery firm, which it accuses of having denied employment rights to its riders. TurkerView, an American website that collects reviews of clients who post jobs on Mechanical Turk and publishes them free of charge, is toying with the idea of a premium service that charges users who want fast automated access to its data.
Some of these projects are spreading. WorkIT, which licenses its system to other labour organisations, has six takers, including the Pilipino Workers Centre in Los Angeles and United Voice, an Australian union. Coworker.org has been used by employees from more than 50 companies. For Starbucks it has become a union of sorts. Over 42,000 people in 30 countries are connected via the service.
Yet, as any startup will confirm, launching a new service is much easier than expanding one. Most of the fledgling labour-tech projects rely on donations from philanthropists, socially minded investment funds and similar sources. It is not clear where the capital would come from to allow them to grow. In addition, these services lack the legal standing and political power of conventional unions, points out David Rolf of America’s Service Employees International Union.
Labour startups may need the support of existing unions if they are to turn into a force to be reckoned with. The best outcome would be if grassroots groups and conventional unions teamed up, says Ayad Al-Ani of the Alexander von Humboldt Institute for Internet and Society. Unions could become service providers for self-organising groups, helping them with things such as legal advice and lobbying.

Online to the picket line

The digital world has been embraced by some unions. Worried about the rise of crowd-working, Germany’s IG Metall, the country’s largest union, now allows self-employed workers to join. In 2015 it also launched a site to compare conditions on different crowdworking platforms, called Fair Crowd Work.
Some unions have even set up innovation units. One is HK Lab, created a year ago by the National Union of Commercial and Clerical Employees, Denmark’s biggest union. Experiments include a chatbot for member inquiries and a service centre for freelancers. America’s National Domestic Workers Alliance operates Fair Care Labs, a service to improve the lot of nannies, carers and house cleaners. It will soon launch Alia, a portable-benefits service. Clients make voluntary payments of $5 per job, which allows cleaners to get some insurance coverage and paid time off.
Labour’s lost
However promising such projects are, they are unlikely to help labour regain its erstwhile bargaining power soon. But if the digital labour movement has proven anything so far, it is that information and data are ever more powerful. Coworker.org used online polls to confirm that Uber had again cut fares across the country, thus also reducing drivers’ pay. Bad publicity is the digital equivalent of the picket line, says Michelle Miller, co-founder of Coworker.org.
Obtaining more and better data could give rise to what Fredrik Soderqvist of Unionen, a Swedish union, refers to as “predictive unionism”. His organisation is building a system that could mine information it has about its members as well as data from other sources. The idea is to offer services such as telling workers when they should ask for a raise. Algorithms could also predict the likelihood of lay-offs, if, say, a new chief executive takes over, and hence the need to get members ready to act.
Perhaps the best example for the power of data so far is Mystro, an app for drivers for ride-hailing services such as Lyft and Uber. It allows them to switch easily between services, evaluates trip requests, rejects unprofitable ones and keeps track of all kinds of information that helps drivers make better decisions.
For now, unions still look weak. Membership continues to decline. But their history shows that the relative power of labour and capital is constantly in flux. Recent decades have been tough on labour, largely as a consequence of technological change. But technology may also be the thing that helps turn their fortunes around.

This article appeared in the Briefing section of the print edition under the headline “Workers of the world, log on!”

DeepMasterPrints created by a machine learning technique have an error rate of only one in five

An image from the New York University paper, DeepMasterPrints. Photograph: Philip Bontrager

Researchers have used a neural network to generate artificial fingerprints that work as a “master key” for biometric identification systems and prove fake fingerprints can be created.
According to a paper presented at a security conference in Los Angeles, the artificially generated fingerprints, dubbed “DeepMasterPrints” by the researchers from New York University, were able to imitate more than one in five fingerprints in a biometric system that should only have an error rate of one in a thousand.
The researchers, led by NYU’s Philip Bontrager, say that “the underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.” As with much security research, demonstrating flaws in existing authentication systems is considered to be an important part of developing more secure replacements in the future.
In order to work, the DeepMasterPrints take advantage of two properties of fingerprint-based authentication systems. The first is that, for ergonomic reasons, most fingerprint readers do not read the entire finger at once, instead imaging whichever part of the finger touches the scanner.
Crucially, such systems do not blend all the partial images in order to compare the full finger against a full record; instead, they simply compare the partial scan against the partial records. That means an attacker has to match just one of tens or hundreds of saved partial fingerprints in order to be granted access.


The second is that some features of fingerprints are more common than others. That means that a fake print that contains a lot of very common features is more likely to match with other fingerprints than pure chance would suggest.
Based on those insights, the researchers used a common machine learning technique, called a generative adversarial network, to artificially create new fingerprints that matched as many partial fingerprints as possible.
The neural network not only allowed them to create multiple fingerprint images, it also created fakes which look convincingly like a real fingerprint to a human eye – an improvement on a previous technique, which created jagged, right-angled fingerprints that would fool a scanner but not a visual inspection.
They compare the method to a “dictionary attack” against passwords, where a hacker runs a pre-generated list of common passwords against a security system.
Such attacks may not be able to break into any specific account, but when used against accounts at scale, they generate enough successes to be worth the effort.
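A toy simulation illustrates why matching against many stored partial records shifts the odds so dramatically. The numbers below are invented for illustration, not taken from the NYU paper: even at a nominal false-match rate of one in a thousand per comparison, fifty stored partial prints per account push a single fake’s chance of success to roughly five percent.

```python
# Toy simulation of the "dictionary attack" intuition behind DeepMasterPrints.
# A single fake template only needs to match one of many stored partial
# records. The rates below are illustrative, not figures from the NYU paper.
import random

FALSE_MATCH_RATE = 0.001   # assumed chance a fake matches one partial record
RECORDS_PER_USER = 50      # assumed partial prints stored per enrolled finger

def attack_succeeds() -> bool:
    # The attacker wins if any one of the stored partial records matches.
    return any(random.random() < FALSE_MATCH_RATE for _ in range(RECORDS_PER_USER))

trials = 100_000
wins = sum(attack_succeeds() for _ in range(trials))
print(f"Success rate against one account: {wins / trials:.2%}")
# Expect roughly 1 - (1 - 0.001) ** 50, about 4.9%, far above the nominal 0.1%.
```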

The quantum world is a weird one. In theory, and to some extent in practice, its tenets demand that a particle can appear to be in two places at once—a paradoxical phenomenon known as superposition—and that two particles can become “entangled,” sharing information across arbitrarily large distances through some still-unknown mechanism.
Perhaps the most famous example of quantum weirdness is Schrödinger’s cat, a thought experiment devised by Erwin Schrödinger in 1935. The Austrian physicist imagined how a cat placed in a box with a potentially lethal radioactive substance could, per the odd laws of quantum mechanics, exist in a superposition of being both dead and alive—at least until the box is opened and its contents observed.
As far-out as that seems, the concept has been experimentally validated countless times on quantum scales. Scaled up to our seemingly simpler and certainly more intuitive macroscopic world, however, things change. No one has ever witnessed a star, a planet or a cat in superposition or a state of quantum entanglement. But ever since quantum theory’s initial formulation in the early 20th century, scientists have wondered where exactly the microscopic and macroscopic worlds cross over. Just how big can the quantum realm be, and could it ever be big enough for its weirdest aspects to intimately, clearly influence living things? Across the past two decades the emergent field of quantum biology has sought answers for such questions, proposing and performing experiments on living organisms that could probe the limits of quantum theory.


Those experiments have already yielded tantalizing but inconclusive results. Earlier this year, for example, researchers showed that the process of photosynthesis—whereby organisms make food using light—may involve some quantum effects. How birds navigate and how we smell also suggest quantum effects may take place in unusual ways within living things. But these only dip a toe into the quantum world. So far, no one has ever managed to coax an entire living organism—not even a single-celled bacterium—into displaying quantum effects such as entanglement or superposition.
So a new paper from a group at the University of Oxford is now raising some eyebrows for its claims of the successful entanglement of bacteria with photons—particles of light. Led by the quantum physicist Chiara Marletto and published in October in the Journal of Physics Communications, the study is an analysis of an experiment conducted in 2016 by David Coles from the University of Sheffield and his colleagues. In that experiment Coles and company sequestered several hundred photosynthetic green sulfur bacteria between two mirrors, progressively shrinking the gap between the mirrors down to a few hundred nanometers—less than the width of a human hair. By bouncing white light between the mirrors, the researchers hoped to cause the photosynthetic molecules within the bacteria to couple—or interact—with the cavity, essentially meaning the bacteria would continuously absorb, emit and reabsorb the bouncing photons. The experiment was successful; up to six bacteria did appear to couple in this manner.
Marletto and her colleagues argue the bacteria did more than just couple with the cavity, though. In their analysis they demonstrate the energy signature produced in the experiment could be consistent with the bacteria’s photosynthetic systems becoming entangled with the light inside the cavity. In essence, it appears certain photons were simultaneously hitting and missing photosynthetic molecules within the bacteria—a hallmark of entanglement. “Our models show that this phenomenon being recorded is a signature of entanglement between light and certain degrees of freedom inside the bacteria,” she says.
According to study co-author Tristan Farrow, also of Oxford, this is the first time such an effect has been glimpsed in a living organism. “It certainly is key to demonstrating that we are some way toward the idea of a ‘Schrödinger’s bacterium,’ if you will,” he says. And it hints at another potential instance of naturally emerging quantum biology: Green sulfur bacteria reside in the deep ocean where the scarcity of life-giving light might even spur quantum-mechanical evolutionary adaptations to boost photosynthesis.
There are many caveats to such controversial claims, however. First and foremost, the evidence for entanglement in this experiment is circumstantial, dependent on how one chooses to interpret the light trickling through and out of the cavity-confined bacteria. Marletto and her colleagues acknowledge a classical model free of quantum effects could also account for the experiment’s results. But, of course, photons are not classical at all—they are quantum. And yet a more realistic “semiclassical” model using Newton’s laws for the bacteria and quantum ones for photons fails to reproduce the actual outcome Coles and his colleagues observed in their laboratory. This hints that quantum effects were at play in both the light and the bacteria. “It’s a little bit indirect, but I think it’s because they’re only trying to be so rigorous in ruling out things and not claiming anything too much,” says James Wootton, a quantum computing researcher at IBM Zurich Research Laboratory who was not involved in either paper.


The other caveat: the energies of the bacteria and the photon were measured collectively, not independently. This, according to Simon Gröblacher of Delft University of Technology in the Netherlands, who was not part of this research, is somewhat of a limitation. “There seems to be something quantum going on,” he says. “But…usually if we demonstrate entanglement, you have to measure the two systems independently” to confirm any quantum correlation between them is genuine.
Despite these uncertainties, for many experts, quantum biology’s transition from theoretical dream to tangible reality is a question of when, not if. In isolation and collectively, molecules outside of biological systems have already exhibited quantum effects in decades’ worth of laboratory experiments, so seeking out these effects for similar molecules inside a bacterium or even our own bodies would seem sensible enough. In humans and other large multicellular organisms, however, such molecular quantum effects should be averaged out to insignificance—but their meaningful manifestation within far smaller bacteria would not be too shocking. “I’m a little torn about how surprising [this finding] is,” Gröblacher says. “But it’s obviously exciting if you can show this in a real biological system.”
Several research groups, including those led by Gröblacher and Farrow, are hoping to take these ideas even further. Gröblacher has designed an experiment that could place a tiny aquatic animal called a tardigrade in superposition—a proposition much more difficult than entangling bacteria with light owing to a tardigrade’s hundreds-fold–larger size. Farrow is looking at ways to improve on the bacterial experiment; in the next year he and his colleagues hope to entangle two bacteria together, rather than independently with light. “The long-term goals are foundational and fundamental,” Farrow says. “This is about understanding the nature of reality, and whether quantum effects have a utility in biological functions. At the root of things, everything is quantum,” he adds, with the big question being whether quantum effects play a role in how living things work.


It might be, for example, that “natural selection has come up with ways for living systems to naturally exploit quantum phenomena,” Marletto notes, such as the aforementioned example of bacteria photosynthesizing in the light-starved deep sea. But getting to the bottom of this requires starting small. The research has steadily been climbing toward macrolevel experiments, with one recent experiment successfully entangling millions of atoms. Proving the molecules that make up living things exhibit meaningful quantum effects—even if for trivial purposes—would be a key next step. By exploring this quantum–classical boundary, scientists could get closer to understanding what it would mean to be macroscopically quantum, if such an idea is true.


Jonathan O’Callaghan is a freelance space and science journalist based in London. You can follow him on Twitter @Astro_Jonny.

Cameras for the Internet of Things will have to be fast, cheap, and powerful—and might not look like cameras at all

By Stacey Higginbotham

Photo-illustration: Stuart Bradford

The rise of computer vision has given us robot chefs and cameras that detect gas flares in fuel production. It’s also led to an increase in connected cameras that are trying to run at the edge of the network.
“Running at the edge” means these cameras are not only communicating wirelessly with the cloud but also communicating with local gateways and working with built-in logic boards to complete a task. The task might be as simple as notifying a manufacturer when a production line produces a defective item or as complex as identifying a person to determine if the system should sound an alarm.
But as we connect more cameras and ask them to perform more complicated tasks, their fundamental architecture is changing. Today we see changes in the silicon that handles image processing and computing. In a few years, we may see our notion of cameras change to meet the needs of digital eyes, not human ones.
There are two challenges driving the silicon shift. First, processing power: Many of these cameras try to identify specific objects by using machine learning. For example, an oil company might want a drone that can identify leaks as it flies over remote oil pipelines. Typically, training these identification models is done in the cloud because of the enormous computing power required. Some of the more ambitious chip providers believe that in a few years, not only will edge-based chips be able to match images using these models, but they will also be able to train models directly on the device.
That’s not happening yet, due to the second challenge that silicon providers face. Comparing images with models requires not just computing power but actual power. Silicon providers are trying to build chips that sip power while still doing their job. Qualcomm has one such chip, called Glance, in its research labs. The chip combines a lens, an image processor, and a Bluetooth radio on a module smaller than a sugar cube.
Glance can manage only three or four simple models, such as identifying a shape as a person, but it can do it using fewer than 2 milliwatts of power. Qualcomm hasn’t commercialized this technology yet, but some of its latest computer-vision chips combine on-chip image processing with an emphasis on reducing power consumption.
But does a camera even need a lens? Researchers at the University of Utah suggest not, having invented a lensless camera that eliminates some of a traditional camera’s hardware and high data rates. Their camera is a photodetector against a pane of plexiglass that takes basic images and converts them into shapes a computer can be trained to recognize.
This won’t work for jobs where high levels of detail are important, but it could provide a cheaper, more power-efficient view of the world for computers fulfilling basic functions. We can also apply this thinking to how we generate image data for computers. Researchers at the University of Washington, for example, have been studying ways to use disruptions in Wi-Fi signals to teach computers how to understand gestures.
A camera doesn’t have to look like a camera anymore. It just needs to match incoming data to a statistical model to tell us what something looks like. If it can do this cheaply and without sucking up too much power, it could change beyond our recognition, even as it becomes more essential.
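The sketch below makes that idea concrete with entirely synthetic data: a vector of raw sensor readings is matched against a trained statistical model rather than rendered as an image. The sixteen “photodetector” channels, the two classes, and the logistic-regression model are assumptions for illustration, not a description of any shipping device.

```python
# Sketch of a "camera" that never forms an image: raw sensor readings are
# matched against a trained statistical model. Data, channel count, and the
# two classes are synthetic and illustrative. Requires NumPy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_sensors = 400, 16     # e.g. 16 photodetector readings per capture

# Fabricate two classes of readings: class 1 ("object present") shifts half the sensors.
labels = rng.integers(0, 2, n_samples)
readings = rng.normal(0.0, 1.0, (n_samples, n_sensors))
readings[labels == 1, : n_sensors // 2] += 1.5

# Train on most of the data, hold out the rest to check the fit.
model = LogisticRegression().fit(readings[:300], labels[:300])
print("Held-out accuracy:", model.score(readings[300:], labels[300:]))

# A deployed edge device would run only this cheap predict step on new readings.
print("Prediction for a new capture:", model.predict(readings[-1:]))
```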
This article appears in the November 2018 print issue as “New Eyes for the IoT.”

Facebook Failed to Police How Its Partners Handled User Data


Sheryl Sandberg, Facebook’s chief operating officer, testified before the Senate Intelligence Committee in September. Ron Wyden, a committee member, has pressed the company on its data privacy protections. Credit: Tom Brenner for The New York Times

Facebook failed to closely monitor device makers after granting them access to the personal data of hundreds of millions of people, according to a previously unreported disclosure to Congress last month.
Facebook’s loose oversight of the partnerships was detected by the company’s government-approved privacy monitor in 2013. But it was never revealed to Facebook users, most of whom had not explicitly given the company permission to share their information. Details of those oversight practices were revealed in a letter Facebook sent last month to Senator Ron Wyden, the Oregon Democrat, a privacy advocate and frequent critic of the social media giant.
In the letter, a copy of which Mr. Wyden provided to The New York Times, Facebook wrote that by early 2013 it had entered into data-sharing agreements with seven device makers to provide what it called the “Facebook experience” — custom-built software, typically, that gave those manufacturers’ customers access to Facebook on their phones. Those partnerships, some of which date to at least 2010, fall under a consent decree with the Federal Trade Commission drafted in 2011 and intended to oversee the company’s privacy practices.
Facebook ultimately entered into dozens of similar data-sharing partnerships, most of which the company began winding down this spring after revelations that it had allowed Cambridge Analytica, a political data firm, to acquire the personal information of tens of millions of people. The firm used some of that information in efforts to aid President Trump’s 2016 campaign.

When a team from PricewaterhouseCoopers conducted the initial F.T.C.-mandated assessment in 2013, it tested Facebook’s partnerships with Microsoft and Research in Motion, maker of the BlackBerry handset. In both cases, PricewaterhouseCoopers found only “limited evidence” that Facebook had monitored or checked its partners’ compliance with its data use policies. That finding was redacted from a public version of PricewaterhouseCoopers’s report released by the F.T.C. in June.
“Facebook claimed that its data-sharing partnerships with smartphone manufacturers were on the up and up,” Mr. Wyden said. “But Facebook’s own, handpicked auditors said the company wasn’t monitoring what smartphone manufacturers did with Americans’ personal information, or making sure these manufacturers were following Facebook’s own policies.” He added, “It’s not good enough to just take the word of Facebook — or any major corporation — that they’re safeguarding our personal information.”
In a statement, a Facebook spokeswoman said, “We take the F.T.C. consent order incredibly seriously and have for years submitted to extensive assessments of our systems.” She added, “We remain strongly committed to the consent order and to protecting people’s information.”
Facebook, like other companies under F.T.C. consent decrees, largely dictates the scope of each assessment. In two subsequent assessments, Facebook’s October letter suggests, the company was graded against a seemingly less stringent standard for its data partners: in those, Facebook had to show that its partners had agreed to its data use policies.
A Wyden aide who reviewed the unredacted assessments said they contained no evidence that Facebook had ever addressed the original problem. The Facebook spokeswoman did not directly address the 2013 test failure, or the company’s apparent decision to change the test in question.

Because the United States has no general consumer privacy law, F.T.C. consent decrees have emerged as the federal government’s chief means of regulating privacy practices at Facebook, Google and other companies that amass huge amounts of personal data about people who use their products. In letters and congressional testimony, F.T.C. officials have pointed to the decrees as evidence of robust consumer privacy protection in the United States.
A spokesman for PricewaterhouseCoopers acknowledged in a statement that Facebook defines the privacy procedures, known as “controls,” that are tested during the assessments.
“Changes to controls may occur as platforms evolve, such that a control tested in one period may not be identical in a subsequent period,” the spokesman said.
Facebook’s letter disclosing the assessors’ findings came in response to questions Mr. Wyden raised during an intelligence hearing in September. The hearing was held just weeks after The Times reported that Facebook had struck data-sharing deals with dozens of phone and tablet manufacturers, including Microsoft, BlackBerry and Amazon.


While the assessment reports were publicly released by the F.T.C. in June, they included significant redactions, which Facebook and PricewaterhouseCoopers said were necessary to protect trade secrets.
Mr. Wyden, whose staff had viewed the full assessments, said at the hearing that he found parts of the unredacted reports “very troubling” and pressed Sheryl Sandberg, Facebook’s chief operating officer, to release them in their entirety.

The Electronic Privacy Information Center, a Washington-based consumer rights group that helped obtain the 2011 consent decree, is currently suing the agency for release of the full assessments, arguing that the public cannot otherwise judge how effectively the F.T.C. is policing privacy violations.
“What is clear is that the F.T.C. has failed to enforce the consent order,” said Marc Rotenberg, the president of the privacy rights group. “And this has come at enormous cost to American consumers.”
The F.T.C. declined to comment.
Facebook’s compliance with the consent decree is the subject of a new F.T.C. investigation opened in the wake of the Cambridge Analytica scandal.
In the letter last month, Facebook’s vice president for United States public policy, Kevin Martin, noted that the assessors’ findings had not caused Facebook to fail PricewaterhouseCoopers’s overall evaluation: The assessors concluded that Facebook was operating “with sufficient effectiveness to provide reasonable assurance” that it was protecting its users’ privacy.
It remains unclear whether Facebook has ever scrutinized how its partner companies handled personal data. A spokeswoman declined to provide any examples of the company’s doing so.
A BlackBerry official, who declined to discuss details of the companies’ data-sharing agreement, said BlackBerry did not think that Facebook had ever audited its data use, but noted that BlackBerry’s business model relies on protecting users’ personal information.

The world’s most ambitious “smart city,” known as Quayside, in Toronto, has faced fierce public criticism since last fall, when the plans to build a neighborhood “from the internet up” were first revealed. Quayside represents a joint effort by the Canadian government agency Waterfront Toronto and Sidewalk Labs, which is owned by Google’s parent company Alphabet Inc., to develop 12 acres of the valuable waterfront just southeast of downtown Toronto.
In keeping with the utopian rhetoric that fuels the development of so much digital infrastructure, Sidewalk Labs has pitched Quayside as the solution to everything from traffic congestion and rising housing prices to environmental pollution. The proposal for Quayside includes a centralized identity management system, through which “each resident accesses public services” such as library cards and health care. An applicant for a position at Sidewalk Labs in Toronto was shocked when he was asked in an interview to imagine how, in a smart city, “voting might be different in the future.”
Other, comparatively quaint plans include driverless cars, “mixed-use” spaces that change according to the market’s demands, heated streets, and “sensor-enabled waste separation.” The eventual aim of Sidewalk Labs’s estimated billion-dollar investment is to bring these innovations to scale — first to more than 800 acres on the city’s eastern waterfront, and then to the world at large. “The genesis of the thinking for Sidewalk Labs came from Google’s founders getting excited thinking of ‘all the things you could do if someone would just give us a city and put us in charge,’” explained Eric Schmidt, Google’s former executive chair, when Quayside was first announced.
From the start, activists, technology researchers, and some government officials have been skeptical about the idea of putting Google, or one of its sister companies, in charge of a city. Their suspicions about turning part of Toronto into a corporate test bed were triggered, at first, by the company’s history of unethical corporate practices and surreptitious data collection. They have since been borne out by Quayside’s secret and undemocratic development process, which has been plagued by a lack of public input — what one critic has called “a colonizing experiment in surveillance capitalism attempting to bulldoze important urban, civic and political issues.” In recent months, a series of prominent resignations from advisory board members, along with organized resistance from concerned residents, have added to the growing public backlash against the project.
A few weeks ago, Ann Cavoukian, one of Canada’s leading privacy experts and Ontario’s former privacy commissioner, became the latest stakeholder to resign from the project. Cavoukian was brought on by Sidewalk Toronto (as the collaboration between Waterfront Toronto and Google-sibling Sidewalk Labs is known) as a consultant to help institute a proactive, “privacy by design” framework. She was initially told that all data collected from residents would be deleted and rendered unidentifiable. Cavoukian learned last month, however, that third parties would be able to access identifiable information gathered at Quayside. “I imagined us creating a Smart City of Privacy, as opposed to a Smart City of Surveillance,” Cavoukian wrote in her resignation letter. Her concerns echoed those of residents who have long pointed to the privacy implications of handing over streets to the world’s most profitable data hoover.
In response to questions from The Intercept about Cavoukian’s resignation, a spokesperson for Sidewalk Labs said, “Sidewalk Labs has committed to implement, as a company, the principles of Privacy by Design. Though that question is settled, the question of whether other companies involved in the Quayside project would be required to do so is unlikely to be worked out soon, and may be out of Sidewalk Labs’ hands.”
Now, in an effort to get ahead of Quayside’s development before it’s too late, a coalition of experts and residents has launched the Toronto Open Smart Cities Forum. The group represents the latest and largest effort by Torontonians to start having the kinds of public conversations, teach-ins, and debates that should have “taken place last year, when this project was first announced,” according to Bianca Wylie, co-founder of Tech Reset Canada and one of the lead organizers of the opposition to Sidewalk Toronto. “The process Sidewalk Toronto has started has been so anti-democratic that the only way to participate is to be proactive in framing the topic,” Wylie continued.
The Toronto Open Smart Cities Forum is taking the lead in the local fight against the commodification of the city’s data. The group’s struggle is one that urban residents around the world have been watching closely. Even those who never set foot in Canada may soon be subject to the products, norms, and techniques produced by Sidewalk Toronto, simply by virtue of using Google’s earth-spanning services. “This isn’t just about data being sold,” Wylie said. “It’s also about how is this data being used with other kinds of data in other products. You can move a lot of information around within Alphabet without having to sell it, and we need to talk about that.” Whether Toronto can rein in the Google affiliate, in other words, has ramifications not just for Canadians, but also for the future of who controls our civic life.

Conceptual image of Sidewalk Toronto.
Image: Sidewalk Toronto

A City of Surveillance
Sidewalk Toronto’s ongoing controversies may serve as the latest warning sign for cities that are considering signing over public spaces to major tech companies. Cavoukian’s decision to quit represents only the most recent departure in what Wylie has referred to as an “ongoing bulldozing of stakeholders.” In addition to Cavoukian, a Waterfront Toronto board member and two Waterfront Toronto digital advisers have also resigned in the last five months. Three more digital advisers have threatened to resign unless major changes are made to the project’s planning process.
In anticipation of the negative press, Sidewalk Labs has allocated $11 million of its initial $50 million budget to “communications/engagement/public relations.” This includes a strategy of cultivating influencers “to ensure support for the Master Innovation and Development Plan among key constituents in Toronto.” Last week, iPolitics reported that Sidewalk Labs has begun lobbying at least 19 federal departments, including the prime minister’s office, Environment and Climate Change Canada, the Public Health Agency of Canada, and the Treasury Board. The meetings all took place days after the resignation of Cavoukian, the former Ontario privacy commissioner.
But so far, the project has been losing allies more quickly than it’s been making them. When Saadia Muzaffar, a prominent technologist and the founder of TechGirls Canada, resigned from Waterfront Toronto’s Digital Strategy Advisory Panel in October, it was due in part to the partnership’s “blatant disregard for resident concerns about data and digital infrastructure.” In her viral letter of resignation, Muzaffar criticized Sidewalk Toronto’s dishonest negotiations process: “There is nothing innovative about city-building that disenfranchises its residents in insidious ways and robs valuable earnings out of public budgets, or commits scarce public funds to the ongoing maintenance of technology that city leadership has not even declared a need for.”
If Google’s other global projects are any indication, Sidewalk Lab’s venture in Canada may hew closely to the Silicon Valley model of offering free services in exchange for the right to virtually limitless data collection. Sidewalk Labs-associated LinkNYC and InLinkUK kiosks have already been installed in New York and London. The kiosks — which include three cameras, 30 sensors, and Bluetooth beacons — aggregate anonymized data for advertising purposes in exchange for providing passersby with free Wi-Fi services.
Given that there is no genuine way to opt out of public space, Torontonians have been asking questions about what meaningful consent would look like. In the case of Quayside, the terms of any agreement wouldn’t just cover Wi-Fi but could also extend to basic government services. Julie Di Lorenzo, a real estate developer who left Waterfront Toronto’s board in July, explained to the AP that questions she had asked about residents who might not consent to share data had gone unanswered. She wanted to know if those who didn’t opt in to the city would be told that they couldn’t live there. “It’s one thing to willingly install Alexa in your home,” wrote Toronto journalist Brian Barth. “It’s another when publicly owned infrastructure — streets, bridges, parks and plazas — is Alexa, so to speak.”
Adding to these concerns is the fact that Sidewalk Labs has asked potential local consultants to hand over all of their intellectual property, according to a recent Globe and Mail investigation. As Jim Balsillie, the former co-CEO of BlackBerry maker Research In Motion, recently pointed out in an op-ed, Waterfront Toronto has left the ownership of intellectual property and data unresolved in its latest agreement; this means that it would default to Sidewalk Labs, giving the company a gross market advantage. Indeed, in an announcement last year, Schmidt went as far as to thank Canadian taxpayers for creating some of Alphabet’s key artificial intelligence technology, the intellectual property of which the company now owns. Balsillie noted that what happens in Toronto will “have profound and permanent impacts on the digital rights and prosperity of all Canadians because IP [intellectual property] and data — our century’s most valuable extractive resources — spread seamlessly.” This is why current and former stakeholders in Waterfront Toronto have called for the public to receive financial benefits from the project, emphasizing that Canada’s largest city should not simply be seen as a U.S. company’s urban laboratory.
The Sidewalk Labs spokesperson said that the company’s “relationship with its contractors does not impact its agreements with Waterfront Toronto in any way, including its commitment to the process laid out in the PDA, which says that in the future Waterfront Toronto may have rights to certain Sidewalk Labs IP. Of course, if Sidewalk Labs does not own the IP created by the planning process, it would not have the power to share or convey that IP to Waterfront Toronto or anyone else.”

Yet until recently, Sidewalk Labs refused to say who will own data produced by Quayside’s visitors, workers, and residents in what it calls “the most measurable community in the world.” Nor had the company clarified, despite facing pointed questions at public town hall-style meetings, whether or how the information streaming in from sensors in park benches, traffic lights, and dumpsters would be monetized. (The writer Evgeny Morozov has summed up Google’s strategy as “Now everything is permitted – unless somebody complains.”)
In an apparent response to the mounting public pressure against the project, Sidewalk Labs recently released its first proposal for the digital governance of its collected data. Most significant among these plans was the suggestion that all data be placed in a “civic data trust.” On the company’s blog, Alyssa Harvey Dawson, Sidewalk Labs’ Head of Data Governance, explained that with the proposed creation of a civic data trust, no one would have the “right to own information collected from Quayside’s physical environment — including Sidewalk Labs.” This would represent, she wrote, “a new standard for responsible data use that protects personal privacy and the public interest while enabling companies, researchers, innovators, governments, and civic organizations to improve urban life using urban data.”
According to experts who have been following the project closely, the details of how this trust might be implemented are vague and at times contradictory. On one hand, the proposal states that Sidewalk Labs would get no preferential access to any data that is collected. On the other, as Sean McDonald points out, “the proposed trust would grant licenses to collect and use data — and the more sensitive the data, the more proprietary it would be.” There is also the question of just how anonymous certain data would be, and whether such anonymity would be reversible when it came to sharing information with law enforcement. Some residents are opposed to Sidewalk Labs having any involvement with this data proposal. “It is as if Uber were to propose regulations on ride-sharing, or Airbnb were to tell city council how to govern short-term rentals. By definition, there is a conflict of interest,” writes Nabeel Ahmed, a smart city expert and member of the Toronto Open Smart Cities Forum.
Part of the mission of the new Toronto Open Smart Cities Forum is to shift the public conversation away from debating the latest minutiae of the company’s proposed terms and toward a broader consideration of whether the project should move forward under any terms at all. This conversation, Wylie emphasizes, should be taking place between residents and the government; Sidewalk Labs should not be the only voice setting the terms and advancing the agenda. “We need to state clearly and unambiguously that this infrastructure is public,” Wylie said. “You can say in March, ‘This data isn’t being collected,’ but then in July, it’s updated to do something else. This infrastructure creates plausible surveillance so long as you always keep the door open to what’s possible.”

It was a particularly chilly cold case. At 1 am on November 18, 2010, officers from the Los Angeles Police Department responded to reports of gunfire in a leafy cul-de-sac near Universal Studios. They found Jong Kim lying in front of his home, shot at least five times. Kim, a 50-year-old liquor store owner, later died in a hospital without regaining consciousness.
The most promising clues were grainy surveillance images that showed a two-tone Honda Prelude with a sunroof and fancy rims, but no visible license plate. Paperwork revealed a veritable haystack of vehicles—5,000 Preludes registered in Los Angeles County alone.
Despite a $50,000 reward for information, the investigation stalled for more than a year. Then in May 2012, detectives turned to a new Automatic License Plate Reader system. ALPRs use digital cameras attached to buildings, street lights, and patrol cars to snap photos of passing cars. Computer-vision technology can determine the make and model of the car, and “read” the license plate—turning public streets into massive databases of almost every car on the road.
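As a rough illustration of that read step (this is not the LAPD’s or any ALPR vendor’s actual software), a minimal sketch might locate a plate-shaped region in a still image and pass it to an OCR engine. It assumes the OpenCV and pytesseract libraries and a hypothetical image file:

```python
# Illustrative sketch of a generic ALPR "read" step; not the LAPD's or any vendor's
# actual software. Assumes OpenCV and the Tesseract OCR engine via pytesseract,
# and a hypothetical image file of a single passing car.
import cv2
import pytesseract

def read_plate(image_path: str) -> str:
    """Find the largest plate-shaped region in an image and OCR its characters."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    plate_crop = None
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0:      # plates are wide, shallow rectangles
            plate_crop = gray[y:y + h, x:x + w]
            break
    if plate_crop is None:
        return ""

    # Single line of capital letters and digits only.
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(plate_crop, config=config).strip()

print(read_plate("camera_frame_0481.jpg"))   # hypothetical capture
```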
After looking at hundreds of photos, detectives focused on cars with the suspect Prelude’s modifications. One stood out. Although it was painted a different color in 2012, officers searched through the ALPR database and confirmed that in 2010 the car matched the surveillance video. It was enough to identify a suspect, who in 2015 was convicted of Kim’s murder and jailed for 50 years.
The analysis was made possible by software provided by Palantir, Peter Thiel’s shadowy intelligence startup. The LAPD was one of Palantir’s first local law enforcement customers, after it had cut its teeth on Pentagon, CIA, and NSA contracts. Since signing with Palantir in 2009, the LAPD has spent more than $20 million on its software and hardware. Documents obtained through a public-records request suggest at least $5.8 million of that went to ALPR technologies.
Hundreds of Requests a Day
The LAPD has never divulged how many ALPR searches it conducts. But an email obtained by WIRED through the public record request says its police officers tapped the system 200 to 300 times a day in 2016. Los Angeles County sheriffs performed a similar number of searches through Palantir, according to the email. Police in Long Beach, the city south of LA, made an additional 30 searches a day. Together, the three departments make hundreds of thousands of searches a year.
It’s hard to put that number in perspective, because there are few statistics on police use of license plate readers. However, Peter Bibring, director of police practices for the American Civil Liberties Union of California, considers the total “enormous.” He says it shows that it is now standard practice for officers to use ALPR “as a surveillance tool” even when “officers have no reason to believe a driver was involved in any criminal activity.”

Using Palantir’s software, LAPD officers can see everywhere a car has been photographed in a given time period.

Palantir/LAPD

The LAPD has not said how many ALPR cameras it owns or has access to, although documents show that it shares images with cameras from the Los Angeles Sheriff’s Department, Long Beach, Glendale, and Burbank police, as well as with Burbank, LAX, and Van Nuys airports. It has also used images from civilian ALPR cameras, typically deployed in shopping malls, universities, and transit centers for security.
ALPR systems are now widely used by law enforcement in the US and around the world. Surveys collected by the federal Bureau of Justice Statistics in 2013 found that 93 percent of police departments in cities with 1 million or more people used ALPR. Even half the departments serving towns as small as 25,000 people have ALPR. One ALPR company, Vigilant Solutions, claims that it carries out over a billion license plate scans each year.
Such systems traditionally have been used by law enforcement to help locate stolen vehicles. But when ALPR data is stored for years and then integrated with analytical systems like Palantir, it becomes much more powerful—and potentially open to abuse.
According to a Palantir user guide for the LAPD disclosed to WIRED, detectives start by logging in with their police credentials. When the system launched in 2009, officers had to type in why they were searching. Many would just enter “investigation” or the numeric code for a crime, like “459” for burglary, according to a 2014 email to users from John Gaw, a sheriff’s department sergeant responsible for working with Palantir.
Gaw warned Palantir users to better justify requests for ALPR searches, lest the department attract more attention from civil liberties groups. “[We] are facing potential limitations on the use … of the ALPR database by the ACLU,” he wrote. “We may be challenged to … show we are … making sure every user has a valid reason.”
Officers misusing law enforcement databases for their own purposes is a perennial problem at the LAPD and elsewhere. Some officers weren’t happy about Gaw’s team auditing ALPR requests and questioning vague rationales. “Unfortunately they have been contacting every user that puts in an unacceptable search reason,” wrote another LASD sergeant, Peter Jackson.
Searching by Plate Number, or by Area
In its latest version that went live in May 2016, called ALPR 2.0 or TBird, Palantir’s ALPR system requires users to enter a specific case number, and to select a purpose for each search via a pulldown menu. However, some of the reasons are vague, such as “protect critical infrastructure” or “protect the public during events.”
“The broad language for categorizing searches suggests that no indication of criminality is required to make a query,” says the ACLU’s Bibring. “If that’s the case, it’s basically the Wild West for when officers can query millions of data points.” Neither Palantir nor the LAPD responded to multiple requests for comment on this story.
Once in the system, detectives have several options. If they have a full or partial plate number, they can search using that. The system will then display matching plates, and a map view showing every time each plate was captured by the ALPR system.
A Timeline button brings up a chart showing how many times a plate has been searched, while a Freq Analysis feature displays a table showing those hits by time of day, and day of the week. These can help detectives spot patterns, such as where a vehicle’s driver might live or work.
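To make the idea concrete, here is a minimal sketch of such a frequency analysis. It is not Palantir’s code; the CSV file, column names, and plate number are hypothetical:

```python
# Minimal sketch of a frequency analysis over plate sightings; not Palantir's code.
# Assumes a hypothetical CSV with columns: plate, timestamp, lat, lon.
import pandas as pd

sightings = pd.read_csv("alpr_reads.csv", parse_dates=["timestamp"])
one_plate = sightings[sightings["plate"] == "7ABC123"]        # hypothetical plate

# Tabulate captures by day of week and hour of day; dense clusters hint at where
# the vehicle's driver might live or work.
pattern = (
    one_plate
    .groupby([one_plate["timestamp"].dt.day_name(), one_plate["timestamp"].dt.hour])
    .size()
    .unstack(fill_value=0)
)
print(pattern)   # rows = weekday, columns = hour of day
```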
Palantir’s TBird also works in reverse. Users can draw a circle on a map, then ask for a list of all the license plates spotted in that area at a given time. Sarah Brayne, a sociologist at the University of Texas at Austin who studied data surveillance at the LAPD for several years, detailed how this helped solve the murder of someone dumped at a remote tourist location. A nearby ALPR camera captured three cars around the time the body was dumped. The registered owner of one of the cars was affiliated with a gang then at war with the victim’s gang. Using that information, officers obtained a search warrant, examined the car for evidence, and arrested a suspect.
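A reverse, area-based query of that kind reduces to a geofence filter over a table of sightings. The sketch below is illustrative only, reusing the hypothetical columns from the previous example rather than TBird’s actual implementation:

```python
# Illustrative geofence query: every plate captured within a radius of a point
# during a time window. Uses the same hypothetical columns as the previous sketch.
from math import radians, sin, cos, asin, sqrt
import pandas as pd

sightings = pd.read_csv("alpr_reads.csv", parse_dates=["timestamp"])

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def plates_near(reads, center, radius_km, start, end):
    """Return the set of plates seen inside the circle during [start, end)."""
    hits = reads[(reads["timestamp"] >= start) & (reads["timestamp"] < end)]
    if hits.empty:
        return set()
    inside = hits.apply(
        lambda r: haversine_km(center[0], center[1], r["lat"], r["lon"]) <= radius_km,
        axis=1,
    )
    return set(hits.loc[inside, "plate"])

# e.g. every plate within 1 km of a dump site during a two-hour window
plates_near(sightings, (34.0522, -118.2437), 1.0, "2010-11-18 00:00", "2010-11-18 02:00")
```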
Not every police department with ALPR can use the technology as a time machine. Sixteen states and numerous cities have legislation regulating use of ALPR. Some limit how long data can be retained—from three years in the case of Colorado, to just 21 days in Maine. Los Angeles County keeps ALPR data for at least five years, whereas the California Highway Patrol can only hold it for 60 days. In the absence of regulations, several private companies specify that they will retain ALPR data “as long as it has commercial value.”
Palantir’s TBird can also be set to automatically track cars spotted in the greater Los Angeles area, sending users an alert whenever they are captured by an ALPR device.
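In its simplest form, that kind of automatic tracking amounts to checking each incoming read against a watchlist. A toy sketch, with hypothetical field names:

```python
# Toy sketch of a watchlist alert over an incoming stream of ALPR reads;
# illustrative only, with hypothetical plate and field names.
WATCHLIST = {"7ABC123", "8XYZ987"}   # plates flagged for automatic tracking

def on_new_read(read: dict) -> None:
    """Called once per capture; 'read' carries plate, timestamp, and camera_id keys."""
    if read["plate"] in WATCHLIST:
        print(f"ALERT: {read['plate']} captured by camera {read['camera_id']} "
              f"at {read['timestamp']}")

on_new_read({"plate": "7ABC123", "timestamp": "2016-05-02 14:03", "camera_id": "LB-17"})
```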

Officers also can see details of each sighting.

Palantir/LAPD

The powerful tool quickly became popular among LAPD detectives. “Due to a lot of hard work and promoting by training, Palantir has gained great traction,” wrote one of the company’s instructors at the LAPD in 2016. “Former detractors have come to now believe in the capabilities of Palantir.”
The same year, TBird got an upgrade, powered by artificial intelligence. Previously, the system could only identify license plate letters and numbers. Now it could also use machine learning to recognize the color, make, and style of vehicles photographed by ALPR cameras, as well as accessories like spare tires. Palantir used automobile image classification software from a Los Angeles-based startup called Intrinsics.
“The problem it’s trying to solve is the case where an eyewitness sees a vehicle leave the scene of a crime and all they get is a visual description, like it was a white Ford pickup,” says Intrinsics cofounder and CTO Eric Cheng. “[Our algorithm] allows investigators to sift through the database to find images that match that description.”
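A generic way to sketch that “query by description” step is to run ALPR crops through an image classifier fine-tuned on vehicle attributes. The example below is not Intrinsics’ software; the checkpoint file and label set are hypothetical, and torchvision stands in for whatever model the real system uses:

```python
# Generic sketch of querying ALPR crops by an eyewitness description; this is not
# Intrinsics' software. Assumes a torchvision classifier fine-tuned on vehicle
# attributes, with a hypothetical checkpoint file and label set.
import torch
from PIL import Image
from torchvision import models, transforms

LABELS = ["white_ford_pickup", "black_sedan", "red_suv"]     # hypothetical classes

model = models.resnet18(num_classes=len(LABELS))
model.load_state_dict(torch.load("vehicle_attrs.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def matches_description(image_path: str, description: str) -> bool:
    """True if the classifier's top label for this crop matches the description."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        pred = model(x).argmax(dim=1).item()
    return LABELS[pred] == description

# Sift a folder of captures for vehicles that look like a white Ford pickup.
matches_description("capture_0042.jpg", "white_ford_pickup")
```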
The ACLU’s Bibring says AI technology can be unreliable, and that agencies that use such systems should be “very clear to the public and to their oversight bodies what kind of surveillance they’re using, and what policies they have in place to protect against abuse.”
In its early days in Los Angeles, Palantir’s software was slow and glitchy, and some of its advanced features did not always work well, the records show. When the system failed, officers could complain to Palantir engineers embedded in the department. In 2013, a detective asked LAPD’s Palantir contact to try to identify a Chevrolet Impala used as a getaway car. “I ran the locations and times and came back mostly with patrol cars,” replied the Palantir engineer. “Clearly we need to step up our vehicle description … game to surpass [rival IBM software] CopLink.”
Emails from 2016 and earlier show LAPD officers generally liked Palantir’s ALPR system, but they complained about persistent slowness, maps not displaying properly, and connection problems. Palantir’s solution: LAPD should buy more technology “cores” (hardware and software packages), including two dedicated to the ALPR system—at a total cost of $280,000.
Records show that many ALPR systems, including some used by the LAPD, LASD, and Burbank police, as well as by LAX and Burbank airports, have relied on software from 3M called BOSS. Data from the 3M systems were then integrated into Palantir. The Electronic Frontier Foundation previously found that more than 100 cameras using 3M’s software were freely accessible online, including live video feeds.
Palantir’s software also has on occasion lost important data, the records show. In 2014, an officer complained that a car he had been tracking for several years had disappeared from the ALPR system. More worrying, perhaps, Palantir has misplaced hundreds of reported crimes. In 2015, an LAPD officer working on an annual crime analysis noted: “When I run all aggravated assaults for [my district] for 2012, Palantir only returns four for the year. In 2012, we had over 390 aggravated assaults.” That time, Palantir had simply misclassified the crimes; on another occasion in 2015, 327 violent crimes disappeared from another district’s records. It’s not known whether these problems were resolved.
Critics of ALPR say that the technology expands the net of Big Data surveillance. “Even though ALPRs are dragnet surveillance tools that collect information on everyone, rather than merely those under suspicion, the likelihood of being inputted into the system is not randomly distributed,” writes Brayne of the University of Texas. “Minority individuals and individuals in poor neighborhoods have a higher probability of being in the … surveillance net than do people in neighborhoods where the police are not conducting data-intensive forms of policing.”
LAPD remains committed to ALPR and to Palantir. In 2016, it signed a $2.9 million contract to upgrade TBird to ALPR 2.1, with unspecified new features. Palantir is reported to be considering a public offering next year, at a valuation above $35 billion.


Why it matters: The hype was just that. The odds of a company turning blockchain “headlines into reality” are slim, as Forrester Research predicts.

The prospect of incorporating blockchain technology or cryptocurrency into businesses excited investors and drove up share prices temporarily — just look at Kodak, beverage company Long Blockchain, or Hooters franchisee Chanticleer Holdings — so it’s no wonder executives wanted shareholders to know that they too might get in on the new technologies.
At the peak earlier this year, “blockchain” was mentioned 173 times, according to an analysis of company transcripts by Axios. The number has since fallen as much as 80%. Bitcoin was never as popular. Dropping that word or “cryptocurrency” was most common in the first quarter of this year — with a mere 68 mentions.
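For a sense of how such a tally is produced (this is not Axios’s actual methodology, and the transcript folder is an assumption), counting keyword mentions across plain-text transcripts takes only a few lines:

```python
# Minimal illustration of tallying keyword mentions across earnings-call transcripts;
# not Axios's actual methodology. Assumes plain-text transcripts in ./transcripts.
import re
from collections import Counter
from pathlib import Path

KEYWORDS = ["blockchain", "bitcoin", "cryptocurrency"]
counts = Counter()

for path in Path("transcripts").glob("*.txt"):
    text = path.read_text(encoding="utf-8").lower()
    for word in KEYWORDS:
        counts[word] += len(re.findall(rf"\b{word}\b", text))

print(counts)   # keyword -> total mentions across all transcripts
```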
Bitcoin can’t exist without blockchain, but one is clearly less controversial than the other. If you buy the way IBM sells it, the benefits of blockchain in business include “reduced time,” “decreased costs” and “alleviated risk.”
Cryptocurrency, meanwhile, has loud critics plus a reputation for volatile trading.
Two corporate examples:
IBM, which is responsible for over 70 mentions of “blockchain” in the first quarter of 2017, is throwing a lot of cash at the technology, with 1,500 blockchain-specific staff. It even recruited Walmart.
Then there’s technology consulting company DXC Technology. Executives there dropped “blockchain” five times during a May earnings call, without offering any concrete plans for investment. The company hasn’t mentioned it in the two earnings calls it has held since then, and did not respond to a request for comment.
Yes, but: That doesn’t mean every company that bought into the blockchain or bitcoin hype has failed to follow through on its announcements.

And if they can, what are the implications for the future of creativity?

By Artist #1 (see bottom of story for key). Credit: Artwork commissioned by GumGum


As AI becomes an unstoppable force, it raises some difficult questions about the future role of humans in an increasingly automated world. Initial studies are showing that we can add the most value by focusing on four key areas: critical thinking, problem solving, managing human interactions, and above all else, expressing creativity. In short, our future role involves embracing these last bastions of human exclusivity and becoming more “human.”
But just last month, AI-generated art arrived on the world auction stage under the auspices of Christie’s, proving that artificial intelligence can not only be creative but also produce world class works of art—another profound AI milestone blurring the line between human and machine.
Naturally, the news sparked debates about whether the work produced by Paris-based art collective Obvious could really be called art at all. Popular opinion among creatives is that art is a process by which human beings express some idea or emotion, filter it through personal experience and set it against a broader cultural context—suggesting, then, that what AI generates at the behest of computer scientists is not art, nor creative at all.


By Artist #2 (see bottom of story for key). Credit: Artwork commissioned by GumGum

The story raised additional questions about ownership. In this circumstance, who can really be named as author? The algorithm itself or the team behind it? Given that AI is taught and programmed by humans, has the human creative process really been identically replicated or are we still the ultimate masters?
AI VERSUS HUMAN
At GumGum, an AI company that focuses on computer vision, we wanted to explore the intersection of AI and art by devising a Turing Test of our own in association with Rutgers University’s Art and Artificial Intelligence Lab and Cloudpainter, an artificially intelligent painting robot. We were keen to see whether AI can, in fact, replicate the intent and imagination of traditional artists, and we wanted to explore the potential impact of AI on the creative sector.

By Artist #3 (see bottom of story for key). Credit: Artwork commissioned by GumGum

To do this, we enlisted a diverse group of artists, ranging from “traditional” paint-on-canvas artists to 3-D rendering and modeling artists, alongside Pindar Van Arman—a classically trained artist who has been coding art robots for 15 years. Van Arman was tasked with using his Cloudpainter machine to create pieces of art based on the same data set as the more traditional artists. This data set was a collection of art by 20th century American Abstract Expressionists. Then, we asked them to document the process, showing us their preferred tools and telling us how they arrived at their final work.

By Artist #4 (see bottom of story for key). Credit: Artwork commissioned by GumGum

Intriguingly, while at face value the AI artwork was indistinguishable from that of the more traditional artists, the test highlighted that the creative spark and ultimate agency behind creating a work of art are still very much human. Even though the Cloudpainter machine has evolved over time to become a highly intelligent system capable of making creative decisions of its own accord, the final piece of work could only be described as a collaboration between human and machine. Van Arman served as more of an “art director” for the painting. Although Cloudpainter made all of the aesthetic decisions independently, the machine was given parameters to meet and was programmed to refine its results in order to deliver the desired outcome. This was not too dissimilar to the process used by Obvious and their GAN AI tool.


By Artist #5 (see bottom of story for key). Credit: Artwork commissioned by GumGum

Moreover, until AI can be programmed to absorb inspiration, crave communication and want to express something in a creative way, the work it creates on its own simply cannot be considered art without the intention of its human masters. Creatives working with AI find the process to be more about negotiation than experimentation. It’s clear that even in the creative field, sophisticated technologies can be used to enhance our capabilities—but crucially they still require human intelligence to define the overarching rules and steer the way.
THERE’S AN ACTIVE ROLE BETWEEN ART AND VIEWER
How traditional art purveyors react to AI art on the world stage is yet to be seen, but in the words of Leandro Castelao—one of the artists we enlisted for the study—“there’s an active role between the piece of art and the viewer. In the end, the viewer is the co-creator, transforming, re-creating and changing.” This is a crucial point; when it’s difficult to tell AI art apart from human art, the old adage that beauty is in the eye of the beholder rings particularly true.


By Artist #6 (see bottom of story for key). Credit: Artwork commissioned by GumGum

But as it stands, it’s almost impossible to accurately define and translate “creativity” into a clearly structured set of rules; inspiration is not something that can be easily programmed into a machine at this stage. As part of the study we spoke to Max Fresn, chief creative at Born AI, who made the point that research has tended not to look deeply at why we as humans are creative. In this sense, he believes it’s difficult to program AI with reason or intent to create art.
Instead of worrying about AI’s threat to human creative supremacy, we should embrace new technologies and the possibilities they bring for enhancing the creative process. It’s better to think of AI as your next creative partner; beautiful pieces of work can be produced in collaboration with it.

The views expressed are those of the author(s) and are not necessarily those of Scientific American.

ABOUT THE AUTHOR(S)

Ken Weiner
Ken Weiner is chief technology officer at GumGum where he leads the engineering and product teams. Weiner is a guest columnist for VentureBeat and Forbes, a frequent speaker at conferences and a member of industry groups such as IAB’s OpenRTB Working Group and various LA Ad Tech Meetups.

In early tests, this laser-activated silk and gold material held wounds together better than stitches or glue

By Megan Scudellari


On Star Trek: The Next Generation, Commander Riker had an impressive ability to receive head wounds. Luckily for him, Dr. Crusher could whip out the “dermal regenerator,” a handheld sci-fi tool that healed skin wounds with a colorful laser.
Luckily for us, Kaushal Rege and colleagues at Arizona State University are developing essentially the same thing. Well, close enough. In a new paper in the journal Advanced Functional Materials, the engineers successfully repaired animal wounds with a silk and gold nanomaterial activated by a laser.
In this proof-of-concept study, the technology quickly sealed soft-tissue wounds in pig intestines and on mice skin. In the pig intestines, for example, the seal proved to be roughly seven times stronger than traditional sutures.
Conventional sutures, staples, and glue often cause problems when sealing wounds, such as leaks at the repair site and slow tissue recovery. “We’re trying to seal incisions faster and heal them at an earlier point of time,” says Deepanjan Ghosh, a PhD student in Rege’s lab and co-author on the paper.

Photos: Russell Urie/Advanced Functional Materials

This comparison shows the effects on a wound of conventional suturing, skin glue, and laser sealing at 0 and 2 days after injury.

To use a laser to seal skin, one must focus the heat of the light using some sort of photoconverter. Rege’s lab opted for gold nanorods and embedded them in a silk protein matrix purified from silkworm cocoons. A silk protein called fibroin binds to collagen, the structural protein that holds together human skin cells. When near-infrared light hits the gold nanorods, they produce heat and activate the silk and skin to create bonds, forming a sturdy seal.
The near-infrared laser operates at a wavelength of about 800 nanometers, which heats the gold nanorods without damaging the surrounding skin.
The engineers created two disc-shaped sealants: one for wet environments that does not dissolve in water and one for dry environments that does. The first was used to repair samples of pig intestine. When the team pumped colored liquid through pieces of repaired intestine, the laser-activated sealant was seven times better than traditional sutures or glue at preventing liquid from escaping. In fact, the laser-repaired intestines performed just as well as normal, undamaged intestines, according to Ghosh.
Next, the group tested the water-dissolving sealant on mouse skin. When applied in a paste to a one-centimeter incision in the skin, the team’s treatment resulted in significantly higher skin strength two days after surgery compared to stitches or skin glue. Plus, it was fast—the treatment required only about four minutes under the laser.
Because near-infrared light can penetrate fairly deeply into tissue, Ghosh and colleagues hope to use the technology to eventually repair things like blood vessels and nerves—tissues that are often deep in the body and time-consuming to repair. “Suturing takes a lot of effort given the dimensions of a nerve or a blood vessel, even for fully trained surgeons,” says Ghosh.
Ghosh expects the cost of the silk-gold material will not be prohibitively expensive, and the lasers would be a one-time equipment cost for medical centers.
They are currently watching how the laser-activated seals hold up in living rats. If that goes well, they’ll move to pigs, and perhaps eventually, humans.
Dr. Crusher would be so proud.