What's up in emerging technology

The Olympics has been hit by new and destructive malware

The news: Following a spate of hacks, Cisco security researchers have announced the discovery of malware at the Olympics that’s designed purely for destruction: it deletes backups and boot files to brick computers and servers.

The damage so far: The Guardian reports that the malware briefly took down the Pyeongchang Olympics website, shut down Wi-Fi networks, and grounded drones. It could well strike again.

Who’s behind it: So far, that’s unclear. Researchers at CrowdStrike suggest Russia; those at Intezer say China. Whoever it is evidently considered the attacks worth the time required to build new tools.

US Special Counsel Robert Mueller has charged 13 Russians and three organizations, including the Internet Research Agency, with alleged interference in the 2016 presidential election.

Misinformation Inc.: The meddling was widely known, but the indictment provides new insights into how it worked. Russians visited the US in 2014 to conduct research and then built a sophisticated operation that included sizable departments handling search optimization, data analytics, and IT. One project had 80 people working on it.

Purple gain: The Russians concentrated on influencing opinion in so-called “purple states,” such as Colorado, Virginia, and Florida, where the electoral gap between Republicans and Democrats was slim.

Virtual Americans: To hide their origins, the Russians rented space on servers based in the US and set up a virtual private network so that it looked like messages were coming from within the country.

That’s not all folks: Mark Weatherford, a former senior official at the Department of Homeland Security, says it’s pretty rare for the US to indict foreign nationals for information warfare. But he thinks we’ll see more such cases in the future as technological advances make it easier to work out who’s behind online propaganda efforts.

Why? The study shows that automation will disproportionately hit middle- and low-income jobs. Its benefits—such as productivity gains and new investment—accrue mainly to highly skilled workers and large companies. (This “winner-take-all” effect has been known for a while.)

The more you earn, the more you save: As more income shifts to top earners, who save more of it, less of that money goes back into the economy.

What this means for growth: The savings will be funneled into investment, which will temporarily boost economic growth. But as the report says, “This growth isn’t based on effective demand and actually creates a misleading signal about how sustainable it is.” Eventually demand-led growth stops altogether, or even reverses, leading to “deeply unbalanced economies.”
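The mechanism here is simple arithmetic, which a toy model makes concrete. The numbers below are illustrative inventions, not figures from the report: shift a larger share of a fixed income pool toward a group with a higher savings rate, and aggregate consumption demand falls even as total savings rise.

```python
# Toy illustration (invented numbers, not from the report): when income
# shifts toward high savers, consumption demand falls while savings rise.

def demand_and_savings(income_shares, savings_rates, total_income=100.0):
    """Return (consumption demand, savings) for groups defined by their
    share of total income and their propensity to save."""
    savings = sum(share * rate * total_income
                  for share, rate in zip(income_shares, savings_rates))
    consumption = total_income - savings
    return consumption, savings

# Two groups: top earners (save 40%) and everyone else (save 5%).
rates = [0.40, 0.05]

before = demand_and_savings([0.30, 0.70], rates)  # top earners get 30%
after = demand_and_savings([0.50, 0.50], rates)   # income shifts to the top

print(before)  # (84.5, 15.5)
print(after)   # (77.5, 22.5): demand down, savings up
```

The extra savings don’t vanish; they go into investment, which is exactly the temporary, demand-free growth signal the report warns about.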

Editor's Pick

“We’re in a diversity crisis”: cofounder of Black in AI on what’s poisoning algorithms in our lives

Artificial intelligence is an increasingly seamless part of our everyday lives, present in everything from web searches to social media to home assistants like Alexa. But what do we do if this massively important technology is unintentionally, but fundamentally, biased? And what do we do if this massively important field includes almost no black researchers? Timnit Gebru is tackling these questions as part of Microsoft’s Fairness, Accountability, Transparency, and Ethics in AI group, which she joined last summer. She also cofounded the Black in AI event at the Neural Information Processing Systems (NIPS) conference in 2017 and was on the steering committee for the first Fairness and Transparency conference in February. She spoke with MIT Technology Review about how bias gets into AI systems and how diversity can counteract it.

Germany says it won’t use killer robots, but soldiers are torn

Autonomous weapons remain incredibly controversial, and the debate even extends to the soldiers that might be working with them.

Germany says no: At this week’s Munich Security Conference, notes Reuters, the head of Germany’s Cyber and Information Space Command spoke out against killer robots. “We have a very clear position,” explained Lieutenant General Ludwig Leinhos. “We have no intention of procuring ... autonomous systems.”

Soldiers are mixed: Politico notes that while many soldiers may support the use of killer robots, Leinhos isn’t alone in his views. Marcel Dickow, an autonomous weapons expert from the German Institute for International and Security Affairs, says there’s a “rift running through essentially every military” about them right now.

A detailed virtual house will help robots train to become your butler

A new digital training ground that replicates an average home lets AI learn how to do simple chores like slicing apples, making beds, or carrying drinks in a low-stakes environment.

Background: We all want a robot to run around our home and fetch us a beer. But teaching them to do it in the real world is expensive, because they’re still clumsy and make tons of mistakes.

Virtual beer fetching: So researchers have turned to training AI in virtual settings. Typically those spaces are video games like Doom or Grand Theft Auto. But IEEE Spectrum reports that a new training ground, called AI2-THOR, lets AI interact with objects like refrigerators and furniture in something like the real world.

Why it matters: Think of all the unbroken glassware. Beyond saving time and money, AI2-THOR could teach AIs skills that are genuinely useful. Exploring the relatively complex, messy settings that humans inhabit could let AI learn more as we do, too.
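To see why cheap virtual trial and error matters, here is a deliberately tiny, hypothetical sketch (not the actual AI2-THOR API): an agent in a simulated hallway learns by repetition how many steps reach the fridge, and every failed attempt costs nothing but compute.

```python
# Hypothetical toy "virtual training ground" (not the real AI2-THOR API):
# an agent learns by trial and error how many steps reach the fridge.
import random

def train_fetch(hallway_length=5, episodes=200, seed=0):
    """Tabular value learning over candidate step counts: each episode
    tries one plan in simulation; only the plan that reaches the fridge
    earns a reward, so its value estimate climbs while the rest stay flat."""
    rng = random.Random(seed)
    values = {n: 0.0 for n in range(1, hallway_length * 2)}  # candidate plans
    for _ in range(episodes):
        n = rng.choice(list(values))                  # try a step count
        reward = 1.0 if n == hallway_length else 0.0  # fridge reached?
        values[n] += 0.1 * (reward - values[n])       # nudge the estimate
    return max(values, key=values.get)                # best-known plan

print(train_fetch())  # with these defaults, the agent settles on 5 steps
```

Real simulators like AI2-THOR play the same role at far higher fidelity, with physics, objects, and visual observations instead of a one-line hallway.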

Why it matters: Bloomberg points out that simulations by the FAA suggest that drone collisions are more dangerous than bird strikes to larger aircraft (blame the metal parts). Meanwhile, the number of drones being flown is quickly increasing.

What next: Currently, US drone pilots are supposed to keep their small aircraft below 400 feet, in line of sight, and away from airplanes and helicopters. But another Bloomberg article explains that aviation authorities are lobbying for tighter rules to avoid further collisions.

Researchers are struggling to replicate AI studies

Missing code and data are making it difficult to compare machine-learning work—and that may be hurting progress.

The problem: Science reports that from a sample of 400 papers at top AI conferences in recent years, only 6 percent of presenters shared code. Just a third shared data, and a little over half shared summaries of their algorithms, known as pseudocode.

Why it matters: Without access to that information, it’s hard to reproduce a study’s findings. That makes it all but impossible to benchmark newly developed tools against existing ones, so it’s hard for researchers to know which direction to push future research.

How to solve it: Sometimes a lack of sharing may be understandable—say, if intellectual property is owned by a private firm. But there seems to be a more widespread culture of keeping details under wraps. Some meetings and journals are now encouraging sharing; perhaps more ought to follow.

You’ve heard of CRISPR as a way to edit or delete genes. Now, two leading biologists say it could also be used to detect cancer or viruses.

What it did: Jennifer Doudna’s team at the University of California, Berkeley, used a CRISPR-based test to accurately detect DNA from cancer-causing strains of human papillomavirus in human cells. Meanwhile, Feng Zhang’s lab at the Broad Institute used CRISPR to find tumor DNA in blood samples from lung cancer patients, as well as Zika and dengue viruses.

How it works: The researchers attached a signaling molecule to the CRISPR system. When CRISPR finds the DNA it’s looking for, it cuts up the genetic material around it and releases the signaling molecule, indicating that it has found its target.

Why it matters: The CRISPR-based test could be used for many things, like testing people during a disease outbreak or finding mutations in patients that reveal drug resistance or cause cancer.

All about timing: Specifically, the timing for drying concrete. If the lower layers don’t dry fully before the next ones are added, they buckle under the weight. You don’t want wonky walls.

The solution: These equations, developed by Akke Suiker from the Eindhoven University of Technology, tell you how much material is needed and how long it needs to dry between layers. Time to fire up the concrete printer.