About a quarter of internet users use a virtual private network, a software setup that creates a secure, encrypted data connection between their own computer and another one elsewhere on the internet. Many people use them to protect their privacy when using Wi-Fi hotspots, or to connect securely to workplace networks while traveling. Other users are concerned about surveillance from governments and internet providers. However, most people – including VPN customers – don’t have the skills to double-check that they’re getting what they paid for. A group of researchers I was part of does have those skills, and our examination of the services provided by 200 VPN companies found that many of them mislead customers about key aspects of their user protections.

Would you like to rid the internet of false political news stories and misinformation? Then consider using — yes — crowdsourcing. That’s right. A new study co-authored by an MIT professor shows that crowdsourced judgments about the quality of news sources may effectively marginalize false news stories and other kinds of online misinformation.

More and more people interact with the Internet of Things (IoT) in daily life. The IoT includes the devices and appliances in our homes — such as smart TVs, virtual assistants like Amazon’s Alexa, or learning thermostats like Nest — that connect to the internet. The IoT also includes wearables such as the Apple Watch or Bluetooth chips that keep track of car keys. Our cars themselves, if equipped with sensors and computers, are also part of the IoT. In an age where data theft and cyberattacks are increasingly routine, the IoT has security vulnerabilities that must be addressed as the popularity of IoT devices grows.

The digitization of biological and medical science is providing exciting and promising new pathways for improving health and daily life for mankind and our environment. The possibilities for new treatments, better fitness, and a lower prevalence of genetic diseases are numerous. However, these technologies and the information associated with emerging techniques carry certain risks and vulnerabilities. It is through understanding these risks and continuing to develop mitigation strategies for them, especially during the technology conceptualization and development phases, that we can continue to build promising new tools to improve life with confidence while addressing how they should be properly used.

U.S. intelligence chiefs are sounding alarms about an ever more perilous future for the United States, one in which the country is in danger of seeing its influence wane, its allies waver, and key adversaries team up to erode norms that once kept the country safe and the world more stable. “It is increasingly a challenge to prioritize which threats are of greatest importance,” Dan Coats, Director of National Intelligence, said, sharing testimony that repeatedly contradicted past assertions by President Donald Trump. “During my tenure as DNI – now two years – I have told our workforce over and over that our mission was to seek the truth and speak the truth,” Coats pointedly stated. Driving many of the concerns, according to intelligence officials, is a growing alliance between Russia and China competing against the U.S. not just for military and technological superiority, but for global influence.

A new RAND report examines current Russian hostile measures in Europe and forecasts how Russia might threaten Europe using these measures over the next few years. “Whatever the U.S. response, preparation for involvement in a wide range of conflicts can help reduce the risk of mismanagement, miscalculation, and escalation,” the report’s authors say.

With almost every online purchase, a person’s personal information — name, date of birth, and credit card number — is stored electronically, often in the “cloud,” which is a network of internet servers. Now, as more people buy from online businesses, researchers hope to employ a new strategy in the ongoing struggle to protect digital information in the cloud from targeted cyberattacks. The strategy establishes a new artificial intelligence system to combat digital intrusions.

Whether a piece of information is private, proprietary, or sensitive to national security, systems owners and users have few guarantees about where their information resides or about its movements between systems. As is the case with consumers, the national defense and security communities similarly have only a few options when it comes to ensuring that sensitive information is appropriately isolated, particularly when it’s loaded onto an internet-connected system. A new program seeks to create new software and hardware architectures that provide physically provable assurances around data security and privacy.

Three members of a far-right militia, who were convicted of plotting to massacre Muslims in southwest Kansas immediately after the November 2016 election, were sentenced Friday to decades in prison. The terrorist plot was foiled after another militia member informed the police. Defense attorneys, in their sentencing memo, vigorously presented what came to be known as The Trump Defense: They argued that Trump’s anti-Muslim rhetoric during the 2016 election made attacks against Muslims appear legitimate. The defense attorneys also argued that the plot architect had been “immersed” in Russian disinformation and far-right propaganda, leading him to believe that if Donald Trump won the election, then-President Barack Obama would declare martial law and not recognize the validity of the election — forcing armed militias to step in to ensure that Trump became president.

Bombs exploding, hostages taken, and masked gunmen firing machine guns are all types of terrorist attacks we’ve seen. According to a new study, it’s the attacks we don’t see – cyberattacks – that happen more often and can cause greater destruction. “Little work has been done around the use of the internet as an attack space,” said Thomas Holt, Michigan State University professor of criminal justice and lead author. “The bottom line is that these attacks are happening and they’re overlooked. If we don’t get a handle on understanding them now, we won’t fully understand the scope of the threats today and how to prevent larger mobilization efforts in the future.”

The word “hacker” often conjures up the stereotype of a nefarious genius typing away on a computer in a darkened room, stealing personal information — or worse. And thirty years ago, hacking was viewed as criminal activity. But the culture has changed. Now companies like Google, Facebook, and United Airlines offer rewards to people who discover and report vulnerabilities in their software.

Artificial intelligence can play chess, drive a car and diagnose medical issues. Examples include Google DeepMind’s AlphaGo, Tesla’s self-driving vehicles, and IBM’s Watson. This type of artificial intelligence is referred to as Artificial Narrow Intelligence (ANI) – non-human systems that can perform a specific task. With the next generation of AI the stakes will almost certainly be much higher. Artificial General Intelligence (AGI) will have advanced computational powers and human level intelligence. AGI systems will be able to learn, solve problems, adapt and self-improve. They will even do tasks beyond those they were designed for. The introduction of AGI could quickly bring about Artificial Super Intelligence (ASI). When ASI-based systems arrive, there is a great and natural concern that we won’t be able to control them.

By studying how more than 16,000 American registered voters interacted with fake news sources on Twitter during the 2016 U.S. presidential election, researchers report that engagement with fake news was extremely concentrated. Only a small fraction of Twitter users accounted for the vast majority of fake news exposures and shares, they say, many among them older, conservative and politically engaged.

“One of the characteristics of Virtual Terrorism is that it allows countries like North Korea (and Iran) to punch well above their weight in the cyber arena, and conduct their own form of ‘diplomacy’ on the cyber battlefield. These countries have already attacked the U.S. and other countries – all countries with the capability to do so, do so,” says Daniel Wagner. “The best way to fight it is to help ensure that as many people as possible understand what it is, what some of the challenges are in fighting it, and what can we do about it.”

We agree to give up some degree of privacy anytime we search Google to find a nearby restaurant or use other location-based apps on our mobile devices. The occasional search may be fine, but researchers say repeatedly pinpointing our location reveals information about our identity, which may be sold or shared with others. The researchers say there is a way to limit what companies can glean from location information.

The long view

Russia’s attack on American elections in 2016, described in Special Counsel Robert Mueller’s recent report as “sweeping and systematic,” came as a shock to many. It shouldn’t have. Experts had been warning of the danger of foreign meddling in U.S. elections for years. Already by 2016, the wholesale adoption of computerized voting had weakened safeguards against interference and left the United States vulnerable to an attack. So, too, the shift to digital media and communications had opened new gaps in security and the law that could be used for manipulation and blackmail.

When former U.S. Special Counsel Robert Mueller testified before the House Intelligence Committee last week about his investigation into Russian interference in the 2016 presidential election, some saw his comments about Moscow’s ongoing meddling attempts as the most important statement of the day. “It wasn’t a single attempt,” he said when asked about the spread of disinformation and whether Moscow would replicate the efforts again. “They’re doing it as we sit here and they expect to do it during the next campaign.” It’s not clear, however, who can or will lead the charge in this “war on disinformation.” Even as experts say the problem is worsening, it is unlikely that the current divided government could produce anything close to a solution.

A little-known science fiction book penned by the late father of U.S. Attorney General William Barr is being sold online at astronomical prices by sellers eager to attract Jeffrey Epstein conspiracy theorists. Space Relations: A Slightly Gothic Interplanetary Tale by Donald Barr has been thrust into the spotlight in the wake of the convicted pedophile’s apparent suicide, and eBay sellers — quick to link the two men — are now hawking it for as much as $4,999.

The preliminary results of Facebook’s long-awaited “bias” audit are out. The key takeaway? Everyone is still unhappy. The report is little more than a formalized catalog of six categories of grievances aired in Republican-led congressional hearings over the past two years. It doesn’t include any real quantitative assessment of bias. There are no statistics assessing the millions of moderation decisions that Facebook and Instagram make each day. The results are all the more remarkable because the audit was an exhaustive affair, the fruit of about a year of research led by former Republican Sen. Jon Kyl, encompassing interviews with scores of conservative lawmakers and organizations. “Despite the time and energy invested, the conspicuous absence of evidence within the audit suggests what many media researchers already knew: Allegations of political bias are political theater,” Renee DiResta writes.

The Israeli-Palestinian conflict has long been a global battle, fought by hundreds of proxies in dozens of national capitals by way of political, economic, and cultural pressure. As the internet has evolved, so have the tools used to wage this information struggle. The latest innovation — a pro-Israel smartphone app that seeds and amplifies pro-Israel messages across social media — saw its first major test in May 2019. It offered a glimpse of the novel methods by which future influence campaigns will be conducted and information wars won.

Caution and restraint are not known as the hallmarks of the digital revolution. Especially when there’s the admirable possibility of increasing participation by going digital, the temptation to do so is strong—and rarely resisted. But a decision reportedly taken by the Democratic National Committee presents a significant display of caution that deserves both attention and praise. “Showing restraint usually isn’t exciting or flashy,” Joshua Geltzer writes. “But it can be admirable. And, here, organizations like the DNC that take these steps deserve our collective applause for erring on the side of caution, especially in a world replete with cybersecurity and election interference threats.”