Month: July 2015

REMEMBER THE OPENING scene of the first Fast and Furious film when bandits hijacked a truck to steal its cargo? Or consider the recent real-life theft of $4 million in gold from a truck transiting from Miami to Massachusetts. Heists like these could become easier to pull off thanks to security flaws in systems used for tracking valuable shipments and assets.

Vulnerabilities in asset-tracking systems made by Globalstar and its subsidiaries would allow a hijacker to track valuable and sensitive cargo—such as electronics, gas and volatile chemicals, military supplies or possibly even nuclear materials—disable the location-tracking device used to monitor it, then spoof the coordinates to make it appear as if a hijacked shipment were still traveling its intended route. Or a hacker who just wanted to cause chaos and confusion could feed false coordinates to companies and militaries monitoring their assets and shipments to make them think they’d been hijacked, according to Colby Moore, a researcher with the security firm Synack, who plans to discuss the vulnerabilities next week at the Black Hat and Def Con security conferences in Las Vegas.

The same vulnerable technology isn’t used just for tracking cargo and assets, however. It’s also used in people-tracking systems for search-and-rescue missions and in SCADA environments to monitor high-tech engineering projects like pipelines and oil rigs to determine, for example, if valves are open or closed in areas where phone, cellular and Internet service don’t exist. Hackers could exploit the same vulnerabilities to interfere with these systems as well, Moore says.

The tracking systems consist of devices about the size of a hand that are attached to a shipping container, vehicle or equipment and communicate with Globalstar’s low-earth orbiting satellites by sending them latitude and longitude coordinates or, in the case of SCADA systems, information about their operation. A 2003 article about the technology, for example, indicated that the asset trackers could be configured to monitor and trigger an alert when certain events occurred such as the temperature rising above a safe level in a container or the lock on a container being opened. The satellites relay this information to ground stations, which in turn transmit the data via the Internet or phone networks to the customer’s computers.

According to Moore, the Simplex data network that Globalstar uses for its satellites doesn’t encrypt communication between the tracking devices, orbiting satellites and ground stations, nor does it require the communication be authenticated so that only legitimate data gets sent. As a result, someone can intercept the communication, spoof it or jam it.

“The integrity of the whole system is relying on a hacker not being able to clone or tamper with a device,” says Moore. “The way Globalstar engineered the platform leaves security up to the end integrator, and so far, no one has implemented security.”

Simplex data transmissions are also one-way from device to satellite to ground station, which means there is no way to ping back to a device to verify that the data transmitted was accurate if the device has only satellite capability (some of the more expensive Globalstar tracking devices combine satellite and cell network communication for communicating in areas where network coverage is available).

Colby Moore intercepts Globalstar satellite communications from a plane with his homemade transceiver

Moore says he notified Globalstar about the vulnerabilities about six months ago, but the company was noncommittal about fixing them. The problems, in fact, cannot be fixed with simple software patches. Instead, to add encryption and authentication, the communication protocol would have to be re-architected.

Globalstar did not respond to a request from WIRED for comment.

Top Companies Rely on Globalstar Satellites

Globalstar has more than four dozen satellites in space, and it’s considered one of the largest providers of satellite voice and data communications in the world. Additionally, its satellite asset-tracking systems—such as the SmartOne, SmartOne B and SmartOne C—provide service to a wide swath of industry, including oil and gas, mining, forestry, commercial fishing, utilities, and the military. Asset-tracking systems made by Globalstar and its subsidiaries Geforce and Axon can be used to track fleets of armored cars, cargo-shipping containers, maritime vessels, and military equipment or simply expensive construction equipment. Geforce’s customers include such bigwigs as BP, Halliburton, GE Oil and Gas, Chevron and Conoco Phillips. Geforce markets its trackers for use with things like acid and fuel tanks, railway cars, and so-called “frac tanks” used in fracking operations.

The company noted in a press release this year that since the launch of its initial SmartOne asset-tracking system in 2012, more than 150,000 units have been put to use in multiple industries, including aviation, alternative energy and the military.

In addition to asset-tracking, Globalstar produces a personal tracking system known as the SPOT Satellite Messenger for hikers, sailors, pilots and others who travel in remote areas where cell coverage might not be available so that emergency service personnel can find them if they become lost or separated from their vehicle.

Moore tested three Globalstar devices that he bought for tracking assets and people, but he says all systems that communicate with the Globalstar satellites use the same Simplex protocol and would therefore be vulnerable to interference. He also thinks the problem may not be unique to Globalstar trackers. “I would expect to see similar vulnerabilities in other systems if we were to look at them further,” he says.

The Simplex network uses a secret code to encode all data sent through it, but Moore was able to easily reverse-engineer it to determine how messages get encoded in order to craft his own. “The secret codes are not generated on the fly and are not unique. Instead, the same code is used for all the devices,” he says.
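Moore hasn’t published the encoding itself, but the weakness he describes—one static code baked into every device—is easy to sketch. The XOR whitening sequence below is purely hypothetical and stands in for whatever scheme Simplex actually uses; the point is that once the code is recovered from a single device, every device’s messages can be read and forged:

```python
# Hypothetical illustration: a fixed "secret" code shared by all devices is
# obfuscation, not encryption. Recover it from one device and you can decode
# (or craft) traffic for the entire fleet.
SHARED_CODE = bytes([0x5A, 0xC3, 0x3C, 0xA5])  # made-up static code

def scramble(payload: bytes) -> bytes:
    # XOR each byte with the repeating shared code (a stand-in for the real,
    # undocumented Simplex encoding)
    return bytes(b ^ SHARED_CODE[i % len(SHARED_CODE)] for i, b in enumerate(payload))

descramble = scramble  # XOR scrambling is its own inverse

msg = b"lat=25.76,lon=-80.19"
over_the_air = scramble(msg)
assert descramble(over_the_air) == msg  # any eavesdropper with the code reads it
```

Because the same code is reused everywhere and never rotated, reverse-engineering one unit, as Moore did, is equivalent to breaking the entire network.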

Moore spent about $1,000 in hardware to build a transceiver to intercept data from the tracking devices he purchased, and an additional $300 in software and hardware for analyzing the data and mimicking a tracking device. Although he built his own transceiver, thieves would really only need a proper antenna and a universal software radio peripheral. With these, they could intercept satellite signals to identify a shipment of valuable cargo, track its movement and transmit spoofed data. While seizing the goods, they could disable the vehicle’s tracking device physically or jam the signals while sending spoofed location data from a laptop to make it appear that the vehicle or shipment was traveling in one location when it’s actually in another.

Each device has a unique ID that’s printed on its outer casing. The devices also transmit their unique ID when communicating with satellites, so an attacker targeting a specific shipment could intercept and spoof the communication.

In most cases, attackers would want to know in advance, before hijacking a truck or shipment, what’s being transported. But an attacker could also just set up a receiver in an area where valuable shipments are expected to pass and track the assets as they move.

“I put this on a tower on a large building and all the locations of devices [in the area] are being monitored,” Moore says. “Can I find a diamond shipment or a nuclear shipment that it can track?”

It’s unclear how the military is using Globalstar’s asset-tracking devices, but conceivably if they’re being used in war zones, the vulnerabilities Moore uncovered could be used by adversaries to track supplies and convoys and aim missiles at them.

Often the unique IDs on devices are sequential, so if a commercial or military customer owns numerous devices for tracking assets, an attacker would be able to determine other device IDs, and assets, that belong to the same company or military based on similar ID numbers.
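As a hypothetical illustration (the real ID format isn’t documented here), turning one intercepted ID into a list of likely fleet-mates is trivial when IDs are assigned sequentially:

```python
def sibling_ids(known_id: str, count: int = 5) -> list[str]:
    """Given one intercepted device ID, guess neighboring IDs likely assigned
    to the same fleet by incrementing the numeric suffix.
    The ID format used here is invented for illustration."""
    prefix, num = known_id[:-6], int(known_id[-6:])
    return [f"{prefix}{num + i:06d}" for i in range(1, count + 1)]

# One known tracker ID yields candidate IDs for the rest of the fleet
candidates = sibling_ids("0-123456", count=3)
```

An attacker could then watch for those candidate IDs in intercepted traffic to map out a company’s or military unit’s entire tracked inventory.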

Moore says security problems like this are endemic when technologies that were designed years ago, when security protocols were lax, haven’t been re-architected to account for today’s threats.

“We rely on these systems that were architected long ago with no security in mind, and these bugs persist for years and years,” he says. “We need to be very mindful in designing satellite systems and critical infrastructure, otherwise we’re going to be stuck with these broken systems for years to come.”


PUT A COMPUTER on a sniper rifle, and it can turn the most amateur shooter into a world-class marksman. But add a wireless connection to that computer-aided weapon, and you may find that your smart gun suddenly seems to have a mind of its own—and a very different idea of the target.

At the Black Hat hacker conference in two weeks, security researchers Runa Sandvik and Michael Auger plan to present the results of a year of work hacking a pair of $13,000 TrackingPoint self-aiming rifles. The married hacker couple have developed a set of techniques that could allow an attacker to compromise the rifle via its Wi-Fi connection and exploit vulnerabilities in its software. Their tricks can change variables in the scope’s calculations that make the rifle inexplicably miss its target, permanently disable the scope’s computer, or even prevent the gun from firing. In a demonstration for WIRED (shown in the video above), the researchers were able to dial in their changes to the scope’s targeting system so precisely that they could cause a bullet to hit a bullseye of the hacker’s choosing rather than the one chosen by the shooter.

“You can make it lie constantly to the user so they’ll always miss their shot,” says Sandvik, a former developer for the anonymity software Tor. Or the attacker can just as easily lock out the user or erase the gun’s entire file system. “If the scope is bricked, you have a six to seven thousand dollar computer you can’t use on top of a rifle that you still have to aim yourself.”

The exposed circuitboards of the Tracking Point TP750 that Runa Sandvik and Michael Auger hacked to control where the rounds hit.

Since TrackingPoint launched in 2011, the company has sold more than a thousand of its high-end, Linux-powered rifles with a self-aiming system. The scope allows you to designate a target and dial in variables like wind, temperature, and the weight of the ammunition being fired. Then, after the trigger is pulled, the computerized rifle itself chooses the exact moment to fire, activating its firing pin only when its barrel is perfectly oriented to hit the target. The result is a weapon that can allow even a gun novice to reliably hit targets from as far as a mile away.

But Sandvik and Auger found that they could use a chain of vulnerabilities in the rifle’s software to take control of those self-aiming functions. The first of these has to do with the Wi-Fi, which is off by default, but can be enabled so you can do things like stream a video of your shot to a laptop or iPad. When the Wi-Fi is on, the gun’s network has a default password that allows anyone within Wi-Fi range to connect to it. From there, a hacker can treat the gun as a server and access APIs to alter key variables in its targeting application. (The hacker pair were only able to find those changeable variables by dissecting one of the two rifles they worked with, using an eMMC reader to copy data from the computer’s flash storage with wires they clipped onto its circuit board pins.)

Though the rifle’s scope seemed to be pointed at the target on the right, the researchers were able to make it hit the bullseye on the left instead

In the video demonstration for WIRED at a West Virginia firing range, Auger first took a shot with the unaltered rifle and, using the TrackingPoint rifle’s aiming mechanism, hit a bullseye on his first attempt. Then, with a laptop connected to the rifle via Wi-Fi, Sandvik invisibly altered the variable in the rifle’s ballistic calculations that accounted for the ammunition’s weight, changing it from around .4 ounces to a ludicrous 72 pounds. “You can set it to whatever crazy value you want and it will happily accept it,” says Sandvik.
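A toy model makes the point about the missing input validation. The class and method names below are invented for illustration, not TrackingPoint’s actual API; the behavior mirrors what the researchers describe—no authentication and no sanity bounds on targeting variables:

```python
class TargetingService:
    """Toy reconstruction of an unauthenticated settings API.
    (Endpoint names and structure are hypothetical.)"""

    def __init__(self):
        # Defaults roughly matching the article's example values
        self.variables = {"ammo_weight_oz": 0.4, "wind_mph": 0.0}

    def set_variable(self, name, value):
        # No authentication check, no range check: "whatever crazy value
        # you want and it will happily accept it"
        self.variables[name] = value
        return True

scope = TargetingService()
# An attacker on the Wi-Fi network sets ammo weight to 72 pounds (in ounces)
scope.set_variable("ammo_weight_oz", 72 * 16)
assert scope.variables["ammo_weight_oz"] == 1152
```

A hardened design would reject unauthenticated writes outright and clamp each ballistic variable to a physically plausible range.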

Sandvik and Auger haven’t figured out why, but they’ve observed that higher ammunition weights aim a shot to the left, while lower or negative values aim it to the right. So on Auger’s next shot, Sandvik’s change of that single number in the rifle’s software made the bullet fly 2.5 feet to the left, bullseyeing an entirely different target.

The only alert a shooter might have to that hack would be a sudden jump in the scope’s view as it shifts position. But that change in view is almost indistinguishable from jostling the rifle. “Depending on how good a shooter you are, you might chalk that up to ‘I bumped it,’” says Sandvik.

The two hackers’ wireless control of the rifle doesn’t end there. Sandvik and Auger found that through the Wi-Fi connection, an attacker could also add themselves as a “root” user on the device, taking full control of its software, making permanent changes to its targeting variables, or deleting files to render the scope inoperable. If a user has set a PIN to limit other users’ access to the gun, that root attack can nonetheless gain full access and lock out the gun’s owner with a new PIN. The attacker can even disable the firing pin, a computer-controlled solenoid, to prevent the gun from firing.

One thing their attack can’t do, the two researchers point out, is cause the gun to fire unexpectedly. Thankfully TrackingPoint rifles are designed not to fire unless the trigger is manually pulled.

In a phone call with WIRED, TrackingPoint founder John McHale said that he appreciates Sandvik and Auger’s research, and that the company will work with them to develop a software update to patch the rifle’s hackable flaws as quickly as possible. When it’s ready, that update will be mailed out to customers as a USB drive, he said. But he argued that the software vulnerabilities don’t fundamentally change the gun’s safety. “The shooter’s got to pull the rifle’s trigger, and the shooter is responsible for making sure it’s pointed in a safe direction. It’s my responsibility to make sure my scope is pointed where my gun is pointing,” McHale says. “The fundamentals of shooting don’t change even if the gun is hacked.”

Runa Sandvik fires a round from a Tracking Point TP750 rifle at a target 50 yards away as husband and fellow security researcher Michael Auger uses a laptop to hack into the rifle’s Wi-Fi, changing the angle of its shot

He also pointed out that the Wi-Fi range of the hack would limit its real-world use. “It’s highly unlikely when a hunter is on a ranch in Texas, or on the plains of the Serengeti in Africa, that there’s a Wi-Fi internet connection,” he says. “The probability of someone hiding nearby in the bush in Tanzania are very low.”

But Auger and Sandvik counter that with their attack, a hacker could alter the rifle in a way that would persist long after that Wi-Fi connection is broken. It’s even possible (although likely difficult), they suggest, to implant the gun with malware that would only take effect at a certain time or location based on querying a user’s connected phone.

In fact, Auger and Sandvik have been attempting to contact TrackingPoint to help the company patch its rifles’ security flaws for months, emailing the company without response. The company’s silence until WIRED’s inquiry may be due to its financial problems: Over the last year, TrackingPoint has laid off the majority of its staff, switched CEOs and even ceased to take new orders for rifles. McHale insists that the company hasn’t gone out of business, though it’s “working through an internal restructuring.”

A view through the scope of the Tracking Point TP750.

Given TrackingPoint’s financial straits, Sandvik and Auger say they won’t release the full code for their exploit for fear that the company won’t have the manpower to fix its software. And with only a thousand vulnerable rifles in consumers’ hands and the hack’s limited range, it may be unlikely that anyone will actually be victimized by the attack.

But the rifles’ flaws signal a future where objects of all kinds are increasingly connected to the Internet and are vulnerable to hackers—including lethal weapons. “There are so many things with the Internet attached to them: cars, fridges, coffee machines, and now guns,” says Sandvik. “There’s a message here for TrackingPoint and other companies…when you put technology on items that haven’t had it before, you run into security challenges you haven’t thought about before.”

The US Census Bureau Director, John H. Thompson, revealed on Friday that his institution suffered a data breach last week, but said no sensitive or private information was leaked.

In a post on the bureau’s blog, Mr. Thompson revealed how attackers gained access to an external-facing database belonging to the Federal Audit Clearinghouse.

This database contained the names of people submitting information to the US Census Bureau, organization addresses, phone numbers, usernames, and other types of data the bureau did not consider confidential.

US Census Bureau Experiences Data Breach, Anonymous Hackers at Fault

Regarding private information collected from US citizens and businesses, Mr. Thompson said, “That information remains safe, secure and on an internal network segmented apart from the external site and the affected database. Over the last three days, we have seen no indication that there was any access to internal systems.”

The group Anonymous Operations is to blame for the attack

The breach was announced on Twitter by a hacker group calling itself Anonymous Operations and was carried out in protest against the TTIP (Transatlantic Trade and Investment Partnership) and TPP (Trans-Pacific Partnership) trade agreements.

The tweet also contained a link to their own website, where four other URLs linked to the info obtained in the data breach.

The nationality of the hackers is unknown, but their anger against the TPP and TTIP agreements should narrow down the search.

While the attack was not as severe as others on US government bodies, the bureau’s IT staff took the servers offline within 90 minutes of discovering it, and they will remain offline until the investigation completes.

From initial findings, “it appears the database was compromised through a configuration setting that allowed the attacker to gain access to the four files posted to the hacker’s site,” said Mr. Thompson.


THE MOST SENSITIVE work environments, like nuclear power plants, demand the strictest security. Usually this is achieved by air-gapping computers from the Internet and preventing workers from inserting USB sticks into computers. When the work is classified or involves sensitive trade secrets, companies often also institute strict rules against bringing smartphones into the workspace, as these could easily be turned into unwitting listening devices.

But researchers in Israel have devised a new method for stealing data that bypasses all of these protections—using the GSM network, electromagnetic waves and a basic low-end mobile phone. The researchers are calling the finding a “breakthrough” in extracting data from air-gapped systems and say it serves as a warning to defense companies and others that they need to immediately “change their security guidelines and prohibit employees and visitors from bringing devices capable of intercepting RF signals,” says Yuval Elovici, director of the Cyber Security Research Center at Ben-Gurion University of the Negev, where the research was done.

The attack requires both the targeted computer and the mobile phone to have malware installed on them, but once this is done the attack exploits the natural capabilities of each device to exfiltrate data. Computers, for example, naturally emit electromagnetic radiation during their normal operation, and cell phones by their nature are “agile receivers” of such signals. These two factors combined create an “invitation for attackers seeking to exfiltrate data over a covert channel,” the researchers write in a paper about their findings.

Researchers Hack Air-Gapped Computer With Simple Cell Phone

The research builds on a previous attack the academics devised last year using a smartphone to wirelessly extract data from air-gapped computers. But that attack involved radio signals generated by a computer’s video card that get picked up by the FM radio receiver in a smartphone.

The new attack uses a different method for transmitting the data and infiltrates environments where even smartphones are restricted. It works with simple feature phones that often are allowed into sensitive environments where smartphones are not, because they have only voice and text-messaging capabilities and presumably can’t be turned into listening devices by spies. Intel’s manufacturing employees, for example, can only use “basic corporate-owned cell phones with voice and text messaging features” that have no camera, video, or Wi-Fi capability, according to a company white paper citing best practices for its factories. But the new research shows that even these basic Intel phones could present a risk to the company.

“[U]nlike some other recent work in this field, [this attack] exploits components that are virtually guaranteed to be present on any desktop/server computer and cellular phone,” they note in their paper.

Though the attack permits only a small amount of data to be extracted to a nearby phone, it’s enough to exfiltrate passwords or even encryption keys in a minute or two, depending on the length of the password. But an attacker wouldn’t actually need proximity or a phone to siphon data. The researchers found they could also extract much more data from greater distances using a dedicated receiver positioned up to 30 meters away. This means someone with the right hardware could wirelessly exfiltrate data through walls from a parking lot or another building.

Although someone could mitigate the first attack by simply preventing all mobile phones from being brought into a sensitive work environment, to combat an attack using a dedicated receiver 30 meters away would require installing insulated walls or partitions.

The research was conducted by lead researcher Mordechai Guri, along with Assaf Kachlon, Ofer Hasson, Gabi Kedma, Yisroel Mirsky, and Elovici. Guri will present their findings next month at the Usenix Security Symposium in Washington, DC. A paper describing their work has been published on the Usenix site, though it’s currently only available to subscribers. A video demonstrating the attack has also been published online.

Data leaks via electromagnetic emissions are not a new phenomenon. So-called TEMPEST attacks were discussed in an NSA article in 1972. And about 15 years ago, two researchers published papers demonstrating how EMR emissions from a desktop computer could be manipulated through specific commands and software installed on the machine.

The Israeli researchers built on this previous knowledge to develop malware they call GSMem, which exploits this condition by forcing the computer’s memory bus to act as an antenna and transmit data wirelessly to a phone over cellular frequencies. The malware has a tiny footprint and consumes just 4 kilobytes of memory when operating, making it difficult to detect. It also consists of just a series of simple CPU instructions that don’t need to interact with the API, which helps it to hide from security scanners designed to monitor for malicious API activity.

The attack works in combination with a rootkit they devised, called the ReceiverHandler, that gets embedded in the baseband firmware of the mobile phone. The GSMem malware could be installed on the computer through physical access or through interdiction methods—that is, in the supply chain while it is en route from the vendor to the buyer. The rootkit could get installed through social engineering, a malicious app or through physical access to the targeted phone.

The Nitty Gritty

When data moves between the CPU and RAM of a computer, radio waves get emitted as a matter of course. Normally the amplitude of these waves wouldn’t be sufficient to transmit messages to a phone, but the researchers found that by generating a continuous stream of data over the multi-channel memory buses on a computer, they could increase the amplitude and use the generated waves to carry binary messages to a receiver.

Multi-channel memory configurations allow data to be simultaneously transferred via two, three, or four data buses. When all these channels are used, the radio emissions from that data exchange can increase by 0.1 to 0.15 dB.

The GSMem malware exploits this process by causing data to be exchanged across all channels to generate sufficient amplitude. But it does so only when it wants to transmit a binary 1. For a binary 0, it allows the computer to emit at its regular strength. The fluctuations in the transmission allow the receiver in the phone to distinguish when a 0 or a 1 is being transmitted.
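This is classic on-off keying, and it can be sketched in a few lines. The amplitude values below are arbitrary units chosen for illustration (the paper reports an increase of only about 0.1 to 0.15 dB); the structure—boost the signal for a 1, idle for a 0, threshold on the receiving side—is the mechanism described above:

```python
BASELINE = 1.00  # bus's average casual emission (arbitrary units)
BOOSTED = 1.15   # amplitude when all memory channels are driven (illustrative)

def transmit(bits: str) -> list[float]:
    # '1' -> saturate the multi-channel memory bus (raised amplitude),
    # '0' -> leave the bus at its normal emission level
    return [BOOSTED if b == "1" else BASELINE for b in bits]

def receive(samples: list[float],
            threshold: float = (BASELINE + BOOSTED) / 2) -> str:
    # Anything significantly above the baseline is read as a binary '1'
    return "".join("1" if s > threshold else "0" for s in samples)

assert receive(transmit("10110")) == "10110"
```

In practice the receiver must first estimate the baseline from observed traffic, since the real emission level varies with the machine and its surroundings.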

“A ‘0’ is determined when the amplitude of the signal is that of the bus’s average casual emission,” the researchers write in their paper. “Anything significantly higher than this is interpreted as a binary ‘1’.”

The receiver recognizes the transmission and converts the signals into binary 1s and 0s and ultimately into human-readable data, such as a password or encryption key. It stores the information so that it can later be transmitted via mobile data, SMS, or Wi-Fi if the attack involves a smartphone.

The receiver knows when a message is being sent because the transmissions are broken down into frames of sequential data, each composed of 12 bits, that include a header containing the sequence “1010.” As soon as the receiver sees the header, it takes note of the amplitude at which the message is being sent, makes some adjustments to sync with that amplitude, then proceeds to translate the emitted data into binary. They say the most difficult part of the research was designing the receiver malware to decode the cellular signals.
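A minimal decoder for that framing might look like the sketch below. The payload layout beyond the “1010” header is a simplification; the paper’s actual frames include additional synchronization and error-handling detail:

```python
HEADER = "1010"
FRAME_BITS = 12  # per the paper: frames of 12 sequential bits, header included

def extract_payloads(bitstream: str) -> list[str]:
    """Scan a demodulated bitstream for frame headers and return each
    frame's remaining 8 bits. (Simplified frame layout for illustration.)"""
    payloads, i = [], 0
    while i + FRAME_BITS <= len(bitstream):
        if bitstream[i:i + len(HEADER)] == HEADER:
            payloads.append(bitstream[i + len(HEADER):i + FRAME_BITS])
            i += FRAME_BITS  # jump to the start of the next frame
        else:
            i += 1  # slide forward until a header aligns
    return payloads

# Two frames carrying the ASCII bits for 'A' (01000001) and 'B' (01000010)
frames = extract_payloads("1010" + "01000001" + "1010" + "01000010")
```

Sliding bit-by-bit until the header aligns is what lets the receiver lock on mid-stream, which matters when it can’t know exactly when the infected computer began transmitting.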

For their test, the researchers used a nine-year-old Motorola C123 phone with a Calypso baseband chip made by Texas Instruments, which supports 2G network communication, but has no GPRS, Wi-Fi, or mobile data capabilities. They were able to transmit data to the phone at a rate of 1 to 2 bits per second, which was sufficient to transmit 256-bit encryption keys from a workstation.

They tested the attack on three workstations with different Microsoft Windows, Linux, and Ubuntu configurations. The experiments all took place in a space with other active desktop computers running nearby to simulate a realistic work environment in which there might be a lot of electromagnetic noise that the receiver has to contend with to find the signals it needs to decode.

Although the aim of their test was to see if a basic phone could be used to siphon data, a smartphone would presumably produce better results, since such phones have better radio frequency reception. They plan to test smartphones in future research.

But even better than a smartphone would be a dedicated receiver, which the researchers did test. Using dedicated hardware instead of a nearby phone, they were able to achieve a transmission rate of 100 to 1,000 bits per second from up to 30 meters away. They used GNU Radio, a software-defined radio kit, and an Ettus Research Universal Software Radio Peripheral B210.

Although there are limits to the amount of data any of these attacks can siphon, even small bits of data can be useful. In addition to passwords, an attacker could use the technique to siphon the GPS coordinates of sensitive equipment to determine its location—for example, a computer being used to operate a covert nuclear program in a hidden facility. Or it could be used to siphon the RSA private key that the owner of the computer uses to encrypt communications.

“This is not a scenario where you can leak out megabytes of documents, but today sensitive data is usually locked down by smaller amounts of data,” says Dudu Mimran, CTO of the Cyber Security Research Center. “So if you can get the RSA private key, you’re breaking a lot of things.”


Valve’s Steam is the biggest platform in the PC gaming market, with Valve themselves being one of the most prominent companies in the gaming industry as a whole. Steam has millions of accounts all over the world, and in some cases people have invested literally thousands of dollars into their own accounts, which is why a security breach like the one that occurred just a few days ago is something to take very seriously.

Reports are still blurry and information keeps coming out—Valve themselves have yet to make an official statement on the issue—but according to a demonstration posted on YouTube, a hacker could abuse the “forgotten password” feature in Steam’s log-in service, completely bypassing the step where they have to enter a security code and gaining the ability to reset the account’s password.
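The reported flaw amounts to a verification step that can be skipped. The toy reconstruction below is not Valve’s actual code—the class, method names, and the exact bypass condition are inferred from the public demonstration—but it shows the shape of the bug: the reset endpoint accepts an empty code instead of requiring the one that was emailed:

```python
class PasswordReset:
    """Toy model of a broken password-reset flow.
    (Reconstructed from the public demo; not Valve's implementation.)"""

    def __init__(self):
        self.sent_codes = {}
        self.passwords = {"some_user": "hunter2"}

    def request_reset(self, account: str) -> None:
        # A recovery code is generated and emailed to the real owner
        self.sent_codes[account] = "8F3K2"

    def submit_reset(self, account: str, code: str, new_password: str) -> bool:
        # BUG: an empty code is accepted, so the emailed code is never needed
        if code == self.sent_codes.get(account) or code == "":
            self.passwords[account] = new_password
            return True
        return False

steam = PasswordReset()
steam.request_reset("some_user")
# The attacker never sees the emailed code, yet the reset succeeds
assert steam.submit_reset("some_user", "", "attacker_pw") is True
```

This also explains why victims reportedly received recovery emails they never requested: the attacker triggers the reset flow, and the email is the only trace the real owner sees.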

Steam Hit by Major Security Breach, Many Accounts Hacked

All an attacker needs to carry out this exploit is the account name of a Steam user. It’s not yet clear if Steam Guard offers sufficient protection from the exploit, as there have been some reports from users claiming that their accounts have been compromised even with Steam Guard enabled.

Valve have closed the loophole already, but not before significant damage was done to many users. Among the affected are various prominent Twitch streamers, who’ve had their accounts hijacked and locked down. Valve have apparently started to impose a 5-day “ban” on accounts that have been compromised in the incident, but it’s not clear if there will be any additional consequences for those who have been affected.

Some users have been worried about the possibility of “VAC bans” – Valve’s anti-cheat system is quite notorious for its permanent bans, and even in cases where users have had their accounts hijacked, Valve typically never revert these bans.

On the other hand, users who actively trade on the Steam Market have been worried that they might lose some of their hard-earned items, which is a real danger now that their accounts have been compromised. This could be one of the reasons for the 5-day lockdown, as it would allow Valve to carefully sort out the mess without people trading and getting in their way.

Some have pointed out that Valve’s silence on the matter has been worrying. It’s been nearly 24 hours since the issue started spreading publicly, and considering the large number of potentially compromised accounts, the responsible thing would be to notify users as soon as possible so they can take steps to secure their own accounts.

However, Valve haven’t commented on the situation yet and it’s not clear when they are going to speak up. Various social media sites have been discussing the issue very actively, such as reddit, where it’s already popped up in many popular sections and has been getting a lot of attention.

Users are advised to keep an eye on their e-mail accounts. If an e-mail related to password recovery is received, the user should definitely not ignore it, and proceed to verify that their account is still accessible.

It’s important to note that the information contained in the e-mail itself is not necessary to carry out the attack. Receiving this e-mail is simply a sign that the user is being targeted. However, some have reported that even changing their password was ineffective, as the hackers were able to simply keep resetting it over and over again, with no good way to stop them.

A new anonymous web browser capable of delivering encrypted data across the dark web at high speeds has been developed by security researchers.

HORNET (High-speed Onion Routing at the Network Layer), created by researchers from Zurich and London, is capable of processing anonymous traffic at speeds of more than 93 Gb/s, paving the way for what academics refer to as “internet-scale anonymity”.

The research paper detailing the anonymity network reveals that it was created in response to revelations concerning widespread government surveillance that came to light through the US National Security Agency (NSA) whistleblower Edward Snowden.

A new browser for the dark web could offer significantly higher browsing speeds than Tor

HORNET has also been designed to overcome the flaws identified with other anonymous web browsers, such as Tor.

“Recent revelations about global-scale pervasive surveillance programs have demonstrated that the privacy of internet users worldwide is at risk,” the researchers have stated.

“To protect against these and other surveillance threats, several anonymity protocols, tools, and architectures have been proposed. Tor is the system of choice for over 2 million daily users, but its design as an overlay network suffers from performance and scalability issues: as more clients use Tor, more relays must be added to the network.”

Due to Tor’s system of encryption between the servers or relays that make up its network, web browsing can be a much slower experience than on the open web.

In order to achieve higher speeds, HORNET uses “source-selected paths and shared keys between endpoints and routers to support [anonymous communication]”, meaning that data is not encrypted as many times as in Tor but still remains anonymous.
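The per-hop shared-key idea can be sketched with a toy stream cipher. Here an XOR of a hashed keystream stands in for HORNET’s real symmetric primitives, and all protocol details (header format, key exchange) are omitted; the sketch shows only the core structure—the sender wraps the payload once per router on a path it chose itself, and each router strips one layer with its pre-shared key:

```python
from hashlib import sha256

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream generator for illustration only; HORNET uses
    # standard symmetric cryptography, not this construction.
    out, counter = b"", 0
    while len(out) < n:
        out += sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def onion_encrypt(payload: bytes, path_keys: list[bytes]) -> bytes:
    # Sender adds one layer per router on its self-selected path
    for key in reversed(path_keys):
        payload = xor(payload, keystream(key, len(payload)))
    return payload

def route(packet: bytes, path_keys: list[bytes]) -> bytes:
    # Each router removes its layer using the key it shares with the sender,
    # with no per-connection state to look up
    for key in path_keys:
        packet = xor(packet, keystream(key, len(packet)))
    return packet

keys = [b"router-A", b"router-B", b"router-C"]
msg = b"anonymous payload"
assert route(onion_encrypt(msg, keys), keys) == msg
```

Keeping the forwarding state in the packet itself rather than in per-connection tables at each relay is what the designers credit for the protocol’s scalability compared with Tor’s overlay design.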

According to its creators, HORNET is also less vulnerable to attacks that have been used to reveal the identity of Tor users. The Tor Project has declined to comment on HORNET until the research has been peer-reviewed.