Researchers at Norway-based security firm Promon have demonstrated how thieves with the necessary hacking skills can track and steal Tesla vehicles through the carmaker’s Android application.

In a video released this week, the experts showed how they could obtain a targeted user’s credentials and leverage that information to track the vehicle and drive it away. Several conditions must be met for the attack to work, most notably tricking the victim into installing a malicious app on their mobile phone, but the researchers believe their scenario is plausible.

According to Promon, the Tesla mobile app uses HTTP requests and an OAuth token to communicate with the Tesla server. The token is valid for 90 days and it allows users to authenticate without having to enter their username and password every time they launch the app.

The problem is that this token is stored in cleartext in the app’s sandbox folder, allowing an attacker who gains access to the device to steal it and use it to send specially crafted requests to the server. Once they obtain the token, criminals can use it to locate the car and open its doors. To enable the keyless driving feature and actually steal the vehicle, they also need the victim’s username and password.
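To illustrate why cleartext token storage is dangerous, here is a minimal sketch: anything that can read the app’s sandbox can replay the token as a bearer credential. The file contents, token value, and field names below are invented for illustration; only the bearer-token pattern itself is standard OAuth.

```python
import json

# Hypothetical cleartext token file, as an app might leave in its sandbox.
# (7,776,000 seconds is 90 days, matching the validity period described.)
token_file_contents = '{"access_token": "abc123", "expires_in": 7776000}'

token = json.loads(token_file_contents)["access_token"]

# An attacker who reads that file can construct valid API requests;
# no username or password is needed while the token remains valid.
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"])  # → Bearer abc123
```

Storing the token in an encrypted keystore rather than a plain file, or binding it to the device, would blunt this replay.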

Experts believe this can be achieved by tricking the user into installing a piece of malware that modifies the Tesla app and steals the username and password when the victim enters them in the app. According to researchers, the legitimate Tesla app can be modified using one of the many vulnerabilities affecting Android, such as the issue known as TowelRoot. The TowelRoot exploit, which allows attackers to elevate privileges to root, has been used by a piece of Android malware dubbed Godless.

In order to get the victim to install the malicious app, the attacker can use various methods, including free Wi-Fi hotspots.

“When the Tesla owner connects to the Wi-Fi hotspot and visits a web page, he is redirected to a captive portal that displays an advertisement targeting Tesla owners. In [our] example, an app was advertised that offers the Tesla owner a free meal at the nearby restaurant. When the Tesla owner then clicks on the advertisement, he is redirected to the Google Play store where the malicious app is displayed,” experts said.

While there are multiple conditions that need to be met for the attack to work, researchers pointed out that many devices run vulnerable versions of Android and users are often tricked into installing malware onto their devices.

Promon has not disclosed any technical details about the attack method. The company says it has been working with Tesla on addressing the issues. It’s worth noting that Tesla has a bug bounty program with a maximum payout of $10,000 for each flaw found in its websites, mobile apps and vehicle hardware.

This is not the first time researchers have demonstrated that Tesla cars can be hacked remotely. A few weeks ago, experts at China-based tech company Tencent showed that they could remotely control an unmodified Tesla Model S while it was parked or on the move. Tesla quickly patched the vulnerabilities found by Tencent, but downplayed their severity, claiming that the attack was not fully remote, as suggested in a video released by experts.

SecurityWeek has reached out to Tesla for comment and will update this article if the company responds.

Tesla car owners are urged to update their car’s firmware to the latest version available, as it fixes security vulnerabilities that can be exploited remotely to take control of the car’s brakes and other, less critical components.

The vulnerabilities were discovered by researchers from Tencent’s Keen Security Lab, and responsibly disclosed to Tesla. The company’s Product Security Team confirmed them, and implemented fixes in the latest version of the firmware.

Tencent’s researchers understandably didn’t reveal details about the flaws, but have provided a video demonstration of the attacks:

They have managed to remotely open various Tesla cars’ sunroof, turn on the blinkers, move the car seat, and open doors, all while the cars were in parking mode. But they have also managed to control windshield wipers, fold the side rearview mirrors, open the trunk, and manipulate the brakes from 12 miles away.

“As far as we know, this is the first case of remote attack which compromises CAN Bus to achieve remote controls on Tesla cars. We have verified the attack vector on multiple varieties of Tesla Model S. It is reasonable to assume that other Tesla models are affected,” they noted.

“The issue demonstrated is only triggered when the web browser is used (web browser functionality not enabled in Australia). Our realistic estimate is that the risk to our customers was very low, but this did not stop us from responding quickly,” a Tesla spokesperson told ZDNet.

The software update fixing the flaws has already been deployed over-the-air, so details about them should soon be revealed.

Security researchers from China-based tech company Tencent have identified a series of vulnerabilities that can be exploited to remotely hack an unmodified Tesla Model S while it’s parked or on the move.

An 8-minute video published on Monday by Tencent’s Keen Security Lab shows that researchers managed to perform various actions. While the vehicle was parked, the experts demonstrated that they could control the sunroof, the turn signals, the position of the seats, all the displays, and the door locking system.

While the car was on the move, the white hat hackers showed that they could activate the windshield wipers, fold the side view mirrors, and open the trunk. They also demonstrated that a remote hacker can activate the brakes from a long distance (e.g. 12 miles, as shown in the experiment).

According to Keen Lab researchers, the attacks they demonstrated are possible due to a series of vulnerabilities that have been chained together.

“As far as we know, this is the first case of remote attack which compromises CAN Bus to achieve remote controls on Tesla cars,” the researchers said. “We have verified the attack vector on multiple varieties of Tesla Model S. It is reasonable to assume that other Tesla models are affected.”

Based on the video made available by Keen Lab, it appears that a specific Tesla Model S can be identified and hacked while its owner is searching for nearby charging stations.

The vulnerabilities have been disclosed to Tesla Motors through the company’s Bugcrowd-hosted bug bounty program. According to Keen Lab, Tesla has confirmed the flaws and is working on addressing them. Fortunately, Tesla can release over-the-air firmware updates, which means that, unlike other carmakers, the company does not need to recall vehicles to apply security patches.

SecurityWeek has reached out to Tesla for comment and will update this article if the company’s representatives respond.

Tesla launched its bug bounty program in June 2015, more than a year after researchers started demonstrating that its vehicles could be hacked. After initially offering only up to $1,000 per vulnerability, in August 2015, the company decided to increase bug bounty payouts to a maximum of $10,000 for each flaw found in websites, mobile applications and vehicle hardware.

Research conducted over the past few years by several experts – the best known being Charlie Miller and Chris Valasek, who have managed to hack cars both locally and remotely – has led to the launch of companies and departments that specialize in automotive security. Earlier this month, Volkswagen announced that it has teamed up with Israeli security experts to launch a new firm called CYMOTIVE Technologies.

A group of researchers from Brigham Young University has been tracking users’ neural activity while they are using a computer, and have discovered that security warnings are heeded more if they don’t pop-up right in the middle of a task or action that requires the users’ attention.

Humans are generally bad at multitasking, and they will ignore such messages in most cases when they are watching a video, typing, or inputting a confirmation code, i.e. when they can’t attend to the message without it affecting the quality of their primary task, or can’t give the message enough attention.

The best moments to spring a security warning are while the user waits for a web page to load or a file to be downloaded or processed, when they switch to another site, or after they have finished watching a video.
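The recommendation can be sketched as a simple deferral policy: hold non-urgent warnings until a natural breakpoint. This is an illustrative toy, not how Chrome or any real browser schedules its warnings.

```python
class WarningQueue:
    """Toy sketch: defer non-urgent security warnings to task boundaries."""

    def __init__(self):
        self.pending = []

    def warn(self, message, urgent=False):
        if urgent:
            self.show(message)            # interrupt immediately
        else:
            self.pending.append(message)  # hold until a natural breakpoint

    def task_finished(self):
        # Called when a page finishes loading, a download completes, etc.
        for message in self.pending:
            self.show(message)
        self.pending.clear()

    def show(self, message):
        print(f"[SECURITY] {message}")


q = WarningQueue()
q.warn("An extension was disabled", urgent=False)  # user is mid-task: deferred
q.task_finished()                                  # breakpoint reached: shown now
```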

Anybody who has ever used a computer and ignored their fair share of security messages will not be surprised by the results of this study.

But it is surprising that the software industry hasn’t already arranged for security messages that don’t require immediate attention to be shown when a task is started or finished, or while the user is waiting for a task to complete.

While it might seem that this study was a waste of time that proves something we all know, it will have an impact on our daily lives – or, more specifically, on the lives of Google Chrome users.

The research was performed in collaboration with Google Chrome security engineers, and its results convinced them to tweak the timing of the security messages in future versions of the Chrome Cleanup Tool.

Hopefully, other software makers will follow. With the human element consistently being the weakest point of the security chain, we need all the help we can get to make the right choices.

Malware analysts often need to share samples with each other. This might involve sending malicious files as password-protected email attachments or providing a link where the specimen might be downloaded. Because of the risks and the associated security precautions, sharing malicious program artifacts with other researchers can be tricky. Below are some considerations for engaging in such activities. See the end of this post for the summary of advice on sharing malware samples.

Password-Protecting the Archive

The most common way of sharing a malware sample with another researcher involves embedding the malicious file in a zip archive that has been protected with the password “infected”. Password-protecting the file aims at getting the specimen past antivirus scanners and makes it harder for the recipient to inadvertently infect their system.

The informal poll I conducted on Twitter confirmed the use of “infected” as the most common password, which has long been considered the industry standard. It was followed by the password “malware” as the distant second, which happens to be my choice for several reasons:

Researchers are so used to the password “infected” that they might type it without giving it a second thought. I prefer the recipient to give explicit consideration to the nature of the file they are about to extract from the archive.

Antivirus tools know about the password “infected” and can use it to extract and scan the archive’s contents. This action can cause unnecessary alarms and can prevent the sample from reaching the intended recipient.

The classic problem with “infected” was outlined by Brian Baskin, who noticed that Gmail was blocking access to email attachments that contained malware zip’ed with that password. This behavior occurred due to the automated actions performed by the antivirus engine used by Google to scan email attachments.

On the other hand, using the password “infected” is convenient when uploading the sample to third-party tools that know about this common practice. For instance, VirusTotal automatically tries this password when you upload a zip’ed file to this popular malware-researching site, as shown in the screenshot below.

Using a less common password increases the chances that the sample won’t be blocked or flagged when you share it with someone, though this approach is not without fault.

Malware Attachments vs. Download Links

Email gateways might be configured to block messages that contain password-protected archives, regardless of the password used to protect them. Rather than sending the malware archive as an email attachment, consider uploading it to a website from which the other researcher will be able to download it.

You could upload the file to a malware-specific site, such as Malwr or VirusTotal, then send the fellow analyst a link or simply specify the hash value of the file. You might not even need to upload the specimen, since such repositories might already contain it.

If using a general-purpose file-sharing service, first confirm that doing this doesn’t violate the service’s terms of use. The malware research blog Contagio had to move its samples off Mediafire after Google Safe Browsing services notified Mediafire that the Contagio account was hosting “harmful” files there. As a result, “Mediafire suspended public access to Contagio account.” Contagio ended up moving its samples to a Dropbox Business account. (I’ve been using private Google Drive links for sharing samples without problems so far.)

I encountered a slight issue hosting a malware sample that I originally zip’ed and protected with the password “infected”. The sample accompanied my Introduction to Malware Analysis webcast and resided on my web server. While the file didn’t affect the standing of my site on commonly used blacklists, searching for my domain name on VirusTotal did highlight the presence of this known malicious file on my server, which could’ve had an adverse effect on my site’s rating at some point.

I should’ve used a different password for that file. Moreover, I should have avoided using zip as the archival algorithm because it doesn’t cloak the archive’s contents as well as some alternatives.

Zip vs. Other Archival Formats

It’s convenient to use zip when sharing malware samples, because this format is supported by most decompression utilities. However, this file format exposes several attributes about the original file that could be used to flag it as malicious and, therefore, interfere with the sharing objective. The standard zip algorithm doesn’t conceal names of the archived files even when they’ve been password-protected. It also reveals CRC checksums of the archived files, which could be used to detect that the zip’ed file contains malicious contents.
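The metadata leakage described above is easy to see with Python’s standard library. One caveat: `zipfile` cannot *create* password-protected archives, so this sketch builds an unencrypted one; classic ZipCrypto leaves the same central-directory metadata (names, CRC-32 values) visible, so the demonstration carries over.

```python
import io
import zipfile
import zlib

payload = b"not-really-malware"  # harmless stand-in bytes for a sample

# Build an ordinary zip archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("sample.exe", payload)

# The central directory is readable without any password:
with zipfile.ZipFile(buf) as zf:
    info = zf.infolist()[0]
    print(info.filename)   # file name is stored in the clear
    print(hex(info.CRC))   # CRC-32 of the plaintext contents

# A scanner that knows the CRC-32 of a known-bad file can therefore
# flag the archive without ever decrypting it:
assert info.CRC == zlib.crc32(payload)
```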

One archival format that is relatively popular among malware researchers is 7-Zip. This open source tool, available for most platforms, gives you the option of encrypting file names when password-protecting the archive. The command-line version of the tool calls this “archive header encryption,” which you can accomplish using the “-mhe=on” parameter. This option is usually turned off by default.

Hashes and Other Considerations

Email gateways, web security and other defenses might block access to password-protected archives that they cannot scan. In these situations, your best option might be to share the sample with the researcher offline, perhaps by mailing the person a USB key that contains the archived malware specimen.

Regardless of how you share the malware sample, it’s a good idea to specify the hash of the malicious file, so the analyst can confirm the integrity of the file they received from you. It’s common to see MD5 used for such purposes. However, given the possibility of hash collisions, it’s better to employ SHA256 as the hashing algorithm.

Before sharing a malware sample with other researchers, make sure they actually expect to receive the malicious specimen. Check with them which sharing method will work best for them. Oh, and before releasing the specimen to someone outside your organization, make sure you’re allowed to share this potentially-sensitive file with a third party.

Malware Sample Sharing Tips

Here’s the summary of the recommendations that I outlined above for sharing malware samples with other researchers:

Confirm for yourself that your employer or client obligations don’t prohibit you from sharing the specimen.

If practical, ask the recipient regarding the approach they prefer to use for receiving the file.

Don’t send the malware sample as a normal file. Instead, password-protect it inside an archive.

The zip format is frequently used, but you should consider using the 7-Zip format to better conceal the archive’s contents.

The password “infected” is frequently used, but you should consider using a less common password, such as “malware”.

Rather than emailing the specimen as an attachment, consider sending the researcher a link where they could download the file.

Specify the hash of the malware sample using a modern algorithm such as SHA256, so the recipient can confirm that they obtained the right file.

Confirm with the recipient that they successfully retrieved the malware sample that you shared with them.

On a related note, if you’re wondering where to obtain malware samples, take a look at my Malware Sample Sources for Researchers page.

DEF CON A quest to build a smart computer system that finds and patches bugs faster and more efficiently than humans is off to a good start with all the teams in DARPA’s Cyber Grand Challenge performing very well indeed.

The contest, held at the DEF CON hacking conference in Las Vegas, was organised by the research arm of the US military and saw seven teams test out their automated seek-and-patch-ware in a simulated operating system. The eight-hour contest saw the teams find and patch 420 flaws and create 650 proofs of concepts.

“Our mission is to change what’s possible so we can take huge strides forward in our national security capabilities,” said DARPA director Arati Prabhakar at the post-contest press conference. “We did it today and it was a very satisfying experience.”

Each team was equipped with a server containing 128 Intel Xeon processors running at 2.5GHz and boasting over a thousand processing cores, 16TB of RAM and a liquid cooling system that required 250 gallons of water per minute to cool the big iron. They were let loose on a custom-designed operating system and instructed to find flaws, patch them automatically, and provide proofs of concept for flaws in each other's systems.

At the same time, seven other similar systems were used by the judges to monitor the progress of the event as the systems ran 96 rounds of 270 seconds each, with 30-second breaks in between. At stake was US$3.75m in government greenbacks.

The competition, which has taken three years and $55m to set up, is designed to automate the whole process of bug hunting.

Mike Walker, the DARPA program manager overseeing the Cyber Grand Challenge, said that this was the first stage in a possibly decade-long process to automate security monitoring and make networks more resilient.

“We have redefined what is possible and we did it in the course of hours with autonomous systems that we challenged the world to build,” he said. “I want people to understand how difficult it is to build prototype revolutionary technology and field it in front of the eyes of the world. I have enormous respect for those folks.”

A DARPA representative told The Reg that at this stage the winning team, with 270,042 points, was the ForAllSecure team, founded by Carnegie Mellon University professor of electrical and computer engineering David Brumley. Results aren't final, but if confirmed his team will scoop the $2m top prize.

The ForAllSecure team’s success was all the more surprising because a key bug-finding component of its system crashed around halfway through the competition. The system repaired itself and got back up and running before the contest ended, and the team maintained a narrow lead through to the finish.

In second place, with 262,036 points, was the TechX team from GrammaTech and the University of Virginia, setting them up for a $1m payday. In third place was the Shellphish team, led by Professor Giovanni Vigna, director of the Center for CyberSecurity at the University of California, Santa Barbara; they are in line for $750,000.

Once the results have been confirmed the winning system will be pitted against human foes in a capture the flag competition. Walker said that he didn’t expect the automated system to come close to matching fleshy competitors in the contest, but the first five minutes of the competition would give a good example of how computers could leverage their faster processing speed against human inventiveness.

This is a long road we are going to travel, Walker stressed. The first United States Computer Chess Championship took place in 1970 and it wasn’t until 1996 that IBM’s Deep Blue system finally beat a human grandmaster at the game - and then only at speed chess. But the fuse has been lit, he said, and the clock is now ticking for professional bug hunters ... and perhaps for the automated systems that could one day put them out to grass. ®

Researchers at Rapid7 spotted bugs in Fisher-Price and hereO products that could expose data.

Researchers at Rapid7 discovered vulnerabilities in Fisher-Price's Smart Toy and hereO's GPS platforms that could allow an attacker to collect the personal information of a user.

The Smart Toy is a stuffed animal that connects to an online account via Wi-Fi to provide users with a customizable educational and entertainment experience.

The toy's platform contained an improper authentication handling vulnerability that could allow an unauthorized user to obtain a child's name, age, date of birth, gender, spoken language and more, according to a Feb. 2 security blog post.

Many of the platform's web service application program interface (API) calls didn't appropriately verify the “sender” of messages and could allow a would-be attacker to send requests that shouldn't be authorized under ideal operating conditions, according to the post.
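A hypothetical sketch of the missing check (this is not Fisher-Price’s actual API; the function, field names, and record layout are invented): before returning a child’s record, the server-side handler should verify that the authenticated requester actually owns it.

```python
def get_child_profile(requester_id: str, child_record: dict) -> dict:
    """Illustrative handler: return a child's profile only to its owner.

    The vulnerable API reportedly skipped verifying the "sender", so any
    authenticated party could request any record. The ownership check
    below is the kind of authorization step that closes that hole.
    """
    if child_record["parent_id"] != requester_id:
        raise PermissionError("requester does not own this record")
    return {"name": child_record["name"], "age": child_record["age"]}


record = {"parent_id": "parent-42", "name": "Alex", "age": 5}
print(get_child_profile("parent-42", record))  # owner: request succeeds
```

The key design point is that authorization is enforced per record on the server, so a forged or replayed request from another account fails regardless of what the client claims.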

In addition to compromising privacy, an attacker could use the bug to launch social engineering campaigns or to force the toy to perform actions that users didn't intend, the researchers wrote.

The platform of a GPS tracker that allows family members to share their location with each other was also vulnerable to outside manipulation.

The hereO GPS platform contained an authorization bypass vulnerability which could allow an attacker to access every family member's location, according to the post.

Once exploited, an attacker could discreetly add their account to any family's network and manipulate notifications through social engineering to avoid detection.

Researchers gave the example of an attacker adding themselves to a family's network under the name “This is only a test, please ignore” in an attempt to avoid raising suspicion.

Both vulnerabilities were reported to their respective vendors and have since been rectified. Rapid7 security research manager Tod Beardsley told SCMagazine.com in an email that these issues didn't require patches or firmware upgrades.

Beardsley said that both vendors acted “reasonably and responsibly” during the disclosure process. It's nearly impossible to ship products without some bugs when dealing with the internet of things (IoT) or software in general, he said.

“One, make sure that bugs are found in the design and development phases, and two, once vulnerabilities are identified after launch, [make sure] they are easily and quickly remediated without too much effort by the end users,” he said.

Other IoT toys have been found to pose risks to users as well.

Last year, researchers identified security concerns in Mattel's Hello Barbie that could allow an attacker to extract internal MAC addresses, Wi-Fi network names, account IDs, and MP3 files from the popular doll.

ToyTalk, the company that operates the doll's speech services, reportedly admitted the doll could be hacked but said the vulnerable information did not identify children, nor did it compromise any audio of a child speaking.