Some element of the safety case given to the FAA must rest on the fact that there are no inputs into the passenger entertainment system, i.e. there are no network ports in the cabin.

Some airlines, such as Delta and Lufthansa, are moving to implement WiFi on aircraft.

Over the 30-year lifespan of an aircraft, the cabin will be upgraded, the entertainment system changed and services added.

Thirty years is a long time to rely on an IT system. There aren’t many operational systems now running that were implemented in 1981, and those that are still running are seen as very vulnerable to attack and are treated very carefully. This is because the types of attack have evolved massively over that time; even systems implemented just months ago are vulnerable.

My question is: how will these security systems be maintained? What if a vulnerability is found in the firewall(s) themselves? How will the safety case change if the parameters of the entertainment system change? Does the FAA have any recommendations on the logical segregation of traffic if data from, for instance, WiFi hotspots or GSM/3G pico-cells installed in cabins needs to run over the same cabling infrastructure?

Again, maybe I have the wrong end of the stick, but I am concerned that, seemingly, no one is really looking into the implications of this. Given my own experience, unless these systems are implemented by people with a very deep understanding of process control security, it may not have been thought about.


Way back in 2008 there were a number of stories floating around that the new Boeing 787, the first production airframe of which was delivered this week, had a serious security weakness. It turns out that Boeing, in their infinite wisdom, had decided not to segregate the flight control systems from the seat-back entertainment systems and would, instead, firewall them from each other.

The airplane’s control, navigation, and communication systems are networked with the passenger cabin’s in-flight internet systems. In January 2008, Boeing responded to reports about FAA concerns regarding the protection of the 787’s computer networks from possible intentional or unintentional passenger access by stating that various hardware and software solutions are employed to protect the airplane systems. These included air gaps for the physical separation of the networks, and firewalls for their software separation. These measures prevent data transfer from the passenger internet system to the maintenance or navigation systems.

The reference to firewalls and air gaps leads me to suspect that these systems are not fully segregated. If this is the case, I really hope that they’ve had some seriously good information security advice. Process control systems, and this is a process control system of sorts, aren’t always as well implemented as they could be. Where there is a safety-critical element, air gaps or data diodes are the only ways to go.
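The appeal of a data diode over a firewall can be sketched in a few lines: it is a one-way forwarding rule with no rule set to misconfigure and no software vulnerability to patch. The network names and payloads below are purely illustrative, not Boeing’s actual architecture:

```python
# Hedged sketch of the "data diode" idea: traffic may flow from the
# avionics side to the cabin (e.g. position data for the moving map),
# but any packet originating in the cabin is dropped unconditionally.
# Names are illustrative assumptions, not a real aircraft design.

ALLOWED_DIRECTION = ("avionics", "cabin")   # one direction only

def diode(src_network, dst_network, payload):
    """Forward payload only in the permitted direction; else drop it."""
    if (src_network, dst_network) == ALLOWED_DIRECTION:
        return payload          # forwarded
    return None                 # dropped, with no exceptions

print(diode("avionics", "cabin", b"lat=51.5,lon=-0.1"))  # forwarded
print(diode("cabin", "avionics", b"exploit attempt"))    # None: dropped
```

The point of the sketch is that the "policy" is a physical property of the link, not a configurable filter, which is why it suits safety-critical segregation.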

Designing out the vulnerabilities has to be better than retrofitting security afterwards.

I’d welcome comments from anyone, especially those who know more about the actual implementation.


The security systems at airports are an interesting example of security “theatre”, where much of what goes on is about reassurance rather than being particularly effective. I’ve blogged before about this and had some interesting responses, especially around the intrusiveness of current processes versus their effectiveness and where vulnerabilities lie. For obvious reasons, I won’t go into this.

However, the TSA in the United States is rolling out a new version of their full-body scanner, apparently in response to the criticism that the old versions were a step too far: the TSA initially denied, for example, that pictures of people’s naked bodies could be stored, until several incidents emerged of security staff doing exactly that. Apparently this will be available as a software upgrade. The question is, will the UK do the same?

The new scanner overlays identified potential threats from scans onto a generic diagram representing the human form, thus masking who the subject is. This has to be a good thing but, as I said in my earlier post, a reliance on technology rather than intelligence-led investigations will always leave vulnerabilities while inconveniencing the majority of people.

I’d rather the people who would do me harm never made it to the airport.


A very particular problem that we face is customised malware, aka targeted Trojans. These malicious programs are written specifically to avoid detection by our current anti-virus systems and are sent to carefully selected people within the institution. The purpose of these programs can often only be inferred from who the recipients are.

LSE uses MessageLabs to protect our inbound email, primarily to reduce the flood of spam to as small a trickle as possible. One of the systems that MessageLabs uses is something called Skeptic, which tries to identify previously unseen malicious software and block it.

We think that this has been quite successful, although it is impossible to know how many attacks have managed to get through. Using the information we get from this system, we can discuss the implications of being on the attackers’ target list with the people concerned.

The uncomfortable facts are that:

LSE is a major target

academia is being systematically attacked by a number of groups

the threat is growing all of the time

There is no foolproof way of blocking every attack, but the intelligence gained from knowing the areas of interest of the attackers allows us to focus our efforts on the people at highest risk.

If you want more information on this or are at LSE and want specific advice, please contact me.

UPDATE: Martin Lee and I are proposing doing a talk about this at the RSA Conference 2012 in San Francisco. See the teaser trailer here.


This week, LSE received a couple of calls from “Microsoft”, claiming that they had detected a virus on the user’s PC and asking whether they could install an update. Luckily, the person they called is in our support team and she managed to string them along for a bit. We have managed to get the originating telephone number, apparently a Croatian one, and have passed it on to the police.

It’s worth following up on these calls, which are blatant social engineering attempts, and informing staff. We have had reports that Skype users are also being targeted.

So far this year, hundreds of millions of users of online services have had their accounts compromised or the sites they use taken down: Sony, Nintendo, the US Senate, SOCA, Gmail, the CIA, the FBI and the US version of X-Factor. Self-inflicted breaches have occurred at Google, DropBox and Facebook. Hackers have formed semi-organised super-groups, such as LulzSec and Anonymous. Are we at the point where information security professionals are starting to say, “I told you so”?

The telling thing about nearly all of these breaches is how simple it would have been to limit the impact: passwords stored in the clear, known vulnerabilities left unpatched, corporate secrecy getting in the way of a good PR message, and variable controls across sites of the same brand.
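To make the first of those failings concrete, here is a minimal sketch of what “not storing passwords in the clear” looks like: a salted, deliberately slow hash from the Python standard library, so that a stolen database does not hand attackers every account at once. The function names and iteration count are illustrative choices, not a prescription:

```python
# Minimal sketch: salted password hashing with PBKDF2 (Python stdlib).
# A database leak then exposes only salts and digests, which must be
# brute-forced per password rather than read off directly.
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # A fresh random salt per user defeats precomputed (rainbow) tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, expected, iterations=200_000):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse")
print(verify("correct horse", salt, stored))   # True
print(verify("wrong guess", salt, stored))     # False
```

Real deployments would use a dedicated scheme such as bcrypt, scrypt or Argon2, but even this stdlib-only version is a world away from plaintext storage.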

The media’s response is often “hire the hackers!”, an idea that is fundamentally flawed. Would you hire a bank robber to develop the security for a bank? No. The fact is that there are tens of thousands of information security professionals, many of whom are working in the organisations recently attacked, who know very well what needs to be done to fix many of the problems being exploited.

Many corporations have decided to prioritise functionality over security to the extent that even basic security fundamentals get lost. Every organisation needs to reassess its priorities, because LulzSec and Anonymous will soon realise that there are juicier and easier pickings away from the large corporates and government sites that have had the foresight to invest in information security controls.

While I am not a lawyer and others have said this before, notably Rob Carolina in his talk “The Cyberspace Frontier has Closed”, I thought it worth reviewing some recent developments that demonstrate the fact that the Internet is not lawless and behaviour online may well result in liabilities “in the real world”.

There still seems to be this perception that laws don’t apply to online activity. Take Joanne Fraill, the juror who was jailed for eight months for contempt of court after contacting one of the defendants in the trial she was serving on. She had received clear guidance from the judge on the case, as had all of the other jurors, not to research the case online and definitely not to contact anyone related to the trial. I had exactly the same advice when I was a juror at the Old Bailey a couple of years ago.

And, yet, she still did it, no doubt believing that:

it wasn’t so bad; and

she wouldn’t get caught anyway.

She was wrong. The trial collapsed.

This sort of thinking is rife online, exacerbated by the fact that any search will return results confirming every point of view on every subject, and so is not really much help.

Other areas of the Internet where people should consider the consequences include:

Copyright infringements

Data protection issues

Harassment

Money laundering

Tax evasion

Libel

Some of these apply to corporate organisations in a different way than to individuals. For example, a data protection breach has the potential to seriously damage an organisation’s reputation. Libel may get you a hefty fine.

Just because people have a romantic notion of the Internet where normal laws don’t apply, doesn’t make it reality.


I went to a presentation by Robert Thibadeau on Thursday last week; he was talking at an ISSA UK Chapter meeting about Advanced Persistent Threats (APTs), specifically where an attacker is able to modify some part of the pre-boot code before an operating system is loaded. The thrust of the discussion was about encrypted hard drives being part of the armoury against these types of attack, along with Trusted Platform Modules (TPMs).

As we all know, the standard practice of secure erasure for hard disks is to overwrite every sector seven times.

And then there was this nugget of information that I found highly interesting: this won’t work on Solid State Drives (SSDs). The architecture of these drives is determined by the underlying memory technology. Each “sector” on an SSD can only be written to about 1,000 times. In order to deliver a decent lifespan on the more expensive drives, therefore, the drive actually contains significantly more storage than stated on the packaging, with all writes going through a wear-levelling layer that distributes them across the drive.

This means that it is very difficult to use a process involving overwriting data, as each logical sector may map to a completely different physical location each time you try to overwrite it.

Robert’s proposed solution to this is to encrypt all data on SSDs, regardless of whether they’re in mobile devices or not. This way, the data can be rendered unreadable simply by erasing the encryption key.
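This “cryptographic erasure” idea can be demonstrated end to end in a few lines. The toy keystream below (SHA-256 in counter mode, standard library only) stands in for the AES engine a real self-encrypting drive would use; it is a sketch of the principle, not of any product:

```python
# Sketch of cryptographic erasure: data is only ever stored in
# encrypted form, so destroying the (tiny) key is equivalent to
# destroying the (huge) data. Toy cipher for illustration only.
import hashlib
from secrets import token_bytes

def keystream(key, length):
    # SHA-256 in counter mode as a stand-in for a real stream cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data, stream):
    return bytes(a ^ b for a, b in zip(data, stream))

key = token_bytes(32)                      # held in the drive's controller
plaintext = b"confidential record"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# Normal operation: the key recovers the data.
assert xor(ciphertext, keystream(key, len(ciphertext))) == plaintext

# Crypto-erase: overwrite the 32-byte key instead of the whole drive.
key = token_bytes(32)
garbled = xor(ciphertext, keystream(key, len(ciphertext)))
print(garbled != plaintext)   # the data is now unrecoverable
```

Erasing 32 bytes is instant and works regardless of where wear levelling has scattered the ciphertext, which is exactly why it suits SSDs.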


News reaches us of the latest, unannounced Facebook feature: facial recognition. What this means is that Facebook will trawl through all the photos on the site, automatically “tagging” you in pictures that the system thinks you’re in.

Great time saver, you might think, but there are several things to think about:

It was enabled quietly, without user consent, and requires users to actively disable the feature

No technology of this sort is 100% accurate, so if you don’t disable it, you may find yourself tagged in embarrassing pictures that aren’t of you

This is an indication of the power of data mining. What’s to stop Facebook mining Google or Bing, looking for pictures on other sites?


Stephan Freeman is the Information Security Manager at LSE and enjoys discussing information security-related topics.
The aim of this blog is to raise awareness and prompt discussions about information security. Comments are always welcome.