Returning to Heathrow the other day after the Net-ID conference, a small group of us spilled into the immigration hall at that ridiculous high-speed walk you only see in airports. I was the first person in the queue to see an immigration officer with my passport, and at exactly that moment another guy entered the glass cubicle with the iris-scanning equipment. My passport was taken off me, checked and handed back in about 5 seconds. The guy in the iris scanner took about 10 seconds. As he came through, I jovially said to him “I beat you”, to which he replied “only just”…

So I thought about this. You can register your iris, which of course means a certain loss of privacy. Once that data has been revealed past first disclosure, the only way to think of it is that it is permanently recorded and it will, over time, be used for more and more purposes. It may end up on the police and security services databases. Although policies and procedures currently stop this from happening, do I really trust that they’ll stay cast in stone *FOREVER* (that’s an important aspect to consider when you give up key data about yourself)? Not on your nelly. That data is going to gradually find its way all over government, and stands a pretty good chance of leaching out into the commercial sector; and if that happens then clearly organised crime also has a possible road to it in the future.

Then I thought about the convenience. For me it couldn’t have been simpler. For him, he had to operate the iris-scan machine etc. Of course, if I’d been sat at the back of the plane, by the time I entered the immigration hall the passport queue would have been longer and I suspect the iris queue much smaller. So I give up my privacy to get more choices at the airport. But let’s wind this story forward 5 years. By then the convenience of this method will mean many more people will have opted for it. And I get this sneaking suspicion the queues will be the same length. And as it clearly takes about the same time for an immigration official to check your passport as it does for a machine to scan your iris, the rate at which each queue is serviced will be similar. So the only advantage I get is when I’m, let’s say, 20 or 30 people deep in the queue and the iris-scan option is still not widely used. As soon as the queues are equal in length, I get no advantage from either system.

I don’t think I’ll be giving up the unique (apparently) details of my iris to the immigration service. I think I can live with a few queues until it all evens itself out in the future. Think about online check-in and so-called “Fast Bag Drop”. When it first came out, wasn’t it great? Print your boarding pass at home and go straight to the head of the queue at the fast-bag-drop point. But what’s it like now? Sometimes the fast bag drop has a longer queue than the standard check-in desks, and there are staff shepherding folks out of the fast bag drop into the standard check-in desks. If you happen to be in business class you can still use the business check-in desks for fast bag drop anyway – so the only advantage then becomes seat selection. Some of the cheaper economy seats don’t even give you the option of choosing your seat anyway. So these amazing conveniences gradually lose their convenience as more people take advantage of them and they start to become the norm. Online check-in doesn’t make us reveal personally identifying information (PII) about ourselves, but over time its advantages have been eroded. In my book, the cost of the temporary airport convenience of iris-scan immigration is not worth the invasion of my privacy. If I give up my iris, I have no idea what laws and legislation will apply to it in the future.

The big news in the UK this week around identity is that Norwich Union has received the biggest fine ever from the Financial Services Authority (FSA) for having shoddy practices in place to protect its customer data. Here it is, quoted from The Register:

http://www.theregister.co.uk/2007/12/17/norwich_union_life_fsa_fine/

The Financial Services Authority (FSA) has fined Norwich Union £1.26m for failing to safeguard customers against fraud.

The City regulator said it had slapped the firm’s UK life insurance biz, Norwich Union Life, with a record-breaking financial penalty because of a number of glaring system weaknesses which exposed confidential customer data to fraudsters. Security lapses in the firm’s caller identification procedures allowed fraudsters to impersonate customers by using information, including names, addresses, and telephone numbers, obtained from public sources such as Companies House.

The FSA said Norwich Union Life first learned that it was the victim of organised fraud in April last year. This led to 74 life policies being falsely surrendered with funds, said to be worth a total of around £3.4m, paid out to accounts controlled by the criminals. A further 558 policies were also put at risk where fraudulent attempts had been made.

Norwich Union Life had failed to assess the risks posed to its business by financial crime and also failed in its duty of care to its customers in a timely manner, said the FSA. The regulator added that by the end of July 2006, Norwich Union Life discovered that a number of current and former directors of the firm and its parent company Aviva had been hit by the fraud scam. It identified and quickly informed nine of its directors that their life policies had been targeted.

The FSA said Norwich Union Life prioritised protecting the risks posed to policyholders who were Aviva directors rather than responding to the security loophole in its caller identification system that exposed its seven million strong customer base to possible fraud.

A number of FSA recommendations were issued to the life insurance provider in May last year, including a suggestion that callers wanting access to their account must give their policy numbers over the phone. Norwich Union Life ignored that advice on the grounds that it would impact customer service before backing down in October 2006 when it finally implemented the changes.

FSA director of enforcement Margaret Cole said the fine sent out “a clear message” that the regulator takes information security seriously. “Norwich Union Life let down its customers by not taking reasonable steps to keep their personal and financial information safe and secure. It is vital that firms have robust systems and controls in place to make sure that customers’ details do not fall into the wrong hands. Firms must also frequently review their controls to tackle the growing threat of identity theft,” she said.

Norwich Union Life apologised to its customers for the monumental security cock-up and said it had taken appropriate steps to prevent such a problem arising in the future. “Whilst the number of customers affected is very small compared to the number of policies we manage overall, any breach in customer confidentiality is clearly unacceptable. Our customers can, however, be assured that we have taken this matter extremely seriously and have thoroughly reviewed our systems and controls as a result,” said Norwich Union Life CEO Mark Hodges.

The firm has until 31 December to pay the fine to the FSA in full. Norwich Union said it will compensate all the customers affected by the frauds. ®

——————————————-

It’s obvious that legislation is needed to counter these sorts of problems, and I think there is an acceptance among identity and security professionals that it’s now a matter of when, not if, it will make it on to the statute book.

The paradoxical thing about the legislation, though, is that the government itself, the very body that will introduce the legislation, will be exempt from it. The IPS (Identity and Passport Service) are saying the reason the procurement process for the National Identity Card is so “intense” is because the government won’t accept any liability for failures or information leaks. In other words, if they are careless with your data, and your life is screwed up as a result, that’s just bad luck. The government’s reasoning is that such liability could bankrupt the UK. However, all the private sector organisations involved will most likely be limited liability companies. I’m sure the government could include a liability clause capped at several billion pounds, reserved for people who are materially affected by a breach. That’s the sort of thing that might demonstrate to citizens that they are respected. It seems at the moment that if you are a citizen you don’t deserve respect, yet if you are a consumer you do.

Well, according to the film director Stanley Kramer – “It’s a mad, mad, mad, mad world”.

It’s amazing how people can be influenced by a convincing speaker who makes points with conviction, but who happens to be behind the curve in terms of technological developments. I talked to somebody recently who was of the opinion that technical attacks are how people have their identity stolen. The speaker who’d influenced her had said modern operating system security was to blame.

Had he been speaking 2 years ago, I think I’d have been on-side. But look at the situation today. Put yourself in the shoes of the “CEO” of an organised crime syndicate. Looking at the return-on-investment what would you do? The choices are to perform a technical attack (on the machine or its connection to say, a banking website) or a human attack (set up a phishing web site).

Technical attacks: you need to acquire the skills. You’re never going to break into a modern secure connection (say an SSL connection between your victim’s computer and their banking website). The cost of the computing power is too immense and it would be too slow to render anything useful. You now have to try and get some of your malicious software on to the victim’s machine. There are 2 ways: authenticated and unauthenticated. Things like worms exploit weaknesses in the operating system where certain facilities don’t require credentials. In other words, you don’t need a username/password to get your code to execute on somebody’s computer. Authenticated access – you do need a username and password to get it to execute. So you create malicious software and use some form of subterfuge to get the user to download it on to their machine and execute it. That way, they’ve already used their username and password to log on in the first place. Let’s call this scam lovebug.vbs and put it in an email titled “I Love You”, for example.

But let’s have a look at that. Is that a technical attack? No. Your computer will execute the instructions embedded into the software you run on it. The question is, how did the lovebug (or keylogger) get on to your machine in the first place? That was subterfuge, wasn’t it? That’s a human attack. It has nothing to do with operating system security. So we’re back to unauthenticated attacks again.

The thing that has got in the way of unauthenticated attacks in the past couple of years has been the personal firewall. In the case of Windows XP and Vista, it’s built-in software that doesn’t permit data to enter your machine unless you initiated its download. So, for example, requesting a web page would be permitted because you initiated the request, but if somebody tried to send some data to your machine it wouldn’t work, because you never initiated it in the first place.
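The rule is simple enough to sketch in a few lines. Here’s a toy model of that behaviour – the class name and structure are my own invention for illustration, not any real firewall’s API – showing inbound data only being accepted for flows the user initiated:

```python
# Toy model of a personal firewall's "only traffic you initiated" rule.
# Names and structure are illustrative, not any real firewall's API.

class PersonalFirewall:
    def __init__(self):
        # Remember the (remote_host, remote_port) flows the user opened.
        self.initiated = set()

    def outbound(self, host, port):
        """Record that the user initiated a connection (e.g. requested a web page)."""
        self.initiated.add((host, port))

    def inbound_allowed(self, host, port):
        """Inbound data is only allowed if it belongs to a flow we initiated."""
        return (host, port) in self.initiated

fw = PersonalFirewall()
fw.outbound("example.com", 80)                  # you asked for a web page
print(fw.inbound_allowed("example.com", 80))    # True - a reply to your request
print(fw.inbound_allowed("attacker.net", 445))  # False - unsolicited data
```

Real firewalls track a lot more (protocol state, timeouts, per-application rules), but the principle is exactly this: unsolicited inbound traffic is dropped.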

It’s not only XP and Vista: many Linux distributions and the Mac either ship with built-in versions, or computer owners are going to the computer store to buy personal firewall software.

Also, security patches are released so quickly when defects are discovered that these attacks never get the stranglehold they once did.

So the next thing might be to create a dangerous ActiveX control (that’s just a fancy way of saying software) that does something not in your best interests. You put it on a website and encourage your victims to download it. Perhaps the control searches every file on your hard disk looking for the string “password”. When it finds it, the file gets copied to the fraudster’s website, where they later examine it in detail hoping to extract a useful site/user-id/password combination. Hopefully, it’s your banking website.

But how do you encourage your victims to download the control? Especially these days where the default behaviour for most browsers is not to download controls. Well, I guess you put some instructions that say something like “You will receive a warning about this control. Ignore the warning and click OK”. If anybody falls for that, it’s a human attack not a technical attack. Again, they got some software on your machine and got you to execute it.

Microsoft has recently introduced features like User Account Control (UAC), where weird stuff happens when you do something that has all the hallmarks of a dangerous activity. A dialogue box pops up, the background goes dark and you can do nothing other than answer this dialogue. Even with all this weirdness happening, there are people who will ignore it and click OK to install the software. They have usually been foxed by a clever message describing exactly what will happen and encouraging them to click OK. It’s a human attack.

The vast majority of the attacks that are performed on computers take place in the last 2 feet of the connection from the web server to the human being. The 2 feet between the screen and the user.

The classic phishing attack does this. You receive an email that says it’s from your bank. You have apparently made a large withdrawal and they encourage you to “log in to your account here”. In a panic you go right ahead and click, to reveal what looks like your banking website. It asks for your account number and password, and so you dutifully type the secret you’d not even tell your best friend. Now the criminals have your account number and password, you are doomed. According to anti-phishing.org, 54 hours later the bogus website will have disappeared off the face of the earth – with your money.

But that’s not a technical attack. It’s a human attack. It used subterfuge to fool a human in to doing something that’s against their best interests. Why does it work? Because as humans we’ve become conditioned to 2 things. We type our passwords in to web pages. We expect every web page we ever see to ask for our username in a different way. It’s a tragic weakness of the web that it allows stunning creativity. Each site likes to show off its individuality.

Compare this with the way you log in to an operating system like Windows. The Windows weakness is that the login experience is different between each version of Windows. But if you are logging in to, say, Windows XP at your employer, it doesn’t matter if you log in to your own machine, your friend’s machine or a machine on the 5th floor owned by somebody else – you will be prompted for your secret credentials in exactly the same way. Every time. Absolute consistency. So much so that if the screen looked different, if the experience was different, if it simply wasn’t “right”, you’d be suspicious that something had gone wrong. This consistency doesn’t exist, and is certainly not expected, on the internet. It’s one of the reasons why a phishing site doesn’t actually have to be that faithful a reproduction of its genuine counterpart. In many cases, if the brand colour is approximately right and the correct logo appears on the site somewhere, that’s good enough. So we can say it’s the lack of consistency that is the biggest aid in fooling the human being.

I was a little taken aback when this lady suggested it is this very lack of consistency that is protecting us from these attacks. She argued that if all the sites use a different technology, it makes it harder to compromise the “entire system”. But of course, she’s talking about technical attacks, not human attacks. As I said earlier, not very many criminals perform technical attacks. They can’t recruit the mind-numbingly cerebral skills required, and they can’t acquire enough computer power. They write simple software that logs keystrokes or searches your hard drive for plain-text passwords, and they use human attacks to try to get you to execute it. I only talked to her for about 40 minutes, on the telephone, and she left the conversation convinced the problem is technology.
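Incidentally, the “approximately right is good enough” observation applies to the web address as well as the page. A small illustrative sketch (the domain names are invented, and real anti-phishing tooling is far more sophisticated) shows how close a typosquatted domain can sit to the genuine one, measured as plain edit distance:

```python
# A sketch of why lookalike domains fool people: a tiny edit-distance
# check against a known-good domain. "mybank.example" is a made-up name.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like(domain, trusted, max_dist=2):
    """Close to the trusted name, but not identical: suspicious."""
    d = edit_distance(domain, trusted)
    return 0 < d <= max_dist

print(looks_like("mybank.example", "mybank.example"))   # False - the real site
print(looks_like("my-bank.example", "mybank.example"))  # True  - one character off
```

One hyphen is all it takes, and a human who has never been trained to expect consistency simply won’t notice.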

I think if you look at Kim Cameron’s 7 laws of identity, and the resulting standards for the identity metasystem and the idea of using Information Cards, these technologies directly address these human attacks – 2 of the laws in particular: Consistent Experience Across Contexts and Human Integration. When the Information Card standards were developed, discussed and opened to debate, it’s amazing the range of organisations that contributed. There are the obvious candidates – IBM, Microsoft, Novell, Sun Microsystems, CA, RSA and so on. But also the less usual – Privacy International, the Enterprise Privacy Group and other privacy lobbying organisations. And then distinguished individuals like Lawrence Lessig.

Having just applied for a replacement passport, I was surprised at how easy it was. The only binding between my old passport and the application was a couple of identical passport photos. Clearly somebody in the IPS office looks at the photos I slipped in the application, looks at the photo on my old passport and, if it’s me – hey presto – I get a new passport, no questions asked.

But I obviously didn’t look identical. I’m 10 years older for a start. This got me thinking. If I had maybe 2 years, I could fox the passport service into giving a legitimate passport to somebody else. Here’s my scam.

I meet my client. I take a photograph of him and one of me. I then use morphing software to come up with say 6 intermediate images. I apply for 48 page passports because I do a lot of travel…

It is therefore not unusual for the IPS to get maybe 6 passport applications from me in 2 years. I fake the stamps in the passport so the pages are full for each application. In the first application I use the first morphed photograph. It’s a close enough match to my existing passport photo for them to issue a new one with no suspicions. I then do the same thing with all the intermediate morphed photos until, in the last passport application 2 years later, I send the photograph of my client.

My client ends up being biometrically bound to my data. I feel this might be possible with any biometric data and the necessary morphing software, in any situation where the applicants themselves are the collectors of the data. This would surely easily work with fingerprints, iris scans etc, which are all merely images.
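The core trick is nothing more exotic than interpolation. Here is a minimal sketch of the idea – a straight pixel cross-fade producing N intermediate frames between two photos. Real face-morphing software also warps the geometry of facial features, and the tiny “images” below are invented greyscale pixel lists:

```python
# Minimal sketch of image "morphing" as linear interpolation.
# Real morphing software also warps geometry; this only blends pixels.

def blend(img_a, img_b, t):
    """Pixel-wise linear interpolation: (1 - t)*A + t*B, for 0 <= t <= 1."""
    return [round((1 - t) * a + t * b) for a, b in zip(img_a, img_b)]

def morph_sequence(img_a, img_b, steps):
    """Intermediate frames at evenly spaced blend factors, excluding endpoints."""
    return [blend(img_a, img_b, (i + 1) / (steps + 1)) for i in range(steps)]

# Two tiny flat "images": my photo and the client's.
me, client = [0, 0, 0, 0], [100, 100, 100, 100]
for frame in morph_sequence(me, client, 6):
    print(frame)  # each frame drifts a little further from "me"
```

Each of the 6 frames moves only a seventh of the way towards the target, which is exactly why any single step looks like a plausible decade of ageing.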

There are enough criminal cases in which fingerprint data is successfully challenged, because all that is looked for is points of similarity. The more points of similarity, the higher the confidence that the prints found at the scene of the crime are those of the suspect. Morphing software could easily circumvent this by maintaining enough points of similarity at each intermediate stage while still making other changes.
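A toy illustration of that points-of-similarity matching – the coordinates, tolerance and threshold are all invented for the example; real minutiae matching also compares ridge direction and type:

```python
# Toy "points of similarity" check: count how many minutiae points in
# one print have a close neighbour in the other. All values invented.

def shared_points(print_a, print_b, tol=1.0):
    """Count points in print_a with a neighbour in print_b within `tol`."""
    count = 0
    for (xa, ya) in print_a:
        if any(abs(xa - xb) <= tol and abs(ya - yb) <= tol
               for (xb, yb) in print_b):
            count += 1
    return count

scene = [(1, 1), (4, 2), (7, 5), (9, 9)]
suspect = [(1.2, 0.9), (4.1, 2.3), (7.5, 5.4), (20, 20)]
matches = shared_points(scene, suspect)
print(matches, "of", len(scene), "points match")  # 3 of 4 points match
```

A morphing tool only needs to keep enough of these anchor points fixed at each stage for the score to stay above whatever threshold the examiner uses, while freely changing everything else.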

The UK’s Criminal Records Bureau (CRB) and the Identity and Passport Service (IPS) have completed a trial, with 96% of passport holders and 87% of ID card holders (not real ones, trial ones – the first cards won’t be issued until 2009) marking it as a great success. You can read about it here: http://www.silicon.com/publicsector/0,3800010403,39168609,00.htm?r=2

I have personal experience of the old system. I volunteered as a helper at my son’s scout troop (well, beaver cubs actually) and had to undergo a CRB check before I’d be permitted to work with children. I had to fill out a form with biographical information to identify me, and put my passport number on the form. I then had to take the form along to the lady who runs the troop. It was her job to check that the photograph on the form matched my face, and then, most importantly, to check that the passport number on the form matched the one in my passport. A 2-way binding, if you will: me-to-passport via photograph, and passport-to-form via passport number. In fact, she got distracted as she flicked through it by the number of country entry stamps and visas in it. If you live here and only ever do European travel, you are unlikely to ever get a stamp in your passport, so for an unseasoned traveller to see a passport full of them was unusual. Long-haul travel invariably results in a stamped passport; however, she showed her naivety in this area by saying “they never stamp mine – they always just wave me through”. It distracted her enough that I don’t believe she did a proper check that the number on my passport matched the number on the form. A human failure. But also, her relative lack of sophistication with the whole passport scenario makes me ask myself: how bad a fake could I have passed off to her, and still ended up with a clean CRB record? She clearly had so little experience in handling passports that, at this step at least, the system is riddled with holes.

For the record, I do not have an entry on the CRB. But if I did, I could have easily circumvented the current system.

The trial involved a mixture of online and physical presence. I suspect more than anything, it is the mixture of technology (in the online environment), convenience (you can forget your identity documents, but still proceed, at home) and physical presence (you have to visit a registration agent) that makes this system a success.

I think it can be improved by looking in particular at 2 of Kim Cameron’s laws of identity: Human Integration and Consistent Experience Across Contexts. So the user uses an online system that includes the human as a component of the system (an Identity Selector), and that also provides a consistent login experience, no matter if they are logging in to the CRB system, the IPS service or their favourite music download site. If the experience is ALWAYS the same when they log in, any anomaly can immediately be treated as suspicious. This would certainly help in cases like these, where the consequences of getting it wrong are so dramatic.

As governments around the world rush to create national identity schemes and to roll out national identity cards I wondered about the idea of “one card in your wallet”.

I recently went on a cub-scout camping trip with my 9-year-old son. After the first day’s watersports activities, when the children were all “asleep in their tents” – yeah, right – and us Dads were enjoying our “Cheese and Wine Evening” (I do enjoy cub-scout euphemisms), the topic of the national identity card came up. I thought I’d conduct a small poll of the assembled fathers – guys from all different backgrounds. A few techies like me, salespeople, insurance, air-conditioning, heavy engineering, builders, a food-designer (well, chemist really), project managers; fat, thin, small, tall, old, young, grey-on-top, no-hair-on-top – a whole cross-section of life. I asked how many of them would be happy to keep their wallet full of cards as it stands at the moment, or to have it replaced by a single national identity card which would have all the necessary applications and data loaded on to it to perform the function of every card in their wallet.

Not quite unanimous, but almost total agreement with the idea that one card is better than many. But as the conversation went on, many decided to change their minds. There then followed a lot of “it depends” and “who is in charge of the system?” and quite a lot of “what if I lost the card?”. A healthy debate on the trustworthiness of the government was the inevitable outcome. We split into different camps (and as we were camping, that was right and proper). First, the “what about the audit trail?” camp: Dads worried about inferences the security services could make from the audit trail we leave behind us as we use the card for different reasons. Concerns that under current legislation they have the right to lock you away forever without trial if you are suspected. Could an innocent audit trail lead to unfounded suspicion? Unanimous agreement that it was unlikely – but what if it did happen, to you? There was definite unease in this camp about the idea.

The other camp was the “if I’ve got nothing to hide, I don’t need to worry” camp. They had faith that nothing like that could ever go wrong, and that even if it did, a few innocent people sacrificed for the greater good of the many is a small price to pay for being able to monitor people. It increases our safety.

The final camp was the “how much of my other data am I revealing to unauthorised parties when I put the card in a card reader?” camp. They didn’t like the idea that a hacker might modify a terminal, so that you’d think you were in the off-licence buying a bottle of wine when in fact the reader was extracting your life from the card – and by typing in your PIN, you’d have authorised the machine to get at your data.

There were about 30 Dads there, with approximately 10 people in each camp (some people were in 2 camps at the same time). But overall, the idea of a single card to do everything in your life was not thought to be a good thing. It seemed to give control to the card issuer (the government), and they were tired of the amount of control and surveillance the government is gradually gaining over our lives.

That got us on to the topic of speed cameras and CCTV cameras. An interesting observation came from Dads who had had a credit card stolen and used (in the days before PINs, I guess). They could tie up the time of the purchase with the location. Why were the police not interested in using in-store CCTV footage to catch the “real criminals”, but oh so very interested in the footage of you breaking the 60 limit on the M25? A general sense that the more “criminal” you are, the less interested the police are in catching you…

We then planned a fantastic £multi-million bank robbery, but we were too drunk to pull it off by then. And despite the fact that the police would never be interested in investigating it because it’s not a motoring offence, it’s a criminal offence – most of the Dads declined to “do the job” because it’s illegal…

I was having a discussion with somebody a few weeks ago about the inconvenience of logging on to systems multiple times. I’m sure it’s a conversation every identity guy has had many times. To my surprise, this one person made the point that he didn’t find it particularly burdensome.

It turned out that his company had put password sync in place for all the applications he used. They’d also gone to the enormous trouble of ensuring that as many of the systems as possible used the same username.

This isn’t single-sign-on. He still had multiple sign-ons to perform. However, whenever he sees a username/password dialogue, he has no doubt what he should type in to it. It takes seconds and is no big deal.
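The distinction is worth pinning down, because the two are often conflated. A toy sketch of password synchronisation (the system names are made up): one credential is pushed out to every connected system, but each system still runs its own, separate sign-on.

```python
# Toy sketch of password sync, as distinct from SSO: one credential is
# propagated everywhere, yet every system still prompts separately.

class SyncedSystems:
    def __init__(self, systems):
        # Each system keeps its own copy of the (synced) credential.
        self.passwords = {s: None for s in systems}

    def sync(self, username, password):
        """Push the same username/password out to every connected system."""
        for s in self.passwords:
            self.passwords[s] = (username, password)

    def login(self, system, username, password):
        """Each system still performs its own independent sign-on check."""
        return self.passwords[system] == (username, password)

apps = SyncedSystems(["email", "crm", "payroll"])
apps.sync("jdoe", "s3cret")
print(apps.login("payroll", "jdoe", "s3cret"))  # True - same secret everywhere
```

With SSO, the `login` step would happen once and be reused; with sync, it happens per system – which is precisely the repeated little prompt my colleague valued.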

I asked him how he’d feel if he didn’t have to log in to all these systems at all – if he just had to enter one username and password for everything. Now, in the scheme of things, both approaches – password/username sync and SSO – give the attacker the keys to the kingdom once they find the username and password (and that’s a topic for a different blog entry). So the security risk is very similar (and I’m sure many will find subtle and esoteric arguments against that notion). However, it was his personal attitude that struck me: “Each time I have to log in to a system, it’s a little reminder to me that I’m dealing with something important. If you took that away, I think I’d take my personal responsibility to protect the information I work with less seriously – even if it’s always the same username and password”.

He clearly thought the absence of SSO promoted a clearer sense of personal responsibility for him. He liked the fact that it was easy to remember the username and password, but also liked the gentle reminder “you have to log in because you’re doing something important with this data”. SSO would erode this feeling for him over time. And for what particular level of inconvenience?

It’s something I’d love to survey more users about – those who have implemented consistent sign-on. How inconvenient is it, really, to enter a consistent username/password?

From an architectural elegance point of view, it’s very ugly. It doesn’t sit neatly with us to have something so loosely coupled. But do those extra few moments every day really make any difference to an organisation’s overall efficiency? And in fact, does that inconvenience have the effect of increasing the organisation’s data security, because its staff take the data more seriously?

I doubt I’ll ever get the opportunity to study this any deeper – unless a reader wants to fund some research?