Welcome to NBlog, the NoticeBored blog

Nov 29, 2006

Thanks to a tip-off from Gideon Rasmussen on the insider threat email reflector, I've come across a series of information security podcasts by CERT, aimed at 'business leaders'. The podcast on security Return On Investment (ROI) contains an interesting comment relating to research by "a couple of economists at the University of Maryland named Lawrence Gordon and Martin Loeb", who are said to have determined that a security control investment should only go ahead if the cost is no more than 37% of the expected return. I find this a very curious statement: from a purely economic point of view, almost any net positive return is financially worthwhile provided that (a) there is sufficient funding available for the investment (i.e. it is not outranked by other, higher-return investments) and (b) the projected costs and returns are realistic ... which is perhaps the issue here. Security projects in the main create returns by reducing risks and hence reducing projected future losses compared to the do-nothing option. The economists seem to be saying that security and risk professionals are seriously overestimating the projected savings. They may have a point.

More security awareness and risk management resources
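To make the arithmetic concrete, here's a minimal Python sketch of the decision rule as the podcast reports it. The 37% threshold is suspiciously close to 1/e (roughly 0.368), the bound that appears in Gordon and Loeb's published model, though I'm inferring that connection; the figures in the example are purely illustrative assumptions of mine:

```python
def worth_investing(control_cost, expected_return, threshold=0.37):
    """Rule-of-thumb attributed to Gordon and Loeb in the podcast:
    approve a security control only if its cost is no more than 37%
    of the expected return (i.e. the projected loss reduction).
    Note: 0.37 is close to 1/e (~0.368), the cap in the published
    Gordon-Loeb model - possibly the source of the figure."""
    return control_cost <= threshold * expected_return

# Hypothetical figures for illustration only:
print(worth_investing(30_000, 100_000))  # True  - cost is 30% of the return
print(worth_investing(50_000, 100_000))  # False - cost is 50% of the return
```

On a plain net-present-value view, of course, the second investment still returns twice its cost, which is exactly why the 37% cut-off strikes me as curious unless the projected returns are assumed to be inflated.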

Nov 28, 2006

In Japan, "More than 71 percent of people worry their personal information will be leaked as a result of inadequate security measures, according to a recent government survey." The article summarizes an opinion survey regarding awareness of and support for Japan's data protection laws introduced last year. Judging by the large number of Japanese companies already certified against ISO 27001, Japan is taking information security very seriously but the Japanese populace is not yet comfortable.

More links on ISO 27001 and data protection

Nov 25, 2006

This Way Up on National Radio in New Zealand interviewed Mike Berry, a famous scambaiter, about his activities. Mike clearly has a lot of fun baiting the 419 scammers through his 419eater.com website, even getting one to send impressive wooden sculptures of Creature Comforts characters and a Commodore 64 computer ... but there's a serious undercurrent to the story. Estimates vary, but thousands of dollars are thought to be lost to 419ers every day. Thousands of New Zealanders and millions of Americans fall prey every year, getting drawn in like obsessive gambling addicts convinced that the next payment will secure the promised windfall. Mike has received death threats. Later in the podcast, Liz McPherson of the NZ Ministry of Consumer Affairs warns the public about falling for the scams and promotes the NZ Ministry of Economic Development's consumer affairs scamwatch website.

Nov 21, 2006

The latest SANS Top 20 hotlist of information security vulnerabilities at last includes "humans" on the list of horrors alongside the usual range of Windows, UNIX and other technical security weaknesses. SANS specifically identifies the vulnerability to 'spear phishing' (i.e. highly targeted phishing/spoof email attacks), which is of course just one of a very large class of potential vulnerabilities. According to a recent article in Infoworld, SANS' Allan Paller feels that, in the face of ever increasing security threats (agreed), technical information security is improving (possibly true) whilst human beings remain as weak as ever (hopefully not for NoticeBored customers!). Some of us have been saying that for years, and rather than simply 'blaming' users for being naive, a few of us are even doing something about it ...

More security awareness resources

An audit checklist from the IT Compliance Institute (ITCi) explains what auditors would typically want to know about enterprise risk management practices. The checklist, written by the infamous Dan Swanson, offers practical advice to auditees as well as auditors. The ITCi "strives to be a global authority on the role of technology in business governance and regulatory compliance. Through comprehensive education, research, and analysis related to emerging government statutes and affected business and technology practices, we help organizations overcome the challenges posed by today’s regulatory environment and find new ways to turn compliance efforts into capital opportunities."

More risk management and IT audit resources

Nov 20, 2006

Alex Hutton runs a weblog for IT risk geeks focusing on the FAIR (Factor Analysis of Information Risk) risk analysis method that his employer Risk Management Insight LLC (RMI) promotes. In his blog, Alex takes me to task for my previous blog entry about the FFIEC, FAIR enough, and complains that the NoticeBored blog only accepts comments from authenticated users. Well OK, I've relaxed the restriction to encourage more feedback, although I'll be moderating comments - not to censor what people say but merely to block spam. [Alex: why didn't you email me? Gary@isect.com]

Meanwhile, I took another look at FAIR. What follows is a rather harsh and cynical critique of the FAIR method as described in the undated draft FAIR white paper, partly because the paper's author, Jack Jones, invites comment towards the end of the document: "It isn’t surprising that some people react negatively, because FAIR represents a disruptive influence within our profession. My only request of those who offer criticism is that they also offer rational reasons and alternatives. In fact, I encourage hard questions and constructive criticism". So here goes.

Right away, I was intrigued by a statement at the front of the paper regarding it being a "patent pending" method that commercial users are expected to license. Unless I'm mistaken (which is entirely possible!), "patent pending" means "not patented", i.e. it is not currently protected by patent law, or else it would presumably be labelled "patented" and give a patent number. Judging by the content of the introductory paper, FAIR appears to be a conventional albeit structured risk analysis method, so I'm not clear what aspect of it would be patentable in any event. [My snake oil-o-meter starts quivering around the 5% mark at this point.]

"Be forewarned that some of the explanations and approaches within the FAIR framework will challenge long-held beliefs and practices within our profession. I know this because at various times during my research I’ve been forced to confront and reconcile differences between what I’ve believed and practiced for years, and answers that were resulting from research. Bottom line – FAIR represents a paradigm shift, and paradigm shifts are never easy." [Not only is it claimed to be patentable, but it's a paradigm shift no less! The snake oil-o-meter heads towards 10%.]

The paper defines risk as "The probable frequency and probable magnitude of future loss". [Strictly speaking, risk includes an upside too, namely the potential for future gain, which FAIR evidently ignores.] FAIR considers six specific forms of loss: productivity, response, replacement, fines/judgments, competitive advantage and reputation. [Management's loss of confidence in any system of controls that fails is evidently not considered in FAIR - in other words, there is an interaction between management's risk appetite and their confidence in the control systems they manage, based on experience and, most importantly, perception or trust. These apparent omissions hint at what could be a much more significant problem with the method: there is no clear scientific/academic basis for the model underpinning it, meaning that there are probably other factors that are not accounted for. Obtuse references to complexity, and to this being an "introduction" to details that are to be described in training courses etc., further imply incompleteness in the model and hence limited credibility for the method. Snake oil-o-meter jumps to half way.]

"A word of caution: Although the risk associated with any single exposure may be relatively low, that same exposure existing in many instances across an organization may represent a higher aggregate risk. Under some conditions, the aggregate risk may increase geometrically as opposed to linearly. Furthermore, low risk issues, of the wrong types and in the wrong combinations, may create an environment where a single event can cascade into a catastrophic outcome – an avalanche effect. It’s important to keep these considerations in mind when evaluating risk and communicating the outcome to decision-makers." [I wonder whether FAIR adequately describes risk aggregation, the possible geometric increase and the "avalanche effect" noted here, in scientific terms? The author accepts the need to take such things into account, but does FAIR actually do so? Snake oil-o-meter swings wildly around the half way point.]

[FAIR looks to me like a practitioners' reductionist model, something they have thought about and documented on the basis of their experience in the field as a way to describe the things they feel are important. FAIR might *help* an experienced information security professional assess risks, but I'm not even entirely sure of that: the method looks complex and hence tedious (= costly) to perform properly. I wonder whether, perhaps, the FAIR method should be applied by a consultant such as someone from, say, RMI? Snake oil-o-meter settles around two-thirds full scale.]

To give him his due, the author acknowledges some potential criticisms of FAIR, namely: the "absence of hard data" and "lack of precision" (which are simply discounted as inherent limitations, as if that settles the matter); the amount of hard work involved in such a complicated method (which is tacitly accepted with "it gets easier with practice"); and "taking the mystery out of the profession" and resistance to "profound" change (I know plenty of information security and IT managers who would dearly like to find an open, sound, workable and reliable method). [FAIR does not appear to be a profound change so much as an incomplete extension of conventional risk analysis methods. Snake oil-o-meter creeps up again.]

The appendix outlines a "basic risk assessment" in four stages: (1) "Identify scenario components" ("identify the asset at risk" and "identify the threat community under consideration"); (2) "Evaluate Loss Event Frequency (LEF)" ("estimate the probable Threat Event Frequency (TEF)", "estimate the Threat Capability (TCap)", "estimate Control strength (CS)", "derive Vulnerability (Vuln)" and "derive Loss Event Frequency (LEF)"); (3) "Evaluate Probable Loss Magnitude (PLM)" ("estimate worst-case loss" and "estimate probable loss"); and finally (4) "Derive and articulate Risk". Many of these parts clearly involve estimation, implying subjectivity (e.g. "estimate probable loss" could range from zero to total global destruction in certain scenarios: the appendix does not say how we are meant to place a mark on the scale). The details of the method are not fully described. For example, the TEF figure seems to be obtained by assigning the situation to one of a listed set of categories: "Very High (VH): > 100 times per year"; "High (H): Between 10 and 100 times per year"; "Moderate (M): Between 1 and 10 times per year"; "Low (L): Between .1 and 1 times per year"; or "Very Low (VL): < .1 times per year (less than once every ten years)". Similarly, the category boundaries for worst-case loss vary exponentially for no obvious reason ($10,000,000; $1,000,000; $100,000; $10,000; $1,000; $0). [The use of two-dimensional matrices to determine categories of 'output' value based on simple combinations of two 'input' factors is reminiscent of the two-by-two grids favored by new MBAs and management consultants everywhere. The rational basis for using these nonlinear scales, and the consequent effects on the derived risk probability, are not stated; nor is the method for determining the appropriate category under any given circumstances (how do we know, for sure, which is the correct value?). This issue strikes at the very core of the "scientific" (theoretically sound, objective, methodical and repeatable) determination of risk. The snake oil-o-meter rises rapidly towards three-quarters.]

The appendix casually includes a rather worrying statement: "This document does not include guidance in how to perform broad-spectrum (i.e., multi-threat community) analyses." [Practically all real-world applications for risk analysis methods necessarily involve complex real-life situations with multiple threats, vulnerabilities and impacts. It is not clear whether the full FAIR method can cope with anything beyond the very simplest of cause-effect scenarios. Snake oil-o-meter creeps up to 80%.]

To close this critique, I'll return to a comment at the start of the FAIR paper that information risk is just another form of risk, like investment risk, market risk, credit risk or "any of the other commonly referenced risk domains". [The author fails to state, though, that risk is not managed 'scientifically' in these domains either. Stockbrokers, traders, insurance actuaries and indeed managers as a breed use, but cannot be entirely replaced by, scientific methods and models. Their salaries pay for their ability to make sound decisions based on expertise, meaning experience combined with gut feel - clearly subjective factors. Successful ones are inherently good at gathering, assessing and analysing complex inputs in order to derive simple outputs ("Buy Esso, sell BP" or "Your premium will be $$$"). From the outset, it seems unlikely that the method will meet its implied objective of developing a scientific approach. Being based on "development and research spanning four years" is another warning sign, since risk analysis in general, and information security risk in particular, have been studied academically for decades. Although this is an 'introductory' paper with some strong points, it is quite naive in places. The snake oil-o-meter peaks out at around 80%.]
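The boundary problem is easy to demonstrate. Here is a minimal Python sketch of the TEF banding step using the categories quoted from the appendix; note that the quoted bands share their endpoints ("between 10 and 100", "between 1 and 10"), so the code has to pick a convention (here, each band includes its lower bound) that the paper itself does not specify:

```python
def tef_category(events_per_year: float) -> str:
    """Assign an estimated annual Threat Event Frequency to one of the
    five TEF bands quoted in the FAIR white paper's appendix.
    The quoted bands share endpoints, so a convention is needed:
    here each band includes its lower bound (my assumption)."""
    if events_per_year > 100:
        return "VH"   # > 100 times per year
    if events_per_year >= 10:
        return "H"    # between 10 and 100 times per year
    if events_per_year >= 1:
        return "M"    # between 1 and 10 times per year
    if events_per_year >= 0.1:
        return "L"    # between 0.1 and 1 times per year
    return "VL"       # < 0.1 times per year (less than once a decade)

print(tef_category(150))   # VH
print(tef_category(5))     # M
print(tef_category(0.05))  # VL
```

The sketch makes the subjectivity visible: an input of 10 events per year lands in "H" or "M" depending entirely on a convention the method leaves unstated, and the derived risk figure pivots on that arbitrary choice.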

Nov 17, 2006

Some people clearly take the 'sport' of scam-baiting (i.e. retaliating against the 419 advance fee fraudsters) very seriously. A flash mob taking place this weekend is an opportunity to learn about 419ers and the techniques for taking their fake banking and lottery scam sites offline. The Artists Against 419 website is one of many scambaiter sites combining education with ironic humor.

Nov 16, 2006

A well-researched and well-written article about online banking user authentication discusses the range of authentication methods being used or trialled at a number of primarily US banks. Whereas the FFIEC regulations were anticipated to force US banks into using tokens for user authentication by the end of this year, banking customers are proving resistant to the technology and want an easier way to authenticate to the bank [the problem of the bank authenticating to the user merits a brief mention too]. User authentication is crucial to the issue of accountability: a customer cannot be held totally accountable for dubious transactions on his bank account if the bank cannot prove that the customer, rather than 'someone else' (normally a fraudster), logged in and submitted or authorized the transactions. The article discusses device as well as user authentication, in other words 'fingerprinting' the users' PCs to identify their normal machines. Not surprisingly, it barely touches on the back-end anti-fraud systems the banks are using to identify unusual customer activities that might be symptomatic of a fraud in progress: these details are proprietary to each bank (which limits the amount of information sharing between banks) and a closely guarded secret (to avoid tipping-off the very fraudsters they are designed to trap).

Amongst other police reforms, the new Police and Justice Act 2006 makes Denial of Service attacks illegal under British law and clarifies other aspects of computer misuse. The Computer Misuse Act 1990 made it an offence to alter a computer without authority, covering most hacking attacks but not explicitly DoS attacks. Criminal hackers who commit, for example, DoS-based extortion ("Send us loads of money or we will continue disrupting your online betting service ...") can now be called to account under the new Act.

More links on laws, regulations and standards and accountability

Hot topic

NBlogger is ...

Dr Gary Hinson PhD MBA CISSP has an abiding interest in human factors - the ‘people side’ as opposed to the purely technical aspects of information security. Gary's career stretches back to the mid-1980s as both practitioner and manager in the fields of IT system and network administration, information security and IT auditing. He has worked and consulted in the pharmaceuticals/life sciences, utilities, IT, engineering, defense, financial services and government sectors, for organizations of all sizes. Since 2003, he has been creating security awareness materials for clients (www.NoticeBored.com) and supporting users of the ISO27k standards (www.ISO27001security.com). In conjunction with Krag Brotby, he wrote "PRAGMATIC security metrics" (www.SecurityMetametrics.com). He is a keen radio amateur, often calling but seldom heard by distant stations on the HF bands.