Category Archives: Security

Ever find yourself just click-click-clicking through every message box that pops up? Most people click through a warning (which in the land of web browsers usually means STOP, DON’T GO THERE!!) in less than 2 seconds. The cause appears to be habituation – basically, you are so used to clicking that the warning no longer registers, and now we have the brain scans to prove it!

What does this mean for you? You won’t be able to re-wire your brain, but you can turn up the settings on your web browser so it simply refuses to connect to a site it is warning you about. Simple – let the browser deal with it and take away one nuisance.

From the study:

The MRI images show a “precipitous drop” in visual processing after even one repeated exposure to a standard security warning and a “large overall drop” after 13 of them. Previously, such warning fatigue has been observed only indirectly, such as one study finding that only 14 percent of participants recognized content changes to confirmation dialog boxes or another that recorded users clicking through one-half of all SSL warnings in less than two seconds.

ENISA released a study with a methodology for identifying critical infrastructure in communication networks. While this is an important and valuable topic in its own right, I dove into the study for a particularly selfish reason … I am SEEKING a methodology we could leverage for identifying critical connected infrastructure (cloud providers, SaaS, shared services internal to large corporations, etc.) for the larger public/private sector. Here are my highlights – I would, as always, value any additional analysis:

Challenge to the organization: “..which are exactly those assets that can be identified as Critical Information Infrastructure and how we can make sure they are secure and resilient?”

Key success factors:

Detailed list of critical services

Criticality criteria for internal and external interdependencies

Effective collaboration between providers (internal and external)

Interdependency angles:

Interdependencies within a category of service

Interdependencies between categories of services

Interdependencies among data assets

Establish baseline security guidelines (due care):

Balanced to business risks & needs

Established at procurement cycle

Regularly verified (at least w/in 3 yr cycle)

Tagging/Grouping of critical categories of service

Allows for clean tracking & regular security verifications

Enables troubleshooting

Threat determination and incident response

Methodology next steps:

Partner with business and product teams to identify economic entity / market value

Identify the dependencies listed above and mark criticality based on entity / market value

Develop standards needed by providers

Investigate how monitoring to standards can be managed and achieved (in some cases contracts can support you, others will be a monopoly and you’ll need to augment their processes to protect you)

Refresh and adjust annually to reflect modifications of business values
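The methodology steps above lend themselves to a simple inventory model. A minimal sketch in Python – the class, tiers, and provider names here are hypothetical illustrations, not from the ENISA study:

```python
from dataclasses import dataclass

@dataclass
class ServiceDependency:
    """One internal or external provider a critical service relies on."""
    name: str
    category: str            # e.g. "cloud", "SaaS", "shared-internal"
    criticality: str         # derived from entity / market value
    last_verified_year: int  # year the security baseline was last verified

def overdue_verifications(deps, current_year, cycle_years=3):
    """Flag providers whose baseline check falls outside the review
    cycle (the post suggests verifying at least every three years)."""
    return [d.name for d in deps
            if current_year - d.last_verified_year > cycle_years]

deps = [
    ServiceDependency("payments-cloud", "cloud", "critical", 2009),
    ServiceDependency("hr-saas", "SaaS", "moderate", 2012),
]
print(overdue_verifications(deps, 2013))  # the 2009 check is overdue
```

The annual refresh step then becomes a re-run of this check plus a re-scoring of each `criticality` field against current business value.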

I hope this breakout is helpful. The ENISA document has a heavy focus on promoting government / operator ownership, but businesses cannot rely on or wait for such action and should move accordingly. The above is heavily modified, original thinking based on my experience structuring similar business programs. A bit about ENISA’s original intent for the study:

“Major developments with Big Data, Cloud, Mobile, and Social media” – the context and reality here is cavernous.

My analysis and near-random breakdown of this tweet follows, with quotes pulled from the panel.

First off – be aware that these key phrases / buzzwords mean different things to different departments and at each level (strategic executives through tactical teams). Big Data analytics may not be a back-end operational pursuit but a revenue-generating front-end activity (such as that executed by WalMart). These different instantiations are likely happening at different levels with varied visibility across the organization.

“Owning” the IT infrastructure is not a control that prevents the different groups from launching into these other ‘Major developments’.

The cost effectiveness of the platforms designed to serve businesses (e.g., Heroku, Puppet Labs, AWS) is what is defining the new cost structure. The CIO and CISO must adapt to this reality.

>The cloud is not cheaper if it does not have any controls. This creates a risk of the data being lost due to “no controls” – highlighted by Melanie from the panel. <– I don’t believe this statement is generally true; it is generally FUD.

Specifically – cloud service providers set a service level expectation to compensate for the lack of auditability of those “controls”. There are motions to provide a level of assurance for these cloud providers beyond the ancient method established through ‘right to audit‘.

A method of approaching these challenging trends, specifically Big Data, is below, as highlighted by one of the CISOs (apologies, I missed his name) with my additions:

Data flow mapping is a key to providing efficient and positive ‘build it’ product development. It helps understand what matters (to support and have it operational), but also see if anything is breaking as a result.
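One way to make the “see if anything is breaking” point concrete: treat the data-flow map as a directed graph and walk downstream from a failed system. A small illustrative sketch – the systems named are invented, not from the panel:

```python
from collections import deque

# Hypothetical data-flow map: system -> systems consuming its output.
flows = {
    "pos-terminals": ["sales-db"],
    "sales-db": ["analytics", "supplier-feed"],
    "analytics": ["exec-dashboard"],
    "supplier-feed": [],
    "exec-dashboard": [],
}

def downstream_impact(flows, failed):
    """Breadth-first walk: everything that stops receiving data
    if `failed` breaks."""
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for consumer in flows.get(node, []):
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return sorted(seen)

# If the sales database fails, three downstream consumers lose their feed.
print(downstream_impact(flows, "sales-db"))
```

The same map, read in reverse, shows what matters: which upstream systems a revenue-generating product depends on and therefore must be kept operational.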

Two observations impacting the CISO and information technology organization include:

The Board is starting to become aware and seeking to see how information security is woven within ERM

Budgets are not getting bigger, and likely shrinking due to expectations of productivity gains / efficiency / cloud / etc…

Rationalization of direction, controls, and security responses must be fast, both for making decisions and for executing…

Your ability to get things done has little to do with YOU doing things, but with getting others to do things. Enabling, partnering, and teaming is what makes the business move. CIO and CISO must create positive build-it inertia.

Support and partner with “middle management” – the API of the business, if you will.

We too often focus on “getting to the board” and deploying / securing the “end points” .. Those end points are the USERS, and between them and the Board sits your API to achieving your personal objectives.

Vendor Management vs procurement of yester-year

Acquiring technology and services must be done through a renewed and redeveloped vendor management program. The current procurement team’s competencies are inadequate, lacking the toolsets to ensure these providers are meeting the existing threats. To be a risk-adaptive organization you must tackle these vendors with renewed rigor. Buying the cheapest parts and services today does not mean what it meant 10 years ago. Today the copied, reverse-engineered Cisco router alternative brings an impressive number of problems immediately after acquisition. Buying is easy – it is the operational continuance that is difficult. This is highlighted by the 10,000+ vulnerabilities in networked devices that will never be updated within corporations, whose risks must be mitigated at a very high and constant cost.

Is “it is your decision not ours” statement and philosophy a cop-out within the Information Security sphere?

This is a common refrain and frustration I hear across the world of information security and information technology. Is this true? Is it the result of personality types that are attracted to these roles? Is it operational and reporting structure?

In Audit, independence and visibility are required. Do not the business (the CIO) and the subject-matter expertise (the CISO), which share that visibility, possess a requirement of due care to MAKE it work?

The perfect analogy is the legal department – they NEVER give in and walk away with a mumble; they present their case until all the facts are known and a mutual understanding is reached. Balance happens, but it happens with understanding.

This point is so important to me that it warranted a specific sharing of the thought. I hope we can reframe our approach and, following a presentation from TED, focus on the WHY. (need to find link…sorry) These individuals in these roles provide the backbone and customer-facing layer of EVERY business.

Thoughts and realizations made from stumbling around our community and today during RSA resulting from the presentations with underlying tones.

The advent of user-created, managed, and handled passwords as the sole means of authenticating is coming to an end. Their utility was defined in an era of assumptions about brute-force capability, system computing power, and proactive security teams. – After much debate and analysis … there is the thesis.

This topic came up for me last year as I was working through some large amorphous business processes. The question of credentials was raised, and we challenged it. This is interesting as we had some pretty serious brains in the room from the house of auditing, security, risk, and business leaders. I am sharing my thoughts here to seek input and additional alternate perspectives – seeking more ‘serious brains’.

I will update as feedback comes in … this and other posts will serve as workspaces to share the analysis and perspectives to consider. I am breaking this topic across different posts to allow for edits and pointed (critical, perhaps) feedback on a per-topic basis. This is LIVE research, so understand that impressions today may change tomorrow based on new information and insight. Looking forward to collaborating, and with that … let’s jump right in!

————————————————————————

Passwords are designed to restrict access by establishing confirmation that the entity accessing the system is in fact authorized. This is achieved by authenticating that user. Passwords / pass phrases have been the ready, steady tool. The challenges to this once golden child cross the entire sphere, and I’ll be seeking your collaboration through the journey up to my RSA presentation in SFO at the end of February 2013!

False premise three – Password control objectives are disassociated from the origination and intent

FALSE PREMISE ONE: (Updated Jan.31.2013)

Passwords are great because they are difficult to break?

The idea here is that users are trained (continuously) to use complex, difficult, long, and unique passwords. The concept was that these attributes made it difficult for a password to be broken.

Let’s explore what that meant… When a password was X characters long, using a variety of Y symbols, it would take a computer Z time to break it. Pretty straightforward. (This example is for a password hash being brute-force attacked offline.) The same logic holds for encryption, but it rests on a poor premise:

Password-cracking CPU cycles on a single machine are far more powerful than yesteryear, AND if we focus only on computing power, the use of cloud armies to attack represents the new advantage for the cracking team

Precomputed lookup tables, by comparison, pretty much made the CPU argument (and the length-of-time-to-crack) moot. There exist databases FULL of every common password hash (for each type of encryption / hash approach) that can be compared against recovered hashes – think two Excel columns: search for the hash in column A and find the real-world password in column B.
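The two-column lookup described above is easy to sketch: hash the candidate passwords once, and recovering a password from a leaked unsalted hash becomes a dictionary lookup rather than a brute-force search. An illustrative toy (real tables are vastly larger and use time/memory trade-offs):

```python
import hashlib

# Build the "column A / column B" table once: hash -> plaintext.
candidates = ["password", "letmein", "123456", "correcthorse"]
table = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

# "Cracking" a recovered unsalted hash is now a constant-time lookup.
leaked = hashlib.md5(b"letmein").hexdigest()
print(table.get(leaked))  # -> letmein
```

Per-user salts defeat exactly this attack, since each salt would require its own precomputed table.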

Interesting selective supporting facts:

A $3000 computer running appropriate algorithms can make 33 billion password guesses every second with a tool such as whitepixel

A researcher from Carnegie Mellon developed an algorithm designed for cracking long passwords made up of a combined set of words in a phrase (common best-practice advice) – “Rao’s algorithm makes guesses by combining words and phrases from password-cracking databases into grammatically correct phrases.” This research is being presented in San Antonio at the “Conference on Data and Application Security & Privacy” – New Scientist
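At the quoted 33 billion guesses per second, the X/Y/Z arithmetic from the premise above is easy to run. A quick back-of-envelope sketch (worst case, full keyspace, offline attack assumed):

```python
def seconds_to_exhaust(length, alphabet, rate=33e9):
    """Worst-case time to try every password of `length` characters
    drawn from an alphabet of `alphabet` symbols, at `rate` guesses/s."""
    return alphabet ** length / rate

# 8 lowercase letters: the whole keyspace falls in seconds.
print(f"{seconds_to_exhaust(8, 26):.1f} s")

# 12 characters from ~95 printable symbols: centuries in theory --
# but cloud armies, lookup tables, and grammar-aware wordlist attacks
# shrink the *real* search space far below the full keyspace.
print(f"{seconds_to_exhaust(12, 95) / (86400 * 365):.0f} years")
```

The point of the premise being false is the last comment: attackers rarely pay the full-keyspace price the formula assumes.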

Big Data introduces an opportunity that organizations see when merging siloed product operations together, forming a service layer or an enhanced hybrid product. Big Data also requires exceptional enterprise intelligence, in the sense of establishing the scaffolding for enterprise growth. That scaffolding requires advanced visibility into the information technology systems and business process matrix. That is my thesis … let me elaborate below on a single thread, given this is a subject I have been developing recently…

In order for Big Data to work it requires abundant access to systems, data repositories, and the merging and tweaking of data beyond original data owner expectations or comprehension. The enterprise that balances the advantage of Big Data analytics with superior scaffolding will appreciate higher run rates and profitability without unfunded cost centers and above trend OpEx generally. The opportunity of Big Data without this business intelligence will be squandered and the benefits not realized as a direct result.

The CIO has this ownership, and it is the purview of the Audit Committee to ensure that these risks are understood and tackled. Boards of Directors have proven to value the aggressiveness of Data Analytics equally with the ongoing re-evaluation of the risk tolerance and acceptance points of the business. As one can imagine, this is a familiar yet distinct activity within the executive structure, but the key attributes / activities that indicate a successful approach are as follows:

Vertical awareness – product awareness, strategy, and full line of sight for each major revenue center

Senior strategy alignment – what does the Board seek in this DA movement; What does the CEO/CIO envision on these product expansions; What is the audit committee observations (meaning that they must have visibility and mindfulness to the impact)

Good short presentation on value of pattern based strategies, by Gartner

$29B will be spent on big data by IT departments throughout 2012 (Forbes)

Or a classic business case example:

“The cornerstone of his [Sam Walton’s] company’s success ultimately lay in selling goods at the lowest possible price, something he was able to do by pushing aside the middlemen and directly haggling with manufacturers to bring costs down. The idea to “buy it low, stack it high, and sell it cheap” became a sustainable business model largely because Walton, at the behest of David Glass, his eventual successor, heavily invested in software that could track consumer behavior in real time from the bar codes read at Wal-Mart’s checkout counters.

“He shared the real-time data with suppliers to create partnerships that allowed Wal-Mart to exert significant pressure on manufacturers to improve their productivity and become ever more efficient. As Wal-Mart’s influence grew, so did its power to nearly dictate the price, volume, delivery, packaging, and quality of many of its suppliers’ products. The upshot: Walton flipped the supplier-retailer relationship upside down.” – Changing The Industry Balance of Power

A good (no paywall) article on Forbes here breaks down the IT spend related directly to Big Data and compares it against prior years up to 2012 & by industry.

Also check out this MIT Sloan article co-developed with IBM, entitled Big Data, Analytics and the Path from Insight to Value – most interesting to me was page 23, on analytics trumping intuition. This relates to EVERY business process, product, sales opportunity, accounting, fraud detection, compliance initiative, security analytics, defense and response capability, power management, etc … A worthwhile read for every executive.

The Rapid7 folks ran scans for 5+ months searching for and finding systems vulnerable to 3 different types of vulnerabilities that relate to UPnP. The sheer volume, accessibility, diversity of vendor, and age of some of these systems is most interesting from an operational business standpoint. First a few statistics from the report:

23 million IPs are vulnerable to remote code execution through a single UDP packet

At least 6,900 product versions vulnerable through UPnP.

List encompasses over 1,500 vendors

1 UDP packet can exploit any one of 8 vulnerabilities to libupnp

Some vulnerabilities were 2+ years old, yet 300+ products are still using insecure versions

A great write-up is available here by Darlene at ComputerWorld (chock full of links to additional facts & CERT), and of course all comments and feedback should be directed to HD Moore’s blog. The report was worth the read, and while the technical details are important, I would challenge executives reading the paper to consider how, operationally, they would manage the vulnerable systems in their organizations, and how their internal processes are designed to ensure similar technical (symptom-level) vulnerabilities across different types of products do not recur. Or, at least, to devise a methodology to mitigate the risk of technology such as this that cannot be patched (vendor is gone; management tools non-existent, etc.) or addressed directly on the system itself.
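For the technically curious: UPnP devices are discovered via SSDP, a small HTTP-over-UDP protocol, and the SERVER header in a device’s reply is what lets a scan like Rapid7’s inventory vendors and library versions. A parsing-only sketch (actually sending the datagram to the SSDP multicast address 239.255.255.250:1900 is left to the operator; the sample reply below is invented):

```python
# Standard SSDP discovery request, per the UPnP Device Architecture.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: upnp:rootdevice\r\n\r\n"
)

def parse_ssdp_reply(raw: str) -> dict:
    """Extract headers from an SSDP reply; the SERVER header reveals
    the stack and version (e.g. an old libupnp build)."""
    headers = {}
    for line in raw.split("\r\n")[1:]:       # skip the status line
        if ":" in line:
            key, _, value = line.partition(":")
            headers[key.strip().upper()] = value.strip()
    return headers

sample = ("HTTP/1.1 200 OK\r\n"
          "SERVER: Linux/2.6 UPnP/1.0 Portable SDK for UPnP devices/1.6.6\r\n"
          "LOCATION: http://192.0.2.10:49152/rootDesc.xml\r\n\r\n")
print(parse_ssdp_reply(sample)["SERVER"])
```

An internal inventory built this way – SERVER string per IP – is one concrete input to the risk-mitigation methodology the paragraph above calls for.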

As our business processes further rely on network connected devices, the age and velocity of the industry is a risk that we must manage. Acquisitions, businesses going under, kickstarters coming & going, and simply protocols losing support in the dev environments ALL are mitigated by governance and risk assessment methodologies.

How is your strategic program designed; is it effective to these shifts in business; how can it be enhanced?

How is the partnership with procurement, M&A, and business relations teams? >> Consider the inputs as well as enhancing your program.