
A question that many organizations struggle with is how much to spend annually, per user, on information security. While balancing security, privacy, usability, profitability, compliance, and sustainability is an art, organizations have a new data point to consider.

Balancing – information security and compliance operations

The ideal approach businesses take must always be based on internal and external factors weighted against the risks to their assets (assets here generally include customers, staff, technology, data, and the physical environment). An annual review identifying and quantifying the importance of these assets is a key regular exercise with product leadership; an analysis of the factors that influence those assets can then be completed.

Internal and external factors include many possibilities, but key ones that rise to importance for a business typically include:

Market demands (activities necessary to match the market expectations to be competitive)

In aggregate, and distributed based upon the quantitative analysis above, safeguards and practices may be deployed, adjusted, and removed. Understanding the economic impact of the assets, and of the tributary assets and business functions that enable the business to deliver services and products to market, allows for a deeper analysis. I find the rate of these adjustments depends on the industry, the product cycle, and operating events. At the most relaxed cadence, these would happen over a three-year cycle, with a minor analysis conducted annually across the business.
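The weighting exercise described above can be sketched in a few lines. This is a hypothetical illustration, not a prescribed model: the asset names, the two scoring dimensions, and all numeric values are assumptions chosen to show how a quantitative ranking might drive where safeguards are deployed, adjusted, or removed.

```python
# Illustrative sketch: rank assets by economic impact weighted against
# exposure to internal/external factors. All names and scores are assumed.

assets = {
    "customer_data":  {"economic_impact": 9, "factor_exposure": 8},
    "staff_systems":  {"economic_impact": 5, "factor_exposure": 4},
    "physical_sites": {"economic_impact": 3, "factor_exposure": 2},
}

def priority(asset):
    """Simple product score: higher means safeguards deserve attention sooner."""
    return asset["economic_impact"] * asset["factor_exposure"]

ranked = sorted(assets.items(), key=lambda kv: priority(kv[1]), reverse=True)
for name, scores in ranked:
    print(name, priority(scores))
```

In practice the scoring would come from the annual review with product leadership rather than hard-coded values, but the shape of the analysis is the same: quantify, rank, then distribute safeguards accordingly.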

Mature organizations would continue a cycle of improvement (note: improvement does not mean more money or more security/regulation, but improvement based on the internal and external factors, and I certainly see it ebbing and flowing).

Court settlement that impacts the analysis and balance for information security & compliance:

Organizations historically had to rely on surveys and on reading the tea leaves of financial reports where the costs of data breaches and FTC penalties were detailed. These collections of figures put the cost of a data breach anywhere between $90 and $190 per user. Depending on the need, other organizations would baseline their cost figures against peers (i.e., do we all have the same number of security staff; what percentage of revenue is spent; etc.).

As a result of a recent court case, I envision the figures below joining the above analysis. It is important to consider a few factors here:

The data was considered sensitive (an argument that could easily extend to general Personally Identifiable Information, or PII)

There was a commitment to secure the data by the provider (a common statement in many businesses today)

The customers paid a fee to be with the service provider (premiums, annual credit card fees, etc. all seem very similar to this case)

Both those who suffered damages and those who did not were included within the settlement

The details of the court case:

The parties' dispute dates back to December 2010, when Curry and Moore sued AvMed in the wake of the 2009 theft of two unencrypted laptops containing the names, health information and Social Security numbers of as many as 1.2 million AvMed members.

The plaintiffs alleged the company's failure to implement and follow “basic security procedures” led to plaintiffs' sensitive information falling “in the hands of thieves.” – Law360

A settlement at the end of 2013 provides a fresh new input:

“Class members who bought health insurance from AvMed can make claims from the settlement fund for $10 for each year they bought insurance, up to a $30 cap, according to the motion. Those who suffered identity theft will be able to make claims to recover their losses.”

For businesses conducting their regular analysis, this settlement is important because of the math applied here:

$10 x (number of years as a client) x (number of clients) = damages .. PLUS all of the required upgrades and the actual damages impacting the customers.
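The settlement math above is easy to turn into a planning figure. A minimal sketch, with the member count and average tenure as assumptions for illustration (the settlement's $10-per-year rate and $30 cap are from the quoted motion):

```python
# Settlement exposure per the AvMed terms: $10 per year of coverage,
# capped at $30 per class member, plus actual identity-theft losses.

PER_YEAR = 10  # dollars per year the member bought insurance
CAP = 30       # per-member cap from the settlement motion

def member_claim(years_insured):
    """Base claim for one class member under the settlement terms."""
    return min(PER_YEAR * years_insured, CAP)

# Assumed inputs for illustration: 1.2M affected members, 4-year average tenure
members = 1_200_000
avg_years = 4

base_exposure = members * member_claim(avg_years)
print(base_exposure)  # $36,000,000 before remediation costs and actual damages
```

Even a rough run of these numbers gives a security budget discussion a concrete floor: the cap-limited class payout alone, before any upgrade costs or proven identity-theft losses, can dwarf the annual per-user security spend being debated.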

Finally

Businesses should update their financial analysis with the figures and situational factors of this court case. In some cases this will reduce budgets; in others, where service providers have similar models and data, it will show the need for better security.

As always, the key is regular analysis against the internal and external factors, to be nimble and adaptive in an ever-changing environment. While balancing these external factors, extra vigilance is needed to ensure the internal asset needs are being satisfied and remain correct (as businesses shift to cloud service providers and partnerships, the asset assumptions change .. frequently .. and without any TPS memo).

Questions that must be managed by the COO and CIO of every business relate to dedicating finite resources across the company. The products and services sold by the business are developed and delivered to market as rapidly as possible in a race to be competitive. In the startup realm, building in security, compliance, and privacy elements is a very low priority. In most cases startups (and skunkworks within larger enterprises) depend upon the security of libraries (Ruby on Rails, Java libraries, etc.) and product components (UL certified) to deliver security. Unfortunately, depending upon the security and safety of the individual pieces is insufficient and inadequate when the elements (from here forward meaning technology code and physical product components) are brought together in a new and non-obvious way. The emergence of these new products and services introduces dependencies, communication channels, new operating environments, and custom elements that reduce or eliminate the security-compliance-privacy properties that existed individually.

Leadership must then prioritize introducing security-compliance-privacy as soon as possible. Companies certainly benefit by building these natively into products and services at the design and build stage, as it is cheaper to build once than to re-design and re-code to meet the market's expectation of security-compliance-privacy. The case where the organization must review its existing portfolio and decide what should be done is the focus of this article. An analysis is necessary to evaluate the landscape of necessary and appropriate security-compliance-privacy requirements, and which products or services should be updated.

Or stated another way …

Where on the game board do the services and products of our company get prioritized to receive compliance, security, and privacy ‘attention’?

Such an analysis should at least include:

Listing of all required regulations and business best practices

Listing of all legal and contractual obligations

Discovery of similar products/services in the market, and a listing of any requirements that resulted from litigation and similar government agency enforcement actions

Strategic roadmap review – identify any likely near term requirements

Listing of all requirements the individual products & services will be subject to from the customer’s perspective

At this point a robust listing exists of what the products and services should support. A cross-map of these requirements should then be produced for optimized adoption and sustained operation. The cross-map will also provide the design specifications that contribute to the use cases and the product development life cycle. An example of such is below:
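One way to picture the cross-map is as a table keyed by control, recording which requirement sources demand it and which products it touches. This is a hypothetical sketch: every control, source, and product name below is an illustrative assumption, and the "reach" score is just one simple way to surface controls that are worth building once and reusing.

```python
# Hypothetical cross-map: each control lists the requirement sources that
# demand it (regulatory, contractual, litigation, roadmap, customer) and
# the products it applies to. All names here are illustrative.

cross_map = {
    "encryption_at_rest": {"sources": ["HIPAA", "customer contracts"],
                           "products": ["portal", "mobile_app"]},
    "access_logging":     {"sources": ["PCI DSS", "litigation precedent"],
                           "products": ["portal"]},
    "data_retention":     {"sources": ["state law", "roadmap"],
                           "products": ["portal", "mobile_app", "api"]},
}

def reach(entry):
    """Controls satisfying more sources across more products rank higher."""
    return len(entry["sources"]) * len(entry["products"])

ordered = sorted(cross_map.items(), key=lambda kv: reach(kv[1]), reverse=True)
for control, entry in ordered:
    print(control, reach(entry))
```

The ranked output is what lands on the game board: the highest-reach controls become shared design specifications, while single-product, single-source requirements can be handled within that product's own life cycle.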

The above (in sequence 1 to 5) are then placed on your product/services game board, and prioritization and risk management become possible. This is a process I designed in 2008 and have enhanced based on experience and client feedback while building global security and compliance programs. Your program may need to consider additional facts and realities. I would love to hear your thoughts to enhance and challenge this method.

Today NoVA blogger David Schuetz (@darthnull) was recognized for his hard work in uncovering the mystery of the UDIDs that Anonymous claimed were pilfered from the FBI. The possibility that the FBI had these opened an interesting and heated discussion across the privacy and security channels. The concerns were about privacy, the rights of users, and of course the weakness of the FBI's security controls. Interestingly enough, the FBI was direct and absolute that they did not have this data and had not been breached.

@darthnull, interviewed on NBC News, discovered the data was actually stolen from a small (relative to the FBI) organization in Florida called BlueToad. That organization found a 98% correlation between the datasets and put out a press release stating the facts as they know them …

This scenario is actually worse than if the FBI had had the data. Here we see a demonstration of a single small business recording the UDIDs for its application. These UDIDs are used throughout the app universes (Apple and Google), despite the platform providers recommending NOT to use them. The simple reason is that they are essentially sensitive data – dare I say, eventually PII.

The UDID is even requested as a secure token for enterprise tools such as the Good email/messaging app. In fact, many tools, apps, games, and other smartphone applications use the UDID as the key identifier of the user. The problem is that if each app is collecting each UDID (even if done once before switching to a better practice), there are A LOT of these databases lying around.

The quantity of such UDID repositories is large – marketing firms, analytics, games, productivity apps, enterprise MDM apps, etc. It would be interesting to determine how many apps are using these IDs, but ultimately it is irrelevant: given the breadth of these records across so many parties, at some point we simply accept that the data is retrievable. UDIDs are especially utilized across the mobile advertising and developer testing industries – for instance, as a means of tracking marketing campaigns and within analytics (now part of a COPPA legal complaint).

The takeaway is that enterprises should evaluate, as part of their mobile strategy, the authentication methods and dependencies deployed for these devices. If the UDID is being utilized in a “multi-factor” / “token” role, it should be reconsidered, or at least the risk mitigated, given the simplicity and the likely broad number of existing databases with such records. On a positive note, since March Apple has begun rejecting apps that access the UDID, and a great write-up on the impact and effect can be found at VentureBeat. To be clear, there will be alternatives to the UDID in the future, but their unique nature and “assignment” to a person will not likely reduce the sensitivity of the “token,” as it will be employed similarly.
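The core remediation pattern, whatever the platform, is to replace the shared hardware identifier with a random per-install identifier that is meaningless outside the one app that generated it. A minimal sketch of that pattern, with the storage path as an illustrative assumption (a real iOS or Android app would use its platform's sandboxed storage):

```python
# Sketch of the UDID alternative pattern: a random per-install identifier,
# generated once and persisted locally, instead of a shared hardware ID.
# The file path is an assumption standing in for app-sandboxed storage.
import os
import uuid

def get_install_id(path="install_id.txt"):
    """Return a stable per-install identifier, creating one on first run."""
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    new_id = str(uuid.uuid4())  # random; not derived from device hardware
    with open(path, "w") as f:
        f.write(new_id)
    return new_id
```

Because the identifier is random and scoped to one install of one app, a breach of any single vendor's database reveals nothing usable against other apps or services – exactly the cross-database correlation property that makes leaked UDIDs so damaging.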

Congrats to David for solving the mystery and helping illuminate this poor security practice of app developers.