Microsoft's End to End Trust vision: Can this identity, trusted-stack thing work?

Microsoft unveiled its "End to End Trust" security vision and now the real work begins: Will anyone buy into it? At RSA, Microsoft rolled out a whitepaper that, at the very least, is quite the conversation piece.

Reading through the whitepaper--something I encourage everyone to do (yes, all 22 pages)--I can see a few phases for this one:

First, the mocking: "Who is Microsoft to pitch this trusted Internet thing?"

Then the details: "This whole thing is based on identity management on a grand scale. Sounds Big Brother-ish."

Then the idea of the trusted stack: "Is this like herding technology industry cats?"

And then some mild acceptance: "Maybe some of these ideas aren't half bad."

Ok, that last phase may take a while (like maybe never), but the debate is worth having. Here are some key excerpts and my take.

Microsoft says:

This paper is an invitation to discuss how one might fundamentally “change the game,” and provides a framework for discussing the myriad of social, political, economic and technological issues that must be addressed if we want to create a meaningfully more secure and privacy-enhanced Internet. In short, in our view changing the game requires two things: (1) building a trusted stack, with suitably strong authentication of hardware, software, people and data; and (2) improving the ability to audit events to provide accountability. We must also grant people better control over their digital personas to enhance privacy. This trusted stack, combined with better mechanisms to protect privacy, will enable End-to-End (E2E) Trust — giving people, devices and software the ability to make and implement good decisions about who and what to trust throughout the ecosystem. This will help protect security and privacy as well as help bring criminals to justice when electronic malfeasance occurs. In sum, the opportunity exists to create a trusted, privacy-enhanced Internet.

My take: This passage appears on page 4. I can already see the hackles rising, and the privacy vs. tracking issue is huge.

Microsoft says:

Current strategy does not address effectively the most important issue: a globally connected, anonymous, untraceable Internet with rich targets is a magnet for criminal activity — criminal activity that is undeterred due to a lack of accountability. Moreover, the Internet also fails to provide the information necessary to permit lawful computer users to know whether the people they are dealing with, the programs they are running, the devices they are connecting to, or the packets they are accepting are to be trusted.

My take: Hard to argue with this one, but which entities will secure these identity attributes?

Microsoft says:

Although trust may be a complex issue, this does not alter the fact that certain foundational elements must be in place to create a more trustworthy environment. The most important element is an authenticated identity attribute (e.g., name, age or citizenship); absent the ability to authenticate a person (or a personal attribute), machine, software, and/or data — and absent the ability to combine that authenticated data with other trust information (e.g., prior experience, reputation), effective trust decisions cannot be made. Second, absent the ability to identify and prove the source of misconduct, there can be no effective deterrent — no effective law enforcement response to cybercrime and no meaningful political response to address international issues relating to cyber-abuse. To date, the “response” to computer abuse of all types has been to increase defenses, but the history of computer security shows that offense will beat defense in cyberspace because attackers have an abundance of time and resources, and may only need to find one weakness, whereas a defender must cover all avenues of attack. Experience shows that most cybercriminal schemes are successful because people, machines, software and data are not well authenticated and this fact, combined with the lack of auditing and traceability, means that criminals will neither be deterred at the outset nor held accountable after the fact. Thus the answer must lie in better authentication that allows a fundamentally more trustworthy Internet and audit that introduces real accountability.

My take: Complex issue indeed. Will this audit be real-time? And what are the chances of this identity scheme actually tracking down a criminal? It's not as if these folks will fork over any identity attributes. What happens if the identity isn't in the U.S.?

Microsoft says:

We must create an environment where reasonable and effective trust decisions can be made. We must also create an environment where accountability — and therefore deterrence — can be achieved. To do this, one must have access to a “trusted stack”: (1) security rooted in the hardware; (2) a trusted operating system; (3) trusted applications; (4) trusted people; and (5) trusted data. The entire stack must be trustworthy because these layers can be interdependent, and a failure in any can undermine the security provided by the other layers; for example, a document may be created by an identified individual, using secure hardware and a secure operating system, and sent to another as a signed attachment with integrity, but if it was created with an insecure application, it may not be trustworthy.

My take: This approach is certainly more secure, but it doesn't sound user-friendly at all. Can it be automated?

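
The layered checking Microsoft describes can be sketched in a few lines. Below is a toy Python model (the layer names and image contents are invented for illustration) of a trusted-boot-style chain: a root of trust holds the expected hash of each layer, layers are verified in order from the hardware up, and a failure at any layer leaves everything above it untrusted.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Toy images for each layer of the stack (illustrative only).
layers = [
    ("hardware/firmware", b"firmware-image-v1"),
    ("operating system", b"os-image-v1"),
    ("application", b"app-image-v1"),
]

# Expected hashes, as a hardware root of trust (e.g., a TPM) might store them.
expected = {name: sha256(image) for name, image in layers}

def verify_stack(images) -> bool:
    """A layer is trusted only if its hash matches AND every layer
    below it verified; one failure taints everything above it."""
    for name, image in images:
        if sha256(image) != expected[name]:
            print(f"{name}: hash mismatch -- stack untrusted from here up")
            return False
        print(f"{name}: verified")
    return True

verify_stack(layers)  # every layer checks out

tampered = [
    ("hardware/firmware", b"firmware-image-v1"),
    ("operating system", b"evil-os-image"),  # tampered OS image
    ("application", b"app-image-v1"),
]
verify_stack(tampered)  # fails at the OS layer
```

A real TPM-backed measured boot does something analogous, but against signed measurements in hardware rather than a Python dict.
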
Microsoft says:

First, nothing in this paper is meant to suggest that anonymity on the Internet be abolished. To the contrary, anonymity should be preserved and enhanced through both technology and social policy...Second, nothing in this paper is meant to create unique, national identifiers, even if some countries are creating identity systems that do so...Third, nothing in this paper supports the creation of mega-databases that collect personal information...Fourth, there is no claim that creating an authenticated, audited environment has no impact on privacy...Fifth, any system can be abused and, if the risk of serious abuse is significant enough, then we might eschew the approach...Finally, universal buy-in and implementation is not necessary to achieve a modicum of success.

My take: This is the part of the whitepaper where folks will freak out. Microsoft may have built a lot of security goodwill, but this stream of identity thoughts from the software giant is guaranteed to raise concerns. Why? The concept is coming from Microsoft, and no amount of disclaimers will prevent conspiracy theories.

Microsoft says:

What benefits arise from the fact that people, devices, software and data are more robustly authenticated and their activities audited? In a general sense, the most obvious benefit of authentication is that it empowers better trust decisions. Auditing creates a better ability to hold people accountable for misconduct, and thereby deter such conduct, assuming that domestic cybercrime laws and international cooperation mechanisms are sufficient. Enabling better trust decisions and accountability will solve specific real-world problems. For example, a well-audited transaction between two authenticated parties serves to protect both sides of the transaction. A bank could more easily authenticate a customer’s identity, a customer would have greater assurance that the Web site that he or she was visiting was that of the bank, and both parties could determine what truly happened if any issue arose. By conducting device-to-device authentication, organizations could reduce the number of external hackers with access to their systems, in large part because a hacker would need access to an “authorized” machine to connect to the victim’s network. In addition, if an unauthorized access were to occur and better auditing records proved what happened, it would become much easier to apply physical-world mechanisms (e.g., law enforcement, political forces) to address cybercrime, economic espionage and information warfare. Because these mechanisms enable more effective trust decisions to be made throughout the ecosystem — by and about people, devices, software and data — we call this End-to-End Trust.

My take: Sounds like a security standards spat is on deck.

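
The bank scenario above hinges on mutual authentication: each side must prove who it is before the transaction starts. Here's a minimal Python sketch using HMAC challenge-response; the party names and credential keys are invented, and a real deployment would use public-key credentials rather than shared secrets.

```python
import hmac
import hashlib
import os

def respond(key: bytes, challenge: bytes) -> bytes:
    """Answer a challenge by proving possession of a credential key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(expected_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Check a response against what the expected key would have produced."""
    return hmac.compare_digest(respond(expected_key, challenge), response)

# Hypothetical credentials registered with each party.
customer_key = b"customer-credential"
bank_key = b"bank-credential"

# The bank challenges the customer...
c1 = os.urandom(16)
customer_authenticated = verify(customer_key, c1, respond(customer_key, c1))

# ...and the customer challenges the bank, so a phishing site
# without the bank's credential cannot answer correctly.
c2 = os.urandom(16)
phisher_response = respond(b"phisher-has-no-bank-key", c2)
bank_authenticated = verify(bank_key, c2, phisher_response)

print(customer_authenticated, bank_authenticated)  # prints: True False
```

With both checks passing (and the exchange logged), each side of the transaction gets the protection the whitepaper describes; here the phishing attempt fails the bank-side check.
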
Microsoft outlines the components of a trusted stack:

Because all software operates in an environment defined by hardware, it is critical to root trust in hardware. Today, many computers come with a Trusted Platform Module (TPM), a technology that will expand and enter new form factors...The operating system must be verifiable based upon keys stored in the hardware (e.g., “trusted boot”). This allows the device to claim that the operating system has not been tampered with to bad effect...Computers were, of course, designed to run code, without concern about its authorship or the intent of that author. Today there are multiple ways to help protect people from software vulnerabilities and malicious code. To protect users from vulnerabilities, code can be rewritten in safer languages, checked with analytic tools, compiled with compilers that reduce vulnerabilities (e.g., buffer overruns) and sandboxed when executed...A safer Internet needs to support the option of identities based directly or derivatively upon in-person proofing, thus enabling the issuance of credentials that do not depend upon the possession of a shared secret by the person whose identity or identity attribute is being verified. To some extent, government activities and markets themselves are driving in-person-proofing regimes...Applications should incorporate seamless mechanisms for applying signatures to their outputs, and read signatures before opening documents, so that data origin and data integrity can be easily checked....An audit trail is a record of a sequence of events from which a history may be reconstructed. An audit log is a set of data collected over a period of time for a specific component. A series of audit logs can be studied to determine a pattern of system usage that, over time, can be used to highlight aberrant behavior such as criminal activity or the existence of malware. Audit data is also necessary to roll back suspicious or harmful transactions.

My take: Trusted hardware and operating systems are a no-brainer and probably doable. Identities and the audit trail are much trickier. How would this stack work in practice? And is there a performance hit?
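
On the audit side, the "record of a sequence of events from which a history may be reconstructed" is commonly built as a hash chain, so that editing or reordering any past entry is detectable. A minimal Python sketch (the entry format is invented for illustration):

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Chain each new entry to the hash of the previous one."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "event": event, "hash": entry_hash})

def verify_log(log: list) -> bool:
    """Recompute the chain; any edited or reordered entry breaks it."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"who": "alice", "action": "login"})
append_entry(log, {"who": "alice", "action": "transfer", "amount": 100})
print(verify_log(log))             # prints: True

log[1]["event"]["amount"] = 100000  # tamper with history
print(verify_log(log))             # prints: False
```

This makes a log tamper-evident, not tamper-proof; the performance question in the take above is fair, since every audited event costs at least a hash and a write.
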