I've posted before about how many modern security efforts try to solve problems past researchers already solved, or fail to draw on the lessons of INFOSEC history. Well, I came across this nice timeline of INFOSEC history and contributions. It's divided into decades, with a brief mention of each contribution. Not just the projects, but the changing beliefs and focus areas are enlightening.

The ACLU has released an app called "Police Tape" that makes it look like your device is off while recording video. It's intended for encounters with police, but I'm sure people will think of other uses.

In an earlier post, a reference was made to "cell phones" as "tracking devices". In fact, the "cell phone" has become the ubiquitous "my life" device as envisioned by Bill Gates. If "police tape" can make your phone look like it is not on or recording when the police want to confiscate it, then an app labeled "tracking tape" can make it look to you like your phone is NOT recording what you say or where you go! This crap works BOTH ways.

"A ruddy-faced, unshaven man bounds onstage. Wearing a wrinkled white polo shirt with a pair of red sunglasses perched on his head, he looks more like a beach bum who’s lost his way than a business executive. In fact, he’s one of Russia’s richest men—the CEO of what is arguably the most important Internet security company in the world. His name is Eugene Kaspersky, and he paid for almost everyone in the audience to come here. “Buenos dias,” he says in a throaty Russian accent, as he apologizes for missing the previous night’s boozy activities. Over the past 72 hours, Kaspersky explains, he flew from Mexico to Germany and back to take part in another conference. “Kissinger, McCain, presidents, government ministers” were all there, he says. “I have panel. Left of me, minister of defense of Italy. Right of me, former head of CIA. I’m like, ‘Whoa, colleagues.’”"

"The real reason for the official secrecy, in most instances, is not to keep the opposition (the CIA's euphemistic term for the enemy) from knowing what is going on; the enemy usually does know. The basic reason for governmental secrecy is to keep you, the American public, from knowing – for you, too, are considered the opposition, or enemy – so that you cannot interfere. When the public does not know what the government or the CIA is doing, it cannot voice its approval or disapproval of their actions. In fact, they can even lie to you about what they are doing or have done, and you will not know it. As for the second advantage, despite frequent suggestions that the CIA is a rogue elephant, the truth is that the agency functions at the direction of and in response to the office of the president. All of its major clandestine operations are carried out with the direct approval of or on direct orders from the White House. The CIA is a secret tool of the president – every president. And every president since Truman has lied to the American people in order to protect the agency. When lies have failed, it has been the duty of the CIA to take the blame for the president, thus protecting him. This is known in the business as "plausible denial." The CIA, functioning as a secret instrument of the U.S. government and the presidency, has long misused and abused history and continues to do so."

— Victor Marchetti, Propaganda and Disinformation: How the CIA Manufactures History

We already have security theater in our airports. In the wake of the Aurora, Colorado shootings, will we get theater security in our cinemas?

"Theater security: What will be done?

"Experts do not expect to see the airport security measures put in place after 9/11 carrying over to theaters. The heightened precautions seen at some major-league sports venues, such as a substantial armed security component, are unlikely, too."

This seems to be in contrast to past techniques where they would help the criminal (undercover?) to commit the crime and arrest them afterwards. Now they will arrest the criminal (?) BEFORE a crime is committed. I wonder what the charges will be?

Has anyone here looked at the Open NFC...? Any comments on the quality of their security?

Err your question is ambiguous at best...

Open-NFC is based on various standards, most of which have no security implicit in their design. As such it covers the equivalent of the bottom few layers of the ISO OSI stack model (i.e., physical upwards).

If you look at the Open-NFC web site it says,

In addition, because of the implicit security provided by the short range, the user interaction is very simple...

There is no such thing as "implicit security" when it comes to a radiating EM field system, be it near field or far field. This fact has been demonstrated so many times that you should treat any such statement as being, at best, "hand-waving fudge" and, at worst, "snake oil".

However this is not Open-NFC's fault; it is the fault of the standards on which their work is built.

The physical layer of NFC uses the 13 MHz ISM band to transfer both energy (for passive devices) and communications, using an approximation to an "air-core transformer". In practice it is actually two compact loop antennas, designed with a very small cross-sectional area compared to the wavelength so as to work with the magnetic field component, optimised for power transfer in the reactive near field when the antennas share a common axis (broad sides of the coils facing and overlapping).

Due to the requirement to power passive devices (just like RFIDs and other tags using contactless technology), the active "master" device puts out a very strong field which can be picked up from several meters away [1], dependent not just on its radiating efficiency but also on any objects in or near its near field (this includes conductors and dielectrics).

The mode of communication is Amplitude Shift Keying (ASK) with a modulation depth of either 10% or 100% (active to active). The modulating data waveform is either Manchester or Miller encoded, respectively. Neither modulation method is secure from spoofing, especially as the devices (in general) don't make field strength readings, so bit-by-bit attacks are easily possible.
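For the curious, the two line codings mentioned can be sketched in a few lines. This is a deliberately simplified illustration of the encoding idea, not a standards-accurate NFC codec (real NFC uses a "modified Miller" variant, and the symbol conventions below are one choice of two):

```python
# Illustrative sketch: Manchester and (simplified) Miller encoding of a bit
# list into half-bit symbols (1 = carrier high, 0 = carrier low).

def manchester(bits):
    """Manchester: 1 -> high/low, 0 -> low/high (one common convention)."""
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

def miller(bits):
    """Simplified Miller: a '1' puts a transition mid-bit; a '0' holds the
    level, except a '0' following a '0' transitions at the bit boundary."""
    out = []
    level = 1
    prev = None
    for b in bits:
        if b:                       # '1': transition in the middle of the bit
            out += [level, 1 - level]
            level = 1 - level
        else:
            if prev == 0:           # '0' after '0': transition at bit start
                level = 1 - level
            out += [level, level]   # no mid-bit transition for '0'
        prev = b
    return out

print(manchester([1, 0, 1]))  # [1, 0, 0, 1, 1, 0]
```

Note that neither encoding carries any authentication: an attacker who can overpower or selectively modulate the field can flip symbols bit by bit, which is the spoofing weakness described above.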

There is a broad (but incorrect) assumption that, due to the physical alignment of the coil antennas, a "Man In The Middle" attack is not possible [2]. This gave rise to the often-used ISO protocols that unfortunately are open to "replay attacks".

Unfortunately, as you work your way up the stack, the ISO standards do not really mitigate the security issues, and thus the mitigations need to be done at the application layer. Whether the app-level protocol designers will do this or not is an altogether different subject [3]; it is, however, not Open-NFC's issue, as they are providing a standards-compliant stack.

[1] : The reactive near field of a "point source" antenna (which a small/compact loop antenna or large coil inductor resonant circuit forms) is approximately two wavelengths deep, and the E and H fields make an orthogonal transition with respect to each other into the radiating near field. In the reactive near field the H field is predominant at very close range to the point source. As a very rough approximation, the "magnetic" H field intensity drops with volume and the E field intensity drops with surface area, with power dropping to approximately 1/50th at two wavelengths, where the E field predominates in the radiating near field and the fields now form a reasonable approximation to a "plane wave" (in near-field antenna measurements the rule of thumb is 3 wavelengths from the antenna aperture).
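A quick back-of-envelope calculation shows why "several meters" of pickup range is unsurprising: at NFC's 13.56 MHz carrier the wavelength is over 20 m, so the whole usable space around a reader sits deep inside the reactive near field. The region boundaries below use the rough approximations from the footnote, not exact antenna theory:

```python
# Rough field-region estimates for a 13.56 MHz NFC carrier, using the
# footnote's approximations (reactive near field ~2 wavelengths deep,
# near-field measurement rule of thumb ~3 wavelengths).

C = 299_792_458.0  # speed of light, m/s

def field_regions(freq_hz):
    wavelength = C / freq_hz
    return {
        "wavelength_m": wavelength,
        "reactive_near_field_m": 2 * wavelength,
        "measurement_rule_of_thumb_m": 3 * wavelength,
    }

r = field_regions(13.56e6)
print(f"wavelength            ~ {r['wavelength_m']:.1f} m")   # ~22.1 m
print(f"reactive near field   ~ {r['reactive_near_field_m']:.1f} m")
```

So an eavesdropper a few meters away is still well inside the reactive near field, where the strong H field dominates.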

[2] : It is possible to make a "shim" device that will allow man-in-the-middle attacks, but their design is at best difficult for various reasons, and it's usually easier to open the box and put a device between the reader head and the following electronics, or to replace the reader head entirely.

[3] : As I frequently say, unless security is built in before the specification is written, it will be an afterthought (as seen in the ISO standards used for NFC). The problem with afterthoughts is that they generally result in either a significant kludge or "next revision maybe" behaviour; either way the apps are likely to be significantly less secure than required (or not secure at all) and be very vulnerable.

Best to look at some specifications. From the "Connection Handover Technical Specification", NFC Forum 1.2, section 2.10 "Security Considerations":

"This section is meant to inform application developers and users of the security limitations in the Negotiated and Static Handover protocol described in this specification.

"The Handover Protocol requires transmission of network access data and credentials (the carrier configuration data) to allow one device to connect to a wireless network provided by another device. Because of the close proximity needed for communication between NFC Devices and Tags, eavesdropping of carrier configuration data is difficult, but not impossible, without recognition by the legitimate owner of the devices. Transmission of carrier configuration data to devices that can be brought to close proximity is deemed legitimate within the scope of this specification. In case the legitimate owner of the devices has concerns over the confidentiality of the data, an additional security analysis is necessary that takes the system in question and the operating environment into account."

@ Clive Robinson describes the transport layer from an E&M perspective. Although correct, other layers on top can handle security.

It makes sense to look at the transport layer and security layer as two separate functional components.
Here is a related (video) link from Infineon. @ Clive Robinson may like it, fits some of his ideas ;)

I've often said many of the modern security community are just reinventing the wheel over and over, that many current problems were solved by systems in the Orange Book days (and in academia today). Well, this author agrees & describes an early A1 candidate that was both ahead of its time & designed/implemented very well. Most modern secure systems, even small ones, still aren't designed this rigorously.

I particularly like the last paragraph. It tries to answer the question I've been asking myself.

"It may be that UNIX came along and swept up a new generation, and the “old skool”
operating systems and their “old guard” were not able to pass along the accumulated
knowledge. It may be that so many of the older papers and research and real-world
experience are not available online and, hence, not findable with a quick Google
search. Or it may be that the computer science and engineering curricula aren’t covering
the history of computing at all, let alone the history of secure computing. Whatever
the reasons, we’re losing a lot of time and energy rediscovering technology and
re-visiting the same failed solutions over and over again."

@ Wael
I wouldn't drink too much of the Infineon security guard "Kool aid". It is very likely to give you an enormous headache without much added security...

Always keeping stored data encrypted is a very difficult problem. Infineon has made some steps forward, but they are a long way from really fixing the problem of unauthorized data recovery from smart cards.

Too late. I drank that Kool aid many years ago. I can tell you first hand, it's one of the few companies I have respect for in that area -- worked with (not for) them for several years.

BTW I don't want to discuss weaknesses / attacks publicly....

I don't and can't either when it comes to naming companies. That's one reason I try to keep the discussions generic, and basics / principles based. I understand "security people" need to keep a tight lip.

So whose drink should I be imbibing? Nick P's Orange Book "Tang"? Or Clive Robinson's "Warden Earl Grey Tea"? Whose "Kool aid" are you drinking these days? Maybe I'll give it a try :)

So whose drink should I be imbibing? Nick P's Orange Book "Tang"? Or Clive Robinson's "Warden Earl Grey Tea"?

May I suggest, from the confines of my hospital bed, you try a refreshing cocktail of cool mint tea with a tangy citrus addition that includes the best of the ingredients available?

The reality is we are all coming at a very large and difficult problem from several directions and points on the computing stack, between the pink squishy "organware" down to the vagaries of quantum mechanics. We are also trying to reach out from our limited perspective and the assumptions of our tangible, physically constrained viewpoint, out to the intangible, not physically constrained information viewpoint, which even fundamental physics has problems with (in theory it is not limited by forces or the speed of light, which makes direct "physical" measurement a bit difficult at best).

As I've said before, Nick P's approach is pragmatic and based on what is currently available, which in turn is based on much work that has gone before it, well thought out and solidly reasoned within the constraints and practices of the times.

It has, however, a fundamental concept underneath it, which is the "single CPU" architecture; whilst that will always remain a valid viewpoint, it has major limitations and is at the very beginning of the "what's possible" continuum. Primarily, it assumes that "the hardware can be made trustworthy and it can be maintained as trustworthy" by methods that go back centuries.

We know from mathematical work done before the electronic computer was invented that a generally usable system of logic (later called a Turing engine) cannot verify itself. That is, it has no way to detect whether it has been modified in some way and is in effect lying to itself and reporting those lies onwards.

We do know that, although similar issues exist with non-general systems of logic, we can design limited state machines that are hard coded and will either report truthfully or will make incomprehensible noise or easily detected contradictions. Thus, from the report, we can have much greater confidence in what they are reporting (i.e., clear truth, or incomprehensible and contradictory nonsense). If such a state machine is used to check a general-purpose logic system, then provided it has been designed correctly, its reports will be more trustworthy than anything the general system could ever report on its own (there are issues, but they are probabilistic in nature, which is a very important point to remember).
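As a toy illustration of the idea (my own construction, not Clive's actual design): a hard-coded checker whose transition table is fixed at build time cannot be "reprogrammed" by the data it observes, so any off-protocol report from the general system it watches surfaces as a detectable contradiction rather than a plausible lie:

```python
# A tiny hard-coded state machine checker. The protocol (transition table)
# is fixed; the checker either reports "consistent" or names the first
# contradiction it sees in the monitored system's reports.

EXPECTED = {"IDLE": "READY", "READY": "BUSY", "BUSY": "IDLE"}  # fixed protocol

def check(reports):
    """Walk the fixed protocol over a report sequence."""
    state = "IDLE"
    for r in reports:
        if r != EXPECTED[state]:
            return f"contradiction: expected {EXPECTED[state]}, got {r}"
        state = r
    return "consistent"

print(check(["READY", "BUSY", "IDLE"]))  # consistent
print(check(["READY", "IDLE"]))          # contradiction: expected BUSY, got IDLE
```

The point is not that this checker is clever, but that its verdict cannot be altered by anything the checked system says: there is no path from the data to the transition table.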

If you look at the requirements for A1 to ensure trustworthy hardware, they are not really practical these days (shipping with armed guards being just one of many extraordinarily costly and impractical requirements).

So we know the problems of the past were only really considered in standalone single-CPU architectures, from the assumption of "trusted hardware" up the software stack until you get to the squishy pink stuff. This is again problematic because we now work in networked environments using multi- and distributed-CPU systems, which means the attack surface is in continuous flux and effectively "unconstrained".

If you are only dealing with "outsider attacks" then the old rules still work very well. But not so with insider attacks by software developers or from supply chain attacks.

Some of what I've been proposing involves ensuring that systems are not just "trustworthy on delivery" but remain trustworthy in hostile environments, and offer greater protection from insider attacks (intentional or unintentional) by developers and, to a lesser extent, from supply chain attacks.

So please don't look at it as being Nick P -v- Clive Robinson; it's not. In a final system all of Nick P's solutions remain valid from the user's fingerprints downwards; I'm working further down the stack to augment the approach to trustworthy hardware. Robert T is working even further down the stack, at the very bottom of the supply chain, working upwards. So you could say I'm "piggy in the middle" ;)

The point behind the title of "Castles-v-Prisons" is in reference to the hardware environment. A single-CPU architecture with large memory spaces, and usually only software attempting to enforce segregation, is very much like the inside of a castle, where the guests are usually trusted and given minimal constraint on movement. The system I'm proposing exerts much more control over this internal space, and is much more like the way a prison works: the inmates are not trusted, their environment is kept small, their communications are heavily monitored, and their environment is subject to frequent searches.

Currently what Nick P proposes for the software end sits directly on what I'm doing. However, below what I'm doing and above what Robert T is talking about/doing is a very, very large chasm, which I don't think can realistically be filled but only bridged, and this is an area which researchers' published papers really don't cover or even consider currently.

So I've made a sweeping and possibly incorrect assumption as a starting point: "It cannot currently be solved and thus can only be mitigated". And this is where you start getting into the joy of "probabilistic security". In many respects you can, at the high level, think of it the same way the A-bomb designers did during the second world war. They knew that the only way they were going to get the results they needed was with probabilistic mathematics, and this gave rise to Monte Carlo methods.
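The Monte Carlo idea in miniature: estimate a quantity you cannot compute directly by random sampling. The classic toy example below estimates pi from the fraction of random points landing inside a quarter circle; the security analogue would be estimating a leak probability by sampling attack scenarios rather than enumerating them (the analogy is mine, illustrating the method, not any specific security tool):

```python
# Monte Carlo estimation in its simplest form: sample random points in the
# unit square and count the fraction inside the unit quarter-circle, whose
# area is pi/4. The estimate converges as the sample count grows.

import random

def estimate_pi(samples, seed=42):
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

print(estimate_pi(100_000))  # close to 3.14159, tighter with more samples
```

The important property for "probabilistic security" is the same one the A-bomb designers relied on: you trade an intractable exact answer for a bounded-error estimate whose error shrinks predictably with effort.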

My personal view is we will never have anything other than probabilistic security in practical and usable systems, so we may as well start work on them now to save considerable pain later.

However I've a bad habit of being 20-30 years ahead of the academics when it comes to ideas. This is not because I'm some kind of "super genius" or "in the right place"; it's just that I refuse to be constrained by their out-of-date methods, usually enforced by publishers and funders, that make the researchers' forward progress like trying to swim in cold treacle. I prefer to skip ahead across the surface and with light footsteps make the paths that other explorers will investigate with (I hope) considerable rigour.

Clive's above synopsis seems pretty accurate. Especially about our approaches really complementing rather than competing. If there was competition, it would be in the market. At this stage, though, we're simply working on different approaches and sometimes different problems. RobertT especially: he's at a much lower level than me & anything I'm working on might port over to whatever people like him come up with.

"As I've said before, Nick P's approach is pragmatic and based on what is currently available, which in turn is based on much work that has gone before it, well thought out and solidly reasoned within the constraints and practices of the times."

True. Additionally, it fits within the existing certification frameworks, helping in marketability. However, I'll be clear that most of it isn't my approach: the principles & even many specific techniques were created by the likes of Schell, Karger, Irvine, Perrine, etc. A while back, Schell and Karger did an interesting piece that looks at the modern state of things compared to when they did the Multics system. Multics shows how a few tiny design & implementation choices can go a long way to providing good system security. Most modern systems make the exact mistakes Multics avoided... in the 70's. :( It's a nice, short read.

"Currently what Nick P proposes for the software end sits directly on what I'm doing. However, below what I'm doing and above what Robert T is talking about/doing is a very, very large chasm, which I don't think can realistically be filled but only bridged, and this is an area which researchers' published papers really don't cover or even consider currently."

The radical TIARA and SAFE architecture approaches achieve many similar goals to both Clive & I's designs. The software side of my systems can also work with those. This indirectly supports Clive's assessment of the situation. Anyone interested in Clive's stuff SHOULD check those two out, as they're DARPA-funded projects.

"My personal view is we will never have anything other than probabilistic security in practical and usable systems, so we may as well start work on them now to save considerable pain later."

We could say all security is probabilistic. There are also probabilistic processor architectures, either on the market by now or being made into marketable products for various reasons. There are also probabilistic approaches to exploit detection, NIDS, covert channel suppression, etc. It's like a little cousin field, coevolving alongside the deterministic research. We really do need to start integrating the efforts & making choices about which are better. I think that even the Jason scholars' effort to redesign the Govt classification system utilized a probabilistic approach, esp for information dissemination.

"However I've a bad habit of being 20-30 years ahead of the academics when it comes to ideas. This is not because I'm some kind of "super genius" or "in the right place"; it's just that I refuse to be constrained by their out-of-date methods, usually enforced by publishers and funders, that make the researchers' forward progress like trying to swim in cold treacle. I prefer to skip ahead across the surface and with light footsteps make the paths that other explorers will investigate with (I hope) considerable rigour."

Probably, but you're plenty knowledgeable/brilliant. It certainly helps. I find the classic definition of creativity, consistently reframing the problem, to be very helpful in coming up with good approaches. Some of it can be made systematic in a way that keeps paying off in different situations. The Shared Resource Matrix for covert channel detection is an example of one that can be used in a myriad of different situations, even for non-covert-channel issues.
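The Shared Resource Matrix technique mentioned above can be sketched in miniature: list each shared resource attribute, mark which operations can Read or Modify it, then flag any attribute that one operation can modify while a different operation reads it, since that pair is a covert channel candidate. The resources and operations below are made-up examples, not from any real analysis:

```python
# Hypothetical mini Shared Resource Matrix. "R" = operation can read the
# attribute, "M" = can modify it. An attribute with a modifier and a
# distinct reader is a covert channel candidate.

MATRIX = {
    "disk_free_blocks": {"create_file": "M", "stat_fs": "R"},
    "file_lock_state":  {"lock_file": "RM", "try_lock": "R"},
    "process_priority": {"renice": "M"},
}

def covert_channel_candidates(matrix):
    out = []
    for attr, ops in matrix.items():
        modifiers = {op for op, access in ops.items() if "M" in access}
        readers = {op for op, access in ops.items() if "R" in access}
        if modifiers and (readers - modifiers):  # someone else can observe
            out.append(attr)
    return out

print(covert_channel_candidates(MATRIX))  # ['disk_free_blocks', 'file_lock_state']
```

Even this toy version shows the technique's generality: swap in hardware attributes (SMM state, cache lines, BIOS flash) for file-system ones and the same mechanical scan flags the hardware channels Nick P alludes to below.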

For me, though, I'm also trying to make stuff that works & can be used in the short-term. So, I look to the past for proven solutions. The paper I posted recently by Perrine on KSOS illustrates the opinion he and I share that ITSEC just keeps reinventing wheels rather than building on what we know. Example: ancient MULTICS used a tiny design change to immunize itself against buffer overflows, yet modern OS's try to do everything but this old thing that works (SourceT is an exception). So, recognizing this issue, I just started digging deep into the past to find just about every idea & solution they came up with. Repeated that process for modern academics and the commercial sector. Then, it's just a matter of defining the problem, breaking it into solvable subproblems, & mixing all the existing stuff together into different combinations until I find stuff that works.

I mean, sure I use real creativity & innovation at times. An example, my idea for append-only systems for log integrity predated a recent academic paper by almost a decade. ;) But, most of the time, there's so much good work that the "big unsolved problem" is actually both "solved" conceptually & the solution is just an engineering problem. Even the "discoveries" of Intel SMM mode, BIOS rootkits, etc. were obvious if you applied SRM technique to hardware on the old model. Matter of fact, there were older systems designed to prevent some of those issues in various ways. So, instead of trying to solve X, I look at the stuff that has been solved, see if I can apply those to X, and for many problems the solution was mostly already there. If I'm trying to shortcut it, the solution can look clunky & even ridiculous, but it's doable & by one guy at that. A professional solution would probably be far better.

Yet, the systems the pro's are building can be owned by an email attachment or network packet. That's saying something.

I also agree with your assessment. I will have to dig deep in the bowels of the blog to find out what RobertT works on. The picture I have in my mind now is him sitting at a bench with 10 million dollars worth of equipment and cool toys. Probably an electron microscope, some chemical etching materials, the works... I am guessing he is an ASIC designer.

What's up with your use of "I's design". I have never seen "I" used in the possessive form before.

As usual, your post contains an immense amount of information. I will address a selection of points you made. Later on, we can pick on some of what you talked about.

So please don't look at as it being Nick P -v- Clive Robinson it's not.

I never did. Since my first post on the subject, I have always maintained that both your approaches are complementary.

Robert T is working even further down the stack at the very bottom of the supply chain working upwards.

I don’t have, as yet, much awareness of the nature of RobertT’s approach or philosophy.

However I've a bad habit of being 20-30 years ahead of the academics when it comes to ideas. This is not because I'm some kind of "super genius" or "in the right place"

Don’t be so humble! You are pretty sharp. If you lost 120 points from your IQ, maaaaaaaaybe you would be a genius ;)

it's just that I refuse to be constrained by their out of date methods usually enforced by publishers and funders that make the researchers forward progress like trying to swim in cold treacle. I prefer to skip ahead across the surface and with light footsteps make the paths that other explorers will investigate with (I hope) considerable rigour.

We are both in the same boat. Apparently I am seasick though! I can’t seem to convey my approach convincingly.

We know from mathematical work done before the electronic computer was invented that a generally usable system of logic (later called a Turing engine) cannot verify itself. That is, it has no way to detect whether it has been modified in some way and is in effect lying to itself and reporting those lies onwards.

Continuing the discussion...
Are you suggesting, in your design, or have you considered, that one Turing engine can verify another Turing engine -- say in a multi-CPU system (SMP or otherwise)?

In my opinion, the best security solution really depends upon the constraints of the problem. Most of what Clive and Nick discuss fundamentally assumes that the attacker has never had access to the computing device. If access was granted / achieved at any stage, then you fundamentally need to step back and question what forms of information leakage are possible. Unfortunately, for the security professional, this will often take them outside of their area of expertise.

To properly understand what attacks are possible, you need an in-depth understanding of the physics of the problem. This means RF and analog design expertise, as well as a good knowledge of chip failure analysis methods. This combination of skills is extremely unusual, but adding the requirements of an in-depth knowledge of CPU architecture and modern-day crypto kinda results in an empty set.

Unfortunately, modern-day secure computing fundamentally requires oversight from someone with all of the above skills; for this very reason it is improbable that any significant security advances will be made. The new "high security" methods open as many doors as they close, so the job for hardware hackers is to stay current / ahead of the curve; this means they need to keep refining their skill set as the target moves. In essence, you need to understand what doors are being opened.

"Whose "Kool aid" are you drinking these days?"

I'm no longer in the business of using or creating secure computing devices; this frees me up to think about attacks that are a little outside the expected.

"I will have to dig deep in the bowels of the blog to find out what RobertT works on. The picture I have in my mind now is him sitting at a bench with 10 million dollars worth of equipment and cool toys. Probably an electron microscope, some chemical etching materials, the works... I am guessing he is an ASIC designer."

Afraid I'm not an ASIC chip designer; I've always focused on full custom solutions, usually with significant analog content. WRT security this also means that I'm not constrained by the limitations of a binary world....

WRT debug equipment, I develop most of what I personally have, and can usually convince an FA hardware supplier to equip me for FREE: they need a methodology, I need hardware, so we work together. Of course I keep the hardware when we are finished.

WRT microscopes, I usually don't use SEMs (scanning electron microscopes); I prefer NSOMs and other forms of AFM. The only point that I would make about "cool equipment" is that it is all available to anybody that really understands how to use it. FIB equipment costs about $1500/hr and is easily purchased an hour at a time. If you know how to use this time efficiently, then even a couple of hours is enough to make significant inroads into the security of the chip.

For probing the internals of a chip, I like to use methods like TIVA and LIVA; however, I usually use these tools in an active closed-loop form, whereby device performance is modified by the injected stressor. Think about what happens to a semiconductor lattice when it is voltage stressed: how does this modify its optical properties (linear and non-linear)?

"Most of what Clive and Nick discuss fundamentally assumes that the attacker has never had access to the computing device. If access was granted / achieved at any stage, then you fundamentally need to step back and question what forms of information leakage are possible. Unfortunately, for the security professional, this will often take them outside of their area of expertise."

True. Most A1-class stuff assumes physical protection & trusted administrator. Most of my stuff falls into this category, too. Those of us that have some expertise in stopping easy physical threats know how hard THAT is, much less doing things to the level RobertT talks about. Hence, we just say as a rule if the enemy has physical access you're screwed.

I have designs reducing that exposure by putting the TCB into physically tamper-resistant technologies. There aren't many such designs, and they're a pain to work with. Well, things like digital signatures are the obvious exception. RobertT discusses issues that must be overcome to eventually design things that can be trusted to chip level & do more general stuff in a trustworthy fashion.

If access was granted / achieved at any stage then you fundamentally need to step back and question what forms of information leakage are possible. Unfortunately, for the security professional, this will often take them outside of their area of expertise.

And way, way beyond their comfort zones. Essentially, this is another way to look at part of,

"...however, below what I'm doing and above what Robert T is talking about/doing is a very, very large chasm, which I don't think can realistically be filled but only bridged, and this is an area which researchers' published papers really don't cover or even consider currently."

There are a couple of ways you can look at,

"... assumes that the attacker has never had access to the computing device. If access was granted / achieved at any stage, then you fundamentally need to step back and question..."

Firstly the type of access, which can be,

1, Software only.
2, Hardware only.
3, A combination.

They are fundamentally very, very different, and it is important to keep this in mind at all times.

However, the traditional way of dealing with such access is covered by "541t or Bust": that is, you examine how you perceive information can escape and put a detector in place.

This is part of what Robert is talking about with,

"... what forms of information leakage are possible."

If you do not perceive the mechanism/route of the information leaking then you won't put a detector in place for it (except by accident or luck).

Thus, when and if your detector triggers, you examine what you think you see, and then, if you believe it's valid, you assume you've been owned somehow... You could call it the "banging stable door" system, in that you don't check the horse, only what's stopping it moving out of the stable, and only if the door bangs hard enough to wake you up...

The other part of what Robert is referring to is "closing channels", but this likewise presupposes you know what the available channels actually are, and that you can actually close them down. For instance, cache or I/O leaks are part and parcel of normal operation and thus cannot in reality be closed down...

Realistically these traditional approaches can only be taken with strongly delineated systems which have clearly identifiable tasks that can be easily segregated and fully monitored at the perimeter of each stage. Which to be honest only applies at best to "single function" "single chip" "single CPU" type systems, unless you want to put in more effort than the expense would justify (Nick P can give you an idea of just how costly such segregated and heavily monitored, very limited function designs can be).

There is however a catch: monitoring is like continuous testing, in itself it's a very significant back door with potentially more leaking channels than the original system...

Although at first the problem looks insoluble it's not. The solution is to reduce the complexity of each stage to a minimum and have the minimum of inter-stage communication using very precisely and simply designed protocols. Oh, and absolutely no feedback other than "fail hard", "re-start from the beginning". There are other issues such as "stage transparency" but these can generally be solved by carefully selected "clock the inputs, clock the outputs" designs for time based channels.

Now if we go back to my points about malicious access being software / hardware / combination and look at just the software access.

Software needs memory to exist which is accessible both to the attacker and to the execution unit. If the memory is not actually available, or that available to the execution unit is not available to the attacker, then a software based attack can in effect be stopped before it starts. However there is a fly in the ointment which I shall for simplicity call "interpretive issues".

To be of use a computer acts as one of three things,

1, Data source.
2, Data sink.
3, Data modifier.

If designed correctly the data source and sink should not react to the data, that is they use out of band communication for control purposes that are unavailable to a potential attacker.

The data modifier is where interpretive issues arise. I could go through each type but I won't; put simply, if the data modifier's operation is unaffected by the data (say a binary to Gray code converter), and nor is its operation affected by stored data (think a simple DSP action), then the interpretive issues do not arise. If however either immediate source data or stored data affect or can affect the operation of the modifier, then the modifier is "interpreting" the data and can thus be covertly programmed to a limited extent via the data.
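
The binary to Gray code converter mentioned above is a neat example of a data-independent modifier, and it is small enough to sketch (illustrative only):

```python
# A binary-to-Gray-code converter: a data modifier whose *operation* is
# unaffected by the data. The same fixed transform is applied regardless
# of input value -- no branches, no stored state, no data-dependent
# control flow -- so there is nothing for an attacker to "program"
# through the data channel.

def binary_to_gray(b: int) -> int:
    # Classic formula: g = b XOR (b >> 1).
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    # Inverse transform: XOR together all right shifts of g.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Round-trip check over a small range.
for i in range(16):
    assert gray_to_binary(binary_to_gray(i)) == i

print(binary_to_gray(5))  # 7 (0b101 -> 0b111)
```

Contrast this with any modifier containing data-dependent branches: the moment the input can select which operation runs, the modifier is "interpreting" and a covert programming channel exists.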

Stopping "data interpretation" is not an easy task; however many parallel programming algorithms actually work this way, which means there are known ways of reducing the effect.

Software is in effect a "down the stack" attack, whilst hardware attacks are in effect "up the stack". I'll let you have a think on how you would prevent up the stack attacks from a higher level, but as an indicator I'll say "Voting protocols".

I'm sure at this point however that both Nick and Robert will have other aspects to raise, and no doubt some I won't have covered in the past (the subject is just too darn big for even quite a few blog posts or even academic papers ;-)

In my opinion, the best security solution really depends upon the constraints of the problem.

I am in agreement with you. The problem needs to be defined, and equally important, the goal of the solution needs to be stated as well. Then the "product" should be described.

To properly understand what attacks are possible, you need an in depth understanding of the physics of the problem, this means RF and Analog design expertise, as well a good knowledge of chip Failure analysis methods.

Yes, but this is only one requirement. When designing a system, the Security Architect cannot be stuck in the mentality of the Attacker. This is another discussion though...

For probing the internals of a chip, I like to use methods like TIVA and LIVA,

Yeah, I guessed that from one of your Tera Hertz laser posts. You seemed to have interest, knowledge, and access to similar equipment.

"Yes, but this is only one requirement. When designing a system, the Security Architect cannot be stuck in the mentality of the Attacker. This is another discussion though..."

Intentionally or accidentally, you're about to start a fun discussion. ;) I will say, ahead of time, that it isn't a bad thing to get stuck in w.r.t. the hardware security topic. Most modern chips failed b/c people didn't think like the attackers. They thought in terms of compliance, old attacks, etc. The security mindset has more out-of-the-box results. It's far from the only element in secure hardware, but it should be used extensively. The odds lean far in the attackers' favor on defeating chip security. At the moment, anyway.

Mondex is a perfect example of where an attacker's mindset should have been used extensively. All that effort into security, even EAL7-type software development, & it can probably be defeated using the typical attacks at Ross Anderson's lab. (shakes head slowly)

Don’t look at the individual skillset, look at the team's. You need to consider a team of subject matter experts.

In theory yes in practice no.

You need to have a think about team dynamics and scope of skill sets.

The scope of skill sets is an interesting problem that usually is not an issue in normal engineering practice. However when it comes to security it is a real problem.

Think of it as a variation on the "putting fruit or eggs in a box" problem. No matter how you arrange the fruit or eggs you will only get minimal touching and plenty of space in between. And it is this space where security holes happen, because generally there is little or no overlap of expertise.

The other issue is a mixture of "group think" and "dominant personality" issues.

One way to resolve this is to have two teams where members rotate through one group to the next and back again. One team is the "design" or blue team the other the "attack" or red team and you treat it as a "competitive event".

The thing to avoid is it becoming a "code review" process where the "blind" stare aimlessly where they think the sighted predator's footprints will be.

I've personally advocated the same Red/Blue rotating strategy that Clive mentioned and agree it might work. Most low-defect and high assurance development processes do a form of it, minus rotation. The first to get it right was IBM's legendary Black Team.

@wael
"Don’t look at the individual skillset, look at the team's. You need to consider a team of subject matter experts. HW, SW, Solid State Physics, Chemical Engineers, RF, Analog / Digital, etc...
Hard to find a single person who is a specialist in all these areas. One or two of them, maybe, but a generalist in the rest is the best you can hope to find."

Security systems reality is that we must use teams of experts. Unfortunately it means that all "experts" must be implicitly trusted, because by definition they are the expert, so oversight by another team member is pointless (if the expert really wanted to hide a backdoor he'd presumably know how to do it so that no other team member could find his back-door).

There is a second issue to do with the dissemination of attack vector information; this is a point that I have discussed with Nick before (wrt open source code review). In essence, if the coder does not understand a hardware attack (such as DPA) then he can never code against this attack. (For your information, DPA was a common attack method for at least 15 years before anyone in the open source secure computing community even knew of the attack.)

There is a related problem: a true security expert usually knows about, or strongly suspects, that a new attack method could be developed using whatever method. Generally he/she will never want to reveal this weakness until they have a fix.

So the concept of a generalist overseeing an expert works in most development situations, but it is difficult to make work in the security field.

Merging threads – they are related… Security systems reality is that we must use teams of experts, unfortunately it means that all "experts" must be implicitly trusted…

At dusk (AKA end of the day), you have to trust someone. You can’t escape this fact, excluding @ Clive Robinson’s novel approach, which presumes nothing is trusted, including the hardware and the SW developers.

Back to team formation:
There should be more than one expert in each area. Experts and testers are involved from the inception phase. Everything needs to be checked at each design stage by the group. It will be hard for one “security expert” to hide a back door from peers, penetration / unit / functional testers, and architecture / design / conformance code reviewers.

so the concept of a generalist overseeing an expert,…
I am not proposing that either. This is not a working constellation. Allow me to clarify what I meant. What I am saying is that a security architect (for example) needs to be a subject matter expert in at least two or three areas, and a generalist in other relevant areas. S/he cannot be a complete zero in those areas. That applies to the rest of the team members. Let's say we were to construct a Cartesian coordinate system with two axes. Label the X-axis with the technology areas, for example: protocol analysis, cryptography, hardware design, software coding expertise, penetration testing knowledge, fuzzers, side channel attacks, etc. (this is not comprehensive, but should give you an idea what I mean). Label the Y-axis with numbers ranging from 0 to 100, where 80 or above is considered an SME (we can talk about what SME means, but this is an irrelevant detail at this high level, and we can revisit it later). If you plot skillsets on the X-axis against depth of knowledge on the Y-axis for an individual member, then a security architect should score 80 or above in two or three areas relevant to the architecture and design, and should score 40–50 in other areas, but not zero in any of them. Apply that to the rest of the members. This is not a far-fetched requirement; such people do exist. You will need to construct the team such that the collective graph of the team is 80+ (or 90, or whatever, depending on other factors) in all areas.
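
A rough sketch of this skills-vs-depth idea as a coverage calculation. The member names, areas and scores below are purely illustrative, not from any real team; "collective" takes the best score per area across members, which is one plausible reading of the "collective graph":

```python
AREAS = ["protocol analysis", "cryptography", "hardware design",
         "software coding", "pen testing", "side channels"]

# Hypothetical team: each member scores 0-100 per area.
team = {
    "architect": {"protocol analysis": 85, "cryptography": 80,
                  "hardware design": 45, "software coding": 50,
                  "pen testing": 40, "side channels": 45},
    "hw_lead":   {"protocol analysis": 40, "cryptography": 45,
                  "hardware design": 90, "software coding": 50,
                  "pen testing": 40, "side channels": 85},
    "tester":    {"protocol analysis": 50, "cryptography": 40,
                  "hardware design": 40, "software coding": 80,
                  "pen testing": 90, "side channels": 50},
}

def collective(team):
    # Team's collective score per area: best individual score in that area.
    return {a: max(m[a] for m in team.values()) for a in AREAS}

def gaps(team, sme=80, floor=0):
    # Areas with no SME-level member, and any member scoring zero anywhere.
    cov = collective(team)
    weak = [a for a, s in cov.items() if s < sme]
    zeros = [(n, a) for n, m in team.items() for a in AREAS if m[a] <= floor]
    return weak, zeros

print(collective(team))
print(gaps(team))  # ([], []) -- every area has an SME, nobody scores zero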

I know this is practically – for various reasons – not easy to attain. I was more than once stuck in several roles where I had to design the protocol, write the code, and test my stuff because of lack of resources, or other factors. So I do understand where you are coming from. However, I am not proposing that a generalist oversee a specialist, per se.

Now I say: if a security designer is stuck in the mentality of a hacker (or a cracker), s/he will definitely miss some areas. Instead, they have to work with “principles” that cover whole classes of attacks rather than specific instances. This comes from past experience, where I have seen failures and successes. I am sure your experiences will include other tweaks I have not mentioned.

At dusk (AKA end of the day), you have to trust someone. You can’t escape this fact

Actually you don't and with a little thought you can see why.

As @RobertT has mentioned and @Bruce has referred to with "thinking hinky", knowledge moves forward with time, and new attacks become known and older attacks improved to the point of being practical.

So it's time for the quick Information-v-Knowledge speech, which can be followed through to show there can be no such thing as a future state of "trusted", only a past state confined within current knowledge.

Information intangibly exists: it has no physical form and is thus unconstrained by the laws of nature as we call them. It is also, as far as we know, unbounded and thus infinite. Knowledge on the other hand is information that has been crystallized into a physical form, either chemically in the brain or encoded on physical objects for storage and communication. Thus knowledge is like a butterfly in a net: it has a tangible physicality and is thus constrained, ensnared and bounded by our physical universe, and is thus finite.

The simple example of the difference between information and knowledge is of a bloke with too much time on his hands having a thoughtful "smoke" under a tree when an apple supposedly dropped on his head. The result was a chain of events that gave rise to "The Knowledge of Gravity" being formalized from the information about it, which has fairly obviously been around since before man existed.

All security exploits that are or ever will be known exist already as information, but mostly not as knowledge currently. Thus we live and work with "imperfect knowledge". We can also prove relatively easily that only a tiny fraction of information will ever become knowledge, and that at some point knowledge will have to become unknown in order for other information to become known as knowledge.

Thus you will, with a few more steps, get around to the argument of "how do you trust that which is hidden from you" combined with "imperfect knowledge means knowledge is always hidden".

The simple answer is in the CIA motto (with its unofficial rider) of "In God We Trust (all others we check)".

Which we know realistically in the real world is either no longer possible or requires constant checking, which in turn brings you back to,

Clive Robinson’s novel approach, which presumes nothing is trusted, including the hardware and the SW developers.

This is not to say they are being deliberately untrustworthy, but that you just cannot prove they are beyond any mistake or malicious behaviour etc. Hence my digging into the notion of probabilistic security, because we have to accept that at some point there can be no trust to build on, thus nothing can be assumed to be trusted.

Is probabilistic security a "kludge"? Yes. Can we avoid it? I don't think so...

It might, as you say, be novel, but I can't see any way to avoid it. But I'm always ready to be persuaded otherwise ;-)

Actually you don't and with a little thought you can see why.
As @RobertT has mentioned and @Bruce has refered to with "thinking hinky" knowledge moves forward with time and new attacks become known and older attacks improved to the point of being practical.

I think we are talking about two different things. Check RobertT's comment, and my subsequent reply.

Security systems reality is that we must use teams of experts, unfortunately it means that all "experts" must be implicitly trusted…

There's one heck of a difference between implicit and explicit.

RobertT further went on to say,

There is a related problem: a true security expert usually knows about, or strongly suspects, that a new attack method could be developed using whatever method.

To my reading that's one reasonable definition of "thinking hinky".

RobertT goes on to say,

Generally he/she will never want to reveal this weakness, until they have a fix.

Which again to my reading is a reasonable definition of "responsible disclosure" based on what is initially "imperfect knowledge".

My point is that systems cannot be "trusted" in the future, only in the past, and then only within the constraints of imperfect knowledge. This is just one reason why zero days are such an issue.

That is, you have a system that you "assume to be secure" and you then "trust it not to behave in a way you don't intend it to". However it's actually never been secure, and at some point the information of how it can be attacked becomes limited knowledge to one or more individuals. They then use this information to make your trusted system behave in a manner which you "trust it not to".

I simply assume that no system or person can be trusted because at the very least they have imperfect knowledge about vectors that exist within the system but are as yet unknown.

It is thus far safer to assume the systems or persons are implicitly untrustworthy and work out methods by which to mitigate this issue or limit the potential damage.

One such method is "voting protocols", which NASA used to improve safety etc. Basically you have three systems that are not trusted and they all use different hardware and software to do the same job. If they all produce the same answer at very nearly the same time, then either they are all acting as expected, or an attacker has managed to infect all three systems with malware at almost exactly the same time. Which is more probable?
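
A minimal sketch of such a 2-of-3 voting arrangement. The three "implementations" here are trivial stand-ins; in a real voting design they would be genuinely diverse hardware and software doing the same job:

```python
# Three independent implementations of the same job (squaring), each
# written differently, standing in for diverse untrusted systems.

def impl_a(x: int) -> int:
    return x * x

def impl_b(x: int) -> int:
    return x ** 2

def impl_c(x: int) -> int:
    # Repeated addition (only valid for non-negative x in this sketch).
    return sum(x for _ in range(x)) if x >= 0 else x * x

def vote(x: int) -> int:
    # Accept any answer agreed on by at least 2 of the 3 systems;
    # otherwise assume compromise or fault and fail hard.
    results = [impl_a(x), impl_b(x), impl_c(x)]
    for r in results:
        if results.count(r) >= 2:
            return r
    raise RuntimeError("no majority: assume compromise, fail hard")

print(vote(7))  # 49
```

The probabilistic argument is in the vote itself: a single faulty or subverted implementation is simply outvoted, and an attacker must compromise two diverse systems to produce a matching wrong answer.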

It's by no means perfect (nothing is), but it is way way more likely to mitigate or limit any damage than working under the false assumption a system is secure when in fact it is not, and somebody else now knows it is not.

Does that explain where I'm coming from when I say you do not and should not trust?

I hear you. But RobertT and I are talking about trusting a human being with a task. You are talking about trust in systems based on previous behavior. We are talking about two different things. Think of it this way, I want to build a team. Are you saying it's ok to build a team with members I have no trust in?

But RobertT and I are talking about trusting a human being with a task. You are talking about trust in systems based on previous behavior

And the difference is?

A human being that does a task for you is like anything else capable of doing a task, such as an ox or donkey used to turn a water pump, a homing pigeon used to carry a message, or a dog or pony used for transportation of goods and people.

If you go back just a little over one hundred years, the technology age was starting, with standard parts from a catalogue to allow people to build and repair machines easily, and Henry Ford and others started using humans as automata in a production machine.

Basically if you have a task you need doing that you are incapable of doing effectively yourself, you get some kind of tool to use as a "force multiplier". It could be as simple as a hammer and chisel that you use, or you employ a craftsman to use it because they are either more skilled at it or your time is more profitably employed elsewhere.

You implicitly trust the hammer and chisel not to break, but you do explicitly mitigate this by having a range of hammers and chisels with you, and other tools etc to effect repairs at a later stage back at the workshop.

For the last millennium or so this is what happens: a tool is used as a force multiplier, it is implicitly trusted to do the job, but it is usually explicitly mitigated by maintenance and repair or replacement as it wears or breaks.

A human is a tool, an animal is a tool, an engine is a tool, any kind of cutting edge is a tool and any kind of lever is a tool.

We know that the use of all tools is probabilistic in nature, that is they are not perfect and they make errors or mistakes. Some causes are easy to see, such as tool wear, others not so. Generally the more complex the tool is, the more complex the causes of its mistakes. When a tool gets to a sufficient complexity it is considered a living thing (for instance the yeast for our beer and bread) and other factors come into play; at further complexity tools become effectively self aware and have more complex patterns of behaviour based on perceived and actual needs.

The same rules however apply,

1, Previous behaviour is not a reliable predictor of future behaviour.
2, Previous desired behaviour may with future knowledge be seen as undesirable behaviour.
3, Undesired behaviour should be expected and monitoring put in place to detect / mitigate it.
4, Systems should be designed to still work with some undesired behaviour as part of normal operation.

The development of software or hardware is a manufacturing process using tools, some of which are human. We know mistakes are made at all levels, and in recent years we have put in place quality control systems to mitigate undesired behaviour within the manufacturing process.

As I've repeatedly said, "Security is a Quality Process". It might be considerably more complex than most quality processes and be subject to way way more "imperfect knowledge", but with a little forethought mitigation is possible, especially when for just a little increased complexity you get a whole lot more mitigation and ease of control.

All processes use tools, and as I've indicated the tools might range from simple levers and cutting tools, through the energy sources of sinew and muscle, to the unknown behaviour of the little lumps of pink and grey porridge that occupy our craniums. All processes have errors, and by and large we can design systems to mitigate those errors.

So seen from that perspective, what is the difference between a tool that is a computer system and a human that is also being used as a computational tool, albeit beyond what any man designed machine can currently do?

So seen from that perspective, what is the difference between a tool that is a computer system and a human that is also being used as a computational tool, albeit beyond what any man designed machine can currently do?

No difference. Consequently, the "human tool" which happens to be designing the secure system needs to be subject to Quality Control as well. My QC department states the designer should not wear the cracker's/attacker's hat, because the probability of mis-designing the system to respond to specific known attacks will be increased at the expense of missing other classes of attacks.

@wael
"My QC department states the designer should not wear the cracker's/attacker's hat because the probability of miss-designing the system to respond to specific known attacks will be increased at the expense of missing other classes of attacks."

I'm not sure what to make of this statement. Hackers / crackers come in all knowledge levels, from script kiddies to Astrophysics PhDs. The good hackers share a common trait of seeing a minuscule problem where others just see the beauty of a well engineered product. It is this "hinky" trait that makes a great hacker, and I'd go so far as to say that this same trait is essential for any generalist security team leader. You have to be able to see that little hanging thread before you can start to pull on it and see what happens.

I definitely agree that you should not put a script kiddie into a security team leadership job just because s/he can find one way to break into your system. If you do, you will probably get belt-and-braces protection against the exact attack their script used; however in all likelihood they'll open up more back-doors than they'll ever close. This is what Clive is talking about wrt "imperfect knowledge". The current trend in hardware is to spread the clocks to make DPA a less effective attack; however, as it is typically implemented it actually increases the DPA signal to noise ratio by providing "processing gain". This is a perfect example of a fix which actually fixes nothing, it only moves the ball; the fix actually helps anyone that understands how to use Rake receivers.
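
The "processing gain" point can be illustrated with a toy averaging experiment. The numbers below are entirely synthetic (nothing like a real DPA trace set): a tiny data-dependent leak is invisible in any single noisy trace, but averaging many aligned traces pulls it out of the noise, which is why a countermeasure that merely moves the signal around in time fails once the attacker can re-align.

```python
import random

random.seed(1)

SIGNAL = 0.01   # tiny data-dependent leak per trace (hypothetical)
NOISE = 1.0     # per-trace noise amplitude (hypothetical)

def trace() -> float:
    # One aligned measurement: leak buried under Gaussian noise.
    return SIGNAL + random.gauss(0, NOISE)

one = trace()
avg = sum(trace() for _ in range(100_000)) / 100_000

# A single trace is dominated by noise; the 100k-trace average sits
# close to SIGNAL (noise on the mean shrinks as 1/sqrt(N)).
print(abs(one))
print(abs(avg))
```

This is the rake-receiver intuition in miniature: any processing step that lets the attacker coherently combine traces multiplies the effective signal, so randomised clocking only helps if the attacker genuinely cannot re-align.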

I've argued this with NickP in the past wrt open source software. The gist of the argument being: how can you trust a dispersed group of software writers to police each other when most of the group will never really understand the newest attacks?

If the average code cutter's knowledge of security lags up to 20 years behind the expert hacker, then they can never hope to address "zero days" or exotic hardware hacks. Nick would probably reply that carefully constructed / QC'ed code that is 20 years behind the bleeding edge is still far better than the average *inx / windoze alternative, and good enough for 99.99 percent of the population. As for the other 0.01%, well they need air-gaps and secure boots and secure hardware and ...and...and so on, so NO solution will ever really meet their needs.

There is another aspect to the security problem that's worth mentioning, it is the "business relationship" and "financial liability" side of things.

In most good working business relationships there is an expectation that "shit happens", so a good measure of the business commitment is how quickly we can get the client up and working again after a security breach or other failure. Since haste is the antithesis of security, it follows that the business relationship expectations of our partners are often exactly opposite to what's required for good secure code.
As for financial liability, it would be nice if the "joe average" juror actually understood 1/100 of what we are talking about here, BUT they don't, so they will assess penalties based mainly on how un/responsive they perceived the software supplier to be. SO delivering complete crap quickly actually provides the lowest financial risk. It's crazy but it's reality....

"I've argued this with NickP in the past wrt open source software. The gist of the argument being, how can you trust a disperse group of software writers to police each other when most of the group will never really understand the newest attacks?"

The last sentence is true of OSS, but I don't think the majority of my approaches fit the idea of "average" FOSS programmers using things 20 years behind the bleeding edge. Sure, MULTICS beat buffer overflows & did ring-based POLA almost 40 years ago. The A1 candidates were 90's. Yet, separation kernels, language based security, recovery architectures, & the likes of the Flicker architecture are all from the past ten years (some 2010-2012). Additionally, even medium robustness (EAL5-equiv) requires specialist security engineering knowledge, so anything I'm calling good against good attackers is going to have at least one specialist on board or consulting, preferably a group trained a bit in it. Many other proposals defeat the easy attacks and/or make hard attacks harder. Regular OSS software doesn't compare to things I advocate, as the OSS concept alone doesn't hack it.

"As for the other 0.01% well they need air-gaps and secure boots and secure hardware and ...and...and so on, so NO solution will ever really meet their needs."

The one true part, although prolly 0.000001%. Here's the thing, though: it's hard to beat the defenses geared at that level & things are so bad nobody usually has to. The "advanced" hackers were compromising virtually all the major players with spearphishing & run-of-the-mill malware. Why? It's that easy. Change it to "esoteric attacks often only option" and "ridiculous amounts of labor to find one usable bug", then you have way less damage. Add modern detection methods, recovery architectures, robust system monitoring solutions (not AV, the good kind), etc. & the situation is far less bleak for the defender. Yet, for reasons that are part technical & more economical, I think the last part of your statement will be valid for many defenders.

Shipping product is priority No. 1. It keeps real-world customers happy. Money comes in. If bad things happen, that stream of money can deal with them acceptably. Steve Lipner, who did and cancelled an A1-class product, wrote an entry I won't forget on the subject. It's part of why your statement is true & why most software houses (even "quality"-centered) won't ever produce something that can be called "secure" in any realistic sense.

@NickP
"The last sentence is true of OSS, but I don't think the majority of my approaches fit the idea of "average" FOSS programmers using things 20 years behind the bleeding edge. ....."

Sorry Nick, I did not mean to imply that your EAL7 and Orange Book stuff should be compared with OSS. I was simply trying to demonstrate the "imperfect knowledge" problem and how it affects all secure systems development.

I'll be the first to agree that EAL7 systems are VERY hard to compromise, if only because physical access security is so stringent. Most systems with ratings above EAL5 are difficult to compromise, because even when you find a small crack it doesn't lead to an enormous security breach; instead you find yourself teasing information out of the system bit by bit. This is a result of layered security and all the other elements that you mentioned.

WRT Wael's comments about Infineon chips: these related to devices that a hacker might realistically be able to acquire, physically hold and manipulate, even potentially reprogram. I don't know of any EAL7 systems that would still be regarded as secure after a known hacker physically possessed the device for as long as they wanted, and possibly reprogrammed it!

I appreciate your clarification. I particularly like how you describe an EAL5-type breach. Seems realistic. Yeah, I think I said in one of our discussions that even the certified systems assume physical security & a trusted administrator.

I would only trust the physical security of a computer if it was in a tamper-resistant, self-destructive container designed by the best physical hacker in the field... who also destroyed the blueprints & shot himself afterward to ensure its continued security. Even then, I'd still lose sleep on a few nights wondering if they got in.

(And I don't see any physical security gurus volunteering for the job anytime soon...)

(And I don't see any physical security gurus volunteering for the job anytime soon...)

If you ease the requirement of shooting oneself, you may get some volunteers :)

In your designs, you stated physical security is an assumption. I think there is a way to counter lack of physical security though. Perhaps a discussion for another day... I alluded to that previously in a recent discussion with @Dirk Praet, but never finished that discussion.

I don't know of any EAL7 systems that would be still regarded as secure after a known hacker physically possessed the device, for as long as they wanted and possibly reprogrammed it!

It depends what you mean by secure here! Does it mean:

1- Private key protection?
2- Immunity to overwriting a public key?
3- Private information protection?
4- Immunity to reverse engineering HW and extracting IP?
5- Using the device for impersonating the owner?
6- Understanding the security mechanism, exposing a hole or creating a weakness then publishing a method of attack for similar devices? (attack once, break all) that script kiddies can use.
7- All of the above
8- All of the above and more?

Maybe my question is meaningless since you are dealing with the chip level. Probably "secure" in this context means functions as advertised, with immunity to side channel attacks or out-of-spec attacks. The other questions may apply at a system level with a corresponding use case. Questions 5 and 6 should read "protection against". And "attack once, break all" could be "break once, attack all".

The simple fact is, no matter how you cut it or slice it, anything outside of your 100% direct observation and full control has at some point been "owned" by somebody else.

It does not matter what systems they may have put in place, as we know all systems can be tampered with when out of sight; thus it cannot be trusted.

The question then becomes one of "how do we test or mitigate this issue?"... After a certain quite small degree of complexity we know we cannot fully test every state etc. Therefore there is always the suspicion remaining that the system has been "backdoored", only we can't find it because we have "imperfect knowledge".

As a real world example of trust and how it fails, let's look at the bulk gold or "bullion market". Supposedly people are trading the ownership of gold ingots that have been specially tested for purity, uniquely serial numbered, and put under secure lock and key in a repository of the bullion house. This is a very very expensive process, which is why what is actually traded are "certificates", not ingots. Traders implicitly trust the certificates because the bullion houses are supposed to explicitly trust the ingots they have under lock and key, on account of the expensive process of getting an ingot into the bullion house repository.

Thus the only way to explicitly know you are getting what you pay for is to go through the whole "destructive" refining, testing and recasting process yourself, alone, in your own refinery attached to your own bullion repository that only you can ever enter...

The mitigation process is simple, however: buy insurance against the loss. But... after the Lloyd's LMX spiral, where "stop loss insurance" failed, can you trust that the insurance will actually pay out if you claim? And do the insurers actually know that the re-insurance market they use to spread risk actually has the reserves to pay out?

Such are the joys of "imperfect knowledge".

Or how about the strategy of wide diversification of investment? That seemed a "sure fire" mitigation stratagem prior to 2008, when the banks tanked and sucked the rest of the world into their deceitfully created financial black hole. Likewise government bonds with sovereign debt. In all cases somebody subverted the system for their own gain and effectively got away with it, because in reality there were way too few institutions to give sufficient independence for diversification to work, but the market in general was unaware of this, so they had "imperfect knowledge".

Thus trust is an illusion unless you have sole control of all stages of the process. What actually makes trust work in the real world is knowing the other party genuinely has more to lose than you do, which is why "too big to fail" and "limited liability" are very dangerous notions.

From the security aspect, what is needed is the leverage of "more to lose" and "genuine diversification" of risk. The problem with this is the "imperfect knowledge" of cheating and how you mitigate it.

The interesting thing about "cheating" is that in our tangible physical world its "reward process" is not a "step function": it has the constraint of time that limits it.

We see this with physical security, whereby we have a "detect and respond" philosophy rather than an impossible-to-implement "impregnable fortress" philosophy, which makes "physical security" probabilistic, not deterministic, in nature.

Now the question arises: does intangible information have a time element? Sadly the answer is no, as I've indicated before. However, the processing of information as knowledge currently requires us to encode it onto physical objects (atoms etc), and this immediately constrains it by the speed of light and physical forces.

Thus we can, with care, use "detect and respond" systems to limit the reward process for attackers, and we can move away from the impossible-to-defend "impregnable fortress" philosophy that the current EAL systems impose to no long-term useful effect.

However, we have to have reliable methods to discover "cheating" in very short time periods for this to work, so the fortress mentality of the lower EAL levels can, just as with A60 "fire resistant" safes, only buy us time.

And yes this is an intrinsic part of the C-v-P idea, and as Nick P has noted some other researchers are now thinking along these lines (if only in software).

But as I noted before, "software" is a top-down solution whereas supply chain attacks are a bottom-up attack, which leaves this large gulf to be either filled in or bridged.

Clive, you mentioned "imperfect knowledge" several times now. It is the antithesis of the "complete awareness" I talked about. You also talked about "control", which I also referred to as "Total assured control". I sense that we are getting closer to convergence.

One reason I avoided details like EAL, Common Criteria, and HW / SW implementations is I was trying to make order out of chaos first, unsuccessfully so far. Regarding EAL, you are basically trusting someone else's "evaluation", and I have seen my share of misrepresentations there ;)

PS: I don't know how to inject that I think light is not the fastest thing without getting a warning from the moderator. If only I could relate that to security :(

Which is actually in line with physical security's probabilistic nature of "detect and respond". In theory the only part you need to trust is the "instrumentation of the detectors", and as I've indicated before this is best left to a fully deterministic state machine.

Which has certain characteristics, such as: all states are known and tested, there is no feedback etc, only hard-coded state-to-state transitions, with the only data-dependent action being to halt a running process and raise an exception to some kind of hypervisor.

Obviously you have to know in advance what you are going to "measure" in your detectors and what values are going to trigger the exception.
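The deterministic monitor described above can be sketched in a few lines. This is only a toy illustration of the structure, not anyone's actual design: the state names, the transition table, and the `SecurityHalt` "hypervisor" hook are all hypothetical. The point it shows is that every legal transition is hard coded in advance, there is no feedback, and the only data-dependent action is to halt and raise an exception.

```python
# Toy deterministic watchdog: every legal transition is hard coded in a
# lookup table; anything outside it halts the monitored process by
# raising an exception toward a hypervisor-like supervisor.

class SecurityHalt(Exception):
    """Raised to hand control to the supervising hypervisor."""

# Hypothetical process signature: the only transitions we ever expect.
LEGAL_TRANSITIONS = {
    ("idle", "read_input"),
    ("read_input", "process"),
    ("process", "write_output"),
    ("write_output", "idle"),
}

class Watchdog:
    def __init__(self, start="idle"):
        self.state = start

    def observe(self, next_state):
        # No feedback, no data-dependent branching: just a table lookup.
        if (self.state, next_state) not in LEGAL_TRANSITIONS:
            raise SecurityHalt(f"illegal transition {self.state} -> {next_state}")
        self.state = next_state

w = Watchdog()
for s in ("read_input", "process", "write_output", "idle"):
    w.observe(s)              # a normal cycle passes silently

tripped = False
try:
    w.observe("process")      # skipping read_input trips the watchdog
except SecurityHalt:
    tripped = True
```

Because the table is fixed and tiny, every state and transition really can be exhaustively tested, which is the whole argument for keeping the detector this simple.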

In essence this is the inverse of using a "matched filter" to pluck a signal out of the noise, because any "noise" implies unknown / unexpected behaviour. This is a more advanced form of raising an exception on a "segmentation fault", and to do it requires that a process produce a clear signature, which in turn requires the process to have low complexity and be well defined.

Whilst this might at first appear to limit what can be done, in practice it makes very little difference, as has been shown by various "parallel programming" methodologies.

The cost is many many small processes and a great deal more process to process communication.

In a single Complex Instruction Set Computer (CISC), context switching from process to process has a very high overhead. However, using a Reduced Instruction Set Computer (RISC) and a stack-based memory language, task switching is as simple as pushing the CPU registers to stack and changing a single register to then load the registers with the next process's saved values and stack pointers. As efficient as this might be, the use of a stack-based language also reduces the complexity of the CPU down to maybe 25-30 instructions, and optimised register handling shrinks it even further. This means you could have a very large number of CPU cores compared to a CISC system, so process switching may not even be required for much of the operation, and what switching is required can be made very, very efficient.
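The cheap task switch sketched above (save the register file, swap which saved context is live, reload the next task's values) can be modelled as a toy. The register names and the dict-per-task "stack" below are stand-ins for illustration, not a real RISC design:

```python
# Toy model of a minimal task switch: each task's "registers" are saved
# to its own context area, and switching tasks is just swapping which
# saved register set is live.

class TinyCPU:
    def __init__(self):
        self.regs = {"pc": 0, "acc": 0}   # hypothetical minimal register file
        self.saved = {}                   # task id -> saved context

    def switch(self, old_task, new_task):
        # "Push the CPU registers to stack" for the outgoing task...
        self.saved[old_task] = dict(self.regs)
        # ...then reload the incoming task's saved values (or fresh state).
        self.regs = self.saved.pop(new_task, {"pc": 0, "acc": 0})

cpu = TinyCPU()
cpu.regs["acc"] = 42          # task A does some work
cpu.switch("A", "B")          # B starts fresh
assert cpu.regs["acc"] == 0
cpu.regs["acc"] = 7           # B's work
cpu.switch("B", "A")          # A resumes exactly where it left off
assert cpu.regs["acc"] == 42
```

The cost of the switch is constant and tiny, independent of what either task was doing, which is the property being claimed for the stack-based RISC design.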

With such a system various new tricks can be performed, one of which is to have multiple CPU designs, designed by different teams, as macros in the chip design. This then allows voting protocols to be used, along with dropping in "known test vectors" to check the CPU.
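The voting idea can be sketched as follows. The three "cores" here are toy functions standing in for independently designed CPU implementations (one deliberately subverted), and the "test vector" is just an input known to expose the rogue:

```python
# Sketch of a voting protocol across heterogeneous cores: run the same
# operation on several independently designed implementations and take
# the majority answer; any disagreement flags a suspect core.

from collections import Counter

def core_a(x, y): return x + y
def core_b(x, y): return y + x
def core_c(x, y): return x + y if x != 3 else 999   # subverted core

def voted(x, y):
    results = [core(x, y) for core in (core_a, core_b, core_c)]
    value, votes = Counter(results).most_common(1)[0]
    disagree = len(results) - votes
    return value, disagree      # non-zero disagreement flags a suspect core

assert voted(1, 2) == (3, 0)    # all cores agree
assert voted(3, 4) == (7, 1)    # known test vector catches the rogue core
```

Note the vote masks the fault (the correct answer 7 still comes out) while simultaneously detecting it, which is what makes periodic test vectors so cheap to exploit.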

Trying to get beyond such near-realtime detection methods would be very difficult, and thus subverted hardware that gets triggered to rogue behaviour in some way has a very high probability of being detected very quickly, before much if any information is leaked.

Finally, a parting thought: William Shakespeare wrote a lot of sonnets etc which can be used as transportation for memes, one of which is "But soft what light through yonder window breaks..."

Such use was made of them in classified ads during Victorian times, along with simple codes and ciphers, for Victorian Romeos and their Juliets to communicate supposedly secretly. Charles Babbage and friends used to break such codes etc and place faux replies, such that the participants knew they had been uncovered but without knowing by whom.

The use of such is virtually non-existent these days, but it would certainly still work as a low security way to transfer information as a side channel on many web pages.

However using a Reduced Instruction Set Computer (RISC) and a stack based memory language task switching is as simple as pushing the CPU registers to stack and changing a single register to then load the registers with the next process's saved values and stack pointers.

That's how task switching happens on CISC as well (short of the stack pointers). Unless I forgot!!!

As efficient as this might be the use of a stack based language reduces the complexity of the CPU down to maybe 25-30 instructions and optimised register handling shrinks it even further which means that ...

.NET uses a stack based language. It's not that efficient, because that's not what natively runs on the CPU. I presume you are referring to the instruction set being "stack based"?

Speaking of subliminal and side channel communications, I want to run an idea by you. But first, I want to see if you agree with this statement:

"All cryptography is based on a secret. Regardless whether symmetric or asymmetric."
Do you agree or disagree?

As for "Bill" Shakespeare, I guess I am allowed to talk about "poetry" again, WooooHoooo
Anyways, in light of your quoted sonnet, I would say, I am attracted to this one ;)

That's how task switching happens on CISC as well (short of the stack pointers). Unless I forgot!!

Broadly, the reality usually involves a lot more baggage; the use of a stack-based language provides a greater level of abstraction, which reduces the baggage (if done correctly).

I presume you are referring to the instruction set being "stack based"?

Yes. For example, back in the 1980s a version of Forth was implemented on a DSP chip as effectively a Harvard architecture, and IIRC it only implemented twenty-seven basic Forth words.

With regard to,

"All cryptography is based on a secret. Regardless whether symmetric or asymmetric. Do you agree or disagree?

The answer is no if you believe "non-secret" algorithms such as hashes are cryptography. As hash functions can be made from either block or stream ciphers with a "known key" used as a public IV, this extends to them as well. So if you are using crypto to keep secrets you need a secret, and in that limited case the answer is yes.
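The point that a hash can be built from a block cipher with a fixed, public IV corresponds to the classic Davies-Meyer construction, H_i = E(key=m_i, H_{i-1}) XOR H_{i-1}. Here is a toy sketch; the 64-bit "cipher" is a made-up keyed mixing function used only to show the structure, and is in no way secure:

```python
# Davies-Meyer hash sketch: each message block is used as the cipher KEY,
# the chaining value is the plaintext, and the IV is fixed and public --
# so nothing in the construction is secret.

MASK = (1 << 64) - 1

def toy_cipher(key, block):
    # Stand-in 64-bit keyed permutation (a few add/xor-shift rounds).
    # Illustrative only; NOT a real block cipher.
    x = block
    for r in range(8):
        x = (x + key + r) & MASK
        x ^= (x << 13) & MASK
        x ^= x >> 7
    return x

def davies_meyer_hash(message, iv=0x0123456789ABCDEF):
    h = iv                                    # public, fixed IV
    message = message + b"\x00" * (-len(message) % 8)   # pad to 8-byte blocks
    for i in range(0, len(message), 8):
        m = int.from_bytes(message[i:i+8], "big")
        h = toy_cipher(m, h) ^ h              # H_i = E(m_i, H_{i-1}) ^ H_{i-1}
    return h

a = davies_meyer_hash(b"hello world")
b_ = davies_meyer_hash(b"hello worle")
assert a == davies_meyer_hash(b"hello world")   # deterministic, keyless
assert a != b_                                  # small change, different digest
```

The feed-forward XOR is what makes the compression step one-way even though the underlying cipher is invertible when you know the key.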

That's what I know as well. Now what if we could use the delay / latency between the sender and the receiver instead of that "secret" (whether it's a symmetric key, a private / public pair, or an algorithm)?

The delay/latency is public information & might be manipulated by the enemy (see timing channels). The main reason we use a secret is b/c the communication path can't be trusted. The invention of quantum crypto is one of the attempts to secure the comm link itself, along with a similar technology using classical physics.

I am aware of all that. Just thinking of some ideas regarding close range communications (cpu to cpu or realtime OS applications) and anti-debugging methods. And since there is some serious brain power on this blog, just wanted to bounce a theory around.

Let me restate my question: all encryption / decryption schemes are based on a secret

As a general principle, yes, there is a "secret" or a "derived secret" at the base of most crypto systems used for maintaining confidentiality of information moved across an "open" or "insecure" channel.

However the "secret" may not be a "secret" it's self. If you look at something like the "Rip van Winkle" idea the secret key has been sent in "plain text" at some point in the distant past along with many many others. Thus the "secret" moves from "the key" to knowing "which key".

@Wael,
"Now what if we can use the delay / latency between the sender and the receiver instead of that "secret" "

That's an interesting idea but hardly a new one. There are several secure HF comms systems that have used the exact multipath fading characteristics of the channel as part of the signal scrambling method, although this is largely an anti-MITM method rather than a form of crypto.

Once you start to consider comms systems with multiple diverse send / receive antennas, then you have an almost unique channel between the two systems. IF, and for RF it is a big IF, the channel remains stable long enough to fully map, then it is possible to create a connection that is only effective between the two end points.

Notice that a single Rx/Tx pair cannot accomplish this, because the attacker can always be closer to the Tx than the intended Rx. If you add geographic diversity to the system, then it is possible to operate the system at up to 60dB below the intended Rx noise floor and still recover the signal. The solution for the attacker is obviously a system of geographically diverse MITM stations.
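For a sense of where a figure like "60dB below the noise floor" comes from: in a direct-sequence spread system, despreading recovers a processing gain of roughly 10*log10(N) dB for N chips per data bit. This back-of-envelope sketch (my framing, not RobertT's numbers) shows the required spreading factor:

```python
# Processing gain of a spread-spectrum system: spreading each data bit
# over N chips lets the despreader pull the signal up by 10*log10(N) dB,
# so a signal can sit that far below the noise floor and still be read.

import math

def processing_gain_db(chips_per_bit):
    return 10 * math.log10(chips_per_bit)

# Recovering from ~60 dB below the noise floor needs on the order of a
# million chips per data bit (before any extra margin from FEC/diversity).
assert round(processing_gain_db(1_000_000)) == 60
assert round(processing_gain_db(1024)) == 30
```

The flip side is throughput: at a fixed chip rate, a million chips per bit costs a factor of a million in data rate, which is why such links carry only short messages.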

I've played around with MIMO/DIDO 1024-QAM over OFDM systems where the OFDM frame length was adjustable. IF the OFDM frame length can be adjusted to match the multipath environment, then information can be transmitted by toggling between multipath modes. I'm not sure that the channel you get with this is any different to the typical OFDM system, where the Rx system analyses bit errors and sends this information to the Tx system to adjust the carrier bit loading.

Anyway, this is drifting way off topic and is probably of very little interest to most Schneier blog readers, so I'll leave it for now.

Sounds like Wael's rediscovering hyperencryption or applying some Book Cipher tricks to modern comms.

I have been toying with this idea since 2002, but never had the time to go further. Have not seen this Wiki link before. There are two reasons I thought of this:

1- I knew that all "cryptography" depends on a secret, obviously, since if there is no secret, everyone can intercept and "decrypt" the information. So I was looking at other unique characteristics (intrinsic secrets) between the sender and receiver. Of course that is not going to work for "internet connections" without further - - - -(fill in the blanks, and I don't know what goes in the blanks either).

2- What if someone can factor the products of large primes? Would they just go and post it? I don't think so! Yes, I know it's a 2000-year-old problem with no published method of solving it efficiently...

But there are no books, rediscovering, or applying tricks in my mind. I thought a first implementation would be in a close range communication between components on a computer in a real time (or known guaranteed timings).

All in all, nothing further in mind, except to see what the brains of this blog think.

@ Wael
You could use background RF, say 15 kHz or in the microwave regions; there would be uniqueness based on area, the area ranging from local broadcasts to GPS signals.
Timing the clocks might be difficult.

There was talk a couple of months back about being able to take encrypted data and process it, but without knowing the key.

Yes, multichannel could be an option, as RobertT mentioned as well. Timing the clocks would be difficult, as you say. Do you have more information on that talk? It's not clear to me what "processing" means in the talk's context. I know what I am proposing sounds ridiculous since I am essentially saying the channel is the secret -- knowing very well that the channel is the weakness we are trying to protect :)

@ Wael, the processing was on the encrypted data. You could, say, add a number to the encrypted data, which wouldn't give the person that added it the overall value; only the person that sent the data could decode it.

You could use routers along the path to modify the data, with only the end points knowing the answer. Like if you add the IP of the router to the packet.

One way would be "byte split" (patent pending), where you pick a bit between 1-7, say 5, where that is the encrypted data (AES); the rest is padding and a type of CRC check.

There are various investigations of "range" determination/limiting for detecting if a user's credit card is directly in the EPOS machine or if somebody is in between (via a physical shim in the EPOS slot) to substitute a fake transaction etc.

It's a complex problem, and one that Near Field Communications for "cash transactions" is potentially going to turn into a blood bath for consumers and the most well oiled of gravy trains for the banks and criminals alike.

One idea that acts as a range limiter for such is a form of spread spectrum where the receiving party starts transmitting a True Random Digital Noise (TRDN) modulated carrier (using phase reversal keying); the data transmitting party receives this signal and uses it to modulate its data.

The data receiving party compares the TRDN phase difference (~= time) to determine the range. In this respect it works just like JPL ranging codes.

Two effects occur from the use of TRDN as opposed to PRDN. Firstly, it cannot be predicted, so it cannot be spoofed on range. Secondly, to succeed the attacker has to be inside the "range" to get a signal accepted, and over very short distances that is very difficult to do.
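The ranging mechanism can be illustrated with a highly simplified toy: the verifier remembers the random chips it transmitted, and cross-correlates them against the echo it gets back; the lag of the correlation peak gives the round-trip time, hence the range. Real systems modulate data onto the echo and deal with noise; this sketch assumes an ideal, noise-free channel with a made-up delay:

```python
# Toy JPL-style ranging: correlate the verifier's own stored random chip
# sequence against the delayed echo; the lag of the correlation peak is
# the round-trip delay in chip periods.

import random

def correlate_lag(reference, echo, max_lag):
    # Return the lag at which reference best lines up with the echo.
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(r * e for r, e in zip(reference, echo[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

rng = random.Random(42)                               # true-random in real life
chips = [rng.choice((-1, 1)) for _ in range(4000)]
true_delay = 7                                        # round trip, chip periods
echo = [0] * true_delay + chips                       # ideal echo of our chips

assert correlate_lag(chips, echo, max_lag=20) == 7
```

Because the chips are unpredictable, an attacker cannot pre-send a correctly correlated reply from outside the range, which is exactly the anti-spoofing property claimed above.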

The problem is it can fail against pre-recorded replies stored inside the range. The trick then is to use another protocol on top, somewhat like a zero knowledge protocol.

And as RobertT has indicated, adding one or more receivers can give a hard minimum range and position. It's a bit like turning GPS on its head, which is something I've gone on about from time to time whenever anyone talks about GPS receivers being spoofed ;)

However I should emphasise that this does not offer confidentiality just range limitation at the physical layer.

It's a complex problem, and one that Near Field Communications for "cash transactions" is potentially going to turn into a blood bath for consumers and the most well oiled of gravy trains for the banks and criminals alike.

@Wael,
WRT NFC there are a few tricks that could keep a secret without the need for any crypto.

Think about a system with five Rx and five Tx sections. Now let's assign one channel to be devoted to very fast (bit-by-bit) switching between the four information Rx channels. Let's also locate the 4 Rx channels symmetrically around the NFC card.

If the information is encoded differentially (pi/4 shifted for each Rx), then the far field response can be almost completely cancelled whereas the near field signal can be increased. Now, if the 5th channel tells you which channel contains a small (information) signal, then the NFC system can decode this in real time while the attacker at some distance will be unable to separate the signals. This is a case of using large-signal random data to mask a small signal transmission.

The closer the small signal is to the Shannon channel limit, the lower the probability of recovery by an attacker. So if the large random data signal is used to continuously map the channel, then the small (information) signal can be transmitted at the lowest power possible for a given probability of being correctly received.
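The Shannon limit being invoked here is the AWGN channel capacity, C = B * log2(1 + S/N) bits/s. A quick sketch (my numbers, chosen only for illustration) shows why an eavesdropper with even a modest SNR disadvantage falls well below the rate the intended receiver can sustain:

```python
# Shannon capacity of an AWGN channel: C = B * log2(1 + S/N).
# Run the covert link near the legitimate receiver's capacity, and any
# attacker with a worse SNR is below THEIR channel's limit.

import math

def capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 1 MHz channel at 0 dB SNR (snr_linear = 1) supports exactly 1 Mbit/s.
assert capacity_bps(1_000_000, 1.0) == 1_000_000.0
# An eavesdropper 10 dB down (snr_linear = 0.1) gets under 140 kbit/s.
assert capacity_bps(1_000_000, 0.1) < 140_000
```

Note SNR here is linear, not dB: 10 dB down means the ratio is divided by 10.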

There is another method that was used back in WW2 for secure RF comms. Instead of the Tx system encrypting the data, the Rx system would (at the same time) broadcast pseudo-random noise, which it subtracted at its own input stage. Since the Rx system knows what it sent, there is no need for anyone else to know what the "masking" signal is. This method only works when Tx and Rx are very close, so that the Rx-broadcast noise is seen (equally) by everyone trying to receive the Tx signal.

I'll leave it as an exercise to think about how this method can be extended to "back scatter RF" methods.

It's a bit too hot in the UK to be wearing a "thinking cap", and I hear bits of the US close to the politicos in DC have had "early global warming" and in effect it's been hotter than a "snake's belly in a wagon wheel rut in Death Valley". And worse yet, the majority had no power for their fridges, freezers etc, unlike the politicos...

Hint; you may need more than one carrier, especially for passive RFID.

The RX system needs a good stable reference signal (maybe Tx carrier 1) and a VCO that can lock to the second Tx carrier.

It is also worth considering that in near field systems, modulation of the Rx antenna coupling impedance / input impedance is directly seen as a Tx load mismatch, and other Tx antennas within the near field are also seen as non-linear loads.

Oh what fun you can have when all these Tx and Rx elements are effectively coupled....

A fascinating thought that I will definitely look into. Remember, though, I was looking at cryptography with the intrinsic characteristics of the channel being the secret or "encryption / decryption key". I still have to recollect my thoughts and put more "theory" or "rigor" into it.

@Wael,
I'm not sure of your background, but if you think about a wideband OFDM data stream where 4 or more near field antennas broadcast simultaneously, and then consider what the Shannon channel limit for communications on this system is, you will notice that many of the non-linear effects of antenna near field loadings just become a channel distortion, which is mapped by the Rx system so that the secret information bit loading of the carriers happens at the channel noise limit. This process also creates a fairly unique channel distortion, which only the Tx/Rx pairs can ever fully map. So the channel is a secret shared between the Tx and Rx pairs. If the Tx signal strength is reduced to the point that there is only a low probability of successful Tx to Rx comms, then you have a method of secret comms that is very difficult to intercept. You will need to use very advanced FEC (forward error correction) methods to make a successful message transfer, BUT there is very little probability of intercept, especially if the secret comms are masked by another large signal jammer.

I realize most of what I'm saying probably makes no sense at all, unless you happen to have a degree in comms theory but I think it is the missing piece of the puzzle for making the channel the secret.

If you have access to a Matlab simulator, with the Simulink package, then I can recommend some OFDM examples to look at.

I realize most of what I'm saying probably makes no sense at all, unless you happen to have a degree in comms theory but I think it is the missing piece of the puzzle for making the channel the secret.

Not at all. It makes sense... I've had enough exposure to communication theory. It's just that you are thinking of a different method, which may still be valid, but still does not qualify as "cryptographically secure" - unless you correct me.

If an attacker added another near field Tx, then they would significantly change the channel characteristics as seen by the Rx. This would be a definite warning to fail hard. Adding another near field Tx would also change the ideal channel equalization, so it would be very easy to notice, especially for comms tailored to be at the noise limit of the channel.

Notice that I'm not looking at single frequency comms but rather very wideband comms. Because of the wide bandwidth, finding a resonant way to amplify a single carrier (or small group of carriers) does not significantly increase the information leaked by the system. The FEC-corrected info is spread across all available carriers, possibly using a secure spreading algorithm.

@Wael
"It's just you are thinking of a different method, which still maybe valid, but still does not qualify as "cryptographically secure" "

The cryptographic security really depends on how you spread the information across the available carriers / bandwidth.

With a DSSS system the security is a function of the spreading code, usually an M-code. For a simple system we simply do M-code XOR user data to get the raw bit stream. If both parties (Tx and Rx) have a way to synchronize their codes, then the spreading could be done with a cryptographically secure stream cipher.

Unfortunately, with DSSS correct channel equalization is a difficult problem, so these days we would typically think about a wideband system like this using an OFDM physical layer. The OFDM system might use over 1000 carriers (say in the region 20 MHz to 50 MHz). The information is then allocated to some (or all) of the carriers, possibly using a secure carrier bit-loading algorithm.
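The "M-code XOR user data" spreading can be shown with a toy: each data bit is repeated over N chips and XORed with a shared chip sequence; a receiver running the same sequence in sync despreads by majority vote over each chip group. The seeded PRNG below is only a stand-in for a real m-sequence or crypto stream cipher:

```python
# Toy DSSS spreading: repeat each data bit over N chips, XOR with a
# shared spreading sequence; despread by XORing the same sequence back
# and majority-voting each chip group.

import random

N = 8  # chips per data bit (spreading factor)

def chip_stream(seed, length):
    # Stand-in for an m-sequence / secure stream cipher keystream.
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(length)]

def spread(bits, code):
    chips = [b for b in bits for _ in range(N)]   # repeat each bit N times
    return [c ^ k for c, k in zip(chips, code)]

def despread(chips, code):
    raw = [c ^ k for c, k in zip(chips, code)]
    # Majority vote over each group of N chips tolerates some chip errors.
    return [int(sum(raw[i:i + N]) > N // 2) for i in range(0, len(raw), N)]

data = [1, 0, 1, 1, 0]
code = chip_stream(1234, len(data) * N)           # shared (secret) sequence
tx = spread(data, code)
assert despread(tx, code) == data                 # synced receiver recovers
# A receiver with the wrong code (e.g. chip_stream(9999, ...)) sees only
# noise-like chips and will generally not recover the data.
```

As the comment at L687 says, if the chip sequence comes from a crypto-secure stream cipher rather than a plain m-sequence, the spreading itself carries the cryptographic strength.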

Here the crypto is a secret known by both the Tx and Rx systems, probably modified by a session key (usually generated by an Rx TRNG). This Rx session key (return channel comms) can be sent at the same time as the Tx generates large-signal random noise.

Usually you would want to generate a second session key that is shared at very low power. (equals very low probability of intercept)

In none of what I'm saying is the channel secure in the cryptographic sense; however, if the second session key changes frequently (and is shared at the channel noise limit) then the interceptor needs to be correctly receiving / decoding ALL information to be able to maintain code lock.

BTW, in near field systems both the Tx and Rx can know the exact channel characteristics without necessarily needing to reveal them to a distant third party. (Think in S-parameter terms: what is the difference between Rx S11 and Tx S12?)

Anyway, I don't want any TLA's knocking on my door so it is probably best to end this discussion.