Then, I guess this is a gift of a lifetime. You even have to understand brain function just to program it. ;)

It may very well be a gift. Fits within what we were discussing in C-v-P :)
One thing off my queue, I suspect we'll revisit it!
Comment from ex-k2 on 2014-05-17:
(And here I thought K2 was a mountain...)
Comment from Wael on 2014-05-12:
@ Nick P,

"a gift of a lifetime..."

Thank you! It's not exactly what I was thinking, but I'll dig deeper into it and add it to the queue you accumulated on me ;)
Comment from Figureitout on 2014-05-12:
Clive Robinson
--I just don't get how you're unable to drop a throwaway email address, using a random wifi network and a tiny gadget you undoubtedly possess... unless there are some serious holes us mortals don't know about.

Anyway, to beat the issue to death, here's a cool breakdown of charge circuits (a counterfeit one too). This is why I'm iffy diagnosing the problem: the circuit isn't trivial, and it's even worse to troubleshoot when the boards are stacked on each other (they put glue on the ROM-chip screws, perhaps so I would burn up the threads and have to dremel the screws out to get in). But I am getting 5.2V on an input cap, so... the problem has to be somewhere in the charge circuit.

Also, if no one can see the relevance of a charging circuit here, where do you think one of the first places to start building a computer is? It needs a regulated DC voltage AND I want to try to filter the power really well too, to cut out power analysis attacks (besides just looking at consumption, actual injections and such).

You said an evolution is overdue. Something is certainly overdue. It's going to be a revolution, though, as it will be radically different from status quo...

We are in agreement. Between 2750 BC, when Ancient Egyptian manuscripts mentioned electric eels (or fish), and around 1950, when "electronics based" computing machines were developed, is a span of 4700 years. That's the time it took computers to evolve from the first observation of a phenomenon to the time it was harnessed for computing. The next stop may be a liquid state computer, where chemists, not solid state engineers, design the beast. That would count as large-scale evolution because it resembles a different species, so to speak. I would not count an optical computer as large-scale evolution, even if it uses glass instead of copper, and mirrors and prisms instead of whatever their counterpart is :) Another possible large-scale evolution is a system with millions of tiny processors that behave like a human brain. Not likely to witness either in my lifetime...
Another possibility is the discovery of a new phenomenon (equivalent in magnitude to the discovery of electricity) that gives rise to new ideas and implementations. Maybe gravity is one candidate; I sent a link before about the speed of gravity, but this sounds too crazy...
Comment from Nick P on 2014-05-11:
@ yesme

What I want to do is help readers understand what properties a secure system will have. I also promote any project, technique, etc that can be used to build secure systems. I'm also exploring new designs that prevent code injection or data leaks from the hardware up, while supporting integration with COTS I/O devices & development in safer languages. I've been posting various architectures and shortcuts here to that effect.

I might not be able to build the systems any longer. My goal is to give others what they need to do it. If they want secure & democracy-preserving technology, I've told them plenty about how to build it & they just have to put in the effort/sacrifices. I've done [and am still doing] my part. Just waiting for it to take off or an existing project to get production ready. Then, I'll put a whole stack on it or [as usual] tell others how to do it right and simply.

Systems I can trust that force surveillance states to work very hard are what I want. I developed them in the past. My old work isn't available anymore beyond what I've posted here. I want to see myself or someone else do this again for the modern threat. And see it put into widespread use.

What is it exactly that you want to do (just curious)? The discussion about operating systems and programming languages can only last so long. (unless it's just for fun)

I think most of us are aware that from a security POV C and C++ stink. But Ada and its subset SPARK don't. So it's here already and has been for quite a long time now. And although a bit bureaucratic, Ada is IMO a very professional and productive language. It is very fast, modular, has an incredible type system, can run embedded without an OS and it has advanced features.

" Unfortunatly this way of doing things often suffers from the "Apollo problem", which means that instead of continuous improvment with time you get a burst of activity followed by a long period of inactivity which leaves technology in a culdersac for fourty years or more. "

That is an interesting way of looking at it. I'm not sure it's supported by evidence. Yet, there *was* a similar concept that focused on equilibriums, peaks, or something like that in ideas and adoptions. My memory fails me here. It said there were moments of improvement here and there, but otherwise not. It just wasn't as extreme an example as the Apollo program.

"The main disadvantage with BASIC was also it's main advantage it was both interpreted and overly simple."

Not slow due to interpretation. There are BASICs specifically designed to compile to fast native code, even for game engines. That's just what was common long ago and with more academic projects. Overly simple applies to many, yet there are BASICs that address that too. And there are some that address it WAY too much. ;)

"I suspect that the notion of paralellBASIC is not wrong, but it won't be BASIC it will be a language that has pure functions supported by immutable variables otherwise concurancy and paralellism will be way to difficult to do either efficiently or effectivly, but it will have the "easy play" asspects of BASIC."

Well, ParallelBASIC is a joke so it's OK if it doesn't make it. Yet, the easiness of BASIC is definitely its appeal. That's why some are talking about Julia as a BASIC for scientific computing due to its combination of ease, performance, and legacy integration. Python is actually dominating that, though, as it's been integrated with fast scientific libraries and extended with the capability to create fast native code from Python subsets. Your guess is on the mark so far.
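(Not from the original post, just a toy illustration of the point above: the pure-Python loop below pays interpreter overhead per element, while the NumPy call hands the same dot product to a compiled library, which is roughly why "slow language, fast libraries" works for scientific code. The sizes and names are arbitrary.)

```python
# Toy comparison: the same dot product, interpreted vs. delegated to
# compiled library code. Illustrative only; numbers are arbitrary.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

t0 = time.perf_counter()
slow = sum(x * y for x, y in zip(a, b))   # interpreted, element by element
t1 = time.perf_counter()
fast = a @ b                              # one call into compiled BLAS code
t2 = time.perf_counter()

print(f"pure Python: {t1 - t0:.3f}s, NumPy: {t2 - t1:.3f}s")
print("results agree:", np.isclose(slow, fast))
```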

"My guess is concurancy will be above the CPU level of the computing stack for a whole heap of reasons, "

I've seen a few processors that solved the concurrency problem in different ways. One accelerated message passing to make those models extra fast. One included a few changes that made multithreading more efficient and safe. Several essentially supported a form of transactions where a series of statements were executed as a whole without interference from other computations. Then, there was hardware such as the i432 that even did scheduling at the hardware level, below the OS. So, it certainly can be done in hardware.

Yet, I think it should be an OS thing with hardware only accelerating the primitives. Arguably, that was the case for the three designs. Goes to show hardware can have about as much effect as software in this.
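(A quick software-level sketch of the message-passing style mentioned above, mine rather than anything from the thread: the workers share nothing and only talk through queues, so there is no shared state to lock. The two-stage pipeline is made up purely for illustration.)

```python
# Minimal sketch of message passing: workers communicate only through
# queues, so nothing is shared and nothing needs a lock.
import threading
import queue

inbox = queue.Queue()
outbox = queue.Queue()

def parser():
    while True:
        msg = inbox.get()
        if msg is None:                 # sentinel: shut the pipeline down
            outbox.put(None)
            return
        outbox.put(int(msg))            # pass the parsed value downstream

def squarer(results):
    while True:
        msg = outbox.get()
        if msg is None:
            return
        results.append(msg * msg)

results = []
threads = [threading.Thread(target=parser),
           threading.Thread(target=squarer, args=(results,))]
for t in threads:
    t.start()
for text in ["1", "2", "3"]:
    inbox.put(text)
inbox.put(None)                         # tell the pipeline to finish
for t in threads:
    t.join()
print(results)                          # [1, 4, 9]
```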

"which is one of the reasons I was looking at less than microkernals in the C-v-P design with lightweight RISC CPUs with local scratch memory and arbitrated access to main memory being done via a hypervisor."

And our designs are converging a bit. I'm playing with the hardware level now in my designs more than in the past. Yet, I'm still looking for "this provably can't happen by good design" on the most critical aspects. So, you look at RISC CPUs and hypervisors, I look at whatever CPU can track the context of operations to prevent obviously bad ones and allow probably good ones. Tags, capabilities, etc. can do a lot in that area. Still searching.

@ Wael

"This is the first BASIC computer I used . I bought a humongous 8K memory module with it, forgot for how much.... I still have it, sitting next to my Commodore 128. The Timex Sinclair and the Vic-20 disappeared..."

Your first was a portable. Nice. Mine was portable... after enough gym time and with assistance of a vehicle. :)

"We cannot short-cut evolution, I think. "

I think the very existence of the human brain has caused that plenty of times. We're ahead of it in many ways, yet still behind it (or controlled by it) in critical ways. Whether I can get the human race or the market in general to do this in a specific way is another issue. A harder issue.

You said an evolution is overdue. Something is certainly overdue. It's going to be a revolution, though, as it will be radically different from the status quo. I've mentioned many architectures that are largely evolutions of older ones with excellent safety, security, or verification properties. Yet, compared to existing architectures, it's like throwing out everything people know and do. That's the kind of change it takes... and it's not easy to make happen in an overall market.

One would hope the Snowden disclosures, pervasive malware threats, constant disruptions of availability, maintenance/integration woes in software, etc. would do it. They largely haven't. So, I'm not sure what discussions, tipping points, etc. would lead to such a change. I am quite pessimistic about what the majority, or even a significant market share, will take up in this field. The smart card, DO-178B, etc. markets give me about the only glimmer of hope as they have real quality or security improvements. Meanwhile, I continue doing what I do on principle, hoping one day it might benefit many in practice.

Great poem. Gotta wonder who the two boys were. And I heard "Steeeeeerike 2" in Leslie Nielsen's voice, as a certain scene made it funnier that way.

The pocket computer was only for sale in the early 1980s, and I'm guessing that you were old enough to earn money, so 16 or older, which puts you in your forties or so...

Also it was mainly sold in Europe and the Far East, which might mean you spent part of your formative years outside of the US...

The first computer I purchased that had BASIC on it was the Apple ][, which cost me around 2000GBP when I bought it back in 1980, which was about three months' middle-class professional earnings, or the equivalent of a family car back then...

However it was not the first computer I had bought or designed and built. You mentioned Sinclair; well, way back in the late 70s it was Cambridge Research and they sold an SC/MP based single board computer, the MK14, for 40GBP, which was still a lot of money... The first system I built from scratch was based around an 1802 processor (which are still made today) I had acquired whilst involved with some "space research" in Surrey in the UK. Back then memory was quite literally worth its weight in gold, and the advent of a 1K chip that was 256x4 bits was the height of desirability. I wire-wrapped the design on my desk at home and, after repeated checking, finally powered it up and put in the first simple loop program from the front panel switches. I later got hold of a copy of Forth for it and added a few niceties such as a UART to talk to a terminal and a cassette interface using a Signetics PLL chip. Over the years I also built 6502, Z80, 6800, 68K and 2900 bit-slice designs on the same desk, and I've still got some of them around in my loft/garage, along with a couple of Acorn Atoms and a BBC Home Computer, a ZX80 and a Jupiter Ace Forth home computer, as well as most bits of a PDP11-70 and a VMS box and other ancient bits and bobs like ICL core store, 8-inch floppy drives and other stuff too numerous to mention. Then there are the PC boxes, Unix/Xenix boxes, an early 68K based Netware box and parts of a NeXT box, an Apple Lisa, Sun kit etc. All used if not abused by me for various work and personal projects, and assumed to be still working... so more a dusty store than a museum or scrap yard ;-)

Comment from Wael on 2014-05-10:
I read a lot of complaints about "Code-cutting". I don't know why it's so stigmatized! This is what you get when you engage in Poem-cutting :)

Personally I think the history of computing mainly shows the pragmatism of doing what's possible at the time with the resources available. Unfortunately this way of doing things often suffers from the "Apollo problem", which means that instead of continuous improvement with time you get a burst of activity followed by a long period of inactivity which leaves technology in a cul-de-sac for forty years or more.

This is the dual of biological evolution. You are describing "Silicon Evolution" with its two pillars: small-scale evolution and large-scale evolution. We cannot short-cut evolution, I think. You are also implying that large-scale "Silicon Evolution" is overdue... C-v-P could be a viable catalyst for this sort of evolution.
Comment from Clive Robinson on 2014-05-10:
@ Figureitout,

Yes, email protocols are broken beyond repair, not just from the security aspect but from the overloaded technology and social aspects as well; this has been the case for most of this century.

You only need consider one small aspect (spam) to see that we need to fix email at all sorts of levels, but we also need to do it with considerable care lest we give rise to other issues or lose useful and needed aspects (anonymity and deniability etc.) we have with physical mail.

One way is a "Dead Drop Box" system where you seperate the message and notification asspects, with the notification system also being made secure by some method. However such systems require the use of side channels of one kind or another and this is the major stumbling block currently.

"This is the first BASIC computer I used. I bought a humongous 8K memory module with it, forgot for how much.... I still have it, sitting next to my Commodore 128. The Timex Sinclair and the Vic-20 disappeared..."

Yes he was one of the "some" but there were others, and as always with such things there is a germ of truth in it. For instance Church-v-Turing. It's been argued that the Labda Calculus Church invented was unduely overshadowed by Turing's universal engine which gave rise to imperative systems that currently blight our thinking and hardware keeping us from the goodness of concurancy and parallism we needed to be in back in the 1990's...

Personally I think the history of computing mainly shows the pragmatism of doing what's possible at the time with the resources available. Unfortunately this way of doing things often suffers from the "Apollo problem", which means that instead of continuous improvement with time you get a burst of activity followed by a long period of inactivity which leaves technology in a cul-de-sac for forty years or more. It's one of the reasons we have the crazy IAx86 architecture with *nix, or some failed improvement on *nix that's in effect a poor man's knock-off dressed up like a pig in a ballgown. The alternatives that came along that were better in oh so many ways got killed off for not being porcine compatible...

History shows you need a tipping point where change has to happen, but for some reason we've not really had one, and I wonder what it will be. As I've said before, we have got to the point where there is no ROI in trying to stick with Moore's Law, and the only way to increase computing power cost-effectively is by concurrency and parallelism at various points in the computing stack. The chip makers know this, which is why we have multiple-core and multiple-CPU systems, but the OS and apps have by and large failed to capitalise effectively on this, the two questions being: Why? and What's going to give first?...

The main disadvantage with BASIC was also its main advantage: it was both interpreted and overly simple. It was thus very easy to learn by experimentation bordering on play, but painfully inefficient and slow.

I suspect that the notion of ParallelBASIC is not wrong, but it won't be BASIC, it will be a language that has pure functions supported by immutable variables, otherwise concurrency and parallelism will be way too difficult to do either efficiently or effectively, but it will have the "easy play" aspects of BASIC. The "What follows Fortran" article you linked to gives a number of options but none of them appear to be ready for prime time currently. I suspect it will be Python that will be the next BASIC, but will it make concurrency / parallelism easy, and at what point in the stack? If it does, and at the right point, then it will probably be the way of the future...

My guess is concurrency will be above the CPU level of the computing stack for a whole heap of reasons, which means the bottleneck will be, as it has been for some time now, the OS, which is one of the reasons I was looking at less than microkernels in the C-v-P design, with lightweight RISC CPUs with local scratch memory and arbitrated access to main memory being done via a hypervisor.

Comment from Figureitout on 2014-05-10:
Clive Robinson
--If you're so scared of providing a public email address to contact you w/, you're saying something by saying nothing regarding email security/protocols. Just wanted to demonstrate for readers out there; obviously at this point I don't care how awkward I can be. Email cannot be trusted whatsoever; entirely new protocols are needed.
Comment from Nick P on 2014-05-10:
@ Clive Robinson

"according to some"

*cough* Dijkstra and his groupies *cough*

Did you read the "Bashing BASIC" section of the article? It quotes him on that and then provides counterpoints. I like one of them: " 'I’ll go out on a limb and suggest the degrading of BASIC by the professionals was just a little bit of jealousy–after all, it took years for us to develop our skill; how is it that complete idiots can write programs with just a few hours of skill?' (Kurtz) BASIC may not have made sense to people like Edsger Dijkstra. That was O.K.—it wasn’t meant for them. It made plenty of sense to newbies who simply wanted to teach computers to do useful things from almost the moment they started to learn about programming."

And one of them built so many useful tools that he became addicted to IT enough to turn into a security engineer of actual talent. ;)

"You might find this page of interest,"

It was fun. I've seen BASIC on all kinds of machines from microcontrollers to servers. I was thinking there's not much that can be done with it that stands out anymore. Then, it dawned on me that I haven't seen it used for one thing: supercomputers. A BASIC dialect along the lines of High Performance Fortran, X10 or Parasail shouldn't be too hard to do. With BASIC, the language always hid many details anyway. I just think it would be hilarious if the next Watson, simulated brain, etc. ran on a gazillion core supercomputer coded efficiently in... "ParallelBASIC."

Critics: "It's programmed in WHAT!? I mean... they aren't bright enough to even give it a good name. They don't make it sound like an element, a famous scientist, a word that would impress Comp Sci majors... they just combined "BASIC" and "Parallel." And this thoughtless language was allowed to execute on a $100+ million dollar machine? Aghhhh!!!"

So, I typed "Parallel BASIC" into Google just in case and saw HPC BASIC. (!) Turned out to be a reference to Julia language alluding to it being the modern BASIC of scientific programming. Seems my idea of bringing an actual BASIC to supercomputing is still novel. Or it shows that only one guy is crazy enough to even publish such nonsense on a public forum. Could go either way.

Freenet always interested me more due to it being a distributed data store. However, I bet I'd have a paper on it too if it was getting attention from smart researchers like these. Usable & robust anonymity is just really hard to do.

Comment from Figureitout on 2014-05-08, re "low flying new born bouncing into somebodies iPad etc and taking their first selfie":
Clive Robinson
--Oh you outdid my joke! :p Thanks, what I was meaning was on-board (I could send you pictures somewhere); I can't immediately find a schematic for the device, so I suppose I could try to reverse engineer it... It won't be a NiCad either, but a Li-ion, so a little more complicated and more dangerous (I don't like the prospect of an exploding battery). So I'll see, if I get a fresh battery, whether it can power up just off that (fingers crossed); and I just need to make a charger for it. Here's a design:

While searching for a file encryption program, I came across this. Bruce, you would get a kick out of this:

HideIt! Pro belongs to the military class of cryptography systems. It utilizes the RSA 128-bit per key algorithm. For every password phrase you enter, HideIt generates 48 more passwords and applies the algorithm to all of them.

This means that the total encryption scheme is utilized by no more than 6144 bits, ensuring your privacy.

Comment from vas pup on 2014-05-08:

http://www.euronews.com/2014/04/29/driving-into-the-future/
New emotion detection application with substantial potential for a wide range of security applications (e.g. guys in ICBM silos on controls, pilots of commercial airlines - no more 370 story, interrogation for intel purposes (not for court as evidence). Time and again, any new technology is NOT a substitute for LEOs thinking with their own heads, but just an aid).
Comment from Noah Löfgren on 2014-05-08:

Press Release | "United States of Secrets": How the Government Came to Spy on Millions of Americans

You have two generalised choices when it comes to the chip tools: VHDL and Verilog.

The problem with Verilog is it's more akin to a traditional programming language, and most beginners make the mistake of using it just like a programming language. The trouble is programmers use loops and reentrant code as standard; hardware does not, and this causes huge netlists that are oh so slow and invariably don't work the way a programmer expects.

VHDL has other issues, but on a gate-by-gate basis it's easier (if more long-winded) to pick up and tends to produce reasonable and working netlists even for beginners.

As I indicated I'm very "old school" and do gate designs in my head with paper and pencil, I then chuck it at a keyboard jocky to bang it into VHDL to get a simulation out that I then run a jaundiced eye over. It's not the way you should use such tools but old habits die hard for various reasons, one of which is the human brain can do trade offs "as they go" which untill recently the tools could either not do or not do well... There is a running joke about my abilities to beat CAD tools because I tell younger engineers with tracking and other problems "If I was you I wouldn't start from here..." (just like the farmer leaning on the gate in the original joke when asked by a couple for directions).

If you are keen on rolling your own, I'd advise you to get an FPGA demo board from a manufacturer that supplies free tools where you have the option of both VHDL and Verilog; start with VHDL and only when comfortable with that have a go at Verilog.

The other route is to get a book with a CD-ROM of tools. One such is "Fundamentals of Digital Logic with XXX Design" by Stephen Brown & Zvonko Vranesic. They do both a VHDL version and a Verilog version (substitute for the three Xs in the title). However, take care when buying: the prices vary wildly from around 40USD to a couple of hundred (why I'm not sure, but the fact the cheaper versions are marked "student" gives a clue it's the same book priced for companies or students at what the market can bear)... Oh, a starting ISBN is 978-0071-2688-06.

"and maybe have a story where you popped out the womb, w/ umbilical cord still attached you did the 'moonwalk'"

No --I'm too old-- but I do have one about my son very nearly bungee jumping --on his umbilical cord-- off the end of the gurney when he shot through the midwife's hands (nearly knocking the camera out of mine). Having just been caught by an arm and a leg, my son, arm outstretched, pointed an accusing index finger and started to bawl his head off... apparently others have similar tales, so I'm waiting to hear one about a low-flying newborn bouncing into somebody's iPad etc. and taking their first selfie.

Anyway, that aside, charging of rechargeable batteries. The simplest circuit is a mains transformer, a bridge rectifier and a resistor (but importantly no smoothing cap). Most but not all rechargeables have two charging currents specified: the first is the standard charging current, the second is the trickle/holding charge current. Both are assumed to come from a constant current source (which old-style chargers almost never are).

The two things you have to scale are the transformer output voltage and the series resistor. It's usually safe to assume that the bridge rectifier has two silicon diode drops of ~0.7V, thus a total of 1.4V. Transformers are normally rated at their full-load RMS voltage, with a peak voltage of root two times that.

The trick is to pick a transformer voltage where, on the RMS rating minus the bridge drop, the resistor would give the standard charge current if the battery were shorted out, and power-rate the resistor to twice the short-circuit power. If you don't know what the standard charge current is, assume one tenth of the mAh rating of a cell (so about 200mA for AA cells); this will charge the cells in about 24 hours. Check that the peak voltage minus the bridge drop and fully charged cell voltage will give an RMS current that approximates the trickle/hold current (if not known, assume around a quarter of the standard charge current, so around 50mA for AA cells).

So a quick approximation as a starting point: pick a transformer with the RMS value equal to the full-charge cell value plus the bridge drop, which for three NiCads is 3.6 + 1.4 = 5Vrms. The resistor value is going to be 3.6/0.2 = 18R with a power rating of 2*3.6*0.2 = 1.44W. That gives a peak over-voltage of 7.071V - 1.4 - 3.6 ~= 2.1V, giving the trickle charge as 2.1/18 * ~0.5 ~= 58mA or less, which is about right.
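(For anyone who wants to plug in their own numbers, here's the same back-of-envelope calculation as a small Python sketch. The values are just the example from the paragraph above -- three NiCads, 200mA standard charge -- not a recommendation for any particular battery.)

```python
# Back-of-envelope charger sizing, following the rule of thumb above.
# Example values: three NiCad cells (3.6V charged), 200mA standard charge.
import math

bridge_drop = 1.4          # V, two silicon diode drops of ~0.7V each
v_cell_full = 3.6          # V, three NiCads fully charged
i_charge = 0.2             # A, standard charge current (roughly C/10 for a ~2000mAh AA)

# Transformer RMS voltage: full-charge cell voltage plus the bridge drop.
v_rms = v_cell_full + bridge_drop                    # 5.0 Vrms

# Series resistor: gives the standard charge current into a shorted battery.
r_series = (v_rms - bridge_drop) / i_charge          # 18 ohms
p_rating = 2 * (v_rms - bridge_drop) * i_charge      # 1.44 W, 2x the short-circuit power

# Trickle check: peak voltage minus bridge drop and full cell voltage.
v_over_peak = v_rms * math.sqrt(2) - bridge_drop - v_cell_full   # ~2.1 V
i_trickle = v_over_peak / r_series * 0.5                          # ~58 mA

print(f"transformer: {v_rms:.1f} Vrms, resistor: {r_series:.0f} ohm / {p_rating:.2f} W")
print(f"trickle charge: {i_trickle * 1000:.0f} mA")
```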

Thanks for the tips. Yes, all the timing issues are what I kept seeing in articles on the subject. It also appears synthesis tools are horrid for hardware compared to software. The article argued quite well why it was hard in general. I think better tools & hands-on development of complex projects in academia are the only way new generations will even begin to catch up. Funny that there's at least one IT sub-industry still dominated by the old folks. And that doesn't involve COBOL. ;)

re Harvard

A truly pure-Harvard design seems to avoid certain problems. Yet, I'm not convinced it's even necessary given what I know of tagged and capability architectures. Many problems in software remained unsolved even with a Harvard architecture. The solutions to many of these problems can also be used to protect code on a von Neumann machine. Harvard essentially creates two segments, code & data, with no further granularity. Machines like SAFE & CHERI can do so much more than that. Additionally, we've had so many exemplar von Neumann machines to build on & so few Harvards, dare I guess, that we're more likely to screw up a secure Harvard architecture project.

Note: This is one of those topics that my opinion can change wildly day to day, month to month, and year to year.

re microprogramming & compilers

Good that you brought up compilers as it's exactly what I was looking into yesterday. I found these gems that show microprogramming can not only be made easy: it can involve almost no microprogramming. :)

So, higher-level microprogramming certainly can be done. All these papers are old, too, so I'm sure modern tools could push the envelope even further. The only question is "Would it be easily done in the kind of chip I described and for abstract machine implementation?" The chips [mostly] use common functional units and the abstract machines can be modeled as state machines. So, I don't see [yet] what would prevent HLL microprogramming from being used there.

You can't live in a world without gates, be it our human world or the digital world, so you might as well get to grips with them ;-)

The reality of what you are likely to come up against is a logic cell that in essence is a programmable map (memory) and multiplexor (MUX), and in some cases a latch/flip-flop to give register functions etc. All of which you program with your required functionality. Amongst the many advantages of such cells is a constant delay time, because you always end up going through a fixed number of gates (i.e. the AND-OR array map and MUX).
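(To make the "programmable map plus MUX" picture concrete, here's a toy model of mine, not anything from the original: a 4-input lookup-table cell where the 16-entry truth table is the programmable memory and the inputs act as the MUX select lines, so any 4-input function costs the same fixed delay.)

```python
# Toy model of an FPGA-style logic cell: a 4-input lookup table (LUT).
# The 16-entry truth table is the "programmable map"; the four inputs act
# as MUX select lines that pick one entry, so the path through the cell
# (and hence its delay) is the same for every function you program into it.

def make_lut4(truth_table):
    """truth_table: list of 16 bits, indexed by (a,b,c,d) packed as a nibble."""
    assert len(truth_table) == 16
    def cell(a, b, c, d):
        index = (a << 3) | (b << 2) | (c << 1) | d   # the MUX select lines
        return truth_table[index]
    return cell

# Program the same cell type two different ways:
and4 = make_lut4([1 if i == 0b1111 else 0 for i in range(16)])        # 4-input AND
xor_ab = make_lut4([(i >> 3) ^ ((i >> 2) & 1) for i in range(16)])    # a XOR b, c/d ignored

print(and4(1, 1, 1, 1), and4(1, 0, 1, 1))       # 1 0
print(xor_ab(1, 0, 0, 0), xor_ab(1, 1, 0, 0))   # 1 0
```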

Back in the good old days when you really did play with gates, one of the most time-consuming things was working out the various delays that gave the higher-level logic delays and metastability criteria.

Back then your next step was working out your data flows around functional blocks and the registers used to drive them and store the results. This gave rise to what used to be called Register Transfer Language (RTL), which is what you wrote your layer-one microcode in. However, it also defined the way your CPU functioned. Microcode can be as simple as a large memory map or as complex as a convoluted logic state machine. The former tends to lead to a very wide control bus, the latter to slow throughput.

As IBM discovered, the wider the control bus the more you can do per clock cycle, but more importantly the more quickly you can correct microcode mistakes with minimum disruption. I would urge you to consider this aspect quite seriously if you do end up "rolling your own", not just for the aforementioned reasons but due to the "heat death" of logic. Basically we've reached the point in miniaturisation where we are now actually "power limited", not limited by geometry etc. As it happens, memory is about the lowest power for any given area of silicon and thus is almost "free" when it comes to power dissipation. It's this which has given rise to the large increase in simple cache memory we see on CPUs these days, along with other memory types.

Layer-one microcode used to be the simple operations you had to do every CPU cycle to get data in and out of registers, from or to the external CPU buses and the internal functional blocks of the ALU etc. It usually did not provide actual assembler instructions; this was done by layer-two microcode in simple RISC architectures and higher layers in complex CISC architectures. In the former an almost pure memory-map system is possible; in the latter it would have been seen as too costly in real estate, and layer two or three would have seen a state machine. These days some instruction decode and control sections rival the CPUs of a generation or so ago in functionality.

Irrespective of the external control and data bus architecture, most CISC systems are internally Harvard architecture, because it makes the use of "go faster" pipelines and caches easier to implement.

One of the problems in CPU design is the length of time required by ALUs and other internal functions to complete. That is, an XOR between two registers is very fast, but an ADD or MUL is not, and gets slower the wider the internal data width is. The solution that used to be used was to throttle back the CPU cycle time to that of the slowest operation, which whilst simplifying the design makes it slower than it could be (this was usually acceptable due to the likes of incrementing the program counter register etc.).

Whilst many SoC systems for microcontrollers in embedded systems remain Harvard architecture throughout, the CISC CPUs used in more general systems join the code and data buses into one external memory bus so that programs etc. can be loaded more easily. This has unfortunate consequences for both security and high-level languages, making imperative rather than concurrent systems easier to implement.

With regard to TTA systems, yes, there are the problems you highlight, but they are often only of relevance in single-CPU systems. Single-CPU systems are, however, a thing of the past in general-purpose computers, and in the case of GPUs they are often more powerful than the main CPU in the system --when used correctly--; they are also more likely to be amenable to architectures that support concurrency, of which TTA is one.

Whilst TTA systems are more complex for programmers to get their heads around, so are the multiplicity of CISC assembler instructions, which few programmers will ever even attempt to get their heads around. The solution for some time now has been to let the high-level language compiler do the work, and the same would apply to TTA systems provided the high-level language supported concurrency at such a low level (which most high-level languages in use today don't).

This brings up the issue of at what level in the computing stack you become concurrent, and the answer is, as is often the case, a trade-off based on what you are doing...

For graphics and digital signal processing and similar, generally the lower the better, thus inside the CPU at the logic level below the microcode. However, for more general computing, above the microcode is where the sweet spot is likely to be, with the CPU core being the accepted imperative variety.

Comment from Figureitout on 2014-05-08:
Clive Robinson
--Up earlier in the thread I mentioned that I was unable to get the Cassiopeia E-115 to boot up. Well I finally got that f*cker to boot up; the solution turned out to be trivial of course. I know you're probably rolling your eyes right now, "kids these days", and maybe have a story where you popped out the womb, w/ umbilical cord still attached you did the "moonwalk" and got one of these old things to boot up w/ one hand. :p Maybe there are some things you could help w/, before I just go off on my own researching, but what does a typical charging circuit look like (there are a few candidates), b/c I can't find any schematics of course on this. Also, do you know about "thermistors" on battery terminals and why I would get a voltage reading of 0.18V over '+' and '-' w/ the battery in, while getting a reading of 2.8V over 'T' and '-'? Also I was getting 5.2V on a capacitor very near the AC power.

Basically, besides an old battery, which I'm certain of, I think there may be other circuit issues like the charging circuit, and I wondered if you ever replaced those before in a "DIY" way. The solution ended up being to simply use my digital power supply and a couple of wires to inject 3.7V directly on the '+' and '-' main battery leads; I know it sounds obvious but I was thinking that it needed contact w/ the thermistor lead too and maybe some other signal... but it didn't.

And right about now the MOD, Bruce, and a few readers are probably foaming at the mouth for me to STFU, asking what the security implications are here or whether I'm just chatting it up. I'll tell you:
1) Removable ROM chip on this device, would require some work but definitely doable w/ an engineering team

2) Every time the backup battery is removed all program memory is *supposedly* wiped. So you make a file and store it to a memory card, and now the memory card is what you need to protect.

3) According to the specs, no wifi and what really makes me happy...NO BLUETOOTH. 2 protocols I can most likely not worry about; but I still need to test this myself.

4) This is a commercial device that would require agents to "go back to the library" to find exploits; yet is still actually a very user-friendly device. It's even got frickin' Solitaire in the ROM.

I hadn't seen that. Really nice work they're doing. The Python crowd never ceases to amaze me at how many uses and tools they derive for the language. This tool should certainly benefit hardware designers, esp. in prototypes & verification. That it's still essentially an HDL means I can't use it without learning such things. Hence, my continued look at things like microcode, PALcode, IP integration, HLL-to-FPGA compilers, etc.

That's exactly the kind of intuitive response I was looking for. Thanks.

re concurrency

I've seen both processors and languages (ParaSail) that make the stuff easy while still allowing HLL languages or existing toolsets. So, we might not have to go entirely GPU on it. Plus, as far as crypto goes, it's usually one of the fastest components in a system, with others slowing things down. Fast primitives (e.g. Bernstein) or hardware acceleration of key primitives seem to suffice for it. It's why I've always loved the concept of FPGA logic in the chip or FPGAs on the board. Just push off anything that needs acceleration to them, while using the same SoCs or board gives volume pricing.

@ Clive Robinson

"As you have noted much has changed in that time in Europe. But if you think back to your own comments on how researchers are these days continuously reinventing the wheel over security that has done and dusted by the mid 1970s you might want to think if the authors perspective about the different research types is valid or not, and if not why not..."

It might have been correct back then. I wasn't in Comp Sci in the 70s so I can't really speak to it.

"Some years ago a new design of architecture was proposed called Transport Triggered Architecture (TTA) [1] which exposed the internal data transport busses of the CPU to "the programer". "

This is interesting in that I was recently looking at high-level microcoding as a solution to one problem. More on that later.

"The TTA CPU design is used but mainly in "Application Processors" but it's design started me thinking on it's security issues and how you could expand the design in a more "programmer friendly" way."

The pages I read on it note that it sucks for interrupts, preemptive threads, context switches and so on. My & DARPA's designs knock out two of these but others remain. There's also the issue that the programmer has to worry about timing of everything. Would be great for covert channel analysis, a pain in the ass for... everything else. ;)

So, on to what I've been thinking about lately. The problems of the systems, esp. causes of code injection, are well known. I've posted many architectures & chips that prevent many by good hardware design. The trick is: who wants to screw with several dozen full hardware projects at once? Additionally, for projects leveraging abstract machines (e.g. M-code, JVM), it's a new hardware project per language. Yet, I've seen two potential shortcuts: microcode & PALcode.

Microcode is the most obvious. Many of these CISC, safe, etc. processors are actually typical data-throwing machines underneath, maybe with a few dedicated components (e.g. a tagging unit). The microcode is used to effectively transform them into the other machine. Many changes to that machine can be done in microcode. So, it might be beneficial to just create a series of functional units that could emulate most of these processors, then make the microcode easier to write (tools or new languages). Then, people trying to improve & experiment with processors could start by microcoding existing hardware instead of learning everything about digital logic, etc. Speaking for myself, I thought the microcodes I've read were much more comprehensible than processor specs.

Similar idea with PALcode. Actually, you could say Alpha's PALcode was an implementation of my idea in a limited way. It effectively allowed the programmer to define new instructions out of existing instructions. These also executed as atomic instructions, a capability that would have *tremendous* effects in concurrency. PALcode was instrumental in the VAX Security Kernel project to get the kernel to run with security & performance. A processor with something like PALcode, albeit with power closer to microcode, could be very useful in these efforts.

Another component might be high-speed scratchpad memory only microcode or PALcode can access. The reason for this is to emulate aspects that aren't implemented in hardware yet. For example, the tagging engine might have state related to its job. If the RISC core has no tag unit, then microcode or PALcode can emulate one with the support of scratchpad memory, which might store tags or access rules. The scratchpad is just one idea, as I'm quite open to whatever gives flexibility while maintaining performance.
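(A rough, entirely hypothetical sketch of that idea -- no real ISA or tag format is being modelled: every access from untrusted code goes through a "microcode" check routine that consults tag rules held in a scratchpad only that layer can reach, so a missing hardware tag unit can be emulated until it exists in silicon.)

```python
# Hypothetical sketch of emulating a tag unit in "microcode": accesses made
# by untrusted code are routed through check_and_store/check_and_load, which
# consult tag rules held in a scratchpad that only this layer can reach.
memory = [0] * 64                    # ordinary main memory
scratchpad = {                       # microcode-private: one tag per address range
    range(0, 32): "data",            # application code may read and write here
    range(32, 64): "code",           # application code may not write here
}

def tag_of(addr):
    for region, tag in scratchpad.items():
        if addr in region:
            return tag
    raise MemoryError(f"untagged address {addr}")

def check_and_store(addr, value, requester="app"):
    # The emulated policy: application code may only write "data"-tagged words.
    if requester == "app" and tag_of(addr) != "data":
        raise PermissionError(f"store to {tag_of(addr)} region at {addr} blocked")
    memory[addr] = value

def check_and_load(addr):
    return memory[addr]

check_and_store(5, 42)               # allowed: address 5 is tagged "data"
print(check_and_load(5))             # 42
try:
    check_and_store(40, 99)          # blocked: address 40 is tagged "code"
except PermissionError as e:
    print("blocked:", e)
```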

So, to recap, the design effort must be kept minimal by continuously producing components that can be reused or extended elsewhere. The common functional units of processors will be created in HDL, of course. At least one RISC setup with components used 99+% of the time is integrated in HDL with microcoding and/or PALcode ability. That can, by itself, become any number of CPUs, IO-CPUs, or ASIPs. Hardware extensions can initially be implemented by VMs partly written in microcode or PALcode, and later written in HDLs extending the hardware. There might also be FPGA cells connected to it for this purpose. And all of this is to be written in an HDL that supports open source and/or provably correct synthesis tool use.

@ yesme

Definitely interesting work. Btw, if you want modern, the proper one to look at is A2 Bluebottle. It's the latest incarnation of the system.

The principles of Hoare & Wirth are certainly sound. One problem with Wirth, though, is that he's more focused on making the compiler and language simple. That means that he's more likely to leave off a great security feature & push the concern into the application. That application designers can't be trusted to ensure security is one of the reasons we're discussing new hardware to begin with. So, certain critical features that support safety/security abstraction for developers must be implemented in hardware & always on. This increases complexity. So, as I said before, we can simplify things wherever possible, but adding security will add complexity.

An example is the Intel i432. It had plenty of great features for making OS and app designers more productive, maintainable, and secure. Yet, they put so much stuff into it that it was only 25% as fast as competitors. It utterly failed in the market. So, there's obviously a cutoff point for extra complexity. With the i960MX, they transformed the i432 by trimming off what fat they could. The end result was a simpler, RISCy architecture that performed well & supported plenty of key i432 features. Yet, it was more complex than competing RISC designs as it needed the extra capabilities.

So, in security engineering, we have to make tougher tradeoffs than those merely wanting speed or small compilers. We have to keep the machine easy to implement (HW), easy to manage (OS/runtime), and easy to develop on (apps/compiler). It's tricky and the solutions aren't always as simple as many would like. ;)

HOW I SEE IT... NO AUMF LEGITIMACY

The case exposes so much that is wrong with our government and the courts' inherent tendency to protect the government rather than deliver justice to the citizenry. First, under the color of the AUMF, the courts are essentially ignoring the explicit stricture in the Constitution requiring a "Declaration of War". If Congress has had a problem with this concept (effectively since the Korean War) then amend the Constitution--constructing a text to make the subtitles match the narrative of the movie while ignoring some of the words is not useful--especially when war is the LAST and MOST POWERFUL instrument of the state.

The reason the AUMF is flawed is based on the constitutional purpose for requiring a "Declaration of War" from Congress, and how this flaw has been used to abuse the citizenry.

Congress is vested with the power to declare war; kings are not to be trusted with amassing armies for whatever purpose.

The declaration of war is required to enable the full force of the state to be used in repelling invasions or insurrections. The bombing on 9/11 represents an act of war, not an invasion or insurrection. The constitution is clear on this.

The declaration of war confines the use of armies by a would-be king (the president). If the president is given power akin to war powers without enumerating it as war--then what restrictions can be overcome? This is dangerous--this is exactly why the statement in the Constitution is stated so clearly.

Declaring war is not like rationalizing a sexual predilection for prostitutes. For example: a.) "You are authorized use of a non-traditional sexual liaison." - [affair]

b.) "You are entitled to the rightsand appearance of a traditionalrelationship." - [marriage]

If the context of what constitutes a declaration of war can be changed, and instead the use of a power is merely enumerated--then employing the military to kill political enemies can be justified.

The colonists truly feared the immense power of central governments--King George had given them plenty to fear. One person, based on their attitude, could move fleets with armies to their shores. Sidestepping the requirement to specifically declare war allows tertiary use of the military, which thus becomes an instrument of raw power.

How the constitutional purpose (the spirit of the law) gets subverted by the courts and judges is shameful--if not an indirect form of treason, as it allows for the overthrow of the republic.

Comment from Wael on 2014-05-07:
@ Nick P, @ Clive Robinson,
I liked the joke. As for concurrency, I tend to believe there should be as few locks as possible or performance could degenerate to a level that defeats the purpose of concurrency. I am referring to tens of thousands of hardware threads running on a multicore GPU. I would rather spend effort in this area than on finding where "bad locks" are. Effort here means highly concurrent implementations of crypto algorithms rather than the serial implementations in use these days.
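(As a toy illustration of the "concurrent rather than serial" point -- CPU worker processes standing in for GPU hardware threads, and the chunking scheme invented for the example: each worker owns its own chunk, so the whole thing needs no locks at all.)

```python
# Toy illustration of lock-free data parallelism: hash many independent
# chunks concurrently. CPU processes stand in for GPU hardware threads;
# each worker owns its chunk, so no locks are needed anywhere.
import hashlib
import os
from concurrent.futures import ProcessPoolExecutor

def hash_chunk(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

if __name__ == "__main__":
    data = os.urandom(1 << 20)                        # 1 MiB of example data
    chunk_size = 1 << 14                              # 16 KiB per independent chunk
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with ProcessPoolExecutor() as pool:
        digests = list(pool.map(hash_chunk, chunks))  # one result per chunk, in order

    print(f"{len(digests)} chunks hashed concurrently")
```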
Comment from yesme on 2014-05-07:
And one final erratum:

About Wirth's FPGA microcode[1]:

Clearly “real”, commercial processors are far more complex than the one presented here. We concentrate on the fundamental concepts rather than on their elaboration. We strive for a fair degree of completeness of facilities, but refrain from their “optimization”. In fact, the dominant part of the vast size and complexity of modern processors and software is due to speed-up called optimization. It is the main culprit in obfuscating the basic principles, making them hard, if not impossible to study. In this light, the choice of a RISC (Reduced Instruction Set Computer) is obvious.

He is right of course. The OpenSSL accelerated assembly crypto code for AES on the 586 platform alone is 2980 lines. The AES crypto has 15 such accelerated assembly platform implementations.

Comment from yesme on 2014-05-07:
One final remark about simplicity and deficiencies.

The brilliant Tony Hoare once said[1]:

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature.

He also said:

[About Pascal] That is the great strength of PASCAL, that there are so few unnecessary features and almost no need for subsets. That is why the language is strong enough to support specialized extensions--Concurrent PASCAL for real time work, PASCAL PLUS for discrete event simulation, UCSD PASCAL for microprocessor work stations.

[About Ada] For none of the evidence we have so far can inspire confidence that this language has avoided any of the problems that have afflicted other complex language projects of the past. [...] It is not too late! I believe that by careful pruning of the ADA language, it is still possible to select a very powerful subset that would be reliable and efficient in implementation and safe and economic in use.

The recent discussions about Oberon made me look more into the language and OS. Of course the OS is now 30 years old and the GUI part compares more with Rio from Plan 9, the first versions of Turbo Vision and the ncurses library than with what we expect from an OS today, but it is still a sane thought.

The 2013 updated documentation about Project Oberon[2] shows some really mind-blowing features and numbers. It clearly shows that solving the problems at the core results in significant simplicity. Because Wirth wrote 3 pages of FPGA RISC microcode he could reduce the compiler to less than 2900 lines of code[3]. This is the Oberon compiler!

And even on the rather slow hardware the compiler and OS compile in 3 and 10 seconds! The OS now has NO assembly code![3]

To bring back my first Tony Hoare quote:

"so simple that there are obviously no deficiencies" -> Oberon

"so complicated that there are no obvious deficiencies" -> OpenSSL, C++, GCC (and a long list)

With regard to the letter, I linked to it for the historical context; it was written quite some time ago and as far as I can tell it is fairly accurate in its outlook. It was written a little while after they had helped on the Burroughs computer design.

In essence it shows how Europe has moved from its postwar position of the formative years of the 50s & 60s to the US model over the next thirty years or so.

As you have noted, much has changed in that time in Europe. But if you think back to your own comments on how researchers are these days continuously reinventing the wheel over security that was done and dusted by the mid-1970s, you might want to think whether the author's perspective about the different research types is valid or not, and if not, why not...

If you look at hardware and basic architecture, nearly all the research from the late 80s has been on how to make a non-concurrent, out-of-date hardware model stay on Moore's Law, and only in recent times have people in general finally realised that parallel computing with multiple CPUs/cores, and all its concurrency issues, is the way they have to go, as all the tricks to make large imperative-only systems have hit the ROI buffers.

Some years ago a new design of architecture was proposed called Transport Triggered Architecture (TTA) [1] which exposed the internal data transport busses of the CPU to "the programmer". In some respects it is a midway point between RTL and assembler, and if used correctly offers a lot of concurrent processing inside a single CPU.

It is however a bit of a nightmare for 99.99% of programmers, as in many ways it breaks the Standard Model which we are taught in higher education.

The TTA CPU design is used, but mainly in "Application Processors", but its design started me thinking on its security issues and how you could expand the design in a more "programmer friendly" way.

I don't think it's a waste of time. I am also not sure this is the most suitable approach. Embedded systems or otherwise is, I think, orthogonal to the proposed method. Discrete control theory is not my cup of tea. If I chose to use it, I would lean towards applying such methods to the hardware (cores) and the lower layers of the OS (scheduling, resource allocations, ...) -- not to the application layer components. Still, if it seems like a waste of time today, it may be a different story in the future. I know this answer is not as analytic as you'd hoped, but it's not within my area of expertise -- if I ever had any...
Comment from Figureitout on 2014-05-06:
Wael: "adhering to some basic security principles can't hurt"
--Oh I do, for the very few things that haven't yet been corrupted; basically it boils down to discipline (which costs you time and friends). Doesn't matter much when you have agents breaking into your home (still attacking; this time the psychopath put out his cigarette in droplets of shower water in my shower) and they get the phone company to route all cable traffic straight out of my home. I don't know what judge is authorizing this behavior, but it's continuing and it's straight-up police state behavior. I think it's been long enough that I've demonstrated I'm not a threat; in fact I'm trying to secure our systems by starting w/ a system small enough that I can wrap my head around. Basically says to me that I've struck a nerve (agents should never let their emotions compromise their cover) and I'm such a difficult target that they had to resort to such extremes and get pleasure beating an already dead horse to a pulp.

Nick P
--You didn't give a specific protocol, I already was doing it and knew what you told me, and there are a lot of unanswered questions. I'd rather not bring others into my hell; it's not something I just bring up, and I'd have to conduct a non-intrusive background check to check for obvious signs of cops or agents. I've only told one person in real life (and the internet now...) and even still I'm not sure; I'm at my wit's end at this point if all my closest friends ever betrayed me for agents... The main reason I'm doing this is b/c I think it would be very beneficial for others to recover your systems from a serious attack (think an attack that went undetected for months or more). I'd need at least 2 computers (an unsafe/untrusted 'net one and a shielded-airgap-24/7-never-leaves-my-sight one). Somehow... I need to have my infected hardware "touch" my "fresh" computer and not pass along whatever this is; not smart when I don't know what it is or, more importantly... where. Ideally I need a device that takes in data to an isolated insecure area, and prints *ALL* the data to a screen where I manually check all data page-by-page to another area; that will be so hard I'm not sure it's possible the way I envision it. I need to reflash the firmware in my router (not happening while in daddy's basement), I need a physically new location to access the net since getting a fresh copy of software just leaves more questions and is impractical sadly. I need a lot more but I'll save it for a more refined post that I'll link to in the future.

That's not trivial and I know it's highly unlikely I'll succeed. Still going to try though.

OT: For: Clive Robinson RE: Catching attackers w/ honeypots
--Figured I'd return the favor, I *always* return the favor. :p Interesting breakdown of a server intrusion which is then used for DoS. Lots of college students should be able to follow what's happening for the most part if you've taken a Unix class; and now we have another attack to test and try to defend against. More of these articles also let the attackers know... what seems like an easy target may be a little too easy... haha

Comment from Benni on 2014-05-06:
Here is something for the user "sceptical" or those who believe the USA has a free press:

For me as a German, these statements by Hillary Clinton are among the most awkward things that I've seen.

I could somewhat understand it if Putin set up a propaganda news TV station, since this is generally something that an autocratic and anti-democratic government does, but a developed country with a working democracy like the USA should never desire to "win the information war" by influencing the media to send propaganda. If a democracy behaves like that, then one might expect worse...

The Framework, which I've termed the Data, Document, and Information Management Policy Framework (DDIMPF), is the set of DDIMP components that are parametrically co-linear to a b-tree of nodal lines of process/procedure/function (Bruce has used a similar risk analysis tree to describe a coherent model--but it's not that).

The ability to abstract loosely or tightly coupled DDIMP components, with boundaries that allow integral or algebraic geometries with regard to complex operational chains, is a highly efficient method to collapse a series of functional objectives (quality control, auditing, change and configuration management, release and publication, etc.). The graphic I have for the model seems daunting -- and maybe it is -- I haven't had anyone with enough cycles or sufficient background to review the work.

And no, the simple answer is this: a "business" or organizational process model that is highly formalized, using data, documents (in a broad sense), information, and knowledge to establish a conformal (in broad terms) system that is responsive and efficient. I see it as a completely new thesis in process management. It sprang from an analysis of the current situation we find ourselves in -- I began the development and design process for this in October of 2012, understanding that the government had bedded and co-opted the tech community... there needed to be an organizational response to a failed command-and-control process management model that leaves us "compartmentalized" in many ways.

I looked at SystemC again. It's definitely interesting for prototyping, and I found this tool in the process. Btw, if you like Occam, check out this OS written in Occam.
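For anyone who hasn't touched SystemC, the prototyping looks roughly like this -- a toy 8-bit counter and a tiny testbench, nothing more. This is only a sketch assuming the standard Accellera SystemC library is installed; the module itself is purely illustrative.

#include <systemc.h>

SC_MODULE(Counter) {
    sc_in<bool>        clk;      // clock input
    sc_in<bool>        reset;    // synchronous reset
    sc_out<sc_uint<8>> count;    // current count value

    sc_uint<8> value;

    void tick() {                // runs on every rising clock edge
        if (reset.read()) value = 0;
        else              value = value + 1;
        count.write(value);
    }

    SC_CTOR(Counter) : value(0) {
        SC_METHOD(tick);
        sensitive << clk.pos();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);        // 10 ns clock
    sc_signal<bool> reset;
    sc_signal<sc_uint<8>> count;

    Counter c("counter");
    c.clk(clk); c.reset(reset); c.count(count);

    reset = true;  sc_start(20, SC_NS);    // hold reset for two cycles
    reset = false; sc_start(100, SC_NS);   // then let it count for ten cycles
    return 0;
}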

The framework is interesting. The use of VHDL surprises me; I'm guessing the framework is about hardware rather than software design? It's hard for me to imagine doing software effectively in VHDL. The Flow-based Programming scheme I linked to above is the closest thing to that. It seems that integrating that software scheme with a hardware language might produce some interesting results. I particularly think it might help in hardware/software codesign of the kind people do with FPGA accelerators in, e.g., Mitrion-C. Making software map more easily to hardware avoids many problems of imperative programming.

Re control theory

Thanks for the Adaptive Control Theory reference. I'll look into it.

@ Wael

"The desertation title is "Software failure avoidance using discrete control theory", but the main focus was deadlocks (the second problem). Then at the end of the presentation, in the discussion section, a question is asked: To what extent can tools, e.g., testing, static analysis, runtime analysis, and control synthesis, help eliminate software bugs?"

I know the paper focuses on deadlock & asks a BS question at the end. The point of bringing it up was to see what people with embedded experience think about applying Discrete Control Theory (esp. validation & synthesis) to software to make it robust more easily. Do you see any potential in that, or is it a waste of time?
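To be clear, I'm not suggesting the dissertation's controller-synthesis approach is this simple; the sketch below only shows the general flavor of runtime deadlock avoidance, in plain C++17 -- std::scoped_lock acquires both mutexes using a deadlock-avoidance algorithm, so even opposite acquisition orders can't wedge.

#include <iostream>
#include <mutex>
#include <thread>

std::mutex account_a, account_b;

void transfer_ab() {
    std::scoped_lock guard(account_a, account_b);   // both locks, deadlock-free
    std::cout << "A -> B\n";
}

void transfer_ba() {
    std::scoped_lock guard(account_b, account_a);   // opposite order, still safe
    std::cout << "B -> A\n";
}

int main() {
    std::thread t1(transfer_ab), t2(transfer_ba);
    t1.join();
    t2.join();
    return 0;
}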

I knew who it was immediately because the page loaded & the tab had his name in it. So much for the surprise... (rolls eyes)

The first few paragraphs about made me want to reach across the Atlantic, slap him, and say "Get to the point!" The points he makes are a mix of possibly sound observations and attempts to bash America. I wish he'd leave out the latter, as I'm genuinely interested in comparing and contrasting various nationalities' approaches to software. So, I'll comment on the points he made:

Not sure how that works. A person's goals, personality, social skills, & working style have more influence on this than anything. How far ahead we look is minuscule in comparison. Making some rules about the work environment (or project or whatever) that let diversity work for rather than against the effort is quite beneficial. The rule about honesty & clarity he had, for example, is a good one.

- NSF funds work with short term focus

I think he's right about the funding, yet wrong about the result. We do know open-ended, long-term, fundamental research into fields will typically produce the biggest breakthroughs. Yet most useful R&D results are a series of gradual improvements to existing work. Many of the papers I've posted here were NSF funded, which shows the organization is producing results. Also, long-term work still happens, but it's broken into several short-term efforts with associated deliverables. Whether that is good or not is debatable. Yet even Karger said that in high assurance work it's best to ensure long-term work keeps producing intermediate deliverables with value, to justify ongoing investment. That was a rule for commercial projects, though.

I'd be interested in knowing how European countries and companies handle R&D in comparison. Oh, and one more point: the NSF isn't the only game around. Lots of governments, nonprofits, and for-profit companies fund research that's not expected to have an immediate payoff. Dare I say certain universities' Comp Sci R&D has more of that than the other kind.

- scientists in America being considered eggheads and getting less respect

That doesn't happen in European countries? People don't think of techies as nerds or dispute what they say for political reasons? If so, then this is a true difference. However, the author is a little off in that the effect depends on the area. Different locations in the US have different levels of respect for and trust in science. I mean, look at how many scientists are funded in this country. We do *tons* of science, honest and fraudulent, useful and moronic. We're flexible. :) Yet the public is quite detached from it.

Maybe it's because science is wrong so often & Americans expect it to get answers right. Maybe Americans just don't identify with it, hence treating it like a separate social class. Who knows. There is an effect like he described in most of the country, though. I suspect our poor educational system combined with the incentives of our economic system are the biggest culprits. There's little motivation to push reason, science, and honesty when it pays off in so many ways to do otherwise.

- Europe is Platonic, USA/Canada more pragmatic

We're definitely pragmatic as a whole, with sprinkles of Platonic. I'm not sure how Europe is or if it varies by country.

Those are definitely differences. There are empirical ways to handle soft sciences to a degree, so they *are* sciences in that respect. If anything, this difference risks Europeans getting left behind in those fields if prejudice prevents innovations. I've seen (and used) plenty of interesting results from soft sciences. I'd love Europeans to put in more effort, as diverse perspectives are very important to the soft sciences.

- gripe at 'integralism'

It almost seems like the claim could be leveled directly at me & my security critiques here. ;) I see the value of the method he pushes. Yet, in our field, context & integration are utterly important to achieving the goals. If anything, they're where some of the worst problems happen. Not properly anticipating them while focusing on one aspect can lead to trouble. That said, being able to bring razor-sharp focus to one aspect of a design in isolation is quite useful. There's a balance here that's hard for me to articulate. I'd have to think more on it.

In America, universities are seen as serving three functions: improving oneself; learning job skills to make more money; and giving parents of teenagers at least 2-4 years of peace. The claim that what companies demand affects what is taught is often, but not always, true. Recall all the articles written in the US griping that you have to unlearn what you learned in college. If the author were correct, that wouldn't be necessary, as college would be preparing people for work.

In reality, it depends on the institution & instructor. There are some that directly tie their offerings to in-demand job skills. For example, one community college nearby offers COBOL, RPG, etc. courses because the biggest employers need those skills, & that's the *only* reason for it. The author hits the nail on the head there. Yet at some other schools they teach engineering-style design methodologies, Scheme, etc. -- the total opposite of industry. Our institutions have also done plenty of cutting-edge work in languages, tools, software engineering, etc. They push the envelope every day whether industry wants a given innovation or not. So, if anything, the author once again tries to misrepresent American institutions as homogeneous and inferior.

- American Comp Sci uses same language, publishers, manuals, etc

This seemed right when I first read it. Yet we have to remember that the field over here is split among academics, hobbyists, and professionals across many areas. The different motivations & environments mean there's all kinds of stuff out there. Look at ACM & IEEE: those people are similar, at least in how and where they present. The rest -- from focus of work to skill to practicality -- varies quite a bit. Much of our most successful language work came from hobbyists, not ACM/IEEE. And their documentation approaches aren't similar. Overall, this claim of his is false.

See a pattern emerging? Have you figured out why he keeps getting it wrong? I'm going to shortcut to the answer: he falsely assumes computer science in the US is homogeneous. It's not. Our culture produces plenty of diversity, dare I say more than in Europe. In a company or organization, conformity is often expected. In hobbies or the marketplace, differentiators are preferred. I mean, there are certainly standards and traditions that many conform to. Otherwise, they try to be different, which makes the author's homogeneity-based claims fall apart. He's intellectually trying to fit a round peg in a square hole because he desires to think of the peg as square.

- difficulty of programming & how it's like math

He's got some decent points on that. Yet I don't think we have to jump right to it being about math. It's really about proper abstractions that map what's in our head to what's on a machine. The choice of language, tools, and engineering method solves these problems. I think the author has a strong math background and it's making him see programming as a mathematical thing. I also think certain programming work, esp. in tools & scientific computing, can benefit from a mathematical perspective or is straight-up mathematical. Yet the success of COBOL et al shows that lay people without a math background can write useful apps that work reasonably well. So, programming being doable with almost no math knowledge would seem to reinforce my point that, as far as the coder is concerned, one doesn't inherently need the other.

- John von Neumann's habit of describing systems & parts in anthropomorphic terminology; adopted in the USA more than Europe.

I'm not sure what he's talking about. I'd like some examples. Maybe when people say "this app talks to that one," "remembers," etc.? OOP & agent-oriented programming might do this too, albeit with benefits. Anyway, most developers I know & books I read talk about software like it's something we produce that acts on data, does work, & interacts with the user via an interface. That's more mechanical than anthropomorphic.

Come to think of it, there *is* a trend to think of business systems as a sort of organic thing that evolves and adapts to a changing environment over time. It's an interesting metaphor with some claimed benefits. Far as I know, it wasn't prevalent in his time.

- to forget that program texts can be interpreted as executable code

That's all we think about them, except for "code as data" people. I think he was just hanging around with some odd people. Or things were different back then.

- his trouble with LISP 1.5

That was HILARIOUS. It's ironic he had so much trouble with a radical, academic language rooted in mathematics after he's consistently implied Europeans would more easily learn non-standard or math-focused tech. Open mouth, insert foot.

- his recollection of ACM visit

Entertaining. Welcome to America haha.

- going through motions to please sponsor in America, not in Europe

This is a common thing in America. Sponsors don't get courtesy/preferential treatment or say over direction of research in any European countries? That would be a major difference.

- Americans having a greater capacity for dishonesty and honesty

I'll buy that claim just because it's a logical outcome of our Constitution, economic model, and culture. Put them together and you can get this result.

END OF REVIEW

Overall, besides a seeming prejudice corrupting his analysis, my main gripe is that he compares America to Europe. I think this is a bad idea. Continents don't make software. Continents don't write or review mathematical proofs. Continents don't do much of anything cohesively aside from things like NAFTA and the EU. In our field, results are driven by individuals and groups. The proper comparison is between them, not between countries or continents.

I'll illustrate that with a few groups in the US and UK. Microsoft throws garbage together, ships it, and uses lock-in to keep making money. For a long time, they used monolithic architectures with poor reliability & security traits and little code inspection. On the other side, Green Hills developed software and tools with a strong focus on good architecture (e.g., microkernel), low defects, and so on. In the UK, Micro Focus uses regular software practices to construct IDEs that ensure your beloved code cutters can keep writing more COBOL (!) with predictable quality. On the other end, Altran Praxis uses their "Correct by Construction" process to develop very low-defect systems in many safety-critical industries.

So it varies group by group, company by company, agency by agency. It's really about the group's goals, work ethic, tools, and attention to quality. These combine to separate the good IT from the bad IT. The country they're located in? If it has an effect, I'm guessing it's minimal compared to the others I listed. That's just a guess, though. I'm sure norms, cultures, and laws can create an environment that directly impacts software quality. I just have little hard data connecting these for European countries. Ours is individualistic, largely profit-motivated, and has little to no liability, so the quality is quite predictable. :(

]]>
2014-05-06T19:45:40Z2014-05-06T19:45:40Ztag:www.schneier.com,2014:/blog//2.5320-comment:5860355Comment from vas pup on 2014-05-06vas pup
@K2 • May 6, 2014 10:08 AM.
Because you and your interests are not part of their calculations at all, which are driven by unrestrained greed (that does not apply to Canadian bankers -- respected guys). The rationale is that you (and me, and I guess almost all other respected bloggers) are a source of financial institutions' profits only, and your interests/needs can only be somehow taken into consideration and protected by government regulation and oversight (I know most of you hate that idea, but there is no other option, because their self-regulation is your self-deception), until some future generation sees their attitude change to the Canadian bankers' type. By the way, the attitude of the latter was generated as a result of such regulation. ]]>
2014-05-06T16:55:00Z2014-05-06T16:55:00Ztag:www.schneier.com,2014:/blog//2.5320-comment:5858596Comment from K2 on 2014-05-06K2
What is the rationale for financial institutions not giving you the ability to set a "notify only" contact channel, one that would tell you if your account changed but could not be used to make such changes?
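(Roughly what I mean, as a hypothetical C++ sketch -- not any real bank's API: the notify-only channel simply exposes no operation that can change the account, so abusing it buys an attacker nothing but alerts.)

#include <iostream>
#include <string>

class NotifyOnlyChannel {                       // can announce account events...
public:
    void alert(const std::string& msg) const {
        std::cout << "[ALERT] " << msg << "\n"; // e.g. pushed out via email/SMS
    }
};

class ControlChannel {                          // ...only this one can change anything
public:
    void change_contact_details(const std::string& new_email) {
        std::cout << "contact changed to " << new_email << "\n";
    }
};

int main() {
    ControlChannel    control;
    NotifyOnlyChannel notify;

    control.change_contact_details("new@example.com");    // illustrative value only
    notify.alert("Contact details on your account were changed.");
    // NotifyOnlyChannel has no change_* methods, so compromising it
    // yields alerts, not control of the account.
    return 0;
}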
]]>
2014-05-06T15:08:28Z2014-05-06T15:08:28Ztag:www.schneier.com,2014:/blog//2.5320-comment:5857865Comment from vas pup on 2014-05-06vas pup
@Siderite May 4, 2014 5:45 PM.
Yeah, please check the history of this respected blog for the police state discussions/posts. You'll find good clarification on the subject matter. Regarding laws and the Constitution: there was a bitter joke in the Soviet Union.
A man came to the KGB office and asked: 'Do I have the right?' (to whatever was clearly stated in the Constitution). KGB: 'Yes, you do.' Man: 'But can I?' KGB: 'No, you cannot.' I guess you get the point: there is a huge gap between having a right and being able to use it without any trouble (being jailed, fired from work without any stated reason, blacklisted, emotionally harassed, etc.). Please read attentively Wikipedia's article on the Stasi (the East German secret police). But time and again, accordion laws with unclear/ambiguous content are at the root of letting the whole legal system (cop/LEO -> prosecutors -> courts -> detention facilities) apply them selectively and secretly.
@NobodySpecial. You may try, and then let us know how a Nazi uniform works with the Israelis. My point is they do not give that ....(you know, stinky stuff) for political correctness/profiling when it is about security and preventing acts of real terrorism. ]]>
2014-05-06T14:25:14Z2014-05-06T14:25:14Ztag:www.schneier.com,2014:/blog//2.5320-comment:5857463Comment from name.withheld.for.obvious.reasons on 2014-05-06name.withheld.for.obvious.reasons
A REHASH--TITLE: Failure IS the option (formerly called "Plan B")

War is about conquering and conquest, at least from the victor's point of view, and in modern history it has translated into action in the absence of sociopolitical resolve... sound familiar? By this definition Congress is fighting a war with itself.

The drug war -- a war upon itself, upon its citizens -- represents the complete failure to achieve victory and is not just a failed war... it represents the systemic institutional failure of governments and peoples. It's a failure to sufficiently enumerate a problem (the classification of drugs is subjective; the rational disconnect can be seen in the example of tobacco and alcohol).

Risk-based justification of actions taken to address a relatively abstract concept such as a war on things makes no sense. Seeking to conquer inanimate objects cannot be useful; understanding what produces the most effective strategy to address a "perceived" public ill should not be minimized -- but we minimize it anyway. The history of our inability to deliberate beyond the visceral is extensive -- American exceptionalism isn't.

Until we as a society can hypothesize and formulate solutions to problems, our actions amount to nothing more than the juvenile act of kicking sand in the face of reason resting on a beach blanket. Discourse around sociopolitical issues (the use of drugs, pills, food, drink, philosophy, etc.) is so narrow that no one is served by our current system(s), period.

Until intelligent life visits this planet, we are doomed to suffer the indignity of our own ignorance. This issue is reflective of our inability to understand, let alone address, causation in any number of our "abstract" system(s). Complete knowledge is a fantasy -- much as the NSA believes that if they have the data, a solution becomes apparent -- really?

If we are going to be intellectually honest, we need to call it the "War on People Who Use or Abuse Things We Don't Like" and execute the final war... "The War on Stupidity". I'd argue further that the war on drugs is a failure of ideas. Puritanical Judeo-Christian norms provide subjective, emotive, and immutable "facts". The same is true of ignorance: it infrequently "solves" a problem but readily sells a solution; when a mountain lion stalks you on the trail, turning around and saying "Here kitty, kitty" is probably not a good idea. Ignorance allows one to make decisions and policies without concern for the consequences. The mountain lion is simply a kitty... as long as you're not the one getting scratched.

]]>
2014-05-06T14:01:34Z2014-05-06T14:01:34Ztag:www.schneier.com,2014:/blog//2.5320-comment:5856693Comment from name.withheld.for.obvious.reasons on 2014-05-06name.withheld.for.obvious.reasons
@ Nick P
From a control theory perspective, the academic approach resembles the thesis of PID and hybrid control theory: well known, well understood, and completely actualized in the real-time taxonomy. That said, I see that it is separable from the hardware layer -- examples include exceptions thrown by an underlying operating system or an actual hardware failure. In theory, this approach works well until the "anomaly" or out-of-bounds condition. Boundary conditions are always a problem -- whether represented as a Nyquist limit in a signal or parametrically by logic or level.
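For readers who haven't met PID outside textbooks, a minimal discrete PID step looks something like the C++ below. The gains, the toy plant, and the anti-windup clamp are illustrative only -- the clamp is the "boundary condition" problem in miniature.

#include <cstdio>

struct Pid {
    double kp, ki, kd;          // proportional / integral / derivative gains
    double integral   = 0.0;
    double prev_error = 0.0;

    double step(double setpoint, double measurement, double dt) {
        double error = setpoint - measurement;
        integral += error * dt;
        if (integral >  100.0) integral =  100.0;   // crude anti-windup clamp
        if (integral < -100.0) integral = -100.0;
        double derivative = (error - prev_error) / dt;
        prev_error = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

int main() {
    Pid pid{2.0, 0.5, 0.1};                    // illustrative gains
    double plant = 0.0, dt = 0.01;
    for (int i = 0; i <= 500; ++i) {
        double u = pid.step(1.0, plant, dt);   // drive plant toward setpoint 1.0
        plant += (u - plant) * dt;             // toy first-order plant model
        if (i % 100 == 0) std::printf("t=%.2f  y=%.3f\n", i * dt, plant);
    }
    return 0;
}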

From a formal perspective, "Adaptive Control Theory" holds much promise as a way forward for fault tolerance and performance. Having worked in massively parallel systems engineering, the biggest issue surrounding these applications is recovery from error -- to state it simply. Other problems exist at the "modeling" or application layer, which is where this paper plays. I've always argued that we use highly symmetric systems to solve largely non-linear problems. Until a "non-linear" hardware solution appears (and I have done some research in this area), advances in robust and "galaxy"-scale computing will be limited. Meaning "re-booting" will happen a little less frequently...

A few years ago I wanted to go for the British Gas "Dual Fuel" deal, which also required an online sign-up for the service, which was fair enough. What was not acceptable, however, was that they wanted you to enter your bank details for a direct debit online via an insecure process... I took issue with this and sent the details by post on a number of occasions, but they failed to act upon them.

But from my point of view it was even worse: the bank I was with then changed its terms and conditions and had hidden a new condition which basically said put your bank details into any online service and you lose all protection against fraud and theft, irrespective of how it's committed.

Anyway, British Gas used it as an excuse to muck me about and put me on their most expensive service even though I had not signed up for it, and then failed to sort the problem out. This gave rise to other issues with other energy suppliers.

The upshot is I would advise anyone to treat British Gas as a bunch of scammers. And in this respect I am not alone: another person treated in a similar way took them to court for harassment; British Gas lost in a big way, the judgment against them was completely scathing, and in the process it set a new legal precedent.

As far as I'm concerned, the directors of British Gas knew that what they were doing was wrong both legally and morally, and they should be not just ashamed but pilloried for business practices that are used to extort money from people. This issue with passwords shows they still have complete and utter contempt for their customers and are still carrying on their sleazy business practices, presumably with the full knowledge and agreement of their board of directors and executives, as well as some if not all of their major shareholders.