Re: "has to be written in C"

I can't help feeling that a little less concern for the last fractional percent of performance could have saved millions of collective hours of misery, had the same principles been applied to Windows NT.


From my experience, software written in "higher" level languages suffers from the same crappiness and malfunctions all the same. It might not be a buffer overflow, but human idiocy will always find a way around.

I very much doubt it has anything to do with performance; even at the time 4G was standardized, phones had more compute performance than PCs from the 90s, in a miniature, low-power form.

The elephant in the room is the incompatibility of a worldwide mobile standard, set by an intergovernmental entity, with the desire of governments to be able to intercept their (and other) citizens' communications.

In an ideal world, a modern telephony standard would maintain forward secrecy: the voice data would never be transmitted unencrypted, with the keys tied only to the handsets themselves and overridable only by the users of said handsets. That way, the data passing through the switching office would be perfectly useless as far as its contents are concerned. It is not realistic to expect an international public telecommunication standard to insist on further secrecy, such as mechanisms preventing the originator and destination of a call from being located.
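
To make "keys tied only to the handsets" concrete, here is a toy sketch (Python, purely illustrative) of an ephemeral Diffie-Hellman exchange, the standard building block behind forward secrecy. The tiny prime and the framing are my assumptions for illustration; a real standard would use vetted 2048+ bit groups or elliptic curves, plus authenticated key exchange.

```python
import secrets

# Toy ephemeral Diffie-Hellman: both handsets generate FRESH key pairs for
# every call, so a key seized later reveals nothing about past calls.
# The prime below is far too small to be secure - illustration only.
P = (1 << 127) - 1   # a Mersenne prime; real systems use much larger vetted groups
G = 3

def ephemeral_keypair():
    priv = secrets.randbelow(P - 2) + 2       # fresh secret per call
    return priv, pow(G, priv, P)              # (private, public) pair

# One "call": only the public halves ever cross the switching office.
a_priv, a_pub = ephemeral_keypair()           # handset A
b_priv, b_pub = ephemeral_keypair()           # handset B
key_a = pow(b_pub, a_priv, P)                 # derived on handset A
key_b = pow(a_pub, b_priv, P)                 # derived on handset B
assert key_a == key_b                         # shared key, never transmitted
```

The point is that nothing the network carries (the two public values) suffices to recover the call key, and the private halves are discarded after the call.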

But even "just" secrecy of the contents, not so-called "meta"data is simply against laws set up in most countries nowdays which require an ability to do covert listening (after court order or with less oversight, depending on the country).

So, no, there will be no forward secrecy in a public telephony standard.

Re: Secure boot?

Looks like UEFI secure boot is the new bogeyman for some people.

The purpose of secure boot is to establish a chain of trust from power-on, which helps prevent modification of the boot files >in deployment<. However, if you own or have access to the trusted certificate, you can make your own bootloader which does whatever you want. System OEMs can put their certificates in the UEFI firmware and validate whatever they want.

Also, secure boot does not prevent an OS from launching anything after boot that is trusted (or not trusted, but allowed by the system security policy). Once the OS is booted, it is entirely up to that OS's configuration / security policy what to launch or not. If you, as root/admin or the OEM, install malware which does MITM, UEFI secure boot will not stop you (it is not even designed to do that).

Now, if you have only trusted certificates installed - in the UEFI firmware, validating OS files, and in the OS certificate store, validating executables run by the OS - then you have a system with one more hurdle for a potential adversary to crack.
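
The chain-of-trust idea fits in a few lines. This is a hash-based toy (real UEFI secure boot uses X.509 certificates and signature verification, not bare hashes), and the stage names are made up for illustration:

```python
import hashlib

# Toy chain of trust: each boot stage carries the hash of the next stage,
# and the root hash is "burned" into (simulated) firmware. Modifying any
# later stage breaks verification at the stage before it.
def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

os_loader = b"OS loader code"
bootloader = b"bootloader code " + h(os_loader).encode()  # embeds next-stage hash
firmware_trusted_root = h(bootloader)                     # fused at manufacture

def boot(bootloader_img: bytes, os_loader_img: bytes) -> str:
    if h(bootloader_img) != firmware_trusted_root:
        return "halt: bootloader untrusted"
    if h(os_loader_img).encode() not in bootloader_img:
        return "halt: OS loader untrusted"
    return "booted"

assert boot(bootloader, os_loader) == "booted"
assert boot(bootloader, b"tampered") == "halt: OS loader untrusted"
```

Note that this says nothing about what the booted OS launches afterwards, which is exactly the limitation described above.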

Great article, questionable title though...

While I applaud the author for a very nice explanation for the layman, I think the term "(de)coding" is much abused today and used where it does not really belong.

This is a very similar case to, for example, scientists using the term neural "coding": saying that something is (de)coded implies that it was "coded" in the first place. Of course, decoding a compressed audio or video signal yields the (almost, sometimes) original audio or video signal, but that is precisely because the signal was coded in the first place - we know, because we did the coding.

Gene information, on the other hand... not really. Genes (or, better, the clumps of molecules we call "genes") are an inherent part of living beings; these molecules do not "code" anything, any more than, say, a crankshaft "codes" anything in an internal combustion engine. These things are parts of the process, not just "code".

While sometimes it can be very useful to compare or abstract living processes using concepts from the information theory or computing, this can be dangerously misleading if taken too far. Biological processes are not computation. While these processes can, to some extent, be compared to or modelled with concepts from the information theory, they are much more than that.

Please do not get me wrong: I applaud science for working on understanding the processes responsible for keeping matter alive, and I am quite sure that a better understanding of the molecular machinery will lead to better medicine and quality of life for humans and animals. But this "decoding" - and, let's not forget, the "-omics" fashion (for some reason it became very fashionable to stick "-omics" onto things recently, probably having something to do with better grants) - can give a false sense that we understand more than we actually do.

It reminds me awfully of the claims that we would "crack" the problem of intelligence in the late 50s. Sixty years later, we are still discovering new dimensions of the problem. I fear the same will apply to the molecular processes underlying the "bootstrapping" (damn it, I did it too) of a living organism.

Re: Testing for a simulation

Here is one idea:

http://arxiv.org/abs/1210.1847

And a general audience version: http://www.phys.washington.edu/users/savage/Simulation/Universe/

TLDR: If we are living in a "beta" simulation, there are possible ways to find out, and the paper proposes one: measure ultra-high-energy cosmic rays and check the direction of travel of the highest-energy particles (near the so-called GZK cut-off). The idea is that a hypothetical simulation might reveal its symmetry if the highest-energy particles follow a certain direction.

I am not a physicist, so I have no clue whether this could work, or whether any detected phenomena could be explained by something else (most probably, IMO).

"Frog" brain... or "any" brain...

If it were able to catch a fly for its dinner, Dr. Modha would most likely be earning himself a Nobel prize.

Unfortunately, Dr. Modha is known for sensationalistic announcements (several years ago it was a "cat" brain, which sadly did not do much either) and little real material.

Putting a bunch of simplified models of neurons together is nothing new. It has been done dozens of times before:

- In 2008, Edelman and Izhikevich made a large-scale model of the human brain with 100 billion (yes, billion) simulated neurons (http://www.pnas.org/content/105/9/3593.full)

- Since then, there have been numerous implementations of large-scale models, ranging from a million to hundreds of millions of artificial neurons

- Computational neuroscience is my hobby, and I managed to put together a simulation with 16.7 million artificial neurons and ~4 billion synapses on a beefed-up home PC (http://www.digicortex.net/). OK, it was not really a home PC, but it will be in a few years

- And, of course, there is the Blue Brain Project, which evolved into the Human Brain Project. Blue Brain had a model of a single rat cortical column, with ~50,000 neurons, but modelled to a much higher degree of accuracy (each neuron was a complex structure with a few thousand independent compartments and hundreds of ion channels in each compartment).

--

All of these simulations have one thing in common: while they model biological neurons with varying degrees of complexity (from a simple "point" process to complex geometries with thousands of branches), and they all show "some" degree of network behavior similar to living brains - from simple "brain rhythms" which emerge and are anti-correlated when measured in different brain regions, to more complex phenomena such as the acquisition of receptive fields (e.g. neurons fed with a visual signal become progressively "tuned" to respond to oriented lines, etc.) - NONE OF THEM is yet able to model large-scale intelligent behavior.

To put it bluntly, Modha's "cat" or "frog" are just lumps of sets of differential equations. These lumps are capable of producing interesting emergent behavior, such as synchronization, large-scale rhythms and some learning through neural plasticity, resulting in simple neuro-plastic phenomena.

But they are NOWHERE near anything resembling "intelligence" - not even of a flea. Not even of a flatworm.

I do sincerely hope we will learn how to make intelligent machines. But we have much more to learn. At the moment, we simply do not know what level of modelling detail is needed to replicate the intelligent behavior of even a simple organism.

I do applaud Modha's work, as well as the work of every computational neuroscientist, AI computer scientist, AI software engineer, and all the developers playing with AI as a hobby. We need all of them to advance our knowledge of intelligent life.

But, for some reason, I do not think PR like this is very helpful. AI, as a field, has suffered several setbacks in its history thanks to too much hype. There is even a term, "AI winter", which came about precisely as a result of one of those hype cycles, very early in the history of AI.

I am also afraid that the Human Brain Project, for all it is worth, might lead us to the same (temporary) dead end. I do hope HBP will achieve its goals, but the announcements Dr. Markram has made in recent years, especially (I paraphrase) "we can create a human brain in 10 years", will come back to haunt us in 10 years if HBP does not reach its goals. The EU agreed to invest one billion euros in this - I hope we picked the right time, but I am slightly pessimistic. Otherwise we will be in for another AI winter :(

Err, actually it works the other way around: people making extraordinary claims need to come up with extraordinary proof.

An inventor claims he invented a brilliant new method of propulsion, which seems to violate some laws of physics. No biggie - if it really works, I am quite sure the inventor will have no problem selling / licensing / giving away / whatever implementations of his invention.

If people had to recreate every single silly apparatus just to state that it does not work, civilization would be busy recreating garbage.

Mind you, I am not saying this particular thing is garbage - maybe it is a paradigm shift in space travel. But the burden of proof is on the inventor and on the people claiming the invention works.

The NASA experiment did not prove this thing works. The fact that they got something out of a setup deliberately designed NOT to work casts doubt on the validity of the experiment. It also does not help that they did not perform the experiment in vacuum.

Nevertheless, if this invention does indeed work, it will have no issue whatsoever in being confirmed experimentally.

Re: Paradigms

Relativity did not have any problem getting accepted. Quantum mechanics, too.

Because these things were proven conclusively and repeatedly. Sure, there were probably people who did not "believe" until their deaths, but most of the academic world quickly caught up.

If this thing "works", then it will be absolutely no problem to replicate the setup and confirm that it really works. Perhaps somebody will also eliminate the possible causes of concern, such as the fact that the NASA chaps did not test in vacuum. If the proposed invention is really meaningful, it will have no problem with replication and confirmation in a rigorous setup.

Re: What did you expect?

Nobody is stopping you from modifying software you purchased, but nobody is forced to provide you with everything needed to do it in the most convenient way. With binary code you'll have to do it in assembler, but nobody stops you in principle.

Did your vacuum cleaner company give you the production tooling and source files used to build the vacuum? No? Did your car vendor hand over the source code for the ECU? Did they give you the VHDL code for the ICs? Assembly instructions? No? Bas*ards!

As for banning, I'd start with banning stupidity. But, for some reason, that would not work.

Re: This is a Windows API problem

It would not work. Too many applications are written so crappily that their code only works properly with a decreased time quantum. Their time accounting would be fcked, and the results would range from audio / video dropouts to total failure (in the case of automation / embedded control software).

For example, in all cases where code expects a timer to be accurate to the level of, say, 1ms or 5ms. Too much multimedia- or automation-related code would break.

It is sad, but true. Microsoft should never have allowed this Win 3.x / 9x crap into the NT Win32 API, but I suppose they were under pressure to make crappily written 3rd-party multimedia stuff work on the NT Windows flavors; otherwise they might have had problems migrating customers to NT.

Of course, nowadays (since Vista) there are much better APIs dedicated to multimedia / "pro" audio, but here the problem is the legacy.

At least Microsoft could have enforced API deprecation for all software linked against NT 6.x, so that this terrible behavior could be avoided for new software. But that, too, is probably too much to ask due to "commercial realities". Consumer PCs will suck at battery life; nothing new here :(

This is a Windows API problem

By default, the quantum in NT kernels is either 10ms or 15ms, depending on the configuration (nowadays it tends mostly to be 15ms). However, >any< user-mode process can simply request to get this down to 1ms using an ancient "multimedia" API.

Needless to say, in the old days of slow PCs this was used by almost all multimedia applications, since it is much easier to force the system to do time accounting on a 1ms scale than to do proper programming.

For example, an idiot thinks he needs 1ms precision for his timers - voila, just force the >entire OS< to wake the damn CPU 1000 times a second and do all the interrupt processing just to service his ridiculous timer, because the developer in question has no grasp of efficient programming. In most cases it is perfectly possible to implement whatever algorithm with the 10/15ms quantum, but it requires decent experience in multithreaded programming. This, of course, is lacking in many cases.
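
Back-of-envelope arithmetic on what that "multimedia" request costs the whole system (the 15.6ms default is the typical modern value; a sketch, not a measurement):

```python
# Default NT timer tick vs. the 1 ms tick any process could request
# via the old "multimedia" API.
default_tick_ms = 15.6    # typical modern default
forced_tick_ms = 1.0      # what the ancient multimedia API grants

wakeups_default = 1000 / default_tick_ms      # ~64 timer interrupts/s
wakeups_forced = 1000 / forced_tick_ms        # 1000 timer interrupts/s

# Extra system-wide interrupts per hour caused by one selfish process:
per_hour_extra = (wakeups_forced - wakeups_default) * 3600
print(round(wakeups_default), int(wakeups_forced))   # ~64 vs 1000
print(int(per_hour_extra))                           # ~3.37 million extra/hour
```

Every one of those extra wakeups prevents the CPU from staying in a deep idle state, which is exactly where the battery-life damage comes from.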

Only a very small subset of applications/algorithms need 1ms clock-tick precision. For those, the system >should< ask for admin rights, as forcing the entire OS to do 10-15x more work has terrible consequences for laptop/mobile battery life.

Microsoft's problem is typical: they cannot change the old policy as it would break who-knows-how-much "legacy" software.

Yet another...

Every time a company in the Valley becomes big enough (sometimes not even that), they have to have a go at making their own programming language.

Apple has already done this; it looks like this is their second try.

The world is full of "C replacements"; they come and go... but for some reason C is still very much alive and kicking, and something tells me it is going to be alive long after the latest iteration of the Valley's "C replacement" is dead and forgotten.

I am sorry, but this is simply not true (that open source software >cannot< have backdoors because someone, somewhere might spot it).

Very good backdoors are made to look like plausible bugs (and all general-purpose software is full of those). Something like missed parameter validation, or a specific sequence of events which, on the most popular architectures/compilers, triggers a desired behavior allowing an adversary to read the contents of a buffer, etc.

It takes an awful lot of time to spot issues in complex code - it took Debian more than a year and a half to figure out that their pseudorandom generator was fatally flawed due to a stupid attempt at optimization. And >that< was not even hidden; it was there in plain sight. Not to mention that crypto code >should not< be "fixed" by general-purpose developers (actually, this is what caused the Debian PRNG problem in the first place), so your pool of experts who could review the code drastically shrinks. You have to hope that some of these experts will invest their time to review some 3rd-party component. This costs a hell of a lot of time, and unless somebody has a personal interest, I doubt very much that you would assemble a team of worldwide crypto experts to review your github-hosted project without paying for it.
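
The class of bug involved is easy to sketch. Below, hypothetically, key generation ends up seeded only by a 15-bit process ID, collapsing the keyspace to 32768 possibilities - loosely inspired by the Debian OpenSSL fiasco, but an illustration, not the actual bug:

```python
import random

# Sketch of a "plausible bug" backdoor: the RNG looks fine at a glance,
# but its effective seed entropy is just the process ID (max 32768 values).
def broken_keygen(pid: int) -> int:
    rng = random.Random(pid)          # seeded ONLY by the 15-bit PID
    return rng.getrandbits(128)       # looks like a random 128-bit key

victim_key = broken_keygen(pid=12345)

# An attacker enumerates the entire "keyspace" in well under a second:
recovered_pid = next(p for p in range(32768) if broken_keygen(p) == victim_key)
assert recovered_pid == 12345
```

A reviewer skimming this sees a normal-looking RNG call; only someone asking "where does the entropy actually come from?" catches it, which is why such flaws survive in the open for years.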

Then, complex code is extremely hard to review completely. This is why in aerospace and vehicle telematics, critical software is written from scratch, following very strict guidelines on how the software should be written and tested, so that it can be proven to work (and, guess what, even then bugs occur). General-purpose software with millions of lines of code? Good luck. The best you can do is schedule expert code reviews and, in addition, go through the code with some fuzzing kit and spot pointer overruns etc., but even after all that, more sinister vulnerabilities might still slip through.

So, sorry, no - being open source does not guarantee a lack of backdoors. In this day and age, a smart adversary is not going to implement a backdoor in plain sight. Instead, it will be an obscure vulnerability that can easily be attributed to simple programmer error.

Faith that open source code is backdoor-free because it is open is pretty much like the idea that an infinite number of monkeys with an infinite number of typewriters will write all of Shakespeare's works. Please do not get me wrong - I am not comparing developers to monkeys, but to the principle that just because there is some chance of something happening, it will happen. No, that is not guaranteed.

Like it or not, there is no objective reason why you would trust Microsoft less than some bunch of anonymous developers.

Microsoft has a vested interest in selling their product worldwide, and a backdoor discovered in their crypto would severely impair their ability to sell Windows to any non-USA government entity, and probably to big industry players too.

I am not saying that BitLocker has no backdoors - but there is no objective reason to trust BitLocker less than TrueCrypt.

The sad thing is, when it comes to crypto there is simply no way to have 100% trust >unless< you designed your own hardware, your own assembler for building your own OS and its system libraries and, finally, the crypto itself.

Since nobody does that, there is always some degree of implicit trust and, frankly, I see no special reason why one would trust some anonymous developers more than a corporation. The same government pressure that can be applied to a corporation can be applied to an individual, and we do not even know whether the TrueCrypt developers are in the USA (or within the USA government's reach) or not. Actually, it is easier for a government to apply pressure to an individual, who has far fewer resources to fight back compared to a cash-rich corporation that can afford a million-dollar-a-day legal team if need be.

The fact that TrueCrypt is open source means nothing as far as trust is concerned. Debian had a gaping hole in its pseudorandom number generator for everybody to see for 1.5 years. Let's not even start about OpenSSL and its vulnerabilities.

There is simply no way to guarantee that somebody else's code is free of backdoors. You can only have >some< level of trust, between 0% and less than 100%.

Re: Whoa there

Actually, BitLocker has >not< required a TPM since Windows 7. Since Windows 7 it allows a passphrase in pretty much the same way as TrueCrypt. I use it, since TrueCrypt does not (and, probably, never will after the announcement) support UEFI partitions.

Also, BitLocker does not, by default, leave a "backdoor" for domain admins. If this is configured, it is done via corporate group policy; it is not ON by default.

BitLocker does not allow plausible deniability, on the other hand, and for that people will need to find some other option now that TrueCrypt development seems to be ending.

The problem of trust is there for both TrueCrypt and its closed-source alternatives such as BitLocker. There are ways to insert vulnerabilities that look like ordinary bugs and are very hard to catch even when somebody is really looking at the source code (see how long it took people to figure out that the Debian pseudorandom number generator was defunct). At the end of the day, unless one writes the OS and compilers and "bootstraps" them from one's own assembler, some degree of implicit trust in 3rd parties is always involved.

What we need is a truly open-source disk encryption tool which is:

a) Not based in the USA, so that it cannot be subverted by the "Patriot" Act

b) Which undergoes regular peer-reviews by multiple crypto experts

c) With strictly controlled development policies requiring oversight and expert approval of commits

The problem is: b and c cost money, so there needs to be a workable business model. And that needs to be creative due to a), which would preclude standard revenue stream from software licensing.

And even then, you still need to trust these guys and those crypto experts, as well as the compilers used to build the damn thing...

Actually, the fact that memory is temporal has been known for quite a long time.

At least since the early 90s, after the discovery of spike-timing-dependent plasticity (STDP) - http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity - it has been obvious that neurons encode information based on the temporal correlations of their input activity. By today our knowledge has greatly expanded: we know that synaptic plasticity operates on several time scales, and a good deal about its biological foundations. There are also dozens of models of varying complexity, with even some simple ones able to reproduce many plasticity experiments on pairs of neurons quite well.

Since the early 90s there has been a lot of research into working memory and its neural correlates. While we do not have the complete picture (far from it, actually), we do know by now very well that synaptic strength is heavily dependent on temporal correlations, and that biological neural networks behave like auto-associative memories. There are several models able to replicate simple things, including reward-based learning, but all in all it can be said that we are really just at the beginning of understanding how the memory of living beings works.
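
For the curious, the classic pair-based STDP rule fits in a few lines. The amplitudes and time constant below are illustrative round values, not fits to any particular experiment:

```python
import math

# Minimal pair-based STDP: a synapse strengthens when the presynaptic spike
# precedes the postsynaptic one (causal pairing), and weakens otherwise,
# with the effect decaying exponentially over ~20 ms.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # decay time constant, ms

def stdp_dw(dt_ms: float) -> float:
    """Weight change for dt = t_post - t_pre (milliseconds)."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU)    # pre before post: LTP
    return -A_MINUS * math.exp(dt_ms / TAU)       # post before pre: LTD

assert stdp_dw(5.0) > 0     # causal pairing strengthens the synapse
assert stdp_dw(-5.0) < 0    # anti-causal pairing weakens it
```

This is exactly the "temporal correlation" dependence described above: the sign and size of the weight change are determined purely by relative spike timing.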

As for Ray Kurzweil - sorry, but anybody who can write something called "How to Create a Mind" is just being preposterous. Ray Kurzweil has no clue how to create a mind. Not because he is not smart (he is), but because NOBODY on this planet has a clue how to create a mind yet. Ray does, however, obviously know how to separate people from their money by selling books that do not deliver.

If somebody offers to tell you "how to create a mind" (other than, well, procreation, which pretty much everybody knows how to do), just ask them why it is that they did not create it, but instead want to tell you about it. That will save you some money and quite a lot of time. While I do not dispute the motivational value of such popular books, scientifically they bring nothing new, and this particular book is just a rehash of decades-old ideas.

Re: Let a thousand flowers bloom

Lizards and frogs do not have a neocortex, but are doing pretty well at surviving. Even octopuses are pretty darn smart, and they lack brain structures that even lizards have.

Today we are very far even from lizard vision (or octopus vision, if you will), and for that you do not need an enormous neocortex. I am pretty sure that something on the level of lizard intelligence would be pretty cool and excite the general populace enough.

These things are hard. I applaud Jeff's efforts, but for some reason I think this guy is getting lots of PR due to his previous (Palm) achievements while, strictly speaking, AI-wise, I do not see a big contribution yet.

This is not to say that he shouldn't be doing what he is doing - on the contrary, the more research into AI and into understanding how the brain works, the better. But too much hype and PR can damage the field, as has happened before, when the results disappoint compared to expectations.

Re: model a neurone in one supercomputer

The only reason a computer always responds the same way to the same inputs is that the algorithm's designer made it so.

There is nothing stopping you from designing algorithms which do not always return the same responses to the same inputs. Most of today's algorithms are deterministic simply because that is how the requirements were spelled out.

Mind you, even if your 'AI' algorithm is 100% deterministic, if you feed it a natural signal (visual, auditory, etc.) the responses will stop being "deterministic" due to the inherent randomness of natural signals. You can even extend this with additional noise in the algorithm design (say, random synaptic weights between simulated neurons, adding "noise" similar to miniature postsynaptic potentials, etc.).

Even a simple network of artificial neurons modeled with two-variable dynamics (relatively simple models, such as adaptive exponential integrate-and-fire) will exhibit chaotic behavior when fed some natural signal.
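
A minimal sketch of that point: a fully deterministic leaky integrate-and-fire neuron (constants picked arbitrarily for illustration; this is a simpler model than AdEx) produces identical spike trains on identical input, but input jittered by a tiny amount of "natural" noise shifts the spike times:

```python
import random

def lif_spikes(inputs, tau=20.0, v_th=1.0, dt=1.0):
    """Deterministic leaky integrate-and-fire: same input -> same spikes."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # membrane update, no randomness inside
        if v >= v_th:
            spikes.append(t)          # record spike time
            v = 0.0                   # reset membrane potential
    return spikes

rng = random.Random(1)
clean = [0.06] * 2000                                      # constant drive
noisy = [0.06 + rng.gauss(0, 0.005) for _ in range(2000)]  # jittered drive

assert lif_spikes(clean) == lif_spikes(clean)   # the model itself is deterministic
assert lif_spikes(clean) != lif_spikes(noisy)   # tiny input noise shifts spikes
```

All the "nondeterminism" here comes from the input, exactly as with natural sensory signals; the equations never change.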

As for the Penrose & Hameroff Orch OR theory, several predictions it made have already been disproved, which makes it a bad theory. I am not saying that quantum mechanics is not necessary to explain some aspects of consciousness (maybe, maybe not), but that will need some new theory, one which makes testable predictions that are confirmed. Penrose & Hameroff's Orch OR is not that theory.

Re: This is the case for open source operating systems.

Jesus effin' Christ - Debian generated useless pseudorandom numbers for almost a year and a half.

NOBODY spotted the gaping bug for >months<.

No, it is >not< possible to guarantee that software is 100% backdoor-free - open or closed, it does not matter.

Linux, like any modern OS, is full of vulnerabilities (Windows is no better, and neither is Mac OS X). Some of these vulnerabilities >might< be there on purpose.

The only thing you can do is trust nobody and follow best security practice - limited user rights, firewalls (I would not even trust just one vendor), regular patching, minimal open ports on the network, etc. etc.

Re: AC

They probably mean Xeon E5, as Xeon E7 is by no means the "latest": it is waiting to be upgraded to Ivy Bridge EX in Q1/2014 and is currently based on the now-ancient Westmere microarchitecture.

Xeon E5 is based on the Sandy Bridge uarch, and the upgrade to Ivy Bridge is imminent (in a couple of weeks) - however, E5 is limited to 4 sockets unless you use a 3rd-party connectivity solution such as NUMAlink.

Re: Ultrabook debacle

Actually, the first "ultra-thin" notebook was the Sony X505, introduced by Sony in 2004.

Google it - that was a good couple of years before the MacBook Air.

Of course, Sony being Sony, they marketed the device at the CEO/CTO types and priced it accordingly (it was well above 3K EUR in Germany). Hence, it was not very successful.

But in terms of actual invention, this was "it". Apple just took a saner approach and priced the Air in the "affordable luxury" range - certainly not cheap, but well within reach of the middle class.

The same flop (typical of Sony) was repeated with the Z series - Sony made a dream machine which was more powerful than most MacBook Pros (before the 15" Retina) but lighter and actually thinner than the first-gen 13" Air. And with a Full HD 13" screen since 2010 - something that took Apple quite a bit of time to catch up with. All in all, a perfect notebook - I know, since I owned all the Z models before I switched to the MacBook Retina 15".

Again, thanks to their ridiculous business model and practice of stuffing in crapware (at some point Sony even had the audacity to ask $50 for a "clean" OS installation), the world will remember the Apple MacBook Air and Retina as the exemplars of ultra-thin and ultra-powerful machines, and not the Sony X and Z series.

However, nothing changes the fact that it was Sony delivering the innovation years before Apple.

Re: Security

Normally, additional features that command a premium are fused out in 'common' silicon and enabled only for special SKUs.

To temporarily enable a fused-out feature you would need several things, none of which are present in the computers employed in ordinary businesses. And even if you had all the tooling and clearances (which is next to impossible), the process of temporarily enabling it is not going to go unnoticed. Hardly something that can be used for exploitation - there are much easier avenues, including kernel-level exploits, rare as they are becoming.

Re: Talk about apples and moonrocks

Sorry, but you obviously do not know what you are talking about.

Intel's modern CPUs use different micro-ops internally. The x86 instruction set is only kept as a backwards-compatibility measure and gets decoded into a series of micro-ops (besides, modern instructions such as AVX have nothing to do with the ancient x86). Today's Sandy Bridge / Ivy Bridge architecture has almost nothing in common with, say, the Pentium III or even Core. Intel tweaks their architectures in the "tick" cycle, but the "tock" architectures are brand new and very different from each other.

As for the x86 instruction set, and the age-old mantra (coming, I believe, from Apple fanboys) that x86 is inherently more power-hungry: this has been nicely disproved lately by the refined Intel Atoms (a 2007 architecture, mind you), which are pretty much on par with modern ARM architectures in terms of power consumption.

I am not a fan of x86 at all, I use what gets my job done in the best possible way.

But when I read things like this, it really strikes me how people manage to comment on something they obviously do not understand.

Re: Where is the advantage?

Err, why would sandboxed userspace code cause "the computer to crash" any more than anything else that runs in userland? I see absolutely no point in that statement. If a computer crashes today, it is almost universally due to bad kernel-level 3rd-party code such as drivers. NaCl has nothing to do with this any more than Javascript - both would execute in userspace and, in the end, use OS-supplied userspace APIs.

And how is Javascript any higher-level than, say, C++ ?

Sorry, but the fact that a language is more accessible to script kiddies does not make it any more "high level", nor does it make buggy code any less likely. Crappy code is crappy code; it is caused by the developer, not by the language.

The advantage of NaCl would be performance - if someone needs it for, say, multimedia, gaming, etc. It does not mean that everybody needs to use it, but it would be good to have as an option. The fact that computers are getting faster does not make it any less relevant, as modern multimedia/gaming always pushes the limits of the hardware.

Re: So, not exactly Orac then

My bet is on the truly neuromorphic hardware.

Anything else introduces unnecessary overhead and bottlenecks. While initial simulations for model validation are fine on general-purpose computing architectures, scaling that up efficiently to match the size of a small primate brain is going to require the elimination of overheads and bottlenecks in order not to consume megawatts.

The problem with neuromorphic hardware is of the "chicken and egg" type: to expect large investments, there needs to be a practical application which clearly outperforms the traditional von Neumann sort - and to find this application, large research grants are needed. I am repeating the obvious, but our current knowledge of neurons is probably still far from the level needed to make something really useful.

Recognizing basic patterns with lots of tweaking is cool, but for a practical application it is not really going to cut it, as the same purpose could be achieved with much cheaper "conventional" hardware.

If cortical modelling is to succeed, I'd guess it needs to achieve goals which would make it useful for military/defense purposes (it can be something even basic mammals are good at - recognition; today's computers still suck big time when fed with uncertain / natural data). Then the whole discipline will certainly get a huge kick on to the next level.

Even today, there is a large source of funding - say, the Human Brain Project (HBP). But I am afraid that the grandiose goals of the HBP might not be met - coupled with the hyping of the general public's and politicians' expectations, the consequences of failure would be disastrous and could lead to another "winter" similar to the AI winters we have had.

This is why I am very worried about people making claims that we can replicate the human brain (or even the brain of a small primate) in the near future - while this is perhaps possible, failing to meet the promises would bring unhealthy pessimism and slow down the entire discipline due to cuts in funding. I, for one, would much rather prefer smaller goals - and if we exceed them, so much the better.

Re: Better platform needed

There is still a tiny issue of connectivity - despite the fact that synaptic connectivity patterns are of the "small world" type (the highest percentage of connections is local), there is still a staggering number of long-range connections that go across the brain. The average human brain contains on the order of hundreds of thousands of kilometers (no, that is not a mistake) of synaptic "wiring".

Currently our technologies for wiring things over longer distances are not yet comparable to Mother Nature's. Clever coding schemes can mitigate this somewhat (but then you need to plan space for mux/demux, and those things consume energy) - still, the problem is far from tractable with today's tech.

Re: Strong AI will, of course, use Linux

Operating system choice has absolutely nothing to do with brain modelling.

Most models are initially done in Matlab, which exists on Linux, OS X and Windows.

Then, applying this in large-scale practice is simply a question of tooling, and tooling exists on all relevant operating systems today. You have CUDA and OpenMP on both Linux and Windows. Heck, you even have the Intel compiler on both if you love x86. It is more a practical choice, driven by the other requirements.

On the other hand, it is true that there is a large set of support tools (such as FreeSurfer) that exist on Linux and not on Windows. But then, anybody can run anything in a virtual machine nowadays.

Re: So, not exactly Orac then

Hmm, machine language would be a huge waste of time, as you could accomplish the same with the assembler ;-) Assuming you meant assembly code - even that would be overkill for the whole project, and it might actually end up slower than code from an optimizing C/C++ compiler.

What could make sense is assembly-level optimization of critical code paths, say synaptic processing. But even then, you are mostly memory-bandwidth bound, and clever coding tricks would bring at most a few tenths of a percent of improvement in the best case.

However, that is still a drop in the bucket compared to the biggest contributor here - for any decent synaptic receptor modelling you need at least 2 floating-point variables per synapse and several floating-point variables per neuron compartment.

Now, if your simulation time step is 1 ms (and that is rather coarse, as 0.1 ms is not unheard of) - you need to do 1000 * number_of_synapses * N (N=2) floating-point reads per second, the same number of writes, and several multiplications and additions for every single synapse. Even for a bee-sized brain, that is many terabytes per second of I/O. And >that< is the biggest problem of large-scale biologically-plausible neural networks.
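The arithmetic above can be sketched as a back-of-envelope calculation. The ~1e9 synapse count for a bee-scale brain is an illustrative assumption, not a measured figure:

```python
# Synaptic memory traffic per second, following the assumptions in the text:
# 1 ms time step (1000 steps/s), N = 2 single-precision floats per synapse,
# each read and written once per step.
STEPS_PER_SECOND = 1000              # 1 ms simulation step
FLOATS_PER_SYNAPSE = 2               # N = 2 state variables per synapse
BYTES_PER_FLOAT = 4                  # single precision
SYNAPSES_BEE_SCALE = 1_000_000_000   # assumed ~1e9 synapses (rough estimate)

traffic = (STEPS_PER_SECOND * SYNAPSES_BEE_SCALE
           * FLOATS_PER_SYNAPSE * BYTES_PER_FLOAT * 2)  # *2: read + write

print(traffic / 1e12, "TB/s")  # → 16.0 TB/s
```

Sixteen terabytes per second of sustained memory traffic, before counting neuron compartments or the compute itself.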

Java or not...

Actually, Java is the smallest problem here (although it is a rather lousy choice if high performance and high scalability are design goals, I must agree).

The biggest problem is the brain's "architecture", which is >massively< parallel. For example, a typical cortical neuron has on the order of 10,000 synaptic inputs, and a typical human has on the order of 100 billion neurons with 10,000x as many synapses. Although Mother Nature did lots of optimization in the wiring, so the network is actually of a "small world" type (where most connections between neurons are local, with a small number of long-range connections, so that wiring - and therefore energy - is conserved), it is still very unsuitable for the von Neumann architecture and its bus bottleneck.
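As a toy illustration of "small world" connectivity (not the brain's actual wiring rule), a Watts-Strogatz-style ring lattice with occasional random rewiring produces exactly this pattern: mostly-local links plus a small fraction of long-range shortcuts. All parameter values here are arbitrary:

```python
import random

def small_world(n, k, p, seed=0):
    """Ring lattice: each node links to its k nearest neighbours on one side,
    then each edge is rewired to a random target with probability p."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:           # occasional long-range rewiring
                b = rng.randrange(n)
                while b == a or (a, b) in edges:
                    b = rng.randrange(n)
            edges.add((a, b))
    return edges

edges = small_world(1000, 5, 0.05)
# Count edges spanning more than k=5 positions on the ring.
long_range = sum(1 for a, b in edges if min(abs(a - b), 1000 - abs(a - b)) > 5)
print(f"{long_range / len(edges):.0%} long-range")  # a small fraction, roughly p
```

The vast majority of connections stay local, yet the few shortcuts keep the whole network a small number of hops across - which is what makes the wiring cheap for nature and awkward for a shared bus.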

For example, you can try this:

http://www.dimkovic.com/node/7

This is the small cortical simulator I wrote, which is highly optimized for the Intel architecture (making heavy use of SSE and AVX). It uses a multi-compartment model of neurons which is not biophysical but phenomenological (designed to replicate the desired neuron behavior - that is, spiking - very accurately, without having to calculate all the intrinsic currents and other biological variables we do not know).
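For readers unfamiliar with phenomenological spiking models: the Izhikevich (2003) model is a well-known single-compartment example of the genre (not the multi-compartment model used in the simulator linked above). It reproduces many cortical firing patterns with just two state variables per neuron and no explicit ionic currents. A minimal forward-Euler sketch, with the standard "regular spiking" parameters:

```python
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, t_ms=200, dt=0.25):
    """Count spikes of one Izhikevich neuron under constant input current I.
    v is the membrane potential (mV), u a recovery variable."""
    v, u = -65.0, b * -65.0
    spikes = 0
    for _ in range(int(t_ms / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike detected: reset v and bump u
            v, u = c, u + d
            spikes += 1
    return spikes

print(izhikevich())  # tonic spiking with I=10; silent with I=0
```

Two multiply-adds per neuron per step - and yet, as the bandwidth numbers below show, the synapses, not the neurons, dominate the cost.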

Still, to simulate 32768 neurons with ~2 million synapses in real time you need ~120 GB/s of memory bandwidth (I can barely do it with two Xeon E5-2687W CPUs and heavily overclocked DDR3-2400 RAM!). You can easily see why the choice of programming language is not the biggest contributor here - even with GPGPU you can scale by at most one order of magnitude compared to SIMD CPU programming, and the memory bandwidth requirements are still nothing short of staggering.

Then, there is the question of the model. We are still far, far away from fully understanding the core mechanisms involved in learning - for example, long-term synaptic plasticity is still not fully understood. Models such as spike-timing-dependent plasticity (STDP), discovered in the late 90's, are not able to account for many observed phenomena. Today (as of 2012) we have somewhat better models (for example, the post-synaptic voltage-dependent plasticity of Clopath et al.), but they still cannot account for some experimentally observed facts. And then, how many phenomena have still not been discovered experimentally?
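To see how simple the classic pair-based STDP rule actually is (and therefore how much it must be leaving out), here is a sketch with the usual exponential windows. The amplitudes and time constants are illustrative, not fitted to any dataset:

```python
import math

# Pair-based STDP: weight change as a function of spike-pair timing.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_dw(dt):
    """Weight change for one pre/post spike pair.
    dt = t_post - t_pre: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)

print(stdp_dw(10.0) > 0, stdp_dw(-10.0) < 0)  # → True True
```

Four constants and one exponential per spike pair - frequency effects, triplet interactions and voltage dependence, all experimentally observed, simply are not in there.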

Then, even if we figure out plasticity soon - what about the glial contribution to neural computation? We have many more glial cells, which were thought to be just supporting "material", but now we know that glia actively contributes to the working of the neural networks and has signalling of its own...

Then, we still do not have much of a clue about how neurons actually wire. Peters' rule (which is very simple and intuitive - and therefore very popular among scientists) is a crude approximation with already-discovered violations in vivo. As we know that neural networks mature and evolve depending on their inputs, figuring out how neurons wire together is of utmost importance if we are really to figure out how this thing works.

In short, today we are still very far from understanding how the brain works in the detail required to replicate it, or to fully understand it down to the level of neurons (or maybe even ions).

However, what we do know already - and very well indeed - is that the brain's architecture is nothing like the von Neumann architecture of computing machines, and emulation of brains on von Neumann systems is going to be >very< inefficient and require staggering amounts of computational power.

If money is no object THIS is the ultimate "extreme desktop CPU"

Ok, it is not exactly "desktop", but it is a workstation - close enough :) And it allows dual-CPU configurations.

I have two of these puppies and the software I write is very happy with the speedup. Not to mention that it is quite easy to crank Samsung's 30nm ECC "green memory" up to 2133 MHz (the official spec of the RAM, and the Xeon E5's allowed maximum, is 1600 MHz), which makes any large-scale biological simulation quite happy due to the insane memory bandwidth (~69 GB/s).
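The ~69 GB/s figure is consistent with the theoretical peak of quad-channel DDR3 at 2133 MT/s (the Xeon E5 / Sandy Bridge-EP has four 64-bit memory channels per socket):

```python
# Theoretical peak bandwidth of quad-channel DDR3 at 2133 MT/s.
transfers_per_second = 2133e6   # effective transfer rate (MT/s)
bytes_per_transfer = 8          # one 64-bit channel
channels = 4                    # quad-channel per socket

peak = transfers_per_second * bytes_per_transfer * channels
print(round(peak / 1e9, 1), "GB/s")  # → 68.3 GB/s per socket
```

Real-world sustained bandwidth lands somewhat below this peak, but the order of magnitude matches.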

Intel decided to cripple the 3960X/3930K by fusing off two cores, as they understand overclockers will push the voltages north of 1.3v... If they had left all 8 cores on, this would generate extreme amounts of heat which would be very hard to evacuate from today's desktop setups, unless they are cooled with some heavy-duty water cooling setup.

Re: Speed Camera Databased

Actually, it is only Switzerland that is >very< anal about speed camera POI databases; they threatened TomTom and other GPS device manufacturers that they would stop sales of their devices if they did not remove that feature.

In Germany, it is also illegal to have such an aid in a car, but they can't be bothered... You'll see TomTom GPS devices with "Speed Camera Database" advertisements in tech stores like Saturn. Illegal to use, legal to sell, that is.

Please educate yourself about the matter you are writing about

AAC (like MP3 or H.264) has NOTHING to do with DRM. It is a worldwide international standard (ISO/IEC 14496 in the case of AAC, ISO/IEC 11172 in the case of MP3) and is related to audio coding only.

Encrypting this with your DRM has nothing whatsoever to do with the AAC (or H.264, or MP3) standard itself. Hell, you can even put Ogg Vorbis in a DRM container if you wish. This is purely the decision of the implementer. Standards alone neither enforce nor prohibit DRM.

Second, H.264, MP3 and AAC are as open as it gets - they ALL are fully documented and available from ISO/ITU directly (or your country's standards body), with open-source reference C/C++ software available as well. When those patents expire, they will be fully public domain - much more "free" (as in freedom) than some GPL-ed stuff.

The fact of the matter is - yes, complex audio and video codecs ARE based on patented technology more often than not. And there is NOTHING bad in that, and NOTHING preventing them from being open for everyone to implement and use, with a reasonable and non-discriminatory cost model.

It is only the freetards of this planet who are trying to spread FUD about well-proven international standards like H.264. Sorry guys, technology has a price - someone worked very hard to invent it. No, those companies WILL NOT give it away "for free" so the Apples and Googles of this world can use it to make money.

And, by the way - these ITU/ISO standardized technologies are in almost all modern digital TV, optical media, mobile and satellite standards. That makes them much more relevant than what some FSF-tards would like people to know.

Ehm... because of... quality?

Because VP6 is considerably better in terms of quality per equal bitrate compared to Ogg Theora?

Meaning that bandwidth required to carry quality streams will be lower?

Or, meaning that people would get higher video quality per same bandwidth utilized?

Not to mention that streaming HD streams with Theora requires a ludicrous bit rate compared to H.264 or even On2 VP6. The argument that "everyone has broadband" does not cut it - first of all because it is still NOT enough for HD Theora streams, and second because someone HAS to pay for all that bandwidth being moved between servers.

Of course anyone with a hint of technical competence would choose the better codec (better in terms of quality per bits spent), as it decreases costs and improves viewing quality.