Neural nets (which is what that appears to be) are so far from conventional computers that programming one is not likely to translate very well to anything else.

As for actual future SoCs for the Pi...the next one is undoubtedly being developed even as we speculate. The only thing about it that I'm reasonably confident of is that it will be a 28nm device instead of a 40nm device. There are some things I *hope* the next SoC will have, but being a neural net isn't one of them.

Principal Software Engineer at Raspberry Pi (Trading) Ltd.
Contrary to popular belief, humorous signatures are allowed. Here's an example...
I've been saying "Mucho" to my Spanish friend a lot more lately. It means a lot to him.

Moore's Law is sort of a side issue. The real crux of the matter is the von Neumann bottleneck, which requires data to be pushed from memory to processor over a bus. One of my favourite things about the Pi is that it drives interest in computer science, not just sensors, bells, and whistles. The Pi 3 is an actual personal computer that enables experimentation beyond the soldering iron.
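To make the bottleneck concrete, here's a toy sketch (my own illustration, nothing Pi-specific): a dot product on a von Neumann machine, counting how many times an operand has to cross the memory bus. The bus traffic scales with the data even though each step of arithmetic is trivial.

```python
# Toy illustration of the von Neumann bottleneck: every operand of a
# dot product must be fetched over the memory bus, so bus traffic
# grows with n even though the arithmetic per element is tiny.
def dot_product_with_bus_count(a, b):
    bus_transfers = 0
    acc = 0
    for i in range(len(a)):
        x = a[i]          # load a[i] over the bus
        bus_transfers += 1
        y = b[i]          # load b[i] over the bus
        bus_transfers += 1
        acc += x * y      # the actual work: one multiply-accumulate
    return acc, bus_transfers

result, transfers = dot_product_with_bus_count([1, 2, 3], [4, 5, 6])
print(result, transfers)  # 32 10 -> 32, after 6 bus transfers
```

Two loads per multiply, so the bus is doing twice as much work as the ALU before we even count instruction fetches.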

"Self-education is, I firmly believe, the only kind of education there is" - Isaac Asimov

I'm at a loss; hasn't every computer used buses to connect CPU to RAM? Even AMD's HyperTransport is still a bus.
Ternary transistors would increase the data carried per bus line by about 58% (log2 3 ≈ 1.585 bits per trit). That will improve things. But I pity the programmers: yes, no, and maybe is going to make software coding a nightmare.
But he's wrong about the brain's power curve; it goes off the chart when you do the nasty. Inefficiency in the most-used processes is NOT a good idea.
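A quick sanity check on the ternary figure (my own sketch): a trit carries log2(3) bits, and a base-3 round trip shows the encoding is straightforward even if programming with it wouldn't be.

```python
import math

# A ternary digit (trit) carries log2(3) bits of information,
# i.e. roughly 58% more than a binary digit.
bits_per_trit = math.log2(3)
print(f"{bits_per_trit:.3f} bits per trit")  # 1.585 bits per trit

def to_trits(n):
    """Encode a non-negative integer as base-3 digits, least significant first."""
    digits = []
    while n:
        n, r = divmod(n, 3)
        digits.append(r)
    return digits or [0]

def from_trits(digits):
    """Decode a least-significant-first base-3 digit list back to an integer."""
    return sum(d * 3 ** i for i, d in enumerate(digits))

print(to_trits(42))  # [0, 2, 1, 1] : 0 + 2*3 + 1*9 + 1*27 = 42
```

So the win per symbol is closer to 58% than 50%; the hard part is everything downstream of the encoding.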

Slackware wrote:I'm at a loss; hasn't every computer used buses to connect CPU to RAM? Even AMD's HyperTransport is still a bus.
Ternary transistors would increase the data carried per bus line by about 58%. That will improve things. But I pity the programmers: yes, no, and maybe is going to make software coding a nightmare.
But he's wrong about the brain's power curve; it goes off the chart when you do the nasty. Inefficiency in the most-used processes is NOT a good idea.

Well that's the issue.
Every computer AFAIK since the '50s has followed the von Neumann architecture, and it seems to us that there is no other way. Computer systems have to follow the form input -> processing/storage -> output. We are so used to it that it's hard to think of anything else.

That's where the bottlenecks occur.

Consider a brain - it's a really bad analogy for a standard computer because it's not really anything like a computer.

One of the features of brains that sets them apart from computers (as I understand it) is that there is no physical distinction between storage, processing, and information. It's all the same thing.

If the VN model is reaching its upper limits of performance then perhaps it's time to look at other models.
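For anyone who wants to poke at the model itself, here's a minimal von Neumann machine sketch (the instruction set is invented purely for illustration): program and data live in the same memory, and the CPU pulls both through one fetch path, which is exactly where the bottleneck comes from.

```python
# Minimal von Neumann machine: code and data share one memory, and the
# CPU fetches both through the same path. The instruction set here is
# invented for illustration only.
def run(memory):
    acc, pc = 0, 0
    while True:
        op = memory[pc]            # instruction fetch: over the bus
        arg = memory[pc + 1]       # operand address: over the bus
        pc += 2
        if op == "LOAD":
            acc = memory[arg]      # data fetch: same bus, same memory
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program occupies cells 0-7; data lives in cells 8-10 of the SAME memory.
mem = ["LOAD", 8, "ADD", 9, "STORE", 10, "HALT", 0, 40, 2, 0]
print(run(mem)[10])  # 42
```

Because the program is just data in memory, it could in principle modify itself, which is the defining trick of the stored-program design and also why everything funnels through that one memory interface.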