
"Can you imagine the virus you could write if you could change the instruction set of the cpu?"

Forgive my ignorance, but why would this be any different than the virus you can write with the general purpose CPUs we have today? You could make the machine unreliable, but that wouldn't make for an effective virus distributing machine.

Wow! The virus could execute arbitrary code! Just like if it could choose which of the existing instructions were executed by another processor. The core part of your virus could run faster, maybe in just one clock cycle!

Easy - say the extra instructions are supposed to perform a matrix convolution. Call extra instruction 1 with some random matrix. If it doesn't calculate the same thing as a slow version run on the regular RISC part, you know extra instruction 1 has failed in some way and needs to be reprogrammed. Your virus-detection software, OS, etc. should never use special instructions and should always run on the regular RISC part.

I highly doubt anyone is planning on making PCs with these. They are designed for being a processor in something like a data logging / control system, surveillance video compression, etc. Your system will probably have no need for virus detection any more specific than other more general regression and test suites it will need during operation.

Actually, it's almost certainly based on standard SRAM FPGA technology. It's quite cheap in terms of power, and not especially expensive in terms of time, to reprogram, and there is no degradation over time from doing it too often.
The only real disadvantage is that it might be entirely possible to create on-die shorts with bad programming data, as it currently is in FPGAs.

How do you detect a virus that has control of the underlying hardware though...

The same way you detect a virus on any machine that has been compromised: with another machine, and/or a thorough understanding of normal operation and running processes. Nothing new here. Evaluate the harm done by a potential compromise and take steps accordingly.

There is no practical difference between a hardware and a software compromise, and the remedy is the same. Indeed, for critical purposes, there's little difference between the two.

A minor change in the instruction set would likely render the OS dysfunctional - and while that would certainly get attention - it would not propagate very well.

There is a calculus to viruses that requires them not to kill their hosts, and to do as little damage as they can manage. Damaging viruses get high priority on fix lists and get shut down more quickly than less harmful ones.

I think a CPU change virus would be a rather self-defeating proposition.

People developing along similar lines must have some means of controlling the new circuitry so that hot spots don't form on the die. Especially if they provide analog capability, it could be too easy to set up a feedback loop that could really trash that part of the die.

Which brings up another thought: Do they have an on-board controller that tracks what parts of the die are usable and what aren't? If they do, they can have seriously high production yields.

In fact, I wouldn't be surprised if such a self-diagnostic utility made its way into modular dies with specialized circuitry. So a processor could run on two AMUs instead of three, and so forth.

If this doesn't represent the death of the megahertz as a processor-benchmark standard, I don't know what will...

Effective application speed was never based on a cycle count alone, because different processors can have better instruction sets for a given application. The main breakthrough here is that this chip leaves "user-definable" space in its instruction set so it can re-optimize the instruction set on the fly. Whatever you're running, its most commonly used functions can almost slide from being code to being "on the chip" and that's sure to speed up the experienced speed.

being code to being "on the chip" and that's sure to speed up the experienced speed.

first, where exactly is code run, if it isn't 'on a chip', and second, what? speed up the experienced speed?

you mean, as opposed to something like 'pretended speed', which is what i imagine you were using to measure your rapid desire to let your undoubtedly 'speedy' fingers get through your slashdot post without thinking...

This is basically an FPGA married to a RISC processor. So if you have a bit of RISC code that can be implemented in the FPGA portion, and you have enough spare cells to add it, and it takes 10 clock cycles for the FPGA "user instruction" to complete but 200 to do it outright in the original RISC instructions, then you're seeing a 20-to-1 speed increase for that bit. You speed up the function without overclocking. Actually, what you've done is trade off chip area for time.

first, where exactly is code run, if it isn't 'on a chip', and second, what? speed up the experienced speed?

When a function is defined in code, you have to use multiple processor cycles to complete the function. However, when the function is "on the chip", that entire function can be completed in just one assembly-level call to the processor.

"Experienced speed" is of course a pseudo-benchmark, because it can't be standardized and its components are highly specialized. It's how fast you can complete a set of real-world tasks.

When a function is defined in code, you have to use multiple processor cycles to complete the function. However, when the function is "on the chip", that entire function can be completed in just one assembly-level call to the processor.

But you cannot say that one "assembly level call" to the processor will take (even) fewer "processor cycles" to complete. Hint: very few instructions in even today's CPUs take a single clock cycle to execute; most take several. It's just that with pipelining, many instructions have an effective throughput of one per cycle.

This looks interesting; at this generation it looks to be aimed at dedicated applications. You code for your particular application and use their compiler, which restructures the CPU to optimize for that application. What it does not say is whether the hardware changes are read/write. If you release a maintenance patch to your application, do you have to swap in a new CPU for optimal performance? If the area is read/write, just how many times can you change the CPU instruction set? Can you change the CPU instruction set with something other than their compiler - that is, using a microcode release that rewrites the CPU? I would not want to load a compiler onto every one of my products.

The advantages of Java (and .NET, once Mono comes out) are not just portability but managed code as well, to help protect you from things like buffer overflows. This applies as well to interpreted languages like Perl, Tcl, Python, etc.

Where I see a real possibility is in taking the JVM/CLR/Parrot/etc. and putting part of THAT functionality on-chip. Imagine your bytecode or interpreted programs running as fast on this platform as a compiled program runs on your run-of-the-mill Intel or AMD processor!

There's a cool library called GNU Lightning [gnu.org] which will generate machine code at runtime, which is good for JITs and such. It isn't exactly what you're looking for, but it illustrates that having a standard assembly language (or, much more likely, several standard assembly languages!) isn't all that far off.

For the most part, with FPGAs you build the configuration from scratch: you give the chip its identity - how it works, what it does, and so on.

This chip sounds like a hybrid between an FPGA and a run-of-the-mill general-purpose RISC processor. Being based on a RISC instruction set, you code for it as you would a normal processor; however, if the compiler sees code that could take advantage of more CPU support, it can add instructions to the FPGA-like portion of the chip to enable better throughput.

The short summary is: FPGA: programmed from scratch. Standard RISC processor: already has an instruction set that you program against.

An FPGA is just a block of logic gates that can be connected after original manufacture. Typically, they are used to implement simple logic cheaply and easily. This is more of an entire processor designed on a similar principle. I would guess that it includes registers, a clock, bus connection facilities, etc. If anything, this is closer to a CPLD, which combines I/O blocks, function cells and interconnection blocks to create somewhat more complicated (and oftentimes sequential, as opposed to combinational) logic.

When you write code for this processor, the compiler figures out which operations would fit best in reprogrammable logic, then configures the logic and compiles to this custom instruction set all on its own. At runtime, the custom logic is loaded and the program executes.

A traditional FPGA, while reconfigurable, is normally developed in Verilog or VHDL. Where reconfigurable logic is used in a microprocessor like this one, the compiler handles that step instead.

Well, it seems rather similar to the Virtex-II Pro; those have PowerPCs integrated on them, although they are rather expensive. And while the individual chips may not be all that expensive, the boards are.

All in all, it seems like these have a developer environment that helps the user port C/C++ programs to this platform. There have been quite a few of those chips/systems before, though. It will be interesting to see if this one can get off the ground where the others have failed.

Short answer: FPGAs let you build using basic gates and (very small) lookup tables. This lets you build anything you please, and fully optimize the number of functional units of each type that you have, but has a speed and size penalty.

This chip is basically a RISC processor with an FPGA-type fabric bolted on as a co-processor, as far as I can tell from the detail-poor press release. By implementing most of the instruction pipeline as fixed, optimized hardware, it runs without any of the penalties of a purely FPGA-based implementation. When you have a number-crunching task that would benefit from a custom logic implementation enough to offset the performance penalty of implementing it in programmable logic blocks, the compiler configures the programmable logic into a suitable coprocessor which is stuck in as an extra branch of the instruction pipeline.

How much benefit you get from this depends on what you're doing. Modern general-purpose microprocessors have enough vector instructions to handle most DSP-ish tasks without an abysmal speed penalty (just a large size and power penalty over a purely DSP-based implementation). Most computing tasks aren't limited by processing horsepower at all - they're either waiting for memory accesses to complete (even cache accesses are very slow compared to register accesses), or they're waiting for the target address of a branch to be decided (speculation and BTBs don't address this perfectly by a long shot). A reconfigurable processor would suffer from much the same type of problem. While using the programmable logic path for slice processing could remove some of the branching penalties (by following all paths and selecting the desired result), this would be at an even greater area and power cost.

For specialized applications, it would be quite useful, of course.

A quick glossary of terms being thrown around, for anyone confused:

FPGA - Field Programmable Gate Array. This is a combination of lookup tables, sum-of-products combinational logic blocks, and scratch-pad SRAM that you can hook up in nearly arbitrary ways to produce custom circuits at a gate level. Bulky and slow, but good at implementing algorithms efficiently. Configuration information is loaded from a serial PROM chip at startup, letting you change it relatively easily.

CPLD - Complex Programmable Logic Device. Like an FPGA, but stores configuration information internally, so you need to take out the CPLD and burn it to change configuration instead of re-burning the configuration PROM.

PLA/PLD - Programmable Logic Array/Device. Little cousin to the CPLD. This is what you played with in second or third year. Typically these are just a sum-of-products combinational logic block with a register stuck on the end to latch the output. Useful as glue logic.

ASIC - Application-Specific Integrated Circuit. This is an integrated circuit that's half-made. A number of gates and registers and so forth have been fabricated on the chip, and the lowest few metal layers have been used for internal routing for these, but you get to define the upper metal layers to form arbitrary connections among these (either as the last fabrication step, or by laser-cutting a pre-fabricated wiring mesh to leave the geometry you want). Works much like a CPLD, but the design is decided at fabrication time and cannot be changed. Faster and less bulky than a CPLD implementation.

Standard cell design. This is a custom-fabricated integrated circuit that uses cells from a standard library of components, usually automatically placed and routed from a VHDL or Verilog description of what you want the chip to do. Faster than an ASIC if you have good place-and-route software, but more expensive in small quantities because you're making what amounts to a full custom chip. Design time is much less than a fully custom design would be, though (but verifying that the design description is correct is a royal pain).

FPGA stands for Field Programmable Gate Array... and it is a chip that can be programmed, and re-programmed... The programming is a low-level one... even lower than micros... you design the electrical connections between gates...

Of course, there is no such thing as a universal solution and the Stretch processor does have its limits. One significant area is in "low touch" operations such as network processors. While it can certainly do the relatively simple packet inspection and transformation that switch fabrics and network processors normally handle, it is really much better suited to the heavy-duty calculation- and manipulation-intensive tasks found in "high touch" applications such as video compression. For example, H.263/264 motion estimation is capable of producing very high-quality video from a relatively small bit stream, but requires lots (and lots) of raw processing horsepower. Happily, the Stretch processor is only too happy to oblige, churning out a SAD (sum-absolute difference) operation on a tile-full of pixels for H.263 video in 43 ns (H.264 takes 83 ns).

I think we're going to have to move the crypto benchmarks back a step when this tech comes out. Not very many of us have RISC chips that are optimized for MD5 or any of the other popular crypto formulas, but if the typical consumer PC had this technology, we could all effectively have an on-demand RISC for whatever we need at the moment sitting in our PCs.

In short, the time-to-crack using consumer technologies for almost any form of crypto is about to take a step backwards. It won't "break" anything, but brute-force combinations will be examined faster, meaning higher standards will be needed for the same level of protection you have today.

So? The only reason crypto works at the moment is because cracking is several hundred orders of magnitude slower than encrypting/decrypting.

Taking more time to encrypt/decrypt isn't a problem (does anyone here notice the difference between 2.5ms and 5ms?), but reducing the crack time by the same proportion means that codes that were built to last years might only last months, or even mere weeks, which is a real problem.

Along with jsac's comment (more processor power benefits encryptors exponentially but crackers only linearly; on the whole, more power means a win for encryptors), I'd like to point out that this is only a setback for encryption inasmuch as encryptors claim their encryption will keep your data safe for all time. Which is to say, at least for the reputable encryptors, this isn't a setback at all.

If you insist on putting words in their mouths, then yeah, you might consider it a setback. But that's your misunderstanding, not theirs. All reputable encryptors have accounted for Moore's Law in their cost/benefit tradeoffs. Since it doesn't take much encryption power before brute-forcing it requires computers larger than the Universe (and since "cracks" on good encryption are really typically just ways of collapsing the search space, not procedures that give immediate answers, adding more bits will often require Universe-sized machines there, too), this isn't that big a deal for encryption. Push your key size up and be done with it. Even conventional machines can handle that today; it just takes longer.

We do, say, 2048-bit encryption (asymmetric), because it would be "too slow" to do 20480-bit encryption. "Too slow" here is a fuzzy term, but generally speaking, if you're sending an encrypted email you don't want to hit "send" and have it delayed for three weeks while it gets encrypted. There's no real reason we couldn't do it today.

As computers speed up, both encryption and decryption get faster. However, while adding another 128 bits to 128-bit symmetric cipher may be "free" with newer computers (and ev

Well, even if his math was wrong, his point is still valid.. going from 5 trillion years to 5 billion years isn't much different (of course, even 128 bit encryption is currently thought to take much longer [avolio.com] than a measly 5 trillion years to brute force).

Most cryptology systems are purposefully designed to take an absolutely absurd amount of time to crack -- exactly to account for many of these instant 1000 fold improvements.

How can something that normally takes "hundreds of thousands of instructions" be handled in a single instruction? Surely all the same mathematical operations must take place, except for some optimization. Or is it a matter of a certain structure for computation being created in a more permanent fashion rather than being dynamically formed upon demand? Then the operations could be performed in a single cycle. On the other hand, that portion of the processor would become useless to other tasks.
Or am I misunderstanding this entirely?

Say you had to compute a 10000-entry sin/cos table (simple example). The processor would reconfigure itself to perform sin/cos operations in a single cycle (parallel ALUs etc.) and, if there were enough configurable circuits, perhaps multiple sin/cos table entries at once. That's where the speed advantage is - large blocks of repetitious calculations. With a sophisticated enough reprogramming AI, computationally intensive apps like video games could get a huge performance boost.

There is the analysis required to even determine that the incoming instructions need sin/cos. Then there has to be a lookup into a rule table for how to rewrite the gates to optimize for this. Then that rule needs to be applied. You have to show me that this can all be done faster and cheaper than an x86 at 4GHz just ramming it through. Maybe it can, but I am skeptical.

You are making the assumption that all of this is done on the fly. It's not. The compiler would, at compile time, locate candidate code sequences and generate the hardware configuration for them.

You hit upon the answer in the latter portion of your post. Most cpus are generalists--they're fast at most things, but aren't optimized for anything. This kind of tech allows you to optimize your cpu for a particular task.

If you have something that needs to do a simple operation on each member of a large data set, the chip could be configured as many tiny simple cores that are just smart enough to do that operation.

Or if you needed to do a complicated math function, you could optimize the cpu for that

I studied "Custom Computing" as it was called at my university a few years ago. That was based around using FPGAs as the processor, but with the same idea of doing on-the-fly redesign of your hardware to suit the current problem.

The basic idea is to move problems from the time space (i.e. do X, then Y, then Z, taking T time to do it) to the physical space (i.e. do X next to Y next to Z, taking S transistors to do so, but only one cycle). So your simple add operation in a regular microprocessor, which fetches, decodes and executes over several cycles, can instead become dedicated logic that produces its result in one.

You can do lots of addition/subtraction instructions to get the result of a single multiplication instruction.

Maybe they meant that thousands of clock cycles can be reduced to one clock cycle, since you can have larger single instructions (e.g. square root over pi or something) programmed into the chip that only take one cycle?

It's a DSP/RISC processor (basically the same thing) with an on-chip FPGA. If you have some particular algorithm, you can put it on the FPGA instead of having to implement it in code. (This is a lot harder to explain than I thought it would be....)

In electrical terms, imagine a processor that has left some of its circuit space with a "This space for rent!" sign posted. Instead of being a hard-wired function like normal, there's a grid of switches that can be turned on and off in combinations in order to define a few new processor functions.

Sure, you have to "call your shot" and define your new function before you can use it, but storing the function inside the chip rather than as code makes it a whole lot faster to use...

IANAEE, but I was just wondering if this technology provides greater advantages to unique monolithic apps as opposed to apps targeted for virtual machines such as the JVM or CLR. Those VMs are general-purpose, and maybe apps that run on them would be "invisible" to the hardware reprogrammability... however I don't know how just-in-time native compilation might change that picture. Anyone with knowledge of this stuff care to enlighten?

Right now, this product isn't meant for PCs quite yet. Basically, the manufacturer's instructions are to write your program in standard C, and then run it through their application, which converts the most-used C functions into RISC instructions for the chip.

So "virtual machines" is a situation this chip hasn't had to encounter yet. I'm guessing that a PC user would have to throw the switch manually to change which "processor image" is running at any given time...

It's called DISC, Dynamically Reconfigurable Instruction Set Computer. It's existed for a few years now. If I remember correctly, there is a group at Berkeley working in the area that has released a few nice papers on it.

I remember a project where hardware engineers set up a CPU to modify itself until it learned to do a task by itself. It got to the point where the hardware was doing the right thing, but not because the hardware was reconfigured properly - rather, because the software was using minute nuances in the electricity flowing through it to get the job done. Even the hardware designers had no idea how it could possibly be working.

It was an FPGA, and it wasn't the CPU modifying itself, it was a genetic algorithm designing a circuit that would perform a specific task (differentiate between two different ranges of input signals, IIRC).

The interesting result was that the circuit designed by the GA didn't use conventional structures, but instead, according to traditional circuit design theory, should not have functioned at all -- dead loops, etc. The behavior and result was tied to the physical FPGA being used to test and give feedback to the GA -- the minute nuances, as you referred to them -- and was not portable to even another instance of the exact same FPGA.

I remember reading about this in either Popular Science or Discover magazine. I seem to remember that the head researcher took the chips to another building or room to show them off, and they didn't work. Then he took them back to the room they came from and they worked again. They finally determined that the rooms had slightly different temperatures, and the chips were so specific to that environment that changing the temperature even a tiny bit stopped them from working. Crazy stuff.

He used a Xilinx FPGA and a genetic algorithm (implemented separately) to evolve a circuit which could distinguish (IIRC) two different frequency tones on the input as a logic-level output. The "program" was allowed to interconnect the FPGA's configurable logic blocks in any old sort of way, internally and between CLBs. This included ways that would cause logic designers to shudder in horror :), and did not include a clock input to the circuit at all.

The result was a successful circuit that used a relatively small portion of the FPGA. But trying to work out how it accomplished the tone discrimination was impossible. There were sub-circuits that were isolated from the rest of the circuit but, when removed, would cause the circuit to fail. Thompson hypothesized that the circuits were taking advantage of "out of band" communication via electromagnetic or thermal influences on adjacent CLBs.

Furthermore, the circuits turned out to be very specific to the ambient temperature during training and usage, as well as being specific to a particular FPGA used (a working circuit on one would fail on another.)

In any case it was a fascinating small-scale exploration of what reconfigurable hardware and genetic algorithms could accomplish, when not constrained by the "clock driven sequential logic" paradigm nearly all human engineered circuits use.

Yes, sure, rewirable chips would be cool for certain applications, but how does one go about making them deal with multiple applications with multiple needs? You'd overload the CPU with a truckload of specialized instructions - which would probably slow it down. Granted, I see uses in things like mobile phones, but for multitasking machines, a 'jack of all trades' chip is the way to go.

You have OS support. New instructions are a resource that the OS manages. Too many processes want to add their own instructions? Then when a context switch takes place the OS overwrites instructions for the outgoing context with instructions for the new one. Same as managing small amounts of RAM by swapping.

From what I gathered, this allows the compiler to create an instruction that can do a lot of work in one instruction, NOT for the processor to decide to create an instruction. Think of it this way, if you know you need to do something like an array multiply many times, the compiler could create an instruction for it, and then use it as needed. The key to this is that the instruction set can be optimized on a program basis, so you don't waste gates on SSE2 instructions if you don't use them, etc.

This compares with FPGAs, I believe, in that most FPGA applications are fixed once loaded, although I know there was talk on Slashdot about Stargate systems (http://slashdot.org/article.pl?sid=03/02/15/1629237&mode=nested&tid=126) using FPGAs for general processing before.

FPGAs are not static. They can even be reconfigured during runtime. (Though it takes a lot of time, from the chip's point of view.)

Search around for reconfigurable FPGAs and you'll find that there are several projects doing this. I know of three off the top of my head (Stargate, RAW, Mitrion), so I wouldn't exactly call the idea new.

It sounds interesting enough that I wouldn't mind buying one to play with or port an OS to. Their numbers showing their 300MHz chip outperforming a 2GHz chip make sense if the instruction set has been changed for a single purpose. A coworker pointed out that task switching can't be that speedy. So this comes across as a general-purpose chip that can automatically tune itself to a specific purpose. Still, this can be useful.

The concept of a programmable hardware device isn't all that new. And the encoding and encryption they talk about speeding up is a typical application of PLDs. High-end routers use similar devices to optimize their tables, etc.
Kuro5hin has a nice article for beginners:
http://www.kuro5hin.org/story/2004/2/27/213254/152

FPGAs have had processor IPs [xilinx.com] available for a while, which, in theory, can be reprogrammed on the fly. But AFAIK, no-one does this. I doubt this will be any different.

Hardware manufacturers that need special hardware operations (e.g. MPEG-2 decoding) use dedicated, custom hardware for large-volume production. Dynamically configurable hardware is expensive for large-scale production, and small-scale production will likely use an FPGA for a similar effect. I may be sceptical, but I doubt it'll catch on.

Better yet, Xilinx also has FPGAs with up to four embedded PowerPC processors [xilinx.com]. These are the real deal, not IP cores that get compiled into the chip by the engineer. I suppose the difference to the part covered in the story is that the programmable logic can be reprogrammed on the fly, not so with this Xilinx part.

This is evolutionary, not revolutionary. Many chipmakers have offered microcontrollers and microprocessors with an FPGA on chip. Often it is an extension of the I/O built into the processor, so it's not much different from an external FPGA on the processor bus. Please note that this is NOT like processors that run on the FPGA itself - these are separate from the FPGA portion of the chip.

Stretch is different in a few ways:
It pulls the FPGA closer to the core, so that it can be utilized almost as part of the pipeline. I say almost because of the following statement in the article: "Inside the chip, the ISEF is coupled to the rest of the circuit by 128-bit buses and has 32 128-bit registers. It runs in parallel with other areas of the processor, effectively becoming a fully reconfigurable co-processor, and can be reprogrammed for new instructions at any time during operation."

So it's still fairly separate from the processor core.

But the core itself is high-performance (fast clock - a little faster than the average FPGA), and it has a very fast memory bus (again, faster than the average FPGA).

The downsides are likely to be:
1) Power cost and dissipation. Since it's a slow clock, the dissipation probably won't be bad, but it's not going into a small portable machine.
2) Time to reconfigure. This isn't meant to be a general processor with task switching. Context and task switching are going to be expensive, and if you plan on running two concurrent tasks that both require special instructions, the entire processor will likely perform, on average, much worse than it would without the reconfigurable portion. Unless, of course, the processes were built to use the same set of special instructions, so the context switch isn't more expensive than it is for today's processors.

So they seem to be targeting it correctly: specialized areas with, in general, only one task/program running at a time. Multimedia players, for example, would be great here. A digital recorder/player would work well if both the encoding and decoding portions of the code were compiled together, so the special instructions created wouldn't have to be changed for either application, allowing playback while recording.

This sounds vaguely like the dream solution for developers. The article says:

"It runs in parallel with other areas of the processor, effectively becoming a fully reconfigurable co-processor, and can be reprogrammed for new instructions at any time during operation."

Does that mean it can handle booting multiple OSes simultaneously? If so, how long before someone writes an app that bridges multiple OSes, allowing the equivalent of emulation without the emulation? I don't know about the rest of you, but the potential of this chip sounds like a dream come true. And at $35-$100 per chip... it's cheaper than the processor in most systems anyway.

The first processor that can add to its instruction set while operating? I think there were a few microprogrammed processors in the 70s/80s with writable control store that could do exactly that. Anybody remember PERQ workstations?
Now this new gadget appears to be able to extend itself by means of an embedded FPGA, instead of plain old microcode, so it's a bit like the Xilinx Virtex II PRO series (PowerPC core with big FPGA on one chip). The really innovative thing is that you don't have to program the FPGA in VHDL or Verilog, but the C++ compiler takes care of that.

I can just see this processor, mixed with a bit of Mark Tilden's analog AI research, really advancing Artificial Intelligence. For the uninitiated: Mark Tilden discovered that by tying together a group of only four or so transistors and sending a regular analog signal through them, he could get small robots to walk, and indeed do an amazing number of things, including optimize their path and even remember their solution for a short time (about 3 or 4 seconds). Not only that, but when given a certain stimulus need (for example, make them solar powered with only one area of light), they would compete with other bots to gain access to better light. Indeed, a lot of the behavior these little bots produce is so complex and lifelike that he has spent a long time just documenting it. Now give a set of these bots' circuits the ability to "optimize" the speed of the signal, and a few stimuli, and let it play. Suppose the stimulus was "human approval" -- some input from a human indicating good or bad... Heck, what do I know, I'm no AI researcher, but it always sounded cool to me :-)
For more information on Mark Tilden go to
BEAM Online [beam-online.com]

That insanely complicated piece of software that can automatically figure out what it needs the chip to do at any given time for its own survival -- oh yeah, we have those... PEOPLE! Now, can I get those neural processor connects and graft this thing to my head already?

Pretty skimpy blurb - I suspect that the product is either a) vapourware or b) a lot more limited than is discussed in the article.

From the article, I presume that the processor's microinstruction memory can be updated with special information embedded in the executable file. This is not as unique as you might think: virtually all Intel and AMD processors have the ability to have their microinstruction memory updated during the boot process - this is used to upload microinstruction updates/corrections wit

The idea of programmable chips is nothing new. Xilinx etc. have been doing it forever. The idea of putting both a standard core with a generic instruction set AND a programmable core on the same chip is very interesting. It will, however, be a niche product. You aren't going to use it in your home computer, because your home computer does a broad range of things.

This will be useful in the places they mentioned: places where you do a lot of processing that takes many generic instructions but can be translated into a single string of discrete instructions.

The more I think about it, this is the direction processors are going. We keep moving processors towards RISC-based cores. We keep adding specialized paths for things such as multimedia. Eventually we WILL have half the processor being a purely RISC core and half being programmable hardware for specialized, computationally intensive instructions. I retract my initial view.

I do wonder, though, what the lifetime is on the hardware side. How many times can you reprogram the hardware before it starts to die? What is the error rate in reprogramming it? What happens when a few programmable transistors die?

I've noticed some folks comparing this to Transmeta. While similar, there are a few more comparable architectures out there.

Perhaps the most notable (in its conception, at least) was Seymour Cray's attempt at a Pentium Pro core + reprogrammable extensions (via FPGA or the like) at his post-Cray Research company. More recently, IBM licensed PowerPC cores for use by Xilinx. Up to four of those cores get thrown on the die with a Virtex-II FPGA (?); each of the cores has the ability to add opcodes in FPGA lan

Stretch claims that their CPU running at 300 MHz has shown superior performance to a 2 GHz box. We have no details of their testing, and I wonder about the real-world performance.

Natural questions come to mind, like: how quickly does the chip configure itself to optimize for the application, does configuration occur only at the start of the application, how many chip-configuring applications can it run concurrently, will it optimize for interpreted languages, can some configurations be made "permanent" to accom

Star Bridge Systems [starbridgesystems.com] has been selling computers that reconfigure their own logic (with the help of compilers) for about 5 years now. True, their solution isn't a single chip, but the idea of reconfigurable computing is not at all new, and Star Bridge's implementation appears to be even more flexible.

General purpose CPUs are fast, ubiquitous, and cheap. While compelling, this new approach is in no sense a slam-dunk in the market. Stretch will have to show a compelling case for why this is a faster and cheaper alternative to the x86 (compatible) hegemony.

The original design for the Zilog Z-80000 (not to be confused with the Z80000 that actually shipped, which was an enhanced Z8001) was also dynamically self-configuring and optimized its execution based on the frequency of use of instructions.

Of course, that was only a little over 20 years ago.

FYI: Since somebody is going to ask... The original Z80000 design was killed when Zilog stalled out as a general purpose processor maker and moved into embedded processors after the bugs in the initial run of Z8001 chips and IBM's selection of the Intel 8088.

It seems Stretch is not the only company that announced such a product today:
EE Times article [eetimes.com].
Also, keep in mind, customizable ISAs have been around for a while -- in Tensilica and ARC processors. These guys do it dynamically.

I'm currently working on modular multiprocessor systems implemented on FPGAs, so this field is something I know something about.

Altera produce an FPGA with one or more built-in ARM processors. This sounds very similar to the Stretch system, but the ARM processors' connection into the fabric of the FPGA is limited by the not-particularly-fast bus used with the processor. Stretch appear to have made the data transfer rate between the two parts of utmost importance, which is essential in high-throughput applications like this.

Altera have also developed a softcore processor, that is, one implemented entirely on an FPGA. It is highly configurable - instructions can be added, cache and memory behavior altered, buses adapted, etc. Coupled with things such as the DSP blocks (trees of multiply-accumulates), a 50 MHz processor can process data in a specific task at the same rate as a general purpose processor running at ten times the speed.

The work I'm doing is investigating the use of many of these processors on one FPGA. Levels of optimisation that cannot be achieved with conventional multiprocessor systems will be possible: changing the memory system to suit specific algorithms, or the bus widths between certain processors, will allow much better performance.

Stretch also seems to be making a difference by claiming to have easy-to-use, working development tools, which is one thing Altera cannot really claim to have done.