
Sam Haine '95 writes "EETimes is reporting that ARM Holdings have developed an asynchronous processor based on the ARM9 core. The ARM996HS is thought to be the world's first commercial clockless processor. ARM announced they were developing the processor back in October 2004, along with an unnamed lead customer, which it appears could be Philips. The processor is especially suitable for automotive, medical and deeply embedded control applications. Although reduced power consumption, due to the lack of clock circuitry, is one benefit, the clockless design also produces a low electromagnetic signature because of the diffuse nature of digital transitions within the chip. Because clockless processors consume zero dynamic power when there is no activity, they can significantly extend battery life compared with clocked equivalents."

In implantable medical electronics, extremely low clock frequencies are sometimes used. I've heard of sub-hertz frequencies, i.e. less than one clock pulse per second, when in standby modes. Not quite millihertz, but certainly decihertz.

The peripherals (serial ports, sound, LCD,...) are still clocked. The core is synchronised with peripherals by peripheral bus interlocks.

This is not really any different from the way a clocked core synchronises with peripherals. These days devices like the PXA255 etc. used in PDAs run independent clocks for the peripherals and the CPU. This allows for things like speed stepping to save power.

There has to be a "clock" in the system. What asynchronous means is that the ability to do add, sub, mul, if, jump, next-instruction, etc. is not keyed to the clock. These operations are instead keyed to command signals, and instead of implicitly completing their request at the end of X clocks, they also have to generate an "I'm complete" signal. The controller continuously monitors them and acts as a flow-control manager for each segment of the CPU. So sound modules, etc. will be using a clock for only those things.
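A rough Python sketch of that flow-control idea (all names and delays are made up, just to illustrate a unit signalling its own completion instead of counting clock ticks):

```python
# Toy model of self-timed units: each unit raises its own "I'm complete" signal
# rather than finishing after a fixed number of clock ticks.
import random

class SelfTimedUnit:
    def __init__(self, name, min_delay, max_delay):
        self.name = name
        self.min_delay = min_delay
        self.max_delay = max_delay

    def execute(self, data):
        # Completion time depends on data and conditions, not on a clock edge.
        delay = random.uniform(self.min_delay, self.max_delay)
        return data + 1, delay           # (result, time until "I'm complete")

def run_program(units, steps):
    """Controller: hands work to each unit and waits for its done signal."""
    data, elapsed = 0, 0.0
    for _ in range(steps):
        for unit in units:
            data, delay = unit.execute(data)
            elapsed += delay             # advance only when the unit says it's done
    return data, elapsed

if __name__ == "__main__":
    pipeline = [SelfTimedUnit("fetch", 1, 2), SelfTimedUnit("alu", 1, 5)]
    print(run_program(pipeline, 10))
```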

I read the summary and cringed. (1) Don't call them clockless -- they're called a-synchronous because, unlike a synchronous processor (one with a clock), all the parts of the processor aren't constantly starting and stopping at the same time. A typical synchronous processor can only run at a maximum frequency inversely proportional to the delay of its critical path - so if it takes up to 5 nanoseconds for information to propagate from one part of the chip to the other, the clock cannot tick any faster than once every 5 nanoseconds. (2) One very serious problem in modern processors is clock skew [wikipedia.org] - if you have one central clock, the parts closest to the clock get the 'tick' signal faster than the parts farther away, so the processor doesn't run perfectly synchronously.
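Spelling out the arithmetic in that 5 ns example (a minimal sketch; the numbers are just the ones from the comment above):

```python
# A 5 ns critical path caps how fast the clock can tick.
critical_path_ns = 5.0
f_max_hz = 1.0 / (critical_path_ns * 1e-9)
print(f"max clock ~ {f_max_hz / 1e6:.0f} MHz")   # -> 200 MHz
```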

Maybe the first commercial micro-processor. DEC's VAX-8600 [microsoft.com] was asynchronous. And it smoked for its day. I worked on some of the multi-variant multi-source clock skew calculations for the simulator used to model the processor, among other duties. Very slick hardware for the time. External synchronous contexts are maintained, of course, for synchronous busses, but the internal processor speed is quicker in theory and the power is cheaper since you have fewer switching transitions. Think of the fun in ECL logic back then. :)

He didn't say it was a microprocessor. Actually, it was a small mainframe, in terms of size, or a high-end "supermini". It simply used asynchronous design concepts, when even the other minicomputers and mainframes of the day were synchronously clocked. The VAX 8600 was produced by a team at DEC that had a heritage doing large computers (PDP-10, DECSYSTEM-20). It was competing, internally, with a different group with a "midrange" (VAX) heritage, who produced the VAX 8800 and some other machines. There was

One of the top problems in CPU design is distributing the clock signal to every gate. It is very wasteful. Clockless CPUs are a revolution waiting to happen. And it will. The idea is just better in every respect. It will take effort to reengineer design tools and retrain designers, but clockless designs are far superior (now that we really know how to make them, which is a recent development).

I think the confusing part is that, in the terminology of conventional, "synchronous" design, "asynchronous logic" is used to mean "the combinatorial logic in a single stage". What conventional, clock-based design typically does is break the logic up into stages with clocked latches in between, thus limiting the depth of each "asynchronous" logic stage.

Unfortunately, self-clocked design (like the reported ARM uses) is also sometimes called "asynchronous" logic design; however, this is a completely different kind of thing than the "asynchronous" combinatorial logic used in clock-based design. Self-clocked design also does combinatorial logic in latched stages, but uses a self-timed asynchronous protocol to run the latches instead of a synchronous clock. Basically, the combinatorial logic figures out when it's finished, and tells both the next stage ("data's ready, latch it") and the input latch from the previous stage ("I'm done; gimme some more data").

To close the loop, each stage can wait until there's new data ready at its inputs, and space to put the output data. Thus, in absence of some bottleneck, your chip will simply run as fast as it can.
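A minimal Python sketch of the four-phase handshake being described (the Latch/req/ack names here are invented; real hardware does this with gates, not objects):

```python
# Minimal sketch of a four-phase (return-to-zero) handshake between two stages.
class Latch:
    def __init__(self):
        self.data = None
        self.req = False    # "data's ready, latch it"
        self.ack = False    # "I'm done; gimme some more data"

def producer_send(link, value):
    link.data = value
    link.req = True          # phase 1: raise request with valid data

def consumer_receive(link):
    assert link.req          # only latch when the request is up
    value = link.data
    link.ack = True          # phase 2: acknowledge
    return value

def producer_reset(link):
    assert link.ack
    link.req = False         # phase 3: drop request

def consumer_reset(link):
    assert not link.req
    link.ack = False         # phase 4: drop acknowledge, ready for the next datum

link = Latch()
for v in range(3):
    producer_send(link, v)
    print("stage 2 got", consumer_receive(link))
    producer_reset(link)
    consumer_reset(link)
```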

To overclock a self-timed design, you simply increase the voltage. No need to screw around with clock multipliers; as long as your oxide holds up, your traces don't migrate, and the chip doesn't melt...

Basically, the combinatorial logic figures out when it's finished, and tells both the next stage ("data's ready, latch it") and the input latch from the previous stage ("I'm done; gimme some more data").

That sounds a bit like a dataflow language [wikipedia.org]. Maybe you could make a program that automatically converts a program written in such a language into a chip design? Then we'd only need desktop chip manufacturing to make true open-sourced computing a reality...

Processors like this do not have a clock. Each piece of the processor is self-timing, with handshaking done between components to pass the data (compare this with clocked processors, where you can assume the data is at your input and valid just by counting cycles.) Asynchronous processors don't have global 'cycles' when all components must pass data.

But your assertion about the critical path is slightly off. Async processors still have a critical path. If you imagine the components as a bucket brigade and the data as the buckets, then they may not all be heaving the buckets at exactly the same time anymore, but they will still be slowed down by the slowest man in the line. The difference is that the critical path is now dynamic. You don't have to time everything to the static, worst-case component on your chip. If you consistently don't use the slowest components (say, the multiply unit), then you will get a faster IPT (instructions per time) on average.
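A back-of-the-envelope illustration of that point (the delays and instruction mix below are made up):

```python
# A clocked core pays the worst-case delay every cycle; an async core pays
# whatever each operation actually takes. Delays are invented, in nanoseconds.
delays = {"add": 2.0, "load": 3.0, "mul": 5.0}       # mul is the slow unit
program = ["add", "add", "load", "add", "mul", "add", "load", "add"]

clocked_time = len(program) * max(delays.values())   # every op waits for worst case
async_time = sum(delays[op] for op in program)       # each op takes its own time

print(f"clocked: {clocked_time} ns, async: {async_time} ns")
```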

And yes, you don't have clock skew any more which is nice, but you now have to handshake data back-and-forth across the chip. Of course putting decoupling circuitry in can help.

Hasn't the commercial microprocessor industry already been flirting with the idea of asynchronous electronics? Looking at developments like DDR, execution units in processors that accept instructions on both the up and down parts of the clock cycle, and whatnot, it seems as if the idea of strictly obeying a clock signal is becoming a bottleneck. Granted, it's a big jump to actual clock-less operation, but it seems as many of the big players in the processor market have already taken the first baby steps i

Most (all?) commodity motherboards are completely synchronous. In fact, even the buses running at different speeds are actually clocked at rational fractions of the One True System Clock. (Letting them run at different clocks would require extra latency for the synchronization stages, to keep metastability [wikipedia.org] from eating the data alive.)

If you're going to nitpick language, you should at least use the standard form of the word: "asynchronous". But this bit of language nazism is particularly lame: "asynchronous" and "clockless", in this context, mean exactly the same thing. "Asynchronous" simply means "not synchronized". How do you synchronize something? With a clock.

As others pointed out, you've made mistakes. The most glaring is that you assume that synchronous processors can only have one clock - that's incorrect. While the clock tick is of fixed length (by design), the global clock (as seen by external parties) may run at a different speed than internal clocks.

If a path of logic takes 5ns to complete, and its clock matches exactly, then you are perfectly optimized. You are hampered not by the clock, but by the transistor's switching speed. This path will have the

It's not just direct - power consumption is proportional to the *cube* of the frequency (according to the research paper I just peer-reviewed). But there are all kinds of ways to vastly reduce that, using voltage scaling, frequency scaling, and power-down-while-idle techniques.
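A quick sketch of where that cube law comes from (the capacitance and voltage numbers are made up, just to show the scaling):

```python
# Dynamic power is roughly C * V^2 * f; if supply voltage is scaled along with
# frequency (V ~ f), you end up with the ~f^3 relationship mentioned above.
def dynamic_power(c_eff, v, f):
    return c_eff * v**2 * f

C = 1e-9                                        # effective switched capacitance (invented)
for f_mhz, volts in [(200, 1.2), (100, 0.6)]:   # halve f, scale V with it
    p = dynamic_power(C, volts, f_mhz * 1e6)
    print(f"{f_mhz} MHz @ {volts} V -> {p * 1e3:.1f} mW")
# Halving f with proportional V gives ~1/8 the power, i.e. the cube law.
```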

The fact that the CPU itself has no master clock is absolutely irrelevant to timing applications. You can bet your bottom dollar that the processor will sink interrupts, and that there will be a timer/counter component to the chip. Timing won't be a problem.

Since the PICs have interrupts and several timers, I doubt he was talking about that.

On the PIC series of microcontrollers, you can time any code simply by adding up the clock cycles taken by each instruction and figuring in your clock rate. There's even a nice tool to do this for you. This is often handy for simple delays; sometimes you're using all the timers or you don't want to stick stuff into a bunch of configuration registers just to slow down a loop. I don't see this sort of timing being as easy w
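For anyone who hasn't done that PIC-style arithmetic, it looks roughly like this (a minimal sketch; the cycle counts are the usual mid-range PIC figures, but treat them as illustrative):

```python
# Count instruction cycles, divide by the instruction clock, get your delay.
f_osc = 4_000_000            # 4 MHz oscillator
f_inst = f_osc / 4           # mid-range PICs execute one instruction per 4 clocks

def delay_loop_time(iterations, cycles_per_iteration=3):
    # e.g. decfsz (1 cycle) + goto (2 cycles) per pass through the loop
    return iterations * cycles_per_iteration / f_inst

print(f"{delay_loop_time(250) * 1e3:.2f} ms")   # ~0.75 ms for 250 passes
```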

Truly wonderful and a very special company for the first two of those years; then it slowly and surely went downhill - these days, it's just another company. ARM's culture didn't manage to survive its rapid growth in those few years from fewer than two hundred people to more than seven hundred.

ARM was never like that. Unlike their parent company, Acorn, it was both a company of brilliant engineers and was always highly profitable. In later days, Acorn's share in ARM was all that kept it from going under.

> So basically, when it was a startup, you enjoyed your nerf tournaments, but then
> their investors eventually demanded that they make a profit. Was that about the time
> when you left?

Your view in this matter is utterly unlike the reality of events.

ARM was exceedingly hard-working, and to begin with something like half the staff had PhDs. What (IMHO) happened was that with rapid growth the quality of lower and middle management in particular was diluted, and also politics, the rot of all companies,

Sun has clockless chips up and running (real silicon, not sims) and they have done some interesting things, but they don't have a complete system that's ready to ship. And there are other components out there that use the clockless philosophy to do certain things, but they're not CPUs in any sense. To give credit where credit is due, as the parent post points out, ARM beat Sun out the door with a clockless CPU that is a drop-in replacement (to some degree, anyway -- not clear how much) for an existing, established architecture. But that wasn't/isn't Sun's goal (although perhaps it should be...). They're pushing in new directions, not using this to reimplement current architectures.

I took an undergrad class on asynchronous chip design back in 2000. The class project was to implement the ARM7 instruction set (well, most of it) in about 5 weeks. We split it up into teams doing the Fetch, Decode, Reg file, ALU, etc. The nice thing about asynch is that as long as you have well-defined, four-phase handshaking between blocks you don't have to worry about global timing (there is no global "time" reference!). We were able to get it mostly done in those 5 weeks. Nothing manufacturable, and not tuned for performance, but we could simulate execution.

One of the neatest things about asynch processors is their ability to run over a large range of voltages. You don't have to worry that lowering the voltage will make you miss gate setup timing, since the thing just slows down. Increasing the voltage shortens rise times and propagation delays and speeds the thing up. The grad students had a great demo where they powered one of their CPUs using a potato with some nails in it (like from elementary school science class). They called it the 'potato chip'.

Another cool thing about asynchronous processors is that you can see the effect of temperature on the processor's speed. Wikipedia [wikipedia.org] describes a demonstration in which hot coffee placed on the processor caused it to visibly slow down, while liquid nitrogen caused its speed to shoot up.

Your project was really cool, but it's just a very simple in-order pipeline. Doing the same thing on a complex, ~20-stage out-of-order pipeline is very different. For instance, verifying such a design is considerably more difficult than verifying a clocked design. With verification accounting for about half the design cycle these days, I believe that asynchronicity won't make it into high-perf processors in the near future.

The alternative proposed by the research community is GALS [boisestate.edu] - globally asynchronous, locally synchronous

Almost 20 years ago I did some asynchronous stuff as a discrete-logic board designer. It was pretty seductive - we could save lots of power and use slower, cheaper parts without sacrificing the overall board speed.

It didn't really work out. While we could easily get prototypes to work well over rated temperature ranges, getting the production version to work reliably was an order of magnitude more effort than the clocked version. As the complexity of the logic increases, the number of potential race con

I know nothing about microprocessor design, but a simple answer would be to have a temperature sensor attached to a voltage regulator. When the temperature gets too high, reduce the voltage, and consequently, the speed (that is, assuming the other few posts I skimmed were correct - always a toss-up on /.).

This is certainly not the first commercial processor without a clock. The PDP/8 operated using a series of delay lines arranged in a loop so that the end of an instruction triggered the next one. One of the EE courses I took (back when EE majors still had to use real test equipment and soldering irons) involved a design of a clocked version of a PDP/8 as a class project.

Gads. Now that I'm "overqualified" to write software (i.e., employers don't seem to think experience is worth paying any extra for), the geek world has completely forgotten that it even has a history.

Not to belittle the energy savings, but how fast is it compared to a clocked CPU with a similar instruction set? To me, speed is the most interesting quality of a new chip design, other than reliability. The problem with a clock is that clock speed is dictated by the slowest instruction. Since a clockless CPU does not have to wait for a clock signal to begin processing the next instruction in a sequence, it should be significantly faster than a conventional CPU. Why is this not being touted as the most important feature of this processor?

For those of us with short-term memories, we can go back in time and read historical articles about the Transmeta Crusoe [slashdot.org] processor, which was supposed to be clockless. Of course if you go to their Crusoe Page [transmeta.com] today, their pretty diagram sure has a clock.

What did I miss? I remember the hype, the early diagrams of how it was all supposed to weave through without the need for a clock. Would someone care to elaborate on the post-mortem of what was supposed to be the first clockless processor, 4 years ago?

"1997 - Intel develops an asynchronous, Pentium-compatible test chip that runs three times as fast, on half the power, as its synchronous equivalent. The device never makes it out of the lab."

So why didn't Intel's chip make it out of the lab? "It didn't provide enough of an improvement to justify a shift to a radical technology," Tristram says. "An asynchronous chip in the lab might be years ahead of any synchronous design, but the design, testing and manufacturing systems that

I know typing this out will be useless, and it will get overlooked by the mods, but I might as well say this. Asynchronous designs have several advantages:

1. It will give good power consumption characteristics, i.e. low power consumed, not just because of the built-in power-down mode, but also because of the voltage the chips will be running at. By pulling the voltage lower than a synchronous equivalent, it is simpler to get greater power savings. This becomes possible if you are willing to sacrifice speed, and in async devices the speed of switching can be dynamically altered, since each block waits until the previous one is done, not until some outside clock has ticked.

2. Security: async designs give security against side-channel power-analysis attacks. Since all gates must switch (standard async design usually uses a dual-rail design, so between the +ve and -ve rails the switching activity is the same whatever the data), differential power attacks become much harder - see the sketch after this list. Thus async designs are perfect for crypto chips (hardware AES, anyone?)

3. Elegance of solution: the world is generally async. Key presses are, memory accesses are, so why not the processor? :) (Yes, I know busses are clocked, before you start, but if they were not....)
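On point 2, a crude illustration of why dual-rail encoding flattens the power signature (a toy Python sketch; real designs do this in the gates themselves):

```python
# Each bit drives a "true" wire and a "false" wire, and exactly one of them
# fires per bit, so the number of transitions doesn't depend on the data value.
def dual_rail_transitions(bits):
    transitions = 0
    for b in bits:
        true_rail, false_rail = (1, 0) if b else (0, 1)
        transitions += true_rail + false_rail   # always exactly one per bit
    return transitions

print(dual_rail_transitions([1, 1, 1, 1]))   # 4
print(dual_rail_transitions([0, 0, 0, 0]))   # 4 -- same power signature
```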

But they have several points of disadvantage:

1. They are hard to do, especially using the synchronous design flow that most of the world uses. Synchronous tools assume, especially in RTL, that the world is combinational and that the sequential bits are simply registers clocked once per cycle (not true for full-custom designs like Intel's and AMD's, but it holds for slightly lower-level, especially ASIC, design).

2. The tools that exist now can either do a good implementation of only a few gates, i.e. small functions, or a bad implementation of larger functions that is, in the worst case, as slow as the synchronous equivalent. Tools exist, like Petrify http://www.lsi.upc.edu/~jordicf/petrify/ [upc.edu], but these become unusable for circuits with more than ~50 gates.

3. Async designs are usually large. This is not always true, but standard async designs are usually implemented as dual-rail or using 1-of-M encoding on the wires. The main overhead, though, comes from the handshaking circuitry. For really fine-grained pipelining, the output of each stage must be acknowledged to the previous stage. This adds a massive overhead, as it necessitates the use of a device called the Muller C-element, which drives its output to the inputs' value only when the inputs agree and otherwise holds its previous value (a toy model is sketched below). Many copies of this element are usually required, and it's this that adds space; for example, a simple 1-bit OR gate that would usually have 4 transistors has 16 transistors in the dual-rail async implementation.
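For the curious, here is a toy software model of that C-element behaviour (pure illustration; in silicon it's a handful of transistors, not a class):

```python
# Muller C-element: output follows the inputs only when they agree,
# and otherwise holds its previous value.
class CElement:
    def __init__(self):
        self.out = 0

    def update(self, a, b):
        if a == b:
            self.out = a
        return self.out        # inputs disagree: hold state

c = CElement()
print(c.update(1, 0))   # 0 (inputs disagree, holds initial value)
print(c.update(1, 1))   # 1 (both inputs high -> output goes high)
print(c.update(0, 1))   # 1 (disagree -> still holding)
print(c.update(0, 0))   # 0 (both low -> output goes low)
```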

For the time being, I think they will find a lot of use in low-power applications - such as embedded microcontrollers/processors in things like wireless sensor networks, and security processors. However, I believe that full-blown async processor designs are still very far off.

Thanks for looking at this with a realistic perspective. There is a reason that the article said these chips would be used in deeply embedded or automotive situations. In these situations, low power consumption granted by an asynchronous design is great.
Not so great, however, is the overall performance. Part of the reason for clocking something (for example synchronous busses) is to avoid the excessive need for handshaking algorithms. Extending the handshaking methodology to multiple pipeline stages s

You're right about that. I research side-channel attacks on crypto hardware, and my first response to this was -- well, this would make EM analysis more complicated. For those not familiar with the general approach: in side-channel attacks you don't try to do anything as complicated as breaking the underlying math of the crypto. Instead you observe the hardware for emissions that can give some clues as to the instructions being carried out. If your observations help give you any info about what the chip is processing, you might learn parts of keys or gain a statistical advantage in other attacks. So if it's harder to observe the signals emitted (electromagnetically) from the chip, then attacking the hardware is harder.

ARM made a clockless chip in 1994 for cellphones. Couldn't find an amazing reference, but a quick google turned up http://www1.cs.columbia.edu/async/misc/technologyreview_oct_01_2001.html [columbia.edu] where they briefly mention it... The last time I heard of this stuff being used was in 2001 -- I actually wrote an English paper about it purely to see if I could bore my professor :-p

ARM is actually building this chip with Handshake Solutions [handshakesolutions.com], a Philips incubator. The work stems from Philips Research as early as 1986 (yes, that's 20 years from research to product), and has matured very much over the years. We used to have courses at our university explaining the basics behind these asynchronous designs. All in all I'm excited to see this technology finally in a product, and hope it will make my PDA last yet a little bit longer.

Nope. All the pre-Mac Apple machines were based upon the MCS6502 and its derivatives. All were clocked, the original Apple ][ Standard at 1 MHz. The Apple///'s selling point was that it had a hardware real-time clock, which was removed in later revisions because of quality issues.

Many current CPUs don't have built in clocks, but still need them. This architecture is very different. It doesn't need a clock at all. All the timing is based on the propagation delay through the gates. This is extremely difficult to do right.

No kidding. When I took a digital systems lab class, we had to do one simple asynchronous circuit. The corresponding state machine only had four states (compared to a computer processor, which might have a hundred states or more), but it was probably the most difficult circuit to design. Basically, you have to make sure that as you're transitioning between states, you always end up in the correct one, no matter where you may be in between.

Async work is very annoying when the whole system is one state machine.

Hence, large-scale async work is often based on every data transfer between modules being sent along with a PULSE or READY signal. Of course, every module has to be designed so that its output is ready when it propagates the pulse... otherwise there's bogus output into the next module. Basically, one module having the propagation delay timed incorrectly can kill the whole system. BUT, with fast logic, your system will simply run as fast as the hardware can handle...

Commercial async processors have been around for AGES [multicians.org] -- but modern logic IC-based processors are rarely built and sold on a large scale, being mostly experimental designs.

Correct me if I'm wrong - I could be - however, if I am not mistaken, the early 6502 microprocessor used in the Commodore 64 (and in the VIC-20) did not have a clock or need one. However I could be mistaken, and info related to the 6502 is a little hard to come by. Plenty of it from hobbyists, however not all of it entirely accurate, and reaching CBM these days to ask is a little difficult ;)

I think they're still in use in positioning devices that point things like satellite dishes and on microwave hops that auto

AHA! Found it. It was the 65CE02 which had an on chip clock which you could send a trigger to stop, causing the MP to go into a suspended but wake-able state (and from 5v to 1.5v consumption). When the clock resumed via external trigger, so did the MP without having to go through its full start up cycle. They never did much with it oddly.

When I read the article, what popped into mind was low consumption while doing nothing, which is what made me think of it. So now I've shown my age and made quite the ass of myself, but what else is /. for?

So, not the same thing. Sorry for the ruckus :) Hey, I'm amazed I even remembered it :P

The original 6502 had dynamic register storage (using capacitors, similar to dynamic RAM) that would lose information if the clock was held off for very long. So the CPU clock had to be run at a certain minimum frequency (a few hundred kilohertz IIRC) and though it could be stopped briefly, more than a few microseconds would cause the CPU to crash hard.

It theoretically should make a good chip for PDAs and cellphones. I think initially it will be used as a controller for automobiles, though. Asynchronous chips are currently not that fast because the tools used to design them are incredibly new, but they are already very low power. I predict we'll have them all over the place in a couple of years. Intel and AMD might already be considering using (or may already have used) asynchronous logic in parts of their processors or support chipsets.

Basically a good asynchronous chip would draw almost no power while it's waiting for something (like I/O events from the network, keyboard, timers, etc). And it would instantly ramp up and handle the event as fast as it possibly could. The speed is generally a factor of voltage and temperature. It's how fast the gates can switch and perform interlocks under current conditions, rather than what rate a clock is driving everything.

It's going to be interesting to see what performance metric is used on these "clockless" chips by the industry and by the marketing/sales types. MIPS? FLOPS? SPECmark? Not that MHz was ever a good benchmark, but a figure like MIPS is a lot easier to manipulate to make your product appear faster than your competitors'.

That's the point. There is no distinct state. To "sleep" you aren't switching from on to off; you're just waiting for your interlock, which is the normal behavior. There shouldn't be much difference between sleeping and normally running in an asynchronous design. The slew rate is going to control how many operations/second your design can go through. Now if your design goes even further and automatically turns off circuitry when it's not in use (beyond just having it hold a 0) then you will have a delay as you p

Responsiveness of a CPU is never really a problem; humans generally perceive anything that happens in less than 1/10th of a second as happening instantaneously. The only real problem with using this chip in a PDA is it isn't very fast; the article says the chip is comparable to a 77MHz ARM9, which is several times slower than anything you'd find in a PDA today. I would love to see a Palm OS PDA based around this chip because of its EXTREMELY low power consumption; we could be looking at the same kind of bat

ARM and Intel are operating in very different market areas these days (sad, actually, as ARM processors fly). ARM are targeting the embedded and PDA type market (a lot of Pocket PCs use StrongARM), and given all their embedded processors stuck in cars, washing machines etc., I'd imagine in their target market space they've got more than 50% market share.

Can people please remember the computer industry does not start and stop with the latest bit of kit for playing DOOM3 or surfing the ruddy internet....

Most digital logic has at least one repeating signal called a clock, which is used to sequence the logical changes (e.g. from 1 to 0) in the circuit. By limiting changes of state to a periodic time, you can simplify a digital design. One of the major challenges in digital design (besides errors in logic) is dealing with timing-related issues such as race conditions. Race conditions occur when a logical operation uses the results of earlier operations. Because of the finite speed of signals inside a chip, sometimes a signal arrives too late for a proper operation to occur. Such an error is considered to be a race condition.

Clocks help by allowing the designer to effectively freeze the state of the logical circuit on a regular basis. This way, all the signals in a chip can propagate to where they are supposed to go, then the logical operations occur. This process repeats on every clock pulse.

The problems with using clocks are pretty significant, however. First, you need to add a lot of additional circuitry to implement a clock. Another problem is that, generally, A LOT of changes happen on every clock tick, which means a large spike in electrical current (because you need to use the electrical current to actually change the state of all of the digital circuits). This spike also causes what is known as noise in electronics, and with higher-frequency circuits the noise can actually cause interference with other, unconnected electronics (this is known as EMI). And another problem with a clock is that you generally need to keep it running all of the time for it to be useful, which means using electrical power even when no changes are occurring.

So, the asynchronous CPU is a significant engineering feat. It is very difficult to design, but it is probably much smaller and more efficient than any equivalent clocked ARM core. That said, I wonder how you actually evaluate the performance. With synchronous CPUs, it is simply a function of the clock speed and architecture. In addition, all of these devices need to be tested so that they are guaranteed to work - I wonder how they do that.

ARM have been working on asynchronous designs for a long, long time. I recall reading about their plans for an asynchronous core in Acorn User magazine, back when there actually were Acorn users. This was back when the ARM6 was new and shiny, and the asynchronous part was expected to be released as the ARM8.

Your Slashdot post has been audited by a nerd committee and has been found to be lacking in both quality and substance. Normally this would only result in downward moderation. However, in this instance it grossly lacks nerdly appreciation of the subject matter presented, indicating that you are not a true nerd. If you were a true nerd, you would have instead made a post about where one of the said clockless processors might be obtained, or maybe indicate how it mig

A clock is a timer, as measured in Hz (oscillations per second). Generally the actions within each device, such as your processor or video card, operate on their own clock (this is the GHz number), while devices communicate with each other using the bus at the speed of the bus (more distance, mismatched components, and the possibility for interference cause slower speeds, closer to 800MHz-1GHz these days). Essentially (as an example), when a processor wants to copy something from a register to memory, it puts

It's a regressive step if you look at the speed at which it can push things across. These days, the power consumed is as important an issue. Active research is going on in the area of Globally Asynchronous, Locally Synchronous (GALS, it's called ;) processors, where each module (like, say, the caches, execution units, reservation stations etc.) runs its own clock (and hence is synchronous within the module), and the modules communicate with each other using asynchronous protocols (known as delay-insensitive protocols). Such a design greatly reduces the need for clock wiring, which would greatly reduce area and save power, etc. (at the cost of some processing speed, of course). Google for Globally Asynchronous... if interested.

There's no easy answer. On a traditional clocked processor, each instruction takes a certain number of clock cycles. In the async case, everything just takes however long it takes. In fact, some arithmetic operations might take variable amounts of time depending on the value of the operands.

Given an equivalent process, layout technology, and number of transistors, an async design will be at least somewhat faster and vastly more power-efficient than a clocked design.

But none of those things are going to be equivalent in the real world - except possibly the process that ARM designs to. So comparisons will be difficult.