Posted
by
timothy
on Monday October 15, 2001 @03:33PM
from the all-battery-life-figures-are-lies dept.

CodeShark writes: "According to this story at CNN, Transmeta is set to release their new TM6000 microprocessor this afternoon. The chip apparently incorporates some of the functions usually provided by high-performance (and high-price!) chip sets. Transmeta is reporting a further 44% reduction in power requirements and sees the laptop and sub-laptop market as the primary market for their new CPU. Intel and AMD claim to be catching up with the Transmeta chips in terms of power requirements; I'd be curious to find out what real-world comparisons might make of those claims ..." If anyone out there is at Microprocessor Forum, please share in the comments any further details that come to light there.

If they do have a patent on coupling software with hardware to emulate registers on a CPU, then I would think it will be very difficult for Intel and AMD to follow suit and come up with an equally power- and heat-conservative solution in plain hardware, unless a new technology in solid state and integrated electronics is discovered.

I'm curious about what information AMD and Intel have to back up their claims that they're catching up to Transmeta in power requirements.

I just got a Toshiba laptop earlier this year with a 700 MHz Celeron. I love it, but I rarely use it without being plugged into the wall; in my experience it only lasts about 2-3 hours.

I remember seeing stuff saying a laptop with a Transmeta chip can have a battery life of about 8 hours.

Assuming that is true, how could Intel and AMD possibly say they are catching up? I mean, mine is a Celeron, not even a Pentium III or anything, and it sucks up power like I would never have imagined. I hope Intel isn't talking about their SpeedStep technology; that is just a freaking joke.

Anyone have more information on power consumption among the different chips? I would think Transmeta would have tons of information about this, since it's really their main selling point, isn't it? I'd better go check their site.

The battery life of a laptop also depends on the display (brightness, size and type), how long the hard drive stays spinning, how much you use the removable drives (e.g. CD-ROM, DVD-ROM) and any other components that are active.

Transmeta's claims have been shot down several times because Transmeta doesn't have control over the power consumption of the parts outside of the processor and the chipset.

From what I've heard, that 8-hour benchmark came from marketing. I don't believe the Transmeta chip can quite stack up to that. The good news is that it does well (4 or 5 hours?) against the power-hungry Intel/AMD offerings. Results: less heat, longer battery life, smaller package. Sony has a nice subnotebook running a Crusoe chip; it looks tempting but pricey. Transmeta will need a few more years to be taken seriously, but given time they'll start showing up in cell phones, PDAs, car stereos, etc. without enlarging the package.

The only reason to use x86 is because you want to use Windows (non-CE; if you want to use CE, ARM is of course the only way to go).

Transmeta chips are only low on power consumption compared to other chips running x86 code; compared to other chips performing the same tasks, they definitely are not.

The only truly viable markets for their code morphing are (sub-)notebooks running Windows (ironic, given the Linus connection) and as a transition path for anyone who wants to make a dent in the desktop/server market (you could introduce a new architecture and still get very passable x86 performance, a lot better than Merced in any case).

No, no it won't. The Crusoe chip will only go places where we need x86 instructions. There is no reason we need x86 instructions in a cell phone, PDA, or car stereo. All of those can use StrongARMs, DSPs, and other cheaper solutions.

8 hours of operation on the battery is marketing BS, eh? I get 10+ hours of battery operation on my Transmeta-based Sony VAIO C1VP on a daily basis. Mind you, that is using the quad-capacity battery, but 10 hours of work without having to plug in is pretty sweet, I must say. :)

It's also worth noting that your Celeron doesn't have the benefit of Intel's SpeedStep technology, and wastes power running at 700 MHz all the time. Secondly, it's not part of the lower-voltage line of Pentium III-M chips. Just one of those things.

I'd be curious to know what the watt-hour rating is on your laptop battery. I have an iBook with (IIRC) a 42 watt-hour battery. My machine lasts about 2 hours playing Diablo over AirPort, and 3-4 hours doing office-type stuff while playing MP3s half the time. The maximum wattage as reported on the bottom of my computer is 42 watts, meaning that on average it's using far less than that (even for compute-intensive things such as games). Either the other components of your machine are sucking the juice, or the Celeron uses far too much energy. I vote the latter, because lower-priced machines are also known to be lower quality, which implies higher energy usage. My P-150 Compaq has a 42 Wh battery but lasts 1/3 the time (idling) that my Apple does when playing games. Probably means a bad battery. So, there could be many explanations for how long your computer runs...
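The arithmetic behind these battery-life guesses is just capacity over average draw. A minimal sketch in Python; the numbers below are illustrative, not measurements:

```python
# Rough battery-life estimate: capacity (watt-hours) / average draw (watts).
# Figures are made up for illustration, not taken from any real laptop.

def runtime_hours(capacity_wh, avg_draw_watts):
    """Hours of runtime for a given battery capacity and average power draw."""
    return capacity_wh / avg_draw_watts

# A 42 Wh battery lasting about 3.5 hours implies an average draw of ~12 W,
# well under the 42 W maximum printed on the case.
print(runtime_hours(42, 12))  # 3.5
```

The gap between peak wattage and average draw is exactly why a "42 watt" label tells you almost nothing about runtime.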

I'm curious about what information AMD and Intel have to back up their claims that they're catching up to Transmeta in power requirements.

The only thing I've heard about is the revolutionary new Intel Pentium(R) [intel.com] processor, described by a company spokesman as "like the Pentium III, but consumes much less power." Operating with an order of magnitude fewer transistors, and clock speeds of up to 200MHz, the performance is almost as good as the Crusoe.

The best news is they're already released and available from reputable dealers [ebay.com] everywhere!

Having just left Transmeta for gamier pastures, I assure you that the people writing press releases, designing websites, and manning show floors for VIA (Centaur), Intel, AMD, and Transmeta are after only one thing- money.

All of their "issues" and "features" are make-believe. They are fly vomit, meant to turn consumers into a common, runny soup of stupidity, that can be slurped without the need to chew on issues.

Speedstep is not a joke. It's a cheap, excellent hack, far easier to verify and debug than PowerNow or LongRun. Intel enjoys most of the power savings afforded by LongRun simply by implementing APM and getting the same job done faster than the p95 and therefore going to sleep sooner. Sensible, mundane, and vomit-free, but true nonetheless.

LongRun has problems with all kinds of applications featuring unpredictable loads. And so does APM. Each is good at a certain set of applications, but neither is clearly superior. And to overlook the critical importance of your choice of operating system, southbridge, video card,... oh god i can't continue. This is like cleaning up someone else's vomit, and it's tripping my gag reflex.

Food, reconstitute thyself. Intel and Transmeta are in a deadly competitive battle. They are slitting their own wrists to give you 5% here and 3% there, and need fly vomit because the numbers 3 and 5 don't sell product. Listen to your friends. Try a Transmeta notebook. Try an Intel notebook. You will like what you like. End of story. Every portable is completely different, no matter which CPU you use. Read reviews, friends, and personal experience, not corporate web sites.

Ditzel said Transmeta will prove, despite Intel's claims to the contrary, that the TM5800 beats Intel's lowest power chip by a factor of 2 to 1. "And when we go to our highly integrated chip, we're going to take off another 44 percent," he said. "So we think we've got a substantial lead today, and we're going to keep that."
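Taking Ditzel's two numbers at face value, the claims compound like this (a back-of-the-envelope calculation, not anything Transmeta published):

```python
# If the TM5800 already uses half the power of Intel's lowest-power chip,
# and the integrated part cuts another 44%, the combined power relative
# to the Intel part would be:
relative_power = 0.5 * (1 - 0.44)
print(relative_power)  # 0.28 -- i.e. roughly a 3.5x advantage, if the claims hold
```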

And yet when we look at these laptops with their lower power processors, there is VERY little added battery life, for the simple reason that the processor is not the major consumer of power in a notebook.

When you factor in that the processors are much slower than the equivalent Intel or AMD (by how much varies by who you ask and what you're doing), and there doesn't seem to be any price break, why would anyone want to use a Transmeta processor?

Transmeta needs to stop trying to sell me that they are "more l33t than Intel" and show me products that are SIGNIFICANTLY better. If they can give me, say, twice the battery life it might be worth switching to an off-brand processor that is much slower.

And yet when we look at these laptops with their lower power processors, there is VERY little added battery life, for the simple reason that the processor is not the major consumer of power in a notebook.

Just another chip manufacturer trying to hype its product over features Cough*MHz*Cough that do little for the average user.

From what I've noticed, Crusoe chips really only show their worth in the smaller sub-notebooks, like the Sony PictureBook, where there isn't room for a CD-ROM or floppy drive. They also don't have the heat/fire problems that have cursed many laptop manufacturers. I have an old Gateway laptop that gets too hot to keep on my lap after 20-30 minutes of use.

Maybe you want a Transmeta processor for something other than a desktop. There are lots of other specialized uses for them out there.

Here is a rundown of the top 3 microprocessors in 1998:

80x86 - 120 million

68k - 74 million

MIPS - 54 million

I don't know if Transmeta is focusing on the desktop market or not, but there are lots of uses out there for things like MIPS, which are almost never found in desktops. Try video games, laser printers, cars, etc., etc.

I have to agree. I really wanted to like Transmeta. Admittedly, at first I fell into the "hey, Linus is behind it so it must be good" mindset. Then once they unveiled the code-morphing (or whatever it's called) technology I was really impressed. Wow, what a great idea, I thought: virtualizing the core of the processor and optimizing the x86 instructions on the fly. Not only should this be faster, but it would theoretically allow the chip to run many different architectures simply by updating the emulation/optimization layer. I thought it was one of the most innovative things I'd seen in a while. Somehow they've managed to screw it all up.

First of all, the performance has never been there. They can't even seem to get close to mid-range AMD and Intel chips, so they changed their positioning to "well, it's a LOW POWER consumption chip for laptops." Like the previous poster said, even if you halve the consumption of the CPU, unless you work on the LCD and other components you'll only increase battery life by a few percent. To the average user that's just not worth having to buy a more expensive and unproven chip.

The only other market I could see for them would be the embedded PC market, where a company selling hardware products spanning several architectures might want a single processor they could work with intimately, rather than having to learn the quirks of a different processor on each architecture. Honestly, I've racked my brain and can't even think of an example of such a company. Maybe Cisco? I'm not THAT familiar with their hardware, but maybe it spans more than one architecture.

Moral of the story: Just because someone puts out something you enjoy doesn't mean you'll enjoy everything they put out. That's the flawed logic that caused me to actually sit through an entire episode of That's my Bush! (shudder) What a stinking pile of horse-dung that was.

The only other market I could see for them would be the embedded PC market, where a company selling hardware products spanning several architectures might want a single processor they could work with intimately, rather than having to learn the quirks of a different processor on each architecture. Honestly, I've racked my brain and can't even think of an example of such a company. Maybe Cisco? I'm not THAT familiar with their hardware, but maybe it spans more than one architecture.

If you think about it, this has happened in the desktop world a few times. Pretty much everybody has had code running on Motorola 68K machines (Sun, SGI, MacOS, HP/UX) and then moved it to other chips, MacOS being the smoothest transition, with the 68K emulator as a bridge. BeOS moved from PowerPC to Intel; dunno if they had an emulator. They made the move so early in their existence that there probably wasn't a lot of code that needed to be moved.

Erm, I think that if you look hard enough, there are similar tricks going on in the Intel Pentiums (and probably others too) to give performance while still being compatible with even the earliest x86 code. In fact, I heard somewhere [zdnet.com] that the core of the P6 is essentially RISC-based, and that x86 instructions are converted "into simple micro-ops" prior to RISC-style execution.

It's 2:30 am and I can't sleep, so this is probably going to sound incoherent.

The translation to micro-ops isn't nearly as complicated as Transmeta's code morphing, mainly because it's about taking more complicated instructions and breaking them into simple, manageable pieces that can be chained very quickly.

Most of these micro-ops handle memory addressing. An operand like array_pointer + (item_number * item_size) can be addressed very quickly, but is multiple micro-ops in itself, not to mention the actual operation it is meant to accomplish.

It's also one of the things limiting how superscalar P6-based chips can be: the first instruction decoded each cycle can produce up to four micro-ops, while the other two instructions that cycle must each be a single micro-op. I think Athlons are not bound by this limitation, however.
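The split the parent describes can be sketched like this. This is not a real decoder, and the micro-op names are invented; it just illustrates why one x86 instruction with a scaled-index operand can cost several micro-ops:

```python
# Illustrative only: how a single x86 instruction with a scaled-index memory
# operand might break into simple micro-ops. The micro-op syntax is made up.

def decode(instr):
    """Split an 'add eax, [ebx + ecx*4]' style instruction into micro-ops."""
    if instr == "add eax, [ebx + ecx*4]":
        return [
            "tmp0 = ecx << 2",    # scale the index by the item size
            "tmp1 = ebx + tmp0",  # compute the effective address
            "tmp2 = load(tmp1)",  # fetch the operand from memory
            "eax  = eax + tmp2",  # finally, the add itself
        ]
    return [instr]  # simple register-only instructions map to one micro-op

print(len(decode("add eax, [ebx + ecx*4]")))  # 4 -- one instruction, four micro-ops
```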

Code morphing is more about taking the code, converting it to a new instruction set, and then keeping that converted code around. Transmeta throws in some nifty optimization gizmos, too. It also _saves_ this information, which is usually much larger space-wise than the original instructions, but it's still quite nice.

Essentially, it's an optimizing emulator in hardware. Old idea, new hardware.
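As a toy model of that "optimizing emulator" idea -- translate once, cache, reuse -- here is a minimal sketch in Python. The guest instruction set and the translation scheme are invented for illustration:

```python
# Toy dynamic binary translator: translate a block of "guest" instructions
# once, cache the result, and reuse the cached translation on later runs.

translation_cache = {}
translations_done = 0

def translate(block):
    """The expensive step: turn guest code into host code (here, lambdas)."""
    global translations_done
    translations_done += 1
    ops = []
    for instr in block:
        op, arg = instr.split()
        if op == "add":
            ops.append(lambda acc, n=int(arg): acc + n)
        elif op == "mul":
            ops.append(lambda acc, n=int(arg): acc * n)
    return ops

def run(block, acc=0):
    key = tuple(block)
    if key not in translation_cache:  # translate only on first sight
        translation_cache[key] = translate(block)
    for op in translation_cache[key]:
        acc = op(acc)
    return acc

prog = ["add 2", "mul 3"]
print(run(prog))          # 6: (0 + 2) * 3
print(run(prog))          # 6 again, without re-translating
print(translations_done)  # 1: the second run hit the cache
```

The cache is the whole point: the translation cost is paid once, which is why the first run of a program on a Crusoe is the slow one.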

I think companies should focus more on lower-power displays, hard drives, and especially those evil (when it comes to power usage) CD/DVD drives (although using those a lot on the road is generally not a good idea).

The problem is that the Transmeta chip optimizes programs, making them faster only relative to the unoptimized first run. It is possible to speed up subsequent runs by saving recent translations instead of re-translating the instructions each time, essentially 'hardwiring' the processor to do a certain set of instructions with no translation step.

Compared to an actual Intel or AMD CPU that actually has these instructions hardwired, the Transmeta chip makes a pathetic showing.

There is the possibility of software loops running faster than native code in certain circumstances, in theory.

HP actually found that some code ran faster in their PA-RISC emulator for PA-RISC than on the bare hardware! Perhaps HP was using the equivalent of gcc -O instead of gcc -O2 in their trials, thus giving more room for dynamic optimizations, but they got good results for an early project. Dynamic code optimization still looks promising. HP is working on a product that uses quick-and-dirty PA-RISC to IA-64 translation and dynamic code optimization to ease the transition from PA-RISC to IA-64.

The HP Dynamo [hp.com] project has some good arguments about why dynamic optimizations might be becoming increasingly useful. Basically, HP was researching emulation, so they wrote a PA-RISC emulator to run on PA-RISC and put in some dynamic code optimization to increase the performance of commonly run code. There's the old rule of thumb that 80% of your CPU time is spent on 20% of the code, so they concentrate expensive optimizations on the commonly run code, after on-the-fly profiling indicates which areas should be optimized. It's like having a -O4 option for gcc and only using it on the code that gets run a lot, in order to avoid all the bloat associated with gcc -O3.
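The "optimize only the hot 20%" approach can be sketched in a few lines; the threshold, fragment names, and structure here are made up for illustration, not taken from Dynamo:

```python
# Sketch of profile-then-optimize: count how often each code fragment runs,
# and spend optimization effort only on fragments that cross a hotness
# threshold. Everything here is invented for the example.

HOT_THRESHOLD = 50

exec_counts = {}
optimized = set()

def execute(fragment_id):
    """Interpret a fragment while profiling it; optimize it once it gets hot."""
    exec_counts[fragment_id] = exec_counts.get(fragment_id, 0) + 1
    if exec_counts[fragment_id] == HOT_THRESHOLD:
        optimized.add(fragment_id)  # stand-in for the expensive optimization pass

# 80/20 in miniature: one fragment runs constantly, the others rarely.
for _ in range(1000):
    execute("inner_loop")
for rare in ("startup", "error_path"):
    execute(rare)

print(optimized)  # {'inner_loop'} -- only the hot path earned the effort
```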

Personally, I'd love to see AMD or Intel throw away hardware emulation of the ancient x86 instruction set. The greatly restricted number of registers forces compilers to hide the inherent parallelism in the source code, and a lot of chip real estate is wasted extracting that parallelism back out of the binaries. It's not as bad as the stack-based JVM, but the x86 instruction set is pretty bad at expressing parallelism. I think software emulation of legacy apps is where it's at. If Intel or AMD released an x86 emulator for their new chips and got Microsoft to go along with the idea of software emulation of x86, then we'd see native apps running much more efficiently. It's my understanding that IA-64 kind of does this with an x86 emulation mode. However, I think that chip real estate would be better spent on things to speed up native code.

If I'm not mistaken, Win95 even had partial virtual DOS machines for each DOS executable. It's not too much more of a leap to emulate the ancient instruction set once you're emulating the ancient OS. Transmeta wants the flexibility to completely redesign the native instruction set for each release, and that's understandable. However, it would be nice to move on to compiling into something that better expresses the inherent parallelism in the source code.

I might be missing something, but if the power consumption is so much lower, what happens when you overclock these chips?

Does it mean you can get really high speed out of them compared to the performance of an Intel chip running at the same clock, or does the heat from overclocking come from somewhere else, meaning that you can't do this?

As is often the case, the Register has some really interesting comments on this story here [theregister.co.uk].
Apparently there is a lot of market control and damage control tied to this release; there is a class-action suit underway over previous claims of high-speed chips. Anyway, read the Register article for more details.

By using less power, one would imagine less heat would be generated as well. But depending on the materials and processes used, will these Transmeta chips follow the same 'faster, hotter, more expensive' trend that AMD is following?

...will these Transmeta chips follow the same 'faster, hotter, more expensive' trend that AMD is following?

AMD's latest CPUs use less power and generate less heat. When they get to 0.13 micron with silicon-on-insulator and copper interconnects (Q1 next year), AMD chips will use 20% less power and run 20% cooler.

Transmeta refuses to release any industry standard benchmark results for their CPUs.

Ask them. If you get something other than FUD back, please post it.

Why won't they run the SPEC int and FP tests??

They try to hide behind low-power claims and can spin FUD with the best of 'em. Low power means absolutely nothing unless you know how much WORK the chip can do.

They will give you benchmark results only if you sign an NDA and promise not to tell anyone how slow their chips are. Most companies who sign the NDA decide not to use their product. What does that say?

I'd really like to see these guys compete with Intel/Rambust, but I have no respect for companies built on FUD, regardless of who is involved.

Their Code Morphing layer is antagonistic towards the benchmarks you mentioned, mainly because those suites run lots of different tests that give the code-morphing software no chance to optimize. On real-world tests they'd probably show better results.

I think a fairer comparison would be performance per watt rather than a synthetic benchmark that doesn't stress how much work you can do. It should also take into consideration the support chips that other traditional CPUs require (it looks like they're building in a bunch of stuff that you'd need secondary chips for on Intel and AMD).

There are plenty of benchmarks that use real software, or emulate the use of real software reasonably well. They won't publish those benchmarks either. If a company won't give you the information you need to make an informed decision on their products, don't buy their products.

I have a 2001 iBook. Apple claims 5 hours of battery life; I've never gotten more than 4:10, and usually closer to 3:40. I do like the machine, but... 5 hours would be a lot nicer, and considering the marketing, also a lot more honest. I'm going to be buying a second battery, but don't kid yourself: the second battery will make it more acceptable, not as outstanding as the brochure says. Caveat emptor, etc. etc. (Yes, set to maximum battery savings, too.) The AirPort card doesn't seem to change the battery life in either direction; I was afraid it would make it noticeably worse, but it hasn't, and having it built in is nice enough to be worth a (moderate) battery-life cut anyhow.

Besides not getting 5 hours (ever), the battery meter (at least under OS 9.1) is pretty jumpy, with the estimated time changing strangely, sometimes up, sometimes down.
When Mandrake 8.1 is ready for PPC, I would like to see what sort of battery life it gets.

I have a Sony Crusoe PictureBook with a double battery. I usually get 5+ hours out of it (pretty unimpressive in my opinion), but with a PCMCIA wireless card in it, I get less than 1 hour before the battery is dead.

Well, I haven't measured -- but I also haven't noticed any difference. Certainly not a cut from 5 hours to 1. I'd be surprised if it makes even 20 minutes of difference, but I'm not planning to take it out in order to run a controlled experiment. :)

I really am somewhat disappointed in the battery life, but then again a spare battery is something I wish I had anyhow.

Why is it that every time some new "revolutionary" processor design is announced, it's always about "blowing away Intel and AMD" by some unbelievable factor, but without fail the actual product release always seems to target the "laptop and low-power" market. Funny that.

Part of the deal with the Crusoe chips is that they do "code morphing", translating x86 instructions into something the Crusoe can handle.
What if the Crusoe chip could do the same with PowerPC code?
Imagine dual-booting Mac OS X with x86 Linux and Windows.
Now, that would be interesting (and probably not something Apple would like).

People keep trying this again and again and again, from Smalltalk, LISP and UCSD p-code through Digital's FX!32 and Apple's 68K/PPC Mixed Mode Manager to Java, but again and again these interpreted solutions lose out to economics and Moore's Law, turning them into liabilities instead of advantages. Give it up already.

Why do you mention Java here? Even if you hate it, it is the language that the most developers in the world know... and it reached that critical mass in a very short time. A failure? Hardly!

Besides, you fail to mention that the Intel chips from the Pentium Pro through the Pentium 4, and AMD's from the K6 through the Athlon, all do some internal translation from x86 CISC to RISC. They are RISC at the core. Are these failures? Hardly!

I don't hate Java. In fact I've probably written more lines of Java code in the past few years than you have used sheets of toilet paper in your entire life.

But I mention it because it has failed to accomplish most of the things that it set out to do: e.g. the lack of a "delete" operator is supposed to make memory management easier, but instead you just end up with enormous memory leaks if you're not careful; the VM is supposed to make your code run anywhere, but in reality you can only run on platforms that Sun makes JVMs for; the class libraries should make your code richer, but in reality you cannot even get the creation/modification date of a file; the language and bytecode are designed to allow tiny programs, but in reality you need at least 50MB to run a simple GUI app. Built-in and pervasive threading is supposed to make your code more responsive and scalable, but in reality it means always having to worry about locking, and having your 3000-client server die because the system runs out of memory to create new threads. And then there are bugs, bugs, bugs, bugs, bugs: there is no way to close the audio device once you've used it; input fields randomly acquire and lose focus (depending on the platform); the virtual machine never releases memory back to the OS (again, this may depend on the platform); NullPointerExceptions in java.io.* code; copy & paste mostly doesn't work; drag and drop mostly doesn't work; etc. etc.

So Java is either not yet finished, or simply failing to live up to its promises. The fact that Java has gained some popularity with lazy college teachers who want to be able to pull entire tutorials off the web and business drones who can't even distinguish between megabyte and megabit just means that Sun has done a great job marketing Java as a convenience language (i.e. a simple language with most of the nasty-looking bits removed) for convenience people.

We have these languages every once in a while in the industry. Remember Pascal? Java is the Pascal of the nineties.

As for the CPU examples... Even the Motorola 680x0 series used microcode to map their ISA onto the hardware that they had, and microcode-based chips go back way farther than that. So it's not "translation" per se that I think is a bad idea.

The bad idea is to wed yourselves to a "Code Morphing Layer" when what your customers want is a fast, silent and cheap computer. Because while the "Code Morphing Layer" promises to deliver that (just like e.g. Java), in reality _it does not do so yet_, and you lose out to Moore's law and simple economics.

Yeah, it's a shame the silicon team worked so closely with the morphing team. A generic Crusoe would be really cool, though not necessarily for applications where emulating multiple architectures per second would be demanded.

One concern that goes through my mind when I look at the not very stunning performance of Crusoe is the effectiveness of VLIW (very long instruction word) processors.

Both Transmeta and Intel have bet that VLIW processors are the way forward. Intel's Itanium and Transmeta's Crusoe are both based around the VLIW concept. Transmeta hides the VLIW nature of Crusoe behind the 'Code Morphing' software that allows the chip to be IA32-compatible - Intel's IA64 architecture gives compilers raw access to the VLIW nature of the processor, and has (very slow) on-chip emulation of IA32.

Between them, they make up the only commercial VLIW processors around, and both are very poor in terms of performance compared to more conventional modern processors, whilst at the same time introducing some enormous obstacles to overcome - IA64 requires some very major changes to the way compilers work, and Crusoe requires major extra complexity in the form of the Code Morphing translation layer.

I don't wish to jump the gun, but I think this means things don't look too bright for the VLIW concept. Evolutionary enhancements to conventional RISC/CISC processors appear able to continue Moore's Law for many years yet. AMD has outright rejected VLIW for its future 64-bit strategy (x86-64) and none of the other major CPU manufacturers seem to be jumping on board either.

Have Transmeta and Intel made a very large strategic mistake? VLIW looks good on paper, but is it effective on a practical level?

It will certainly be interesting to see what happens with future Crusoe and IA64 processors.

a VLIW architecture needs good compiler support: you gotta fill in all the slots of each VLIW instruction ahead of time (at compile time)! How much parallelism can you detect at compile time? Depends on the application, and on how much code specialization and duplication you wanna do.

The idea, though, is that this will be a win in the end (over purely dynamic scheduling) because, among other things, it vastly simplifies the instruction decode stage (and dispatch as well, I think) of the CPU. For certain applications of interest, the instruction decode is the primary bottleneck: whether because you're missing instruction cache, or because you just hafta do so much work to determine the data dependencies, register renaming, etc. that an out-of-order issue processor requires.
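The compile-time slot filling described above can be sketched as a greedy bundler. This toy (the instruction format and three-slot width are invented) starts a new bundle whenever an instruction depends on a result produced earlier in the current one:

```python
# Toy VLIW bundler: pack instructions into fixed-width bundles at "compile
# time", breaking a bundle whenever a data dependency would be violated.

BUNDLE_WIDTH = 3

def pack(instrs):
    """instrs: list of (dest, sources) tuples. Returns a list of bundles."""
    bundles, current, produced = [], [], set()
    for dest, sources in instrs:
        depends = any(src in produced for src in sources)
        if current and (depends or len(current) == BUNDLE_WIDTH):
            bundles.append(current)       # close the bundle and start fresh
            current, produced = [], set()
        current.append((dest, sources))
        produced.add(dest)
    if current:
        bundles.append(current)
    return bundles

prog = [
    ("a", ()),          # independent
    ("b", ()),          # independent: shares a's bundle
    ("c", ("a", "b")),  # needs a and b: forces a new bundle
    ("d", ()),          # independent of c: shares c's bundle
]
print(len(pack(prog)))  # 2 bundles; with no parallelism found it would be 4
```

Real VLIW compilers do vastly more (scheduling across branches, code duplication, speculation), but the core bet is the same: find the parallelism before the chip ever sees the code.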

What seems strange to me is that the Crusoe is x86 ISA compatible. This must mean it's doing all the VLIW instruction packing on the fly. My guess is that's not gonna fly, hehe. What's VLIW buying you in this case?

What seems strange to me is that the Crusoe is x86 ISA compatible. This must mean it's doing all the VLIW instruction packing on the fly. My guess is that's not gonna fly, hehe. What's VLIW buying you in this case?

A bunch of things. Primarily, the heat and power cost of the hardware decoding logic goes away, since decoding is implemented in software. Second, ignoring optimizations, the decoding only really needs to happen once.

Finally, being in software allows really complex decoding logic (such as attempting execution based on radical assumptions, failing, and immediately retrying without those assumptions) to be implemented much more easily, and also allows that logic to be updated easily if a mistake is found.
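That "try radical assumptions, fail, retry without them" pattern can be sketched like this. Everything here (the alias assumption, the function names) is invented to illustrate the idea, not how Crusoe actually does it:

```python
# Sketch of assume-and-fall-back translation: a "fast" version of an operation
# that relies on an unchecked assumption, guarded so a violation falls back to
# the always-correct conservative version.

class AssumptionViolated(Exception):
    pass

def fast_load(memory, addr, dirty_addrs):
    """Aggressive version: assumes addr was not recently written."""
    if addr in dirty_addrs:
        raise AssumptionViolated
    return memory[addr]

def safe_load(memory, addr, dirty_addrs):
    """Conservative version: always correct, no assumptions."""
    return memory[addr]

def load(memory, addr, dirty_addrs):
    try:
        return fast_load(memory, addr, dirty_addrs)
    except AssumptionViolated:
        # In a real translator, this is where re-translation without the
        # assumption would happen; here we just run the safe version.
        return safe_load(memory, addr, dirty_addrs)

mem = {0x10: 42}
print(load(mem, 0x10, dirty_addrs=set()))   # fast path: 42
print(load(mem, 0x10, dirty_addrs={0x10}))  # falls back, still 42
```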

good observation, and good question. while you're correct that Crusoe and IA64 are the only two commercial VLIW architectures (that i've seen, anyway), the concept is not new. VLIW's been around for many years, and has been tried by lots of chip manufacturers in research. i believe some non-mainstream chip manufacturers (like pre-split AT&T) have tried it commercially in the past, as well. the results are always the same: poor performance. Intel's been very careful (initially anyway; they may have loosened up a bit) to avoid using the term VLIW in reference to their IA64 chip, for exactly this reason. they talk about EPIC as the design architecture, but EPIC's basically one implementation of VLIW. Intel and their chips' performance will both be further hit by the fact that VLIW - including EPIC - is notoriously hard to write compilers for, particularly ones that perform even reasonably. i've heard very little about AMD's 64-bit architecture, but if they're avoiding the VLIW mess, i'm quite hopeful that they'll blow past Intel's performance there.

of course, none of this really proves the VLIW concept is flawed, but the implementations sure all have been. and it does prove that VLIW - even in its EPIC form - isn't the magic bullet Intel's hoping it is.

The applications I have seen VLIW [arstechnica.com] succeed in are high-bandwidth multimedia applications. Although I don't think it's a mainstream card, a company called Equator makes a video-encoding card that uses VLIW technology. The PDF for the card is here [equator.com]. There are several other manufacturers of high-speed video encoders that use VLIW designs as well.

I'm not sure how the market shakedown is going to work, but we will have to move beyond x86 if we want to see continued performance gains; there are only so many tweaks one can do. Is VLIW the right choice? We'll see... in the meantime I'm sure AMD will enjoy handing out a ripe stomping until the VLIW compilers and developer tools mature.

The TriMedia 32-bit embedded processor cores have served as the computational heart for a series of media processor products. Originally designed in .35-micron technology in 1996,

As regards embedded processors rather than PCs, things like MIPS per watt and MIPS per $ are important. If we compare apples with apples, VLIW doesn't look too bad. If you're allowed to have a chip with a huge noisy fan and a nuclear power station behind it, it's apples and oranges.

What I want to see is a laptop with no hard drive, just one of those solid-state RAM drives mentioned earlier (too lazy to look up the link - don't need the karma). That would draw less power than a hard drive, yes? Anyone got numbers on how much?

I have been researching the construction of PCI daughtercards, which are essentially Single Board Computers, but designed to act as peripherals controllable over the PCI bus.

As a kind of example, suppose the card was assigned a frame buffer address of memory, and reprogrammed to implement OpenGL transformations. Or perhaps load it up with Distributed Net, or a Quake server, or whatever.

Maybe, say, take a PCI ethercard and modify it, adding a Crusoe processor, a ramdisk, and a couple of external connectors. Then the card acts like an ethercard connected directly to the embedded system. What I can't find is any documentation about how to interface the chip without signing up as a Transmeta Developer Associate Member from an Approved Business Partner :-)

It's true that other parts of a portable can draw more power than the CPU: the display is a huge drain. But it's still useful to have a low-drain CPU.

I would love to have a Crusoe laptop that was as small and light as a NEC MobilePro: no moving parts, just a lot of RAM and some flash memory. Put Linux on it instead of Windows CE. Put in a lithium-ion battery. Give it a PC card slot so we can put in a 5 GB hard drive card if we want. It would rock. Sure, the display would suck more power than the Crusoe, but why make the situation worse by going with some other CPU?

I want my next laptop to have no big honking LCD, just an LED-based HUD. I don't mind ugly wires because I'm not looking for wearable; I just want to save the biggest power hog (not to mention space, weight, and $) in the entire laptop. (Imagine: without a screen, and with some clever keyboard design, the whole notebook could fold in half, making a much more carryable object.) I know that micro-LED arrays, on a single chip and suitable for a HUD, are much easier to build than a full-sized screen, and I think they're already on the market. Are my tastes just too unusual for such a device to make it to market from a reliable manufacturer? Do people really need the ability for two pairs of eyes to share a monitor so badly that they can't wait 'til they find an old CRT to plug into?

goes without saying: such a device could start to really benefit from lower-power processors.

about this whole Transmeta thing is the level of speculation and the lack of clarity.

Talk all the shit you want about Intel, but I can tell you that I'm working on a board right now that uses a Mobile Celeron 400A: http://developer.intel.com/design/mobile/datashts/28365403.pdf [intel.com]. The datasheet says 10.1 W max thermal power. Well, we never _ever_ get that high. Also, the newer 500 MHz ultra-low-power part is 8 W max, 5 W under more normal conditions.

The thing is that TM has _never_ published such figures (quickly: what's the max wattage a TM CPU can draw?), because supposedly all we need to know is the power required to decode a DVD. Well, these days that's handled largely by the VGA controller, isn't it?

What surprises me even more is that Torvalds, of all people, should know that using the simple HLT instruction in the idle thread makes any Intel (or AMD) CPU draw a lot less power.

Even on paper I don't see the advantage of the TM CPUs. And I really hoped they would have one, believe me...

Is a Crusoe-based home server. Not because of the power consumption, but for the reduced fan noise. A home file server/ipmasq box needs very little processor power. Crusoe could be an important step toward a modern silent system at a reasonable price. Right now I have a Pentium-60 without a fan doing the job, but a little more speed would be nice.

The mobile Celeron 400A one of the above posters mentioned doesn't need a fan either. There are a lot of embedded chips that won't require a fan and will give you more performance than your Pentium-60. Crusoe would likely do a good job for you, but the system is likely to be more expensive than other commonly used embedded chips. The real problem I see is that it's hard to make a reasonable decision because Transmeta won't publish a full set of specs. It goes back to what I've posted before: if a company is that reluctant to give you the information you need to make an informed decision, their product isn't worth considering.