
But the leakage current problems have been increasing with process shrinks (not just at Intel, but also at IBM and AMD). So they can use even smaller lithography. Great. Will the leakage current and associated heat suck even worse than Prescott?

My very basic understanding of the relationship is this: it takes less power to make a smaller transistor switch states, but as you move wires closer together you start to get capacitive leakage and inductive coupling between the wires. Up until a few years ago, the former effect was significantly larger than the latter, but in recent years they have become closer in magnitude. I like to think of semiconductors (and most electrical things) in terms of fluid flow (not ideal, but you get the idea).
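A rough first-order way to put numbers behind that intuition (a standard textbook approximation, my addition rather than anything from the parent post): dynamic power comes from switching, leakage power is there whether anything switches or not.

```latex
% First-order CMOS power model (back-of-the-envelope):
P_{\text{dynamic}} \approx \alpha \, C_{\text{switched}} \, V_{dd}^{2} \, f
\qquad
P_{\text{leakage}} \approx V_{dd} \, I_{\text{leak}}
\qquad
P_{\text{total}} = P_{\text{dynamic}} + P_{\text{leakage}}
```

As feature sizes shrink, C and Vdd drop (helping the first term), but thinner gate oxides and lower threshold voltages push I_leak up, which is why the second term has been catching up.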

I read somewhere today that Intel engineers have developed a new compound to use for the insulating layer on the gates, to replace SiO2. This was said to reduce the leakage currents and allow finer lithography. IIRC the article said they were planning to start using it for 55 nm lithography.

> But the leakage current problems have been increasing with __process shrinks__ [my emphasis] (not just at Intel, but also at IBM and AMD).

Not really true. Leakage current doesn't increase significantly with just a process shrink; rather, it tends to be associated with process shrinks because one of the main reasons for a process shrink is to rev the clock rate up. In this case there is little reason to rev the clock rate on an 802.11a/b/g chip that is processing signals at pre-defined frequencies.

Thanks for the interesting responses, folks. I feel I've learned a lot. Perhaps I didn't RTFA well enough, but I was under the impression that these were two separate news items: one about wifi chipsets, and another about a new lithography technique that Intel would be using ubiquitously, including for future CPUs.

I definitely agree about the power savings from the process shrinks (thanks for the correction!); we saw those in the Coppermine->Tualatin shrinks and the Willamette->Northwood shrinks.

That's actually a funny story, with more point than you realize. A while ago, a number of groups spent a lot of money on x-ray lithography, without any commercial success. Because of this x-ray lithography has a bad reputation. So, to distance the technique from x-ray lithography, and to more closely align it with the very successful optical lithography, they changed the name to EUV lithography from projection x-ray lithography.

This also points out an interesting cultural difference between Americans an

Yes, 13.4 nm (~100 eV) is far from hard X-rays (> 30 keV), but who said anything about hard X-rays? X-ray lithography was generally done with wavelengths near 1 nm, so it's hard to say whether 13.4 nm is closer to 1 nm or to 193 nm. All three techniques are very different.

In any case, look at some of the first work done on the technique by Bell Labs and others in the late '80s and early '90s. Those papers refer to the technique as soft x-ray projection lithography.
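For reference, the usual wavelength-to-photon-energy conversion (my own addition for context) puts these three regimes side by side:

```latex
E[\mathrm{eV}] \approx \frac{1239.84}{\lambda[\mathrm{nm}]}
\quad\Rightarrow\quad
193\,\mathrm{nm} \approx 6.4\,\mathrm{eV},\qquad
13.4\,\mathrm{nm} \approx 93\,\mathrm{eV},\qquad
1\,\mathrm{nm} \approx 1.24\,\mathrm{keV}
```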

Intel® Extended Memory 64 Technology is one of a number of innovations being added to Intel's IA-32 Server/Workstation platforms in 2004. It represents a natural addition to Intel's IA-32 architecture, allowing platforms to access larger amounts of memory. Processors with Intel® EM64T will support 64-bit extended operating systems from Microsoft, Red Hat and SuSE. Processors running in legacy* mode remain fully compatible.

That doesn't really answer the question. It clarifies that the chip runs both memory modes, but what about the actual processing portion of the core? RISC processors (the ones these are supposedly going to cut into) don't just have 64-bit memory access; they also have significantly more powerful processing mechanics. As I understand it from the marketing speak (which is confusing at best, hence the question), this is basically the same processing core as the standard x86, with some tweaks to allow it to address more memory.

The AMD64 running a 64-bit compiled binary is a 64-bit CPU. Addresses are 64-bit values (although only 40 bits are "honored" as far as the hardware goes right now). sizeof(long) == 8, etc. Doubles are still 64-bit; I don't think they do long double or anything like that (no 128-bit floats). The Intel 64-bit extension stuff is supposed to be binary ISA compatible with the AMD64 x86-64 ISA.

As far as other stuff, summaries of the AMD64 programming model can be found all over. There's probably one on ArsTechnica.
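A quick way to see what the sizeof(long) == 8 point looks like in practice (a minimal sketch assuming a GCC/Clang LP64 toolchain on x86-64 Linux, not taken from any of the articles):

```c
#include <stdio.h>

int main(void)
{
    /* On an LP64 target such as x86-64 Linux:
     * long and pointers are 8 bytes, int stays 4, double stays 8. */
    printf("sizeof(int)    = %zu\n", sizeof(int));     /* 4 */
    printf("sizeof(long)   = %zu\n", sizeof(long));    /* 8 */
    printf("sizeof(void *) = %zu\n", sizeof(void *));  /* 8 */
    printf("sizeof(double) = %zu\n", sizeof(double));  /* 8, no 128-bit floats */
    return 0;
}
```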

Actually, EM64T is an incomplete clone of x86-64 by most reports, and doesn't appear to be binary compatible with x86-64. The x86-64 Linux distros are having to hack in support for these CPUs, which essentially still do paging in software.

On top of that, all the ALUs on the CPU are still 32-bit, and it does not support the NX bit. There's a reason why Intel is only touting it as an "extended memory" architecture. It's an incomplete hack on top of the existing 32-bit chips that seems like nothing more th

Heh. Intel has been very careful about choosing its words. They're doing press releases that say things like "Xeon 64 will run software currently being developed for the AMD Opteron with very little modification." They categorically refuse to call their new chips "AMD64 compatible" even though that's exactly what they are. They licensed the AMD64 instruction set and renamed it.

Uh yeah - but do realize that the Intel CPU will naturally be of a completely different design. It's not like they are rebadging AMD chips. I.e., you will feel like a winner if the Intel design is better than the Athlon 64/Opteron; otherwise you'll just feel like a loser stuck with overpriced, sucky hardware.

Not really. 802.11a operates in the 5 GHz band, and can thus coexist with 802.11b without suffering degradation, unlike 802.11g, which does degrade when .11b devices are present -- if nothing else because the .11b devices hog the channel for 5 times as long.

Thus, heavy-use WLANs like corporate installations are frequently A+G, and a lot of current wlan client chips are also A+G.

In the current wlan market, 802.11a is the premium solution; unfortunately both in terms of cost and performance.
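The "5 times as long" figure falls straight out of the raw data rates; here's a back-of-the-envelope check (ignoring preambles, ACKs and protection overhead, so only a rough approximation):

```c
#include <stdio.h>

int main(void)
{
    /* Rough airtime for a 1500-byte frame payload at each PHY rate. */
    const double bits = 1500.0 * 8.0;
    double t_11b = bits / 11e6;   /* 802.11b at 11 Mbit/s */
    double t_11g = bits / 54e6;   /* 802.11g at 54 Mbit/s */

    printf("11b: %.2f ms, 11g: %.2f ms, ratio ~%.1fx\n",
           t_11b * 1e3, t_11g * 1e3, t_11b / t_11g);
    return 0;
}
```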

It's worth noting that 802.11a has a significantly shorter theoretical maximum range when compared to the 2.4GHz (802.11b/g) solutions.

That is true but it is also far less crowded, with five or eight available channels in most countries. With the recent FCC posting, "a" is considered an indoor technology. I get pretty good range with "b" - something pretty close to the claimed 1000ft with the equipment I have, but that is with no obstructions. I really don't need that sort of range. The range problems

Too bad this type of wireless system is not allowed to be used in better parts of the world, due to the regulation of radio frequencies.
Why not use this adaptive frequency model in CPUs? Let the clock speed scale with the load on the processor! (I mean scale in 30 MHz increments or something, not step between two speeds like it does now on some CPUs!)

Just FYI, the operating frequency of the radio has *NOTHING* to do with its speed. Whatever frequency the radio operates on, it uses a fixed amount of frequency width (which is on the order of 30 MHz, not gigahertz). So, if I am on 10 GHz, it means that I am allocating frequencies between (10 GHz - 15 MHz) and (10 GHz + 15 MHz). It doesn't mean that I have a CPU running at 10 GHz. The operating speed of these radios is based on reception power, which is generally inversely (and exponentially) proportional to the distance.
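To put the parent's made-up numbers in one place (just an illustrative sketch, not actual 802.11 channelization):

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical radio: 10 GHz carrier, 30 MHz channel width. */
    const double carrier_hz   = 10e9;
    const double bandwidth_hz = 30e6;

    double lower = carrier_hz - bandwidth_hz / 2.0;
    double upper = carrier_hz + bandwidth_hz / 2.0;

    /* The occupied band is tiny compared to the carrier frequency;
     * it says nothing about how fast any CPU behind the radio runs. */
    printf("occupied band: %.0f Hz .. %.0f Hz (%.1f MHz wide)\n",
           lower, upper, (upper - lower) / 1e6);
    return 0;
}
```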

It seems like people are misunderstanding what you mean, but you talk about two disparate things in one paragraph.

10GHz is still pretty expensive to deal with for consumer commodity parts for wireless radio. 5GHz is a hard enough sell as it is.

I'm not sure why CPUs don't have a larger range of speeds for dynamic clocking. There may be little power-savings benefit to clocking below the minimum speed, and not much benefit to having intermediate speeds if the system can switch between the two frequencies quickly.

AFAIK, 'Harvard architecture' CPUs like the ancient 68040 in my Quadra could be clocked ALL the way down, even stopped if need be. When I heard that Intel was introducing 'SpeedStep' so their CPUs could drop from 500 to 400 MHz (or whatever) to save some juice, I couldn't help but think that they missed the boat entirely. You could make very cool, very quiet laptops if you had CPUs that would just clock themselves based on a signal from the memory controller signalling how busy the bus was (bus saturation exc
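Purely as a thought experiment (made-up step sizes and limits, no real hardware interface), the kind of load-following clock policy being described might look something like this:

```c
#include <stdio.h>

/* Hypothetical dynamic-clocking policy: scale the clock with load in
 * 30 MHz steps between a floor and a ceiling (all values invented). */
#define STEP_MHZ 30
#define MIN_MHZ  150
#define MAX_MHZ  1500

static int pick_clock_mhz(double utilization)   /* 0.0 .. 1.0 */
{
    double target = MIN_MHZ + utilization * (MAX_MHZ - MIN_MHZ);
    int stepped = (int)(target / STEP_MHZ + 0.5) * STEP_MHZ;  /* snap to step */

    if (stepped < MIN_MHZ) stepped = MIN_MHZ;
    if (stepped > MAX_MHZ) stepped = MAX_MHZ;
    return stepped;
}

int main(void)
{
    double samples[] = { 0.05, 0.30, 0.75, 1.00 };  /* fake bus-utilization readings */
    for (int i = 0; i < 4; i++)
        printf("utilization %.2f -> %d MHz\n", samples[i], pick_clock_mhz(samples[i]));
    return 0;
}
```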

That has nothing to do with harvard architecture, and your 68040 wasn't a harvard arch.

Harvard architecture [wikipedia.org] refers to separating instruction and data memories, unlike the von Neumann architectures you find most places. Harvard architectures are still popular in many microcontroller families, though.

Whether parts are certified for static operation (e.g. clock frequency down to 0 Hz) is a completely different matter.

> That has nothing to do with harvard architecture, and your 68040 wasn't a harvard arch.

That is not 100% accurate. It is actually common to designate CPUs as Harvard architecture when they use separate data and code caches. For example, it is impossible on the 68040 to modify code that resides in the code cache.

This is semantics. Harvard architecture implies separate paths for data and instructions. The path into the CPU for the instructions is the same as the path into the CPU for data.

On the 68040, yes, there is a separate I-cache that isn't coherent with memory writes. But it is quite possible to use instructions that operate on data memory to modify code -- as long as you're sure to invalidate the I-cache before the code runs.

Yes, I admit some people use the term harvard architecture to refer to processor a
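The same invalidate-before-execute dance shows up on modern toolchains too; here's a minimal sketch using GCC/Clang's __builtin___clear_cache, assuming x86-64 Linux (the 68040's own cache-control instructions are a separate, privileged mechanism and aren't shown here):

```c
/* Minimal self-modifying-code sketch; assumes x86-64 Linux.
 * Compile with: cc smc.c */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Machine code for: mov eax, 42 ; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Get a writable + executable page and copy the code in as data. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);

    /* The step being discussed: make sure the instruction cache can't
     * serve stale bytes. On x86 this is mostly a no-op (the I-cache
     * snoops writes), but it's the portable idiom. */
    __builtin___clear_cache((char *)buf, (char *)buf + sizeof code);

    int (*fn)(void) = (int (*)(void))buf;
    printf("generated code returned %d\n", fn());

    munmap(buf, 4096);
    return 0;
}
```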

Intel's new method of throttling is to take fewer instructions off the queue per unit of time. The CPU does less work, so fewer gates switch, and power dissipated as heat is reduced. Why change clock rates when you can just process fewer instructions?

Instead of posting anonymously to defend a company with billions of dollars that refuses to write the drivers, why don't you divert your energies to signing up for a Slashdot account?

While you're at it, maybe you should think about how retarded that statement you just made was, and rethink it. An acceptable retort would be "Linux sucks, I personally hate it, and Intel is doing the right thing by ignoring it. If you feel differently, write it yourself!" -- which is what your statement came off as to begin with.

Uhm. I think one could expect a vendor to provide drivers themselves. You actually have to pay for their products, remember? You give them money, you make them rich. I really don't feel like giving money to a company just to find out that I'm also paying them to limit my choice.

They may not be using DSPs as much as FPGAs/ASICs - a great deal of the signal processing for that sort of thing is easier done as parallel blocks of hardware than software.

It's an 802.11a chip. While .11b used DSSS (which is a time-domain solution and goes well with dedicated logic), .11a and .11g use OFDM (which is based on FFTs and thus is much easier to do in a DSP than with dedicated logic).

(And just now I have a real need to get hold of an OFDM testbench for prototyping some related things in a nearby
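As a toy illustration of why OFDM maps so naturally onto an FFT block (a naive inverse DFT with made-up symbols, nothing to do with the actual chip or any real testbench): each subcarrier carries one complex symbol, and the time-domain samples are just the inverse transform of those symbols.

```c
/* Toy OFDM modulator: N subcarriers, BPSK symbols, naive inverse DFT.
 * Real implementations use an FFT and add a cyclic prefix, pilots, etc.
 * Compile with: cc ofdm.c -lm */
#include <stdio.h>
#include <math.h>
#include <complex.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8   /* subcarriers (802.11a actually uses a 64-point FFT) */

int main(void)
{
    /* One BPSK symbol (+1 or -1) per subcarrier. */
    double complex X[N] = { 1, -1, 1, 1, -1, 1, -1, -1 };
    double complex x[N];

    /* Inverse DFT: x[n] = (1/N) * sum_k X[k] * exp(+j*2*pi*k*n/N). */
    for (int n = 0; n < N; n++) {
        x[n] = 0;
        for (int k = 0; k < N; k++)
            x[n] += X[k] * cexp(I * 2.0 * M_PI * k * n / N);
        x[n] /= N;
    }

    for (int n = 0; n < N; n++)
        printf("x[%d] = %+.3f %+.3fj\n", n, creal(x[n]), cimag(x[n]));
    return 0;
}
```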

You can also implement FFTs in hardware, or use a different approach -- a more "analog-y" method like mix & filter, which allows you to run a separate downconverter for each carrier.

As for the HW - what kind of development are you doing? What's your price range for a devel board? Are you doing this as a hobbyist or professionally? If you are looking in the professional range you could get a Pentec board or an Aeroflex PXI board.

There has been a great deal of discussion regarding the availability of the Lindenhurst chipset [theinquirer.net], and WIN Enterprises [win-ent.com] is pleased to offer developers the latest Xeon technology for their embedded controllers and platforms.
WIN Enterprises, Inc., a leading designer and manufacturer of customized embedded controllers and x86-based electronic products for OEMs, has announced the availability of the latest Intel 64-bit Xeon core module for developers of high-performance embedded platforms - Nocona / Lindenhurst [win-ent.com].
WIN Enterprises is pleased to offer leading-edge, long-life solutions based on Nocona / Lindenhurst for everything from embedded single board computers to platform systems. For OEMs looking to incorporate the newest Xeon technology, WIN Enterprises has developed a proven core module for Nocona / Lindenhurst to create custom embedded controllers.
"We have spent an extensive amount of time debugging and perfecting this specific core module," said Chiman Patel, WIN Enterprises' CEO and CTO. "This will allow our OEM customers to bring their application-specific Nocona / Lindenhurst embedded products to market quickly and cost-effectively."
For more information, please contact WIN Enterprises at 978-688-2000 or sales@win-ent.com. Visit www.win-ent.com to learn more about WIN Enterprises' embedded design and manufacturing services.

Well I only hope this new wireless performs better than Centrino. It's not like integrating WiFi into a chipset is rocket science as all chipset makers are at it now. Oh and this time, some Linux drivers right off the bat, please.

At the moment Centrino pairs an excellent low-power, good-performing processor (the Pentium M) with one of the poorest-performing Wi-Fi solutions you can get. But look at how they've marketed it on its poorest facet: with Centrino you can read your email on top of Everest, browse

Intel's chip obviously is a completely different design - i.e. it works differently internally. They copied the _instruction set_ to make the new 64-bit instructions compatible with the AMD chips' - seeing as AMD had got to market first, this was the logical thing to do. However, it's worth noting that Intel already had their own 64-bit chip designed beforehand - they just hadn't thought the market was there for a 64-bit chip yet (thus letting AMD beat them to the punch).

I believe you are mistaken. It is true that "Intel already had their own 64-bit chip designed beforehand," and in fact it was actually available as a product. These 64-bit Itanium chips have a completely different instruction set--they are not x86 chips. Intel's plan was to move the world from x86 to Itanium, so it is incorrect to say "Intel would have eventually released the same chip" without AMD breathing down its neck. The success of AMD64 forced Intel's hand.

Right. I was just wanting to make clear what we mean when we say 'clone', as it wasn't clear originally whether you just meant the instruction set or more.

The Itanium is another story. I was, however, referring to some of the P4s, which Intel has for a while been selling with 64-bit capabilities present but simply disabled (as Intel didn't see a market for them, and obviously from a marketing perspective wanted to hold back their introduction until it was something that could be sold for extra $$$). Here's what a qui

The chip can switch between different networks and frequencies; it is capable of tuning and tweaking itself.

I don't see how this has anything to do with the 90 nm process. We've had the technology to do this for quite a while. Just have the right frequency divider on the VFO for demod and you have the frequency switching. Run it over the bands sequentially and you've got autodetect. Program one or two algorithms into the firmware and you have all the tweaking you'd ever need. Is this just some other c

One and the same. The "CMOS" in your system is a memory chip made using a CMOS process. It was called CMOS because the first systems to have it had mostly NMOS chips, and a CMOS chip was the only type of chip that was low enough power to be run by a battery. Here are the basic chip technologies:

Well actually Netcraft doesn't confirm it, and Intel may not be dying, but they are going downhill. Does anyone else find these releases underwhelming in light of the recent story about how AMD is pushing ahead while Intel stagnates and delays the releases of 4 GHz and 64-bit technology?

Quite simply, Intel took shortcuts to get temporary advantages, and it's coming back to haunt them. The GHz myth is being dispelled and Intel is falling behind in the technologies that really matter. Today's new releases are only stopgap measures - a slight bump in the Xeon and some WLAN card that's only going to be a minor player in an area Intel has not been focusing heavily on.

What is Intel focusing on? Branding. Marketing. Getting their stickers on everything and being known to the general public. Intel? "oohh they make computers!" AMD? "Durr is that those missiles in Iraq?" That may be why Intel still has a commanding lead in the processor market, but it will only take them so far. As word of mouth carries AMD to dominance in the hobbyist market, high end buyers will follow the hobbyists' lead. Enterprises will flock to 64 bit technology now that it is maturing on AMD, and still unavailable on Intel. Once AMD has taken control of the high-end market, the midrange will follow along like lemmings. All they know is, they want what the big boys have. And the big boys want AMD to go along with their fancy cars [shawnandcolleen.com] and fast women [spilth.org].

This downward spiral will continue until Intel loses its position as the king of processors and becomes just another hardware company. Nobody will care about what your sticker says is inside, and consumers will win as competition and diversity increase.

A few years out, Netcraft will finally deploy their stunning new technology that can detect your processor type, even through NAT. At that point the truth will become stark and clear, slapping us all in the face with the blinding realization that... Intel IS DYING! You heard it here first, folks: The future belongs to BSD on AMD. Beowulf clusters of BSD on AMD. Wintel is Dying. Wintel is a decrepit artifact of the past, to be fondly remembered in museums along with the 8 inch floppy and "turbo" buttons.

p.s. Netcraft also confirms that the baby-shit BEIGE OF THE END TIMES is spreading like a cancer. Oh god its so horrible, what kind of sadistic bastard is behind this.

How on earth is this drivel insightful???? (Yeah, OK, I know it was supposed to be humorous, so why the fuck mod it insightful? Oh yeah, it knocks Wintel... nuff sed)

Netcraft? What the fuck? Every domain must have its own webserver, every webserver must report truthfully what OS and hardware it is running, and all of a sudden this will account for the vast majority of CPUs sold?????

Ooh, Intel is dying because of the MHz myth and the scale of the chip features in nanometres... yeah, and AMD of course are not.

Quick summary: All of these installs were "out of the box" installs. I used to compile my kernel almost daily back in the 90s because it was fun and neat. I don't have the time for that anymore and much prefer stability and usability over sheer performance now.

Fedora Core 2: Installation went OK. I have a GeForce2 MX200 video card (maybe MX400) and there were driver problems. Certain screen savers would lock the entire machine, requiring a reset button hit. Overall, it was reasonable, but I didn't care to

Intel is laying on the marketing because it works. Microsoft hasn't released x86-64 Windows XP, and why not? There are obviously drivers for certain pieces of hardware, and we'd see lots more if the damn OS were already out. Plenty of people would be willing to design a system around a version of x86-64 which supported only ATI and nVidia graphics cards, only VIA and nVidia chipsets, only Adaptec SCSI cards and 3Com network cards, et cetera. I can only conclude that it is because Intel and Microsoft are in bed

AMD and Intel have been trading blows for years now.
How was AMD's product line doing for the six months before the A64? It was totally smoked by Pentium 4s across the board.
Just because Intel has some setbacks and isn't the fastest CPU for Doom 3 anymore doesn't mean that they're spiraling into oblivion.

Intel's got a design team in Israel that still knows how to make decent CPUs. They've designed the Pentium M. Intel's had to do a 180 from their old "GHz is all that matters" strategy to a model-number strategy (which they've botched - larger cache and LV/ULV are too heavily weighted, and you can't compare two chips even if they're in the same series, like a Celeron M and a Celeron D).