Posted
by
timothy
on Sunday August 29, 2010 @05:04PM
from the whatta-boggin dept.

sylverboss writes "Intel Corp., the world's largest chipmaker, is close to an agreement to buy Infineon Technologies AG's wireless business, three people with direct knowledge of the discussions said. When it comes to desktop, laptop and server chips, Intel's pretty much got a lock on the market but everyone can see the writing on the wall: mobile chips and architectures are the future of computing thanks to the popularity of smartphones, but Intel doesn't have anything to offer in that regard. Don't know Infineon? You should: they are the guys who have supplied Apple with their iPhone baseband chips since 2007."

They make a lot of chips, including RAM, and they have a big market share in memory... just look at their competitors: TI, Broadcom, STMicroelectronics, Marvell, Freescale, NXP, Renesas, International Rectifier, Fairchild Semiconductor, Semikron, Dynex Semiconductor.
Yup, definitely an important company!

I think it means two things:
1) They are a big enough player that they can supply Apple with key components for a widely distributed product.
2) If they sell to Apple, they probably sell something to other large handset makers too.

"they are the guys who have supplied Apple with their iPhone baseband chips since 2007."

Does that really mean they're important, though?

Not in isolation, no. It is, OTOH, an example of a very high-profile contract, which suggests that the rest of their business is stable enough that Apple would be willing to trust doing business with them without having to fear them suddenly going bankrupt and cutting off the supply of a key part. Given that the company is important enough that it would be impossible to list every device that uses an Infineon chip in TFS, the iPhone was probably the most effective example to use.

The IP likely belongs to Apple and wouldn't need third-party escrow. On the other hand, even with the IP, a lack of supply is what killed the PowerPC in the first place. The same thing happening to the now higher-profile iPhone would be a disaster for Apple.

IP isn't the only factor when it comes to chips. It's not like you can just email some zipped-up files off to a fab and get chips back in a few weeks (like you usually can with PCBs); there's a lot more to it than that. So even if you have all the IP, masks, etc., there's still a significant and costly ramp-up time at whatever fab you choose to supply you with the chips.

It's not like you can just email some zipped-up files off to a fab and get chips back in a few weeks

Actually, you can, for lots as small as a few hundred. It's very expensive doing it that way, but there are quite a few companies that offer precisely that service. If you talk to ARM, they have agreements in place with a few fab companies that let you license a core from ARM, add your own on-die peripherals to the design, and get the whole SoC produced by one of ARM's partners.

A lot of Intel's advantage comes from the fact that they invest heavily in research related to manufacturing techniques, so their process technology stays ahead of everyone else's.

Infineon's Trusted Platform Module would be of far more interest to Intel, as Infineon supplies TPMs to a lot of OEMs who shift more units per quarter than Apple has in the last 3 years. Not to mention the interest Intel has had in pushing Trusted Computing. I'd say expect TPM on die in the future. Possibly an Atom-based SoC, but even Intel has already figured out how unlikely that is to take off compared to ARM SoCs.

I don't think TPM-on-die is something that Intel would've needed to shell out $2B for. It's a straightforward device designed to be implementable with as little cost as possible. Intel has done far more complex things in house, and adding TPM on-die would probably take them a trivial amount of time if they cared to do so. Putting that aside, I don't see it as a huge differentiator in the market. First, because even if people by and large explicitly cared, AMD could easily do it too, and even if they d

What makes them important (to Intel) is that they are one of the world's largest suppliers of Trusted Platform Modules (TPMs). Which is also why I distrust any radio made by them; who better than a Trusted (treacherous) Computing company to build in a back door for the manufacturer?

Yes and no. It is a major win for a few people on Intel's management team, but I doubt this even pays to keep the lights on, let alone keeps investors happy. These chips are usually the domain of Korean companies like Hynix and a few other small-fry Japanese semis. Intel is DESPERATE to be something other than a high-volume, low-profit-margin CPU company. So these chips cut the mustard on THAT score, sure enough; so I guess you could say 'Yes' to the small business win, but a really big NO to it making any real difference to the bottom line.

I hate when a cash cow company cannot accept its fate as a cash cow (intel, microsoft, google, etc). Intel has very large margins on its sales (66% in the last quarter if I recall correctly). So it is a hugely profitable slow (or no) growth company. Why is that bad? Intel could have paid shareholders a huge special dividend rather than use up 10 of its 14-15 billion in short term cash on deals of questionable merit.

They have a great semiconductor business. They should be the best little buggy-whip maker they can be.

The deal with Rambus was a purely business one. Rambus paid them a good deal of money to use Rambus technology. Also, at the time, it really WAS faster. It wasn't enough faster to be worth the money, and of course it scaled like shit, but a Rambus P4 was quick. So Intel made the decision to use RDRAM. However, it turned out to be a bad decision, as DDR SDRAM quickly eclipsed it speed-wise, which helped AMD with the edge they had at the time. So, when the deal was up, Intel chose not to continue using RDRAM, and still doesn't to this day. Rambus does make new RAM products; XDR RAM is their current thing, and the PS3 does use it. However, Intel decided it was in their best interests not to.

Companies generally aren't buddies or anything; they just have interests that may match up. Intel thought RDRAM was the way to go, especially since they made a lot of no-cost money on the deal. I mean, $100 million is nothing to sneeze at. If someone is willing to pay you that to use their technology, and their technology looks like it works, then great. However, that doesn't mean it was "BFF for life" or whatever. It didn't work out, the arrangement ended, and that is that.

Actually, it wasn't fast at all, depending on your definition of "fast". Rambus RAM had huge bandwidth, but terrible latency. So it was great for things like streaming media, and terrible for just about everything else. At the time, all Intel could think about was bandwidth, and they didn't give a second thought to latency. They basically thought everyone was going to start using their computers for watching movies, video editing, and little else. So they designed the P4 with a horribly long pipeline that meant any context switching resulted in terrible performance, and they used Rambus RAM which was perfectly matched to their pipeline and memory channel bandwidth. Worked great if you were doing video editing, but most other applications had mediocre performance, with sheer clockspeed trying to compensate for the huge penalty of poor latency and pipeline flushes. In the end, people were stuck with fancy new computers with horrifically expensive RAM which weren't any faster than their old PIIIs for most applications, yet consumed 3 times as much power, making their offices very warm.

The whole thing was just a bad idea. AMD pretty quickly realized what was going on, avoided Rambus RAM like the plague, and concentrated on better performance at lower clockspeeds. AMD made huge inroads against Intel during this time. After user rebellion (including people building their own motherboards using Intel's notebook CPUs, which had a different architecture with much lower power consumption and better performance), Intel finally dumped the P4 "Netburst" architecture and moved to "Core". They also dumped their CEO, Craig Barrett, who was responsible for this disaster. Since then, they've been greatly outperforming AMD, probably due to their larger size and their strong fab technology (Intel makes all its own chips; AMD, I believe, outsources theirs).

I'm not saying it was a good design over all, I'm saying that when it hit the market with the P4s, it was the fastest thing out there. Its desktop performance was unmatched. It just cost a lot and didn't scale.

High latency isn't necessarily a problem, good caching can get around a lot of that, and of course increasing the speed means that the cycles of latency matters less.
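The caching point can be made concrete with the standard average-memory-access-time formula. This is a sketch with illustrative numbers, not measured RDRAM/SDRAM figures:

```python
# AMAT = hit_time + miss_rate * miss_penalty.
# A high-latency memory behind an effective cache can still deliver a lower
# average access time than a lower-latency memory that gets missed to more
# often. All numbers below are illustrative.

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time in nanoseconds."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# 60 ns memory, but only 2% of accesses miss the cache:
high_latency_good_cache = amat(1.0, 0.02, 60.0)   # ~2.2 ns average

# 40 ns memory, but 5% of accesses miss:
low_latency_worse_cache = amat(1.0, 0.05, 40.0)   # ~3.0 ns average
```

With a good enough hit rate, the raw latency of the memory behind the cache is a second-order effect.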

Now, in hindsight, RDRAM was a bad decision on Intel's part, but it was understandable. It was also understandable in terms of their CPU design.

They basically thought everyone was going to start using their computers for watching movies, video editing, and little else. So they designed the P4 with a horribly long pipeline that meant any context switching resulted in terrible performance.

If you don't know much about CPU architecture, please don't make a bunch of random statements about the P4.

First, the pipeline length has minimal impact on the speed of context switches. Context switches are relatively infrequent (compared with the CPU frequency) and relatively slow (typically several hundred cycles at a minimum).

The major downside of pipeline length comes from branch mispredicts. Branch mispredicts hurt you more because you have to flush more wrong instructions. Additionally, the scheduler is less able to parallelize instructions because instructions with data dependencies need to be spaced further out in the pipeline (forwarding doesn't help you unless the result has actually been computed, and in long pipelines there are typically several execution stages). Some of this can be improved with tactics like better branch prediction or multi-threading, but ultimately you give up IPC in a longer pipelined design.
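The mispredict arithmetic above can be sketched with a simple effective-CPI model. The numbers are illustrative, not measured P6/Netburst figures:

```python
# Effective CPI = base CPI + branch_freq * mispredict_rate * flush_penalty,
# where the flush penalty grows with pipeline depth. Illustrative numbers.

def effective_cpi(cpi_base, branch_freq, mispredict_rate, flush_penalty):
    """Cycles per instruction once mispredict flushes are charged."""
    return cpi_base + branch_freq * mispredict_rate * flush_penalty

# Shorter pipeline: roughly a 12-cycle flush on a mispredict.
short_pipe = effective_cpi(1.0, 0.20, 0.05, 12)   # ~1.12 CPI

# Deeper pipeline: roughly a 24-cycle flush for the same branch behavior.
deep_pipe = effective_cpi(1.0, 0.20, 0.05, 24)    # ~1.24 CPI
```

The deeper pipe pays roughly twice the mispredict tax for the same code, which is part of the IPC it hopes to buy back with clock speed.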

Second, the P4 was not designed for "watching movies, video editing, and little else". It was designed to be fast. When Intel was designing the P4, the IPC-bag-of-tricks was starting to run out. The P6 (Pentium Pro, later evolved into the Pentium II/III) already had all the common improvements including multi-level, fast on-chip caches, a fully pipelined design, out-of-order execution, branch prediction, and multi-issue. The bottom line is that Intel realized (like everyone else) that making the chip wider or increasing caches really didn't do much for performance anymore. To keep seeing dramatic improvements in single-threaded performance, we either needed a completely new bag of tricks or we needed much higher clocks. Intel figured that they would make a CPU that (architecturally) could hit very high clocks, which means very deep pipelines to meet timing constraints. Yes, P4 would have lower IPC, but it would more than make it up in clock speed.

For a while, it worked. The P4 was not a huge winner at first, but over time (with Northwood) it managed to out-gun AMD's lineup and become one of the fastest CPUs available. It doesn't matter if the Athlon could retire more instructions per clock; the P4 was clocked dramatically higher.

The problem is that somewhere around Prescott, the process technology ran out of gas. Leakage current became an issue more quickly than Intel had anticipated, thermal issues became problematic, and despite Intel's tricks (sockets that could handle more power, BTX, etc.) it became clear that people just weren't going to put a 400W CPU in their machine.

None of this is really a problem with the P4 architecture. With the right cooling and power, P4 can hit 8GHz. That's higher than any Intel or AMD CPU before or since.

You'll hear people say that P4 was a marketing decision. While I'm sure that the high clocks did benefit marketing, people who know the actual architects will tell you that it had more to do with chasing single-threaded performance than it had to do with marketing.

Some people say that the P4 was optimized for media. While it's true that highly predictable code (e.g. loopy scientific code and media encoding) performs especially well on the P4, compared with the Athlons of the day (before Athlon 64) so did everything else. You can't compare a 1.5GHz Athlon XP to a 1.5GHz P4 and argue that the Athlon is better because it's faster. P4 was specifically designed to make up for its lower IPC with very high clocks.
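The IPC-versus-clock tradeoff is just a product. A quick sketch with hypothetical figures (not benchmark data):

```python
# Single-threaded throughput is roughly IPC * clock frequency.
# Hypothetical figures: a lower-IPC core wins if its clock advantage
# outweighs its IPC deficit.

def throughput_gips(ipc, clock_ghz):
    """Rough billions of instructions retired per second."""
    return ipc * clock_ghz

athlon_like = throughput_gips(1.2, 1.5)   # higher IPC, lower clock
p4_like = throughput_gips(0.8, 2.8)       # lower IPC, much higher clock
# p4_like > athlon_like despite the worse IPC
```

This is why clock-for-clock comparisons between the two designs were beside the point: they were never meant to run at the same clock.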

The whole thing was just a bad idea. AMD pretty quickly realized what was going on, avoided Rambus RAM like the plague, and concentrated on better performance at lower clockspeeds. AMD made huge inroads against Intel during this time.

Intel's Atom chips are low power. They're not good for putting into smartphones?

Are there some Infineon chips now used for only smartphones that will show up in netbooks? Do they run Linux? Do they run x86 instructions? And if not, will Intel sustain a product line that splits its main CPU culture away from x86?

The Z600, due in 4Q10, is the first Atom that is supposed to go into smartphones. What Intel bought are not general-purpose CPUs like Atom; it's the high-frequency chips that talk to the mobile base stations. Think "modem chips for mobiles." The chips running applications on the phones are totally different ones.

Atoms are low power only compared to Intel's other x86 chips. Compared to typical controllers for portable devices, they use too much power.

Why do you trolls persist with this fiction? Is it the Microsoft hate disease rearing its ugly head again? Didn't Linus Torvalds himself mock you zealots? Look, I understand you want to promote non-x86 on portable devices since desktop Windows doesn't run on ARM, but telling lies about Atom isn't going to win your argument for you. Atom is just as power efficient as Snapdragon and OMAP processors. Just like Windows tablets will destroy the iPad, Windows smartphones could easily be made. Microsoft is just biding its time until the opportunity is right, and then you open sores religious nutcakes will be off and running again with your tails between your legs. If MSI, Asus, and the many other netbook makers couldn't succeed with that Linux crap and had to go begging Microsoft for Windows XP, do you think Google and Apple have a chance? Ha ha ha.

I often wonder how is it even legal to run non-Windows on a computer? Even the government gets this and uses Windows and Office. Do you people think you are better or smarter than your own governments? I have a surprise for you... you're not! Seriously, you arrogant, grandiose assholes need to take a look in the mirror and reevaluate your lives.

I'm no fan of the iPad, and wouldn't buy one myself, but what Windows Tablets are even getting a fraction of the sales (or buzz) of the iPad? Windows desktops/laptops/netbooks have 90%+ marketshare, but their Apple counterparts have hardly been destroyed.

I often wonder how is it even legal to run non-Windows on a computer?

Because the government hasn't gotten around to banning the practice? The statutes don't pass themselves.

Intel thought they'd develop an x86 chip with the power requirements of an ARM chip; that's why they sold their ARM division to Marvell. I guess that was a bad move, considering the progress ARM has made over the last few years.

Infineon does not make cell-phone or computer CPUs. They do, however, make smart card controllers and security controllers that contain a CPU.

Smart card controllers do have some rather specific security design constraints, though (the high-end Infineon SLE66 obviously didn't have enough of them to avoid being hacked: look up Christopher Tarnovsky and watch the video).

So there might be an awful lot of phones out there with Infineon CPUs, but they are in the SIM card :)

Here in Toulouse, Intel has just hired back most of Freescale's cellphone/embedded R&D team (recently let go by Freescale along with the rest of its local fab), as well as test equipment, in order to work on system integration of ultra-mobile platforms. Since they are definitely targeting this market in the near future, I can see some logic in buying a baseband processor maker.

Believe it or not, there's a lot more to a cellphone than the processor.

Perhaps more relevantly, there's a lot more to a cellphone processor than the CPU core. You don't buy mobile phone CPUs from ARM; you buy them from a company like TI or Qualcomm, and you get an ARM core taking up around 20-50% of the die area, plus a load of extra controllers. Intel tried licensing Atom to third parties to make SoCs, but I've not seen anyone actually doing it. This means that, to compete with ARM, they need to produce the extra stuff in house, or license it. Intel generally doesn't seem inclined to do either.

IIRC the Atom uses about 5x the power of an OMAP3, and the Atom still needs other hardware that is already integrated into the OMAP. The Atom is simply not a good competitor for smartphones, and isn't even on the map for dumb phones.

Atoms are low power, and despite what the ARM fanboys like to say, they do a lot given their power budget. However, they still use more power than you want for mobile devices. They are targeted at low-end PCs, like netbooks, or perhaps some higher-end embedded applications. ARM chips (most of them, at least) use far less power. With the tiny batteries in cellphones, this matters. Going from a half-watt chip to a 2-watt chip means 4x the power draw. Given that the CPU is one of the three major components that draw power (the LCD and radio being the others), you don't want this.

For example, my BlackBerry has a 4.3 watt-hour battery. That means just what it sounds like: it could provide 4.3 watts for 1 hour. So a CPU that uses 2 watts could drain the battery by itself in a bit over 2 hours, even if the screen was off (which of course it wouldn't be). A half-watt CPU would last more than 8 hours on the same battery. Big difference for a small device.
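The battery arithmetic here is just capacity divided by draw. As a sketch (the 4.3 Wh capacity is from the BlackBerry example above; the rest follows):

```python
# Battery life = capacity (watt-hours) / average draw (watts).

def battery_hours(capacity_wh, draw_watts):
    """Hours until a battery of the given capacity is drained."""
    return capacity_wh / draw_watts

two_watt_cpu = battery_hours(4.3, 2.0)    # 2.15 hours, CPU alone
half_watt_cpu = battery_hours(4.3, 0.5)   # 8.6 hours on the same battery
```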

Atom is doing much better these days on the power budget, though of course you're right that the Atom chips aren't quite there yet. The ARM chips are clocking up and multi-coring up, which increases their power needs, so there's a vector intersect here. At the rate they're going, they will approach ARM soon, maybe next year. The reason I'm not hopeful for any product with Atoms being in my mobile arsenal is quite simple: Intel platforms run Windows. If the platform runs Windows, the manufacturers will be "encouraged" to ship it with Windows.

Low power has different rules. The people at the top thought that they could keep charging a premium for faster chips and that Intel couldn't catch up because of the extra complexity of producing an x86 chip. The same things that made them wrong now work against Intel.

Once you get past the instruction decoder, there isn't much difference between an x86 chip and any other CPU. Remember the marketing claims about the Pentium being "RISC internally"? The front end decodes x86 instructions into micro-ops, and the back end looks much like any other core.

For the very lowest-power devices, x86 probably won't be a good fit. Smartphones? MIDs? Tablets? Hell yes; wait until you see Medfield. Why do you think Intel just bought part of Infineon? Integration. Anywhere you see a high-end ARM, you'll see x86 competitive at worst.

The intersect is going to start happening later this year when Intel releases Moorestown. Moorestown is a ground-up redesigned architecture that will still run x86, and will idle at 23mW and play video at 1.1W [anandtech.com]. It will also give about a 2X performance increase over current ARM designs, although the 1.1W power consumption probably means it will only end up in tablets, MIDs, and PMPs. For the naysayers who keep bashing how wasteful x86 is (which it is) and how it will never compete with ARM, note the power consumption at idle.

The real intersect will happen when Intel releases Medfield [anandtech.com], the next generation of Moorestown, probably in Q4 of 2011.

One caveat to this is that by the time Intel releases Moorestown and Medfield, ARM performance will also have increased, to the point that Moorestown's performance edge may only be a small one (although ARM's power consumption also seems to be increasing). On the other hand, x86 (and Linux) support may be a strong reason for companies to migrate to this platform.
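Why the idle figure matters: a phone CPU is idle most of the time, so average draw is dominated by idle power. A sketch using the Moorestown numbers quoted above (23 mW idle, 1.1 W active); the 5% active duty cycle is an assumption for illustration:

```python
# Average power = idle_power * (1 - duty) + active_power * duty.
# 23 mW / 1.1 W echo the Moorestown claims above; the duty cycle is assumed.

def avg_power_w(idle_w, active_w, active_fraction):
    """Time-weighted average power draw in watts."""
    return idle_w * (1.0 - active_fraction) + active_w * active_fraction

moorestown_like = avg_power_w(0.023, 1.1, 0.05)   # ~0.077 W average
```

At that average draw, even a small phone battery lasts a couple of days of light use; a chip with the same active power but a 10x worse idle figure would not.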

I disagree with your views on Intel/Windows. Firstly, your notion is quite outdated - in the mobile space, Intel is actually pushing Linux very strongly in the form of Moblin, and is really not trying to shove Windows down everyone's throat.

Secondly, and more interestingly, MS itself recognizes how unsuitable Windows is in mobile devices. Take a look at the extent to which MS has redesigned Windows Mobile 7 - I strongly suspect that it will be a viable challenger to Android and Apple in the near future.

Take a look at the extent to which MS has redesigned Windows Mobile 7 - I strongly suspect that it will be a viable challenger to Android and Apple in the near future.

I wouldn't be so sure about that. Being late to market isn't necessarily a killer, but being late to market when it means you have only a small fraction of the apps needed to compete is huge. People already know the iPhone/Droid branding, and people already know that either of those phones will do what they need. When Android started out it was only available on cheap hardware, and that hurt it. Now Windows Mobile 7 is going to go through the same thing. It is going to be a really tough sell competing with two established ecosystems.

I didn't say Intel was pushing Windows. Intel's interests and motives are irrelevant to the question. To pretend you don't know this is to pretend to be ignorant of one of the primary factors controlling the market. If the platform will run Windows - no matter how poorly - it will come with Windows. Therefore the Atom is less interesting as a platform, because with Windows the experience is poor. If Intel wants to get people excited, they should make a low-power mobile CPU that's not Windows compatible.

I didn't phrase that as gracefully as I should have, you're right. I'm sure Intel is as frustrated with the situation as consumers are. They want to sell lots of chips. They want to win in the tablet and mobile space. As he said, they're promoting Moblin and MeeGo and other stuff, which would actually be pretty cool on an Atom chip. If that stuff shipped it would move a grip of units.

But the OS that ships on the platform isn't selected by the processor vendor. It's selected by the OEM. No large OEM is going to ship a Windows-capable platform without Windows.

I met a guy named Bandit once, at an IEEE lecture. He has only one upper limb, and told me he likes to hang out by the ARM table at conferences, telling people nearby, "If I get an ARM here, I'll finally have two!"

2 billion dollars for a bunch of chips and antenna components? I guess we know the true value of an ARM and a leg.

They'll make it back in the next Apple order for chips for the iPhone 5 and iPhone 6. You can be sure of it. (iPhones use Infineon parts). Or maybe just in iPhone 4 sales, too, if Apple could keep it in stock... how many of those have they freaking sold?

When it comes to desktop, laptop and server chips, Intel's pretty much got a lock on the market but everyone can see the writing on the wall: mobile chips and architectures are the future of computing thanks to the popularity of smartphones, but Intel doesn't have anything to offer in that regard.

The server market is a different ball game. Xeons are only in the low-end servers. IBM has the lead in mid- and high-end servers with their P and Z systems; P and Z chips are custom-designed for IBM systems only.