No wonder the world is f&*ked; all people are worried about is money.
What's wrong with old tech if it still works?
Wouldn't it be a lot cooler for Intel to say they have 30-year-old machines still running?
Maybe Intel should start some sort of recycling industry for old Intel processors, boards, etc.

I despise the markets and all those sorts of business people who are only worried about the here and now.

In the real world, post-2005, you don't need to upgrade your processor every two years.
Back in the 1990s you could run Windows 95 on a 486, and maybe Windows 98.
There was a huge performance jump from the 486 to the Pentium, then big jumps to the Pentium II, Pentium III, and Pentium 4...
If you were a PC gamer you needed to upgrade every two or so years; otherwise you couldn't play the newest games.

No one writes software that takes full advantage of CPUs anymore.
And more and more people don't use PCs or laptops anymore.
It's not like you need 40 GHz to get on the internet.
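To be fair to the software side, spreading work across all cores isn't hard these days. A minimal sketch (the prime-counting workload and function names are just my own toy example, not anyone's real code) of fanning CPU-bound work out to every core with Python's standard library:

```python
# Toy example: use every core for a CPU-bound job via a process pool
# (processes rather than threads, so CPython's GIL isn't a bottleneck).
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """CPU-bound toy workload: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_prime_counts(limits):
    # One worker per logical core; each limit is crunched in its own process.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(count_primes, limits))

if __name__ == "__main__":
    # Four independent chunks of work, run in parallel across the cores.
    print(parallel_prime_counts([10_000, 20_000, 30_000, 40_000]))
```

Whether mainstream applications bother to do even this much is, of course, the whole complaint.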

So either program something good that takes advantage of current tech, or die.

I jumped ship to Intel because Socket 478 was used forever.
Then I went to Socket 775 because of the upgrade path from dual-core to quad-core.
Most people I know have 3+ year old desktops because they have no need to upgrade their CPU.

Also, if Intel didn't make 20+ cores per CPU, they might not shoot themselves in the foot.
Most servers nowadays run in an ESX or other hypervisor box, with one or more multicore CPUs hosting heaps of virtual servers.
I saw a whole server room condensed into one Xen box with four Xeon CPUs, which replaced 30+ Xeon CPUs.

If Intel does this, someone from the ARM camp just needs to make a decent board with 6+ SATA ports...

Planned? Hardly.

However, as much as we despise the business markets, Intel is a publicly traded company (I own stock in Intel, in fact) and needs to satisfy those investors. Multicore is ubiquitous (those selfsame ARM CPUs are quad-core); however, ARM CPUs are reduced-instruction-set (RISC), as opposed to complex-instruction-set (CISC), and therefore not as complex to manufacture. ARM Holdings itself has no fabs; it is basically a licensing and development company. Intel, however, is vertically integrated and has fab capacity out the wazoo; that is what it has been leveraging to drive AMD to the point of destruction as a going concern. ARM isn't vulnerable, because it has concentrated entirely where its design is strongest - and where any CISC design, including Intel's, is weakest: low power and ultra-low power (even we have to admit that Atom, which is based on Core/CISC, is not exactly efficient in terms of power compared to ARM). ARM (and RISC) is not coming head-on at CISC, but from underneath. The lack of complexity and the low cost of manufacture play right into ARM's strengths; throw in the poor economy and the needs (or lack thereof) of the computing masses, and it is a Very Bad Harbinger for the future of CISC, and Intel in particular.

There is only one Atom CPU intended to compete with ARM offerings: Medfield and its successors. It is actually just as good as ARM CPUs when it comes to power consumption, and it's an x86-based CISC CPU that offers comparable performance. That was Intel's first attempt; imagine how good the newer versions will be once Intel really starts working at it...

BTW, why is Intel talked up so much in this context, as if AMD doesn't even exist? Even if that's the case, there's still AMD to capitalize on this segment. But all in all, seriously, WTF. Might as well rename every x86 PC to Apple iSomething.

1. Broadwell (the current subject of discussion) in the SoC space (this segment is now occupied by Atom, and eventually Clover Trail, which is now starting to arrive) - these CPUs *already* use BGA packaging primarily; does Atom even SHIP in LGA packaging?

Hence the "higher-powered Atoms" part. And I did mention Celerons as well, didn't I?
In case I'm translating my thoughts poorly, what I'm saying is that Intel might add another segment between the low-power, entry-level Atoms and the low-end Celerons and Pentiums.

2. Haswell-MS/Lynx Point - this will succeed Ivy Bridge and be a *tock*; it may or may not use a new socket. (I'm talking specifically about LGA1155 or a direct successor socket.)
3. Haswell-EX - this will replace Sandy Bridge-E and/or LGA2011; a variant will be a Xeon for the workstation/server space.

No chance I'll cross over to AMD. No way.
I only hope the Extreme platform stays as it is now, so I can upgrade later and stay on Extreme.
If you buy a platform for $1,000 and use it for five years of gaming, that is OK.
An Intel i7 CPU can keep up with one or two graphics cards for 4-5 years.
The i7-860 is OK today and will still be next year.

The bad situation is if you buy the motherboard together with the CPU (as might happen in the future) and you get a wonderful overclocker - an amazing, one-in-1,000 sample...
but some little thing on the motherboard dies and you have to replace everything, and the next CPU is crap...
That's if they even decide to leave overclocking enabled...

AMD's APU marketplace is almost a subniche - it fits in that small (and shrinking) space between ARM-powered tablets/slates and full-fledged portables (Ultrabooks), which are powered largely by the i5 (not the i3). All too often, if the full power of Sandy Bridge/Ivy Bridge isn't needed but portability is, consumers will buy a tablet or slate running Android or Windows RT (a price issue - not even AMD APUs can compete here on price); otherwise, the APU is too underpowered, even compared to the i3, let alone the i5.

Last time I checked, AMD delivered a CPU that does awesomely at multitasking for roughly half the price (FX-8350 vs. 3770K). Is it the fastest single core on the market? Nope, but who cares? For what I do, it's faster than a Phenom II at single-core work and faster than Intel's offerings at multitasking. Sounds like a good performance/value CPU to me.

AMD's APU marketplace is almost a subniche - it fits in that small (and shrinking) space between ARM-powered tablets/slates and full-fledged portables (Ultrabooks), which are powered largely by the i5 (not the i3). All too often, if the full power of Sandy Bridge/Ivy Bridge isn't needed but portability is, consumers will buy a tablet or slate running Android or Windows RT (a price issue - not even AMD APUs can compete here on price); otherwise, the APU is too underpowered, even compared to the i3, let alone the i5.

The A8/A10 compete just fine with the i3 series they are priced against. If you're going to compare them to the i5, you may as well compare them to the i7 - they're not even in the same class.

Last time I checked, AMD delivered a CPU that does awesomely at multitasking for roughly half the price (FX-8350 vs. 3770K). Is it the fastest single core on the market? Nope, but who cares? For what I do, it's faster than a Phenom II at single-core work and faster than Intel's offerings at multitasking. Sounds like a good performance/value CPU to me.

Yeah, I know - it was mostly to pull DDD's strings. He's such an AMD fanboy.

The System-on-a-Chip (SoC) concept is spreading. In the early days of computers, we had specialized discrete cards (sound cards, video cards, etc.) because CPUs weren't powerful enough to do everything expected of a general-purpose machine. CPUs these days are so powerful that those functions are being dumped onto the CPU. There was a time when discrete sound cards were a necessity for gaming because sound processing ate up a significant share of CPU cycles. Nowadays, on-board sound is standard because the load is practically trivial for modern CPUs. Graphics is following a similar path. Of course, this progressive consolidation means fewer choices. Remember the days when motherboards could use both AMD and Intel CPUs? Or when the CPU, GPU, and chipset markets weren't duopolies? Boards will likely still have expansion slots for things like TV tuners, RAID cards, and other less general stuff... at least until all that gets slapped onto the CPU as well.

The big question is whether Skylake will follow the Haswell model or the Broadwell model. "Ticks" bring smaller improvements than "tocks" and are mostly just power reductions from the die shrink. Soldering the die-shrink generation isn't that bad unless you're a chronic upgrader (e.g., going from Sandy to Ivy). Either way, there's always the enthusiast market being absorbed into the server market. With DDR4 RAM debuting on the server variants of Haswell (the mainstream doesn't get it until two years later, with Skylake), maybe it's time to consider picking up a Haswell Xeon or two and brandishing that 20+ MB cache e-peen?

Bah, SB-E will probably be my last upgrade, judging by my family's history of cancer (father, mother, and grandmother). Honestly, I'd rather be dead before I see CPUs soldered to mobos again. Only the worst proprietary prebuilt makers ever did that before; I've thrown out many. God help us all.

I think this wouldn't present a problem for 90% of users if done right, but first Intel needs to reduce its product portfolio to 10 or so CPUs (two per segment). Since most of the chipset would be inside the CPU by then, a single southbridge could be used for every board. And ship everything unlocked. Leave the workstation/server platform open for enthusiasts.