I will admit the industry has had me in a mind-bender lately with all that has been going on. Normally you see consoles get a powerhouse chip, and this time they're getting something that competes with the freakin' Atom yet pulls off performance like my FX-8150/GeForce 770.

Consoles don't get powerhouse processors; they never have. The Xbox 360 had a triple-core PowerPC with crude SMT (think P4-level crude) to give six symmetric threads. The PS3 has just one of those cores plus a bunch of fairly capable SIMD coprocessors (the SPEs) built into the CPU. That's not a lot of good single-thread performance for either last-gen console, but what it is is cool-running, small, and cheap. It's there simply to support the main heavy lifter, which is the GPU.

Fact is both the Xbox 360 and PS3 had a clock of around 3.2 GHz. Jaguar supposedly has around half that at 1.7-1.8 GHz. In 2005, 2.4 GHz was at the top end of AMD's San Diego chips; not sure about Intel. But back then, AMD was "better".

Fact is both the Xbox 360 and PS3 had a clock of around 3.2 GHz. Jaguar supposedly has around half that at 1.7-1.8 GHz.

I know clock speed isn't everything and that there are other considerations, but isn't this going a little backwards?

Well, the Jaguar chips are still going to be faster computationally in single-threaded performance, *and* add cores. The Xbox 360 is the "least dissimilar" to the next-gen consoles, and it goes from a triple-core PPC with hyperthreading to an 8-core design (or two 4-core clusters sharing L2, if you want to be pedantic.) The problem is that the 360 specifically was a huge drag anchor on gaming in general and PC gaming in particular, but not because its CPU was terribly outclassed. It was both RAM-starved and hobbled with a slow GPU. The GPU is a DX9, Shader Model 3.0 part and is among the very earliest uses of unified shaders (although it predates SM4.) It's not optimized for GPGPU, and while it has a small local cache, it has to rely on sharing 512 MB of system RAM with the CPU for texture cache. That's godawful small and slow compared to modern GPUs. If you try to compare it to a desktop-class GPU, it's probably in the ballpark of an X1900-series card, or roughly similar to an nVidia 7800 GTX.

We've come leaps and bounds since then (literally 6-7 GPU generations), but the RAM-starved nature of the console was atrociously bad.

I'm actually fixated on the 8 GB of RAM being the most important single hardware change in both of the consoles. I'm content to wait and see whether doing that 8 GB as all GDDR5, as Sony did, was the better approach, or if all DDR3 (lower bandwidth but also lower latency) is of greater benefit to both of the folks who play the Xbone.

AMD's not going to sell the exact same APU used in the next gen, but we're getting 4-core parts and can compare them to other Socket FM2 chips. It's a modest CPU, to be sure. Don't forget that low power/low heat has to be a priority for these guys... and HOPEFULLY some longevity as well. I don't think anybody has the patience to go through another generation of hardware as flaky as the last one. Having said that, though-- if you have an FX or a Sandy/Ivy/Haswell i5 and a midrange or higher current-gen GPU, you're already ahead of where the consoles will be at launch.

Yeah, unless they make a tablet PS4 or something, I will probably not buy this gen of consoles unless there is an exclusive that cannot be passed up. I'm also hoping this gaming-focused Windows 8 kernel shows up for PC; bypassing some needless layers that eat up CPU time couldn't hurt.

I guess, getting back to the 5 GHz chip, I'm also on wait-and-see. I'm all for it if it brings some power, but I am about 3 months away from when I want to upgrade my CPU. I'd hate to go Haswell, but AMD may leave me no choice. Hopefully Steamroller hits by October; I could wait another month.

Yes it is, especially when people use your own point against you 2 pages later, LOL.

Yeah, but ~3+ GHz CPUs have been mainstream for years now.

And the "~3+ Ghz CPU" in the Xbox360 is pathetic. Why do you think MS specified three of the damn things? Because it was designed for the IST Cell in the PS3, a processor designed largely as a block of SPEs which performed very poorly in general code (which consoles don't do much of, so don't need much CPU).

It's in-order, with slow cache, slow vector performance, slow everything. It has a piss-poor floating-point unit which only works well at single precision (where you'd use the PS3's SPEs instead), and suffers an 8x throughput penalty for using double precision.

The faster x86s at the time-- hell, even the hoary old Pentium D-- would obliterate it. Jaguar is so vastly superior that it's not even funny. Code using packed-double SIMD would pull away so far that the PPE would cry into its breakfast cereal.

While Jaguar may be low-power optimised, it is a full-on out-of-order design with 2x ALU and 2x FPU pipes. From what I've seen of its instruction windows and buffers, it's what you get if you take a K8 (e.g. an Athlon 64), partially upgrade it to the "Stars" feature set, lop off an ALU and an FPU, halve the L1 cache and drop the L3 cache. That still leaves you with a front end able to decode and dispatch two x86/x87/SSEx instructions per clock, functional units able to accept four micro-ops per clock and retire all four, and very strong branch prediction.

At 1.8 GHz, just one Jaguar core has more or less the same performance as two PPEs at 3.2 GHz. Okay, so the Xbox 360 has three PPEs! Surely it's better? Nein. The next-gen consoles are using EIGHT GODDAMNED JAGUAR CORES. Very roughly, one PS4 has enough CPU power to match about five Xbox 360s.
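For what it's worth, here's that arithmetic as a minimal Python sketch, assuming the rule of thumb above (one 1.8 GHz Jaguar core roughly equals two 3.2 GHz PPEs); it's illustrative only, since real scaling depends heavily on the workload:

```python
# Back-of-envelope check of the "five Xbox 360s" claim, using the
# rule of thumb above: one 1.8 GHz Jaguar core ~= two 3.2 GHz PPEs.

PPE_EQUIV_PER_JAGUAR = 2.0   # assumed exchange rate from the post above
JAGUAR_CORES = 8             # PS4 / Xbox One CPU core count
PPE_CORES_X360 = 3           # Xbox 360 "Xenon" core count

ps4_in_ppes = JAGUAR_CORES * PPE_EQUIV_PER_JAGUAR   # 16 PPE equivalents
print(ps4_in_ppes / PPE_CORES_X360)                 # ~5.3 Xbox 360s of CPU
```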

I'm guessing, since this is double the standard E-series 17 W chip, that this thing is around 30 W TDP. I would say that is a pretty impressive feat, and it should keep the red-ring issues in the past. If this thing is only 30 watts, it looks like they went overkill on the cooling.

At 1.8 GHz, just one Jaguar core has more or less the same performance as two PPEs at 3.2 GHz. Okay, so the Xbox 360 has three PPEs! Surely it's better? Nein. The next-gen consoles are using EIGHT GODDAMNED JAGUAR CORES. Very roughly, one PS4 has enough CPU power to match about five Xbox 360s.

OKAY you MADE your POINT!

Quote:

I'm guessing, since this is double the standard E-series 17 W chip, that this thing is around 30 W TDP. I would say that is a pretty impressive feat, and it should keep the red-ring issues in the past. If this thing is only 30 watts, it looks like they went overkill on the cooling.

Considering the problems the Xbox 360 and PS3 had, this might be a good thing.

I'm guessing, since this is double the standard E-series 17 W chip, that this thing is around 30 W TDP.

It's an APU; you have to count the GPU portion as well. Where the A4-5000 Jaguar chip has 4 cores with 128 stream processors in a 15 W TDP, the Xbox One/PS4 chips have 8 cores and either 768 or 1152 stream processors. Thanks to the much beefier graphics, the console chips will run a lot higher than double the wattage of the mobile Jaguar chips.

I should probably back that off a bit. Looks like the 7850 GPU itself (the closest comparison AFAICT) is more like 105 W, though the whole 7850 board TDP is 130 W. Either way, the PS4 is using the same graphics core on the same process node; to turn out the 1.84 TFLOPS Sony has claimed, it will use about the same power.

More like 130 W if the graphics side is going to have the FLOPS Sony is claiming. The Xbox should be a bit less.
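Here's that estimate as a quick Python sketch, treating both inputs (105 W for the 7850-class GPU silicon, an assumed ~30 W for the 8-core Jaguar CPU side) as rough guesses rather than measured numbers:

```python
# Rough PS4 APU power estimate from the figures discussed above.
# Both inputs are assumptions, not measured numbers.

HD7850_CHIP_W = 105.0  # ~GPU silicon alone; the whole board TDP is ~130 W
JAGUAR_CPU_W = 30.0    # assumed ~2x the 15 W A4-5000, CPU portion only

print(f"PS4 APU ballpark: ~{HD7850_CHIP_W + JAGUAR_CPU_W:.0f} W")
# ~135 W, in line with the "more like 130 W" figure above
```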

Yikes, that's normal desktop territory. About the same as the 360, though, so for all the extra performance I guess it's not that bad.

It's actually pretty good for CPU + GPU, which the consoles are.

The RAM, spinning disk, and other misc. controllers are going to have a minimal but nonzero power draw, but it's probably safe to assume that Sony can run the whole thing on a <200 W PSU. In any case, they definitely fit all their components + adequate cooling + the power supply into a very tight space. I haven't seen comparisons with the current gen, but the PS4 is significantly smaller than the Xbone, and the Xbone still has an external brick power supply similar to the 360's.

I don't see any reason why that would be the case. Nobody--in this thread at least--is saying that AMD only makes rubbish CPUs. In fact, in most of the markets they compete in, AMD makes perfectly decent CPUs; it's just that almost without exception, Intel's offerings are much better.

There's nothing wrong with a company providing relative mediocrity in a market segment. It can, in fact, be quite lucrative, as GM seems to be demonstrating once again. Mediocrity serves a lot of people's needs just fine, but a competitive society leads to people who don't want to be seen to accept mediocrity, which leads to the occasional explosion of... fanboy delusions of grandeur, let's call it.

AMD's high-end desktop has been in turmoil lately, and obviously Bulldozer didn't deliver what was expected. But they seem to have started to recover with Piledriver. I think Steamroller will either get them to the point they should have been at or dig them deeper into this so-called mediocrity. I don't think it was ever intended to be mediocre; they just shot for the stars and landed on the moon for the time being. If Intel can't squeeze more out of their chips before another die shrink, perhaps AMD can nip at their heels. Given the fabrication disadvantage, it would be pretty impressive for a 32nm chip to get close to a 22nm chip. The tables would be turned on the mediocrity statement, that's for sure. The FX-8350 is pretty comparable to a 32nm Intel chip, the i7-2600. So w/e.

Unfortunately, none of this exists in a vacuum with respect to price.

Bulldozer cores were pretty miserable, and while Piledriver is better, you're looking at the top end of the FX lineup before you achieve parity with midrange Intel (excluding the 5 GHz and 4.7 GHz turbo parts for the time being.) It's hard to benchmark these things since they function so differently, but outside of workloads that heavily favor the Piledriver architecture, they tend to land a little over/under a midrange i5, sometimes dipping down into Sandy/Ivy i3 or Bloomfield 920 territory. Occasionally, though, they *do* challenge high-end i7s, but to be reasonable, let's call the fastest FX, the 8350, an i5-4430/3450/2500 equivalent. That's not terrible in general... the workloads that don't favor Piledriver are a little rough (low-end i3 equivalent speeds), but it's within the range of CPU performance that somebody might buy even with a reasonable budget.

The FX-8350 in isolation isn't terrible, but it's $200, going up against an Intel CPU (likely the i5-4430 or i5-3450 as the closest equivalent) at $10 less ($190 retail.)

Then you have to look at the chipset-- comparing the 990FX against the Z77/Z87. AMD used to have an advantage over Intel on included, native SATA 6 Gbps ports-- a valid consideration-- but Haswell brings Intel up to parity (and where it falls short, that can be fixed with a third-party controller.) AMD still lacks PCIe 3.0.

I think a lot of that could be mitigated with more aggressive pricing, but AMD's just up against the wall with their Piledriver cores priced at parity with Intel. There's not a lot of profit, since they're more expensive to manufacture. Cut $50 from the FX-8350 and it becomes a fairly reasonable option in areas with relatively cheap power, but that doesn't leave AMD much wiggle room to make a profit. At $200, you're $30-50 away from a K-series i5. That's painfully close to a *really* fast chip.

Then there's the heat/power problem... It's really a lot more severe than it looks on paper.

In areas where power is not cheap, it's almost a non-starter for Vishera with lower-power/TDP Intel CPUs priced similarly. I pay about USD $0.08 per kWh, which is on the low end for the U.S. People in Europe frequently pay >$0.30, topping out just over $0.40/kWh in Denmark. Hawaii's right behind Denmark, and NYC splits the difference at about $0.20/kWh (for some real-world examples.) For the ~50 W difference between Ivy Bridge and Vishera, I pay about half a penny per hour. That adds up: at 8 hours a day, 365 days a year, the electricity cost difference is about $15/year to run AMD, excluding A/C costs. Price that CPU $50+ cheaper, and I could consider the power costs a rounding error, even though it might be slightly more expensive in the long run.

Do the same thing in NYC, and it's $30/year. In Denmark or Hawaii, you're looking at $60/year.
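Here's that arithmetic as a quick sketch, using the rates above; the raw math lands a touch under the rounded figures (roughly $12/$29/$58):

```python
# Annual running-cost difference for ~50 W of extra draw at 8 h/day,
# at the electricity rates quoted above.

EXTRA_WATTS = 50
HOURS_PER_YEAR = 8 * 365

for place, usd_per_kwh in (("cheap U.S.", 0.08), ("NYC", 0.20),
                           ("Denmark/Hawaii", 0.40)):
    kwh_per_year = EXTRA_WATTS / 1000 * HOURS_PER_YEAR
    print(f"{place}: ~${kwh_per_year * usd_per_kwh:.0f}/year extra")

# Roughly triple EXTRA_WATTS to model a 220 W part against a ~77 W one.
```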

Granted, I'm going to have higher A/C costs offsetting those savings vs. some of the milder climates, but that's hard to measure. Even assuming a $20-50/year power cost difference over a 3-5 year lifespan, going from Vishera to an i7 saves you money in those locations. Unless you're in some weird living arrangement like company housing, barracks, or a dorm without metered power (or with mom & dad), where the cost of power is absolutely not the slightest concern, the Vishera carries a non-trivial hidden price penalty.

Then look at those electricity prices for a 125 W CPU (i.e. the ~50 W difference from a ~77 W CPU) and extrapolate out to a 220 W CPU. Triple those power expenses, and it really looks grim for AMD's pricing if the 5 GHz Vishera is only at parity with an i7-4770 or so... and there's no reason to believe it's going to be faster.

That's not taking into account the cost of water cooling or OEM pricing.

(And if you want to say "screw the power costs," the i7 has some overclocking headroom; the 5 GHz Vishera has some on paper, but it can't be much.)

--

Richland is another animal. You're looking at a chip that is at both power and performance parity with Pentium/Celeron up to an i3 level of CPU performance, but priced appropriately. The CPU's roughly equivalent, but the GPU's faster than anything Intel provides to the desktop, bar none. If you weren't looking at needing more than an i3/low-end graphics in the first place, then Richland makes lots of sense for lots of applications-- a small computer for basic office work/web surfing, an HTPC, low-end gaming on a tight budget, etc. It's a niche product that's actually very worthwhile within its niche. I have a harder time criticizing Trinity/Llano/Richland as long as the buyer understands that it's a completely different price/performance class of hardware.

Odd you bring up PCIe 3.0 (which is on newer AM3+ boards) and then bring up power consumption. PCIe 3.0, from what I understand, doesn't even come into the picture until you have SLI cards. (I may be wrong; never used SLI.)

It's simple: AMD stumbled out of the gate, already at a known fabrication disadvantage, and it would seem an insurmountable task to recover unless they master their issues and Steamroller and Excavator release as intended. By the time we hit Excavator, their main vision for the FX chip will be realized (hopefully), and we'll see from there, I guess.

AMD doesn't offer super high end chips, so Intel is alone in that market. That may not be the case forever; it depends on how these next few iterations turn out.

I never, ever buy super high end. Even if I were to go Intel, it wouldn't be one of their $1000 chips. I think that's why AMD has resonated with me recently. It hasn't always been the case; I recall a quad core from them being around $800 when AMD was king of the hill.

Haswell has finally caught up in the feature department with its mainboards, and that's one of the reasons I am actually considering Intel. Do I care about video encoding on the CPU? No, I have a GPU to do that now. Intel's margin of performance superiority displayed by review sites doesn't really sell me on a platform (well, up to a limit).

One of the main reasons I was sold on the FX was the promise of platform compatibility, whereas Intel requires a new motherboard for this new chip.

Provided you're running on GCN-based APUs or running a GCN-based GPU. You can easily run a GCN-based GPU on Intel or run nVidia on your FX processor.

It's still a dickish move on the part of the worst company in America though. It's not crippling-- nVidia has to put lots of resources into optimizing right at launch and can't get updated drivers out prior, but it's a severe annoyance for anybody who just wants to play. Granted nVidia refusing to license PhysX to AMD is kind of a dick move as well, but it doesn't make this decision pro-consumer in the slightest.

Odd you bring up PCIe 3.0 (which is on newer AM3+ boards) and then bring up power consumption. PCIe 3.0, from what I understand, doesn't even come into the picture until you have SLI cards. (I may be wrong; never used SLI.)

Hmm... It looks like the latest 990FX boards have been revved to PCIe 3.0 if the mobo manufacturer supports it. I wasn't aware of that. That's a pretty similar situation to X79 on Intel-- you're not guaranteed to get it with the platform.

Ironically, AMD was first out of the gate with PCIe 3.0 cards and reported that high-end single-GPU cards saw a ~1-3% performance benefit on PCIe 3.0. That's not earth-shattering, but it's something. Multiple GPUs on a single card benefit more (e.g. GTX 690s or 7990s), though those have fallen somewhat out of favor. For SLI/Crossfire it's more of a flexibility thing. If you're on PCIe 2.0 running two GPUs in two x16 slots, you're going to want to allocate as close to 16 lanes per card as possible, taking up 32 lanes of PCIe bandwidth (i.e. a lot of lanes.) On PCIe 3.0, you can get essentially equivalent performance by allocating only 8 lanes per card, using 16 total. That makes triple/quad (and higher) setups feasible.
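To put rough numbers on the x8 vs. x16 point, here's a sketch of usable per-direction link bandwidth using the standard per-lane rates and encoding overheads (8b/10b for PCIe 2.0, 128b/130b for 3.0):

```python
# Per-direction PCIe link bandwidth after line encoding, showing why
# a 3.0 x8 link is roughly equivalent to a 2.0 x16 link.

USABLE_GBPS_PER_LANE = {"2.0": 5.0 * 8 / 10,      # 4.0 Gbit/s per lane
                        "3.0": 8.0 * 128 / 130}   # ~7.9 Gbit/s per lane

def link_gbs(gen: str, lanes: int) -> float:
    """Usable one-direction bandwidth of a link, in GB/s."""
    return USABLE_GBPS_PER_LANE[gen] * lanes / 8

print(f"PCIe 2.0 x16: {link_gbs('2.0', 16):.1f} GB/s")  # ~8.0 GB/s
print(f"PCIe 3.0 x8 : {link_gbs('3.0', 8):.1f} GB/s")   # ~7.9 GB/s
```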

We're starting to see SSDs hanging off the PCIe bus as well, and while the solutions now are proprietary, SATA Express as a standard is coming. Early parts more than double what the fastest SSDs on SATA 6 Gbps can do-- SSDs have outgrown the SATA bus.
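And a similar sketch for storage, assuming SATA Express carries two lanes of PCIe 3.0 (as in the early spec; two 2.0 lanes would still roughly double the SATA ceiling):

```python
# Effective ceilings after line encoding: the gap between SATA and
# PCIe-attached storage. Assumes SATA Express = 2 lanes of PCIe 3.0.

sata3_gbs = 6.0 * (8 / 10) / 8                 # 6 Gbit/s, 8b/10b -> ~0.6 GB/s
sata_express_gbs = 2 * 8.0 * (128 / 130) / 8   # 2x PCIe 3.0 -> ~2.0 GB/s

print(f"SATA 6 Gbps ceiling  : {sata3_gbs:.2f} GB/s")
print(f"SATA Express (3.0 x2): {sata_express_gbs:.2f} GB/s")
```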

So, short version: it's a nice-to-have, but it's a little more than just nice-to-have for some uses.

Quote:

It's simple: AMD stumbled out of the gate, already at a known fabrication disadvantage, and it would seem an insurmountable task to recover unless they master their issues and Steamroller and Excavator release as intended. By the time we hit Excavator, their main vision for the FX chip will be realized (hopefully), and we'll see from there, I guess.

The problem is that AMD... nVidia... and everybody else on Earth for that matter is at a huge disadvantage against Intel's fab technology.

The question remains whether AMD made a sound move with the core architecture, primarily in terms of modularity, or whether it's just a bad idea that works OK but can't ever be great-- a NetBurst-style failure, if you will: good in isolation, but never capable of being competitive.

Like I said, Piledriver has helped with performance. AMD does have a history of botching a new architecture and then improving it with iterations. I'm just not convinced, but my crystal ball with Intel isn't clear either. They look to be deviating from predictable roadmaps.

Quote:

AMD doesn't offer super high end chips, so Intel is alone in that market. That may not be the case forever; it depends on how these next few iterations turn out.

I never, ever buy super high end. Even if I were to go Intel, it wouldn't be one of their $1000 chips. I think that's why AMD has resonated with me recently. It hasn't always been the case; I recall a quad core from them being around $800 when AMD was king of the hill.

The problem isn't that CPUs in the $500-1000 price range exist. They're typically pretty awesome hardware-- I do get to play around with that class of hardware on occasion-- but they're not all that cost-effective for the consumer. They're justifiable only in business cases where time is expensive.

I'm more concerned with the midrange quad-core parts. At ~$230-350-- less if you live near a Microcenter-- the K-series i5 and i7 aren't terribly expensive. If Intel followed the pricing patterns of even 5 years ago, those CPUs might slot into price points in the $400-800 class, but they're not that expensive anymore. Until you go prosumer/S2011, the *most* expensive consumer Intel CPU is the i7-4770K at $350. Compare that to something like the Q6600, which was $800 at launch in 2007.

Quote:

Haswell has finally caught up in the feature department with its mainboards, and that's one of the reasons I am actually considering Intel. Do I care about video encoding on the CPU? No, I have a GPU to do that now. Intel's margin of performance superiority displayed by review sites doesn't really sell me on a platform (well, up to a limit).

One of the main reasons I was sold on the FX was the promise of platform compatibility, whereas Intel requires a new motherboard for this new chip.

It's hard to claim that AM3+ hasn't had some longevity, but if you had an AM3 board when Bulldozer came out, you'd still be in a jam in terms of compatibility. S1156 had a good run. The future of S1150 is unclear, but it's got at least two generations of Haswell lined up for it. AMD had been mulling an eventual merger of the FM and AM sockets at some point, so I'm not convinced AM3+ is going to last forever.

In any case, upgrading a CPU by itself is not done that frequently anymore, since with either Piledriver or Core i you're relatively unlikely to be CPU-bound. It could happen, but if you're looking at replacing the CPU at a significantly later point in time, you'd better hope that the BIOS support is there and that there hasn't been a VRM requirement change.

We're starting to see SSDs hanging off the PCIe bus as well, and while the solutions now are proprietary, SATA Express as a standard is coming. Early parts more than double what the fastest SSDs on SATA 6 Gbps can do-- SSDs have outgrown the SATA bus.

Maybe so, but if your system dies on you, it would be easier to move a SATA drive to another computer than a PCIe one.

The next revision of SATA involves plugging your device directly into the PCIe bus, as a standard. It's rumored to be integrated into the mid-life refresh of Haswell or, failing that, Broadwell or whatever the Broadwell equivalent on the desktop turns out to be.

It's not going to be any less standard than SATA or SAS, if plans pan out.

I stopped reading when someone mentioned cooking an egg on their GTX 770, and wondered if you guys had seen this guy?

That was funny.

Quote:

Dude, if you wait as long as you did to do anything at all with your computers, you're going to risk having issues with standards/interfaces changing.

True. My point is that every modern computer has SATA ports; however, they may not have spare PCIe slots. So if your computer dies and you need to do work or retrieve your data with another computer, you are SOL if the computer that is available doesn't have a spare slot for a PCIe SSD.

And just about every self-built computer is on a motherboard with unused PCIe slots in a typical build. Even the OEM prebuilts I see-- the ones we use-- all have plenty of unused PCIe slots. True, if you're talking bargain-basement pieces of junk, then you might not see many, but if you're buying bargain-basement pieces of junk, you probably aren't buying SSDs either (especially not the newest, shiniest, near-future SATA Express ones).

(Let alone pricey PCIe SSDs that actually sit in a PCIe slot, since those tend to be enterprise-marketed anyway, with prices to match!)