Xbox One APU reverse engineered, reveals SRAM as the reason for small GPU


For months, Sony and Microsoft fanboys have lined up to hurl insults at each other over which console would pack more hardware, hit higher performance targets, or prove the better design for the next generation. With both consoles launched, the game-to-game comparisons have mostly come out a wash, with a slight edge for the PS4. But questions remain about the underlying chip designs: which architecture is more efficient, and what secret sauce went into each console?

The fine folks at Chipworks have completed their teardown of the Xbox One and given us an answer to that question — and a few puzzles to go with it.

The Xbox One die measures 363 square millimeters, up from the PS4's 348 sq mm. That extra space (roughly 4%), despite the smaller GPU, is mostly down to SRAM. The Xbox One packs a whopping 47MB of on-die SRAM, which pushes the die size up considerably. It's also why Microsoft didn't have room on the APU for a larger GPU.

Xbox One APU die shot, by Chipworks

There are some interesting differences to explore. First, consider the Xbox One's Jaguar CPU blocks. Like the PS4, it has two quad-core modules, but the Xbox One has a bit of circuitry hanging off the CPU that the PS4 lacks. Here's a comparison of the Xbox One and PS4 CPU islands. We had to rotate the blocks to line them up identically, which is why the label is reversed.

Xbox One (left) vs. PS4 (right) Jaguar CPU blocks

See the block in red? The PS4 doesn’t seem to have an equivalent. What it actually does is unclear. It’s a bit large to be the built-in audio or the IOMMU that HSA theoretically requires. There’s nothing analogous on any of the Kabini floor plans we’ve ever seen.

(It's also possible that this is a Photoshop artifact or deliberate obfuscation. Companies often mask details on die shots.)

Now, over to the GPU. Like the Sony PS4, the Xbox One contains more Compute Units than are actually active in the console. The chip has 14 CUs, 12 of which are turned on, while the PS4 has 18 active CUs out of 20 on-die. The extra units are disabled to improve yields. Whether Sony or Microsoft might one day choose to enable them in future console revisions is an open question: console manufacturers typically don't update core specs post-launch, but consoles have been trending towards greater upgradeability over the past two generations, so it's not impossible that this could change.

The other mystery? The Xbox One's GPU compute units are physically smaller than the PS4's equivalents. I don't mean the GPU block as a whole, which is obviously smaller: one compute unit on the PS4 die shot measures 50 pixels wide by 395 pixels tall, while each Xbox One compute unit measures 42 pixels wide by 347 pixels tall. It looks as though Microsoft may have picked a tighter layout for its GPU, again presumably to free up as much room for on-die SRAM as possible.
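As a back-of-the-envelope check, and assuming both die shots are reproduced at the same scale, those measurements work out to roughly a quarter less area per compute unit on the Xbox One:

    # Relative CU footprints from the die-shot measurements above (pixel units, not microns).
    ps4_cu = 50 * 395      # width x height
    xb1_cu = 42 * 347
    print(ps4_cu, xb1_cu)            # 19750 vs 14574 "pixel-area" units
    print(1 - xb1_cu / ps4_cu)       # ~0.26 -> the Xbox One CU occupies roughly 26% less area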

Speaking of SRAM, the arrangement of the Xbox One's on-die memory was a considerable puzzle when Microsoft unveiled the console's architecture. According to the company, the Xbox One doesn't have a single contiguous 32MB cache, but four 8MB blocks instead. Two large blocks of SRAM sit to the right of the GPU, with a smaller block to the left that is possibly used for cross-CPU communication.

A die shot of the Xbox One APU, showing AMD’s maker tag

It's hard to tell exactly how the Xbox One's 47MB of claimed SRAM fits into the floor plan, however. We know that the CPUs in question contain a total of 512KB of L1 and 4MB of L2. If the two blocks to the right are ESRAM, each block should be 16MB, for a total of 32MB there. The GPU should contain 512KB to 1.5MB of L2 (512KB being standard for a GCN chip of this size, with more if Microsoft chose to boost that capability), and about 224KB of L1 in total.

That leaves about 10MB of cache missing. If the SRAM block between the two CPUs is that large, it’s far more dense than the SRAM to the right of the GPU.
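Tallying those figures makes the gap concrete (a rough sketch only; the GPU L2 number is the low end of the estimate above):

    # Rough tally of the Xbox One APU's identifiable SRAM, in MB, using the figures above.
    cpu_l1 = 0.512      # 8 Jaguar cores
    cpu_l2 = 4.0        # two 2MB shared L2 blocks
    esram  = 32.0       # the two large blocks beside the GPU
    gpu_l2 = 0.5        # low-end GCN estimate (could be up to ~1.5)
    gpu_l1 = 0.224      # L1/texture caches across the CUs

    accounted = cpu_l1 + cpu_l2 + esram + gpu_l2 + gpu_l1
    print(accounted)        # ~37.2 MB accounted for
    print(47 - accounted)   # ~9.8 MB left over -- the "missing" ~10MB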

Chipworks also tore into the Xbox One controller, but it's not that interesting. It has an ultra-low-power Freescale microcontroller and a Cortex-M0+ core. A custom Microsoft WiFi chip handles communication with the mother ship. The chip count here is kept minimal to speed manufacturing and lower cost. A teardown of Kinect should be up and available in the not-too-distant future.

The PS4 APU die shot, for comparison with the Xbox One

Different designs lead to similar places

After looking at both the Xbox One and PS4, I think we're seeing two companies arrive at the same point through rather different approaches. Both manufacturers chose the architecture they felt would let them work most effectively: Microsoft invested more silicon in large, low-latency caches, while Sony sank more money into raw bandwidth. As far as performance is concerned, this could well end up a tie: the Xbox One should be able to access small working sets more quickly, while the PS4 can stream sustained data far more effectively. Since game developers can leverage both of those features, the final result could be a wash.

Both companies also picked designs that should be relatively easy to migrate to new process nodes. As 20nm technology comes online, we’ll probably see refreshes in 12-18 months. It won’t surprise me if the first SSD designs start to pop up then, too — there’s too much potential upside in a premium SKU with solid state storage for either company to ignore the possibility.


Comments

jay

So PS4 has 32 ROPS compared to 16 on Xbox One, and 384 more GPU cores…so then performance wise, how can this be a tie?

https://twitter.com/xarinatan Alexander ypema

But Caching™! It's magical! And obviously developers are also magical entities that write their multi-dozen-gigabyte games entirely in assembler in order to properly make use of these extra features. Not to mention you can totally fit an entire uncompressed 32-bit 1920×1080 frame in that <50MB of on-die cache, let alone 60 of them so you have a single second of cached material.

There's a reason L1/2/3 cache never goes past 12MB, and it's not because it's hard to make. Microsoft is going to get bitten in the arse with this move.
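For reference, a quick sketch of the frame-buffer arithmetic behind that quip (nothing more than raw numbers): a single uncompressed 1080p frame fits in the 32MB ESRAM with room to spare, but a full second's worth would not come close.

    # One 1920x1080 frame at 32 bits (4 bytes) per pixel.
    frame_bytes = 1920 * 1080 * 4
    print(frame_bytes / 1e6)         # ~8.3 MB -> fits comfortably in 32MB of ESRAM
    print(60 * frame_bytes / 1e6)    # ~498 MB -> sixty frames would need far more than 47MB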

David Tallon

You do not have to write code in assembler to take advantage of cache. Cache controllers handle that via a layer of abstraction, so programmers do not have to worry about it. And one would not use the L1 cache for display frames, because you are not likely to use that data again very soon. Cache is for data that is expensive to reach via the hard drive or RAM datapaths and is used frequently enough to warrant grabbing it and holding onto it. Only time will tell how its performance gain stacks up against the speedier datapaths in the PlayStation 4, but as of launch, games look pretty damn similar to me (maybe excluding the few with a resolution disparity, but that's a developer issue, as the Xbox and PS4 both support 1080p even though several titles released for both underperform in that category).
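A minimal illustration of that point (my own sketch, assuming NumPy; nothing from either console's SDK): the code below does the same arithmetic two ways, and the only difference is the memory access pattern. The hardware cache does the rest without the programmer ever touching it.

    import numpy as np
    import timeit

    a = np.random.rand(4096, 4096)      # ~128MB, row-major (C) order

    def sum_by_rows(m):
        total = 0.0
        for i in range(m.shape[0]):
            total += m[i, :].sum()       # each row slice is contiguous in memory
        return total

    def sum_by_cols(m):
        total = 0.0
        for j in range(m.shape[1]):
            total += m[:, j].sum()       # each column slice strides 32KB between elements
        return total

    print(timeit.timeit(lambda: sum_by_rows(a), number=3))   # cache-friendly: fast
    print(timeit.timeit(lambda: sum_by_cols(a), number=3))   # cache-hostile: several times slower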

Andrew

This whole argument about trying to show a marginal advantage to the PS4 goes contrary to consumer intelligence and behavior.

The PS4 is running practically all games at 1080p @ 60 fps (frames per second) while the Xbox ONE is running practically all games at 720p @ 30 fps. The problem with the Xbox ONE is not that it cannot generate games at 1080p @ 60 fps, but that it does so only with games such as Forza 5 that do not require much GPU power because a lot of their environment is made up of static images.

Practically all cross-platform games are running on the Xbox ONE at 720p @ 30 fps. The Xbox ONE's anemic graphics throughput of 1.2 TFLOPS versus the PS4's 1.84 TFLOPS is showing up early in the fact that cross-platform games are overwhelmingly running on the PS4 at a native 1080p resolution and 60 fps. I am not saying that one should not buy the Xbox ONE, but to deceive consumers, as many in the press are doing, into believing that the two systems are closely matched is unconscionable.

The bottom line is that most gamers because of the prevalence of social media are well informed about the two consoles and are already voting with their dollars. They already know that the technical specs of both systems are substantially different and that such difference cannot result in the same outcome. And the reports are indicating that the performance difference between the PS4 and Xbox ONE is vast. Some have argued that up-scaling DVD movies to 1080p is marginally identical to native 1080p, yet consumers in overwhelming numbers are buying Blu-ray movies.
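For what it's worth, the peak-throughput figures being thrown around in this thread fall out of a simple formula: active CUs × 64 shader lanes per CU (GCN) × 2 FLOPs per lane per clock × clock speed. The clock speeds below are the commonly reported ones, not figures from this article, and peak FLOPS says nothing about how performance scales in real games.

    def peak_tflops(cus, clock_ghz):
        # GCN: 64 shader lanes per CU, one fused multiply-add (2 FLOPs) per lane per clock.
        return cus * 64 * 2 * clock_ghz / 1000

    print(peak_tflops(12, 0.853))   # Xbox One, 12 CUs @ 853MHz  -> ~1.31 TFLOPS
    print(peak_tflops(12, 0.800))   # ...at the original 800MHz  -> ~1.23 (the oft-quoted "1.2")
    print(peak_tflops(18, 0.800))   # PS4, 18 CUs @ 800MHz       -> ~1.84 TFLOPS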

David Tallon

First off, I do not wish to get into some idiotic fanboy argument. I will, however, reply to you, since you seem to have taken my comment completely out of context. I was replying to the guy above me, who was making an erroneous computer engineering statement regarding cache. I was simply explaining that cache is used to speed up computation by reducing the need to go to the memory datapath as often, which can significantly increase the performance of a machine, and that programmers DO NOT have to do anything to take advantage of it because the hardware controller is in charge of making the cache decisions. All your impressive data about how awesome the PS4 is does nothing to contradict my factual statement about how cache works, and the fact that only the future will tell whether a system with a significantly sized cache can overcome the deficit of the slower main memory they decided to use.

Now I will address your computer engineering mistakes. Firstly, the Xbox has a GPU output of 1.32 TFLOPS and indeed the PS4 has 1.84 TFLOPS. However, adding pipelines and having an impressive computational speed does NOT change speedup over another system LINEARLY. This is the huge misconception going around the fanboy wars these days. The statement goes something like: "Well, the PS4 has 2x this and 2x that, so it's 50% faster than the Xbox," and it is an asinine one to make. Calculating speedup when comparing two machines is incredibly complex and factors in a lot of things, which is why the ACTUAL speedup is currently unknown, but MANY tech sites, like this one, are calling it close to a wash (based on how current x86 PC machines work). You do not experience linear gains from adding things; the actual gains are much smaller than that. You may well be right that the PS4 will come out to be a crazy more powerful machine later, but it's just not true now, nor does any of the computer science point to this being the case. So, the bottom line here is that the 'PRESS', for whatever stake you think they have in it, are not deceiving the people, but rather are simply reporting what they know and not speculating wildly as you and many fanboys across the interwebs are prone to do. That, my friend, is deceptive.

Finally, I accept your point that "the bottom line is that most gamers because of the prevalence of social media are well informed about the two consoles and are already voting with their dollars," but I should point out that it seems to go contrary to your opinion that the PS4 is significantly better. It seems like if people are indeed 'well informed' and 'voting with their dollars', and if we are to take this as a metric of how much better one machine is than another, then the two systems are indeed very evenly matched, as they have nearly identical launch sales.

cheers.

Bobby Long

Just wondering from a sales perspective, didn’t the X1 launch in 13 countries whereas the PS4 only launched in North America? The sales may have been nearly identical, but to me it seems like the X1 only balanced out because the sheer amount of countries it released in.

franken stein

Doesn't really matter, because his "idea" that consumers are voting with their dollars for whichever is better is a wash, and anyone that has any idea about consumerism knows better than that. Last gen we had the PS3/360/Wii; well, the Wii outsold both the PS3 and 360. Was it really the more powerful, better gaming console? No, it wasn't, when you look at it from the graphical power side, but it still sold more. Why? Money. Because historically, and on average, consumers tend to lean towards value and cost-effective products that can perform the same functions as the expensive brands.

His idea that the PS4 runs all or most games at 1080p 60fps and the XB1 runs all at 720p 30fps is just flat-out not true. He is clearly fabricating facts to bolster his Sony-biased opinions, therefore losing his validity to publicly state any real knowledgeable opinions or facts.

Jack Zahran

While that’s true, there are still other factors you need to consider:

– Initial “Sales” are usually filling distribution channels and retail shelves. Actual sell-through to customers will not be known for a few months. To me that indicates Microsoft may be filling shelves more than selling to consumers. Distributing to 13 countries required them to fill more pipelines.

– This seeming Microsoft negative can actually backfire on Sony. If Sony can’t deliver, their current higher demand will go stale. So they have to grow their distribution and satisfy their demand quickly in order to grow their initial momentum.

Initial sales are more about distribution backed up by marketing to then get the products off the shelves.

By March or April we'll know for sure who's winning. I'm afraid, though, the real winner will wind up coming from left field. When it comes to 720p and 1080p gaming, an iPad Air and iPhone 5s coupled with an Apple TV is more than good enough and has its own advantages.

Guest

Nearly identical launch sales with Xbox One released in 13 countries and the PS4 in just 2 at time of this article. I’d say it is asinine of you to disregard the fact that consumers are speaking with their money. Pretty obvious, isn’t it?

franken stein

As of Dec 1st the PS4 had sold 2.1 million units in 32 countries. The XB1, as of Dec 11th, had sold 2 million units in 13 countries. So actually the PS4 is available in way more areas than the XB1 and has only a slightly higher figure. If the XB1 were available in the same 32 countries, the XB1 would be leading the sales war with ease. Basic math dictates it.

Johnathon Allen

So you're saying the PS4 launched in two countries with limited supply, and somehow has magically launched in 48 countries worldwide one month later? Yet you can't buy one in stores because they're sold out? Sorry buddy, the PS4 launched in more than 2 countries at the time of this article.

blake

You're wrong there. Money was speaking to the consumers: the cheaper console is the one they go for, for just that reason.

GK15

Wow. A year later, that guy (like many PS fanboys) looks really dumb. Right now we are seeing games that are pretty much identical, with maybe a slight difference in native resolution or a few fps. I'm perfectly fine with that, as I don't think either is a big deal.
Interesting post, you obviously know what you’re talking about. Cheers

Anti haters and fanboys

lol what are you saying ("The PS4 is running practically all games at 1080p @ 60 fps (frames per second) while the Xbox ONE is running practically all games at 720p @ 30 fps.") whatttttt
Don't lie to the people, fanboy. Only 3 Xbox One games are running at 720p: Call of Duty Ghosts, Killer Instinct, and Dead Rising 3. Ryse runs at 900p, same with Battlefield 4 at 900p with the day-one update.

Except for the fact that only one game thus far has run at 720p 30 frames per second; all the rest ran at either 1080p 60 frames or 1240 80 frames.

https://twitter.com/xarinatan Alexander ypema

This isn't L1 cache, it's SRAM. It's not taken advantage of by default like L1/2/3 cache is; you have to manage your own caching. And even if it was, there's hardly ever more than 10MB used by these instructions; really, putting 50MB of that stuff on-die is a huge waste of space that could've been used to put in more actual processing power. Not to mention the PS4 solves the problem better than the Xbox One: they're using GDDR5, which just has much more bandwidth overall, versus the much slower DDR3 in the Xbox One, so you don't have to cache things; the L1/2/3 cache is plenty.

The reason the games aren't 1080p really is just that the hardware inside can't handle these games at 1080p. That AMD APU is a customized but otherwise generic off-the-shelf AMD part that will be available for laptops in a few months, and we're talking about games that you need a pretty beefy rig to max out; there's 30GB+ of textures in both BF4 and COD: Ghosts, and the bandwidth demands are insane. They look good, but that's also the reason they won't run maxed out on consoles, because they're really just commodity PCs.

That said, you’d need a side by side comparison to really spot the differences, hardware these days is powerful enough to run all these games with almost similar polygon counts and texture sizes, the difference is in the details, stuff that’s left out like volumetric fog, more realistic shadows, etc.

And all that said, I still heavily prefer developing for PCs, if only for the fact I can debug on the same rig I’m writing the software and I don’t have to go through all kinds of convoluted SDKs in order to just make something work like it does for me on my PC when I just hit F5.

eAbyss

The RAM and cache situation is a bit more complicated than you would like to lead people to believe. First off, CPUs love low latency and GPUs love high bandwidth. Because of this, Sony has crippled the PS4's CPU with GDDR5 and Microsoft has crippled the X1's GPU with DDR3. What does this mean? This means better CPU performance in the X1 and better GPU performance in the PS4. But…

The X1 actually has a higher CPU and GPU clock speed than the PS4, and its higher GPU speed actually makes it perform better than if they had just unlocked its two additional compute units, due to additional CUs not adding performance linearly (decreasing reward for additional CUs). As everyone should know, GPUs, ESPECIALLY AMD's, are highly CPU dependent (this is why high-end i7s are used for GPU benchmarks, to eliminate CPU bottlenecks), which means that the PS4's GPU may turn out to be bottlenecked by its CPU. As for Microsoft's bandwidth problem, they added the high-speed SRAM as a cache to help solve this, and the GPU can actually access data from both the SRAM and the DDR3 at the same time, making its total accessible bandwidth larger than either of them separately. The X1's total bandwidth is estimated to be near (slightly above or below) that of the PS4.

Now on to other things.

Audio: Microsoft has done great here. Its custom gaming audio block (SHAPE) is capable of 512 audio channels, taking over virtually all of the audio processing and providing excellent quality audio. The PS4, on the other hand, has an audio block capable of only 200 channels, offloading much of its audio processing to its slower CPU and providing reduced audio quality compared to the X1.

Compute engines: These are part of the GPU and help with physics and AI. The problem with compute engines though is that they can only be used while the GPU isn’t busy rendering graphics which it will be doing almost all the time. The X1 is capable of a total of 16 commands at a time while the PS4 is capable of a total of 64. This is more of a wash though because it’s extremely unlikely that anything besides exclusive titles will be able to use even 16. There is only one title out right now on either console that uses any, it’s Killzone (PS4 exclusive) and it only uses 1 out of 64.

Move engines: Only the X1 has them, with a total of 4 move engine processors. Move engines greatly reduce the overhead of transporting data between the CPU and the GPU, freeing up clock cycles for more important things (improving CPU and GPU performance). They also help transfer data to the SRAM.

In the end due to Microsoft’s customizations, the X1 has an advantage in CPU power and not quite as much of a disadvantage in graphics as many would like you to believe. The PS4 has a small advantage in graphics which probably won’t pan out to much in the end due to the consoles not being that far apart and developers programming to the lowest common denominators.

“The reason that the games aren’t 1080p really just is because the hardware inside just can’t handle these games in 1080p.”

This is wrong and you know it. Both consoles are more than capable of displaying games in 1080p60. The problem is that the developers haven’t had enough time with the finalized hardware in order to more fully utilize the hardware. Infinity Ward actually came out about a month ago saying that the X1’s hardware is capable and they were planning on 1080p60 for Ghosts but they lowered it last minute to 720p to keep it at 60fps because of the change in the finalized hardware. The PS4 version was actually shipped with 720p60 single player but they upped it last minute with a patch to 1080p60.

PS4 doesn’t even run BF4 at 1080p60, it runs at 900p60. I guess that means that the PS4 sucks too huh? Or maybe, just maybe it’s like the last generation and it will take time for the devs to figure things out.

In the end you probably won’t notice a difference between the two besides their exclusive titles so just buy whichever has your favorite exclusives.

https://twitter.com/xarinatan Alexander ypema

You’re right about the part that GPUs love bandwidth (because they have to fetch all the textures every frame) and CPUs love low latencies (because they have to perform in real time).. But:

“The X1 actually has a higher CPU and GPU clock speed than the PS4 and it’s higher GPU speed actually makes it perform better than if they had just unlocked it’s two additional compute units due to additional CUs not adding performance linearly”

-Yes, but the minimal overclock that the Xbox one has does not at all make it compete with the PS4. More CUs makes a much larger difference than the slight difference in clockspeed.

“As everyone should know GPU’s, ESPECIALLY AMD’s, are highly CPU dependent (this is why high end i7s are used for GPU benchmarks, to eliminate CPU bottlenecks) which means that the PS4’s GPU may turn out to be bottlenecked by it’s CPU.”

-False. GPU steering frameworks, such as DirectX, are heavily CPU dependent. The reason benchmark rigs use high-end CPUs is because, well, why the hell would you use a low-end CPU if you're going to benchmark speed? This is just as true for Nvidia as it is for AMD. OpenGL has a much lower dependency on the CPU; however, the Xbox One does not support it (they've disabled it in the software/firmware), whereas the PS4 does. The GPU does not at all have to communicate with the CPU; that's what they invented DMA for.

” As for Microsoft’s bandwidth problem, they added the high speed SRAM as a cache to help solve this and the GPU can actually access data from both the SRAM and the DDR3 at the same time making it’s total accessible bandwidth larger than either of them separately. The X1’s total bandwidth is estimated to be near (slightly above or below) that of the PS4.”

-So what you're saying is, my HDD has 10GB/s+ bandwidth, seeing as it would be added to the speed of the RAM? Sorry, but that's not how it works. It's nice you have a 47MB cache in which you can stuff some variables and the like, but it will hardly impact performance and will certainly not make up for the sluggish speed of DDR3 vs GDDR5.

“Audio: Microsoft has done great here, it’s custom gaming audio block (SHAPE) is capable of 512 audio channels, taking over virtually all of the audio processing and providing excellent quality audio. The PS4 on the other hand has an audio block capable of only 200 channels, offloading much of it’s audio processing to it’s slower CPU and providing reduced audio quality compared to the X1.”

-Games hardly ever use more than 100 channels at the same time, and they’re usually mixed within the game engine into a single channel for the soundcard. It’s a nice feat but I doubt it makes much meaningful difference.

“This is wrong and you know it. Both consoles are more than capable of displaying games in 1080p60. The problem is that the developers haven’t had enough time with the finalized hardware in order to more fully utilize the hardware.”
-No. Sure, you could squeeze more performance out of pretty much anything out there by optimizing and rewriting, and it takes a lot of effort to do so, but in the end these consoles simply lack the raw horsepower to run it at 1080p out of the box at 60FPS. They lowered the resolution in order to guarantee 60FPS with the current state of the engine, which given time may improve, but really that does not change the fact that the consoles are not fast enough to keep up with the engine. It's kind of like saying "My Toyota Prius can keep up with your Ferrari! Just, I haven't tweaked the engine enough so I'm stuck at half the speed you are going," or "My 8-year-old PC can't run that game on highest settings right now because the game isn't optimized enough yet." TECHNICALLY you're right, but really it's just the same thing as "the consoles are too slow" put in other words.

Both consoles suck. They’re DRM-restricted commodity hardware that could be running just about anything you could imagine if it didn’t have the console sticker they’re wearing. They’re both. Just. PCs. Slow ones, too. In a few months you’ll be able to get that exact chip, minus the customizations, in LAPTOPS. Average laptops.

But yes, you won’t notice a difference because that would cause the lesser party to bitchslap the fuck out of the developers with lawyers. Hardware wise, the PS4 is better than the Xbox one, but I wouldn’t touch either with a stick myself.

Mike H

Jaguar has a DDR3/DDR4 and a GDDR5 compatible controller. It could be that processor allocated memory uses a slightly different data path. It wouldn’t be that hard to allocate blocks of ram to DDR3 spec and maintain unified memory.


Mike H

The PS4 is actually 15 % faster CPU wise.

hypernovae

This is incorrect. The SRAM is handled automatically if the developers choose to allow it. The Xbox OS will interface with the cache through the memory pipeline seamlessly, as it would allocate any memory resource. This was confirmed by the development team already.

https://twitter.com/xarinatan Alexander ypema

Well perhaps, I can imagine it would be, but as I mentioned earlier:
“And even if it was, there’s hardly ever more than 10MB used by these instructions, really putting 50MB of that stuff ondie is a huge waste of space that could’ve been used to put in more actual processing power. Not to mention the PS4 solves the problem better than the Xbox One; They’re using GDDR5 which just has much more bandwidth overall, versus the much slower DDR3 in the Xbox one so you don’t have to cache things, the L1/2/3 cache is plenty.”

Marlon

Okay, as I suggested above, there are things people aren’t thinking about:
Identical Hardware:

These are the same GPU and CPU families, with varying intricacies. As the article suggests, Microsoft and Sony took slightly different avenues to arrive at the same point. Memory speed, everything. Capability-wise, there isn't a single thing the PS4's 7870-class GPU can do that any other 7xxx-series part cannot; they are the same family and both are DX11.1+ capable. Even had the Xbox used a 5xxx GPU, the same could be said, but performance would be slower on the 5xxx. Any DX11 card is, essentially, the same. So, we know the PS4 has 16 extra ROPs; that doesn't translate into anything more than a higher pixel count. Same capabilities, higher pixel count. I say "theoretical" because we don't know if Microsoft has some amazing method with which it can make up ground, e.g. some kind of amazing scaling technology; who knows. It's all in how you use the hardware.

Now, there is a debate about just how much you can notice on a 1080p TV, in terms of pixels. 4K isn't going to be affordable anytime soon, and by the time that happens we'll be halfway or more through this cycle and the next console rumors will be around the bend. Only then may you begin to notice the picture difference, and that's where I believe Sony did a better job future-proofing their system and where you will see the benefit of 16 more ROPs. 384 more shaders? That means better quality lighting and shadowing; that's negligible. Again, it won't translate into a major difference if you compare the two, unless you make a habit of parking yourself in front of a wall and staring at every crack and surface detail. If so, maybe you ought to be a game developer and not a gamer.

Software:

The PS3 had smashing specs, far beyond the CPU that is even in my PC today. On paper, the PS3 was leaps and bounds ahead. The Cell could even double as a GPU and produce HDR (see Heavenly Sword) and make up for what the GPU lacked; couple that with more promising particle effects, physics, etc. This generation, it's just a difference in shaders and pixel count. Despite the PS3's specs on paper, Microsoft's games still ended up looking better, and if not, identical. Software is the biggest varying difference: drivers, OS, SDKs, patents/methods, all of which we know MS excels at.

jay

So, what we know is the PS4 has the better GPU, but what we DON'T know is whether Microsoft has this "amazing, magical piece" somewhere tucked inside the Xbox One.

Again, PS3 and Xbox 360 were completely different machines, and it took 8 years for PS3 to catch-up and exceed 360’s render performance and quality. We all know that, it’s a moot point in PS4 and Xbox One discussion.

I don’t care about what one can notice on a 1080p or 4K TV, it all depends on their own personal environments and preferences.

We are talking about the capabilities of PS4 and Xbox One on a 1080p TV.

Today, PS4 and Xbox One are almost exactly the same. You are dismissing 16 more ROPS and 384 GPU cores of PS4 which will make a noticeable impact if used properly in cross-platform titles, but at the same time speculating that Xbox One “might” have something amazing and magical somewhere that will pull it ahead of PS4’s rendering performance?

PS4 will always be ahead of Xbox One, period.

Now how Sony and Microsoft enable developers and what these developers do with these machines will be seen in the coming years.

Cheers!

Marlon

I'm not dismissing the PS4's extra raw power and bandwidth; I'm saying it does not equate to more capabilities. The APUs are of the same family. My point, in extending back to the HD 5000 series, is merely to suggest that even those cards can compete when software is being written for them. It's all DX11, for instance. Sure, the PS4 has more ROPs, which translate into more fill rate, which translates into more pixels and polygons/higher resolution. There are 384 more shaders, for better shader effects, but that's a negligible difference compared to the PS3/360 era, which, surprisingly, was close as well, all things considered. All I'm saying is, with the selected hardware (and who knows what software methods/patents), they can come up with a slightly scaled-down ratio, still well above 1080p at lower polygon counts, and the differences would be negligible. You tell me what difference 1.84 TFLOPS vs 1.31 will make. At the end of the day, it does not translate into capabilities; it translates into more polygons and pixels (resolution). When Microsoft said "the PS4 cannot do anything the Xbox One cannot do," they were correct. It's all the same bloody GPU family. Now, if the PS4 has a DX12/OpenGL 5 (?) comparable chip in their unit, then you can take what I said and throw it out the door, but the fact of the matter is, they are capable of the same things, with pixel/polygon count being the differing factor. At the end of the day, software is going to be the determining factor.

https://twitter.com/xarinatan Alexander ypema

The little bits and pieces that are different are expendable really, they won’t make a difference. The PS4’s hardware is faster, period.

What DOES make a difference is that Microsoft is known for making developing for their platforms like delicious fresh cake compared to the ‘moldy bread from last week’ that the Playstation’s SDKs have been so far. Microsoft is sitting on -THE- best developer tool out there; Visual Studio (fellow developers will agree. Unless they’re Richard Stallman or Linus Torvalds of course).
I’m pretty sure we’ll see much more especially smaller titles for the Xbox One considering how easy it will likely be to compile and debug stuff on it.

But hey, since we’re talking about what’s easy to develop for, you know what has a billionfold more software available right from the box than either these consoles will ever have for the exact same price? PCs!
Let me just take another moment here to rub in everyone’s faces that these “Consoles” are really just crippled laptops (AMD APUs are commonly found in low-midrange laptops) that will only ever run the few OEM-endorsed titles that will appear in the next few years that a console ‘cycle’ lasts. BRING ON THE BUTTHURT. YOUR TEARS ARE DELICIOUS.

Dustymack

I've been waiting for the PC market to merge with the console market for ages. It's like they keep reinventing the wheel.

RENAN MORINIGO SANTORI

I love how this guy was defending the PS4 from the very beginning, and then when someone comes along and shoves facts in his face about both consoles being almost alike, and proves it through facts, he goes "PC is way gooder and all have buy PC." Dude, for real… gaming started on consoles and it'll always be better on consoles. WHY??? TITLES! I can name a list of at least 100 games that the PC never saw and will never see. And PCs are better, true! BUT who plays anything other than MMOs on a PC anyway? Like, what, 50 people? The true essence of gaming has always been on consoles.

https://twitter.com/xarinatan Alexander ypema

Hey, Psst, Consoles are PCs these days. Literally, identical hardware. Except a PC you can upgrade and customize. Did you know you can hook up console controllers to a PC? And did you know you can get a steambox with specs equal to that of a PS4 for less money? The only reason the titles stay on the consoles is because Sony, Microsoft and Nintendo would hate to lose their monopoly to dictate the price and quality of modern games, and they pay big money to developers to simply not release those games on the PC. If everyone would get off those consoles and see what PCs can do, developers would just give those 3 companies the boot and release their otherwise console-only games on the PC. And ehh, those that don’t, you can always emulate. PS3 emulator isn’t too far off, and PS4 emulator is quite simple since they’re PC architecture (x86 CPU, AMD chipset and GPU if you want fancy nerd terminology), The PS1 was unique hardware, the PS2 somewhat too, the PS3 was just powerpc/cell which is off the shelf hardware (used in pre-2007 Macs), the PS4 is literally a PC, with an AMD laptop chipset.
The only reason consoles still exist is because people are too stubborn and afraid of the unknown to try a PC for gaming.

Trever Grissam

Newer PS3 vs 360 games tend to look better on the PS3 now when it comes to things like lighting and particles. Not a huge deal; however, why are the PS4's custom chips for offloading things like streaming, downloading and video encoding being ignored in some earlier posts, like the one mentioning the Xbox One having more audio channels?

Andrew

These reporters are going to owe their readers a big apology when their consoles end up performing significantly worse. How then can they explain the fact that practically all cross platform games are running at 1080p @ 60 fps on the PS4 while these same games are running at 720p @ 30 fps on the Xbox ONE.

Could they please explain to gamers how the PlayStation 4's graphics throughput of 1.84 TFLOPS versus the Xbox ONE's 1.2 TFLOPS makes it a wash?

Are they saying that though the PlayStation 4 comes with an ultra-fast 8 GB of GDDR5 graphics memory, it does not end up having a significant performance advantage over the Xbox ONE which comes with the much slower 8 GB of DDR3 PC memory?

Are they trying to tell us that the Xbox ONE’s science fiction 32 MB memory overcomes the overwhelming hardware advantage of the PS4, even in light of the fact that the PS4 has 18 CUs (Compute Units) versus the Xbox ONE’s 12 CUs?

Am I starting to read articles at tech sites that are no longer making sense? It is starting to appear as if a lot of these tech articles are being written in Microsoft’s PR division and handed out for general consumption.

Mike McKee

It could be that, or it could be that these tech writers actually know what they are talking about and you don’t. At the least, your claim that all XBOX ONE titles run at 720p/30fps is ludicrous and easily refuted (Forza – 1080p/60fps, Ryse – 900p/30fps, Battlefield 4 – 720p, 60fps, etc).

A game that needs only 1.3 TFLOPS will run on either system fine. That will be the cross-platform target for now.

In the near future those same games will reduce base requirements by perhaps 0.1 or 0.2 TFLOPS due to optimization. This will allow more to be done in newer games using the freed resources and thus better games will emerge.

Developers say the PS4 is hard to optimize for without going to hardware-level instructions ala Mantle, which is NOT a common practice and will take time to catch on. Microsoft on the other hand is always optimizing their tools and working directly with development studios. At least in the immediate future, Microsoft will likely take the lead on optimization.

When the Xbox One edition of a game requires 0.5 TFLOPS less to do the same things as the PS4 edition, that gap in raw hardware power will mean nothing. The way Microsoft’s acting, this is likely going to be the case in about a year or maybe two if there’s complications. Effectively, this leaves both consoles at about the same output ‘power’ and nobody will have any reason to complain about either side. Gamers can rejoice as making games exclusive will be dumb since there’s no perceived reason for one console over another. Glitter and rainbows for days, son.

However Microsoft will still have more features with hardware support (such as video chat while gaming, multi-way video chat for parties, Twitch-specific quality h.264 encoded streaming video, etc.) where Sony will have to either sacrifice system power to compete or will have to admit defeat like they did with cross-game party chat systems on the PS3. I’m not saying those features will ultimately matter to everyone, but the availability is appealing and (as with 360 parties) may prove invaluable in rare cases. This could be the real area where we see one system take over the other.

eAbyss

Yep, it’s going to come down to software just like the last generation. It doesn’t matter if you have higher hardware specs (on paper) if the other guy is able to utilize his hardware more efficiently.

As for Twitch-specific h.264-encoded streaming video and the like, at least for that one I doubt power would have to be sacrificed from the gaming end, unless the dedicated chip they have for encoding these streams is somehow deficient. The video chat might be different, though I don't see why it would take more resources to achieve than the windows on the Xbox, unless they are just lazy with the implementation. For me, I am more interested in seeing the exclusive titles for the consoles than perfect 1080p resolution. I have often leaned towards Sony's first-party studios on this, but that can change.


Metal

it’s sony.. they always fuck it up

WGP

Because sites these days are overly diplomatic to the point of absurdity, and will pretend that "720p and 1080p are practically the same," "fill rate doesn't matter," "advanced effects are overrated," blah blah blah.

They make the PS3 Cell argument, which really does not apply when the systems are almost identical except that the PS4 just has more.

Ease of development doesn't matter either, apparently; no one seems to remember that the PS3 was heavily criticized because of its split pool of memory, which, combined with a GPU with a lower fill rate (looking at you, 16 ROPs), just made multi-platform development a nightmare.

“But the Xbox One has Direct X…”, what you mean the API with like 20 years of legacy support baggage? You mean the API that hasn’t been updated pretty much since Windows 7 released? Sorry but it’s really annoying when it has been nothing but praise for Sony’s new SDKs (see PS4 and Vita), but journalists still go on about “Well Microsoft has the software advantage…”

In conclusion, in a way Microsoft and Sony took two different approaches to arrive at the same place, and that place was 8 GB of system memory.

Microsoft took the safe route of sticking with DDR3 and adding embedded RAM on the chip, ultimately causing the die to be bigger (higher cost) and leaving less rendering headroom (smaller GPU).

Sony made a gamble that they could secure enough 512 MB GDDR5 modules to meet the demand of a global market, and based on the massive initial units available, I’d say that their higher performing, more developer friendly gamble paid off. #DealWithIt

Marlon

Yep, as I have always said, it's about how the data is handled and what you do with the software. Ignorant kids automatically consider the PS4's 50% faster performance (once again, just as with the PS3) to be endgame in the contest. Not even close. That 50% performance boost (actually slightly less, given Microsoft's speed boost) could work out to serve as a mere portion of headroom. It just depends on things like SDKs, drivers, game programming and the operating system. People get lost in the specs on paper. They are admirable and exciting, but that's about as far as it should take you. Here's why:

Sony's PS3 had a supercomputing chip in it, and it was supposed to have two of those initially, but Sony had budget issues and opted, last minute, for an off-the-shelf GPU. On paper, the PS3 had the power to crush anything, even the PC (had it only more RAM), and what happened during the PS3 vs 360 war? Microsoft was often ahead, if not directly neck and neck. Microsoft's unified design was easier to develop for and make the most out of, resulting in snappier, crisp, high-resolution gameplay. The biggest contributing factor? They have the best SDKs and software all around: drivers, OS, etc. In fact, the same SDKs are used for the PC, and had been used for years. One article I read back in 2007 called Sony's SDKs buggy and hard to use. I even recall one developer quote: "The PS3 is a complete waste of everyone's time and a pain in the ass." That's a direct quote. What good is the world's fastest automobile engine if the ECU is shit? I'm sorry, but you will find that Microsoft will, once again, make more out of less, just by exercising what they know about PC development, DirectX (the Xbox's namesake), operating systems, data handling, drivers and networking.

We're so soaked in this eternal PS4/Xbox war that we don't even look at the most threatening contender, poised to throw PCs back to the top of gaming and looking very promising, I might add: the Steam Box. Heck, Microsoft and Sony don't even hate each other as much as we choose to hate them. They're probably on each other's speed dial. They obviously coordinated and settled on a hardware standard to make it easier to develop for and port between their two systems. You don't think they just coincidentally selected the same hardware, do you? Microsoft has taken the hit for bad PS2/PS3 ports to the Xbox/Xbox 360, and Sony has taken a hit for bad Xbox/PC ports to their machines. Sony sells products with Windows on them (Vaios), Microsoft uses Sony's Blu-ray technology… the list goes on.

massau

Doesn't Sony use OpenGL for their PS4?

Also, the Xbox 360 was better than the PS3 because the PS3 was really hard to program/optimise for. Also, there is no gaming company that will give one console better graphics than the other unless they get paid to do it; they both optimise the game until it hits a decent FPS.

Marlon

Historically, they've predominantly used OpenGL, and I think OpenCL now, too. Last I read, they were using a modified version of DX11. Whether that means they are modifying existing DX11 components at Microsoft's discretion, or the supposed "acquired feature sets," I don't know. When I last read about it, Sony fans were taking that as sort of an insult, forcing Sony to comment with something to the effect of "acquired/borrowed features" or something like that, to sugar-coat the matter. I wouldn't blame them for using it; it's what 98% of the game industry optimizes their games for. I'm expecting SteamOS will bring OpenGL back into the picture on a larger scale, again, for the first time since the turn of the millennium. You used to have a choice in what you wanted to run your games in, OpenGL or DX, by modifying INI files. The only place I've seen the option anymore is in id games, as they actively support the open scene, but I may be wrong. I also wouldn't blame Sony if, now, they have to use Microsoft's XNA or "equivalent" or whatever "borrowed" software they may have; it's what developers are used to.

Anyway, the point being, they are paying each other royalties. Most everything uses USB. Microsoft is one of the founding companies, along with Intel and IBM, who established USB.

Mirimon

imo, it’s all moot.. the new features in dx 11.2 came from OpenGL in the first place, both consoles can use either.
I don’t mind OGL, great for spitting out decent games fast. But for those truly great AAA titles, that stand out and define a platform, OpenGL is where it’s at and worth the extra week of development.

Anyways, none of those cool features we learned about in the new OGL and DX 11.2 will likely be seen in games for another 5-6 years.

carbonFibreOptik

The supposed ‘new features’ you speak of were under development by Microsoft during the early SDK phase of DirectX 10. Things like tiled resources were a Microsoft thing first, stolen and implemented by OpenGL coders for fame, standardized later, and then (and this is the important part) never incorporated into hardware-compatible solutions. In DirectX 11.1/11.2 all of these features are either directly processed on dedicated graphics hardware or have the ability to be easily loaded into hardware compute systems with little loss in efficiency.

Tiled resources in particular is a joke when done the software-only way as OpenGL requires. It takes an extra 20% longer to draw a texture than if you just brute-forced large textures using tons of vRAM the hard way. The original DirectX version though works flawlessly on the current set of cards (nVidia 700 series for example) and allows the benefits of almost 50:1 vRAM savings over traditional texturing methods.

Battlefield 4 on PC (and allegedly XB1, though that’s a flaky rumor) makes use of many of these DX 11.2 features. With them being ~standard~ on the XB1, I expect a huge rise of these features on PC versions of games as those features easily translate to a console when porting. We won’t wait half a decade to see these features. They’re being used in development-stage games as we speak.

I'm not trying to sound condescending or anything, just stating the facts. Ever since the new millennium OpenGL has been stealing ideas and cloning practices from DirectX more and more, if only to stay relevant. I'm actually hoping SteamOS makes the main contributors to OpenGL realize that they can and should do their own thing rather than mimic what's successful. I, for one, foster the idea of competition, so long as it leads to progress.
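For a rough sense of where a figure like 50:1 can come from, here is a back-of-the-envelope sketch (mine, not from the comment above; the 2% residency number is purely an assumption and depends entirely on the scene):

    # D3D11.2 tiled resources use 64KB tiles (128x128 texels at 32 bits per pixel).
    full_bytes  = 16384 * 16384 * 4                      # a 16K x 16K RGBA8 texture, fully resident
    tile_bytes  = 64 * 1024
    total_tiles = full_bytes // tile_bytes                # 16384 tiles
    resident    = int(total_tiles * 0.02) * tile_bytes    # assume only ~2% of tiles are ever sampled

    print(full_bytes / 2**20)       # 1024 MiB if brute-forced into vRAM
    print(resident / 2**20)         # ~20 MiB actually resident
    print(full_bytes / resident)    # ~50:1 savings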

Xplorer4x4

Isn’t Sony using a modified FreeBSD OS? So wouldn’t it be easier to use OpenGL?

carbonFibreOptik

They aren’t using DirectX, only emulating certain feature sets in hardware-level code (similar to AMD’s own Mantle API). Microsoft specifically stated they wouldn’t allow ‘the competition’ to use the most up-to-date version of DirectX in a console.

Sony uses OpenCL, but their wrapper is capable of DirectX 11 (though at significantly reduced efficiency). None of that overly matters since in another year or two virtually all AAA developers will be using the low level API instead of the DirectX/OpenCL wrapper for both consoles and it will cease to be relevant.

massau

I assume you mean OpenGL (graphics lib).
OpenCL (compute lib) is for computing and will not be replaced by Mantle.

eAbyss

I don’t know about OpenGL but they do use DX11.

tbss

PS3 did things 360 could not. No games on 360 look as good as Killzone, Uncharted, Heavy Rain, or The Last of Us.

massau

Killzone is PS3-only; that's why it had better graphics, because they optimised it only for the PS3. Like I said, if a company makes a game for the PS4 and Xbone then they will look the same (except maybe for the resolution). They will not make an extra investment in special textures and optimisation if they don't get paid for it.

Maybe if it is also ported to the PC there might be three different-quality sets of textures.

jay

PS3 and Xbox360 were apples and oranges when it comes to writing code.
PS4 and Xbox One both are like Pomegranate, PS4 has more delicious seeds in it. They have the same architecture, Xbox One OS can be “easily” ported to PS4 and vice versa, same can’t be said of PS3 and Xbox360.

All this talk of DirectX is irrelevant when both consoles’ game OS is already close to the metal (supposedly). It all comes down to how much headroom there is and what developers will do with it. Xbox One will hit the wall sooner than PS4.

This generation, we can easily tell which one is more powerful already.

Cheers!

carbonFibreOptik

Sony doesn’t like to optimize or develop new tools unless they absolutely need to. They’re infamous in developers’ eyes for handing out PS3 dev kits, a manual in 100% Japanese, and then saying “Figure it out yourselves.”

Microsoft is lauded for their developer support post-launch.

Saying the Xbox One will hit the wall first is a joke, right?

Matthew Bryant

Multiple developers have already stated that Sony's IDE is considerably better than Microsoft's right now. Not to mention Sony did wonders with the PS3 IDE after the first couple of years. You really have no idea what you're talking about. The 360 had its own problems with eDRAM post-launch. It took a good 2 years after launch for them to finally get most games to 720p, because 10MB wasn't enough capacity to fit an HD image. Developers had to learn tiling, and Microsoft had to refine the feature in their API. It wasn't all sunshine and unicorns for Microsoft.

carbonFibreOptik

Uncompressed, a 1080p image requires about 8MB of space. There were indeed 1080p games (though primitive; Earth Defense Force, for example) within months of the system's launch. The real issue was not the eDRAM itself but the developers misusing it, and that came down to a tools issue. Still a problem, but it wasn't inherent to the eDRAM's capacity.

Both systems had issues. Both current consoles do as well, to be blunt. However the PS2 and PS3 both have a history of Sony only waiting until an exclusive partner threatens to jump ship before they invest in required development tool updates (and in the latter case, system firmware updates). Microsoft has had their last two systems go on record as the most developer friendly by many studios, and the reasoning almost always was directed to Microsoft’s willingness to improve their platform. I can get behind Microsoft on that viewpoint, as a more attractive tool set and support line usually leads to more developers, and thus more games with which to rake in royalties. I’m actually baffled at how Sony has handled things in the past. It’s just bad business.

Basically, any massively impressive titles on the PS2 and most early-era PS3 ones were completely due to the developers and not Sony. Kudos to those guys, actually. That’s not an easy pedestal to climb atop.


https://twitter.com/xarinatan Alexander ypema

..Would still rather have a PC for the same price with pretty much identical hardware that does a thousand if not a billionfold more right out of the box than these OEM-crippled DRM players ever will in their entire lifetime. Bring on the fanboy tears.

C4

Well you can buy a PC AND consoles. I don’t see an issue here :-)

$500 without Windows and stuff will get you a nice PC. Towards $1000, a powerful gaming PC. Wait 2 years and it will provide much more power than the consoles (which by then will be $100-150 less).

https://twitter.com/xarinatan Alexander ypema

Why would I still get a console that does only part of what my PC does if I get a PC? I could spend that extra money on a PC that not only matches, but outperforms the consoles in terms of performance. For under 400$ I have a PC that contains almost the exact same hardware as these two consoles (AMD quadcore CPU, HD7850 GPU- which is faster than the one in the APU-, 8GB RAM, power supply, case, and 1TB ‘hybrid HDD’), without being restricted to the few overpriced OEM-endorsed game titles that’ll appear for it in the next few years. Not to mention I can upgrade it for cheap.

I don’t get consoles. They’re literally crippled PCs with a few extra (useless) parts to look shiny, and a trademarked name. They made sense in the 90s when they contained unique hardware that offered an edge over computers. In 2013? Not so much..

rahuldey85

You get consoles to play exclusive games. Example GTA5 is on the PS3 and Xbox360, not on PC. Or maybe Gran Turismo, Heavenly Sword, Tekken, DOA….the list is long.

https://twitter.com/xarinatan Alexander ypema

Which I only find more disgusting, and it gives me even MORE reason to boycott consoles, because those games would run just fine on any PC. The only reason 'console exclusives' exist is because people keep falling for that joke.

qsd

The list is long, and full of shit. 90% of console exclusives get press simply because they are exclusive, not necessarily because they are any good. The best games are all multi-platform (GTA5, in a few months). There are barely 2 exclusives per console that are worth playing. Hardly justifies spending 400+500=$900 (not to mention the more expensive games).

Also, I would rather support cross-platform games than encourage companies to keep games out of gamers hands based on the device they own.

Heartless Hero

I recommend you research before you make your arguments. GTA5 is coming to PC in 2014. It will also LOOK way better.

rahuldey85

I said it's not on PCs right now, not that it won't come. Read my post before making a stupid post.

C4

I see. During the whole console gen there are always some exclusives that make it worth buying for me, in addition to the PC :)

Another thing, as rahuldey85 said below: some genres, like fighting games, are rare on PC.

https://twitter.com/xarinatan Alexander ypema

You know what the difference is between a PC and a console? Software. That's all there is to it. Literally, if these consoles weren't locked down you could install Windows on them right now. Or Linux. Or even Mac OS X if you wanted to.

When a company says they will make their game a ‘console exclusive’, they’re really saying they’re going to put a special bit of DRM inside it so that you CAN’T run it on a PC, rather than ‘focus on building it for a console’.
You can do EVERYTHING, every last thing you can do on a console on a PC. Gamepad? Hell, just hook up your Xbox controller to your PC. Works fine. Want splitscreen? Just hook up two of them.

Consoles are a market of cartels and contracts between various companies that make a limited normal PC look special in order to sell games at a higher price, for a device that will get replaced in its entirety including ALL games written for it in a matter of years.
You know what happened to the games that were written in 1994 for the PC? I can still play them today, on my PC, same disk, same everything. Because PCs don’t get deprecated and forgotten, they get upgraded.
That’s why I boycott consoles. I hate the philosophy behind it and everything they stand for. If I buy a device I want to own it.

qsd

Why would I waste my money on a console when I have a more powerful device??

Matthew Bryant

Except this isn’t even remotely true. If you want identical hardware you’re looking at around $550-$600 to match the PS4. Of course, that doesn’t overly matter since optimization makes that more like $700-$800.

My sincere advice is that you flush $400-500 down the toilet, and pretend you bought a console.

Boudou

“As far as performance is concerned, this could well end up a tie; as the Xbox One should be able to access data more quickly, while the PS4 can stream sustained data far more effectively. “

This is untrue.

The XB1's ESRAM basically compensates for its use of DDR3 RAM, which has 68GB/s of bandwidth; the ESRAM has 102GB/s of bandwidth. The PS4's GDDR5 has 176GB/s of bandwidth.

What you have is a situation where you have to dedicate resources to get the maximum bandwidth out of the XB1, as you are working with a slow 8GB pool of DDR3 and a faster 32MB of ESRAM. The PS4 has one very fast pool of 8GB of GDDR5 RAM.
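For anyone who wants to sanity-check those figures, here is a rough sketch of where the quoted peak numbers come from, assuming the widely reported 256-bit external memory buses and the DDR3-2133 / GDDR5-5500 data rates (the 102GB/s ESRAM figure is Microsoft's own quoted peak and isn't derived the same way):

```python
# Back-of-the-envelope peak bandwidth: (bus width in bytes) x (transfers per second).
# Assumes the commonly reported 256-bit external memory buses on both consoles.

def peak_bandwidth_gbps(bus_width_bits: int, data_rate_mtps: float) -> float:
    """Theoretical peak bandwidth in GB/s for a given bus width and data rate (MT/s)."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * data_rate_mtps * 1e6 / 1e9

print(f"XB1 DDR3-2133, 256-bit:  {peak_bandwidth_gbps(256, 2133):.1f} GB/s")  # ~68.3 GB/s
print(f"PS4 GDDR5-5500, 256-bit: {peak_bandwidth_gbps(256, 5500):.1f} GB/s")  # ~176.0 GB/s
```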

The trade-off is cost. The RAM costs $28 more in the PS4, though the ESRAM makes the weaker XB1 APU cost $10 more, so a delta of $18.

One thing not mentioned is the APUs' ROPs, which number 32 for the PS4 and just 16 for the XB1. AMD puts 32 ROPs on all of its 1080p-targeted PC GPUs, meaning that the XB1 really wasn't designed for 1080p to begin with.

From a spec perspective, there really is no competition in regards to performance; the PS4 is a good deal ahead, and it has shown that in real-world gaming examples on multiplatform games. The XB1, on the other hand, is clearly focused on Kinect and cable-TV pass-through, which Microsoft feels is worth $100 more.

Joel Hruska

Access latency != bandwidth.

Boudou

Who's talking about latency? The issue here is bandwidth; it's the whole point of having that extra ESRAM.

It doesn’t matter though, there is a reason why any mid-high end graphics card uses GDDR5 right now.

Matthew Bryant

32MB of eSRAM is useless for most graphics applications. It can't even fit a 1080p image, much less more than a handful of 1080p textures at a time. It can't make up for 8GB of GDDR5. Ever. It helps, but minimally.

Cappernougght

ummmm….. WHAT?

carbonFibreOptik

Wait a year for the next cards to emerge and the first wave of DX 11.2 games to be churned out. You'll see games using only 500-700MB of vRAM on those cards, but producing results similar to 2-3GB engine/card setups.

If you want to brute-force huge textures back and forth in vRAM you might need loads of space as well as high bandwidth. If you use 1/10 the size per texture you need not only 1/10 the vRAM but technically only 1/10 the bandwidth (though some processes do benefit from higher bandwidth, like advanced AA).

Bandwidth means nothing if you optimize your data systems.

Matthew Bryant

DX 11.2 won't be used until Microsoft takes away the Windows 8.1 exclusivity. 90% of PC gamers are still using Windows 7. Why the hell would developers code for DX 11.2? They won't. I know Microsoft is trying to force everyone onto Windows 8, but this will backfire on them. DX 11.2 won't be used for the foreseeable future.

Also, you're referring to tiling, and your statement is complete and utter bullshit. No offense, but it is. Tiling only works when you're dealing with non-changing scenarios. It also has ridiculous pop-in. It's practically useless for any games other than flight sims (which is why they used a flight sim to demo it).

Bandwidth still means plenty. DX 11.2 won’t change that.

carbonFibreOptik

Pull facts out of sources and not your ass. According to Steam 64% of gamers use Windows 8, and there’s no reason not to upgrade to 8.1 since it’s free. I can’t help that you have an infantile attachment to an older OS. When DirectX 11 came out exclusively for Windows 7, most gamers made the switch from XP as well. According to you 154% of gamers use something newer than XP, so by your own goofy logic you prove that (more than!) every single gamer grew out of that “XP is still better” phase once a viable reason presented itself.

Tiled resources work perfectly fine in ~MY~ studio. The only time we ran into pop-in was when running in software on OpenGL, as I explained. DirectX now allows us to render everything in a scene, ~including dynamically generated textures and shader systems~, with no humanly perceptible pop-in. No offense, but your entire statement is bullshit when you haven't put your hands on the tech yourself and I have.

Bandwidth does mean plenty. Many changes in DX 11.2 make it mean less-plenty.

The bandwidth argument reminds me of uninformed gamers building PCs, namely in the early 2000s. The common phrase of the time was "More RAM is always better." The first part ever recommended for upgrade when a game was running slow was always more RAM. At the time no game used more than 2GB of RAM and your OS barely used half that as well, yet everyone exclaimed that 8 or 16GB was absolutely required.

The truth? You only need as much as you actually use. In the mentioned case, 4GB was more than enough, and you could focus on ~faster~ RAM with your savings rather than getting more.

Bandwidth is the same. Yes, you need a decent amount. No, you don't need more and more. Rather, you need to code such that you don't fall into an exponential trap and bottleneck yourself. Optimization unclogs the bandwidth bottlenecks and makes any unused bandwidth like unused RAM storage: dead weight. Yes, you may need to use it in the future, but you also may find a way to use even less at the same point. Effectively, the options when you need more are to just brute-force it and get faster pipelines (monetarily costly) or to bunker down and re-code more efficiently (time-wise costly).

So does more bandwidth mean anything? No if you optimize, and yes if you’re lazy. It’s really simple.

Joel Hruska

Who's talking latency? I am. The author. I'm talking about latency.

Caches are used to mitigate the impact of lower bandwidth memory subsystems. A memory controller capable of speculative prefetching can have information from memory queued up and ready to go while the CPU is yanking data out of cache.

In gaming and on the desktop, latency matters more than bandwidth. Graphics is a special case (GPUs have typically relied on high bandwidth, high latency connections), but clearly that 32MB of EDRAM serves as a buffer for both GPU and CPU.

The Xbox One is designed to hide its lower bandwidth DDR3 by relying on a large, high-speed, low-latency cache.

The PS4 is designed to maximize bandwidth by using a unified GDDR5 memory pool for CPU and GPU.

The PS4’s approach makes it simpler to program (a point we’ve addressed in other stories), but there’s nothing intrinsically broken about the Xbox One’s choice. Whether or not it turns out to have been the right choice? We will have to wait and see.
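To illustrate the point about a cache masking slower memory, here is a toy average-access-time sketch; the latencies and hit rates are made up purely for illustration, not the consoles' real figures:

```python
# Toy model: average access time with a fast on-die cache in front of slower main memory.
# All numbers are illustrative, not measured console latencies.

def avg_access_ns(hit_rate: float, cache_ns: float, memory_ns: float) -> float:
    return hit_rate * cache_ns + (1 - hit_rate) * memory_ns

for hit_rate in (0.0, 0.5, 0.9):
    print(f"hit rate {hit_rate:.0%}: {avg_access_ns(hit_rate, 10, 100):.0f} ns average")
```

The higher the hit rate into the on-die pool, the less the latency (and bandwidth) of the external DDR3 matters, which appears to be the bet Microsoft made.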

Boudou

You gotta be kidding me? It's amazing that someone with no technical knowledge is writing this article.

There’s a myth that every new memory format brings with it a latency penalty (That GDDR5 has more latency than DDR3). The myth is perpetuated by the method upon which latency labels are based: Clock cycles.

Consider: latency is a function of clock speed where RAM is concerned. At 667MHz, DDR2 has LOWER latency than DDR3 (7ns vs 10.5ns). DDR at 333MHz has a latency of 6ns. If latency were the issue, DDR RAM would still be used.

This is because cycle time is the inverse of clock speed (half the data rate). In real-world applications GDDR5 has equivalent latency to DDR3; it's only when you measure it as the inverse of clock speed that you get skewed latency figures.
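To make that concrete, here is an illustrative (not console-specific) sketch of how latency labels in clock cycles translate into absolute nanoseconds; the module timings below are typical retail values chosen purely as examples:

```python
# Illustrative only: absolute CAS latency in ns = CAS cycles / memory clock (MHz) * 1000.
# The timings below are typical retail-module values, not the consoles' actual figures.

def cas_latency_ns(cas_cycles: int, mem_clock_mhz: float) -> float:
    return cas_cycles / mem_clock_mhz * 1000

examples = [
    ("DDR2-800  CL5",  5,  400),   # ~12.5 ns
    ("DDR3-1600 CL9",  9,  800),   # ~11.3 ns
    ("DDR3-2133 CL12", 12, 1066),  # ~11.3 ns
]
for name, cl, clk in examples:
    print(f"{name}: {cas_latency_ns(cl, clk):.1f} ns")
```

The CAS number on the label climbs with each generation, but the absolute latency in nanoseconds stays roughly flat.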

Also, the XB1 uses ESRAM NOT EDRAM. This is a HUGE difference. If the XB1 APU was using EDRAM it would have a much smaller footprint.

For the XB1, the ESRAM itself isn't faster than the GDDR5 RAM. It's slower; even when you add ESRAM + DDR3 together, you're still getting sub-parity in speeds. Worse, you only have 32MB of ESRAM, and now you're juggling bandwidth around and having to move your data between multiple slower pools of memory.

With the PS4 you have GDDR5 that can be accessed by both the CPU and GPU; more importantly, with HSA, CPU and GPU memory resources can be accessed by each other.

Really, the proof is in the pudding: look at the multiplatform games. The XB1 is struggling by a factor of 50-100%. Those are real-world examples, not talk on paper.

That’s relative RAM latencies measured in nanoseconds. The idea that each generation of RAM has higher real-world latency isn’t a myth. It’s how things work. (Apologies for scale on that image).

Here’s why I maintain what I’ve said about the latency issue.

1). Cerny says GDDR5 latency isn't "particularly" higher on the CPU side. For the GPU, he notes that GPUs are designed to be latency tolerant. He doesn't say the two are equivalent, or deny that the GDDR5 latency is high on the GPU. This directly relates to 2).

2). The memory controller AMD uses for all of its GCN cards is *very* high latency. This article doesn’t show a GCN card, but let me promise you — I’ve specifically run these tests and pulled the figures. They *don’t* get better.

Even R9 290X has extremely high latency compared to NV hardware, Intel hardware, or even an APU. Now, it’s not clear what memory controller Sony is using for the PS4, but presumably they modified the GPU memory controller. Unless they seriously fixed the latency issue, it’s going to bite.

Finally: On-die cache should offer markedly better latency than off-die GDDR5. If the GPU has direct access to that 32MB of cache, it *better* be faster than accessing off-die RAM — if it isn’t, MS really screwed something up.

3). The only reason for MS to implement a giant cache on-die is if they thought it would give them some sort of advantage. That’s common sense. They didn’t build it because they thought it would look pretty.

Absent the kind of benchmark data I doubt we’ll ever be able to see, it stands to reason that the Xbox One should offer better access latencies and the PS4 obviously has an advantage in sheer bandwidth. The PS4 is also going to be easier to program, another mark in its favor.

Boudou

We know why Microsoft went with the DDR3+ESRAM route: because they needed 8GB of memory to run 3 operating systems, the snap function, HDMI-in pass-through, and Kinect.

Sony's original plan, according to Digital Foundry, was to go with 4GB of GDDR5 RAM. What happened was that GDDR5 prices dropped enough to push in 8GB.

Still, the GDDR5 RAM in the PS4 costs $28 more than the DDR3 in the XB1 according to the IHS iSuppli breakdown. Given that the bill of materials of the XB1 is already high with the Kinect, going with cheaper DDR3 is desirable.

So the reason that MS went with DDR3 is COST not latency.

As far as latency itself and its impact on performance, we see several examples of DDR3 and GDDR5 performance differences and they are huge.

The problem with having multiple pools of memory is that data needs to be read-written-refreshed. The bandwidth itself is 102GB/s. Meaning that the ESRAM is only acting as a scratchpad on the XB1.

The puny 32MB size of the ESRAM is also a problem; if you look at Intel's Haswell setup, it uses DDR3 plus 128MB of on-package cache.

And outside of budget graphics solutions, every other performance solution is moving to GDDR5. Which is why AMD’s CPU/GPU Kaveri is moving to GDDR5 as well.

So AMD, too, is moving to GDDR5 for its future APUs.

Joel Hruska

You can’t compare against Haswell’s L4 cache — that pool of EDRAM *is* an L4, not just a buffer for the graphics card. I don’t think there’s any evidence to suggest the 32MB buffer on the Xbox One is analogous, though I agree it’s not clear exactly what MS uses it for (and yes, they try to fudge the bandwidth). I’m not convinced it’s actually a contiguous block of cache — they describe it as 4x8MB in their own documentation, which means hitting max bandwidth may involve striping data across all four cache blocks simultaneously.

Point of order: I’ve seen Kaveri hardware. There’s no evidence of Kaveri “moving” to GDDR5. AMD’s FM2+ platform for Kaveri is still a DDR3 platform, and there’s no GDDR5 on-die with the APU.

Boudou

The reason I bring up Haswell is because Intel knew that they needed 128MB of on-die cache. 32MB is insufficient, and the uses for it become very limited.

Think about the memory required for a 1080p frame: 1920 × 1080 = 2,073,600 pixels, and at 24 bits per pixel that's (2,073,600 × 24)/8 ≈ 6.22MB with no AA. For 60 frames, that's 373MB of data (again, without AA) that you have to be copying in and out of the ESRAM every second. This small ESRAM size is a large reason (along with the 16 ROPs) that a lot of games are running at 720p.
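For reference, the same arithmetic as a quick sketch, assuming an uncompressed 24-bit color buffer with no AA (decimal megabytes, to match the 6.22MB figure above):

```python
# A 1080p frame at 24 bits per pixel, no AA, and the resulting per-second traffic at 60 fps.
# Uses decimal MB (10^6 bytes) to match the figures quoted above.

width, height, bits_per_pixel, fps = 1920, 1080, 24, 60

frame_bytes = width * height * bits_per_pixel // 8
print(f"One frame: {frame_bytes / 1e6:.2f} MB")             # ~6.22 MB
print(f"At {fps} fps: {frame_bytes * fps / 1e6:.0f} MB/s")  # ~373 MB/s
```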

Haswell uses 128MB of cache because Intel figured that for proper 1080p gaming, it's a bare minimum.

Yes, Haswell is EDRAM. As I said above, EDRAM is much smaller than ESRAM; EDRAM is roughly 3x denser than ESRAM. My point was that MS should have gone with a larger EDRAM rather than a smaller ESRAM for the XB1. The 360 had 10MB of EDRAM, if you remember.

In terms of Kaveri APU, GDDR5 will NOT be, never will be, “on-die”. GDDR5 (or DDR3) is never integrated onto the die itself.

There was initial rumbling of an FM3 Kaveri APU moving to DDR4/GDDR5. Which would make sense. Again, DDR3 is a huge bottleneck for APU performance (this is the reason why MS added the ESRAM).

AMD’s APUs don’t have ESRAM like the XB1, and are more similar in design to the PS4. GDDR5/DDR4 is clearly the way to go.

Joel Hruska

As far as Kaveri goes, I can tell you now, having seen the shipping hardware — Kaveri is FM2+ on DDR3-2133. AMD is still mulling the DDR3-2400 question — I think we’ll see it shipping in half-supported mode.

AMD has not unveiled any plans to move to DDR4, or even shown a CPU/chipset map out past December 31, 2014. DDR3 is a huge bottleneck for integrated GPU performance, but that's *always* been true. The 68GB/s of bandwidth on the Xbox One is puny compared to the PS4, but it's 2x what Kaveri's GPU will have.

AFAIK, the only mainstream desktop part on DDR4 next year is Haswell-E. I think even Intel’s mobile chips are staying on DDR3 through the end of 2014.

Richardo

They couldn't go with eDRAM because of Global Foundries' inability to fabricate eDRAM on the 28nm node. Nintendo was able to because Renesas Electronics could fabricate it on the 40nm node. Could TSMC? Perhaps, but there may have been a number of reasons why TSMC was not chosen.

@Joel It's not just about what technology would be optimal, it's about what your business partners are able to provide.

Boudou

You seem to be confusing a lot of different information.

That image you cited shows memory cell cycle time in ns. There are very different types of latency where RAM is concerned: CAS, RL, RL-tRCD, etc. Latency in RAM is not a single value.

Also, the GDDR5 memory in the PS4 runs at 1375MHz (5500MHz effective), while the DDR3 in the XB1 runs at 1066MHz (2133MHz effective). So the GDDR5 is indeed running at a far higher clock speed than the DDR3.

BUT. Increasing clockspeed actually INCREASES latency.

And comparing DDR3 and GDDR5 latency based on clock speed is the greater problem you're encountering; they are fundamentally different types of memory.

The image you cite exactly supports an argument I’m making. In it you see DDR4 performance relative to DDR3. The thing is DDR4 actually doubles the latency over DDR3.

Basically, what you have is the fact that RAM is getting more complex. The reason you see RAM performance skyrocketing as latency increases is because the controller handles that complexity. Latency in itself doesn't have an impact on performance; it's a red herring, part of the larger design of the memory itself.

The reality is latency isn’t a major factor in performance difference between the XB1 and PS4. If MS could afford it, they would have gone with GDDR5 over DDR3+ESRAM, but they needed to keep cost down due to the inclusion of the Kinect.

The real performance gap is the lack of GPU compute cores, and half the ROPs. No amount of latency is going to make up for real hardware deficiencies.

It sounds like you’ve mistaken a point of mine, however. Looking back, I could’ve worded this better.

When I said that performance could end in a draw, I was referring specifically to memory subsystem performance, not total system performance.

Even if the Xbox One has a latency advantage (and I still think it probably does), that doesn't mean the Xbox One is the faster (or even 'just-as-fast') platform. Even if there's an offset between those two characteristics, the PS4 also has (though I believe this is unofficial) more ROPs, more TMUs, and more total bandwidth. GPUs generally *aren't* particularly latency sensitive.

They dropped it due to the added complexity. As Sony has learned with the PS3, non-first party developers aren’t going to invest in developing exotic hardware.

As far as DDR4 goes, these CPUs/GPUs are going to have to move to DDR4 and higher-latency memory. They can't stay on DDR3 forever.

However, latency as a whole is not a singular factor. There are latencies between each part of the memory module; it's the design that overcomes this. The same thing happened from DDR2 to DDR3 as latency went up there too.

In regards to the PS4, I personally think that when DDR4 prices are good, and speeds are fast, the PS4 can move from GDDR5 to DDR4 as long as the technical speed requirements are met.

Down the road this could be a huge advantage in terms of price; last-gen memory prices go up as it gets phased out. DDR3's days are numbered.

Joel Hruska

I would be very surprised to see Sony change something so fundamental about the core architecture post-launch. Heck, I think it’s just barely possible that they might tweak clock speeds or enable the compute units that are currently disabled at some further date. Overhauling the memory uarch would be a much bigger deal.

Boudou

I think we've seen the end of exotic console hardware from Sony. The Vita is basically a common ARM chip with a PowerVR SGX, and the PS4 is an AMD x86 APU without any fancy frills like ESRAM.

I would expect a PS5 to follow that x86 route and have more logical continuity in their hardware, like Apple's iPhone 3, 4, 5, etc. There doesn't seem to be any reasonable way that Sony will revert back to odd custom hardware like Cell, the Emotion Engine, etc.

For this reason, I think Sony has left a good deal of fungibility in their components and supply chain. I don't think the APU will change, or any part of the core logic, but a bigger, faster HDD, and possibly faster RAM, is plausible; that is, if all the minimum specs of the GDDR5 are met.

Games wouldn't run faster; it would merely be a substitute years down the line when GDDR5 is no longer mass produced. It can get difficult and costly to source old memory. So it's really just a supply chain benefit.

Joel Hruska

I think you’re right about the lack of custom hardware. In the case of the Xbox One in particular (since it runs a version of Windows that’s virtually guaranteed to be based on the same W8 kernel), the difference is now entirely software based. On the Xbox, that’s an even thinner gap.

But Cell was a really odd duck. Prescient in some ways — you could make a strong case that Cell’s SPEs were a forerunner of programmable GPUs (and I believe the Frostbite engine uses them for graphics processing). But overall, Cell was always tricky to program. Moreover, for whatever reason, all the money Sony sunk into its development didn’t give it a huge leg up over Xbox last cycle.

Unfortunately that's the largest size I can find, but it's far from mythical. Mark Cerny has said that the PS4's CPU memory latency isn't particularly higher, and that GPUs are "latency tolerant." That's not the same as saying there's no difference.

2). AMD’s GCN memory controller has really high latencies compared to NV. Far higher than what we see on an APU, even. Even on Hawaii, memory latencies are 200 – 500ns, compared to half that or less for NV cards.

3). MS didn’t build a 32MB ESRAM cache (I’ll fix the EDRAM reference) because they thought it looked pretty. It’s common knowledge that it compensates for the relatively limited bandwidth of the DDR3-2133 controller.

When I look at the Xbox One, I see an APU backed up by a fast, low latency cache, followed by a lower-latency DDR3-2133 memory bus with relatively limited bandwidth compared to the competition.

When I look at the PS4, I see a higher latency, higher bandwidth design that’s simpler and likely easier to optimize.

What’s the final real-world gap? Don’t know. Probably never will.

From the Aether

Sure, theoretically the PS4 GPU looks to be better, but where I'm not so sure is how the bus contention of several units will affect the real performance. Framebuffer DMA, shaders, CPU, audio and IO all access the GDDR5 randomly, and each time it needs to start a new memory burst sequence. Who can really say, outside AMD and maybe MS/Sony, how much stalling this leads to in the GPU or CPU? At the moment I "assume" that the PS4 can reach higher FPS but the XB1 might be better at generating stable FPS.

Andrew

One thing that is consistent about the Xbox One is that cross-platform games are running at 720p @ 30 fps, while the PS4 is running practically all of these games at 1080p @ 60 fps. And where some instability was observed, such as with Call of Duty, it was later learned that the game's graphics engine was at fault, demanding a lower frame rate than what the PS4 was attempting to generate. In other words, the PS4 was overpowering the game's modest requirements.

But Forza 5 on the Xbox One has given some gamers hope that 1080p @ 60 fps is achievable for most games in the future. The problem is that Forza 5 has so many static images, like buildings, roads, cars, and embankments, that not much graphics power is needed. So this thinking about the Xbox One is as bad as imagining that creating new worlds in the cloud will somehow enhance the gameplay. The cross-platform games require more graphics-intensive processing, such as dynamically generating waves in the oceans, clouds, explosions, battles, and scenes such as having airplanes fly between clouds; and it is in these instances where the Xbox One stumbles. Developers are already showing that the simplicity of developing games for the PS4 is allowing them to showcase their games at their best, and that they will not take the unfair decision to limit a game's performance on the PS4 so as not to make the Xbox One look bad.

Boudou

I think eventually, the XB1 will get 900-1080p more frequently than it does now, but the 16 ROP situation will probably limit them more than the lack of compute units.

As the Digital Foundry breakdown went into, Forza 5 really isn’t much of a feat being that it has baked-in lighting and cut-out audiences. There are clear sacrifices that were made to get 1080p60.

As you've said, the biggest issue I see is the difficulty of programming for the XB1. This is the same mistake Sony made with the PS3. Exotic hardware is a disadvantage.

I can't imagine that if the XB1 ends up with a lower installed base than the PS4, many developers will put in the resources to get the most out of the system. Even the expensive and late PS3 did well in continental Europe and Asia; if the XB1 loses those markets plus North America and the UK, it's going to be an uphill battle to get developers to devote larger resources to making a game look good on a system with a smaller installed base.

Jaedre

Ghosts is 720p/60 on the X1, but on the PS4 it's 1080p with a non-stable framerate of 60 that drops as low as the 30s.

Andrew

This was the early report, but subsequent reports have revealed that Call of Duty's graphics engine is old and that the drops are due to developer oversight. So the game's old graphics engine is being updated to better interface with the PlayStation 4's graphics processor. Not trying to be condescending in any way, since we are all prone to making mistakes, but one thing I try to do when given a piece of information is to use common sense. The fact that the graphics hardware on the PS4 is 50% more powerful (i.e. faster) than the Xbox One's, and the fact that no other game has experienced instability while running on the PlayStation 4, is a strong indication that the problem is most likely not with the PS4. And the facts have shown that the problem is indeed with the Call of Duty game. Thanks for bringing this point up, because most gamers may not be aware of the latest news on this issue.

Matthew Bryant

That's not true. The framerate does drop to the high 40s at some points, but the major reason you see a non-stable framerate is that the PS4 is often actually pushing framerates higher than 60 fps. Digital Foundry addressed this already. There's supposed to be a patch that fixes it. We'll see what happens. Ghosts is poorly optimized on both systems. Battlefield 4, on the other hand, is 720p on the Xbox One and 900p on the PS4. It also runs at a more stable framerate on the PS4 (though both consoles have dips). Either way you look at it, the PS4 is obviously doing better right now.

carbonFibreOptik

Infinity Ward outright admitted that they didn’t properly optimize the XB1 version. The PS4 version simply required less optimization due to more standardized system architecture.

The PS4 edition has no issues related to ‘overpowering’ the engine. It’s a legit issue in the way they accessed the dynamic vertical sync function of the GPU. When the game needed vSync it didn’t trigger properly, and similarly had a large delay in disengaging. There’s apparently a patch in the works to fix that.

Matthew Bryant

That doesn't explain the differences with Battlefield 4, or why the difference is so large between the Xbox One and PS4 with COD Ghosts. The PS4 version has 125% more pixels. That's a huge difference. Even if they completely ignored the eSRAM, the difference shouldn't be that large on similar hardware. I'll agree that the Xbox One is capable of more, but their API is a mess right now. They need to get their IDE in a better state, and then we'll see a bit more parity. The PS4 will still beat them out. That won't ever change. It has more powerful hardware, period. I think going forward we'll see more of a difference in effects rather than resolution, though. Effects make more of a difference visually anyway.
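(For anyone checking the "125% more pixels" figure, it's just the ratio of the two frame sizes:)

```python
# Quick check of the pixel-count gap between 1080p and 720p.
p1080 = 1920 * 1080   # 2,073,600 pixels
p720  = 1280 * 720    #   921,600 pixels
print(f"{p1080 / p720:.2f}x, i.e. {(p1080 / p720 - 1) * 100:.0f}% more pixels")  # 2.25x, 125% more
```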

Battlefield 4 on Xbox One looks almost picture-perfect next to the PC edition, just running at 720p. The PS4 edition has mismatched textures, blurred surfaces, simplified physics (namely explosion particle simulations and 'flickering' fire sprites), fewer surface detail shaders (displacement reduced to normal mapping alone), lower-quality ambient occlusion (less accurate, depth-independent scale), and the list goes on. The equivalent is something like a mix of medium and high settings on PC, but with the texture mismatching still unexplained. So is there an explanation for how they got more resolution out of the PS4 edition? Yeah, they cranked down the settings. The art style carries the game most of the way, but a discerning eye for detail (and likely a PC player anyway) would have good cause to complain.

COD was not fully optimized on the Xbox One in time for release, as Infinity Ward admitted directly in a press release. Since both editions look largely the same apart from resolution, I can only speculate that the Xbox One version could indeed have been able to render at a higher resolution if given the time to optimize properly.

Does Microsoft need to improve things on their end? Yes. That’s my whole point. Optimization, be it by the hardware developer or the software developer, is always a major factor in console output capabilities. Raw power on the other hand has proven to at best be a quick bandaid for a long term issue rather than a full solution, as the PS3 best represents. Even with more potential power to tap, it still took better tools and optimization before the PS3 started regaining market share via proper exclusives. Sony proved that more power and no support can lead to years (three!) of stagnant development when your weaker competition is making their lower power quotient get more bang for its buck.

Mikeherp Derp

Completely wrong, stupid, and delusional.

The Order is now easily the best looking next gen game, and PS4 multiplats consistently run better by a significant margin.

Cappernougght

Not to mention the huge amount of processing power needed on PC to run it at 1080p 60 FPS due to the poorly programmed engine.

carbonFibreOptik

Yeah, that really was surprising. Assassin's Creed IV also ran horribly unoptimized on PC (AC3 settings ran well, but the new filters and effects could single-handedly take my GTX 770 to half the framerate).

This isn’t too unexpected though. These games are running preliminary engines for next-gen. It’s like they were ported to the new hardware (or PC) after they were made for the older consoles. In six months we’ll start seeing more engines dedicated to DX 11.1/11.2 and the new consoles. It’ll get better in time.

Just look at Just Cause versus Just Cause 2. One ran amazingly slow, the other was rather optimized, and both used the same base engine. :/

Mikeherp Derp

Correct. The Order is now easily the best looking next gen game, and PS4 multiplats consistently run better by a significant margin.

The X1 is actually able to access both the eSRAM (150GB/s real world) and DDR3 (60GB/s real world) at the same time, not one at a time like you say, so its total bandwidth is both of them combined. This provides more than enough bandwidth for the X1, which only requires about 120GB/s.

Also, the low latency of the DDR3 gives the CPU better performance than GDDR5, and the CPU is clocked higher, both of which reduce the CPU bottleneck with the GPU (also clocked higher). Then there's the audio block (512 channels vs 200 on the PS4, which offloads much of it to its slower CPU) and the move engines (4 of them; the PS4 doesn't have any), which eliminate almost all of the overhead from data transfers between the CPU, GPU, and memory, freeing up clock cycles for more important things.

Mikeherp Derp

Bullshit. The Xbox has lower memory bandwidth in real-world game performance. Idiot napkin math ("just add the numbers") is useless.

David Herrera

Well the Steam box is better!!

Guest

Why would I waste my money on a console when I have a more powerful device?

smithj33

Games?

Steven Richardson

PlayStation fanboys must be so used to getting egg on their faces over the last 6 years, and it's about to happen again. lol

Matthew Bryant

It’s hilarious when people who don’t understand the article at all try to play fanboy in the comment section.

Steven Richardson

I totally agree, and reading your post above, you don't have a clue. It's like saying all 8 megapixel cameras are better than all 7 megapixel cameras; that just isn't the case.

Matthew Bryant

Your response to my logical argument above is a fallacy of false analogy comparing resolution to memory bandwidth? Try again. Only this time either address my entire argument with a logical response, or don’t bother. We know the specs of the consoles. The PS4 has better specs. Unless you have proof that the Xbox One runs faster in benchmarks (and it won’t since the PS4 has the more developed API right now) then there’s no logical reason why the PS4 won’t outperform the Xbox One. You’re dealing with mights, I’m dealing with the most likely scenario. Especially since third party games seem to agree with me. Keep trying though.

Steven Richardson

Your argument is based on picking numbers in a Top Trumps style to suit the PS4. You are forgetting that the APU for the Xbox One is more expensive than the PS4's; why is that, we wonder? Maybe it's better designed for developers and DirectX. And where's your benchmark proof??

Mikeherp Derp

LOL. It’s more expensive therefore it must be better! Flawless logic!

Imbecile.

Millenia Blue

Logical argument?! HA! Sony fanboys like you don’t even know the meaning of that!! and your HATER LEVELS IS OVER 9,000!!!

Mikeherp Derp

The Order is now easily the best looking next gen game, and PS4 multiplats consistently run better by a significant margin.

Ugh, more ridiculous equality articles. There is no tie when it comes to RAM. GDDR5 is MUCH better when it comes to most graphical uses. Sure, the Xbox One will be able to multitask better, but that’s not needed when you’re playing one game. The PS4 will not only stream textures and reflections faster, but it will stream more or higher quality ones per second. It makes a huge difference. No amount of make believe will change that. Stop pretending they’re equal. There’s a reason why GDDR5 is used in graphics cards. It’s not because eSRAM is just as good. No. Seriously. It’s not.

The PS4 has a better GPU. That just exists. You can’t make the GPUs equal each other. They don’t. The Xbox One has more overhead for Kinect. That gap will narrow over the generation, but it will always be there. There’s nothing equal here. The PS4 is better for gaming. End of story.

I have no idea why people pretend otherwise. Just grow some balls and admit it. Stop placating Microsoft here. Maybe apps or TV will take off and they’ll do well there, but as far as games they’re at a hardware disadvantage. That’s the cold hard reality. Also, SSD won’t become the norm anytime soon. Most consumers have no idea what SSD means, but they do understand hard drive sizes. I’d also rather have the extra storage capacity over faster load times. Especially at a quarter of the price.

Tom Sawyer

As for your "there is no tie when it comes to RAM" argument, you obviously didn't read the article thoroughly. The XB1 GPU design was created to take advantage of DDR3's lower latency, which is why they intentionally didn't opt for GDDR5. Sony, on the other hand, went for the full-throttle (albeit higher-latency) setup and designed their rig around that. Due to those specific architecture choices, they are roughly equivalent. The One is faster off the line (it has an extra 32MB of ESRAM over the PS4, and a higher clock speed: 1.75GHz to Sony's 1.6GHz), and the other has a higher top speed, to use car parlance.

As for the custom GPU, again, your argument is flawed. The GPUs are both custom AMD Radeon setups. And while Sony's custom chip has a slight processing advantage (more compute units), the MS design ensures the "low latency" tasks of the XB1 can be offloaded to Azure; and since the system's architecture was DESIGNED to take advantage of latency, it effectively evens out in the end.

Whatever slight disadvantage MS has in this product is far more slight than it would generally appear. I dare say the people at Chipworks, ExtremeTech, and others are more knowledgeable about the subject than you are. Why not just enjoy gaming and agree they are fairly similar and on par, instead of the whole "mine is better" tantrum? That said, we agree on your point about solid-state drives.

Surging

Sony solved the "latency" issue with an added shared memory controller, therefore the latency has been reduced to almost nothing. Just wait another year or 2; the PS4 will be OP against the Xbone, sending M$ to the hall of shame.

According to Thurrott, the PS4 has periods of slowdowns and gets choppy.
MSFT did their homework, Sony failed.

Mikeherp Derp

LOL. Delusional idiot.

ramonzarat

LMAO… 50% more shaders and twice the bandwidth = same performance? In what universe do you live? No voodoo black magic ESRAM or wishful thinking could EVER make up for this huge gap, period. Besides, ESRAM is a pain devs must cope with, whereas the PS4 can enjoy near-straight ports from PC and vice versa due to practically identical architecture: x86 code, shaders and GDDR5.

The very simple and undeniable argument is this: when it comes to mid-to-high-end dedicated GPUs for gaming on PC, how many of them use DDR3 or an onboard ESRAM cache? The answer is a resounding ZERO.

The XB1 is a jack of all trades, master of none, especially when it comes to pure gaming, which is the essence of what defines a video game console. The other easy argument is that the XB1 must already compromise on resolution, as low as 720p on current titles, just to keep 40-60FPS, exactly because it lacks the required horsepower. What will it be 2-3 years down the road when BF5 and GTA VI come out? And in 4-5 years? 720p at low/medium quality settings on every game, that's what.

Just buy a PS4 now and wait 2-3 years to get your XB1, without the NSA-approved spy cam, for $99 in the discount bin at Walmart, to play the handful of REAL exclusives that will never be published on the PS4.

Millenia Blue

HATER!!

Mikeherp Derp

Yup. Called it.

Guest

I don't think that accessing some data a bit quicker will mean squat once devs start using GPGPU on the PS4. The lower CU count on the XB1 will prove to be a hindrance.

Mikeherp Derp

True.

Jessie Bristol

The Xbox One can do GPGPU as well, except it has more bandwidth available to do so. The PS4 can do it at 19GB/s whereas the Xbox One can do it at 30GB/s.

Folks, I am not tech savvy enough to comment on the specific hardware comparisons, but I will say this: MS did not invest hundreds of millions of dollars to lose out to Sony over a few bucks in the hardware choices that were made. The XB1 is a very complex machine which will evolve into what MS intended it to be. What that is may be known to some insiders, but to the majority of consumers it will become clear when the gaming software becomes refined to the true capabilities of the hardware/software. My guess is that the XB1 in 1-3 years' time will surpass the PS4 in the gaming software sphere due to the XB1's hardware/software capabilities. Sony's hardware choices were partly intended to give the impression of a more modern and advanced console, due in part to the fact that MS appears weaker.

Mikeherp Derp

Stupid and wrong “MS couldn’t possibly have thrown gamers under the bus!” assumption. They did, deal with it.

The Xbox One has 14 CUs, but to balance their system they saw a bigger improvement from upclocking versus unlocking the 2 extra CUs. Sony has 18 CUs, but 4 are separate. The system is balanced for 14, and the other separate 4 are there if devs want to access them. 1+1 does not equal 2 in console architecture.

The reason Xbox got more performance with an upclock vs 2 more CUs is because it’s bottlenecked by its 16 ROPs, while PS4 has 32 ROPs.

That leaked slide is a suggestion, not proof that 4 CUs are unable to be used for rendering. Cerny denied 14+4 in an interview.

"Mark Cerny: That comes from a leak and is not any form of formal evangelisation. The point is the hardware is intentionally not 100 per cent round. It has a little bit more ALU in it than it would if you were thinking strictly about graphics. As a result of that you have an opportunity, you could say an incentivisation, to use that ALU for GPGPU."

As we're now discovering, what's on paper doesn't equate to actual performance, and it comes back down to the same old argument that the PS3 is more powerful than the Xbox 360… but games looked better on the latter… essentially Sony marketing BS!
Comparisons of Thief on the Xbox One and PS4 showed that the latter had inferior anti-aliasing (it was actually really bad), which resulted in noticeable jaggies, and also that the PS4 had issues loading textures, which made the game look very low-res.
Revelations that Sony was throwing around the term 1080p to show Killzone Shadow Fall running in Full HD turned out to be a partial lie. The multiplayer section of the game actually runs at 960×1080 resolution… which is basically the same as 1280×720… keeping the 1080 bit meant they could maintain the lie.
Twitch broadcast resolution also shows the PS4 up even more, with a rather unimpressive 960×520 (yes, FIVE TWENTY) resolution, which is much lower than the HD resolution the Xbox One broadcasts in.
I'm not really impressed that Sony keeps hanging onto full HD like it's THAT meaningful. It isn't; games that look good, run well, and are actually great to play are what matters. For Sony there are no good games, and anyone who says that cross-platform games look better on the PS4 is delusional. Watch the comparison videos from a neutral standpoint, and you'll see all that extra power equates to little or no difference, and sometimes even worse image quality!!

Duke

Mate seriously. You are so full of crap it’s literally seeping from your pores.

1. Yes, early on with 360 and PS3, the 360 had the better quality games. But it was well noted that games developed exclusively for PS3 had graphic levels that 360 games could not match and late into the generation, after developers learnt tricks and techniques on how to develop for the Cell architecture, the PS3 began to get the better quality games graphically (minus a few, like Skyrim) or there was no discernible difference.

2. Thief is 1 game, 1 game that was very poorly optimized for both consoles and released before it should have been. It was released to try to make money as soon as possible.

3. Don't kid yourself about the Killzone Shadow Fall graphics. It took 4 months for anybody to notice it. 4 MONTHS! of people playing this game online. If it were basically the same as 720p, the difference would have been far more noticeable than you are making it out to be. Guerrilla Games used a complex technique to pull this off, and I guarantee it is a technique we will see more of this generation on both Xbox One and PS4.

Twitch may broadcast at 520p. But that’s because it requires a far lower bandwidth to broadcast. Not everybody has the upload speeds required to broadcast it, nor the download speeds to stream it at 720p which requires a much higher bandwidth. You are also kidding yourself if you think that 520p is the set resolution that it will stream at, all this generation. A firmware/software update will come to bolster that.

Mate, I am sorry, but you are the delusional one. One game that runs poorly, but more poorly on the PS4 than the Xbox One, and one game where the resolution in only the multiplayer section was so hard to distinguish from 1080p that it took 4 months to notice, and all of a sudden it's as if all PS4 games run worse than, or the same as, their Xbox One counterparts.

Games like Assassin's Creed, FIFA 14, Battlefield 4, Call of Duty Ghosts, and NBA 2K14, which were the major multiplatform games, have all been shown and proven to run better and look better on the PS4 than on the Xbox One. Metal Gear Solid V: Ground Zeroes is the next major multiplatform game due for release, I believe, and the differences are astonishing. The Xbox One version looks barely better than the 360 version, and the PS4 version blows all other versions out of the water. Hideo Kojima himself came out saying just that… at the risk of losing sales.

On paper the PS4 is more powerful, and it has been proven early on. That isn't to say this will be the case all throughout the generation. Hell, it could be an entire role reversal where eventually the Xbox One versions look best in a couple of years. On-paper specs mean a crap tonne. Hence why a GPU like the GTX Titan, GTX 780, or GTX 780 Ti craps all over the GPUs in the PS4 and Xbox One. They have far more power in them, hence why they cost more than the actual consoles themselves.

Please seriously look at both sides of the picture. I have no issue with your purchase of a Xbox One, please don’t have an issue with my purchase of a PS4. We are gamers, we love games. I plan to add a Xbox One to my collection of PC, PS4, PS3, PS Vita, 3DS and 360 later this year because there are some good games coming.

Like holy crap man, going through your comment history goes on to show just how much of a Microsoft fanboy you are. It seems in your mind they can do no wrong.

SMH

It's a custom Kaveri chip btw… not Jaguar.

SMH

It's a custom Kaveri [NOT KABINI] chip with fully hardware-coherent HSA… hence why you can't see or fathom where or why those lil "RED BOXES" are there… chump.

Guest

There's a second APU on the SoC…

AlphaX

I have both a PS4 Day One and an Xbox One Day One, running on a Sony 65-inch XBR W850 through a B&W sound bar, all connected with AudioQuest Chocolate HDMI cables, and I have to say both systems look great! I have Forza 5 and it looks beautiful. DR3 looks good for only 720p. I have been playing Need for Speed on the PS4 and the game looks very pretty! Both systems look great, but all the games I'll be getting in the future I will be getting for PS4. I like the Xbox One, but the PS4 just feels better.

AlphaX

I have been a long time PC gamer, building my own high end computers. My PC setup is i7 980x 6core @ 4.4Ghz 12GB Ram @ 2000Mhz with Two EVGA GTX Titan Black series GPU cards in SLI on a Dell 27inch Ultra LCD so I can max out all my games at 2560×1440 and I still love gaming on my PS4 Xbox setup.
