...If the specs had been different and the console had come with a high-end or even a mid-range AMD GPU, then the situation would be different.


Seriously, are you kidding me?
By that logic you should open your console's cover and cram a high-end GPU, a six-core Intel CPU, a high-end motherboard, and 32 GB of RAM inside the console box.

FACT 7: PCs will maintain higher visual quality and frame rates; consoles will have the graphics of a two-year-old PC.

This thread is full of ignorant people making false and misguided statements; the only people here with brains are Benetanegia and BigMack70.

First, you morons need to read some facts. Here they are:


FACT 1: Not even high-end PCs can sustain 60 FPS at 1080p in demanding games with maximum graphics; games like Metro 2033, ARMA 3, Crysis, Dragon Age 2 and many more will drop below 60 FPS on many occasions.


On many occasions, yes, but the AVERAGE is what matters, bro. A dip to 50 fps for a couple of milliseconds doesn't matter.

FACT 2: Consoles include an insane amount of code optimization, where every CPU/GPU cycle is utilized. They practically run on machine code, the lowest-level language of software programming, unlike the PC, which relies on many compilers and higher-level languages that waste valuable cycles.


You said it yourself: heavy optimization. That's why consoles from 2005 are able to run today's games, and the same reason a console from 2013 (PS4) will be able to run games up to 2020 or so.

FACT 3: Even with these optimizations, consoles run with shitty graphics. They can't even maintain 30 FPS at 720p; they usually drop to 25 or 20 FPS, and they even render below 1280x720, sometimes as low as 900x600!


Because they are pushing the console to its limit, trying to run a modern game on a seven-year-old machine, and it still looks decent enough for console gamers.

Not severely. Almost all console games are the low-to-medium-graphics counterparts of their PC versions; High/Ultra is just additional eye candy for PC users. Most PC players aim primarily for raw performance and flexibility.

They will barely run today's games at 1080p at 60 FPS with PC-level graphics; all of the code optimization will be spent on the cost of the resolution increase (to 1080p) and of the graphics increase (to PC level): shadows, lighting, textures, etc.

If the specs had been different and the console had come with a high-end or even a mid-range AMD GPU, then the situation would be different.


You said it yourself again: "PC graphics level". This is a console, bro, and it's not running at PC graphics level. Factor in all the heavy optimization and the strong possibility of a mid-range AMD GPU, just as the PS3 had a mid-range GPU, and you can do 1080p at 60 fps on average.

FACT 6: These consoles will have to do the usual dirty business to run at 1080p: cut resolution and upscale, and reduce all graphics elements (lighting, shadows, textures, etc.) below the future PC level. This happened to the previous generation too; the 360 and the PS3 started out at 720p just fine, then had to cut corners to increase graphics, otherwise the visuals would have stalled.


Again, this is a console; don't expect PC-god-like performance. If you want that, buy a PC. They never said the graphics would be similar to a PC's, only 1080p at 60 FPS.

FACT 7: PCs will maintain higher visual quality and frame rates; consoles will have the graphics of a two-year-old PC.

End of Discussion.


Uhmm, nobody said that consoles "will maintain a higher visual quality and frame rates" than PCs. End of discussion???

I wouldn't say average frame rate is all that matters. FPS dips affect gameplay far more than adjusting the LoD so it's a tad better at the cost of some visual effects. Say you have two situations: in one, a certain game--say Modern Warfare 4--can run at 1080p with an average of 60 fps, but it drops as low as 30 fps. Now compare that to the same game running at the same resolution, averaging 55 fps and never dropping below 50 fps, the only difference being that they removed a particular lighting effect. On a console the second option will be more enjoyable because you'll get a more consistent frame rate, despite having a lower average frame rate.
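The dips-versus-averages point can be made concrete with a quick sketch. The frame-time numbers below are purely illustrative, not measurements from any real game, but they show how a trace with the higher average fps can still have a much worse worst case:

```python
# Two hypothetical frame-time traces (ms per frame), mirroring the two
# scenarios above. All numbers are made up for illustration.
spiky  = [15.0] * 90 + [33.3] * 10   # higher average, but dips to ~30 fps
steady = [18.2] * 90 + [19.5] * 10   # lower average, never below ~51 fps

def fps_stats(frame_times_ms):
    """Return (average fps, worst-case fps) for a list of frame times."""
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    worst_fps = 1000.0 / max(frame_times_ms)
    return round(avg_fps, 1), round(worst_fps, 1)

print(fps_stats(spiky))   # higher average fps, ~30 fps worst case
print(fps_stats(steady))  # slightly lower average, far better worst case
```

The spiky trace wins on average fps yet feels worse to play, which is why minimum (or percentile) frame rates are the more honest metric.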

As for console optimization, you're blurring the picture a bit. The most demanding and well-optimized console games render at 720p at about 29-30 fps, with the equivalent of low/medium settings and almost no anti-aliasing. They look decent by most people's standards, but hitting that bar isn't hard. You could accomplish it on the same title with a mediocre PC (with none of that optimization), something like a C2D with an HD 4870. Hell, you could probably do a decent amount better.

You have to realize that rendering at 540p or 720p gives a lot of wiggle room. I think the biggest bottleneck in current-gen consoles is actually RAM: imagine making a game that can only access 256 MB of system memory (PS3) and see how well it runs. We have to keep things in perspective. The APU's GPU is capable of running modern PC games at medium settings at 1080p with moderate amounts of AA and still posting 20-30 fps. That's really not bad, and when you throw a 6670 into the mix it only gets better. The question will always be quality. They could use that setup to render a single textured cube spinning at 1080p at 60 fps and their statements would be accurate, but people want a game that hits those settings and still looks good.


This seems like a marketing move: tell people "we give you 60 fps at 1920x1080" when most console gamers know nothing about fps and put their money on something worth less than a PC.
If they come out with a system that performs well in EVERY game at high or ultra settings, fine, but they shouldn't pull the same fake-new-technology trick as the PS3, taking money from people with Blu-ray as the excuse.
This is very sad, and we PC gamers will again get ports with bad graphics, unoptimized like GTA 4 and others.
Now we'll see.

This thread is full of ignorant people making false and misguided statements; the only people here with brains are Benetanegia and BigMack70.

First, you morons need to read some facts. Here they are:

FACT 1: Not even high-end PCs can sustain 60 FPS at 1080p in demanding games with maximum graphics; games like Metro 2033, ARMA 3, Crysis, Dragon Age 2 and many more will drop below 60 FPS on many occasions.

FACT 2: Consoles include an insane amount of code optimization, where every CPU/GPU cycle is utilized. They practically run on machine code, the lowest-level language of software programming, unlike the PC, which relies on many compilers and higher-level languages that waste valuable cycles.

FACT 3: Even with these optimizations, consoles run with shitty graphics. They can't even maintain 30 FPS at 720p; they usually drop to 25 or 20 FPS, and they even render below 1280x720, sometimes as low as 900x600!

They will barely run today's games at 1080p at 60 FPS with PC-level graphics; all of the code optimization will be spent on the cost of the resolution increase (to 1080p) and of the graphics increase (to PC level): shadows, lighting, textures, etc.

If the specs had been different and the console had come with a high-end or even a mid-range AMD GPU, then the situation would be different.

FACT 6: These consoles will have to do the usual dirty business to run at 1080p: cut resolution and upscale, and reduce all graphics elements (lighting, shadows, textures, etc.) below the future PC level. This happened to the previous generation too; the 360 and the PS3 started out at 720p just fine, then had to cut corners to increase graphics, otherwise the visuals would have stalled.

FACT 7: PCs will maintain higher visual quality and frame rates; consoles will have the graphics of a two-year-old PC.

End of Discussion.


What has any of this got to do with BigMack70's claim that your hardware needs to be 4x as powerful to jump from 720p to 1080p?
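For what it's worth, the raw pixel counts behind that jump are easy to check. This only measures the pixel-fill side of the cost (per-vertex and CPU work don't scale with resolution, as later posts point out), and the resolutions are just the common ones discussed in this thread:

```python
def pixels(width, height):
    return width * height

# Pixel-count ratios between render resolutions mentioned in the thread.
r720_to_1080 = pixels(1920, 1080) / pixels(1280, 720)  # 2.25x the pixels
r540_to_1080 = pixels(1920, 1080) / pixels(960, 540)   # 4x the pixels

print(r720_to_1080, r540_to_1080)
```

So 720p to 1080p is 2.25x the pixels; a 4x figure only falls out if you start from a sub-720p render resolution like 960x540.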


WTF is this...I don't even....

I'm really not impressed, and pretty disappointed with this.

Rumours suggest the next Xbox is going the same route of using AMD APUs. That will make both consoles boring and pretty much identical PCs-in-a-box, with only the optical drives and the company logos setting them apart.

The worst part is, they're not even going to be high end by today's standards, which they were back in 2005/2006 when the 360 and the PS3 launched, and they certainly NEED to be if they want them to last anywhere near as long as the current consoles have. It's been well over half a decade since the launch of the 360 and the PS3, and they already seem more dated with every release.

The only good news I see from this is the fact that it'll use AMD GPUs. This should take a tonne of weight from the shoulders of their driver developers and make their lives a whole lot easier with first party developers that will maximize the capabilities of their hardware (and possibly encourage more console developers to develop or at least release decent ports of their games).

Other than that, the whole lineup of next-gen consoles sounds like crap. At this point, I can't see why another company besides Sony, Microsoft and Nintendo couldn't do the very same thing, only better.

Actually... that isn't even impressive at all. 60 fps @ 1080p is slow by today's standards.


With a reasonable level of antialiasing/anisotropic filtering, it sounds good by today's standards. The problem is, it'll be painfully slow by tomorrow's standards (even if they release it by 2014, it will already be way outdated).

The worst part is, they're not even going to be high end by today's standards


At this point, guys, they are forgetting that at the current pace of development in the mobile market, next-gen ARM with PowerVR and Tegra will catch up with the consoles pretty soon... (I am worried about Adreno, I mean Radeon R500: it is still stuck in the DX9 era. They fetched some people from the ATI team again recently, so I guess they are in a hurry.)

Why, you ask?

It has a larger market!

A good app ecosystem (stores). The Unreal Engine kit already works with no problems, causing no trouble for devs...

The darn thing isn't only usable for gaming, and it doesn't gather dust on the shelf while mommy won't give money for a new (again the same) COD.

Do you think it won't be capable of catching up with this so-called next gen?

Well, I am currently playing this on my almost two-year-old crap phone based on Tegra 2.

That's not a point about hardware. In fact, strictly speaking, my point here has nothing to do with hardware. It's a point about math which is apparently too simple for many of the elite minds in this thread to grasp.

I'm worked up about kiddos who don't understand kindergarten math and reading comprehension.


That is mathematically correct for painting pixels on a 2D surface with a simple C algorithm and a predefined array of 2D vector images.
Rendering a 3D scene to a 2D surface today is much more complex than your simple calculation.

Objects are represented as 3D meshes with additional properties attached to them (material, textures, etc.). You place these objects into 3D space by applying transformations, outputting texture coordinates (2D, to pick a color from the texture at [x,y]) and the vertex position "on screen"; you usually use not only the [x,y] coordinates but also information about how deep "into the screen" the vertex is (vertex processing).
For each pixel covered by the resulting triangle (the output of three runs of the vertex position transformation) on screen, a pixel (fragment) shader is then run with interpolated values of the vertex shader output (here you can mix, modify, skip, etc. the on-screen pixel color).

You see? Your simple calculation matches only the pixel shader computation. The procedure described is a very simplified, totally basic projection of textured 3D objects into 2D space, and our simple "resolution-based compute requirements" formula is already much more complex, isn't it? (Let's not even start adding basic lighting, shadowing or, God forbid, animation to the calculation.)

If you animate an object, you need to update its vertex positions within the mesh: they are not only somewhere else on screen (which happens when you turn the camera), their positions relative to each other change too. If this is handled by the CPU, it is often responsible for the "CPU bottleneck", since it puts constant pressure on the CPU regardless of graphics settings; you can see it in multiplayer FPS games with many players or, as a perfect example, in MMORPGs (the CPU requirements of a game like Lineage 2 in serious mass PvP are astronomical). If it is handled by the GPU, you again have a constant computation cost unaffected by the render resolution.
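To make the per-vertex stage described above concrete, here is a minimal sketch of one vertex going through a model-view-projection transform, the perspective divide, and the viewport mapping. The matrix and the vertex are illustrative stand-ins (real pipelines do this in hardware); the point is that this cost depends on the vertex count, not on the render resolution:

```python
# One vertex through the classic transform chain: clip space via a 4x4 MVP
# matrix, perspective divide to NDC, then viewport mapping to pixels.

def mat_vec4(m, v):
    # 4x4 row-major matrix times a 4-component vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project_vertex(mvp, pos, width, height):
    x, y, z, w = mat_vec4(mvp, [pos[0], pos[1], pos[2], 1.0])
    ndc = (x / w, y / w, z / w)           # perspective divide
    sx = (ndc[0] + 1.0) * 0.5 * width     # NDC [-1,1] -> pixel x
    sy = (1.0 - ndc[1]) * 0.5 * height    # NDC [-1,1] -> pixel y (flipped)
    return sx, sy, ndc[2]                 # screen position plus depth

# With an identity MVP, a vertex at the origin lands dead-center on screen.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(project_vertex(identity, (0.0, 0.0, 0.5), 1280, 720))
```

Note that switching width/height from 1280x720 to 1920x1080 changes only the final viewport multiply; the matrix math per vertex is identical, which is the "constant cost regardless of resolution" point made above.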

On topic:
Create a compiler for exactly one x86 architecture, with no compromises for "universal x86" instruction selection (you can take exact memory/cache latencies and instruction latencies into account), and believe me, you will see miracles.
If the next-gen consoles contain some sort of x86-based APU, then have you considered that this will force a considerable number of software developers to adopt a new way of thinking about utilizing APUs in general? That would be a great success even for future desktop development.
If you have an exact machine specification (HW, SW), you don't need statistics to determine how many operations you can execute in a given time on the (average) target hardware with the (average) software layer (drivers, OS); you can count them exactly.

Re: the '90s Internet format (semi-OT):
In the past, reading almost any discussion thread on sites devoted to technical topics resulted in gaining substantial knowledge, either from users writing the information directly in their posts or from them pointing other discussants to relevant resources. After spending an hour reading a forum, you could take it for granted that your knowledge base had expanded (not necessarily in exactly the direction you wanted).
Today, after the huge expansion of Internet use, with a connection accessible even on the toilet, you have to watch out not to end up more stupid after an hour of reading a technical forum.
If users spent a single hour reading about how 3D rendering works (pick the DirectX SDK samples, the NeHe tutorials, some other introductory material, or even a completely simple "how it works" article or Wikipedia [1][2]) instead of smashing F5 for the quickest possible response to their "discussion enemies", there would be real information sharing and a knowledge gain for everyone. Today the Internet is not a medium for sharing information and knowledge (I sometimes have the bad feeling that the knowledge-generation process is stagnating) but one great human-based random BS generator that can compete without any problem with a random number generator running on a supercomputer.

Seriously, this thread contains enough text and graphics to cover a PhD thesis or some other work, but the posts with real information value can be counted on one hand...
Until some genius comes up with a "BS filter", it would be interesting to emulate such a feature by having moderators, or even forum users, manually flag information-rich posts (something like the existing "thanks" function), with a forum filter to show only flagged posts.

EDIT: I have now checked the second Wikipedia link, and the statement "Vertex shaders are run once for each vertex given to the graphics processor" is not always true. If you utilize the Radeon HD2k-HD4k tessellator, the number of vertices processed by the vertex shader is actually higher, because the fixed-pipeline tessellator sits before the vertex shader in the rendering pipeline (see "Programming for Real-Time Tessellation on GPU").

Today I spent more than the usual amount of time to read the whole thread, register (after years) and prepare a reply in the '90s Internet format.

That is mathematically correct for painting pixels on a 2D surface with a simple C algorithm and a predefined array of 2D vector images.
Rendering a 3D scene to a 2D surface today is much more complex than your simple calculation.


Fully agree... It reminds me of the old times when Voodoo reigned and the sucker still didn't have hardware T&L... if anyone still remembers what that did.

Simply scaling by a coefficient isn't possible because of the large data overhead that also runs in parallel: memory bandwidth bottlenecks first of all, latency increases as the scene grows more complex, and more shader-intensive work from light sources, the engine itself, and so on.

The coefficient hasn't been linear since the late '90s, I guess.

And the next thing that only consoles have, and why they can evolve graphics on the same platform: take Metal Gear. Hideo Kojima himself stated in an interview that development took so long because they had to rewrite many engine parts in assembly to achieve the needed performance on the PS3. It is a nightmare, you know, but it also bends the math about what we can expect on screen, because there is no recompiler software layer in the way.

Yes, I can agree with that, because of the effort and such, BUT the info he posted is heavily outdated and, because of that, almost entirely irrelevant. He's describing forward rendering and, to make matters worse, forward rendering with per-vertex shading. That hasn't been used in 95% of games for 5+ years. Nowadays everything is calculated on a per-pixel basis and several buffers are created with the resulting pixel information. That's it: deferred rendering/shading.

In deferred rendering, what he describes only happens in the first pass (BF3 has 8+ passes): the diffuse color pass, which carries very little information. After that, everything from lighting to shading to advanced shadowing to ambient occlusion happens on a per-pixel basis. Without all of these per-pixel calculations the end result would look like a 90's 3D game. 90% of the work is based on pixel data == buffers == frames that are afterwards mixed (by ROPs, again pixel by pixel) into the final composition.
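A toy sketch of that idea, with the obvious caveat that real engines (BF3 included) are vastly more involved: a geometry pass fills a G-buffer with per-pixel attributes, and a separate lighting pass then shades every pixel from it. The buffer layout and the "lighting" here are invented for illustration:

```python
# Deferred shading in miniature: geometry pass writes per-pixel attributes
# into a G-buffer, lighting pass reads them back and shades each pixel.

W, H = 4, 2  # a tiny 4x2 "framebuffer"

# Geometry pass output: one G-buffer entry per pixel (albedo, a fake
# normal/light term, depth). A real G-buffer holds several render targets.
gbuffer = [{"albedo": (0.8, 0.2, 0.2), "n_dot_l": 1.0, "depth": 0.5}
           for _ in range(W * H)]

def lighting_pass(gbuf, light_intensity):
    # Per-pixel lighting: every pixel is shaded from stored attributes,
    # so this pass costs O(width * height) regardless of scene geometry.
    return [tuple(c * px["n_dot_l"] * light_intensity for c in px["albedo"])
            for px in gbuf]

frame = lighting_pass(gbuffer, 0.5)
print(len(frame))  # one shaded color per pixel
```

The lighting pass touches every pixel once per pass, which is why per-pixel work dominates modern frame time and why resolution hits deferred engines so hard.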

@Am* - Current consoles do ~720p30, NOT 1080p60. I tried to sum up why it seems far fetched to me that next gen consoles will do 1080p60 in post 279. Even if you think I'm wrong, that should at the very least help you understand why I make the conclusion(s) I make regarding this.

@Disruptor4 - I posted benchmarks early on in the thread showing no single GPU can get 60fps minimum framerate in all games with all settings maxed at 1080p. Not trying to make a big deal out of that at this point, just wanted to clarify where I got my claims from (I could also say my own experience with a single 7970 but I figure benchmarks are more trustworthy).


I fully agree 100%. Today's consoles do not do 1080p; they render at 720p and upscale to 1080p. Consoles don't have the power to output natively at 1920x1080, especially when the game is complex.

All systems (except maybe the Sega 32X and the Nintendo Virtual Boy) have at least one or two timeless classics that keep them alive.

Now I know what you meant and I think it'll be close.

September 29th 2012 said:

Last week, the Vice President of Hardware Marketing for the PlayStation brand, John Koller, told news outlet GameSpot that the company plans to support the PlayStation 3 until 2015. While some may think this automatically means the PlayStation 4 will not see release until 2015, this is probably not the case. As Koller explains, the PlayStation 2 stayed active for years, even after the PlayStation 3 saw release. It makes sense for Sony to continue to support the PlayStation 3 since they do not alienate those Sony fans who won't upgrade ASAP to the PlayStation 4. However, chances are good that all of Sony's major franchises will make the jump to the PlayStation 4, leaving only the smaller developers and publishers to continue to support the PlayStation 3.

Yes, I can agree with that, because of the effort and such, BUT the info he posted is heavily outdated and, because of that, almost entirely irrelevant. He's describing forward rendering and, to make matters worse, forward rendering with per-vertex shading. That hasn't been used in 95% of games for 5+ years. Nowadays everything is calculated on a per-pixel basis and several buffers are created with the resulting pixel information. That's it: deferred rendering/shading.

In deferred rendering, what he describes only happens in the first pass (BF3 has 8+ passes): the diffuse color pass, which carries very little information. After that, everything from lighting to shading to advanced shadowing to ambient occlusion happens on a per-pixel basis. Without all of these per-pixel calculations the end result would look like a 90's 3D game. 90% of the work is based on pixel data == buffers == frames that are afterwards mixed (by ROPs, again pixel by pixel) into the final composition.


What I described in the previous post is definitely not forward rendering but exactly what I stated it is:

And yes, such a projection looks like a '90s 3D game. EDIT: I used the term "on screen" to avoid having to explain render targets.

Both forward and deferred rendering require more outputs from the vertex shader (e.g. normals), not only texture coordinates and the vertex position in 2D space. It is hard to imagine how you could produce dynamic shadows with only a single projection (the BF mentioned above).

Whilst some argue this won't be enough, IMHO this is an initial dev kit based on what they project will work with the PS4.
I recently noticed AMD have changed Trinity's successor back to Piledriver cores with Radeon cores next, which to me indicates a Trinity successor with GCN2 and HSA optimisations ahead of more CPU grunt. It's that chip, with additional IP yet to be disclosed (IMHO an interposer with a level-4 126 MB cache and further DSPs), that I believe will be the basis of a PS4, certainly not this Trinity chip directly, so apples-to-apples will count for nothing.

Add in hardware optimisations for consoles and AMD's hint at a hard-coded gaming future (the API reduced or removed) and you will have a console that does 60 fps.

To the naysayers: I have a CrossFire main rig with a (hybrid) PhysX card, and it will do 60 fps in EVERY game at 1080p with few settings ever needing to be eased, bar AA. So to me the games AMD doesn't do well in are Nvidia-biased or straight-up PhysX games. An interesting point is that the Xbox and PS3 implementations of PhysX use SSE-like extensions and are better optimised, so Nvidia are going to have to write PhysX to work well on AMD gear, of a kind.

I'd swear some games work better just because an Nvidia card is present (though not used by the game) sometimes.

Both forward and deferred rendering require more outputs from the vertex shader (e.g. normals), not only texture coordinates and the vertex position in 2D space. It is hard to imagine how you could produce dynamic shadows with only a single projection (the BF mentioned above).

Your argument was that there's more than pixel shading, which is true and no one said otherwise. However, you went off describing what is basically <5% of a modern game's render pipeline and frame time, as if it represented a big proportion of it. So it's essentially true but, as I said, irrelevant to dispute the argument that 4x the resolution requires 4x the power*. Even modern shadows are much more than a projection into shadow maps, and they depend on pixel shading.

*I said it in a previous post: this word is the biggest problem. People read power as performance in reviews, and that's incredibly inaccurate. A card with 2x the SPs is undeniably 2x as powerful in that department, whether it ends up producing 2x the fps or fails to because it's bottlenecked elsewhere.

Now, the problem regarding the OT is that an A10 APU is severely limited on ALL fronts: SPs, ROPs, texture units, everything. It is simply not going to do what a high-end GPU has difficulty achieving even today, no matter the optimization. And that's another focus of argument, because some people are saying we don't know whether it will be a custom APU with more GPU power, etc., but the article states it's an A10, and that's not a custom APU, is it? It's a commercially available APU, which is exactly why an APU is supposedly going to be used: it's available and cheap to produce. The days of heavily customized chips are over; otherwise (with a custom chip) they would have continued with the PowerPC architecture and kept backwards compatibility.
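The power-versus-performance distinction can be sketched with a toy bottleneck model: take frame time as the slowest of a few independent stages, so doubling shader resources only helps while shading is the bottleneck. The stage times and counts below are invented purely for illustration:

```python
# Toy bottleneck model: frame time = the slowest stage. Doubling SPs
# halves only the shader stage, so fps does not double once another
# stage (ROPs, bandwidth) becomes the limit.

def frame_time_ms(shader_work, sp_count, rop_ms, bandwidth_ms):
    shader_ms = shader_work / sp_count
    return max(shader_ms, rop_ms, bandwidth_ms)

base    = frame_time_ms(shader_work=6400, sp_count=400, rop_ms=10.0, bandwidth_ms=12.0)
doubled = frame_time_ms(shader_work=6400, sp_count=800, rop_ms=10.0, bandwidth_ms=12.0)

print(1000 / base, 1000 / doubled)  # 2x the SPs, but well under 2x the fps
```

Here 2x the SPs takes the frame from 16 ms to 12 ms (62.5 to ~83 fps), a 1.33x gain, because bandwidth becomes the new limit: 2x the "power" without 2x the performance.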

and it will do 60 fps in EVERY game at 1080p with few settings ever needing to be eased, bar AA


First of all, that's a lie, or at least very arbitrary about what "few settings needing to be eased" truly means.

Second, your CrossFire setup is at least 5x more powerful than an A10 APU, so even if the claim were true, the APU would do 12 fps under the same "few settings eased" conditions. It's a console, so let's grant a MASSIVE optimization bonus for being a console: you might or might not reach 30 fps, but 60 fps, not a chance.

Anyway, the Wii U is rumored to have a significantly more powerful GPU than the A10 APU. Is Sony truly going to release something less capable?
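Spelling out the arithmetic in the reply above (all inputs are the poster's claims, not measurements): if the CrossFire rig does 60 fps and is 5x the APU's power, the APU lands at 12 fps, and you can back out how much "console optimization" would have to deliver:

```python
# The reply's arithmetic made explicit. The 60 fps figure and the 5x
# power gap are the poster's claims; nothing here is a measurement.
crossfire_fps = 60.0
power_gap = 5.0

apu_fps = crossfire_fps / power_gap   # 12 fps before any optimization
needed_for_30 = 30.0 / apu_fps        # 2.5x speedup from optimization
needed_for_60 = 60.0 / apu_fps        # 5x speedup: the "not a chance" case

print(apu_fps, needed_for_30, needed_for_60)
```

Reaching 30 fps asks optimization for a 2.5x speedup, which is already generous; reaching 60 asks for the full 5x hardware gap back, which is the "not a chance" in the post.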