Irrelevant to what topic? I'm not arguing about "how it renders" but "how it shows".

You posted downscaled pictures which have no relevance to anything in this topic, and which definitely have no relevance to "how it renders" or "how it shows", because you're doing the complete opposite: rather than upscaling the low-resolution images, you're downscaling the high-resolution one (and, for good measure, you downscaled them all).

If you want to compare upscaled with native resolution, you need to post two pictures: one with 1080p dimensions that was originally rendered at 720p and then upscaled, and the same picture natively rendered at 1080p. Alternatively, if you have a 1080p monitor, you could just look at a 720p picture zoomed to full screen versus the same picture at 1080p with no zooming.
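To make that comparison concrete, here's a tiny pure-Python sketch (the 8x8 grid and nearest-neighbour filter are stand-ins for real renders and a real hardware scaler): a "native" image resized down and back up ends with the same dimensions but not the same detail.

```python
import random

def resize_nearest(img, new_w, new_h):
    """Nearest-neighbour resize of a 2D list of pixel values."""
    old_h, old_w = len(img), len(img[0])
    return [[img[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
            for y in range(new_h)]

random.seed(1)
# Stand-in for a native "1080p" render: an 8x8 grid of random detail.
native = [[random.randrange(256) for _ in range(8)] for _ in range(8)]

low = resize_nearest(native, 4, 4)      # the "720p" render: detail is lost here
upscaled = resize_nearest(low, 8, 8)    # scaler output: same size, less detail

print(native != upscaled)  # True: same dimensions, different (blockier) content
```

A real test would use a proper image library and bilinear filtering, but the principle is the same: compare at identical output dimensions, never by downscaling the native image.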

The A10 APU has at least 8x the rendering performance of the current PS3 hardware...
384 SPs vs. the 16 in the 360 (which is comparable to the PS3) = 24x the general shading performance. I don't see how there would be a problem rendering only 4x as many pixels...
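For reference, the arithmetic behind that claim, written out (the 16-SP figure for the 360's GPU is the poster's own, and is disputed later in the thread):

```python
# The quoted claim spelled out; these counts are the poster's, not datasheet values.
a10_sps = 384
claimed_360_sps = 16

ratio = a10_sps // claimed_360_sps
print(ratio)  # 24 -> the "24x the general shading performance" figure
```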

As you can see, it's not 8x faster, and not much faster than 7-year-old cards.

The PS3 suffers from the same bad way of comparing things. RSX had 24 pixel shaders but also 8 vertex shaders, so that "equals" 32 unified Nvidia SPs; but again, as with Xenos, these could do up to 5 ops/cycle vs. 2 ops/cycle in the current unified shaders. So it's more like 32 x 2.5 = 80 "SPs", versus the 128 SPs in the 8800.
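That equivalence, written out (the 5-vs-2 ops/cycle figures are the poster's rough conversion, not datasheet values):

```python
# Normalize RSX's 24 pixel + 8 vertex shaders into "unified SP" terms.
rsx_pixel_shaders = 24
rsx_vertex_shaders = 8
ops_ratio = 5 / 2  # claimed up to 5 ops/cycle vs 2 on a unified SP

equivalent_sps = (rsx_pixel_shaders + rsx_vertex_shaders) * ops_ratio
print(equivalent_sps)  # 80.0 "SPs", versus the 128 SPs in an 8800
```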

Been saying this would be the case... comparable to 1080p @ 60fps on contemporary PC titles at launch.

Between the nexbox and the PS4, I've long felt the dev kits were something like 256 + 1024 SPs (salvage Trinity + 7850-ish GPU), eventually moving to 384 + 896 SPs (a fully-working 28nm Trinity + an example of what an 8770 could be). There are other possible configs, of course, but something like that.

896 SPs/16 ROPs on a 128-bit bus could run 950 MHz core / 6000 MHz memory very efficiently, for example... and be very, very close to the average performance of a 7850, which on average is going to net you about ~45fps at 1080p. With the APU adding perhaps ~40% more resources, you're very much at the 1080p60 level for most titles... assuming they find a way to make the two operate seamlessly.
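A quick sanity check of that estimate (both numbers are the poster's guesses, not measurements):

```python
# 7850-class average at 1080p, plus the claimed APU contribution.
gpu_avg_fps_1080p = 45.0  # assumed ~45fps average for the 896sp part
apu_bonus = 0.40          # assumed ~40% extra resources from the APU

combined = gpu_avg_fps_1080p * (1.0 + apu_bonus)
print(round(combined))  # 63 -> in 1080p60 territory, on average
```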

The number of completely uneducated and uninformed individuals making statements other than "Wow! I wonder how this will work out" is staggering. The people I'm referring to know who they are.

The hardware designed for consoles is specialized and commissioned to suit a specific workload. Consoles are min-maxed gaming machines, using the absolute lowest common denominator of hardware to achieve optimized results. Whatever they select may be based on an existing architecture, but it will almost certainly have its own specialized CPUs and GPUs suited to the console's intended purpose. A target BASELINE of 1920x1080 at 60Hz is not impressive to us PC users, but this is designed for TV play. That limitation lets the system use hardware built to drive exactly those pixels with the utmost efficiency, which allows less powerful hardware to achieve better results than it would in a PC environment... so for the most part this won't evolve the console market past TV-HD, and honestly it may be a short-lived bump.

The console makers are able to supply the masses with something cheap, and driving a lot of game sales while keeping the console market strong has got to be a balancing act for these industry giants. I bet the cost per console will be about $150 while they sell them for about $400, something they weren't able to achieve with the 360 or PS3. For once, making a console might net them a profit.

You posted downscaled pictures which have no relevance to anything in this topic, and which definitely have no relevance to "how it renders" or "how it shows", because you're doing the complete opposite: rather than upscaling the low-resolution images, you're downscaling the high-resolution one (and, for good measure, you downscaled them all).

If you want to compare upscaled with native resolution, you need to post two pictures: one with 1080p dimensions that was originally rendered at 720p and then upscaled, and the same picture natively rendered at 1080p. Alternatively, if you have a 1080p monitor, you could just look at a 720p picture zoomed to full screen versus the same picture at 1080p with no zooming.

There's a big difference.

I don't think you know what you're talking about.

I never posted any downscaled image. I quoted from another site and showed the source. My next post was a camera capture showing 1080p60Hz in the TV info, to counter your opinion that the PS3 lacks 1080p capability. You're suggesting a proper method for comparing the two; you may quote any of my last posts: did I mention "rendered"?

Hmm. What I gather from this thread is a bunch of arguing over graphics capability and/or processing horsepower. In my opinion, hardware does not mean a thing if the games made for that hardware suck. No wonder developers push games with graphics over gameplay. Makes me want to dust off the old NES, SNES, or PS1 and relish the glory days of gaming.

On Topic: I think people will be surprised with the results of these APUs in the coming consoles.

As you can see, it's not 8x faster, and not much faster than 7-year-old cards.

The PS3 suffers from the same bad way of comparing things. RSX had 24 pixel shaders but also 8 vertex shaders, so that "equals" 32 unified Nvidia SPs; but again, as with Xenos, these could do up to 5 ops/cycle vs. 2 ops/cycle in the current unified shaders. So it's more like 32 x 2.5 = 80 "SPs", versus the 128 SPs in the 8800.

I guess I remembered wrong, but that is not correct either. The inefficiencies in the original R600 designs made its performance extremely low. The Trinity A10 is exactly 1/4 of a 6970, and the 360 performs like a 2600XT.
The 2600XT gets 933 in 3DMark Vantage; the 6970 gets 21k. If Trinity weren't bandwidth-constrained it would get more than 5k, which is almost a 6x performance increase, and that gap comes from shedding an outdated and inefficient design. 8x the performance is what Sony quotes, I think, which, given Moore's law, is very easily done.
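The numbers in that post, spelled out (the Vantage scores and the 1/4-of-a-6970 figure are the poster's claims, not verified here):

```python
score_2600xt = 933       # claimed 3DMark Vantage score (360-class GPU)
score_6970 = 21000       # claimed 3DMark Vantage score
trinity_fraction = 0.25  # A10 GPU described as exactly 1/4 of a 6970

trinity_score = score_6970 * trinity_fraction
print(trinity_score)                           # 5250.0 -> "more than 5k"
print(round(trinity_score / score_2600xt, 1))  # 5.6 -> the "almost 6x" figure
```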

I never posted any downscaled image. I quoted from another site and showed the source. My next post was a camera capture showing 1080p60Hz in the TV info, to counter your opinion that the PS3 lacks 1080p capability. You're suggesting a proper method for comparing the two; you may quote any of my last posts: did I mention "rendered"?

If you don't have all three of them (PS3, Xbox, and PC), why are you arguing?

I already addressed your claims of the PS3 rendering at 1080p. It's nonsense. Go look at the link I posted that shows you the resolution PS3 renders various games in. 99.9% of them are upscaled 720p.

And I know that those images were from another site. They're all heavily downscaled, even on the original site. Not 1080p. Therefore irrelevant.

I already told you what you need to do if you want to compare real vs upscaled resolution. I don't see you doing it...

Not on that particular argument... maybe read the thread? Do you notice anyone still trying to argue that 1080p60 isn't 4x as demanding as 720p30? Your original post just reflects a lack of reading comprehension, as even Phenom realized that my argument was correct.

As for whether the PS4 will be doing 1080p60, that's anyone's guess. Some think no, myself among them; others think yes. No way to know who's right until the hardware is released.

As you can see, it's not 8x faster, and not much faster than 7-year-old cards.

The PS3 suffers from the same bad way of comparing things. RSX had 24 pixel shaders but also 8 vertex shaders, so that "equals" 32 unified Nvidia SPs; but again, as with Xenos, these could do up to 5 ops/cycle vs. 2 ops/cycle in the current unified shaders. So it's more like 32 x 2.5 = 80 "SPs", versus the 128 SPs in the 8800.

Ah yes... Xenos. The chip with an average ratio of 15 unified shaders : 1 TMU, more optimal than even the 16:1 or 14:1 Nvidia uses today (a big reason Radeons get crapped on for texturing). Odd, since it has been that way since the dawn of DX10... and this preceded that. Sometimes you wonder about those 'experimental' chips like Xenos that were kind of crazy brilliant. Why give it unneeded ROPs? Why give it unneeded bandwidth? Why give it unbalanced texture ability? Then... they create R600. WTF.

When you see the ratios finely, and finally, perfected in Kepler: 14 shaders+SFUs : 1 TMU (slight overkill on texturing, but better than under-provisioning like AMD), an optimal number of smaller SFUs relative to true shaders (32:192... aka perfect... no idea if it's overall a better use of space than AMD doing it in-shader), and the total number of shaders/SFUs per array corresponding with the ROPs (224 total units... around 230 is the sweet spot for scaling with 4 ROPs... pretty much perfect given how an array has to be set up)... it really makes you wonder where all the mad scientists at ATi went. They used to own that turf... now all they have is shader density/flexibility, which granted helps a metric ton, but that's still a carry-over from years and years ago.
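Those Kepler ratios, written out (per-array unit counts as given in the post; treat them as the poster's figures rather than verified specs):

```python
shaders, sfus, tmus = 192, 32, 16  # per-array counts cited above

print((shaders + sfus) / tmus)  # 14.0 -> the "14 shading+SFU : 1 TMU" ratio
print(shaders / sfus)           # 6.0  -> the 32:192 SFU-to-shader split
print(shaders + sfus)           # 224  -> the "224 total units" per array
```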

I remember when I used to discuss this stuff on B3D...those were the days.

Not on that particular argument... maybe read the thread? Do you notice anyone still trying to argue that 1080p60 isn't 4x as demanding as 720p30? Your original post just reflects a lack of reading comprehension, as even Phenom realized that my argument was correct.

You ever argue with a brick wall? We all just did. Just when we thought we were going to convince you, we realized that a brick wall is an inanimate object.

You ever argue with a brick wall? We all just did. Just when we thought we were going to convince you, we realized that a brick wall is an inanimate object.

So you argue that 720p --> 1080p (2.25x) along with 30fps --> 60fps (2x) somehow yields something other than a ~4x increase in computational load?

I don't know why I keep having to state the obvious: it's basic math. In this argument, I am not and never was talking about PS3 vs. PS4 or 2006 vs. 2012 hardware. I'm making a picky point about basic, kindergarten-level math.
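The "basic math" in question, as a quick sanity check: pixels per frame times frames per second.

```python
# 720p30 vs 1080p60, pixels per second.
pixels_720p = 1280 * 720    # 921,600 pixels per frame
pixels_1080p = 1920 * 1080  # 2,073,600 pixels per frame

res_factor = pixels_1080p / pixels_720p  # 2.25x the pixels per frame
fps_factor = 60 / 30                     # 2x the frames per second
print(res_factor * fps_factor)           # 4.5 -> loosely rounded to "~4x"
```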

I'm absolutely amazed so many people have failed to understand that. :shadedshu

Yup... completely ignored. Why are the non-engineers/programmers arguing over specs which may not even appear in the final system? These chips aren't the same ones you can buy for PCs, and raw specs have less to do with performance than how well the software is written for the platform...

One of the best examples of this is ICO for the PS2. The ways they got that game to look the way it did on that system were genius, and so specific that the team which remade the game in HD had to re-code the engine to support more resolutions and more objects displayable on screen.