All it takes is the licensing, and every next-gen console will do it. Not all from AMD, but...

The problem is, what are they gonna do for "next-gen" consoles? What's the next selling feature? 1080p with 3D and HD audio are already obvious standards for the next generation, based on current retail products, and the processing power needed to do it isn't really all that much by today's PC standards.

I expect the impending launch of Windows 8 is meant to set the stage for the next console development platform, but the hardware will obviously be finalized later than that; it may only just be starting development.

Not really. Most PS3 games are 720p with no AA or MLAA, with 1080p on a select few titles, and many "1080p" titles are actually upscaled 720p. The only game I own that is truly 1080p is Metal Gear Solid 4.
The Xbox 360 tends to render around 1024x600 upscaled to 720p with 2x AA, but most of the time it's no AA.
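Since the resolution differences above sound smaller than they are, here's a quick pixel-count comparison (the 1024x600 figure is an assumption standing in for the sub-720p buffers mentioned; exact resolutions vary per title):

```python
# Rough pixel-count comparison for the render resolutions mentioned above.
# 1024x600 is an assumed stand-in for the "sub-720p" 360 figure.
resolutions = {
    "360 sub-HD (assumed 1024x600)": (1024, 600),
    "720p": (1280, 720),
    "1080p": (1920, 1080),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

# 1080p pushes ~2.25x the pixels of 720p, and ~3.4x the sub-HD buffer,
# which is why "true 1080p" titles were rare on that hardware.
ratio_1080_vs_720 = pixels["1080p"] / pixels["720p"]
ratio_1080_vs_subhd = pixels["1080p"] / pixels["360 sub-HD (assumed 1024x600)"]
print(f"1080p vs 720p: {ratio_1080_vs_720:.2f}x")
print(f"1080p vs sub-HD: {ratio_1080_vs_subhd:.2f}x")
```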

My guess is the next gen of consoles will be more like Steam is for PC: fewer second-hand game options, more "buy it and it's tied to an account," with 1080p and HD audio of course, plus either 3D or improved (hopefully less clunky) motion controls. On top of that we will probably see better integration of web features, with options like Netflix becoming more streamlined. I also expect the next set of consoles to offer rental options for games, much like OnLive does: download and play a game for 5 days for $X.XX, etc.


The PS3 works pretty well as an HTPC since keyboard and mouse work on it; I can browse TPU from it if I want. I can see the next consoles just being highly optimized PCs.

I'm imagining a PS3 with a 32 GHz, 36-core processor. PS1, PS2 and PS3 each had roughly 10x the clock speed of the previous gen, and the PS3 had 6 cores when normal PCs had 2, all achieved by using a small amount of a full processor and helping it along with optimized co-processors. Face it: most parts of the CPU are dead silicon that the home user will never use. Virtualization/encryption can be done slower in software, and most of the instruction sets are included for legacy purposes.

At the end of the day, modern PCs could experience a massive jump if, for one generation every 20 years, manufacturers just gave up compatibility and ran old programs through software emulation.

True, cheesy, but guess what: AMD is doing exactly what Cell does with Bulldozer, in a sense, meaning before the end of the year I'd expect to see 10-12 core Bulldozer chips, two years before the next gen of consoles is expected. Now the fact is, in terms of computational performance, a 6-core Cell was slower than a Core 2 Duo in tasks like Folding@Home, so in an app that could really use CPU power the Cell was rather lacking. In reality the Cell's capabilities were still only around the same as the Xbox 360's 3.2 GHz triple core, which is essentially the same architecture the Cell is based on; the Cell just uses it differently. Now, given that those two chips got schooled by a Core 2 Duo, and the fact that IBM hasn't really done much with the Cell's development, Sony would be forced to go in a new direction, which they did: it would seem the PS4 was going to use Larrabee for graphics processing, but we all know how that turned out, so Sony shelved that idea.

So basically this means that while the Cell was great at its main purpose, it's now overshadowed by just about every PC CPU in existence, even the lowly $50-60 dual cores.

I'd expect, since the next gen of APUs from AMD will use Bulldozer cores, that we will see an 8-core APU by the 2nd-3rd quarter of 2012. So an 8-core Bulldozer would, if my estimations are even close to the mark, allow for an 800-shader integrated GPU, meaning a 6-8 core Bulldozer with an integrated 6850 would be possible at the 140 W TDP limit.


It's been shown in many studies that most home users use the same 5%-10% of instructions over and over again. If we left full-grade CPUs to servers and just had 1 full x86 core and a load of small cores optimized for specific tasks (optimized cores can be up to around 10,000% (not a typo) faster in their tasks)...

I read somewhere that someone's developing a smartphone CPU that does just that: 1 core and 100 or so shader-like cores that each do one fixed thing.

Actually F@H runs many times faster on a PS3 than even a Core i7, but the points are scaled down massively since it can only do certain tasks.

From the F@H FAQ:

It seems that the PS3 is more than 10X as powerful as an average PC. Why doesn't it get 10X the credit as well?

We balance the points based on both speed and the flexibility of the client. The GPU client is still the fastest, but it is the least flexible and can only run a very, very limited set of WUs. Thus, its points are not linearly proportional to the speed increase. The PS3 takes the middle ground between GPUs (extreme speed, but at limited types of WU's) and CPU's (less speed, but more flexibility in types of WUs). We have picked the PS3 as the natural benchmark machine for PS3 calculations and set its points per day to 900 to reflect this middle ground between speed (faster than CPU, but slower than GPU) and flexibility (more flexible than GPU, less than CPU).

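As a toy illustration of the trade-off the FAQ describes (the 900 PPD and the ~10x speed figure come from the quote; the average-PC PPD here is a made-up baseline, not a real Folding@home number):

```python
# Toy illustration of the FAQ's point: credit is not linearly
# proportional to raw speed. The 10x speedup and the 900 PPD come
# from the FAQ; the average-PC PPD is purely hypothetical.
avg_pc_ppd = 250          # hypothetical baseline credit for an average PC
ps3_speedup = 10          # "more than 10X as powerful" per the FAQ

linear_ppd = avg_pc_ppd * ps3_speedup   # what pure speed scaling would give
actual_ppd = 900                        # the benchmark figure from the FAQ

# The gap between the two reflects the flexibility penalty:
# the PS3 can only run a limited set of work unit types.
flexibility_discount = 1 - actual_ppd / linear_ppd
print(f"linear credit: {linear_ppd} PPD, assigned: {actual_ppd} PPD")
print(f"implied flexibility discount: {flexibility_discount:.0%}")
```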

Also, the Cell processor is just a PowerPC-based design, and other companies are filling the void: http://en.wikipedia.org/wiki/PowerPC_A2 (16 cores with 64 threads using just 65 W of power!!!). The processor is in late development and should be marketed to customers later this year or next.

Overall a PS3's CPU is, in most tasks, inferior to the 360's 3.2 GHz triple core, which it's based on; the difference is the Cell can do basic 3D tasks like applying MLAA. Now the point here is that the Cell is not going to get a revamp; it's dead in the water. The only major user of it in a game console is Sony, and if Sony was looking at Intel, what does that tell you about the probability of the Cell's continued use in a game console?

The point is that we might not need all the instruction sets etc., but consoles have become the center point of a home entertainment system. Are you so sure they will shun all the extras? Because if something comes out that needs them and doesn't work, then what? That's why it won't change.

The new Cell chips that IBM is producing at the 16-core count are not even available to anyone other than IBM, their clock speed is only 1.4 GHz, and even then the die size is 428 mm^2. It's getting pushed more toward server tasks than tasks a game or web browser might need, so in the end the Cell's development has branched back toward more typical PC designs.

Whereas Bulldozer seems to be the opposite: multiple less complex cores that share other parts of the CPU, which is why I'd expect to see an SoC based on an AMD APU instead of a Cell CPU. Because no matter how many cores a Cell might have, it cannot do the tasks that Sony hyped it could do. The PS3 was never meant to use a GPU; the Cell was supposed to do both CPU tasks and GPU tasks, and the PS3 would have been the inferior machine in all regards had they not added the Nvidia GPU at the request of game developers.

So while the Cell is powerful in its own way, industry adoption and coding for it are at a stage where, again, it's essentially not proving to be the end-all be-all it was hyped to be.

Think of it this way: the Cell has been around a while, nearly 6 years now, and IBM, Sony and Toshiba only just started getting the industry ready to code for it by selecting various institutions to teach and have access to the technology, to better train programmers. That means those who know how to properly use the Cell won't be making apps that take full advantage of it for another 2-4 years, and even then it won't be widespread. Now, in a consumer world where maximum compatibility and broad consumer support are a must, which path would you take for profitability: the tried, true and accepted methods with wide industry support and a large developer base, or would you continue using the new kid on the block that's only just getting started?

And 16 cores / 64 threads, that's cool, but since games need to be multi-platform, do you honestly expect developers to properly utilize that? I don't, and they won't. Nor will both Sony and Microsoft use the same CPU and setup, so porting code between consoles would be a nightmare, which would result in more console exclusives or even shittier port jobs. I'm going to put my money on the more likely situation and pass up the dark horse.

And I say this looking at PC ports that, when run on cheaper hardware than the consoles, run better and at higher frame rates.

A good example is Metro 2033: set it to 720p with an 8800 GT and the game looks and runs better than the console equivalent, which is something to keep in mind. When you think about it, the Cell isn't all it's cracked up to be in this situation, as a simple Athlon II X2 240 + 8800 GT is cheaper and offers better performance. And if you look at a more modern GPU with the same performance as an 8800 GT, such as a 5670, those two combined use less power than either console while offering better performance at the same exact settings.

An SoC in the same vein as AMD's APUs is the more likely choice because it offers better performance per watt than a Cell + GPU, not to mention that in the tasks it does it would be the more efficient choice, and likely to use much less power in the end. The original PS3 with a Cell CPU + GPU used 380 W in the first gen, 280 W in the second gen, and the slim only dropped it to 260 W. Meanwhile the typical x86 setup of, say, the FM1-socket A-series APUs offers roughly a Phenom II 955 + 5670 at a 100 W TDP, and Bulldozer cores next year would offer up to 8 cores and possibly 6850-class performance at a 140 W TDP. Even with board power, Blu-ray/DVD, RAM etc., that's far less than the typical console today, meaning it's the overall more efficient design, which would also result in a lower failure rate due to overheating components, which was commonplace.
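Using only the power figures quoted in this post (plus an assumed allowance for board power and drives), the rough savings work out like this; TDP isn't wall-socket draw, so treat it as a sketch:

```python
# Back-of-the-envelope comparison using the power figures quoted in the
# post (launch PS3 ~380 W, hypothetical Bulldozer APU at a 140 W TDP).
# These are the post's numbers, not measurements, and the board
# overhead is an assumption added for illustration.
launch_ps3_w = 380
apu_tdp_w = 140
board_overhead_w = 60   # assumed allowance for board, drives, PSU losses

apu_console_w = apu_tdp_w + board_overhead_w
savings_vs_launch = 1 - apu_console_w / launch_ps3_w
print(f"estimated APU console draw: {apu_console_w} W")
print(f"~{savings_vs_launch:.0%} lower than the launch PS3 figure")
```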

Due to these issues I would expect either an Intel or AMD SoC with a more tolerable power envelope to be selected.

Even then, do you HONESTLY expect game developers to properly code for it, if they can't even code effectively for 4 x86 cores, let alone handle the difference in performance from a 360 to a PS3 in some games? I honestly doubt it.


My point is, though, that we can at least strip some features out of console CPUs, seeing as compatibility needs are minimal since they don't have to run Windows. I do like the Bulldozer idea though; taken to a much more extreme end it could work well for the way games rely mostly on integer performance: have 4-8 integer cores which can work together for the really complicated stuff.

Which is why, again, I think Bulldozer + IGP on die is what will end up happening, as we are moving away from the gaming console toward more of a media hub with web video, game rental services, etc.
And I agree consoles can live with features stripped out of their CPUs. So essentially a customized Bulldozer chip with a 6850 on die would easily be possible and, if stripped of non-essential features, would I bet result in an SoC that could easily fit the bill. I mean, we have to remember here:

Console CPUs, when compared game to game on PC vs console at the same graphics settings, leave a PS3 or 360 still essentially only on par with an Athlon II X2 240 or Core 2 Duo E8200 + 8600 GT / 2600 XT, etc. So when you look at that setup versus the current APUs (essentially a Phenom II 955 + 5670 on die), and then a customized APU with an 8-core Bulldozer + 6850, the jump in performance is rather immense, taking into consideration the efficiency and the fact that, no matter what each side does, a game engine meant to work on all platforms will have more luck scaling with a typical x86 APU and PCs than with a 16-core / 64-thread Cell-style CPU.

Well then, I guess my POV regarding consoles and SoCs is: PLEASE DON'T!
Current consoles are already a limiting factor, the lowest common denominator, and their hardware was mostly top notch when it was released. If they start conceding performance even further in order to fit it all in one package, stagnation in games is a sure thing (more so than this generation == THE HORROR). Yeah, by the time they release next-gen consoles, AMD and maybe Intel will probably be able to fit an octo-core Bulldozer/Sandy Bridge and the equivalent of a Cayman on a single die, but by that time mainstream PC gamers will be using PCs twice as fast, not to mention enthusiasts. Come on guys, current APUs, while an improvement over previous integrated graphics, are only barely faster than R600 was 5 years ago FFS! Even if they double the performance every year (which I doubt considering current cycles) that will still leave us with Barts-level performance when they are released around 2014. That seems good right now, but in 2 years Barts-level performance will be completely outdated, or so I hope. For sure I don't want to suffer another 5 years of subpar console ports. At least DX11 brings in tessellation, which can be used to improve quality without having to worry about scaling between different platforms.

^^ That's one of the reasons I want them to focus on geometry: they can create the games with console hardware in mind, then by just upping the tessellation factor and using higher detail/resolution height maps, we get more detail with little effort from developers. For the same reason I want large-scale physics to be used and, one day, raytracing or some kind of photon-based lighting. What all 3 of these techs have in common is that they are very independent techniques that require very little tweaking from artists. With those elements the game engine would behave like the real world does, and developers would not need to care about things like how many lights are affecting a certain place, how many shadow maps are going to be created and many other things. They would just have to care about putting light sources where they should be, telling which material a certain object is made from etc., and let the engine and config file dictate which LOD level is used and how objects are going to behave, look and break, among other things.

@crazyeyes

Most developers create very high-poly models in order to create bump textures (or they outsource the creation to 3rd parties), and they can use those to create displacement maps. You are right that they are not using tessellation the way it should be used: most of them are just using it to round edges, normally not even caring whether they are tessellating flat surfaces too much while areas that require far more polys are left with a very poor number of them. On this, the Heaven bench is probably the worst, because (at least IMO) the stairs, the dragon and pretty much all the buildings and rocks need much, much more tessellation to even start looking good, while on the other hand the ship has far too many polys in some areas. For me, overall, Heaven looks like crap because it attempts to use tessellation + displacement but falls very, very short, even on extreme settings, leaving hard and pointy edges, textures that have been stretched too much, and something similar to high-frequency artifacts on the geometry as you move around. I don't know about Ati demos because I don't own an Ati card, but the Nvidia ones certainly do a much better job than Heaven, both on quality and performance, especially the island11 tessellation demo.

The thing about Ati cards and Nvidia cards being almost as fast on some games/benches using tessellation is that tessellation is not really being used as much as it could be. The island11 tessellation demo can show as many as 25 million triangles per frame (yes, that's far more than pixels onscreen) at acceptable framerates (GTX 570/580), and the absolute maximum without using adaptive tessellation or culling is 160 million. My GTX 460, for example, can handle around 500 M triangles sustained per second; it does around 20 fps with a 25 M triangle scene and 4 fps at the maximum of 160 M triangles*. Heaven, Stone Giant and others are nowhere near that mark; my estimation is that they use a maximum of 2 M triangles.

*For reference, the demo does 75 fps when tessellation is at minimum, and then the scene has around 50k polys. So although it might not be the best-optimized of demos, an extreme level of tessellation comes very cheap.
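Those throughput figures can be sanity-checked with simple arithmetic, since triangles per second is just triangles per frame times frames per second (all numbers are the ones reported above for the GTX 460):

```python
# Sanity-checking the throughput numbers above: triangles/second is just
# triangles per frame times frames per second. Figures are the ones
# reported in the post for a GTX 460 running the island11 demo.
def tris_per_second(tris_per_frame, fps):
    return tris_per_frame * fps

sustained = tris_per_second(25_000_000, 20)    # 25 M tris/frame at 20 fps
peak      = tris_per_second(160_000_000, 4)    # 160 M tris/frame at 4 fps

# Both land in the same ballpark as the quoted ~500 M tris/s sustained.
print(f"{sustained/1e6:.0f} M tris/s, {peak/1e6:.0f} M tris/s")

# "Far more than pixels onscreen": at 1920x1080 there are ~2.07 M pixels,
# so a 25 M triangle frame averages about 12 triangles per pixel.
tris_per_pixel = 25_000_000 / (1920 * 1080)
print(f"~{tris_per_pixel:.0f} triangles per pixel at 1080p")
```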

- Does the consumer get the SoC and change it himself? (Bad idea, and don't most console gamers buy consoles precisely because they don't want to mess around with the hardware and software?)

- Do you have to send it in, like an RMA?
- Can you upgrade at the store?

Anyway, I think the whole purpose of consoles is to have a closed and cheap platform that stays the same throughout its lifespan and is easier on developers. If the CPU and GPU are going to be changing anyway, why not simply focus on the PC and the PC alone? SoCs will always be inferior to what's possible on separate dies, so even if diversification makes optimization a little worse, the sheer power allows much better games to be made without the need to optimize as much for every single configuration. If developers had to support 3 different configs for each console, they would be making something like 9 different optimizations, and in that case developing for the PC would simply be a lot better.

On the consumer side it would be even worse when some developers choose to fully use the power of the latest and greatest SoC, forcing people to upgrade or else not play / play a dumbed-down version. I mean, it's the PC without the benefits of the PC. And on top of that, we know at what outrageous prices the upgrades would be sold.


My point is the Endless City demo: at maximum settings in it, AMD and Nvidia are on par. The point is that it's an Nvidia demo and it runs the same on both, yet in many apps we see Nvidia dominate with tessellation. Why is that, when their very own bench shows a completely different outcome?

And I agree SoCs would stagnate PC game development, but guess what, it's already stagnant, and sadly whatever they can get the most money out of for minimum cost is what will be used. And while I agree that on the high-end PC side of things it would be bad, take a look at the consumer base: people with good gaming PCs are a tiny group and everyone knows it, so it's not like matters would be that much worse. It would pretty much end up the same regardless.

And yes, the millions or billions of polygons for a high-res mesh: an example would be Gears of War,
a 1 billion polygon high-res mesh baked down onto a 13k triangle base mesh.
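For a sense of scale, each uniform subdivision step roughly quadruples the triangle count, so you can count how many levels separate a 13k base mesh from a ~1 billion poly sculpt (a rough sketch, ignoring quad vs triangle topology details):

```python
import math

# How many uniform subdivision steps does it take to get from the 13k
# triangle base mesh to the ~1 billion poly sculpt mentioned above?
# Each step roughly quadruples the triangle count (1 triangle -> 4).
base_tris = 13_000
target_tris = 1_000_000_000

steps = math.ceil(math.log(target_tris / base_tris, 4))
final_tris = base_tris * 4 ** steps
print(f"{steps} subdivision steps -> {final_tris:,} triangles")
```

So roughly 8-9 levels of subdivision separate the game asset from the sculpt the normal maps were baked from.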

In general, though, tessellation just tessellates everything, whereas a conventionally built mesh has varying topology densities and you can edit the areas that need it most; tessellation doesn't do that. For example, I don't need 1-million-polygon eyeballs or teeth.
Hell, you don't even need a million for a realistic human being; you really just need proper modeling techniques and a few hundred thousand, because using extreme tessellation for skin pores is just absurd. I'd rather see tessellation left out in favor of dynamic hair, you know what I mean, real hair, not this low-poly triangle hair with transparency maps.

But the major point here is that rounded edges are nice, they help, but we don't need large polycounts for that to happen. I'd rather see adaptive tessellation that focuses on key things.

Good examples: ears, nose, mouth, wrinkles, fingernails, etc. We don't need 150,000 polys on a forearm, but a few thousand extra to give proper fingernails and dynamic wrinkles to the knuckles, that would be nice. True displacement mapping on a high-res model is great, but I feel normal mapping works just fine for many surface details, so extreme tessellation isn't really needed. But again, on structures that get blown apart, like a concrete wall, dynamic tessellation that can take procedural textures etc. would be great: never the same twice, and more realistic in terms of the damage done. I'm sure you can see where I'm going with this; there are far better uses for tessellation than what's going on now. I've always believed that if you need that detail, do it manually, at least with how it's used today, because you can do a better job of it. Again, good examples are the characters in Metro 2033, Stalker CoP and Aliens vs Predator: it would have been better to just offer an ultra geometry setting and manually make the meshes that way to begin with. The only way I see tessellation as a factor is to reduce geometry at great distances to remove the LOD pop-in issue that plagues many, many games.

I think you are confused about Oblivion. It does not have 4 M poly scenes at all; 4 M polys per second is far more likely. A Crysis scene heavy on vegetation and dead bodies is around 1.5 million polys. Oblivion does not have 4 M polys, believe me. I have created and edited many weapons and armors, and the stock weapons usually have ~500 polys, while armor sets never exceeded 2000 polys IIRC. There's no way the scenes had 4 M polys.

As for the Endless City demo, it is obviously bottlenecked somewhere other than the tessellators, because I get 25 fps at the same settings at which people report 30 fps with a GTX 580, 570 and so on.

The Endless City demo is probably the worst demo to test tessellation performance anyway. It does use a lot of tessellation (although not as much as the island demo), but it also uses many other new and overkill features that stress many other areas, including the CPU. If I had to guess where the bottleneck really is, I would say the 500,000 light sources and generated shadows are the problem. Or the fact that the base mesh is being generated procedurally. Or the use of ambient occlusion in such a massive environment... there are so many candidates.

Pure tessellation benchmarks like TessMark, Microsoft's DX11 tessellation runtime and others clearly show a 200% lead for Nvidia at low tessellation levels (16x and below) and over 800% at high levels like 64x. All the other comparisons are moot and only highlight the weakness (or equality) of Nvidia's architecture in other areas, which no doubt has to be improved, and so does AMD's, though maybe to a lesser extent. On tessellation, however, AMD has a lot of catching up to do, and not necessarily on performance (which may or may not suffice) but on constructing a scalable (regarding tessellation) architecture, which they clearly lack right now.

I've always believed that if you need that detail, do it manually, at least with how it's used today, because you can do a better job of it. Again, good examples are the characters in Metro 2033, Stalker CoP and Aliens vs Predator: it would have been better to just offer an ultra geometry setting and manually make the meshes that way to begin with. The only way I see tessellation as a factor is to reduce geometry at great distances to remove the LOD pop-in issue that plagues many, many games.


On this I agree 100%, and I think I've said as much plenty of times here on TPU. The way they are using the tech is absurd, because you could effectively create the same detail manually and probably get better results and better performance. Yeah, it would take more time and effort, and tessellation can make their life easier, but that's not a "metric" I would be fond of these days. In the age of expansion packs that are sold as games, or games that really are just expansions, whichever you prefer; in the age when publishers rush games out the door and milk them like there's no tomorrow, something that is easier for developers is something I don't really like. Some years ago, when something easier for the developers would have meant that more emphasis would be placed on other areas, or that content (aka play time) would be extended, yes, I would have fully agreed with that approach. But today, something that is easier for developers just means they will take far less care with detail and rush out games or said features even faster. The examples of tessellation you mentioned are the proof. Today it looks like developers are either lazy or so pressured by publishers, release after release, that they no longer care about detail, polish or even the overall quality of their products.

While I'll agree that other bottlenecks may be the case, you can't compare pure tessellation benchmarks to a game either.

As it stands right now, I really don't see any need for games to hit 25 million polygons per scene.

Many games are hitting 2-3 million, with a few that hit 3.5-4 million, mainly the Gamebryo-engine-based Fallout and Oblivion, plus Crysis and Warhead.

Now, again, I could see tessellation making big advancements in image quality, but look at the original soldier DX11 demo from a while back.

I can't remember what it's called, maybe you remember; I know it was one of the first DX11 demos, the one where you could manually adjust a slider for tessellation. That demo pretty much shows my point: you don't even need to hit the multiple-triangles-per-pixel situation before you run into massive diminishing returns. The problem I see with tessellation is that every benchmark uses it on everything, so in that scenario AMD has a major bottleneck for sure, but if it's used where it SHOULD be, I think either AMD or Nvidia is perfectly fine. Again, there is eventually a point of diminishing returns, and when things are moving, or you're actually playing the game, the difference won't be noticeable, though it will be evident in still shots. Either way, the point remains: no one's using DX11 properly, so until that changes I can't honestly say who has the better implementation.

Nvidia is brute-forcing with shaders, which works for now, but with GPU physics and shader-heavy games with tessellation they would run out of steam, whereas AMD has the limited tessellation performance that drags them down but excels elsewhere. It really is an interesting divergence in ideas. As far as games go it makes little difference, as today's GPUs, tomorrow's GPUs and the ones after that still won't be ready for prime time. Not to mention that by the time this finally does get sorted out, I have a feeling we will have DX12.

And while this is interesting, and I'm greatly enjoying this conversation, it's really completely off topic.


Why do people keep saying/thinking that? It's absolutely false.

Nvidia does not do tessellation on shaders; it does it on 16 (GF100/110) dedicated tessellators, exactly like the ones AMD cards have. Nor does it do tessellation by any more "brute" force than AMD, except for the fact that they included 8x more tessellators in their chip. Both AMD's and Nvidia's tessellators work in exactly the same way, which is the one imposed by DX11: one stage is done in the fixed-function unit(s), while the other stages (hull/domain) are done in shaders. Using more tessellators does not come at the expense of shader use, unless your code specifically emphasizes hull/domain shader usage (i.e. by being more adaptive), and in that case shader use is just as high on AMD hardware as on Nvidia hardware. The idea that Fermi uses shaders for tessellation is a myth invented and perpetuated by Charlie Demerjian, and it is no truer now than when he first said it. So the bottom line is that when games start using more and more shaders for physics effects and whatnot, Fermi will still retain the exact same lead in tessellation it has today.

The confusion regarding tessellation on Nvidia cards probably comes from the fact that the tessellators are physically located in the Streaming Multiprocessors*, but so are the texture units, and I have never seen anyone claim that texture filtering is done on SPs...

I guess that, like every myth or meme, it will take some time until people grasp the truth, but it gets annoying to hear these kinds of things so often, because they give people wrong ideas about the architectures and their real capabilities.

*And when those are disabled, the tessellators are disabled with them; when more SMs are added, more tessellators are added to the whole.
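For what it's worth, the stage split described above can be sketched in a toy model. This is not D3D11 API code, just an illustration of which stages are programmable (run on shader cores) and which are fixed-function; the n**2 amplification figure is the textbook result for uniform subdivision of a triangle with factor n, not the exact D3D11 partitioning rules:

```python
# Toy model of the DX11 tessellation stage split: programmable hull shader,
# fixed-function tessellator, programmable domain shader. Names and structure
# are illustrative only, not the Direct3D API.

def hull_shader(patch, tess_factor):
    # Programmable stage: runs on shader cores, decides how finely to subdivide.
    return {"control_points": patch, "factor": tess_factor}

def fixed_function_tessellator(factor):
    # Fixed-function stage: generates subdivided topology, no shader ALUs used.
    # Uniform factor-n subdivision of a triangle yields n**2 smaller triangles.
    return factor ** 2

def domain_shader(triangles):
    # Programmable stage: evaluates the surface per vertex (e.g. displacement).
    return triangles

patch = ["p0", "p1", "p2"]
hs_out = hull_shader(patch, tess_factor=8)
tris = domain_shader(fixed_function_tessellator(hs_out["factor"]))
print(tris)  # 64 triangles generated from a single input patch
```

The point of the split is that the geometry amplification happens in the fixed-function unit, which is why adding more tessellators does not eat into shader throughput.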

Either way, tessellation today, if used properly for the hardware and scaled to the lowest common denominator, means we're kinda screwed for the time being, lol.

I'd rather see more usage of subsurface scattering with lighting for more realistic skin shader effects, dynamic hair, etc. I feel these things would add more to most games than tessellation does right now.

I really hate the flat, nasty poly hair (lol) that clips into everything. I feel the tech we're seeing now, like the new facial animation used in L.A. Noire, coupled with dynamic hair and subsurface scattering for skin, would be a much bigger improvement for games than tessellation, as robotic, boring characters are still boring no matter how pretty. Or maybe that's just me? I don't know; for me graphics are good (not quite good enough), but the other tech that's needed is still far behind where it should be for that nice complete package.

We need an OpenCL physics platform that actually works and gets used
proper usage of DX11
better multithreading on the CPU, and better allocation of resources
better, more realistic skin shaders
dynamic hair
better facial animation

I'd rather have the above first, before tessellation. But then that's just being greedy, isn't it? As a consumer, though, I demand it. That's right, I have the balls to DEMAND IT!!!!


It's kind of funny, because you are "dismissing" tessellation when it's the one thing that can apparently fix the thing you hate most.

The problem with tessellation (like any other new tech, actually) is that there's a massive difference between what it can be used for, as demonstrated in tech demos, and what game developers actually use it for in actual games, and the capabilities of each GPU architecture have nothing to do with this.

It's just laziness or tight schedules/budgets. If you think PhysX is dead, you can be sure it is in that situation because of lack of support, and not because the tech was not worth it. It has never been used to even 5% of its potential and has remained pretty much a gimmick. Sadly, tessellation is following the same route right now. IMO some things cannot be used effectively unless you use them extensively, and tessellation, for me, is one of those. It's not worth the performance penalty (because you pay a big penalty upfront, no matter the tessellation level you use) unless you use it everywhere, but with common sense, to add displacement, etc. IMO.

And regarding subsurface scattering, many games use it now, and many upcoming games have announced they are going to use it, but I think it's pretty heavy on the hardware. At least it's very heavy in offline renderers, but maybe it's easier with semi-dedicated hardware, i.e. GPU shaders, than it is for CPU renderers. Much the same happens with tessellation (subdivision), which is so heavy on the CPU but so fast on fixed-function units.
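One reason the GPU case can be so much cheaper is that real-time engines usually don't solve the full scattering problem at all: screen-space approaches approximate subsurface scattering as a blur of the diffuse lighting buffer, which maps well onto shader hardware. A minimal 1D sketch of that idea (the kernel and values are illustrative, not from any particular game):

```python
import math

# Illustrative 1D version of screen-space subsurface scattering:
# blur the diffuse lighting so light "bleeds" across a hard shadow edge,
# as it does through skin. Kernel shape and sigma are made-up examples.

def gaussian_kernel(radius, sigma):
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]  # normalize so energy is preserved

def blur(diffuse, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(diffuse)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(diffuse) - 1)  # clamp at edges
            acc += diffuse[j] * w
        out.append(acc)
    return out

# A hard lit/unlit boundary gets softened, mimicking subsurface bleed.
diffuse_row = [1.0] * 4 + [0.0] * 4
print(blur(diffuse_row, gaussian_kernel(2, 1.0)))
```

A full path-traced or diffusion-based SSS solve is far more expensive, which fits the point above about offline renderers versus GPU shader approximations.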

PS: Although all this talk about features seems off topic, it is not completely off topic, for a simple reason: if any of those features are to be included in future games, a SoC will simply not be enough, not even close, and I think almost everyone here wants these things, or things of a similar nature.

Well, I realize tessellation can do it, but I feel there are more effective methods for it than tessellation; most of them wouldn't really work well in rasterizer-based renderers, though, in terms of game engines.

And subsurface scattering, yeah, is in some games, but many are not using it effectively, or in a way that makes the lighting have impact. The only game I've seen use it effectively was Metro 2033, but we shall see. Again, I'm probably being greedy: I want skin shaders similar to what I can accomplish with mental ray, lol. Then again, I'd also like to see NURBS patches etc. become possible in game engines.

As for PhysX, it's not dead, and it's no gimmick in its ideal usage. The problem is that Nvidia were douchebags and limited it to their hardware, and while AMD could have licensed it, I don't think they want Nvidia's branding on all their GPUs, lol; that would be bad for business. So we end up stuck in a holding pattern of great ideas and great advancements that never see their full potential.

Also, demos like the one Ageia released back in the day don't help, when people found a way to run the demo on CPUs, and a single-core AMD Athlon 64 was only 1 fps slower than an Ageia PPU running the same cloth, massive boxes, and particle effects, with that F2P game they put out. Overall, the industry itself just pisses me off: arbitrary vendor ID locks that stop development and punish the consumer.

As for more people wanting these features: I'm sure if people KNEW about them, they would. The problem is that the mass market for gamers is consoles, and most console users don't even know what DX11 does. My friends don't; they thought it was just the newest DX release for Windows 7, a fancier DX9 that did nothing to improve visuals.

These are the kinds of people that own a 360, PS3, or Wii and 30+ inch HDTVs; they have no grasp of what computer technology is capable of today compared to their consoles. And that's the typical issue: how can they want this if they don't even know it exists or what it's capable of?

Well, yeah, if you want the same quality as you get with mental ray, you'll probably have to wait "a bit," lol. But I do share the same objectives in the end; I'm just focusing more on those that I've seen in action, so I know they can be done. PhysX-like physics is one, and tessellation is the next. 25M-poly scenes are probably not needed right now, but they are certainly possible, which is the point. AMD should just catch up on tessellation so that we don't have to philosophize about which implementation is best or more suitable. If 25M-poly scenes are as easy to achieve as they are on Fermi cards, gosh, just freaking bring them to us. OK, maybe not 25M just yet, but give us 10M.

You hate hair. I hate scenes with sand and small rocks; they just never look right, and no amount of bump mapping or POM has ever made them look half convincing. Tessellation can change that, and in my mind that's what it is for, along with so many other grainy surfaces that I hate so much. And hell yes, at some point (2016?) I want them to use it even on pores.

Well, I can agree with mass tessellation on the world at large in games; I think I said that already. Especially games like Fallout: New Vegas, which would look 50x better with tessellation on the world geometry. As far as characters go, again, I feel tessellation is more of a wasted feature on them; a bump from the 10-15k average of today to 25k would, I believe, be enough to keep parity. The world itself doesn't move much on the horizon or around you in game, whereas characters and enemies try to hide, move, flank, get blown up, etc., which means the extra detail, while nice, seems kind of wasted while they're moving and developers use "blur." In a lot of situations, I feel the characters these days tend to look better than the world around them; again, this is in general and depends on the game.

But yeah, to be honest, 10 million is doable right now even without tessellation. Then again, tessellation allows 3-4 million poly scenes to become 10 million, so that the lowest common denominator, aka weak GPUs, can still handle it. That's what I think tends to hold us back as well: GPUs advance rather quickly in terms of performance in today's market, and people don't upgrade every year or two like the typical enthusiast, so we're stuck with so many shitty machines that we're forced to cater to the lower GPUs. If we don't, we end up with the Crysis 1 fiasco, where everyone bitched because they couldn't run it at Very High / Enthusiast settings. That resulted in Crysis 2 being a step backwards in technology and overall advancement; it still looks good, but I can see the areas they sacrificed for better performance, to make the sheeple happy.
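The "3-4 million becomes 10 million" arithmetic above can be made concrete. Assuming roughly uniform subdivision (amplification scales with the square of the tessellation factor, as in the toy model earlier in the thread), here's a sketch of the integer factor needed to lift a base scene to a target poly count:

```python
import math

# Sketch of the "ship a weak base mesh, tessellate it up" idea.
# Assumption: uniform subdivision, so poly count scales with factor ** 2.

def factor_needed(base_polys, target_polys):
    """Smallest integer tessellation factor reaching the target poly count."""
    return math.ceil(math.sqrt(target_polys / base_polys))

base = 3_000_000
f = factor_needed(base, 10_000_000)
print(f, base * f ** 2)  # factor 2 turns a 3M base scene into 12M polys
```

So a modest factor of 2 already overshoots the 10M target, which is why a base mesh sized for weak GPUs can still scale up nicely on tessellation-capable hardware.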