cynan wrote:And also, that Titan benched above must have been overclocked pretty much to the max. From a quick dig around Google, it seems that the HD 7970 gives a good 2/3 of the Titan's performance in the Valley benchmark.

I wanna see Titan SLI run this now. EDIT: added the end pic for my system in the OP.

Not sure how that one HD 7970 got into the 60s for avg FPS. I'm a hair shy of 50 FPS using the Extreme HD settings at 1080p, and that's running at 1350MHz. The highest score is from a quad HD 7970 setup, followed by a tri-SLI Titan setup.

I don't know why it says Preset: Custom; I didn't change anything. I left everything on defaults except for switching to Ultra mode. I ran without AA, obviously; with AA it would be too horrible for this GPU... (see below)

[edit] My friend Emily's results on her Dell XPS:

[edit the second] Okay, so I was wrong! Not bad for an old mid-range Fermi card, huh? Full-size version: here

Last edited by auxy on Sun Mar 03, 2013 3:42 pm, edited 3 times in total.

auxy wrote:Titan is also quite limited in its overclockability, due (if I recall correctly) to the relatively low thermal ceiling set by Nvidia. I believe ~990MHz is as high as you can go without raising the voltage, and you can't raise the voltage very much (again, IIRC). So, since you have a pretty good OC on your card, I'm curious to see the comparison! °˖✧◝(⁰▿⁰)◜✧˖°

I was just remembering Anand's review where he said that you couldn't go over something like 1019MHz without raising the voltage past the spec. I don't like to run things out of spec on voltage (I never exceed Intel-recommended voltage when OCing CPUs). （ﾉ´д｀）

Either way, even 1019MHz is pretty awesome for a card with such a huge GPU! (ノ*゜▽゜*) 1200MHz is just silly!

My hardware certainly isn't setting any records with the Valley benchmark, but I can still get a decently smooth demo (high 20s in FPS) at 1920x1200 with High settings, 4xAA and ambient occlusion turned off with my 560 & old Core 2 desktop.

One thing I love about Valley is that all those mountains you see off in the distance are not simply 2D faked wallpaper projected onto the outer boundary to provide an illusion of depth... if you keep going long enough you can actually climb up all those mountains! Obviously the number of models and textures are kept low so that the download doesn't become obscenely large, but given some more sprucing up with unique models, NPCs, water, towns, etc, the Valley could host a pretty nice RPG game all by itself.

chuckula wrote:Obviously the number of models and textures are kept low so that the download doesn't become obscenely large, but given some more sprucing up with unique models, NPCs, water, towns, etc, the Valley could host a pretty nice RPG game all by itself.

I wonder if Valley is doing anything particularly clever; I don't really think it is -- the textures are relatively low-quality, the meshes are fairly low-detail (compared to e.g. Skyrim), and the largest portion of the valley's good looks come from the foliage, which is small enough and repeated in such quantities that being low-detail doesn't really affect it. That is to say, Valley is more or less just a demonstration of Level of Detail optimization allowing the illusion of a gigantic, detailed world without limiting draw distance (say, via fog.) (;´□｀) It uses relatively simplistic assets with an advanced renderer and some shader effects to create a very nice-looking scene (although looking at it without AA makes me want to puke.) ヘ(>_<ヘ)
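To be concrete about what I mean by LOD optimization: the core of it is embarrassingly simple. Here's a minimal sketch (my own illustration, with made-up distance thresholds; Unigine's actual implementation is surely fancier):

```python
# Minimal sketch of distance-based level-of-detail (LOD) selection, the kind
# of technique that lets Valley fill a huge scene with cheap far-away foliage.
# The thresholds here are invented for illustration.
def select_lod(distance, thresholds=(50.0, 150.0, 400.0)):
    """Return a mesh LOD index: 0 = full detail, higher = coarser."""
    for lod, cutoff in enumerate(thresholds):
        if distance < cutoff:
            return lod
    return len(thresholds)  # beyond the last cutoff: coarsest (e.g. a billboard)

# Foliage at many distances: near instances get detail, far ones get impostors.
print([select_lod(d) for d in (10, 100, 300, 900)])  # → [0, 1, 2, 3]
```

Because the far trees are so small on screen and so numerous, nobody notices that LOD 3 is basically a textured card. (´･ω･`)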

I'm curious why these sorts of games haven't moved to procedural generation to reduce the size of the assets; .theprodukkt demonstrated back in 2004 with their 96KB .kkrieger demo that you don't need gigs of data to produce a visually complex scene, yet you still hardly ever see this sort of technique used; maybe a few things here and there, but certainly not much. Spore was a high-profile champion of procedural generation in games, and I wonder if the high-profile failure of that game didn't do a lot to tarnish the perception of procedural techniques. ( ￣д￣;)
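The basic trick behind those tiny demos is that a few lines of deterministic math can stand in for megabytes of stored data. A sketch of the idea (classic value noise; this is the general pattern, not .kkrieger's actual code):

```python
import math

# A heightmap from pure math: deterministic "value noise". A few lines of code
# replace megabytes of stored terrain data -- the idea behind .kkrieger's
# tiny download. Constants are arbitrary mixing primes, not anything official.
def hash01(x, y, seed=1337):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    n = (x * 374761393 + y * 668265263 + seed * 2147483647) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return n / 0x100000000

def value_noise(x, y):
    """Bilinearly interpolated lattice noise: smooth 'terrain' with zero assets."""
    xi, yi = math.floor(x), math.floor(y)
    fx, fy = x - xi, y - yi
    # Smoothstep the fractional parts so the interpolation has no creases.
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = hash01(xi, yi) * (1 - fx) + hash01(xi + 1, yi) * fx
    bot = hash01(xi, yi + 1) * (1 - fx) + hash01(xi + 1, yi + 1) * fx
    return top * (1 - fy) + bot * fy

# An "infinite" terrain needs no stored data at all -- just evaluate on demand.
height = value_noise(12.34, 56.78)
assert 0.0 <= height < 1.0
```

Sum a few octaves of that at different frequencies and you get perfectly serviceable mountains, with the whole "asset" being the seed. (・∀・)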

On a tangentially related note, as I understand it these kinds of huge scenes are traditionally a weakness of immediate-mode renderers like those in our Radeon and GeForce cards (due to extreme degrees of overdraw), and traditionally a strength of tile-based renderers like the PowerVR hardware. I wonder if, as this kind of huge open-world game becomes more the norm (with games like Skyrim and sandbox games like GTA ever growing in popularity -- not to mention open-world MMORPGs à la World of Warcraft and TERA Online), PowerVR might not make a return to the desktop enthusiast space? 「(°ヘ°) Though I do remember reading recently that tile-based deferred renderers, due to their very nature, are terrible for action games; I can't remember why exactly -- something to do with inherent latency of the design...?
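To make the overdraw point concrete, here's a toy comparison (my own back-of-the-envelope illustration, nothing like how real GPUs are implemented): an immediate-mode renderer shades every fragment of every primitive it draws, while a tile-based deferred renderer resolves visibility first and shades each pixel once.

```python
# Toy overdraw comparison. Each quad is (x0, y0, x1, y1) in pixel coordinates.
def immediate_mode_shades(quads):
    """Immediate mode: every covered pixel of every quad gets shaded."""
    return sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in quads)

def tile_deferred_shades(quads):
    """Deferred: visibility is resolved first, each covered pixel shades once."""
    covered = set()
    for x0, y0, x1, y1 in quads:
        covered.update((x, y) for x in range(x0, x1) for y in range(y0, y1))
    return len(covered)

# Ten full-screen layers on a 100x100 target: 10x the shading work vs. none.
layers = [(0, 0, 100, 100)] * 10
print(immediate_mode_shades(layers))  # → 100000
print(tile_deferred_shades(layers))   # → 10000
```

In a foliage-heavy scene like Valley, where dozens of translucent-edged leaf cards overlap every pixel, that multiplier is exactly what hurts immediate-mode hardware. (Early-Z rejection claws a lot of it back in practice, which this toy ignores.)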

I'd love to get the input of someone more knowledgeable on this topic! ヽ(*≧ω≦)ﾉ

I know the engine used by Free Radical for their games (TimeSplitters, Second Sight, Haze) had a render optimization where only the things the player could see on their screen were rendered in full effect. So, for example, when running down a hallway, up ahead is a corner where the hall splits off to the left. While running down the hall, that left split had not been loaded yet, at all. But when the player got within a set distance, that hall was then rendered, and the furthest-back part of the hall was removed from RAM. In other words, streaming... but in a whole new sense. Modern games stream by loading textures into their models as the player moves; this streamed the entire world, allowing for high-detail areas without wasting resources on things the player can not see. There is no need to have the entire front of a level rendered while the player is ten city blocks away from it. I'll try to find the article about this, but it was back in 2004. Still, a fairly interesting method. And the system would do it so fluidly there were no hiccups or Unreal 3-style texture pop-ins. Or Crysis 2-style tessellating of the water underneath the city... crap like that is exactly what this engine was designed not to do.
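The general pattern there is pretty simple to sketch (my guess at the shape of it, not Free Radical's actual code): keep a set of loaded world chunks, and on every move load what just came into range and evict what just left it.

```python
# Sketch of distance-based world streaming: chunks within `load_radius` of the
# player's chunk stay resident; everything else is evicted from memory.
def stream_chunks(loaded, player_chunk, load_radius=2):
    """Return (new_loaded_set, chunks_to_load, chunks_to_evict)."""
    px, py = player_chunk
    wanted = {(px + dx, py + dy)
              for dx in range(-load_radius, load_radius + 1)
              for dy in range(-load_radius, load_radius + 1)}
    to_load = wanted - loaded    # just came into range: stream in from disk
    to_evict = loaded - wanted   # just left range: free the memory
    return wanted, to_load, to_evict

loaded, _, _ = stream_chunks(set(), (0, 0))       # spawn: a 5x5 block resident
loaded, new, gone = stream_chunks(loaded, (1, 0)) # step right: swap one column
print(len(loaded), len(new), len(gone))  # → 25 5 5
```

The hard part, of course, isn't the bookkeeping -- it's doing the disk I/O far enough ahead of the player that it never stalls a frame, which is what made Free Radical's version impressive.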

Some rendering is moving toward distance-function evaluation for scene elements that normally require massive overdraw. And we're also starting to see real-time raytracing potential, so I'm not sure tile-based renderers have much to offer beyond really low-end mobile GPUs.

Now, the issue with those non-rasterizing techniques is that even though they're light on geometry (traditional rasterized triangles), they're compute-heavy. You trade overdraw for compute, but I think long-term it's inevitable, and the PS4 might include a lot of hybrid rendering. http://raytracey.blogspot.com/2013_01_01_archive.html

The result is attractive. And like you mentioned, there is very little scene data and it's dynamic, where no two rocks or trees have to be alike (this is real-time on a high-end GPU): "When you do all these four types of thinking at the same time, you get a formulanimation – something that looks for example like a forest, but that it's just one huge mathematical formula:"

So the idea is that instead of generating a massive amount of geometry, this is transformed into a set of distance-function evaluations. No more triangles, no more overdraw. The burden is now all compute.
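The core loop is sphere tracing: march a ray forward by whatever distance the field reports, since the field guarantees no surface is closer than that. A minimal sketch (a single analytic sphere standing in for a whole scene; real renderers compose many such functions):

```python
import math

# Minimal sphere-tracing sketch of distance-function rendering:
# no triangles at all, just repeated evaluation of a signed distance field.
def scene_sdf(x, y, z):
    """Distance to a unit sphere at the origin -- an entire 'mesh' in one line."""
    return math.sqrt(x * x + y * y + z * z) - 1.0

def raymarch(ox, oy, oz, dx, dy, dz, max_steps=64, eps=1e-4):
    """March along the ray until the SDF says we're on a surface (or give up)."""
    t = 0.0
    for _ in range(max_steps):
        d = scene_sdf(ox + dx * t, oy + dy * t, oz + dz * t)
        if d < eps:
            return t  # hit: distance along the ray to the surface
        t += d        # safe step: the SDF guarantees nothing is closer than d
        if t > 100.0:
            break
    return None       # miss: the ray escaped the scene

# A ray starting at z = -3, aimed straight at the sphere, should hit at t = 2.
t = raymarch(0.0, 0.0, -3.0, 0.0, 0.0, 1.0)
print(round(t, 3))  # → 2.0
```

Notice the trade I mentioned: there's no geometry and no overdraw, but every pixel of a real image pays for up to 64 evaluations of the scene function. That's the compute burden.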

sschaem wrote:So the idea is that instead of generating a massive amount of geometry, this is transformed into a set of distance-function evaluations. No more triangles, no more overdraw. The burden is now all compute.

Point taken. Pushing this type of compute rendering at a nice frame rate would require some serious horsepower.

Sooo... Yeah, playing games on a MacBook Pro (or on any other Mac running OS X) is not a good idea at all, even if it has a dedicated Nvidia GPU. Of course, there's always the "Boot Camp + Windows" solution, which is a must if you want to play any kind of 3D games on your Mac.

My subscription allows you people to exist on this site and makes me a better human being than you'll ever be

It's not that low. From this thread it seems that benches for Titans overclocked to the max on air (1100-1150MHz) get around 70 avg FPS, maybe a couple FPS higher for the very best. I don't know what's up with that Titan bench you posted on the other page. Or was that with AA turned off? There are some pretty crazy scores in that overclock.net thread. Must be hard-modded for increased voltage with extreme cooling. Not really something us mere mortals can compete with.

killadark wrote:forgive me for asking but is this skyrim :o if so which glorious mod is it

No offense intended, but... (¬_¬) Your 4870 512MB is going to cry out in agony if you try to run the game as depicted in those screenshots. (ﾟДﾟ;) Best to not even bother. It runs like bollocks on my GTX460!

I got interested in your comments and then thought I'd try it on my GTX 680. This is what I got:

I don't know, the numbers seem too high? Or is this for real?

These are my settings:

Of course, the GPU is factory OC'ed to 1072/1502/1137 per GPU-Z

A fact is a simple statement that everyone believes. It is innocent, unless found guilty. A hypothesis is a novel suggestion that no one wants to believe. It is guilty, until found effective.Edward Teller

SomeOtherGeek wrote:Hi all -- I got interested in your comments and then thought I'd try it on my GTX 680. This is what I got: [image] I don't know, the numbers seem too high? Or is this for real? These are my settings: [image] Of course, the GPU is factory OC'ed to 1072/1502/1137 per GPU-Z

Hehe, you ran without anti-aliasing. Try with 8x AA and see what you get. It'll probably be more in line with what you'd expect.

By the way, I love your signature! Edward Teller is one of my personal heroes!

auxy wrote:By the way, I love your signature! Edward Teller is one of my personal heroes!

Yea, mine too. When I need a moral boost, I read his stuff. Always brings me back to earth.

BTW, it looks like you're not using the latest drivers... Upgrade them. Also, whatever the manufacturer of your GTX 680 card, download their "overclocking" utility and set the "Power Target" slider to the max value -- it won't harm your card and won't increase idle power consumption, but it will allow the GPU to automatically "boost" its clock speed past the "stock" values at full load.
