Hi guys, I have been a long-time lurker on these forums but didn't register until now. I always like the way you conduct your GPU reviews, as you give a stronger emphasis to smoothness rather than absolute FPS numbers. But I have a question: how do you guys measure frame time? I ask because in the past I have used PerfHUD for NV GPUs to measure this, but you needed access to the code base to insert the PerfHUD hook. I raised the same question here http://forums.anandtech.com/showthread.php?t=2287709 but didn't get a response from the resident GPU editor.

I think this might be a better question under the specific article (since Scott seems to answer good questions like this), but they use Fraps. I personally have not used Fraps, but he explains in the first Inside the Second review that there is a setting to log individual frame times.

From the article:

Damage wrote:I did make one simple adjustment to my testing procedures: ticking the checkbox in Fraps—the utility we use to record in-game frame rates—that tells it to log individual frame times to disk. In every video card review that followed, I quietly collected data on how long each frame took to render.

"A life is like a garden. Perfect moments can be had, but not preserved, except in memory. LLAP"

I used FRAPS. It dumps out a log when you hit F11 to benchmark, and that log includes the exact time each frame was rendered. Plop it into a spreadsheet and subtract each time from the previous time to get numbers you can put into a graph, or just sort if you're looking for frames over a certain threshold.

Some people ask me why I have always enclosed my signature in spoiler tags; There is a good reason for that, but I can't elaborate without giving away the plot twist.

Thanks. But correct me if I'm wrong: the frame time Fraps reports is the time when the CPU makes the frame available to the DirectX API, not when it is actually rendered, right? I ask because the PerfHUD tool lets you track a frame through its complete life cycle (later Nsight for Kepler GPUs).

I do worry that Fraps' method isn't sufficient to capture the effects of any frame metering or other delays that might be happening in multi-GPU scenarios, where DX hands off to one GPU and then another. However, I'm less concerned about those issues in single-GPU configs, where the rendering process should act as a kind of feedback loop. The point in that loop where you capture a timestamp shouldn't matter much.

That said, I agree we ultimately need even better tools to show us what's happening onscreen.

Saying that isn't the same thing as saying that Fraps-based frame time measurements are problematic, though--and it's a long, long way from making the case that such measurements aren't better than the deeply flawed FPS averages we still see way too often.

I concur, Scott. I asked because I was interested in whether you guys are doing any real-time performance analysis, as we can in D3D apps. It looks like for released software we are stuck with Fraps for the time being. Anyway, thanks for stopping by and answering my question.

The current version of MSI Afterburner captures frame times; dunno how long they have had it. It spits them out on their nice little green graphs. Not sure if you can output it to a log or whatever, but you can always screenshot the graph.

Chrispy_ wrote:I used FRAPS. It dumps out a log when you hit F11 to benchmark, and that log includes the exact time each frame was rendered. Plop it into a spreadsheet and subtract each time from the previous time to get numbers you can put into a graph, or just sort if you're looking for frames over a certain threshold.

Excuse my weak spreadsheet-fu, but when you are dealing with 30K+ lines of data, is there any way to automate that?

Well, if it outputs the frametime additively, then you would make an equation something like:

Column A = raw values
Column B = desired values

B(x) = A(x) - (A(x-1))

So, for example:

    A   B
1  20
2  40
3  60

B(1) = 20 - (null) = 20
B(2) = 40 - (20) = 20
B(3) = 60 - (40) = 20

edit2: I booted up OpenOffice Calc, made a set of example values, then simply made the equation for the first line, so in B2 the equation is (A2) - (A1). I then copied that cell, used Paste Special, and in that submenu chose "paste formulas". The program automatically adjusted it for each subsequent row. BAM. Done.
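If your spreadsheet-fu is weak and you're facing 30K+ rows, the same B(x) = A(x) - A(x-1) calculation is easy to script. A minimal Python sketch; the file name and column layout here are assumptions (a header row followed by "Frame, Time (ms)" pairs, where Time is the cumulative timestamp since the benchmark started), so check your own Fraps frametimes log before trusting it:

```python
# Hypothetical sketch: automate the "subtract each time from the previous"
# step for a Fraps frametimes log, instead of doing it in a spreadsheet.
import csv

def deltas(times):
    """B(x) = A(x) - A(x-1): per-frame times from cumulative timestamps.

    The first timestamp has no predecessor, so n timestamps yield
    n-1 deltas (the spreadsheet example keeps the raw first value).
    """
    return [b - a for a, b in zip(times, times[1:])]

def load_fraps_times(path):
    """Read the cumulative timestamp column from a Fraps frametimes CSV.

    Assumed layout: one header line, then "Frame, Time (ms)" rows.
    """
    with open(path, newline="") as f:
        rows = csv.reader(f)
        next(rows)                      # skip the header line
        return [float(row[1]) for row in rows]

# Worked example matching the cumulative 20/40/60 ms table above
print(deltas([20.0, 40.0, 60.0]))       # [20.0, 20.0]
```

From there you can chart the deltas or sort them to find frames over a threshold, exactly as described for the spreadsheet.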

Yeah, that's how I've done it, so far. We've recently built an import tool that just does that calculation during the import of all five runs worth of data. Once you have the deltas, you don't need the timestamps anymore.

I was pretty concerned with Fraps frame times too. I think many people are, because it's so new and there is a learning curve. So I started thinking: we all trust Fraps with FPS averages. It is never questioned. If the FPS measurements have proven to be quite accurate, then how can its frame time sampling be questioned? The FPS numbers it reports are actually calculated from the very same data. If the frame time values aren't correct, then the FPS values it reports would be worthless too.

There are still questions about using Fraps for frame times. But the more you dig into it, the more you have to see it's a pretty good tool whose results are very meaningful. I think there is fast-growing interest in frame times now as other sites look at it. One site set out to check whether Fraps was a good tool for this, validating it directly against several game engines that offer the same capability, and it proved to be spot on: http://alienbabeltech.com/main/?p=33060. Fraps' capture point in the rendering process is slightly different from the game engine's, but the data trends were like mirror images when charted. Fraps captures data a few milliseconds off from the game engine, yet the data follows the game engine extremely well. After seeing this I have high confidence in Fraps and frame times, and I have little reason to think that multi-GPU results would be a lot different. I think I will suggest to him that he look into multi-GPU to see how it compares.

Scott, I think you have really started something. The road you have been paving has finally opened, and people want to go down it. I wanted to bring up the ABT test because it complements your work for sure. I had not seen this http://techreport.com/blog/24133/as-the ... 9ae8#metal before I posted above. So I read the Beyond3D thread from Andrew Lauritzen and found it very interesting. There is a lot of information there and so much more to consider. The game simulation clock really can be a tremendous factor in the hitching or jitter. This happens because of the long frame time spikes. This is a variable that makes things a little crazy. If we had tools that simply measured the output in frames displayed, those could look way more consistent than the jitter we see on the screen. I don't think there is any reasonable way to take in all these variables and represent them meaningfully, but the techniques you are using are a fantastic way to show a lack of smoothness, especially since the game engines use frame times as an input that triggers adjustments of the simulation clock, or whatever we may call it.

So the Skyrim video shows the character walking steadily, but then it's like the animation skipped steps and was ahead, then behind. That could be an effect of the game simulation clock/timing being disrupted by those spikes in frame times: instead of staying high, they quickly drop back to short frame times, and the engine adjusts again to try to play catch-up. The frame times you measure from DX calls are used as an input for important synchronization of the game simulation. Those long frame times are more than just a chance of a hitch; they could also trigger a chain reaction. I think this is a very important thing to consider. From Andrew's post, I take his suggestion to be that most of the actual frames output to the LCD are pretty much steady, and most of the hitching is a result of disruption to the internal simulation clocks the game is trying to keep synced. That has to carry some weight.

This brings me back to my point in all this. From this thread I take it that you have some reason to believe your frame time methods with Fraps are lacking when it comes to representing hitching in multi-GPU setups. From this I conclude you have probably tested multi-GPU configurations and have not seen the trends you expected; that the frame times were not representing your experience. Is this correct?

Is it possible that the frame time spikes you measure have a major impact on fluidity by throwing the game's simulation clocks out of whack, but multi-GPU setups have other issues that these tools are not capable of measuring? Perhaps the actual output to the monitor is not steady, or something else occurs as the tasks are manipulated and split long after the DX calls happen and the frame time data is collected. Maybe Fraps isn't capable of seeing this because it is not monitoring it at all?

After reading one of the recent inside the second reviews (I think it was the last one that showed up on Ars), I did some research to see how you could measure what gets outputted to the screen with good accuracy. At first I was thinking of using a high-speed camera and outputting the frame's timestamp as a pattern that could be auto-recognized...

But I think what would work well is:

1) Using a hook similar to FRAPS', add an unobtrusive overlay to the picture. Use the overlay's color to encode the frame's timestamp. Keep a list of all timestamps.

2) Use something like a Blackmagic Decklink, capture frames from the HDMI stream and look at the relevant pixels to determine when/if the frames are displayed.

I figure that by putting a vertical stripe for the overlay you could also detect/address tearing.
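The encoding step of that idea is simple to sketch. Everything below is illustrative, not a real FRAPS or DeckLink API: the idea is just to pack a frame counter into a pixel color at render time, then read it back from frames grabbed off the capture card to see which frames actually reached the display, and when.

```python
# Hypothetical sketch: pack a 24-bit frame counter into one RGB value
# for the overlay stripe, and recover it from a captured frame's pixels.

def encode_stripe(frame_id):
    """Encode a 24-bit frame counter as an (R, G, B) pixel value."""
    return ((frame_id >> 16) & 0xFF, (frame_id >> 8) & 0xFF, frame_id & 0xFF)

def decode_stripe(rgb):
    """Recover the frame counter from a captured overlay pixel."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

# Round trip: what the overlay draws for frame 123456 should decode
# to 123456 on the capture side
assert decode_stripe(encode_stripe(123456)) == 123456
```

In practice you would likely want large solid blocks and coarse color levels rather than single pixels, so the code survives chroma subsampling or scaling in the capture chain; and a vertical stripe, as suggested, would let you spot tearing when the top and bottom of the stripe decode to different frame IDs.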

Wouldn't any overlay program add a huge chance of compatibility issues? What if some minor thing in Nvidia's drivers doesn't like how the program does something and adds 50us to every frame, but only in one particular title? You would have to somehow prove that the overlay you are using does not affect frame times itself, on a per-title (or per-graphics-engine) basis.

The best method looks to be the raw data gathering that Andrew L. is doing...but an expensive method at that.

Damage wrote:Yeah, that's how I've done it, so far. We've recently built an import tool that just does that calculation during the import of all five runs worth of data. Once you have the deltas, you don't need the timestamps anymore.

Any chance of you sharing this wonderful tool?

I've been running some benchmarks with this methodology, and I think the most time-consuming part is just getting all of the frame time data together accurately. I've been running tests on a 3570K from 3.6 to 4.6GHz in 200MHz steps to see the difference it makes. tl;dr: increasing frequency dramatically reduces the number of frames spent above 16.7ms in most games I've tested, notably above 4200MHz.
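Once you have the per-frame deltas, the "frames beyond 16.7 ms" tally is a one-liner style of analysis. A minimal sketch, assuming a plain list of frame times in milliseconds (the function name and threshold default are mine, not from any tool):

```python
# Hypothetical sketch: count frames over a threshold and the total time
# accumulated beyond it, the "time spent beyond 16.7 ms" style of metric.

def time_beyond(frame_times_ms, threshold=16.7):
    """Return (count of frames over threshold, total ms past it)."""
    over = [t for t in frame_times_ms if t > threshold]
    ms_past = sum(t - threshold for t in over)
    return len(over), ms_past

# Three smooth frames and one 50 ms hitch
count, ms_past = time_beyond([16.0, 16.5, 50.0, 16.2])
print(count, round(ms_past, 1))   # 1 33.3
```

Running this per clock step would give a compact number to compare across the 3.6 to 4.6GHz range instead of eyeballing 30K-row spreadsheets.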

Also, for a lot of the games I've been testing I've been able to use in-game replays/demos, which helps match things up very nicely. If only this function were standard, the last two weeks would have been a lot easier for me.