
crookedvulture writes "AMD is bundling a stack of the latest games with graphics cards like its Radeon HD 7950. One might expect the Radeon to perform well in those games, and it does. Sort of. The Radeon posts high FPS numbers, the metric commonly used to measure graphics performance. However, it doesn't feel quite as smooth as the competing Nvidia solution, which actually scores lower on the FPS scale. This comparison of the Radeon HD 7950 and GeForce 660 Ti takes a closer look at individual frame latencies to explain why. Turns out the Radeon suffers from frequent, measurable latency spikes that noticeably disrupt the smoothness of animation without lowering the FPS average substantially. This trait spans multiple games, cards, and operating systems, and it's 'raised some alarms' internally at AMD. Looks like Radeons may have problems with smooth frame delivery in new games despite boasting competitive FPS averages."

It seems like everyone always wants a single measurement to judge how good something is. Graphics cards have FPS, CPUs have GHz, ISPs have MB/s. What's not shown in these single number measurements are things like lag, or overheating problems, or random spikes of instability.

Sigh. Maybe one day we'll learn that every product needs more than a single number to judge how good it is performance-wise.

There's a difference between measuring milliseconds per frame and frames per second. With the former, your minimum resolution is one frame. With the latter, your minimum resolution is one second.

Because of that even the minimum FPS rate doesn't necessarily tell you how jerky or smooth the rendering is - since it's averaged out over one second.

Taken to the extreme, a card could render 119 frames in 1 millisecond and then get stuck on one frame for 999 milliseconds; it would look like a frigging slide show but still show up as 120 FPS. That sort of thing sticks out like a sore thumb on a milliseconds-per-frame graph, which is why measuring milliseconds per frame is better.
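To put numbers on that extreme case, here's a tiny sketch (made-up trace, not data from the article) that scores the same one second of rendering both ways:

```python
# Toy illustration: 119 frames rendered in ~1 ms total, then one frame
# that takes 999 ms. Same trace, two metrics.
frame_times_ms = [1.0 / 119] * 119 + [999.0]

total_seconds = sum(frame_times_ms) / 1000.0
avg_fps = len(frame_times_ms) / total_seconds   # frames / elapsed seconds
worst_frame_ms = max(frame_times_ms)

print(f"Average FPS:      {avg_fps:.0f}")            # 120 -- looks great on paper
print(f"Worst frame time: {worst_frame_ms:.0f} ms")  # 999 ms -- a visible hitch
```

The FPS counter reports a perfect 120 while the frame-time view immediately exposes the near one-second stall.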

The extreme case shouldn't happen in practice, but as the article shows (and from personal experience) the max/high latency stuff does happen. I've personally experienced this on my old ATI and Nvidia cards - my Nvidia 9800GT was slower but smoother than my current Radeon. I went ATI because the Nvidia cards were dying too often (they had a manufacturing/process issue back then), but my next card is probably going to be Nvidia. Even with Guild Wars 1 my ATI card can feel annoyingly "rough" when turning in the game - you see the FPS stay high, but it's rough on the eyes if you get 60 fps by rendering a few frames fast, then a very short pause, then a few frames fast, then a pause, repeat ad nauseam.

On a related note it's good to see that at least some benchmark sites are also starting to take latency/consistency into account for stuff like SSDs. A maximum latency that is too high and occurs too often will result in worse user experience, even if the overall throughput is high, and even for storage/drives.

Well, you have that information from the graphs of the X% of frames under latency Y, but it's a lot of information to report in non-graphic form :) Most sites I've seen using this 'new' measurement display 'extracts' of frame rendering time vs. time in the harder locations: a graph plotting the frame rendering time (Y axis) vs. the frame ID (X axis), which gives you the framerate, total number of frames rendered, average latency, maximum latency, etc., and then finally they provide an analysis of that information.
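Something like this minimal matplotlib sketch, assuming you already have the per-frame render times (the sample data here is made up, with a single spike dropped in the middle):

```python
# Minimal sketch: plot per-frame render time vs. frame ID and pull out the
# summary numbers most sites report. frame_times_ms is hypothetical data.
import matplotlib.pyplot as plt

frame_times_ms = [16.7] * 200 + [55.0] + [16.7] * 200  # one spike mid-run

avg_ms = sum(frame_times_ms) / len(frame_times_ms)
print(f"Frames rendered: {len(frame_times_ms)}")
print(f"Average latency: {avg_ms:.1f} ms (~{1000 / avg_ms:.0f} FPS)")
print(f"Maximum latency: {max(frame_times_ms):.1f} ms")

plt.plot(range(len(frame_times_ms)), frame_times_ms)
plt.xlabel("Frame ID")
plt.ylabel("Render time (ms)")
plt.title("Frame rendering time vs. frame ID")
plt.show()
```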

The problem is that you are talking about the frame rate averaged over a second. The issue is that the high-latency frames may only occur a few times in that second, so the FPS can still look high while the hitch in frame delivery is noticeable to a viewer. That's why the Tech Report is including both the FPS rate and the 99th-percentile frame time: you can see whether the frame rate is good enough as well as what kind of worst-case delay you might see from frame to frame.
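Roughly, that's the difference between the two numbers; a toy sketch with made-up frame times (not the article's data):

```python
# Sketch: average FPS vs. the 99th-percentile frame time for a trace that is
# mostly smooth 60 Hz frames with a few 70 ms hitches mixed in (made-up data).
import numpy as np

frame_times_ms = np.array([16.7] * 95 + [70.0] * 5)

avg_fps = 1000.0 / frame_times_ms.mean()
p99_ms = np.percentile(frame_times_ms, 99)

print(f"Average FPS:                {avg_fps:.1f}")    # ~52 -- looks playable
print(f"99th-percentile frame time: {p99_ms:.1f} ms")  # 70 ms -- the hitching shows up
```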

A person who knows all the different measurements usually doesn't just look at FPS. The single numbers are there for people who don't understand all the other things. It helps them decide on what card is supposed to be better.

Hell, even cars have an MPG sticker on each window; they don't go into how they get that MPG number, and it's often not reflected in real-world tests of the car.

How about next time you RTFA. For the past year or so, that site has been publishing scatter plots showing frame render times in milliseconds; they show time and time again that despite similar FPS rates, performance isn't the same.

Oh, we have learned this. Which is why decent review sites don't just publish a "single number" representation of speed. They post a complete FPS graph [hardocp.com] for similar runs through each game, so you can compare side-by-side

That's not the same as milliseconds per frame. Frames-per-second graphs will be showing numbers that are AVERAGED over one second. So the minimum number still might not reflect how crappy the rendering is.

To take an extreme example, you could have a card that does 120 FPS by drawing 119 frames in 1 millisecond and the last frame in 999 milliseconds.

Then you have a card that does 50 FPS by drawing each and every frame in 20 milliseconds. The 50 FPS card will appear smoother, whereas the 120 FPS card will look like a slide show.

It all depends on the settings that you use. I'm sure that if Scott Wasson turned down a few of the eye candy options just a bit and re-ran the tests he wouldn't see the same issues. So if you don't turn on all the eye candy (I usually don't) you may never see this issue.

Indeed. If AMD doesn't fix their x86 stuff with something that makes me salute, they're going to be out of the next round of upgrades. The FX-8350 is an improvement over the FX-8150, enough for me to upgrade to it. However, it's still a ways behind where we should be. That six-core Intel processor with hyper-threading is calling out to me. Given that I have already purchased the necessary liquid cooling system to keep the FX processor happy, switching over to that i7 wouldn't be much of

Anyone that actually knows anything about the GPU industry knows that both AMD and NVIDIA graphics suffer from these latency spikes, but not with all their SKUs. NVIDIA's 660 Ti does well in this case, but their 670 and 680 have more latency spikes than the competing AMD cards do. The 7850 demonstrated here is an anomaly for AMD; none of their other cards do this. Look at past reviews from Techreport and you will see what I mean.

However, if you actually read the Tech Report's review of the GTX 670, you will find they say the exact opposite. The GTX 670 has ridiculously low latency compared to the Radeon 6990 and just a bit lower than the 7950 and 7990.

Sorry, but ATI suffers from this far more than Nvidia, as apparently it is something Nvidia actively tries to improve:

The article didn't mention power settings. I'm quite skeptical of all the new tech that overclocks on demand and then clocks down when it gets too hot (or too idle). They should definitely re-run this test with the clock pinned at the nominal frequency (if there is such a thing at all).

"This trait spans multiple games, cards, and operating systems, "First of all the article only tests 2 cards accross Win7 and Win8. Considering that Win8 is basically just Win7 SP2, it's hardly fair to make that statement. Micro-stuttering an issue that mainly affects multi-GPU cards. Both Nvidia and ATI have had issues with this in their SLI and Crossfire cards. You can read more about it here:http://hardforum.com/showthread.php?t=1317582 [hardforum.com]

As far as 3D graphics is concerned, there are noticeable differences between Win7 and Win8 - the former is WDDM 1.1, the latter is WDDM 1.2, which covers quite a bit of new stuff (basically all the new features in Direct3D 11.1).

"This trait spans multiple games, cards, and operating systems, "First of all the article only tests 2 cards accross Win7 and Win8. Considering that Win8 is basically just Win7 SP2, it's hardly fair to make that statement. Micro-stuttering an issue that mainly affects multi-GPU cards. Both Nvidia and ATI have had issues with this in their SLI and Crossfire cards. You can read more about it here:http://hardforum.com/showthread.php?t=1317582 [hardforum.com]

But if you avoid SLI then nvidia cards are fine, only ATI really suffers from this issue badly. See the link I posted above.

Games on the Radeon 6850 would generally perform fine but seem 'jittery'. I usually didn't notice it, but sometimes it was quite obvious that you were losing frames here and there even in non-complex situations. Of course you wouldn't notice a single one, but when it's happening every 2-3 seconds it starts being noticeable when you're playing for hours. Some games were much worse than others, though I couldn't say it was directly related to how shiny the game looked. It was never big enough a deal that I

A few months ago I decided to do a complete replay of the entire Mass Effect trilogy with my 6900 series card, and I am seeing occasional lag that didn't use to be there. I also revisited Skyrim when Dawnguard came out, and I'm seeing it there too. This machine didn't use to do this, and since I can't find anything else running that could cause the CPU to spike, I have been working on the assumption that some driver update (perhaps as far back as six months ago) is to blame.

I noticed that about 2 months ago, after upgrading from a Radeon 5850 to a 7850. Since my CPU is an old 2.4GHz Core Duo I didn't expect that much of a performance boost, but I expected a noticeable change. When comparing the Haven benchmark results with the previous card, the higher frame rate went up as expected (15-25 fps, I don't remember exactly now), but the lower frame rate went down too, from 18 fps to 6 fps on the new card. I tried several driver revisions, 10.10 being the last one tested, with the same behavior on the 7

- He is using different thresholds in different games.
- He is using Fraps to measure it, but Fraps only records part of the process (see the sketch below).
- He is testing with vsync off, which is weird considering he is trying to measure smoothness.
- His average FPS numbers for the AMD card are much lower than in reviews from other sites.
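On the Fraps point: it gives you one timestamp per frame at a single point in the pipeline, so everything downstream of that hook is invisible to it. Here's a rough sketch of what you can reconstruct from that kind of log, assuming the usual two-column layout of a frame index plus a cumulative timestamp in milliseconds (the filename is hypothetical):

```python
# Rough sketch: derive per-frame latencies from a Fraps-style frametimes log.
# Assumes a CSV with a header row and two columns: frame index and a
# cumulative timestamp in milliseconds. "frametimes.csv" is a placeholder name.
import csv

timestamps_ms = []
with open("frametimes.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    for row in reader:
        timestamps_ms.append(float(row[1]))

# Per-frame latency is the delta between consecutive cumulative timestamps,
# measured at the one point in the pipeline where the tool hooks in.
latencies_ms = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

print(f"Frames:             {len(latencies_ms)}")
print(f"Average frame time: {sum(latencies_ms) / len(latencies_ms):.2f} ms")
print(f"Worst frame time:   {max(latencies_ms):.2f} ms")
```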

Are his tests always reproducible (do the spikes always occur at the same spot)? Why such a small selection of games?

Well, I had seen this problem for a very, very long time with the ATIs. First with the Rage 128 Pro (with an Intel CPU) and later with the 4850 (with an AMD CPU). That's why during the last upgrade I went with nVidia instead.

Though of course I have never thought about it as a problem(*) nor investigated it thoroughly: I have simply seen that unless one keeps graphics at low settings, ATIs tend to occasionally drop to 10-20 FPS from the usual 40-60. Last game - and very dated at that - I have tested with ATI 4850 w

For years I was part of the cult that would invest thousands upgrading or building my own PC game box, buying the best video card, researching the best CPU and RAM that could be overclocked, even matching the exact specs of a system benchmarked on an enthusiast site.

After many hours of building, tweaking, and break-in, I would finally get hold of the most popular game at the time and try to run it with all the graphics features cranked to the max, because every website I reviewed said I could.

Agreed, but in this case and many others related to computer parts it isn't a "misguided notion of brand loyalty" so much as an awareness of which brands are able to consistently produce quality goods vs. brands that produce merely acceptable goods.

I wouldn't ever consider an HP machine and it isn't because I don't like the brand. It's because they die within months. Sony on the other hand is a conniving cunt but still manages to produce consistently acceptable hardware.

The company that does is called none. As in, does not exist. Not even apple, not google, not Nvidia, not AMD.

Are people really so idiotic that they can't understand that whether your physical product comes from a good batch at the manufacturer has a significant correlation with whether it falls apart, and has nothing to do with the brand whatsoever?

Well, unless you are buying an old device, there is no way to know whether it will be crap or not, so using past experience with the company is quite valid.

Let's say I bought a device and it failed very quickly (or was just crap). The company says they've fixed the problems and please buy our new model, it will work just fine. What if it doesn't? I'll wait a few years, and if the company produces good quality products during that time I may buy from them again.

Unless the item is really cheap I cannot afford to buy because "maybe they got better" or to have more data points (hmm, my hard drive failed too soon - better buy 20 more of the same brand/model to see if it was a fluke or not). Also, if another brand also had failures at about the same time it would mean that I would be looking for a third brand (because who knows, maybe both failed products were made in the same factory, just branded differently).

While the past experience may really no longer be relevant (though as you said, some companies consistently suck), it would still feel kinda wrong to buy from the same company that made the crappy product. I don't really want to step on the same rake a second time, especially when buying a brand new device (if I buy a device some time after its release the reviews will be more accurate, or at least a high failure rate would show up, but when buying something just released I can only base it on past experience).

The article does not justify its arguments. We do not know how it should look, only the way it does in the various pictures. Seems to me that the ATI method, while offering "higher sharpness", will suffer from temporal aliasing artifacts much like you see on distant surfaces when using high resolution textures in Minecraft.

That is ancient and irrelevant after the GeForce 7 era, as it was fixed for those cards. Today both chipsets give very similar results. The problem lies in driver quality. Radeon drivers are riddled with rendering bugs when used with anything but the 'latest' games they were 'optimized' (patched up) to work with. They're also BSOD trigger-happy. While not perfect, nvidia's drivers behave far better, to the point where I'd rather have a previous generation nvidia card in my machine than the latest ati/

I always bought nVidia until the 7950 / 8800 / 9800 dry solder issues. After that many RMAs and arguments with wholesalers and retailers, I bought AMD out of retaliation, and this 5870 has been rock solid. I'll be going back to nVidia for my next refresh, but for me, nVidia has not 'always just worked'.

In fact, I'd wager no one brand 'just works' these days, since extreme capitalism is in fashion and everyone shops around for manufacturing plants and methods.

This is the most accurate post here so far. You need to make sure you are getting a good card manufacturer, more than worrying about the chipset designer. The chipset is only as good as the card it's bundled with.

Except the G80/G92 disaster *was* a chipset issue. Nvidia used lead-free balls between the die and interposer with the same soft underfill they used for leaded balls before, resulting in thermal stresses causing joint fractures. Card manufacturer doesn't play a role in it other than "more extreme thermal loads/cycles accelerate the deterioration", so cards tuned to run hot/silent died quicker.

So, one incident, created by the fact that no one is allowed to use lead anymore for environmental reasons. People keep milking this, and sure, it WAS a problem, but compared to the number of issues with AMD cards in terms of both hardware and software it's nothing.

I'd argue that nVidia and AMD aren't brands in the traditional sense.

The majority of video card problems are caused by card manufacturers, rather than the chipset vendors. I've had nVidia cards that weren't worth the box they came in; I've had my share of Radeon-based cards that were complete crap. Nine times out of ten, though, whining about nVidia or AMD and video card problems is something along the lines of whining about Intel because your computer's Samsung hard drive failed. You should probably

Damn, I just bought an XFX 7850 last week. Performance seems good (though there was a blue screen when I first installed it, but I hadn't installed any drivers yet and it's been stable since). Should I expect to be replacing it in a few months?

I have to wonder how much of it is also caused by motherboard/chipset issues too. I have a card in my current computer that was nothing but problems on two computers using AMD chipsets. Put it into an Intel system and it's been rock solid now for 10 months. Funny thing is that it's a Radeon.

I just went over to the Radeon because of the multimonitor support given off of one card. I have 5 monitors attached to my current video card and I like it that way. Before then I bought nVidia because they worked so well without issues. I have had multiple issues from radeon since purchasing it, but oh well I finally got it to work.

Displayport is actually a different protocol than DVI (and HDMI). Most Displayport devices know how to "talk" DVI so cheap converters that just remap the pins work for most applications and the device realizes it's talking to something that's DVI and adjusts accordingly. But for AMD's Eyefinity to work the device must talk Displayport, so unless your displays are Displayport you need an active adapter that converts Displayport to DVI (or VGA, or whatever). Yeah, I didn't know that either so I have an ext

I just went over to the Radeon because of the multimonitor support given off of one card. I have 5 monitors attached to my current video card and I like it that way. Before then I bought nVidia because they worked so well without issues. I have had multiple issues from radeon since purchasing it, but oh well I finally got it to work.

Completely agree with this. The multimonitor support on Radeon is much, much better than nVidia and that's why I moved over as well. I wouldn't say I've had any big "issues" but ATI's driver support (at least on Linux, using the Catalyst drivers) has been a little bit disappointing - I had to stay on an old version of X.org for a while because of the amd64 Xv issue forcing me to use an older driver for example.

I will say, this is why I used arch bang over arch linux because it had automatic x driver detection which works pretty well (no problem playing games and watching movies). I spent two weekends debugging arch linux trying to get the video drivers to work decently and finally gave up.

I just went over to the Radeon because of the multimonitor support given off of one card. I have 5 monitors attached to my current video card and I like it that way. Before then I bought nVidia because they worked so well without issues. I have had multiple issues from radeon since purchasing it, but oh well I finally got it to work.

5 monitors is a bit of a weird setup for gaming though. I suppose there may be some games that use that many, but do they really require a super high frame rate and no latency spikes like we are discussing here?

Personally I am an online FPS nut and don't think I would find 5 monitors that useful, since most games do not let you push the viewing angle to 360 degrees. If games do start allowing a 180-degree viewing angle I might go up to 3 monitors, but I can't see how 5 is useful.

I don't use all 5 monitors for gaming. I use only one and alt tab between the game and whatever else I'm working on. I honestly don't game that often though and when I do I usually work while I game (not playing FPS but MMO's etc).

If you're at the low end (R4850, GT240, etc.), AMD's stuff is pretty useless. They just crash a lot on everything except the biggest titles. It drives me nuts, because the grandparent is right, AMD has much, much better image quality. I've heard their high end ($300+) doesn't have this issue, but I'm old and broke (family and such) so that ain't happening.

Indeed. It amazes me how people buy the bottom of the barrel model, from the cheapest manufacturer they can find, then complain that their problems must be endemic to all of the chipset manufacturer's designs.

I buy the top of the line, or one spot away from the top of the line, and these kinds of problems do not occur for me. My 7970 is sitting pretty right now, barely ticking over. My point being: invest a little money in your video cards, people! This is freaking /. of all sites; this should be second nature.

My 8800 SLI setup (bought new as soon as it came out) lasted for years. I only replaced them when they died, after about 3 years of use. Replaced with a pair of GTS 250s... which is essentially just the 8800 on an updated PCB with a bit more RAM. At the time I got them, they were pretty cheap for a quick fix to a dead video setup. Nvidia really got their money's worth out of that design.

Still using those GTS 250s too, nearly 3 or 4 (lost track) years later. (But then I don't need to run 34300x123904 resolution at a million FPS.)

At that time NVidia had fallen behind AMD in terms of throughput per die area and was forced to push up clock rates to compensate, causing heat and yield problems. Those were dark days for nVidia, now a mostly faded painful memory, but AMD still gets more graphics throughput out of the same die area.

You're obviously one of the lucky nVidia customers who avoided all the shitty laptop chipsets that cracked off the board because of cheap solder [nvidiadefect.com]. Same issue as the 7/8/9xxx desktop series, except you had to basically turf an otherwise perfectly good machine. nVidia shit the bed on both sides in that debacle which spanned several years.

Any serious gamer uses nVidia. Radeon has been behind for years and I am sure this issue isn't going to help.

Yup. Despite some slashdot mod trying to conduct an anti-nvidia reign of terror this is pretty much spot on (every post not slagging off nvidia in this thread is down modded at present, anything pro-ati is +5 insightful for no good reason).

I used to buy ATI due to the cheaper cost but since I moved to nvidia for a GTX6800 I have never looked back. Since then I have bought a couple of GTX280's (for different PC's) and more recently a GTX480 which is still going strong. I am sure nvidia have produced some bad

If you did, as you say, read the article, then you would have seen that this issue happened in both Windows 8 AND Windows 7. In fact, Windows 8 performance was typically better, with less micro-stuttering, than the Windows 7 performance plots. So, to put it mildly, you're speaking out of your ass.

ati pulled that shit with quake3 and their radeon 9000 series, and nvidia pulled that with 3dmark and their geforce 5/fx line. They were outed both times. This doesn't happen anymore. Now, they do optimize for specific applications and that is bad because optimizing for corner cases has a history of breaking basic functionality elsewhere.