OCing the X2 is frustrating. With Intel I at least know when the northbridge will change to ratios I can't normally see; with the X2 I am never confident what to expect. Except for one thing: when I reach a certain FSB and then have to slow the memory down, I most definitely take a performance hit. If, say, I turn it down at 260MHz FSB, I may not fully recover performance-wise until I hit 280MHz. I hate taking the hit, even more so if I have to take it twice. Over the weekend I went from FSB 280 with "DDR2-667" (functionally DDR2-840) to FSB 240 with "DDR2-800", which also gave me functional DDR2-840. I thought the numbers might have even improved, but I did rotten record keeping, and the log suggested otherwise. Then that WU abended early, and the next one is taking 30-minute checkpoints. I don't know now if it's another 1760-pointer or one of those awful 1538-pointers.
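For anyone puzzled how two different BIOS settings both land on DDR2-840: a rough sketch of the usual AM2 memory-divider model. The divider is picked from the CPU's *default* clock (multiplier x 200MHz) against the BIOS DDR2 setting, then the actual DRAM clock scales with the overclocked core clock. The 10.5x multiplier below is my guess to make the numbers work; the post doesn't say what chip it is.

```python
import math

def ddr2_effective(fsb, mult, bios_setting):
    """Approximate AM2 behaviour: divider fixed from the stock core
    clock (mult * 200) and the BIOS DDR2 setting, actual DRAM clock
    = overclocked core clock / divider."""
    divider = math.ceil(mult * 200 / (bios_setting / 2))
    dram_mhz = fsb * mult / divider
    return 2 * dram_mhz  # DDR2: two transfers per clock

# Assuming a 10.5x multiplier, both settings land on DDR2-840:
print(round(ddr2_effective(280, 10.5, 667)))  # 840
print(round(ddr2_effective(240, 10.5, 800)))  # 840
```

Which would explain why the two configs benched so close together: the memory ends up at the same effective speed either way.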

What benchmark gives good results stressing both CPU and memory? I would say memory is a bottleneck for this processor, perhaps due to the small cache? But the 1MB-cache parts on paper don't seem significantly better, if at all. All the more reason for AMD not to invest in all those transistors.
I will probably spring DDR2-1066 on my next build.

Thoughts?

_________________People who put money and political ideology ahead of truth and ethics are neither patriots nor human beings.

Why not use SiSoftware Sandra Lite? It's free and can do all kinds of synthetic benchmarking, including memory. Also, from what I understand, synthetic memory performance is generally a much poorer predictor of real-world performance than synthetic CPU performance (i.e., a faster FSB with slower memory may still beat a slower FSB with faster memory).

I check these ratings over at Tom's Hardware. They are useful, I just kinda wish for one test or score that covers everything the way Prime95 does.

Well, there isn't any single benchmark -- that's why Tom's Hardware uses around 30 or so. That's the thing: different applications react differently to various combinations of CPU, RAM, and even HDD speeds. So even if you made a synthetic benchmark that was a blend of various tasks, how would you weight the components to get a final score?

The good news about Sandra Lite is that it can do all the synthetic benchmarks you need (in that way it is like Prime95 with its different FFT sizes to stress CPU or RAM). In the end it will be up to you to decide which are the most important. I was just warning you that for most applications, processor speed far outweighs memory speed/bandwidth, so the typical approach is to first max out CPU speed and then get RAM as fast as possible without backing off CPU speed.

If you really want a single-score benchmark, pick an application you use a lot and create a repeatable task with it to serve as your benchmark. After all, what really matters is not making the hypothetically fastest PC, but the one that is fastest in how you actually use it.

After all, what really matters is not making the hypothetically fastest PC, but the one that is fastest in how you actually use it.

Exactly, but there is no folding benchmark. I wish there was.

Can't you try a setting, start the application, run it overnight, and see how far it gets? Surely there's some metric you can use here.

Yes and no. When folding, not all work units are the same size. They take checkpoints, 100 per work unit, at a regular interval; I use those on occasion, but overall work units are on a deadline, a rather short 3 days or so. This prevents me from establishing a long-term baseline where I start at a stock 200MHz FSB, keep incrementing by 5MHz until I reach a point where I must slow the memory by changing what it appears to be in the BIOS (DDR2-800 to DDR2-667, to alter the FSB:DRAM ratio), and then continue increasing the FSB. I can do that a little, but not a whole lot. That is the origin of my inquiry here.
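Since each checkpoint is 1% of a WU, the checkpoint interval itself is a usable metric within a single project. A minimal sketch (the 30-minute interval is just the figure mentioned earlier in the thread, used as an example):

```python
def frames_per_day(checkpoint_seconds):
    """Each checkpoint ("frame") is 1% of a WU, 100 per unit, so
    seconds per checkpoint converts directly to a daily rate."""
    return 86400 / checkpoint_seconds

# Example: 30-minute checkpoints at one OC setting
rate = frames_per_day(30 * 60)
print(f"{rate:.1f} frames/day -> {rate / 100:.2f} WUs/day")  # 48.0 frames/day -> 0.48 WUs/day
```

Compare that rate across OC settings while the client is chewing on WUs from the same project, and you have a rough folding benchmark without waiting for whole units to finish.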

Corrections.


Last edited by aristide1 on Mon Jan 14, 2008 5:57 pm, edited 1 time in total.

Note that you can get a relative benchmark from the fahinfo website. I find that my times, within a project, are stable enough to compare the results of an OC setting. And I seem only to get WUs from a relatively stable population; the last 6 have all come from project 2605, for example.
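The within-a-project comparison above can be done with nothing fancier than averaging frame times per (project, setting) pair. A sketch, where every number and setting label is made up for illustration:

```python
from collections import defaultdict

# Hypothetical frame-time log: (project, oc_setting_label, sec_per_frame)
samples = [
    (2605, "FSB 280 / DDR2-667", 1850),
    (2605, "FSB 280 / DDR2-667", 1870),
    (2605, "FSB 240 / DDR2-800", 1800),
    (2605, "FSB 240 / DDR2-800", 1790),
]

# Accumulate (total seconds, frame count) per project + OC setting
totals = defaultdict(lambda: [0, 0])
for project, setting, sec in samples:
    totals[(project, setting)][0] += sec
    totals[(project, setting)][1] += 1

for (project, setting), (total, n) in sorted(totals.items()):
    print(f"project {project}, {setting}: {total / n:.0f} s/frame avg")
```

The key point is only ever comparing rows with the same project number; mixing projects would just reintroduce the unequal-WU problem.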

I have a feeling I remember reading somewhere that a Folding@home benchmark had been created by someone. The idea was that one WU had been 'captured' (can't think of a better term) and that a little script had been written to run that WU for a set period, say the first 10 checkpoints, after which a standard result would be given. Only one WU was ever used, so results could be compared.
One problem is that this was at least 12 months ago and I can't seem to find a reference to it again. Even if I could locate it, it would probably not be useful for SMP benchmarks due to its age.
Also, all WUs are definitely not created equal, and different CPU architectures handle different WUs, well, differently - some are CPU-bound, others more dependent on L2 cache. This would of course not matter too much for comparisons based on one person + one CPU + different levels of OC, though.
