Reader Matt Richards wrote in this morning about a nasty anomaly in Microsoft Word that pushes CPU usage to 100 percent "if the background spell checking option in the Works 2000 word processor is selected." Posted in news: http://www.silentpcreview.com/modules.p ... =0&thold=0

This is certainly bad, but an even worse problem is that the person who reported it apparently didn't "stress test" his system, as I bet many other quiet-PC enthusiasts don't either. If a system cannot survive 100% CPU usage for long, that's a hardware issue, not a software one! Not to mention, thermal throttling kicks in at around 80C, so it was probably not even running at 2 GHz anymore...

I have always been a proponent of worst-case testing using programs from the CPUburn suite: BURNP6.EXE for the Pentium 4 and BURNK7.EXE for the Athlon. These programs push CPU power consumption beyond even the manufacturer's "expected max" values, producing higher temperatures than any "normal" software can attain. In my experience, BURNP6 out-heats Prime95, Hot CPU Tester, and other programs by a wide margin!
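The basic idea behind these burn programs can be sketched in a few lines. The toy Python version below (my own illustration, not part of the CPUburn suite) just pegs every logical core with floating-point busy-work; hand-tuned assembly like BURNP6 draws considerably more power per core than any interpreted loop, so treat this only as a demonstration of the "load all cores for a fixed duration" technique:

```python
import multiprocessing
import time

def _spin(stop_time):
    """Busy-loop on floating-point math until the deadline passes."""
    x = 1.0001
    while time.time() < stop_time:
        for _ in range(10000):
            x = x * x % 1.7  # keep the FPU busy; value stays bounded
    return x

def stress_all_cores(seconds):
    """Run one spinning worker per logical core for the given duration."""
    stop = time.time() + seconds
    n = multiprocessing.cpu_count()
    with multiprocessing.Pool(n) as pool:
        pool.map(_spin, [stop] * n)

if __name__ == "__main__":
    stress_all_cores(1)  # short demo; a real soak test runs for hours
```

A real worst-case run would use a duration of hours, with temperature monitoring alongside, which is exactly what the dedicated burn utilities are built for.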

My P4/2.26B @ 2.5 GHz quiet system (system pic; more pics throughout this thread) reaches a max of 66.0C CPU, with no errors and no thermal throttling, after running BURNP6.EXE for a long time. At 2.55 GHz it would actually encounter errors past 65C -- yet every other test would pass with flying colors, even MemTest86! Now that I have a known rock-stable config, I don't even reach 60C with most normal 100%-CPU apps -- the worst was 64C after a long UT2003 game with 20+ windows open in the background. Idle temp is 37C.

My point is, why run at X GHz if you cannot actually use 100% of it, for good or evil? Rigorous stability testing in worst-case scenarios is mandatory for ultra-quiet system builders--it's already mandatory for overclockers, anyway.

Last edited by LeoV on Sun Oct 20, 2002 11:19 am, edited 1 time in total.

Marketing. It's the same reason they sell hard drives based on a gigabyte being 1000*1000*1000 bytes instead of the true 1024*1024*1024 (effectively telling you the hard drive is about 7.4 percent bigger than it really is), and why some vendors sell overclocked chips. The answer is (1) because they can, (2) because people like buying things with bigger numbers on them, and (3) because most consumers don't know they're getting something that doesn't perform as well as the ads imply.
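The size of that marketing gap is easy to check yourself (a quick illustration of the arithmetic above, not from the original post):

```python
# Decimal "marketing" gigabyte vs. binary gigabyte
decimal_gb = 1000 ** 3   # 1,000,000,000 bytes (what the box says)
binary_gb = 1024 ** 3    # 1,073,741,824 bytes (what the OS counts)

overstatement = (binary_gb / decimal_gb - 1) * 100
print(f"A binary GB is about {overstatement:.1f}% larger")  # about 7.4% larger

# So an "80 GB" drive holds 80 * 10**9 bytes, which the OS reports as:
reported = 80 * decimal_gb / binary_gb
print(f"An 80 GB drive shows up as about {reported:.1f} GB")  # about 74.5 GB
```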

I wasn't talking about vendors' chips. I was talking about people building their own quiet systems, but not testing them properly. Any OEM computer you buy will easily survive even a run of BURN?.EXE because they are designed quite carefully. However, many people who build their systems to push the edge--this includes overclockers as well as quiet PC enthusiasts--aren't aware of the need for very serious testing. As a result, the system may work perfectly 99% of the time, but it's actually a stability time bomb.

Personally, I've gone from moderate overclocking to quieting down my computer a great deal, and during that journey I've learned that (at least in overclocking circles) stability testing is a must. As I've ventured into this relatively new area of computing (for me), I have always stability tested things (as far as I could with the equipment I've got), but some people take shortcuts. The guy who reported this problem may or may not have known about the risk. You are, however, absolutely correct that you should test things out in a worst-case scenario before putting them into everyday use.

It's better to know potential problems beforehand, rather than finding out afterwards, when it may be too late.

This brings up a basic issue: does every home PC builder need to be concerned with stability under long-term 100% CPU usage when, 99.9% of the time, the longest it will stay at 100% is less than 5 minutes? The idea of stress testing for 24/7 operation (or whatever) is exactly what Intel recommends and what most engineering companies strive for: performance under extreme loads. But is this really relevant for many home users? It probably is not for Matt Richards.

I can tell you right now that my quietest PC, which is really truly virtually inaudible, will not survive a 100% CPU test for more than maybe half an hour. Do I care? Absolutely not. I know what it can do, and I know exactly how I use it; it is perfect for how I use it. (system mirroring, and second backup machine for occasional use). Why should I be worried about what it does at 100% 24/7? Let the server makers worry about that.

Now that's a personal POV about that specific PC of mine, not a statement about all PCs.

What I'd suggest generally, though, is that the home PC builder/modder does NOT have to saddle him/herself with this 24/7 mentality -- rather, the machine should be made stable and useful for the way it will actually be used. If you go in for heavy-duty 3D games in 6-hour stretches, obviously you need a rather different machine than someone who web surfs, emails, & does office work. Let the system integrators worry about making general-purpose machines that can survive any application; we're making ours for ourselves.

For example, I would not sacrifice 3 or 5 or 8 dBA for the extra airflow needed to make a system 24/7 stable when it's never turned on for longer than a couple of hours.

Mike, I don't suggest that people should sacrifice quietness for stability. However, if you don't intend to use your CPU at 100%, then why not clock it down? When you overclock a 1.6A Northwood to 2 GHz, the implication is that you actually need the extra speed, which means that at some point you expect to run the CPU at 100%. If that's not the case, then *IMHO* you'd have greater peace of mind knowing it's running safely at 1.6 GHz than with potential errors and nasty surprises at 2 GHz. It's pretty clear that Matt Richards experienced a very unpleasant surprise, as I would have in his place.

I'm not criticizing you, but I think we have differing philosophies on this subject. I have found Murphy's law to hold strong for PCs: anything that could possibly go wrong eventually does. I've had enough nasty surprises in my past PC experiences for a lifetime's worth. Because of this, I'm willing to give up a few MHz for rock stable, error-free, abuse-proof operation.

I may torture my PC much more than the average Joe, but many people (or perhaps their friends/children) may suddenly find the need to run a CPU-intensive program which is sensitive to errors. IMHO they shouldn't have to think about whether a program is "safe" to run on their machine! This is especially true for programs dealing with sensitive information.

Once you know what the best testers are (CPUburn, MemTest86, and 3DMark2k are among the most bloodthirsty), it's not hard to test -- and I bet many people would trade a few MHz for no nasty surprises later.
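To give a feel for what a memory tester actually does: the core of tools like MemTest86 is writing known bit patterns through memory and reading them back. The toy Python version below (my own sketch, names hypothetical) shows the classic "walking ones" pattern on a small buffer; the real tools run many patterns over all of physical RAM, outside the OS, which a user-space script cannot do:

```python
def walking_ones_test(num_bytes=1 << 20):
    """Write a walking-ones pattern into a buffer and verify it reads back.

    A toy version of one pass of a real memory tester: each of the 8
    single-bit patterns (0x01, 0x02, ... 0x80) is written to every byte,
    then read back and compared.
    """
    buf = bytearray(num_bytes)
    for p in (1 << i for i in range(8)):
        for i in range(num_bytes):
            buf[i] = p
        for i in range(num_bytes):
            if buf[i] != p:
                return False  # mismatch: a stored bit flipped
    return True

if __name__ == "__main__":
    print("pass" if walking_ones_test() else "FAIL")
```

On a marginally overclocked system, it's exactly this kind of write-then-verify loop that catches bit errors which "normal" software silently absorbs or crashes on.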

It definitely is an interesting topic, Mike, and there's more than one way of looking at it.

Like you say, it's all about personal needs, and my needs include keeping a safety margin in place in case something like this, or similar, should occur, plus a margin for flexibility.
By flexibility I mean multi-purpose use.
I'll take my current computer as an example. I use it (nowadays) mainly for web browsing, filesharing, listening to music, maybe writing the occasional document in Word, etc. But from time to time I get the urge to play computer games, a lot. It can range from a measly 30 minutes to a hefty five- or six-hour session. Though I've put most of my gaming aside for now, I used to be quite a Counter-Strike enthusiast -- spending ridiculous amounts of time honing my skills -- and from time to time I revisit my former 'stomping ground', and the games I played before Counter-Strike.

As you may see, my computer needs present a possible double-edged sword: on the one hand I like having room to play around with, and on the other there's the increasing demand for extremely low sound levels while I do 'play around'.

It'll be interesting to see what the end result will be. For now, though, I'd like to keep that safety margin in place.

When you overclock a 1.6A Northwood to 2 GHz, the implication is that you actually need the extra speed

Did Matt do that? I don't think he mentioned that; just that he has a P4-2G.

I did, certainly, in that article, and discussed stability issues at higher than 2G -- I think I got it up to 2.3 at one point, but had to raise the Vcore for Acrobat Distiller (an amazingly useful stability gauge for me) to work error-free. That system is not the very quiet one I was referring to; the very quiet one is a P3.

It will be interesting to ask Matt whether his system is oc'd.

Regarding criticism -- are you kidding? No one is above that -- criticize away!

I'm not sure we're in disagreement. I want my systems to be totally stable, too, and when I build systems for friends, I make sure they'll be stable for them.

What I have done in some cases is to build in a 12V/7V or 12V/5V switch for the CPU fan (usually a Panaflo). Generally, with a good HS & good case airflow design, the 12V Panaflo provides enough cooling for 100% 24/7. In one case, I made a switch that toggles both a case fan and the CPU fan. There's your split personality for you, Red.

Reader Matt Richards wrote in this morning about a nasty anomaly in Microsoft Word that pushes CPU usage to 100 percent "if the background spell checking option in the Works 2000 word processor is selected." Posted in news: http://www.silentpcreview.com/modules.p ... =0&thold=0

Dug out an ancient thread here, hehe...

I can confirm that this bug is still present in Word 2002 SP-2, if the background spell checking is on. Seems M$ don't care...
