Tuesday, July 10, 2007

Since I won't be using a Mac for desktop publishing and web design...

I tore apart my IBM Intellistation M-Pro 6850 last week. There was method to my madness: I ripped out the two 1.7 GHz Xeon DP processors and installed two 2.2 GHz Xeon DP processors as replacements. I then added four more RIMMs of 256 MB PC800-45 memory, for a grand total of 2.0 GB. I also put in my old ATI TV Wonder PCI tuner card, to use as a DVR and video input device for editing. So far, so good, so it was time to turn the big black Monolith (Stanley Kubrick would probably agree) back on and watch the show:

No problem getting to the BIOS screen, so I enabled hyperthreading and let it reboot again. Voilà! XP Pro reported that it could make good use of the two hyperthreaded processors:

Now, you'll often hear that hyperthreading doesn't really offer an advantage for non-multithreaded apps. That's true, up to a point. If you run applications that hog a processor, or do a lot of multitasking on your machine, multi-CPU and/or hyperthreaded processors offer plenty of extra headroom: the resource hog can run flat out at 100% while background tasks get shuffled off to the additional processors. I learned that a long time ago with my first Tyan dual Pentium 166 running Windows NT 4.0 Workstation.

If you're like me and run multithreaded software such as Adobe Creative Suite or most flavors of AutoCAD, you will definitely appreciate the extra horsepower. The same goes for the more popular music and video editing applications - they can be quite compute-intensive. So while my idea to use a PowerMac G4 for graphics and various media was a good one, my trials and tribulations with that platform pushed me toward taking the easy route and loading more crunching power into my trusty monster of an IBM.

Just for giggles, I once again loaded BOINC and pointed it toward SETI@Home. This has been my benchmark of choice for checking the performance and durability of my various multi-CPU computers over the years. With the new processor and memory upgrades, each logical CPU can spit out one SETI Work Unit every 12 hours or so. That's not bleeding-edge fast compared to newer processors from Intel and AMD, but remember that little Task Manager graph I pictured earlier? This IBM Intellistation is relying on Symmetric Multi-Processing (SMP) to crank out 4 of those SETI Work Units every 12 hours, or 8 of them per day from just one computer. That ain't half bad.
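For the curious, that throughput is easy to sanity-check. Here's a back-of-envelope sketch in Python - the 12-hour figure is just the rough average from my own runs, not an official SETI@Home number:

```python
# Back-of-envelope check on the SETI@Home numbers above:
# two hyperthreaded Xeons present four logical CPUs to XP Pro,
# and each logical CPU finishes roughly one work unit per 12 hours.
logical_cpus = 2 * 2              # 2 physical Xeons x 2 threads each
hours_per_work_unit = 12          # rough average per logical CPU
work_units_per_day = logical_cpus * (24 // hours_per_work_unit)
print(work_units_per_day)         # 8
```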

You're going to see a lot more of the multi-CPU, hyperthreaded, Core 2, Dual Core, Core Duo, Quad Core, whatever processor labels in the near future. Why? Moore's Law has started to fall by the wayside as engineers struggle to shrink die sizes and cram more (Moore?) transistor junctions into a given square centimeter of silicon wafer. Barring any revolutionary breakthroughs in photolithography, clock speed has leveled off, pretty much reaching a plateau at the 65-nanometer process node. Efforts are underway to bring that down to 30 nanometers, but for now, CPU speeds have hit a speed bump near the 4 GHz mark.

Nor does the clock speed of a processor tell the whole story of how fast a given CPU will perform tasks. Cache and motherboard memory speeds (in the x86 world, at least) usually lag behind the processor's internal speed, so the core spends precious clock cycles waiting to retrieve data from cache and main memory. Hyperthreading was a neat trick that used the core's down time - those cycles spent waiting for external data - to go ahead and process a second thread, with the timing arranged so the two threads wouldn't interfere with each other as they hit the cache. As I mentioned before, having a second CPU on either the chip or the motherboard also lets multithreaded apps keep crunching efficiently, or at least gives the first processor's task some extra headroom so it can run at 100%.
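That latency-hiding idea is easy to see in a toy model. The sketch below is purely hypothetical and of my own devising - it's nothing like how a real Xeon schedules instructions - but it captures the gist: each thread alternates one compute cycle with one memory-stall cycle, and a single "hyperthreaded" core runs whichever thread is ready, so one thread's stall overlaps the other's compute:

```python
def simulate(n_threads, work=50, stall=1):
    """Toy latency-hiding model: each thread needs `work` compute
    cycles, with `stall` wait cycles after each one.  The core runs
    at most one ready thread per cycle; a stalled thread just waits."""
    remaining = [work] * n_threads   # compute cycles left per thread
    waiting = [0] * n_threads        # stall cycles left per thread
    cycles = 0
    while any(remaining):
        cycles += 1
        ran = False
        for i in range(n_threads):
            if waiting[i] == 0 and remaining[i] > 0 and not ran:
                remaining[i] -= 1                      # compute this cycle
                waiting[i] = stall if remaining[i] else 0
                ran = True
            elif waiting[i] > 0:
                waiting[i] -= 1                        # stall ticks down
    return cycles

print(simulate(1))   # 99 cycles: nearly half of them wasted on stalls
print(simulate(2))   # 100 cycles for twice the work: stalls are hidden
```

With one thread the core idles on every stall; with two threads it finishes twice the work in roughly the same number of cycles, which is exactly the headroom hyperthreading is after.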

Since processor, cache, and memory speeds have hit a temporary roadblock, Intel and AMD have put their eggs in the SMP basket, offering dual-core and quad-core CPUs to the general public instead of keeping them the exclusive domain of high-end workstations and servers. It would certainly be overkill, but can you imagine somebody buying a dual-processor, quad-core system with hyperthreading enabled just to browse the Web and send e-mail? Of course, they'd have to run Windows Server 2003, the Vista equivalent, or a Linux distribution to make use of the mix of 16 real and virtual processors, but my goodness!
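To spell out where that 16 comes from, here's the quick tally for such a hypothetical box:

```python
# Hypothetical dual-processor, quad-core, hyperthreaded system:
sockets = 2            # dual-processor motherboard
cores_per_socket = 4   # quad-core CPUs
threads_per_core = 2   # hyperthreading doubles each core
logical_processors = sockets * cores_per_socket * threads_per_core
print(logical_processors)  # 16: 8 real cores plus 8 virtual ones
```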

As a footnote, my wife, who has the same type of IBM workstation, has been using my machine a bit more lately and is quite vocal about how zippy it runs. It appears I will have to duplicate the CPU/memory upgrade for her as well. I guess that's only fair.