Archive for May, 2007

My long-held belief that cheap memory costs more in the long run was confirmed over the last few days, when 3 out of 4 modules from two OCZ 4 GB (2×2 GB) PC2-5400 Vista Upgrade edition kits failed miserably in a new Intel Q6600 based system: one module produced errors in memtest, and with two others the system would not POST at all. A new delivery of modules from Crucial sorted it all out perfectly. They were 50% more expensive, but can you really afford bad memory that leads to very weird crashes which seem to imply other hardware components are at fault?

While on the subject of memory: testing it with memtest also provides a nice benchmark showing the bandwidth available to the processor from the L1 and L2 caches and from actual RAM. The stats for the Q6600 (2.4 GHz) system running dual-channel memory (2×2 GB at 667 MHz, CL5) show that the actual speed of RAM is just below 4 GB/sec. To put this into perspective, it means that reading all 4 GB of RAM in that server would take a whole second! This is of course much faster than reading the same 4 GB from a hard disk, but still: RAM is anything but fast insofar as processor speeds are concerned. The L1 cache, for example, is rated at almost 400 GB/sec by the very same memtest, which is almost 100 times faster than accessing data in RAM! The L2 cache is slower, but still manages almost 170 GB/sec.
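You can get a rough feel for these numbers from managed code too. The sketch below is my own toy harness (the class name, buffer size and formatting are my choices, not anything memtest produces): it streams through a buffer much larger than the L2 cache so the loop measures RAM rather than cache throughput. A single-threaded managed loop will not match memtest's figures, but the order of magnitude comes through:

```csharp
using System;
using System.Diagnostics;

class RamBandwidth
{
    static void Main()
    {
        // 256 MB: far larger than any L2 cache, so reads come from RAM.
        const int size = 256 * 1024 * 1024;
        var buffer = new byte[size];

        long sum = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < size; i++)
            sum += buffer[i];
        sw.Stop();

        double gbPerSec = (size / (1024.0 * 1024 * 1024)) / sw.Elapsed.TotalSeconds;
        Console.WriteLine("Read {0} MB in {1:F0} ms (~{2:F2} GB/sec, checksum {3})",
            size / (1024 * 1024), sw.Elapsed.TotalMilliseconds, gbPerSec, sum);
    }
}
```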

What this means is that those who wish to obtain very high performance from their code should think carefully about the algorithms used so that they are cache friendly; otherwise software may run slowly simply because it is bottlenecked by comparatively slow RAM accesses.
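A classic illustration of cache friendliness (my own toy example, not from any particular application) is traversing a 2-D array in row-major versus column-major order. Both loops do identical arithmetic on identical data, yet the first walks memory sequentially while the second jumps N×4 bytes per access and misses the cache almost every time:

```csharp
using System;
using System.Diagnostics;

class CacheFriendly
{
    const int N = 4096;

    // Row-major walk: consecutive addresses, each cache line fully used.
    static long SumRowMajor(int[,] m)
    {
        long sum = 0;
        for (int row = 0; row < N; row++)
            for (int col = 0; col < N; col++)
                sum += m[row, col];
        return sum;
    }

    // Column-major walk: same work, but each access strides N*4 bytes,
    // so nearly every read has to go out to RAM.
    static long SumColumnMajor(int[,] m)
    {
        long sum = 0;
        for (int col = 0; col < N; col++)
            for (int row = 0; row < N; row++)
                sum += m[row, col];
        return sum;
    }

    static void Main()
    {
        var m = new int[N, N];
        var sw = Stopwatch.StartNew();
        SumRowMajor(m);
        Console.WriteLine("row-major:    {0} ms", sw.ElapsedMilliseconds);
        sw.Restart();
        SumColumnMajor(m);
        Console.WriteLine("column-major: {0} ms", sw.ElapsedMilliseconds);
    }
}
```

On typical hardware the column-major version is several times slower, despite computing exactly the same sum.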

You may come across a very annoying situation whereby your otherwise nice system won't show more than 3.5 GB of RAM even though you have 4 GB or more installed. Granted, you need a good 64-bit system to take advantage of that memory in the first place, but if the BIOS won't allow the OS to see it then you will lose half a gigabyte or more.

The trick is to find the "H/W hole mapping" option in the Award BIOS and set it to Enabled. It lives in the Frequency/Voltage Control submenu, but only after you press a secret key combination, Ctrl-Shift-F1, to make the option visible in the first place; this is the case on all of my nForce4 motherboards with AMD X2 CPUs. Information about the solution to this problem is rather scarce, so I thought I would post it here. Enjoy your full 4 GB of RAM!

Squirrels are very cute animals and I have always liked them, perhaps because my native city was in the middle of woods and there were lots of red squirrels there. They were very friendly too: they would eat from your hand, an amazing experience for a child, even though my parents were not impressed with squirrels getting into the house and stealing all the nuts.

There are lots of squirrels in the UK too, even though the locals do not seem to rate them particularly highly, perhaps because grey squirrels are actually an alien species that invaded the British Isles and drove out the cute local red squirrels. While I too prefer the reds, the greys are also very cute; unfortunately I never managed to get them to eat from my hands, until yesterday, that is.

The app would die instantly, with no exception to show where exactly it happened, which is very annoying when it occurs at the end of a very long processing run. There seems to be very little information on the net as to why exactly this happens, probably because most people don't push the framework as hard as we have to.

The issue here is buffer overruns: if you use pointers in .NET code then you have to be very careful not to overwrite your memory buffers, because otherwise you will destroy data used by the .NET runtime itself, and this can result in rather weird failures that are not obviously due to your code. If you use unsafe code you need to assume the worst and be on guard at all times.
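The failure mode is easy to sketch (this is my own minimal example; compile with /unsafe). The loop below is correct, but the comment marks the single-character change that would reproduce the problem:

```csharp
using System;

class OverrunDemo
{
    static unsafe void Main()
    {
        var buffer = new byte[16];

        fixed (byte* p = buffer)
        {
            // Correct bound: i < buffer.Length. Writing "<=" instead would
            // put one byte past the end of the array; the CLR performs no
            // bounds check on pointer writes, so that silently stomps on
            // whatever the runtime keeps next to the array, and the crash
            // surfaces much later, far from this line.
            for (int i = 0; i < buffer.Length; i++)
                p[i] = 0xFF;
        }

        Console.WriteLine(buffer[0] == 0xFF && buffer[15] == 0xFF
            ? "buffer filled within bounds"
            : "unexpected");
    }
}
```

Accessing `buffer[i]` through the array reference would have thrown an IndexOutOfRangeException at the faulty line; through the pointer, nothing is thrown and the corruption propagates.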

Naturally one can choose a language that does not support pointers at all, say Java, but that is a choice of convenience that goes against the goal of high performance: in my experience, using pointers in a handful of tight places can double the performance of the same C# code written without them.
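The shape of such a tight spot looks roughly like this (a hypothetical sketch of my own; compile with /unsafe, and note that the actual gain depends entirely on the workload and the JIT version):

```csharp
using System;

class PointerSum
{
    // Safe version: element accesses are bounds-checked by the JIT
    // (the check is elided for some simple loop shapes, but not always).
    static long SumSafe(int[] data)
    {
        long sum = 0;
        for (int i = 0; i < data.Length; i++)
            sum += data[i];
        return sum;
    }

    // Unsafe version: pin the array and walk it with a raw pointer,
    // trading bounds checks for the obligation to get the limits right.
    static unsafe long SumUnsafe(int[] data)
    {
        long sum = 0;
        fixed (int* p = data)
        {
            int* cur = p;
            int* end = p + data.Length;
            while (cur < end)
                sum += *cur++;
        }
        return sum;
    }

    static void Main()
    {
        var data = new int[1000];
        for (int i = 0; i < data.Length; i++) data[i] = i;
        Console.WriteLine(SumSafe(data));    // 499500
        Console.WriteLine(SumUnsafe(data));  // 499500
    }
}
```

Note that `end` is computed once from `data.Length`: all the safety that the runtime no longer provides is concentrated in that one line.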

A few of our fairly complex applications appear to use a LOT more memory when run on a 64-bit Longhorn system: at least 50% more, or even 75-100% more, compared to the same data being processed on a 32-bit system. This is rather annoying as it seems to defeat the purpose of expanding memory from 2 GB to 4 GB, and what's worse, this high wastage makes it hard to use consumer-grade quad cores, as too little memory is left per core. There seems to be very little information on the topic; it looks like adoption rates for 64-bit .NET systems are fairly low, and those who happen to come across these issues keep mum about them, hence this blog post, which will hopefully attract the attention of others who experience this problem.

It is obvious that pointers double in size in 64-bit mode, however this particular application mostly deals with arrays of primitive value types (ie: int[], long[], byte[]) that contain no pointers. The large number of small byte[] arrays being created and disposed might be the reason though: every object carries a fixed per-object overhead (header plus method table pointer) that grows on 64-bit, so a workload made of many tiny arrays wastes proportionally more memory there.
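A quick way to see the per-array cost is to ask the GC for total memory before and after allocating a pile of small arrays. This is my own test harness, and the exact overhead figure depends on the runtime version and bitness, but comparing the output of the same build on 32-bit and 64-bit should make the difference visible:

```csharp
using System;

class SmallArrayOverhead
{
    static void Main()
    {
        const int count = 1000000;
        const int payload = 16; // bytes of actual data per array

        long before = GC.GetTotalMemory(true);
        var arrays = new byte[count][];
        for (int i = 0; i < count; i++)
            arrays[i] = new byte[payload];
        long after = GC.GetTotalMemory(true);

        // Average footprint per array, minus the payload, is the
        // per-object overhead (header, method table pointer, length,
        // alignment padding).
        double perArray = (after - before) / (double)count;
        Console.WriteLine("{0:F1} bytes per {1}-byte array (overhead ~{2:F1} bytes)",
            perArray, payload, perArray - payload);
        GC.KeepAlive(arrays);
    }
}
```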