For about two weeks now, when compiling apps from source, I would sometimes run into problems with GCC failing to compile, and not because of the source code or dependencies: GCC itself would error out with a segmentation fault.

Then last Friday, makepkg would also error out while making a package, with a segmentation fault. Yesterday morning it took 4 attempts to build a custom kernel (with the -ck patchset), and later in the day I wasn't even able to get Fluxbox to compile. Finally, today none of the partitions where I had VL installed would boot properly; the kernel would panic and spit out tons of traceback output. The few times I was able to log in, it would simply freeze up at random... so I'd finally had enough.

The most logical suspect seemed to be the memory, so I grabbed a bootable memtest86 ISO, burned it on my laptop and booted my PC with it. After only a few seconds, memtest started listing bad addresses non-stop...

So it's been at this for almost 4 hours now, and it pretty much looks like the second half of my 256 MB of RAM is completely buggered.
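(For anyone who can't reboot into memtest86 right away: a rough cross-check is possible from inside a running system with the userspace memtester tool, which locks a chunk of RAM and hammers it with test patterns. This is only a sketch; the size and pass count below are arbitrary, and memtester can only reach memory the kernel hands it, so a clean run doesn't prove the module is good.)

```shell
# Test 64 MB of RAM for 2 passes; any FAILURE lines indicate bad cells.
# Needs root so the region can be mlock()ed and not swapped out.
sudo memtester 64M 2
```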

It's kind of strange that over the last two weeks that memory module has been "decaying" little by little, instead of dying a "sudden death" like most hardware does...

What could have caused this to happen? There haven't been any blackouts or power surges here for quite a long time (at least 8 months), and that is normally the kind of thing that blasts my hardware to the next life...

oh yes, good old memory module faults. Had a similar experience with an older box 2 weeks ago. Before you trash the RAM stick you might want to just remove it, clean the contacts with a pink eraser and then rubbing alcohol, and then reseat it and try memtest again. That works for me in about 20% of memory faults.


Thanks for the suggestions, guys. I tried cleaning the contacts with an eraser and then alcohol, and strangely enough the number of errors detected by memtest was lower this time around, but still far too many to consider the module usable (about 90 errors detected).

I've been able to get a second-hand module to replace the old one, so I'm back in business.

You could use BadRAM. It can prevent the kernel from using bad memory sectors. If there are too many errors, well... the usable memory size will be smaller, but it can keep the module useful. I tried it once in the past, on a box owned by a friend. It's just one more patch for your kernel...
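For reference, here's roughly what that looks like (the addresses below are made up for illustration; memtest86+ can print BadRAM-style address/mask pairs for your actual faults, and if you'd rather not patch the kernel, the stock memmap= boot parameter can reserve a bad region instead):

```shell
# Hypothetical example: suppose memtest86+ (BadRAM pattern display) reported
# faults around the 128 MB mark as this address/mask pair. On a BadRAM-patched
# kernel you would boot with:
#   badram=0x08000000,0xfff00000
#
# On a stock kernel, memmap= can fence off the same region instead. In GRUB's
# /boot/grub/menu.lst, append it to the kernel line; this reserves 128 MB
# starting at the 128 MB mark:
kernel /boot/vmlinuz root=/dev/hda1 ro memmap=128M$128M
```

Either way the kernel just never allocates from the marked region, so the stick stays usable at reduced capacity.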



Cool, thanks for the tip! I haven't disposed of the module yet, so I can give that a try too.

I'd be interested to see if that works! I have two 500 MB sticks and a 1 GB stick that are bad. They run, but I do get freeze-ups and segfaults under load. I can't afford new RAM right now, but I am VERY interested in recycling them.