I seem to be running in circles on this system, and I don't know what the problem is. Last night, on rebooting, XP started flashing what looked like a flicker of a BSOD and then rebooted again. I couldn't seem to resolve it, so I decided to install XP on the IDE drive.

uh, oh. ...

Quote:

But it had a wild hair and decided to format the SATAs instead, so I lost everything.

Ohhhh, God. I feel your pain.
Win2k RC2 did something similar some years back. I am very sorry to hear it happened.

Quote:

Originally Posted by seeker

Not being deterred, I tried again, and then it formatted the drive that it was supposed to, so I installed XP, and afterward tried to install SuSe 10.0 on the 2nd partition of the IDE, since it seemed that the IDE was the only drive that it could see properly. That install appeared normal, but upon rebooting after the first part of the installation, it couldn't proceed. I guess that I will format it and try again, maybe with a different distro, if any of them can see the drive.

My goodness, you do have persistence. Good for you!

Quote:

Originally Posted by seeker

What is even stranger is that when I did attempt to install XP afterward on the SATAs, the installer could not see those drives. That may be due to the fact that Windows did not get rid of everything on them as I thought, because during the SuSe install it did see these with SDB1 as containing NTFS. I deleted that and haven't had time to see what would happen now. If all of this sounds confusing, I totally agree. I have never considered myself an expert on raid systems, but I have never had problems like these.

Well, consider that it might be the first time you have ventured into the unknown. Don't be too surprised by anything that might happen with MS WinXXXX. Really! They do not have a f**g clue of what they are doing if anything is not exactly what they expect. You _must_ do it their way or no way at all. MS is a marketing organization (and very D* good at it, I will add).

Quote:

What I need is a detailed step by step tutorial specifically for my mobo. The manual's instructions are quite barebones.

Nada; it doesn't (and won't) exist. You and everybody else who uses RAID are in an extreme minority, although it does seem that RAID 0, the only RAID that is not a real RAID, has become popular with the gaming crowd because of the small performance improvement. We might get past the point of everyone running away whenever the word RAID is mentioned, but not anytime soon.

For what it might be worth:
There are conflicting points of view about whether to install MS or Linux OS first. Actually, IMHO, either way works but there are tradeoffs to each.
The best for me seems to be to install Linux and make room for MS Win* at the same time. Historically, Win* will throw a bitch-fit if it does not have the first primary partition on a drive, so format the first 4GB of the first drive as FAT32 and make it a primary partition. In the past I have used only 2GB, but Win* is such a hog that more is better; up to ~8GB is okay (I think).
Then install Linux on, in order of best/easiest use:
(1) another (physical) drive, leaving the first drive for Win*
(2) the same drive with a /boot partition included in the partitioning scheme. Install GrUB to the /boot partition, NOT the MBR. (/boot only needs to be 50MB or so but can be larger; some Linux OS want it to be >=120MB...)
After Linux is installed and you are happy with the installation, install Win* to the first drive's primary partition. Win* will recognize that there is some other OS and add it to the NTLDR. It won't do it correctly, so you cannot boot to "Other Unrecognized OS," but it will be there for later modification (which can allow using NTLDR to boot Linux).
The key to (1) is that in your BIOS you have a choice of what to boot from first. If you have different drives, one will boot to GrUB and the other will boot to the NTLDR.
This relieves a lot of configuration/frustration pain, because you can change the BIOS to boot into the OS of choice until you get each bootloader set up *properly*, so you can boot to any listed OS from either bootloader.
The real *trick* to (1), for some BIOS, is using different types of drives. Some BIOS are stupid or lazy or ...whatever... and won't allow booting from just any IDE drive because they group all IDE(PATA) together. Same for SCSI and, IIRC, for SATA. However, if you have one OS on IDE(PATA) and the other on, in your case, SATA, you can select which to use: either PATA or SATA.
I did this for years with SCSI and IDE(PATA); works like a fr*in charm.
The (2) also works, but there is the difficulty that after installing Win*, it will boot from the MBR and there is no way from the BIOS to get to GrUB. Win*, as you know, is Master of the Universe, so it takes over. Mr. Bill does not care if you have any other OS and in fact wants to make it as difficult as possible to use one. But you can give the 3-fingered salute to Bill with a little prep beforehand. There are several instruction pages available on the web for this method, but this is the only one I have readily available at the moment: http://software.newsforge.com/articl...023237&tid=130

It basically means that you need to do the dd work _before_ you install Win* so that after MS takes over your system from you, you can modify the NTLDR boot.ini to allow use of the "Unrecognized other OS" and get the system back under your control.
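The dd work amounts to saving the 512-byte boot sector of the partition GrUB was installed to into a file that NTLDR can chain-load. A minimal sketch, run here against a scratch file (the real device name, e.g. /dev/hda2, is an assumption about your layout -- substitute your actual /boot partition):

```shell
# Stand-in for the /boot partition that GrUB's stage1 lives on:
BOOTPART=bootpart.img
dd if=/dev/zero of="$BOOTPART" bs=512 count=8 2>/dev/null

# The actual trick: capture just the first 512-byte sector.
# On a real system this would be: dd if=/dev/hda2 of=linux.bin bs=512 count=1
dd if="$BOOTPART" of=linux.bin bs=512 count=1 2>/dev/null
wc -c linux.bin                  # exactly 512 bytes

# Copy linux.bin to C:\ and add a line like this to C:\boot.ini:
#   C:\linux.bin="Linux (GrUB)"
```

Note that you need to redo the dd (and re-copy linux.bin) any time GrUB gets reinstalled to that partition, since the saved sector goes stale.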
I cannot help you with it. I don't use it (I prefer GrUB) and don't have a system here, as I mentioned before, with NV 2200/2050 on it.

One other thing that I did which you might be able to avoid.
When I made the RAID 5 during the Linux install, I used ext3 as the filesystem. This is a good/wise thing because it is a journaling fs. However, MS, as you know, refuses to properly recognize ext3 even after the install is complete. It might be wiser to use FAT32 on at least part of the RAID and that way MS is more or less forced to recognize the SATA drives even during the install. ...
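For reference, on the Linux software (md) RAID side that setup looks roughly like the following. The device names and the use of mdadm are my assumptions, since the installer hides these details; do not run these against drives you care about.

```text
mdadm -C /dev/md0 -l 5 -n 3 /dev/sda2 /dev/sdb2 /dev/sdc2    (build the RAID 5)
mkfs.ext3 /dev/md0                                           (journaling fs for Linux)
mkfs.vfat -F 32 /dev/sda1                                    (a FAT32 slice MS will see)
```

The FAT32 piece lives outside the array, since MS cannot read Linux md RAID at all; its only job is to make the SATA drives visible to the Win* installer.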

I also noticed that Win* (2k SP4, in my case) did not recognize the SATA, which was resolved, mostly, by installing the NVIDIA SATA and RAID controller drivers, i.e., both are necessary. You will need to do that during the Win* install, so beforehand you will need to make a 3rd-party driver disk containing the NV SATA drivers.

A real mess, huh?
Sorry that I cannot help more. Monarch Computer Systems contacted me by email today and told me the mobo sent to them was "not checked into the system" which I know is not true. It arrived yesterday at noon so I called yesterday evening for status and two different people pulled it up on the computer screen.
I probably won't be getting that mobo back - will be asking for refund if I don't get an email from FedEx tonight that the replacement is on the way. I don't expect the latter.

I'm still digesting the rest of your post, because it seems to have some value. Parts of it are things that I already know and agree with; others I am still weighing in my mind. Your comment about Raid 0 threw me just a bit, because the only form of raid that I knew was not really raid is JBOD. However, a rose by any other name.... The other thing that I'm thinking about is your discussion of using NTLDR as the boot loader. Most of the people with whom I have discussed this seem to favor either Grub or LILO. These have caused me some problems, e.g. doing a kernel upgrade and having the system tear itself apart. I have also read some about using 3rd-party bootloaders, but I still tend to use a Linux flavor. In the end, the best would be the one that is most reliable, but I do not have the experience to judge this for myself.

I found part of what I was looking for...a manual for the Gigaraid controller...but this is only for an IDE raid array, which I don't have. It has still been helpful to some extent, because I have been confused about the BIOS options for both Nvidia raid and Gigaraid. If I could find a similar manual for the Nvidia raid, I might almost understand the systems. However, even the manual that I have doesn't explain the BIOS options and configuration. If I had enough IDE HDs, I might be better off using Gigaraid, because it appears to be a hardware controller vs Nvidia raid, which is a software controller...this being significant with Linux installations.

RAID is a Redundant Array of Inexpensive Disks.
RAID 0 was defined in the original document, but there is no redundancy, and the point was made that the reason for using a bunch of cheap disks was to avoid spending thousands of dollars protecting data with other solutions, which RAID 0 does not do... It is a RAID but not a real "Redundant" array, because it just spreads the data across the disks: if one dies, all is lost.
JBOD is not RAID at all.

Sorry, but if doing a kernel upgrade is tearing things up, it won't help to use NTLDR. The NTLDR boot.ini will have a line that points to exactly what you just changed.

If GrUB is affected by a kernel upgrade, the OS must have installed it to the MBR. I have done many upgrades and a few kernel builds with zero problems with GrUB. I always install it to the boot partition on the disk, not the MBR. That has to be done during the install of GrUB, of course, and it is best to have a 120MB or so /boot partition for it.
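For the record, installing GrUB (legacy) to the partition rather than the MBR looks like this from the grub shell. The (hd0,1) location, i.e., the second partition of the first disk, is an assumption; adjust it to wherever your /boot actually is:

```text
grub> root (hd0,1)
grub> setup (hd0,1)
grub> quit
```

The `setup` target being a partition instead of (hd0) is the whole point: stage1 goes to that partition's boot sector, and the MBR is left alone. Most installers also offer this as a "boot loader location" choice during install.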
(That's a pretty big boot partition, but IIRC, RHEL complains if it is less. Only ~30MB is really big enough unless you have a bunch of kernels and/or are doing kernel hacking/building.)
It is safer there and causes fewer headaches in the long run. Also, the MS OS doesn't screw it up (or vice-versa).
It also has the advantage that if it is on a separate disk controller, say IDE, from the MS OS boot partition(MBR) which could be on a SATA or SCSI disk, the BIOS will allow you to boot to either boot loader by choosing the appropriate disk in the BIOS.

Well, I have done what I did not want to do and bought a different dual Opteron mobo that uses NV 2200 & 2050 as well as the AMD 8232. So chances are good that I will be taking a closer look at HARM in the near future. The only other option is to buy the HW controller and forget using NV RAID which I hate to do since I PAID FOR IT but sometimes one just has to cut the loss and move on. The time lost factor can become overwhelming...

There are Nvidia guides: the Nvidia RAID Users Guide and another (I forget the name at the moment). They are pretty simple "do this and that" without many explanations. If you need the steps with basic explanations, you can find them on Google for sure, but I think NV still has them:
__ Forceware NVRaid_Users_Guide_v1.1.pdf
The newer version was renamed Media Shield:
ForceWare_MediaShield_Users_Guide_v.3.1.pdf
And a 15 page sales pitch which explains some things about the "Media Shield" NV is promoting:
01760-001_v02_MediaShield_090805.pdf

I know nothing about gigaraid. The only RAID controller I have seen in years that is tempting me is the Areca ARC 1200 series (e.g., 1210). It is a true HW SATA RAID controller with Linux, Solaris, MS, and, I think, even BSD drivers. It is also 400USD...

I appreciate your response, and it would probably help me quite a bit, except that since I last posted, I had some more mobo problems, and removed it and took it back to the store. I now have another Gigabyte mobo of a different model on order (GA-K8NS Ultra-939). It has a number of differences from the previous board, including a faster chipset, more SATA connections, dual BIOS, and a different raid controller. Instead of the Nvidia raid, it has a Silicon Image 3512 raid controller. I do not yet know if this is a software or hardware controller, but I'm hoping the latter. I'm not sure if the Gigaraid is still available or not. I don't know if this will work with Linux, but it couldn't be any worse than what I had. If it doesn't work this time, I will probably get a hardware controller, as you suggested.

AFAIK, the Sil3512 is another software RAID controller where the "hardware" consists mostly of a BIOS that lets you boot off it :-P
I second the Areca if you want real RAID. Just wish the dang things were only $150 instead of $400 :-(

I said that I had a motherboard on order, but I found out this morning that my supplier could not find his supplier, so I'm still looking. I sent an email to Gigabyte to find out if that motherboard is in production, because after spending most of the day, everybody said that the board was either not in production yet, or had been dropped. I'm still persisting, because it is the only motherboard that I have been able to find that has the features that I want. Apparently, the manufacturers have been dropping their AGP boards, forcing everyone to go PCIe, but that is something that I shall not do.

Even if the Silicon Image Raid controller is run by software, maybe it can work okay... I will just have to find out for myself. I'm more than willing to accept that Areca and the other recommendations are, as far as I'm concerned, superior raid controllers, but I can't spend $400 for one. Surely, there is a more affordable option that will work.

If you don't need an mATX board, you might want to look at the "server" boards that have built in SAS RAID chipsets. Motherboard makers might not be able to pull this "hardware RAID means I have a BIOS to boot into software RAID" stunt w/ customers who expect SAS RAID to work as fast as SCSI RAID :-)

Outside of the fact that the board needs an ATX power plug to connect the PS, I'm not sure how an ATX board otherwise differs from a server board. Since I do not intend to operate a server, would this kind of motherboard deprive me of any other kind of ability?

Look at the GigaByte GA-2CEWH: http://www.gigabyte.com.tw/Server/Pr...d_GA-2CEWH.htm
It runs CentOS 4.2.* x64, which is the same as Red Hat Enterprise Linux 4.2.* without the $upport contract$. It also runs MS Win*, etc., but it is not Mandriva or (K)Ubuntu friendly, since neither of them has included the recent kernel patches (apparently).

Quote:

Originally Posted by seeker

Apparently, the manufacturers have been dropping their AGP boards, forcing everyone to go PCIe, but that is something that I shall not do.

I can appreciate hanging onto AGP for budget's sake, but it would be a huge mistake not to move to PCIe now if you can. Nobody, and I mean nobody, is going to be using AGP in the future. It is dead. PCI is on the deathwatch too, although PCI-X is probably going to make it a long, painful death.
And you can add PATA to the list with SCSI not far behind. SATA is already pushing them into the extreme low-end and high-end only markets, respectively. SCSI is mostly high-end anyway...

Quote:

Originally Posted by seeker

Even if the Silicon Image Raid controller is run by software, maybe it can work okay...I will just have to find out for myself. I'm more than willing to accept that Areca and the other recommendations, as far as I'm concerned, are superior raid controllers, I can't spend $400 for one. Surely, there is a more affordable option that will work.

As you and I have found out somewhat painfully, the onboard controllers are not hardware raid controllers. They do the setup with the hardware but offload the work to the CPU(s). BUT there is some cross-platform compatibility, which beats none at all if you are going to use software RAID, right?

It may come as a surprise but the Areca is actually priced below most hardware raid solutions that boast such good transfer numbers. LSI(including Dell PERC) and Adaptec are or were twice as much...

The best solution for an individual or small company without huge demands is software RAID. It is the cheapest solution too.

I have my new 2CEWH up and running but it is going to be a while before I can start on HARM (again). I'll be using CentOS 4.2 x64 and Win2k Advanced Server for testing and hopefully it will just work.

And, BTW, spending 400USD on a RAID controller is not in my budget either. Besides, I would need the ARC-1220 and it is ~700USD which is definitely not in the budget at this time. I am still having sticker shock from the mobo, CPUs and graphics card...