So I thought I'd try out the newfangled preupgrade system to go from FC10 to FC13, and it all seemed to be going fine until I rebooted the machine.

First it couldn't detect my RAID (3Ware 8 series) drive at all, but after a lot of searching I found some comments at the bottom of a page somewhere:

pcie_aspm=off

Adding this to the kernel parameters at boot time got anaconda to boot into the graphical interface, at which point it tells me it can't find the previous file system's root. After pressing ctrl-alt-f2 and digging around, I find that lvm lvdisplay shows the drive as NOT available, for no known reason. lvm vgchange -a y fixes this in a jiffy, but of course I have no choice at that point and anaconda reboots the system.
Interestingly, the ctrl-alt-f4 screen shows a message saying LogVol00 has 0 active volumes, but when I manually enter lvm vgchange -a y I get a message telling me 2 volumes are active (the swap and the main volume).
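For anyone following along, the manual workaround from the console is roughly this (a sketch; your volume names will differ):

# switch to a shell with ctrl-alt-f2, then:
lvm lvdisplay        # LV Status reads "NOT available"
lvm vgchange -a y    # activate every volume group it can find
lvm lvdisplay        # LV Status should now read "available"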

So after even more internet scouring I tried adding a %pre section to the ks.cfg file on my boot drive: a sleep 10, then lvm vgchange -a y, then another sleep 10 for good measure, and I also sent the section's output to a log file. Sure enough, the commands run as anaconda boots, and the message in the log file says 2 volumes are active... however, when I page over to ctrl-alt-f4, the message saying there are no active volumes is on full display, and lo and behold anaconda's graphical interface pops up the "cannot find root of old file system" message again.
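For reference, the %pre section looks roughly like this (the log file path is just what I picked, nothing special):

%pre
# give the controller and disks a moment to settle, then activate all VGs
sleep 10
lvm vgchange -a y > /tmp/preupgrade-pre.log 2>&1
sleep 10
%end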

So... I can see a number of people have similar or related problems here and there on the net, but does anyone have an inkling of how to fix this?
The big question this raises for me is why, with the upgrade from FC10 to FC13, LVM now sets the volumes to inactive by default. I.e. why do I now need to run the vgchange manually? I think this is the crux of the matter.

I don't use LVM, but if you're patient someone else might have some ideas. The truth is, I don't think you're going to find many folks here who would take the chance of upgrading from F10 to F13 all in one step. There have been a lot of changes to the system in that time. Maybe a search of the forums or even bugzilla might come up with some clues.

Do you think I should upgrade to FC12 first? (I tried that first, but I hadn't found the pcie_aspm flag at that point, so it wouldn't even find the drives.)

I'm not sure this is a RAID problem, is it? The RAID works fine if I lvchange the LVM volume to active from the console; then I can mount it and access it with no problem.

It "feels" like something to do with LVM - of course it could be to do with the powering up of the card, or the spin-up of the disks, etc., but I suspect it isn't, because, as I mention above, I put the lvchange into the %pre section and it ran and activated the volume (according to the log), yet immediately afterwards anaconda reports that zero volumes are active.

If I were doing upgrades, I would step up one release at a time. Recently I've read of a couple of folks who did the yum upgrade method to go from F11 to F13 successfully, but, as I haven't tried it myself, I'm not endorsing the method. I have tried installing the fedora-release package (if you're going to F13 you would install fedora-release-13) and then doing yum upgrade. I've used that method successfully going from F11 to F12 and then F12 to F13, but I'm only stepping up one release at a time. I guess it all depends on how adventurous you are.
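Roughly, the stepwise method looks like this (from memory, so double-check the package name, and fetch the release RPM from the new release's repo first):

# on F12, aiming at F13:
yum clean all
rpm -Uvh fedora-release-13*.noarch.rpm   # the F13 release package, downloaded beforehand
yum upgrade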
As far as your lvm problem, like I said, I don't use it. My only advice would be to check out the common bugs for each release and see if lvm comes up and gives you a clue.
This is a link to the F12 common bugs; I don't have one for F11: https://fedoraproject.org/wiki/Common_F12_bugs
But like I said, search bugzilla and see if lvm comes up in any hits: https://bugzilla.redhat.com/index.cgi

I use LVM for RAID. I upgraded from F12 to F13 using preupgrade and really haven't noticed any difference. I do know they have made a lot of changes in how Fedora recognizes RAID since F10; I'm not sure what those changes are. If you have backed up your files, I suggest you do a clean install. I think it would save you a lot of time.

I kind of worry that if I do a clean install it still won't activate the LVM volume properly, primarily because this is happening during the upgrade procedure before it even gets to see my old install. There is something broken in the kernel and system settings the upgrade procedure is using, and a clean install will have the same problem, I think.

I think this has been moved to the wrong forum - it has nothing to do with my prior version of Fedora, because the upgrade hasn't even got that far yet (it can't even see the original root file system).

It is a problem with the preupgrade kickstart system for FC13, so can we move this thread back, please?

The system is an 8506 RAID card with an LVM2 volume on a RAID 10 array (4 HDDs), and it works fine in FC10. The anaconda upgrade process and the kernel it uses seem to be causing the problem I list in my original post.

I don't think the actual driver is the problem; it's perhaps something to do with power management in the kernel. Could it be accidentally telling the 8506 to power down, thus causing LVM to mark the volume as not available?
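If the power-management theory holds, one thing worth poking at from the ctrl-alt-f2 shell (assuming lspci is even on the install image - it may not be) is the link power state of the devices:

lspci -vv | grep -i aspm    # look for "ASPM Enabled/Disabled" in the link control lines

The pcie_aspm=off workaround would fit that picture, though the 8506 is an older card, so any effect might be via a bridge rather than the card itself.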

The problem here isn't that I can't access the drives - I can, from a console (ctrl-alt-f2 during the upgrade process): I just pop into lvm, do an lvchange -a y to activate them, and then I can mount them with no problem. The problem is that even if I do that in a %pre bash script before the upgrade kickstart runs anaconda, anaconda still reports in its console window that they aren't available, and the text saying so (in the alt-f4 or alt-f5 logging console) is weirdly cut off mid-sentence, as though something in the kernel or boot process is going wrong.
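For completeness, the manual sequence that works from the console is roughly this (LogVol00 being the stock FC10 naming; substitute whatever lvscan reports):

lvm vgchange -a y                             # or lvm lvchange -a y on the specific LV
lvm lvscan                                    # volumes should now show as ACTIVE
mkdir -p /mnt/oldroot
mount /dev/VolGroup00/LogVol00 /mnt/oldroot   # VolGroup00 is a guess at the VG name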