Hi All, we had a power failure at home on my system running 810, and when the power came back and it booted up, the LMCE software RAID started showing all the drives as removed. I have tried su mdadm -D /dev/md1 and it comes back with: mdadm: md device /dev/md1 does not appear to be active.

I have very limited command-line knowledge, so any help would be appreciated to see if there is anything I can try to recover the RAID and the data, as all our family photos are on it. I am currently building up a new QNAP NAS with RAID6, as I was advised to do.
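(For what it's worth, I'm happy to run read-only diagnostic commands and post the output if someone tells me exactly what to type. From what I've read, something like the following only inspects the drives and doesn't write anything; the sdb1 name is just an example and may not match my system:)

cat /proc/mdstat              # show which md arrays the kernel currently sees and their member devices
mdadm --examine /dev/sdb1     # print the RAID superblock stored on one member partition (run as root)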

If the md driver detects a write error on a device in a RAID1, RAID4, RAID5, RAID6, or RAID10 array, it immediately disables that device (marking it as faulty) and continues operation on the remaining devices. If there are spare drives, the driver will start recreating the data that was on the failed drive on one of the spares, either by copying from a working drive in a RAID1 configuration, by doing calculations with the parity block on RAID4, RAID5, or RAID6, or by finding and copying originals for RAID10.
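As an illustration only (never run this on an array you are trying to recover), the fail-and-rebuild behaviour described above can be watched on a healthy test array that has a spare; the device names below are assumptions:

mdadm /dev/md0 --fail /dev/sdc1     # mark one member as faulty; md drops it and keeps running degraded
cat /proc/mdstat                    # a spare, if present, immediately starts rebuilding; progress shows as a percentage
mdadm --detail /dev/md0             # per-device state: active, faulty, spare rebuilding, and so on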

In kernels prior to about 2.6.15, a read error would cause the same effect as a write error. In later kernels, a read error will instead cause md to attempt a recovery by overwriting the bad block: it will find the correct data from elsewhere, write it over the block that failed, and then try to read it back again. If either the write or the re-read fails, md will treat the error the same way a write error is treated and will fail the whole device.
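Related to that, on reasonably recent kernels you can make md walk the whole array and repair bad blocks proactively via sysfs; this is only a sketch, run as root, and md0 is an assumption:

echo check > /sys/block/md0/md/sync_action     # read every block; inconsistencies are counted but not rewritten
cat /sys/block/md0/md/mismatch_cnt             # number of inconsistencies found by the last check
echo repair > /sys/block/md0/md/sync_action    # re-read and rewrite inconsistent or unreadable blocks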

Obviously ignore the parts about Parted Magic and TestDisk; fsck is already included in Linux (in case you didn't know that). You haven't explained what we are working with here, by the way. A RAID5 external NAS, I assume?

Just run these two commands (/dev/xxx being one or all of the RAID partitions, of course) and report back the info, unless you feel comfortable fixing it yourself. I am not sure it's a bad superblock; it's not a good idea to start trying to fix things without knowing what the problem is first. Just a guess. Good luck!
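(Purely as a guess at the kind of thing those two commands usually are in threads like this, per-drive superblock and disk-health checks tend to look like the lines below; the device names are assumptions and both commands are read-only:)

mdadm --examine /dev/sdb1     # dump the md superblock on that partition: array UUID, state, event count
smartctl -H /dev/sdb          # overall SMART health of the underlying disk (needs the smartmontools package)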

OK, it looks like all the drives are clean. Your RAID is showing all the drives as spare (like in the pic); I wasn't sure how accurate the GUI is. This may not be a superblock problem, but let's find out for sure.

Run

mdadm --assemble --scan -v

This will let us know which drives have bad superblocks and whether that is indeed the problem.
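If the verbose output isn't conclusive, comparing the superblocks across all the members is another way to spot the odd one out; this is a rough sketch and the drive letters are assumptions:

mdadm --examine /dev/sd[bcdef]1 | egrep '/dev/sd|Events|State'   # compare event counts and states across members
# A member whose event count lags behind the others, or whose superblock cannot be read at all,
# is the most likely problem drive.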

This is odd; if that drive is or was an active hot spare, it should be partitioned and ready to be written to if a drive failure occurs. I would think b, c, and d would be the active drives and e the hot spare, with f as your backup hot spare. Is this correct?
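To double-check which role each partition actually has, the md superblock on each one records it; this is just a sketch, and sde1 is an assumption:

mdadm --examine /dev/sde1 | grep -i -E 'role|spare|raid devices'   # 1.x metadata prints "Device Role : spare" for a hot spare; 0.90 metadata lists spares in its device table
fdisk -l                                                           # quick overview of every disk and its partitions on the box (run as root)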