Topic: md RAID 1 not being auto mounted

I had created a RAID 1 md device via LinuxMCE's admin interface, but it never behaved correctly when the core was rebooted. Since I had already put data on it, I decided to stop the device, and then removed it from the interface.

I manually assembled the md device from the two 1 TB drives and put the resulting configuration in /etc/mdadm/mdadm.conf.
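For reference, the manual steps went roughly like this. This is a sketch, not the poster's exact commands: the device names /dev/md0, /dev/sdb1, and /dev/sdc1 are placeholders, and update-initramfs assumes a Debian/Ubuntu-based system (which LinuxMCE is).

```shell
# Assemble the existing RAID 1 array from its two member partitions
# (device names are examples; check yours with: cat /proc/mdstat or lsblk)
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1

# Append the array definition to mdadm.conf so it is assembled at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the new config is picked up early in boot
sudo update-initramfs -u
```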

I rebooted the core. This time the raid device came up correctly. LinuxMCE detected the raid and added it to the RAID list in the web interface with the status set to OK.

The problem is LinuxMCE has marked the two drives as REMOVED - wtf?! I think this is the reason why the raid device isn't being auto mounted and why I'm not seeing the "new storage added" on my MD's onscreen orbiter.

You have sort of painted yourself into a corner here. It is hard to get system support for something the system is designed to handle when you have done it outside of the system. You have already implemented a workaround that isn't working, and now your data is stuck on it. My recommendation is to copy your data off onto something else temporarily, reconfigure the array through the system, and then migrate the data back once your original issue is fixed.

In the future, when you encounter something not working:

1. See if a ticket exists in http://svn.linuxmce.org which matches your problem; if not, create one. Be as specific as you can in describing the environment and circumstances.
2. See if you can get help either on the forum or in IRC. This way we can get a better idea of what a solution might be, close your ticket, and fix it for everyone.

Most of us do not deal with RAID on our production or development systems - we just use NAS bricks - so we cannot reproduce the errors you are encountering.

If you go hammering away at it on your own, we don't know something is wrong... and then you end up here... on your own.

I didn't want to create a ticket until I was sure I had encountered a bug.

The first time around I DID use LinuxMCE to create the RAID array - it just didn't behave correctly. I did have another thread about that problem and asked repeatedly for help, but no one replied. When that happened, I started to do my own thing.

I'm not used to using a Linux distro that does this much hand holding. I have to get used to just giving control over to the OS (but only for LinuxMCE!).


I didn't mean to come off like "you screwed everything up".

It should "just work". Something has changed, because it used to. I think people might be reluctant to jump into RAID problem threads because, like I said, none of us use RAID... so it isn't an area anyone feels particularly strong in. We all have our strengths and weaknesses. RAID verges on things I understand about the system, but it's not something I feel "strong" in. It is one of those odd areas where you are more likely to get someone motivated by bugging us in IRC. When there ARE tickets (like the one I provided) you can see who each is assigned to, and specifically bug that person (you're welcome, Merk), as a ticket will be assigned to whoever is strongest on it.

Keep in mind, we are all volunteer... and all have lives outside of this free project, so sometimes needs cannot be met the minute they arise.

Honestly the only reason that I poked my head in, was because it was the second RAID issue I had seen, which often indicates a looming problem... and... the only thing I can think of that has changed recently that deals with RAID is the StorageDevices_Radar.sh, which I rewrote... however that has nothing to do with creation, but how the system discovers them. So if it was my rewrite that broke it, I will do whatever is necessary to fix it. If I didn't break it, I will still loan you the extent of my knowledge on the subject.


A fix has been committed in the above ticket. Note that you will still need to run "sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf" and reboot once, even after the update goes live (and as a workaround until it does); after that, this step will be performed automatically any time the RAID configuration is modified.