RAID 0 prevents host from doing full-registration
-
I have an older Optiplex GX620 with an old PCI RAID card in it, the only two HDDs in it are configured as RAID 0.
Full host registration fails because it can’t recognize /dev/sda
I ended up manually registering the host.
sanboot, grub, and grub first hdd do not work (in legacy mode).
It’s not too big of a deal to me - I didn’t really want to image the computer anyway, I was just registering it for the heck of it. I just thought I’d let the @Developers know about it.
I’ll see if I can figure out what device the RAID 0 array is listed as…
-
I believe you need to build the raid.
-
@Tom-Elliott Build the RAID? It’s already built - runs good. Can you elaborate?
-
Actually, what I think is going on here is that the boot kernel doesn’t have the required driver for the RAID card. I’ve seen this in the server realm quite a bit. In this case it would be impossible to include every one-off card in the kernel, and it would also greatly expand the kernel size to include these random drivers.
If this were more than just a one-off situation (thinking of the Dell Precision T3400 through T5600 series with an on-motherboard RAID card), then depending on the number I had to deploy I might build my own kernel (bzImage) that includes the required drivers. That is a bit beyond the capabilities of most FOG users, but it would make a really interesting wiki page to walk through the process of building a custom kernel.
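For reference, a rough sketch of what that build would look like (the version below is a placeholder and the destination path is an assumption; where FOG serves bzImage from varies by install):

# unpack a kernel source tree from kernel.org (version is a placeholder)
tar xf linux-<version>.tar.xz
cd linux-<version>
# start from a known-good config, then enable the RAID controller driver as built-in (not a module)
make menuconfig
make -j"$(nproc)" bzImage
# copy the result to wherever the FOG server serves its kernel from (adjust the path to your install)
cp arch/x86/boot/bzImage /var/www/fog/service/ipxe/bzImage

I believe FOG also lets you specify a kernel per host, so a custom bzImage could be limited to just the affected machines.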
-
I was remembering some kernel argument for RAID and found an old thread on it: https://forums.fogproject.org/topic/4218/intel-raid/18
Host Kernel Arguments:
mdraid=true
Going to try that…
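(If the argument actually makes it to the kernel, I’d expect to be able to confirm it from a debug task with something like this — just a sketch, assuming mdadm is present in the FOG init:)

cat /proc/cmdline        # mdraid=true should show up here if the host kernel arguments were applied
cat /proc/mdstat         # any md array that got assembled should be listed here
mdadm --detail /dev/md0  # device name is a guess; only meaningful if an array was actually assembled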
-
What raid card did you try this on? Unless the drivers are built into the kernel or included dynamically in the root fs, the kernel will be ignorant of the array.
-
@george1421 I don’t even know what it is; I’ve had them for years. It’s a PCI RAID card, supports SATA1, has four ports on it.
I’m not terribly concerned about it - right now I’m trying to get the ‘exit to hdd’ feature to work…
-
fdisk -l lists /dev/sda and /dev/sdb, then it gets very interesting… check it out…
It next lists /dev/mapper/sil_cbbhdgafcffd. Now… the fun part… the next two entries are /dev/mapper/sil_cbbhdgafcffd1 and /dev/mapper/sil_cbbhdgafcffd2, and the one ending in 1 is marked as boot in the next entry… and the sizes of 1 and 2 are correct (the combined /dev/sda and /dev/sdb in RAID 0).
I don’t have the HDD space (at the moment) to test whether that works or not…
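Those /dev/mapper/sil_* names look like what dmraid would create for a Silicon Image fakeRAID set (just a guess on my part). If dmraid is available in the init, something like this should show what it found — a sketch, with the mapper name taken from the output above:

dmraid -r                               # list the member disks and the metadata format found on them
dmraid -s                               # show the discovered raid set(s) and their status
dmsetup ls                              # list the device-mapper targets that were created
fdisk -l /dev/mapper/sil_cbbhdgafcffd   # partition table of the assembled set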
-
What’s going on here is that the kernel is seeing the individual disks and not the RAID controller. You can’t boot to /dev/sda since that is only half of your data (i.e. half of a RAID 0 array), so I can understand why it’s failing.
When you boot the computer, isn’t there a banner or something displayed during POST that shows what RAID card is installed?
-
@Wayne-Workman Still interested in getting this done?
-
Should this really be considered a bug? While I do realize that it is unexpected, most consumer-grade motherboards only do software-level RAID on the “raid controller”. Linux usually sees this as individual disks even if the “volume” is set up.
-
To add on, that basically means there’s nothing I can do directly. Until consumer motherboard RAID controllers actually present the “raid volume” to the OS as a single volume rather than presenting the individual disks directly, this isn’t really a bug that I can fix.
With all of that said, we added handling for the mdraid=true kernel argument, which should be able to look at the disks that are set up in a RAID and combine them into a separate /dev/md0 device.
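(For what it’s worth, this is roughly what that looks like from a shell, assuming the metadata on the disks is something mdadm understands, such as Intel IMSM — a sketch, not necessarily what the init does internally:)

mdadm --examine /dev/sda /dev/sdb   # check whether the member disks carry RAID metadata mdadm recognizes
mdadm --assemble --scan             # assemble any arrays found from that metadata
cat /proc/mdstat                    # the assembled array should show up here (e.g. md0, or md126/md127 for IMSM)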
-
@Tom-Elliott I think the biggest problem it presents is the inability to register the host. I haven’t tried this in a great while though; maybe things are better now? I’ll try here in a bit.
-
@Wayne-Workman but why is it unable to register the host?
It finds the disk fine from what I can tell, right?
-
This is, for now, the only unresolved bug. Is there any timeframe for when we can get some info on its status?
-
Bump? Do you still want to try to do this, since it is being considered a bug?
-
@Tom-Elliott I’m on r7410 at home currently. I just deleted this host, tried a network boot, and did a full registration. It worked fine lol.
However, all it saw was sda, when I do have sdb there as well. They are still configured as RAID 0. I’ve learned a little more about this particular RAID card because of one of George’s posts. It’s a hybrid card: it doesn’t actually present one singular drive to the OS, it presents two. But it does have its own POST and BIOS menu, and can create volumes, delete them, restore volumes, and such… The card is like 6 bucks on eBay, and for what I do with it, it’s a great card. I’ve bought 3 of them lol.