dmraid and mdadm?
-
Server
- FOG Version: Running Version 1.3.0-RC-27 SVN Revision: 6023
- OS: Ubuntu 14.04
Client
- Service Version: ??
- OS: Windows 7 64bit
I’m attempting to image a Lenovo P50 with two HDDs configured as a RAID 0 array.
I tried setting “Host Kernel Argument” as I do on my Lenovo W530 to “mdraid=true”, but it doesn’t find the md array.
Booting into Debug.
I only have an empty container /dev/md0; I don’t get a /dev/md126 or md127 like I have on my W530.
/proc/mdstat shows no available devices.

During my troubleshooting I booted the machine with an Ubuntu 16.04 Live CD and was able to access the RAID array using dmraid, so I started looking at dmraid in FOG. It isn’t working; looking into the /etc/init.d/S20dmraid script, I found it’s not starting because the init script stops at line 8:

modprobe dm-mod >/dev/null 2>&1

The dm-mod module isn’t installed in /lib or /lib64, so it errors. I’m not sure why that stops the bash script.
As demonstrated below:
Here I’ve added an echo to the script after the modprobe and ran it; nothing was output:
https://goo.gl/photos/CrTtpzBb4wZXURmSA

Then I removed the modprobe line:
https://goo.gl/photos/tqSBoFqjv53dmZ318

As you can see, now the script runs and it finds my RAID volume.
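A likely explanation for the silent stop (an assumption about S20dmraid, since I can’t see its shebang here): init scripts are often run with `set -e`, in which case the first command that returns non-zero aborts the whole script, so a failing modprobe would kill everything after it. A minimal sketch of that behavior:

```shell
#!/bin/sh
# Sketch, assuming the init script runs under `set -e`: the first command
# returning non-zero aborts the script, so a failing `modprobe dm-mod`
# would silently stop the rest of S20dmraid.
status=0
out=$( sh -e -c '
  fake_modprobe() { return 1; }   # stand-in for the failing modprobe
  fake_modprobe >/dev/null 2>&1
  echo "dmraid startup continues" # never reached under -e
' ) || status=$?
echo "exit status: $status, output: [$out]"
```

If that is what’s happening, appending `|| true` to the modprobe line (or removing the line, as you did) would let the rest of the script run.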
Is this a bug or is dmraid deliberately disabled?
-
With the change you made, does the W530 still work?
See, RAID is kind of a weird beast.
-
I have no idea, because I’m not sure how to make that change permanent. I’m making the change on the P50 booted in debug mode in FOG.
-
You could just start the client in debug mode (both without mdraid=true).
-
To add on, the file that handles assembling of RAID devices is /etc/init.d/S99fog, particularly lines 17 through 20.
Mind you, we’re not starting dmraid; we’re actually using mdadm.
Maybe try a mechanism that can use mdadm to assemble the disks?
-
I haven’t been able to make mdadm assemble the disks.
For example:
mdadm --assemble /dev/md0 /dev/sda /dev/sdb
complains that /dev/sda and /dev/sdb do not contain a superblock.
-
@rbaldwin They wouldn’t; you need to point at the partitions that are defined on them (assuming, of course, that partitions are defined).
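In shell terms, the suggestion above amounts to something like the following; the mdadm stub only echoes the command so the sketch runs anywhere, and the partition names are assumptions (on the real machine, drop the stub and use the actual members):

```shell
#!/bin/sh
# Sketch of assembling from member partitions rather than whole disks.
# The stub below only echoes the command; it is not the real mdadm.
mdadm() { echo "would run: mdadm $*"; }
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
```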
What’s the output of fdisk -l for /dev/sda1 and /dev/sdb1?
-
What about the output of mdadm --assemble --scan?
-
The RAID array has a Windows 7 64-bit installation on it and is defined as ~1 TB.
RAID configuration in BIOS:
https://goo.gl/photos/iLZveDYK1vJ6Tijt6

mdadm --examine on /dev/sda:
https://goo.gl/photos/9DTovJpVxsrLPApA9

mdadm --examine on /dev/sdb:
https://goo.gl/photos/BY8ynhEzmsFNHtES6

mdadm --examine --scan output:
https://goo.gl/photos/Hq3CN682Hc3jshq1A

As you can see, mdadm can see the volume but for some reason doesn’t automatically assemble it. I don’t know enough about how this works to say why; I’m just looking at dmraid because I’ve seen it work on the Ubuntu 16.04 live CD, something I can’t say for mdadm so far on the P50. mdadm works fine on the Lenovo W530.
fdisk -l output:
https://goo.gl/photos/jMvtUtfyjsoyPZBH6
-
It’s in my last reply:
https://goo.gl/photos/Hq3CN682Hc3jshq1A
-
Sorry, I just noticed you said mdadm --assemble --scan. It returns:
“No arrays found in config file or automatically”
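One common mdadm workflow when --assemble --scan reports no arrays: record the examined metadata into a config file first, then scan against it. This is a general workaround, not something FOG does today; the stub only echoes the commands so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch: seed mdadm.conf from --examine --scan, then assemble against it.
# mdadm is stubbed here so the sketch is runnable without real hardware.
mdadm() { echo "would run: mdadm $*"; }
mdadm --examine --scan    # on the real machine: >> /etc/mdadm.conf
mdadm --assemble --scan   # now has ARRAY lines to work from
```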
-
It appears you might need to specify a chunk size? I don’t know what that chunk size will be, but if you can test the ideas in the above link and figure it out, I can work to add a “chunk” size argument for the RAID if it all works.
-
I was able to build the array with the mdadm command shown here:
https://goo.gl/photos/7gtJf1hHVi4VAjj18

The build command seems to be the key here, as it works with and without the chunk option.
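For readers who can’t open the screenshot, a build invocation for a two-disk RAID 0 presumably looks something like this; the level, device names, and chunk value are illustrative assumptions, not copied from the photo, and the stub only echoes the command:

```shell
#!/bin/sh
# Sketch: `mdadm --build` creates the array without reading or writing
# superblocks, which sidesteps the metadata that --assemble rejected.
# Values below are illustrative assumptions for a two-disk RAID 0.
mdadm() { echo "would run: mdadm $*"; }   # stub; drop on the real machine
mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sda /dev/sdb
```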
I haven’t mounted it yet to see if it’s readable, but this is progress from what I’ve seen so far. Not sure what to do now, since this works great in debug mode but won’t be done automatically on fresh reboots to debug, capture, or deploy.
-
Any pointers on this? I just need to be able to specify these commands to FOG when these hosts boot up for imaging.
-
@rbaldwin The difference is that your commands are entered by hand (with or without the chunk options). The methods in use when you do a “normal” run go through greatly different code.
Maybe it’s just me not knowing how to do so. If you’d like, you can play with the code a bit (in debug mode, maybe?).
The information FOG currently uses (and which works for the few other cases we have of people using a RAID layout) is located here:
https://github.com/FOGProject/fogproject/blob/dev-branch/src/buildroot/package/fog/scripts/etc/init.d/S99fog#L17
-
Specifically, I’m hoping that you could run
mdadm --assemble --scan
with a chunk size specified on the command line. If this successfully builds your IMSM array, then I can adjust our arguments to accept the “chunk” size. Perhaps:
mdadm --auto-detect --assemble --scan
Of course, I don’t have an array I could test this with, sorry.
-
I’m working through some thoughts, so if my messages seem to drift around, it’s probably just me thinking out loud.
-
Looking a bit more, I’ve added three hopeful bits toward solving this.
First, mdadm --auto-detect should be run by itself, as mdadm is then asking the kernel to activate the arrays. (Source: https://linux.die.net/man/8/mdadm)
So --assemble --scan is not “compatible” with the --auto-detect argument (mdadm --auto-detect --assemble --scan vs. mdadm --auto-detect).
I’ve updated the code base in the inits to handle this by trying these three things:
mdadm --auto-detect
mdadm --assemble --scan
mdadm --incremental --run --scan
If you jump on the dev-branch, it should update the inits with this new stuff. If these new changes still don’t work, please see if there’s a way we can do an auto assembly on the IMSM RAID. If we cannot auto-assemble, it is extremely difficult (currently) to make a working method for you. I suppose I could add a “postinitload” scripts thing, similar to the postdownload scripts, but I’d really prefer seeing whether the current code can handle this with these modifications, or an auto-assembly model, over a hand-crafted command to generate the array.
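The three attempts above could be chained so that later methods only run if the earlier ones produced nothing; a sketch, with mdadm stubbed so it runs anywhere and a hypothetical have_md_arrays helper (not part of the FOG inits) checking /proc/mdstat:

```shell
#!/bin/sh
# Sketch: try each assembly method in turn until /proc/mdstat lists an array.
# `mdadm` is stubbed to echo; `have_md_arrays` is a hypothetical helper.
mdadm() { echo "tried: mdadm $*"; }
have_md_arrays() { grep -qs '^md' /proc/mdstat; }
for try in "--auto-detect" "--assemble --scan" "--incremental --run --scan"; do
    mdadm $try          # word splitting is intentional here
    have_md_arrays && break
done
```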
-
@Tom-Elliott said in dmraid and mdadm?:
Specifically, I’m hoping that you could run
mdadm --assemble --scan
with a chunk size specified on the command line. If this successfully builds your IMSM array, then I can adjust our arguments to accept the “chunk” size. Perhaps:
mdadm --auto-detect --assemble --scan
Of course, I don’t have an array I could test this with, sorry.
--chunk is an invalid option with the --assemble command, as is --auto-detect.
-
@Tom-Elliott said in dmraid and mdadm?:
Looking a bit more, I’ve added three hopeful bits toward solving this.
First, mdadm --auto-detect should be run by itself, as mdadm is then asking the kernel to activate the arrays. (Source: https://linux.die.net/man/8/mdadm)
So --assemble --scan is not “compatible” with the --auto-detect argument (mdadm --auto-detect --assemble --scan vs. mdadm --auto-detect).
I’ve updated the code base in the inits to handle this by trying these three things:
mdadm --auto-detect
mdadm --assemble --scan
mdadm --incremental --run --scan
If you jump on the dev-branch, it should update the inits with this new stuff. If these new changes still don’t work, please see if there’s a way we can do an auto assembly on the IMSM RAID. If we cannot auto-assemble, it is extremely difficult (currently) to make a working method for you. I suppose I could add a “postinitload” scripts thing, similar to the postdownload scripts, but I’d really prefer seeing whether the current code can handle this with these modifications, or an auto-assembly model, over a hand-crafted command to generate the array.
I booted in debug mode and ran the following:
mdadm --auto-detect
mdadm --assemble --scan
mdadm --incremental --run --scan

/proc/mdstat shows 0 devices.
mdadm --detail /dev/md0
shows inactive with 0 devices.

A postinitload script would be great if I could specify it per host, as I have many Lenovo W530s/W520s with RAID that work just fine with the existing FOG RAID configuration. It’s only the brand-new UEFI-BIOSed Lenovo P50s that are having this issue.
Is there perhaps a newer version of the mdadm tools? Maybe these new Intel fake-RAID controllers aren’t supported by the older version?