unable to deploy RAID 1 disk
-
Is this software RAID?
-
@eistek Sorry, I don’t mean “fake-raid” negatively. It sounds a bit negative if you are a server guy, but the setup itself is perfectly fine.
Yes, from your screenshot you have the ICH10R controller. That is the Intel hardware-assisted software RAID. To use that RAID there is the hardware component that you set up, and then within the operating system there is the other part of the driver. FOG can see the array if you tell it to load the software RAID drivers.
Actually, there is another FOG admin who has the same issue as you at the moment: @Jonathan-Cool.
For your case I created a tutorial a while ago on managing the Intel RAID controllers with FOG:
https://forums.fogproject.org/topic/7882/capture-deploy-to-target-computers-using-intel-rapid-storage-onboard-raid
-
@george1421
I have checked your links, but I don’t have much experience with RAID and kernels. Do I need to add
Host Kernel Arguments: mdraid=true
Host Primary Disk: /dev/md126
for this host?
Let me note that I captured the image before adding these parameters, and I will try to deploy this image to RAID 1.
-
Hi,
Like @george1421 said, I have a similar issue with my o3620. Can you post the results of these commands from a debug task deployment?:
- cat /proc/mdstat
- mdadm -D /dev/md126
For me:
- cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md126 : active (read-only) raid1 sda[1] sdb[0]
      976759808 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sdb[1](S) sda[0](S)
      5288 blocks super external:imsm

unused devices: <none>
- mdadm -D /dev/md126
/dev/md126:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid1
     Array Size : 976759808 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           UUID : d7788497:a3f2cd21:2ff72bda:c45670f9

    Number   Major   Minor   RaidDevice State
       1       8        0        0      active sync   /dev/sda
       0       8       16        1      active sync   /dev/sdb
For me, the issue is the read-only state of the array. I just want to know if you have the same issue or if it is another issue…
Thank you.
-
@Jonathan-Cool
How can I get a command prompt?
The deploy is starting immediately.
-
I have enabled debug mode (I scheduled the deploy as a debug task);
-
@eistek Sorry, my real job has been very busy today, so I have little time at the moment. Jonathan has you on track for a solution. I can say that if you are starting out with FOG, you started with a very hard target computer. These Intel RAID controllers are a problem to work with in Linux.
I can see from your last image that the raid array is /dev/md126 and it is currently configured read-only (same problem as Jonathan). In your case resync=PENDING means you just created the array but it hasn’t completed the mirroring yet. I have a system in my test lab at the same point as you. I hope to spend some time after work hours to see if I can get my test system to activate the array and sync the sectors. If I can, then I can give you guidance.
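For reference, on a regular Linux install you can usually kick a pending array out of read-only with mdadm itself; whether that works inside FOS is exactly what I need to test, so treat this as a sketch rather than a confirmed fix:
# switch the array from read-only to read-write; on a healthy system this also starts the pending resync
mdadm --readwrite /dev/md126
# watch the resync progress
cat /proc/mdstat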
I can say the info you have provided will get us to a solution for you. So please wait until I can get into the lab.
-
@george1421 You are correct, the RAID was just created, because FOG broke my RAID configuration. I have deleted and rebuilt the RAID configuration. I am waiting for your solution. Thanks for your support.
-
@george1421 Just a note to myself:
[Wed May 31 root@fogclient ~]# mdadm --create --verbose /dev/md/imsm /dev/sd[a-b] --raid-devices 2 --metadata=imsm
[Wed May 31 root@fogclient ~]# mdadm -C /dev/md124 /dev/md125 -n 2 -l 1
mdadm: array /dev/md124 started.
mdadm: failed to launch mdmon. Array remains readonly
OK, after about 5 hours of working on this I have a solution. There is a missing array management utility (mdmon) that needs to be in FOS to get the array to switch from active (read-only) to active (auto-read-only) [a small but important difference]. Once I copied the utility over and recreated the array by hand, it started syncing (rebuilding) the raid-1 array. I need to talk to the developers to see if we can get this utility built into FOS.
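Roughly what I did in the debug shell once mdmon was in place (device names are from my lab box, so treat this as a sketch of the procedure rather than exact keystrokes):
# create the imsm container across both disks
mdadm --create --verbose /dev/md/imsm /dev/sd[a-b] --raid-devices 2 --metadata=imsm
# create the raid1 volume inside the container; with mdmon present, mdadm can now launch it
mdadm --create /dev/md126 /dev/md/imsm --raid-devices 2 --level 1
# the array should now show active (auto-read-only) and start rebuilding once written to
cat /proc/mdstat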
The document that led to the solution: https://www.spinics.net/lists/raid/msg35592.html
-
@george1421 said in unable to deploy RAID 1 disk:
Ok after about 5 hours of working on this I have a solution
5 hours! It’s unbelievable!!
So, if you want some command test outputs, tell us and we will gladly help you debug FOG. Many thanks for your investigation!
-
OK, we have a functional fix in place now. This fix will only work for FOG 1.4.0 and 1.4.1. You will need to go to where you installed FOG from. For git installs that may be /root/fogproject; for svn it may be /root/fog_trunk, or whatever you chose. The idea is that there are
binariesXXXXXX.zip
files there. Remove all of those files; the FOG installer will download what it needs again. (There will be one binariesXXXXX.zip for each version of FOG you installed.) Once those files are removed, rerun the installer with its default values (you already configured them). This will download the updated kernels and inits from the FOG servers.
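As a rough sketch of those steps (the paths here are examples, use your actual install directory):
cd /root/fogproject            # or wherever you installed FOG from
rm binaries*.zip               # force the installer to re-download the kernels and inits
cd bin
./installfog.sh                # rerun with your existing answers/defaults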
Now PXE boot your target computer with the RAID using a debug deploy [or capture, depending on what you wanted to do] like was done before. Once you are at the FOS command prompt, key in
cat /proc/mdstat
If md126 now says (auto-read-only) then you win!! If it still says (read-only) then you might not have the most current inits. We will deal with that once we see the output of the cat /proc/mdstat. I was able to deploy an image to a test system in the lab using the Intel RAID, so I know it does work with the new inits.
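For reference, the good state should look roughly like this in /proc/mdstat (block counts and device order will obviously differ on your hardware):
md126 : active (auto-read-only) raid1 sda[1] sdb[0]
      976759808 blocks super external:/md127/0 [2/2] [UU]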
-
@george1421 Correction, it will only work for 1.4.1
-
@george1421 You are absolutely great!!
-
@Tom-Elliott said in unable to deploy RAID 1 disk:
@george1421 Correction, it will only work for 1.4.1
Then corrected, I do stand.
-
@george1421
I have downloaded fog.1.4.1.tgz. After extracting it, I removed the binaries1.4.1.zip file and then started
./installfog.sh.
After the installation finished, I checked md126 in debug mode. It was not read-only;
then I started to deploy the image;
it looks good. I have to leave the office; we will see the result tomorrow.
Thanks for everything.
-
@eistek That first screenshot is perfect!!
What it tells us is that the newly created RAID array is resyncing (copying the master disk to the slave disk and thus building the array). It is rebuilding the array at 70MB/s (which is about the top speed of your SATA disks).
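In /proc/mdstat that resync shows up as a progress line along these lines (the percentage and finish time here are made up for illustration; the speed matches what your screenshot reports):
md126 : active raid1 sda[1] sdb[0]
      976759808 blocks super external:/md127/0 [2/2] [UU]
      [==>..................]  resync = 12.5% (122094976/976759808) finish=203.5min speed=70000K/sec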
-
@eistek said in unable to deploy RAID 1 disk:
i have changed undionly.kpxe to undionly.kkpxe
The iPXE kernel only manages the FOG iPXE menu and the launching of the FOS image. Once FOS Linux has started, the iPXE kernel (undionly.kpxe) is discarded, so changing this boot kernel will not have an impact on the error message you now have.
What these images show is that the FOS Linux kernel is crashing. This is typically related to a hardware error. This is only a wild guess, but I would say memory (a RAM chip) or the hard drive.
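If you want to start ruling those out: for the memory side, the FOG boot menu includes a Memtest86+ entry you can let run for a while. For the drive, something like the following from a debug shell would be a quick check, assuming smartmontools is present in your FOS image (which I have not verified):
# ask the drive for its own overall health assessment
smartctl -H /dev/sda
# dump the full SMART attribute table; look for reallocated or pending sectors
smartctl -a /dev/sda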