Problem Capturing right Host Primary Disk with INTEL VROC RAID1
-
@george1421 Here are the files.
Unfortunately I have not found a messages or syslog file; I only found a boot log file in the folder. -
@nils98 Nothing is jumping out at me as to the required module. The VMD module is required for VROC, and that is part of the FOG FOS build. Something I hadn’t asked you before: what version of FOG are you using, and what version of the FOS Linux kernel? If you PXE boot into the FOS Linux console, then run
uname -a
it will print the kernel version. -
@george1421
FOG is currently at version 1.5.10.16.
FOS 6.1.63
I set up the whole system a month ago. I only took over the clients from another system, which had FOG version 1.5.9.122. The RAID PC has now been added.
-
@nils98 said in Problem Capturing right Host Primary Disk with INTEL VROC RAID1:
FOS 6.1.63
OK good deal I wanted to make sure you were on the latest kernel to ensure we weren’t dealing with something old.
I rebuilt the kernel last night with what I thought might be missing; then I saw that mdadm had been updated, so I rebuilt the entire FOS Linux system, but it failed on the updated mdadm program. It was getting late last night so I stopped.
With the Linux kernel 6.1.63, could you PXE boot into debug mode and then give root a password with
passwd
and collect the IP address of the target computer with
ip a s
then connect to the target computer using root and the password you defined. Download /var/log/messages and/or syslog if they exist. I want to see if the 6.1.63 kernel is calling out for some firmware drivers that are not in the kernel by default. If I can do a side-by-side with what you posted from the live Linux kernel, I might be able to find what’s missing. -
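The collection steps above can be sketched as a short session. The address 192.168.1.50 is a placeholder for whatever `ip a s` reports on the target, so the sketch is kept as a non-executing transcript:

```shell
# Reference transcript of the debug-mode log collection; nothing here runs
# against real hardware, and 192.168.1.50 is a placeholder IP.
transcript='# on the target (FOS debug console):
passwd                                    # give root a temporary password
ip a s                                    # note the IPv4 address
# on your workstation:
scp root@192.168.1.50:/var/log/messages .
scp root@192.168.1.50:/var/log/syslog .   # may not exist; that is fine'
printf '%s\n' "$transcript"
```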
@george1421 here is the message file
-
@nils98 Ok, there have been a few things I gleaned by looking over everything in detail.
The stock FOS Linux kernel looks like it’s working, because I see this in the messages file during boot. I do see all of the drives being detected.
Mar 1 15:46:40 fogclient kern.info kernel: md: Waiting for all devices to be available before autodetect
Mar 1 15:46:40 fogclient kern.info kernel: md: If you don't use raid, use raid=noautodetect
Mar 1 15:46:40 fogclient kern.info kernel: md: Autodetecting RAID arrays.
Mar 1 15:46:40 fogclient kern.info kernel: md: autorun ...
Mar 1 15:46:40 fogclient kern.info kernel: md: ... autorun DONE.
This tells me it’s scanning but not finding an existing array. It would be handy to have the live-CD startup file to verify that is the case.
Intel VROC is the rebranded Intel Rapid Storage Technology enterprise (RSTe).
There is no setting for
CONFIG_INTEL_RST
in the current kernel configuration file: https://github.com/FOGProject/fos/blob/master/configs/kernelx64.config
It’s not clear if this is a problem or not, but just connecting the dots between VROC and RSTe: https://cateee.net/lkddb/web-lkddb/INTEL_RST.html
I did enable it in the test kernel below, which is based on Linux kernel 6.6.18 (note: newer than the kernel available via the FOG repo).
https://drive.google.com/file/d/12IOjoKmEwpCxumk9zF1vtQJt523t8Sps/view?usp=drive_link
To use this kernel, copy it to the /var/www/html/fog/service/ipxe directory and keep its existing name; this will not overwrite the FOG-delivered kernel. Now go to the FOG Web UI, open FOG Configuration->FOG Settings, and hit the expand-all button. Search for bzImage, replace the bzImage name with bzImage-6.6.18-vroc2, then save the settings. Note this will make all of your computers that boot into FOG load this new kernel. Understand this is untested; you can always put things back by replacing bzImage-6.6.18-vroc2 with bzImage in the FOG configuration.
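The server-side part of the steps above can be sketched roughly like this; the directory is FOG's default iPXE path and the kernel file name matches the download, but treat it as a sketch, not a script to run blindly:

```shell
# Hypothetical deployment sketch for the test kernel on the FOG server.
KERNEL=bzImage-6.6.18-vroc2
IPXE_DIR=/var/www/html/fog/service/ipxe
if [ -d "$IPXE_DIR" ] && [ -f "$KERNEL" ]; then
    cp -n "$KERNEL" "$IPXE_DIR/"     # -n: never overwrite the stock bzImage
    status="installed $IPXE_DIR/$KERNEL"
else
    status="not on a FOG server (or kernel missing); nothing copied"
fi
echo "$status"
# Then in the Web UI: FOG Configuration -> FOG Settings, search for
# "bzImage" and set the kernel name to bzImage-6.6.18-vroc2.
```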
Now pxe boot into a debug console on the target computer.
Do the normal routine to see if
lsblk
cat /proc/mdstat
and
mdadm --detail-platform
return anything positive. If the kernel doesn’t assemble the array correctly, then we will have to see if we can manually assemble the array using the mdadm tool.
I should say that we need to ensure the array already exists before we perform these tests, because if the array is defunct or not created, we will not see it with the above tests.
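The checks above, plus the fallback manual-assembly attempt, can be put together as one guarded script. Everything is wrapped so it runs on any Linux box without side effects; on the real target you would run the commands directly in the FOS debug console:

```shell
# Guarded diagnostic sketch for the VROC/md checks described above.
lsblk 2>/dev/null || true                        # do the nvme/md devices appear?
[ -r /proc/mdstat ] && cat /proc/mdstat          # any assembled arrays?
if command -v mdadm >/dev/null 2>&1; then
    mdadm --detail-platform 2>/dev/null || true  # does mdadm see the VROC platform?
    # if nothing is assembled, scan for existing md/IMSM metadata:
    mdadm --assemble --scan 2>/dev/null || true
fi
result="diagnostics done"
echo "$result"
```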
-
@george1421 Unfortunately, nothing has changed.
“mdm --detailed-platform” fails because “mdm” is not found; with “mdadm --detail-platform” it still shows the same error.
I have also searched the log files under the live system again but unfortunately found nothing.
-
@nils98 Well, that’s not great news. I really thought I had it with including the Intel RST driver. Would you mind sending me the messages log from booting this new kernel? Also, make sure when you are in debug mode that you run
uname -a
and make sure the kernel version is right. -
@george1421 Here are the logs
-
I apologise for not getting in touch for so long.
But I was able to find startup logs with an Ubuntu live system, and there my RAID is recognised directly.
Hope the logs help. -
Hi everyone, reading the investigations already done gives me the feeling you got close to a fix for this.
I got the experimental vroc file from the download link earlier in this topic.
I have exactly the same issue: Intel VROC / Optane with 2 NVMe in RAID1.
I can see the individual NVMe’s but not the RAID array/volume.
Is a fix for this expected anywhere in the near future?
-
@rdfeij For the record, what computer hardware do you have?
-
@george1421
SuperMicro X13SAE-F server board with Intel Optane / VROC in RAID1 mode.
2x NVMe in RAID1. -
With me, yes: in the BIOS, RAID1 exists over the 2 NVMe’s.
mdraid=true is enabled.
md0 is indeed empty; lsblk only shows content on the 2 NVMe’s but not on md0.
I hope this will be fixed soon, otherwise we are forced onto another (WindowsPE-based?) imaging platform, since we are getting more and more VROC/Optane servers/workstations with RAID enabled (industrial/security usage).
I’m willing to help out to get this solved.
-
@rdfeij said in Problem Capturing right Host Primary Disk with INTEL VROC RAID1:
@george1421
SuperMicro X13SAE-F server board with Intel Optane / VROC in raid1 mode.
2x NVMe in RAID1.
In addition:
the NVMe RAID controller ID is 8086:a77f ( https://linux-hardware.org/?id=pci:8086-a77f-8086-0000 )
0000:00:0e.0 RAID bus controller [0104]: Intel Corporation Volume Management Device NVMe RAID Controller Intel Corporation [8086:a77f]
The RST controller, which I think is involved, since all other SATA controllers are disabled in the BIOS:
0000:00:1a.0 System peripheral [0880]: Intel Corporation RST VMD Managed Controller [8086:09ab]
And the NVMe’s (but not involved, I think):
10000:e1:00.0 Non-Volatile memory controller [0108]: Sandisk Corp WD Black SN770 NVMe SSD [15b7:5017] (rev 01)
10000:e2:00.0 Non-Volatile memory controller [0108]: Sandisk Corp WD Black SN770 NVMe SSD [15b7:5017] (rev 01) -
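As a sanity check, the IDs above can be queried directly with lspci's `vendor:device` filter. This is a generic sketch, not FOG-specific; on machines without these controllers it simply reports that nothing was found:

```shell
# Look up the VMD NVMe RAID controller (8086:a77f) and the RST VMD managed
# controller (8086:09ab) by PCI vendor:device ID.
if command -v lspci >/dev/null 2>&1; then
    vmd=$(lspci -nn -d 8086:a77f; lspci -nn -d 8086:09ab)
    echo "${vmd:-no VMD/RST devices found}"
else
    echo "lspci not available here"
fi
check="pci id check done"
echo "$check"
```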
@rdfeij Well, the issue we have is that none of the developers have access to one of these new computers, so it’s hard to solve.
Also, I have a project for a customer where we were loading Debian on a Dell rack-mounted Precision workstation. We created RAID1 with the firmware, but Debian 12 would not see the mirrored device, only the individual disks. So this may be a limitation of the Linux kernel itself, and if that is the case, there is nothing FOG can do. Why I say that is that the imaging engine that clones the hard drives is a custom version of Linux; if Linux doesn’t support these RAID drives, then we are kind of stuck.
I’m searching to see if I can find a laptop that has 2 internal nvme drives for testing, but no luck as of now.
-
@rdfeij said in Problem Capturing right Host Primary Disk with INTEL VROC RAID1:
Intel Corporation Volume Management Device NVMe RAID Controller Intel Corporation [8086:a77f]
FWIW, the 8086:a77f device is supported by the Linux kernel, so if we assemble the md device it might work, but that is only a guess. It used to be that if the computer was in UEFI mode, plus Linux, plus RAID-on mode, the drives couldn’t be seen at all. At least we can see the drives now.
-
@george1421 Still tinkering on this.
As described here: https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/linux-intel-vroc-userguide-333915.pdf (chapter 4), we need a RAID container (in my case RAID1 with 2 NVMe) and, within the container, a volume.
But how can I test this? Debug mode doesn’t let me boot into FOG after tinkering.
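For reference, the container-plus-volume layout that guide describes boils down to two mdadm calls: first an IMSM container over the member drives, then a RAID1 volume inside it. The device names below are examples, and creating arrays destroys data, so this is kept as a non-executing transcript:

```shell
# Transcript sketch of the IMSM container + RAID1 volume creation
# (hypothetical device names; run only on a machine you intend to wipe).
transcript='# 1. IMSM container over both NVMe members:
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
# 2. RAID1 volume inside the container:
mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0
# 3. verify:
cat /proc/mdstat'
printf '%s\n' "$transcript"
```

Once a valid container and volume exist, the earlier checks (`cat /proc/mdstat`, `mdadm --detail-platform`) should finally have something to report.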