@george1421
In addition to this topic
I have it working with FOG version 1.5.10.41 (dev version)
Kernel 6.6.34
Init 2024.02.3
The stable release (also with the latest kernel/init) is not working.
@elchapulin remove the hibernation file (show hidden files) or use sysprep.
FOG won't run with those files on the system.
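A quick way to do that on the Windows side, from an elevated command prompt before capture, is:
powercfg /h off
which disables hibernation and removes hiberfil.sys.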
@george1421
I have it working; I created a postinit script with:
IMSM_NO_PLATFORM=1 mdadm --verbose --assemble --scan
rm /dev/md0
ln -s /dev/md126 /dev/md0
Although it recognizes md126, it still tries to do everything on md0; that's why the symlink is there.
Thanks to @Ceregon https://forums.fogproject.org/post/154181
Tested and working with SSD RAID 1 and NVMe RAID 1, with both resizable and non-resizable imaging.
It would be nice if there were a way to select postinit scripts per host (or group).
That way there would be no need for complicated extra scripting to work out whether the correct hardware is in the system.
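Until per-host selection exists, a rough way to gate it on the hardware is a check inside the script itself. A minimal sketch, assuming the script is sourced from fog.postinit in FOG's postinitscripts directory and that the Intel VMD NVMe RAID controller shows up as PCI ID 8086:a77f (as reported further down this topic):

#!/bin/bash
# Sketch of a postinit script (hypothetical name: vroc.postinit)
# Only apply the VROC workaround when the Intel VMD NVMe RAID controller is present
if lspci -nn | grep -qi '8086:a77f'; then
    # Assemble the IMSM (VROC) container and volume despite the failing platform check
    IMSM_NO_PLATFORM=1 mdadm --verbose --assemble --scan
    # FOG still targets /dev/md0, so point it at the assembled md126 volume
    if [ -b /dev/md126 ]; then
        rm -f /dev/md0
        ln -s /dev/md126 /dev/md0
    fi
fi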
@george1421 said in Problem Capturing right Host Primary Disk with INTEL VROC RAID1:
@rdfeij Well, the issue we have is that none of the developers have access to one of these new computers, so it's hard to solve.
Also, I have a project for a customer where we were loading Debian on a Dell rack-mounted Precision workstation. We created RAID 1 with the firmware, but Debian 12 would not see the mirrored device, only the individual disks. So this may be a limitation of the Linux kernel itself, and if that is the case there is nothing FOG can do. The reason I say that is that the image environment that clones the hard drives is a custom version of Linux; if Linux doesn't support these RAID drives, then we are kind of stuck.
I'm searching for a laptop that has 2 internal NVMe drives for testing, but no luck as of now.
I can give you SSH access if you want; my test box is online. But I have also gotten further;
I can't post the details, since I get a "spam is detected" notice when submitting…
@george1421 Tinkering on:
As described here: https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/linux-intel-vroc-userguide-333915.pdf (from chapter 4), we need a RAID container (in my case RAID 1 with 2 NVMe) and, within the container, a volume.
But how can I test this? Debug mode doesn't let me boot to FOG after tinkering.
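For reference, chapter 4 of that guide comes down to roughly the following mdadm calls (a sketch only; device names are examples, and creating the container/volume this way would wipe the existing data, so it only applies when building the array from Linux rather than assembling the one already created in the BIOS, as in the postinit workaround above):

# Create the IMSM (VROC) container over the two NVMe members
IMSM_NO_PLATFORM=1 mdadm -C /dev/md/imsm0 -e imsm -n 2 /dev/nvme0n1 /dev/nvme1n1
# Create a RAID1 volume inside that container
IMSM_NO_PLATFORM=1 mdadm -C /dev/md/vol0 -l 1 -n 2 /dev/md/imsm0
# The volume is then typically exposed as /dev/md126 (container as /dev/md127)
cat /proc/mdstat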
@rdfeij said in Problem Capturing right Host Primary Disk with INTEL VROC RAID1:
@george1421
SuperMicro X13SAE-F server board with Intel Optane / VROC in RAID 1 mode.
2x NVMe in RAID 1.
In addition:
the NVMe RAID controller ID is 8086:a77f ( https://linux-hardware.org/?id=pci:8086-a77f-8086-0000 )
0000:00:0e.0 RAID bus controller [0104]: Intel Corporation Volume Management Device NVMe RAID Controller Intel Corporation [8086:a77f]
The RST controller; I think it is involved, since all other SATA controllers are disabled in the BIOS:
0000:00:1a.0 System peripheral [0880]: Intel Corporation RST VMD Managed Controller [8086:09ab]
And the NVMe's (but not involved, I think):
10000:e1:00.0 Non-Volatile memory controller [0108]: Sandisk Corp WD Black SN770 NVMe SSD [15b7:5017] (rev 01)
10000:e2:00.0 Non-Volatile memory controller [0108]: Sandisk Corp WD Black SN770 NVMe SSD [15b7:5017] (rev 01)
Yes: in the BIOS, RAID 1 exists over the 2 NVMe's.
mdraid=true is enabled.
md0 indeed is empty.
lsblk only shows content on the 2 NVMe drives, but not on md0.
I hope this will be fixed/solved soon; otherwise we are forced onto another (WindowsPE-based?) imaging platform, since we are getting more and more VROC/Optane servers/workstations with RAID enabled (industrial/security usage).
I'm willing to help out to get this solved.
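For anyone poking at the same setup from a FOG debug console, these are the checks that should show where it breaks down (a sketch; exact device names and output will differ per platform):

# Confirm the VMD controller is bound to the vmd driver; its NVMe members
# then appear under the separate 10000: PCI domain, as in the lspci output above
lspci -nnk -s 0000:00:0e.0
# Show whether mdadm sees Intel VROC/IMSM platform support and the member metadata
mdadm --detail-platform
mdadm --examine /dev/nvme0n1
# Check what, if anything, got auto-assembled (IMSM volumes normally appear as
# md126 with the container as md127) and whether the partitions sit on the md device
cat /proc/mdstat
lsblk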
With me, yes: in the BIOS, RAID 1 exists over the 2 NVMe's.
mdraid=true is enabled.
md0 indeed is empty.
lsblk only shows content on the 2 NVMe drives, but not on md0.
I hope this will be fixed soon; otherwise we are forced onto another (WindowsPE-based?) imaging platform, since we are getting more and more VROC/Optane servers/workstations with RAID enabled (industrial/security usage).
I'm willing to help out to get this solved.
@george1421
SuperMicro X13SAE-F server board with Intel Optane / VROC in RAID 1 mode.
2x NVMe in RAID 1.
Hi everyone, reading the investigation already done gives me the feeling you got close to a fix for this.
I got the experimental VROC file from the download link earlier in this topic.
I have exactly the same issue: Intel VROC / Optane with 2 NVMe in RAID 1.
I can see the individual NVMe's, but not the RAID array/volume.
Is a fix for this to be expected any time soon?