So yes, this is a perfect solution since the Host Primary Disk can now be set by size. I have one image for the OS disk and one for the “D” drive, and I just switch the Host Primary Disk setting depending on which image I want to capture or deploy.
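For anyone setting this up, a rough sketch of how to list the byte values to plug into that field (assuming each drive exposes a single namespace; run as root):

    # print each NVMe device with its size in bytes, the value the Host Primary Disk field takes when set by size
    for d in /dev/nvme?n1; do echo "$d $(blockdev --getsize64 "$d")"; done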
-
RE: NVMe madness
-
RE: NVMe madness
I copied over the updated init.xz to
/var/www/html/fog/service/ipxe/
Then set the Host Primary Disk to 1000204886016 and attempted a capture.
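In other words, roughly these steps (the device node here is just an example; check which node the target drive currently has):

    # copy the updated init.xz into the directory FOG serves boot images from
    cp init.xz /var/www/html/fog/service/ipxe/
    # read the target drive's exact size in bytes to paste into Host Primary Disk
    blockdev --getsize64 /dev/nvme1n1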
It worked great
Thank you very much
-
RE: NVMe madness
Thanks. I will test as soon as I can. Probably middle of the night or early tomorrow.
-
RE: NVMe madness
No worries, I didn’t expect the fix to be available when I updated. I was just testing and wondering about the .size files and why they were missing and/or not being created automatically. I didn’t know they were only created in all-disk mode and thought maybe something else was wrong. Thanks for clearing that up.
-
RE: NVMe madness
Wow. After creating the .size files I captured an image from each NVMe drive to its corresponding existing image, and the .size files are now gone. Did I need to change the owner of the .size files to root?
Anyway, I updated the FOG dev build to the latest as of today (1.5.9.29) and captured an image from the NVMe. The .size files weren’t created. Maybe this is indicative of another problem?
-
RE: NVMe madness
@Sebastian-Roth
Thanks. Some of the images were initially captured on the 1.5.9 release candidates 11-16 from the dev branch. One new image was captured on the 1.5.9 final dev branch (just a couple of days ago); neither image directory had the .size files.
I have created d1.size and d2.size files inside the directories of the two NVMe-based images, using the values from the blockdev --getsize64 output.
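For reference, the manual workaround amounts to something like this (the image directory name is illustrative; substitute your own under /images):

    # write the drive's size in bytes into the image's .size file so the disk can be matched by size
    blockdev --getsize64 /dev/nvme0n1 > /images/MyOSImage/d1.size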
-
RE: NVMe madness
Thanks. I checked the images directory and the only things in it were the .mntcheck file, the directories containing the images, the postdownloadscripts directory, and a dev directory; none of them contained a d1.size or d2.size file.
-
RE: NVMe madness
The other NVMe is 1TB. Is there a way to specify the target NVMe by size? I can’t find that setting.
-
RE: NVMe madness
Hello,
I am running “the latest stable version: 1.5.9”
Yes, I want to back up or deploy to a certain 256GB NVMe drive. I set the Host Primary Disk to /dev/nvme0n1 and it was working: I would schedule a capture, reboot manually, and it would back up the correct drive. Early this morning I set a capture task and the FOG client rebooted the PC. I noticed it was capturing from the wrong drive, but no settings were changed. So I went in, set a capture task, manually rebooted, and it captured from the correct drive.
lsblk showed nvme0n1 was the 256GB drive
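A quick way to double-check which device name each drive has on a given boot (nothing special needed beyond a normal util-linux install):

    # list whole disks only, with size and model, to tell the 256GB and 1TB NVMe drives apart
    lsblk -d -o NAME,SIZE,MODEL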
It is too bad there is no way to choose the drive when you PXE boot a client.
-
NVMe madness
After reading up I found NVMe drives are initialized at different times during the boot process.
This causes issues when trying to capture or deploy to the right drive. Perhaps FOG could add something to choose the drive when deploying and capturing.
/dev/nvme0n1 is my OS drive, which I like to capture at regular intervals. Sometimes it is seen as /dev/nvme1n1, which causes a problem.
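One way to see identifiers that don’t depend on enumeration order (a sketch, assuming a standard udev setup) is the by-id symlinks:

    # these links are keyed to the drive's model/serial, so they stay stable even when nvme0n1 and nvme1n1 swap
    ls -l /dev/disk/by-id/nvme-*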
Feedback?
-
host primary disk
I’ve added a 2nd NVMe drive to my PC. Now when I capture an image it captures from the wrong NVMe drive.
I am not sure what to put for the Host Primary Disk setting. How can I find out?
-
RE: Odd issue with Win 10 UEFI images after updating from 1.5.9-RC2.15 to 1.5.9-RC2.17
Cool, an even newer version is out now. Gonna try 2.19.
-
RE: Odd issue with Win 10 UEFI images after updating from 1.5.9-RC2.15 to 1.5.9-RC2.17
I don’t know. The Win 10 UEFI image on the FOG server at work and a completely different Win 10 UEFI image on my home FOG server both deployed and worked fine the day before. Nobody else has access to either FOG server and the only change made was the update from RC2.15 to 1.5.9-RC2.17
-
RE: Odd issue with Win 10 UEFI images after updating from 1.5.9-RC2.15 to 1.5.9-RC2.17
Sorry, I am not explaining this well.
I was able to resolve the situation by creating a new UEFI image from a legacy image and capturing it. The original UEFI-based image that got corrupted is still not working.
-
RE: Odd issue with Win 10 UEFI images after updating from 1.5.9-RC2.15 to 1.5.9-RC2.17
The Windows 10 EFI image is just the legacy image converted to EFI.
Since the legacy image was still working fine, I deployed it, converted it to EFI, then re-captured it, replacing the broken Win 10 EFI image.
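The post doesn’t say how the conversion was done; if it was done inside the deployed Windows install, Microsoft’s MBR2GPT tool is the usual route (a sketch, run from an elevated prompt before re-capturing):

    rem validate first, then convert the MBR disk to GPT in place so the install boots via UEFI
    mbr2gpt /validate /disk:0 /allowFullOS
    mbr2gpt /convert /disk:0 /allowFullOS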
-
RE: Odd issue with Win 10 UEFI images after updating from 1.5.9-RC2.15 to 1.5.9-RC2.17
Yes, that update was installed on both images.
The odd thing is it only affected the EFI image(s), not the legacy one, and the same behavior occurred on two different FOG servers with two different images that deployed fine prior to the FOG upgrade.
But yeah, who knows. It was easy to sort out; it just seemed related to the upgrade at the time.