• RE: Restrict access to web management UI?

    @fogcloud PXE boot has to get to the boot.php file. It does this over port 80, or 443 if you have HTTPS enforced. When you enforce HTTPS, iPXE is compiled with the FOG CA and the certificate generated by said CA as trusted certs within your local version of iPXE.
    I’m not quite sure what you mean by restricting access only to the web UI. Do you mean closing all other ports? That will likely break TFTP and NFS, since they use other ports, so imaging and PXE booting will be broken. iPXE itself will be fine if you’ve booted into it outside of native PXE boot, where the iPXE boot file (i.e. ipxe.efi or snponly.efi) is downloaded via TFTP. iPXE then downloads the boot.php file from the FOG web server and boots into it to get to the FOG PXE menu.
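
    If it helps, you can sanity-check both of those paths from any machine on the subnet; the server IP, boot file name, and MAC below are just placeholders, and the boot.php path assumes the default /fog web root:

    # TFTP side: this is where ipxe.efi / snponly.efi come from
    tftp 192.168.1.10 -c get ipxe.efi

    # web side: the URL iPXE chains into next (use https:// if you enforce it)
    curl -I "http://192.168.1.10/fog/service/ipxe/boot.php?mac=00:11:22:33:44:55"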

    posted in General Problems
  • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

    @rodluz I got everything to compile, but it was a pita.

    I did compile the 0.3.32 version of partclone and 4.3 of mdadm on buildroot 2024.05.1. The new compiler complains when a package developer references files outside of the buildroot tree. Partclone referenced /lib/include/ncursesw (the multibyte version of ncurses), and buildroot did not build the needed files in the target directory. So to keep compiling, I copied the files it was looking for from my Linux Mint host system into the output target path, then manually updated the references in the partclone package to point to the output target. Not a solution at all, but it got past the error. Partimage did the same thing, but it references an include path for the slang directory; that directory did exist in the output target directory, so I just updated the package references to that location and it compiled. In the end the updated mdadm did not solve the VROC issue. I’m going to boot next with a Linux live distro and see if it can see the VROC drive; if yes, then I want to see what kernel drivers are there vs FOS.

    posted in General Problems
  • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

    @rdfeij Well, I’ve put a whole evening into trying to rebuild the FOG inits (virtual hard drive)…

    On my test system I cannot get FOS to see the RAID array completely. When I tried to manually create the array, it said the disks are already part of an array. Then I went down the rabbit hole, so to speak. The version of mdadm in FOS Linux is 4.2. The version that Intel deploys with their pre-built kernel drivers for Red Hat is 4.3. mdadm 4.2 is from 2021, 4.3 is from 2024. My thinking is that there must be updated code in mdadm to recognize the new VROC kit.
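
    For anyone following along, the standard mdadm calls to see what the tooling recognizes (run from a FOS debug shell; the device name is just an example) are:

    # show which Intel RAID platforms (RST/VROC) this mdadm build knows about
    mdadm --detail-platform

    # dump whatever RAID metadata mdadm finds on a member disk
    mdadm --examine /dev/nvme0n1

    # try to assemble any arrays it can find from existing metadata
    mdadm --assemble --scan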

    Technical stuff you probably don’t care about, but documenting here:
    buildroot 2024.02.1 has the mdadm 4.2 package
    buildroot 2024.05.1 has the mdadm 4.3 package (I copied this package to 2024.02.1 and it built OK; roughly the copy shown below)
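
    Something like this, assuming both buildroot trees are unpacked side by side (paths are illustrative):

    # pull the newer mdadm package definition into the older buildroot tree
    cp -r buildroot-2024.05.1/package/mdadm buildroot-2024.02.1/package/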

    But now I have an issue with partclone: it’s failing to compile on an unsafe path in an include for ncurses. I see what the developer of partclone did, but buildroot 2024.02.1 is not building the needed files…

    I’m not even sure if this is the right path. I’ll try to patch the current init if I can’t create the inits with buildroot 2024.05.1.

    posted in General Problems
  • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

    @rdfeij OK, good, you found a solution. I did find a Dell Precision 3560 laptop that has dual NVMe drives. I was just about to begin testing when I saw your post.

    Here are a few comments based on your previous post.

    1. When in debug mode (either a capture or a deploy), you can single-step through the imaging process by calling the master imaging script, called fog, at the debug command prompt. Just key in fog and the capture/deploy process will run in single-step mode. You will need to press Enter at each breakpoint, but you can complete the entire imaging process this way.

    2. The postinit script is the proper location to add the RAID assembly. You have full access to the FOG variables in the postinit script, so it’s possible to use one of the other tag fields to signal when it should assemble the array. It may also be possible to use some other key hardware characteristic to identify this system, like whether specific hardware exists or a specific SMBIOS value exists (see the sketch after this list).
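
    For illustration only, a rough sketch of what that could look like; the file path is the usual postinit hook on the FOG server and the PCI ID check is just one way to detect the controller, so treat both as assumptions to adapt:

    # snippet added to /images/dev/postinitscripts/fog.postinit, which FOS runs before imaging
    # the PCI ID below is just the example from this thread
    if lspci -n | grep -qi "8086:a77f"; then
        # Intel VMD NVMe RAID controller present; assemble VROC/IMSM arrays from their metadata
        mdadm --assemble --scan
    fi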

    I wrote a tutorial a long time ago that talked about imaging using the Intel RST adapter: https://forums.fogproject.org/topic/7882/capture-deploy-to-target-computers-using-intel-rapid-storage-onboard-raid

    posted in General Problems
  • RE: Error messages, Windows 11, Sysprep

    @zguo There are two things I can think of that could create the initial error message.

    1. You have the FOG client installed and the service wasn’t disabled before image capture. The FOG client is starting to do its actions before Windows is fully installed.
    2. You have a driver install that is forcing a spontaneous (all of a sudden) reboot of the computer before Windows is installed.
    posted in Windows Problems
  • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

    @rdfeij said in Problem Capturing right Host Primary Disk with INTEL VROC RAID1:

    Intel Corporation Volume Management Device NVMe RAID Controller Intel Corporation [8086:a77f]

    FWIW, the 8086:a77f device is supported by the Linux kernel, so if we assemble the md device it might work, but that is only a guess. It used to be that if the computer was in UEFI mode, plus Linux, plus RAID-On mode, the drives couldn’t be seen at all. At least we can see the drives now.
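
    If anyone wants to confirm on their hardware, this standard lspci call (run from a FOS debug shell or any Linux live environment) shows whether that controller is present and which kernel driver is bound to it:

    # list the VMD NVMe RAID controller plus the kernel driver/module in use
    lspci -nnk -d 8086:a77f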

    posted in General Problems
  • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

    @rdfeij Well, the issue we have is that none of the developers have access to one of these new computers, so it’s hard to solve.

    Also, I have a project for a customer where we were loading Debian on a Dell rack-mounted Precision workstation. We created RAID 1 with the firmware, but Debian 12 would not see the mirrored device, only the individual disks. So this may be a limitation of the Linux kernel itself, and if that is the case there is nothing FOG can do. The reason I say that is that the OS that clones the hard drives is a custom version of Linux. So if Linux doesn’t support these RAID drives then we are kind of stuck.

    I’m searching to see if I can find a laptop that has two internal NVMe drives for testing, but no luck as of now.

    posted in General Problems
  • RE: Group Management Settings not saving

    @MatMurdock You can also do a full host registration, which allows you to set the group and snapin associations at registration time and kick off the image from there.

    I use the API PowerShell module (see my signature) and have created custom functions and PowerShell tools to manage most of my assignments. That takes a bit more work to get set up at scale, but it gives you more customization options.
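
    If you’d rather not go the PowerShell route, the same kind of thing can be scripted straight against the FOG REST API; a minimal sketch, assuming a server at 192.168.1.10 and the two API tokens from the web UI (all placeholders here):

    # list every registered host as JSON via the FOG REST API
    curl -s \
      -H "fog-api-token: <system API token>" \
      -H "fog-user-token: <user API token>" \
      "http://192.168.1.10/fog/host"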

    Starting fresh? Well, it depends on how fresh; the best answer depends on how you’re going to use FOG. If these are all brand-new computers that aren’t in any other system yet, then doing a quick registration on them all might be best.
    I myself do full registration and inventory for new hosts. If all your computers already exist on the network or in Active Directory, you could get the host information and import it. Many moons ago I made this host scanner example https://forums.fogproject.org/topic/9560/creating-a-csv-host-import-from-a-network-scan?_=1721413305258 that will create a CSV of all hosts and their MACs on your network in the provided subnets.
    If you can get them all in beforehand, then mass-setting the snapins would be much easier.

    posted in FOG Problems
  • RE: Group Management Settings not saving

    @MatMurdock A newly imaged machine will automatically deploy any assigned snapins.

    The design is flexible and you can do it in many different ways, but here’s a general example that would utilize a group.

    • You have a group named ‘Group A’ with computers you want to image with the same image, join the domain in the same OU, and use the same bunch of snapins
    • You assign the image via group management; they all now have the same image
    • You assign the AD information; they all now have the same AD info
    • You assign some snapins; they all now have those snapins assigned (in addition to anything else those hosts already have assigned; you could also do a group remove of all snapins first if desired)
    • You push the task to deploy or multicast deploy on the group
    • All the machines in that group now have a deploy task for the image and a deploy task for the associated snapins
    posted in FOG Problems
  • RE: Fog Client replaced powershell script with "Please update your FOG Client, this is old and insecure"

    @MatMurdock That is correct.
    If git pull gives you trouble (it sometimes happens on upgrades), then do this within your git folder (i.e. /root/fogproject):

    git fetch --all                      # grab the latest refs from the remote
    git checkout working-1.6             # switch to the working-1.6 branch
    git reset --hard origin/working-1.6  # throw away local changes so the branch matches origin exactly
    git pull                             # make sure the branch is fully up to date
    

    Then the usual cd bin and ./installfog.sh are good to go.
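
    In other words, from the fogproject folder:

    cd bin
    ./installfog.sh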

    Also lols to CrowdStrike

    posted in Windows Problems