
    Posts made by Piotr86PL

    • RE: CentOS Partition Count Failed

      @george1421 said in CentOS Partition Count Failed:

      @wt_101 Would you switch to the dev branch to upgrade to 1.5.9.200 or later. The reason is that version of partclone has been upgraded to 0.3.20 from 0.3.13.

      In theory you should be able to use single disk resizable with XFS file system. I have not personally tried cloning an XFS system but as long as its a standard partition (not lvm) it should just work.

      I also tried cloning an XFS partition and it doesn’t work on the “Resizable” setting either. I think it is related to this code in the fog.upload file of the FOS system.

                  if [[ $ntfscnt -eq 0 && $extfscnt -eq 0 && $btrfscnt -eq 0 ]]; then
                      echo "Failed"
                      debugPause
                      handleError "No resizable partitions found ($0)\n   Args Passed: $*"
                  fi
      

      The script counts the NTFS, EXT and BTRFS partitions, and if all of those counts are zero it throws an error. On systems that only have XFS partitions this triggers the error above, because the script never counts XFS partitions (XFS partitions can’t be shrunk for resizable imaging anyway).
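
      Purely to illustrate where the count would have to be added - the $xfscnt variable, the lsblk call and the $hd disk variable below are my own sketch, not actual FOS code - something along these lines would let XFS-only disks get past the check:

      # hypothetical sketch: count XFS partitions the same way the other types are counted
      xfscnt=$(lsblk -nro FSTYPE "$hd" | grep -c '^xfs$')
      if [[ $ntfscnt -eq 0 && $extfscnt -eq 0 && $btrfscnt -eq 0 && $xfscnt -eq 0 ]]; then
          echo "Failed"
          debugPause
          handleError "No resizable partitions found ($0)\n   Args Passed: $*"
      fi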

      posted in FOG Problems
    • RE: Wake on LAN on Realtek cards

      @sebastian-roth

      Releasing a separate kernel build with the Realtek drivers integrated is a good idea. I work with FOG almost every day at the moment, so I could make sure the patch stays compatible (though I don’t know what the future will bring). People could test how this custom kernel behaves on their own hardware, and if no problems turn up, it could become the default kernel. Of course, if the ‘r8169’ driver built into the mainline kernel gets fixed, we could go back to generic kernel builds.

      Hmmmmm, I might have just stumbled upon a painful licence issue here. The Realtek drivers are released under GPLv2-only

      I’m no legal expert, but it seems to me that as long as the kernel is not an integral part of the FOG code (and is only downloaded during installation), it doesn’t matter what licence FOG itself is under. I’m just not sure how distributing pre-built kernels on GitHub would be treated (sharing only a patch is unlikely to be a problem - the code is public, after all). Either way, it would be better to consult someone knowledgeable about the topic.

      posted in Hardware Compatibility
    • RE: Wake on LAN on Realtek cards

      Here is a link to the patch: https://mega.nz/file/z5swlQjS#RuS5gGVD2Kc2IpQr3YTgEnw8xg3o8vaABeDBwuDEST0

      posted in Hardware Compatibility
    • RE: Wake on LAN on Realtek cards

      @sebastian-roth
      As it turns out, this topic is a bit complicated.

      The “r8169” driver built into the kernel supports the following Realtek chips:
      RTL8169 Gigabit Ethernet
      RTL8168 Gigabit Ethernet
      RTL8101 Fast Ethernet
      RTL8125 2.5GBit Ethernet

      Whereas the “r8168” driver provided by Realtek only supports:
      RTL8168
      RTL8111

      And installing r8168 REQUIRES removing r8169 from the kernel. Therefore, in order to keep everything working, I had to integrate not only Realtek’s r8168 into the kernel, but also the other drivers Realtek provides. These drivers are:
      r8101 - support for RTL8101 and related chipsets
      r8125 - support for RTL8125 and related chipsets
      r8169 - support for RTL8169 and related chipsets (DO NOT CONFUSE IT WITH THE IN-KERNEL DRIVER OF THE SAME NAME!!! They are not the same driver)

      Only by integrating all four of these drivers can the in-kernel r8169 driver be replaced while keeping support for all the chips - and removing the in-kernel r8169 driver is required, as I mentioned.

      I can provide the patch file that I use to compile modified kernels (the patch completely swaps the in-kernel r8169 for the four Realtek drivers mentioned above).
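
      If anyone wants to try it themselves, applying the patch is nothing unusual - roughly like this (the patch file name is just a placeholder and the FOS kernel build scripts may wrap these steps differently):

      # rough sketch: apply the Realtek driver patch to the kernel source and rebuild
      cd linux-5.15.x                          # kernel tree used for FOS (version is an example)
      patch -p1 < ../realtek-drivers.patch     # placeholder name for my patch file
      make olddefconfig                        # keep the existing config, accept new defaults
      make -j"$(nproc)" bzImage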

      Versions I use:
      r8101 - 1.037.01
      r8125 - 9.010.01
      r8168 - 8.050.03
      r8169 - 6.031.00 (to recap, Realtek’s own driver also uses this name, but it currently has nothing to do with the one built into the kernel)

      There is one more important thing!
      The Realtek drivers r8101 and r8168 both have a function called “mdio_real_read” in the r8101_n.c and r8168_n.c files. This function is not declared “static”, so the linker throws a multiple-definition error if you build both drivers into the kernel. You need to make a small modification and add “static” to this function in both files (r8125 already declares it static). My patch already contains this change.
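
      You can see the clash directly in the object files; for example (the paths depend on where you drop the Realtek sources into the tree):

      # both drivers export mdio_real_read as a global text symbol ("T"),
      # so the final kernel link fails with a multiple-definition error
      nm drivers/net/ethernet/realtek/r8101/r8101_n.o | grep mdio_real_read
      nm drivers/net/ethernet/realtek/r8168/r8168_n.o | grep mdio_real_read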

      posted in Hardware Compatibility
    • RE: Wake on LAN on Realtek cards

      @george1421
      The Magic Packet WoL setting is most likely stored in some kind of memory on the NIC. Windows works the same way: there is a “Wake on Magic Packet” option in Device Manager that has to be enabled for WoL to work - in Linux the equivalent is the command ethtool -s enp1s0 wol g.
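
      For anyone who wants to check their own card, it is just (enp1s0 is the interface name on my machines):

      ethtool enp1s0 | grep -i wake-on    # "Supports Wake-on: ...g" means magic packet is supported
      ethtool -s enp1s0 wol g             # enable wake on magic packet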

      Unfortunately, the in-kernel r8169 driver seems to have poorly implemented WoL support: no matter what I set with ethtool, WoL does not work, and after a reboot the setting returns to Disabled. Searching around the internet, I found that this is a known bug with some Realtek cards, and apparently the ones built into my motherboards are affected. The usual recommendation is to install the r8168 driver from the Realtek website, which does not have this bug. On distributions like Ubuntu that is easy enough - it is harder with FOS.

      So in the end, I replaced the r8169 driver with r8168 in the FOS kernel and created a corresponding patch, which I put in the “patch” directory to make it easier to build my own modified kernels. The problem disappeared.

      Additionally, in other computer labs, there are Dell Vostro 3681 PCs which also have a Realtek card and which suffer from an even worse issue - images restore at 3GB/min using Unicast. When I use my modified kernel with the r8168 driver, the speed is 13GB/min. The same situation occurs with the laptops we use - Vostro 3500. Original kernel: 3GB/min, modified kernel: 13GB/min.

      I don’t know where the difference comes from, but the driver embedded in the kernel doesn’t have a great reputation anyway - it causes a lot of problems with newer Realtek cards. I will use the modified kernel for now, until r8169 is fixed.

      posted in Hardware Compatibility
    • Wake on LAN on Realtek cards

      Hi,
      I have a computer lab of 17 computers with onboard Realtek NICs. I have been trying to get WoL working on them so I can restore operating systems faster (sometimes Windows, sometimes Ubuntu). Under Windows there is no problem, but under Linux there is: the r8169 driver built into the kernel does not handle WoL correctly. On its own that wouldn’t be an issue, because all I had to do was install the r8168-dkms package. Unfortunately FOS, which uses the in-kernel driver, keeps disabling WoL, so I can’t easily boot the computers over the network.

      Under Windows the system re-enables WoL by itself, so I only have to set it up once and WoL keeps working. Linux does not do this, and even with the r8168-dkms package I have to use ethtool to re-enable WoL, because after any operation in FOS the r8169 driver leaves WoL disabled.
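
      For reference, one way to re-arm WoL on the installed Linux systems at every boot would be a small oneshot unit - a rough sketch (the interface name and the ethtool path are examples, adjust them for your distro):

      # create a one-shot service that re-enables magic-packet wake at boot
      sudo tee /etc/systemd/system/wol.service >/dev/null <<'EOF'
      [Unit]
      Description=Enable Wake-on-LAN
      After=network.target

      [Service]
      Type=oneshot
      ExecStart=/usr/sbin/ethtool -s enp1s0 wol g

      [Install]
      WantedBy=multi-user.target
      EOF
      sudo systemctl enable --now wol.service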

      Is there any solution to this problem? As a last resort I could build my own kernel with the r8168 driver, but I would like to avoid this.

      posted in Hardware Compatibility
    • RE: Include partclone 3.20?

      Of course - the more variety, the better the test results will be. I will gradually start deploying my kernels and inits in the other computer labs at school, where there is also a variety of computers and configurations, so the lab supervisors will let me know if something doesn’t work on their hardware.

      Funny thing - the fact that every lab has a different hardware configuration usually causes problems, but at least now it is useful for FOG testing.

      posted in General
    • RE: Include partclone 3.20?

      There is no need to rush. The most important thing is that everything works stably - I will keep testing and monitoring my setup.

      As for partclone 0.3.20: the images I am working with were still created with version 0.3.13 and they restore without problems; newly captured ones work too. I agree that APFS is not exactly a popular filesystem, but as for BTRFS, more and more distributions are pushing it as the default filesystem - Fedora already uses BTRFS out of the box. It may not be widely used at the moment, but it could be soon. “Soon” does not mean tomorrow, so there is still plenty of time to test partclone 0.3.20 before releasing it into production.

      Regarding SPECULATION_MITIGATIONS: I have compiled a kernel with the option enabled and will test it on different hardware platforms to see whether it really affects performance that much. If not, I will leave it enabled - it is an extra layer of security after all, and disabling it also prints a warning message when FOS starts up, which doesn’t look great.
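
      As a quick way to see what the mitigations are actually doing on a given box (standard sysfs paths, nothing FOG-specific):

      grep -H . /sys/devices/system/cpu/vulnerabilities/*       # which mitigations are active
      grep SPECULATION_MITIGATIONS /boot/config-"$(uname -r)"   # how the running kernel was built (if the config file is installed)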

      posted in General
    • RE: Include partclone 3.20?

      You are right about that: partclone shows the speed of writing to the disk, not the speed of downloading the image from the server. I wouldn’t attribute the increase in write speed to better network parameters either, but rather to the storage side of my machine. I moved /images to XFS and made sure no unnecessary background services were running in Fedora. I also forgot to mention that I bumped up the VM itself - from 6 cores to 12 and from 8GB of RAM to 12GB. The machine may simply not have been able to keep up with reads and writes to the array before.

      I also don’t rule out that the changes to the kernel and init had an impact on the speed increase; the SPECULATION_MITIGATIONS option may well have had some bearing on it. iperf tests showed the full 1Gbps from the FOG server to FOS with both the old setup and the new one, so when I say that the server setup matters, I mean specifically the server’s hardware and disk configuration. I have made too many changes to FOG at once to say unequivocally what caused such an increase in speed - nevertheless I am pleased that I managed to squeeze out a bit more. I am happy with what I have and am unlikely to chase more; the important thing for me is that the current speeds are stable and nothing has crashed yet.

      posted in General
    • RE: Include partclone 3.20?

      It seems to me that when it comes to increasing the performance of FOG, server configuration makes a big difference.

      The previous server was based on Debian 10 with FOG 1.5.9, and the /images directory was on an EXT4 partition. The server was virtualised with VMware ESXi and the files were kept on an array of HP 7200RPM SAS drives. The configuration was left at the defaults, just as it was after the FOG installation. With that setup I was getting 7-8GB/min on PCs (with NVMe drives) for capture and deployment, and 12GB/min for multicast. We are talking about a Windows 10 image (NTFS). With an Ubuntu 20.04 image (EXT4), the speeds were noticeably higher - 18GB/min with Multicast and 12GB/min with Unicast.

      However, I wondered whether it would be possible to squeeze more out of the whole thing, so I set up a virtual machine based on Fedora 36 with the XFS file system (which supposedly handles large files well - and disk images certainly are large). I installed the latest development version of FOG. After installation, I manually compiled UDPcast version 20200328 (the latest one doesn’t work with FOG - it crashes as soon as a multicast session starts). I added the following options to the sysctl.conf file:
      net.ipv6.conf.all.disable_ipv6 = 1
      net.ipv6.conf.default.disable_ipv6 = 1
      net.core.rmem_default = 312144
      net.core.rmem_max = 312144
      net.core.wmem_default = 312144
      net.core.wmem_max = 312144
      to disable IPv6 (which I do not use) and to increase the socket buffer sizes. Additionally, I set 128 threads in the NFS settings, as the default is a small number (and the server is running on a vSphere cluster). In FOG itself, in the Storage Node settings, I set the bitrate to 1000M. I also compiled the latest kernel to make sure the latest patches are in place (I disabled SPECULATION_MITIGATIONS, as I have read it can affect CPU performance and I sometimes find myself imaging computers with really weak CPUs). Finally, I prepared my own init.xz with partclone 0.3.20, added my patches to the scripts, and updated Buildroot to avoid problems with BTRFS.
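
      For the NFS part, this is roughly all it takes on Fedora (assuming nfs-utils reads drop-ins from /etc/nfs.conf.d; older setups use RPCNFSDCOUNT in /etc/sysconfig/nfs instead):

      # bump the nfsd thread count to 128 and restart the NFS server
      sudo tee /etc/nfs.conf.d/threads.conf >/dev/null <<'EOF'
      [nfsd]
      threads=128
      EOF
      sudo systemctl restart nfs-server
      cat /proc/fs/nfsd/threads     # verify the running thread count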

      So that is what my current FOG configuration looks like, in case anyone was curious what exactly I changed to reach the speeds I did. A bit off-topic, but maybe it will be useful to someone.

      posted in General
    • RE: Include partclone 3.20?

      I built an init.xz with partclone 0.3.13, with no changes to buildroot or the scripts, and ran a series of tests. The end result was a difference of about 1GB/min in some cases in favour of partclone 0.3.20, which is a marginal difference. I also noticed that partclone 0.3.13 was a little slower to start than 0.3.20: after the “Attempting to deploy image” message, it took about half a minute longer for 0.3.13 to start.

      In general, changing the partclone version is unlikely to affect image restore speed; it matters more for things like support for non-Linux filesystems such as NTFS, or more exotic ones such as BTRFS.

      Also, I wouldn’t rush the update to partclone 0.3.20, but I would definitely like to see it in FOG 1.5.10 when it is released, since it does fix a few bugs and hasn’t caused me any problems. I would also ship a newer buildroot with FOG 1.5.10, so users aren’t scared off by the udev warnings that appear because of eudev. But that is all up to the developers now. I will keep an eye on my setup, and as soon as something happens I will investigate and let you know.

      posted in General
    • RE: Include partclone 3.20?

      I’m currently testing init.xz images I built myself from the partclone-0320 branch. On top of that, these images include my two GitHub commits that fix bugs with BTRFS (https://github.com/FOGProject/fos/pull/47 https://github.com/FOGProject/fos/pull/45). The whole thing was built with Buildroot 2022.02.5, which fixes the udev-related bugs (https://github.com/FOGProject/fos/issues/46). I know that is too many changes to treat my experience as a clean test of this particular partclone version, but I think it is worth sharing anyway.

      The FOG server running this custom init.xz is based on Fedora 36 (/images on XFS), the latest (at the time of writing) development version of FOG, a self-compiled 5.15.71 kernel, and UDPcast updated to 20200328 (the latest version does not work with FOG). The server runs in production, so around 30 images are restored each day, sometimes more (mostly Windows 10 and Ubuntu). Occasionally it also images computers running other systems, such as Fedora 36 Workstation with the BTRFS file system. I mainly use Multicast restores (computers in the school computer lab), but sometimes I need to restore a single computer and use Unicast.

      The previous FOG server was based on Debian 10 (/images on EXT4), FOG 1.5.9 and the latest official kernel and init.xz. The situation changed dramatically after the migration.

      On the old system, using Unicast I was getting speeds of around 7-8GB/min (for both restore and capture). With Multicast, this speed increased to 12GB/min.

      After I built the server on Fedora, I used init.xz from the partclone-0320 branch without patches and the official 5.15.68 kernel and the speed increased to 14GB/min using both Multicast and Unicast. The capture speed went up to 9GB/min. After changing init.xz to the current one, the speeds have not changed, but at least I can safely restore the BTRFS file system without any errors.

      I don’t know how much of this is due to partclone 0.3.20 and how much is due to migrating the system to Fedora with XFS, but I can say that so far partclone 0.3.20 has been running very stably and hasn’t crashed yet. I have already restored images based on NTFS, XFS, EXT4 and BTRFS. If I notice any flaws in how the whole thing operates, I will describe them, but so far I have no complaints about my setup. If it keeps performing as well as it does now, I will migrate the FOG servers in the other computer labs to what I am using in this one.

      posted in General
    • RE: Chainloading Failed when using EXIT method for drive boot

      @george1421

      Now with a browser go to http://<fog_server_ip>/fog/service/ipxe/boot.php?mac=00:00:00:00:00:00 That will display the text behind the iPXE menu. On that page search for default. That will be the section that is called when the timeout happens and boots from the local hard drive. By changing globally the exit modes for both bios and uefi to exit, it should put the exit command in the iPXE Menu script.

      And it is indeed putting the exit command in the iPXE menu script:

      choose --default fog.local --timeout 5000 target && goto ${target}
      :fog.local
      exit || goto MENU
      

      I’ve tried to mess with the embedded script itself and I found that if I replace

      :netboot
      chain tftp://${next-server}/default.ipxe ||
      prompt --key s --timeout 10000 Chainloading failed, hit 's' for the iPXE shell; reboot in 10 seconds && shell || reboot
      

      with this

      chain tftp://${next-server}/default.ipxe || echo Chainloading failed
      

      and try to “Boot from Hard Drive” using the “EXIT” method, it WORKS! The “Chainloading failed” message is not echoed back to me. But if I write it like this

      chain tftp://${next-server}/default.ipxe || 
      echo Chainloading failed
      

      the “Chainloading failed” message is echoed back to me. So I guess the issue here is not with the chain command but with the syntax.

      Apparently

      command || command
      

      is not the same as

      command ||
      command
      

      So I tried keeping the “prompt” command, but written like this:

      :chainloadfailed
      prompt --key s --timeout 10000 Chainloading failed, hit 's' for the iPXE shell; reboot in 10 seconds && shell || reboot
      
      :netboot
      chain tftp://${next-server}/default.ipxe || goto chainloadfailed
      

      And now it works! When I use EXIT by clicking “Boot from Hard Drive”, iPXE exits correctly. And if I rename default.ipxe on my server to something else (to simulate failed chainloading), the “Chainloading failed, hit ‘s’ (…)” message appears. So I guess the core of this issue is incorrect syntax in the iPXE script, and the solution is to write it the way I did.

      I should say straight away that if I write it like this (on one line)

      :netboot
      chain tftp://${next-server}/default.ipxe || prompt --key s --timeout 10000 Chainloading failed, hit 's' for the iPXE shell; reboot in 10 seconds && shell || reboot
      

      the script does not work, because it drops me straight into the iPXE shell, so the better solution is to write it like this:

      :chainloadfailed
      prompt --key s --timeout 10000 Chainloading failed, hit 's' for the iPXE shell; reboot in 10 seconds && shell || reboot
      
      :netboot
      chain tftp://${next-server}/default.ipxe || goto chainloadfailed
      

      So I guess we have solved this mystery. By the way, the next-server variable is working - I tried echoing it and it printed the IP address of my FOG server.

      posted in FOG Problems
    • RE: Chainloading Failed when using EXIT method for drive boot

      @george1421

      No, no - I was able to get to the FOG iPXE menu. Everything there works, except the “Boot from Hard Drive” option when “Exit to Hard Drive Type” is set to “EXIT” in the FOG menu. SANBOOT does not work because it does not recognize NVMe drives, so SANBOOT is not an option.

      When “Exit to Hard Drive Type” is set to “EXIT” and I click “Boot from Hard Drive” in the FOG iPXE menu, an error pops up - “Chainloading failed, hit ‘s’ for the iPXE shell; reboot in 10 seconds”.

      I thought it was an error embedded in iPXE’s own code and that iPXE was directly at fault. But today I searched through FOG’s source code and found that this error (“Chainloading failed…”) does not come from iPXE itself but from an iPXE script that FOG embeds in the iPXE binary. I mean this script.

      Here is this code:

      :netboot
      chain tftp://${next-server}/default.ipxe ||
      prompt --key s --timeout 10000 Chainloading failed, hit 's' for the iPXE shell; reboot in 10 seconds && shell || reboot
      

      and if I understand correctly, it is supposed to inform the user when iPXE can’t chainload the default.ipxe file.

      But it turns out that when you use iPXE’s exit command somewhere after default.ipxe has been loaded (for example in the Boot from Hard Drive option), it does not exit the WHOLE of iPXE but only the chainloaded scripts, and execution continues in iPXE’s embedded script, which falls through to the “prompt” command and produces the “Chainloading failed” error.

      When I deleted the “prompt --key…” part and left only this command:

      :netboot
      chain tftp://${next-server}/default.ipxe
      

      iPXE’s exit command now works when invoked from scripts, and the “Boot from Hard Drive” option started working.

      I don’t know if this is how iPXE is supposed to work. I think it should exit iPXE completely, even with the fragment of code I deleted to make things work. But it doesn’t work that way, and only deleting it and leaving just the “chain” command makes the “exit” command work. The rest of the script I left intact, of course.

      posted in FOG Problems
    • RE: Chainloading Failed when using EXIT method for drive boot

      I might have found the core of the issue. The iPXE binary generated by FOG embeds an iPXE script (Link to FOG Github). This script tries to chainload default.ipxe from the TFTP server, and if the chainloading fails it throws the “Chainloading failed” error.

      I thought it was an error message created by the iPXE team, but to my surprise it was written by the FOG Project team. So I had misunderstood some things (I was searching for a solution on the iPXE forums, thinking iPXE might be causing this error). When I deleted the whole chainload error handler, leaving only this:

      :netboot
      chain tftp://${next-server}/default.ipxe
      

      and recompiled iPXE, the exit mode now works properly and iPXE exits. I don’t know why the original script causes the error, though. According to the iPXE wiki, “exit” should exit the whole shell, but apparently it only exits the chainloaded scripts - not the embedded one - and the embedded script keeps executing and throws the chainload error. I don’t know whether it is supposed to work that way, or whether it is simply an iPXE bug that “exit” does not exit the whole of iPXE but only the chainloaded scripts (everything except the embedded one).

      Either way - I found what was causing the issue and fixed it (it works on the VM; I still need to check whether the ASUS PCs work now too). It is not an elegant fix though, so I am looking forward to hearing a more reliable solution from the FOG Project team. That might involve further, more sophisticated debugging.

      Thanks for helping!

      posted in FOG Problems
    • RE: Chainloading Failed when using EXIT method for drive boot

      @george1421 Yes, the target computers have a true BIOS mode. When I turn on CSM, I can choose whether to boot from PXE, hard disks, etc. in “Legacy Mode” or “UEFI Mode”, so I selected Legacy Mode everywhere. Now everything runs in classic BIOS mode: the OSes are installed in BIOS mode and PXE also boots using legacy PXE (with the undionly.kpxe file). So I guess it does have a true BIOS mode (of sorts). I will try setting both exit modes to EXIT when I get the chance to go to the school (it is the holidays in my country, so I am not there every day).

      We have vSphere infrastructure at school, so I remotely tested FOG on a virtual machine with both exit modes set to EXIT - and I get the same error there. So it may not be a problem with the motherboard but with FOG itself. I can post a screenshot of the issue. The virtual machine uses a true BIOS, so it is not a CSM issue. Typing “exit” in the shell works there too.

      [screenshot attached: 5da8696e-cbeb-4994-be06-22bf7a5761d2-image.png]

      posted in FOG Problems
    • RE: Chainloading Failed when using EXIT method for drive boot

      @george1421 Thanks for your response!

      I might have put things badly (English is not my native language), so I am sorry. The thing is, I have tried setting “EXIT” as the Boot from Hard Drive option, but it fails. iPXE throws the “Chainloading failed” error when I choose the Boot from Hard Drive option in the FOG iPXE menu, and the PC reboots after a few seconds. It does give me the chance to enter the iPXE shell, and there I can type “exit” and it works. I thought the chainloading issue might be happening because the BIOS is broken in some way (I have read on the iPXE forums that some BIOS implementations do not work with the exit command), but that is not the case here, because a manually invoked exit in the iPXE shell works just fine. It is the “EXIT” method of disk booting in FOG’s menu that fails.

      Editing the boot order from within Windows and Linux is one way of working around the issue, and I can do it that way if there is nothing that can be done about the “Chainloading failed” error when the hard disk boot method is set to “EXIT” in FOG and I try to boot to the drive from FOG.

      I forgot to mention: I have tried both the stable FOG 1.5.9 and the latest dev version of 1.5.9, but neither works.

      posted in FOG Problems
    • Chainloading Failed when using EXIT method for drive boot

      Hello!

      I have been using FOG Project at my school for several years. Until now the PCs were imaged using Unicast deployment; recently I have started using Multicast to speed up the process. My goal is fully automatic image restoration (I click “Deploy”, the PCs are woken over LAN and automatically boot from PXE).

      The computers in my classroom are based on the ASUS PRIME Z490-P motherboard with an onboard Realtek NIC, and the disks are ADATA NVMe drives. When the board is in UEFI mode, every time I restore an image the boot order changes and PXE moves down the list. It is not an issue when deploying the same image (the boot order stays the same), but in my classroom students are taught both Linux-based and Windows-based networking, so for one class I need to deploy Ubuntu and for another Windows Server - and the boot order changes whenever I switch the image I am deploying.

      My plan was to turn on CSM on the board and install the OSes in BIOS mode to stop the boot order from being overwritten. Unfortunately, iPXE in the SANBOOT exit mode does not recognize the NVMe SSDs, and when using the “EXIT” mode iPXE won’t exit and displays the “Chainloading failed” error. I have tried undionly.kpxe and undionly.kkpxe - nothing works. But when I am dropped to the iPXE shell and manually type “exit”, everything works flawlessly and the OS boots.

      Is there any solution to the “Chainloading failed” error? I could switch back to UEFI mode, but then the boot order gets overwritten whenever an image is restored. Dell OptiPlex machines had an option called “Wake-on-LAN to PXE”; I wish ordinary motherboards had it.

      Thank you in advance for your help.

      posted in FOG Problems ipxe boot nvme bios