Posts made by DBCountMan
-
RE: Configure FOG Server with two NICs
@george1421 I see the "set fog-ip" and "set storage-ip" variables. Could I set those two variables to the IPs of different interfaces? For example, the IP of the interface serving iPXE would be "set fog-ip" and the IP of the interface serving NFS would be "set storage-ip". Would that work?
-
RE: Configure FOG Server with two NICs
@george1421 Even if I configure the storage group with one subnet and TFTP with another?
-
Configure FOG Server with two NICs
Here’s the scenario I’d like to hash out. I want to virtualize FOG with the following config:
The VM server will have two NICs. Is it possible to serve TFTP on both interfaces, one of which would also host DHCP on the internal VM network? The other interface would function normally, configured to work with the existing DHCP server on the production network. I’m trying to find ways to speed up imaging and reduce production network load when imaging a set of VMs on the same Hyper-V server.
-
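For what it's worth, dnsmasq can cover a layout like the one described above: it serves TFTP on every interface it listens on, can run full DHCP on one subnet, and can run proxyDHCP on another so the existing production DHCP server stays authoritative. A minimal sketch; the interface names, subnets, and boot file below are assumptions to adapt:

```conf
# Hypothetical /etc/dnsmasq.d/fog.conf sketch; interface names,
# subnets, and boot file are assumptions, adjust to your setup.
interface=eth0            # production NIC
interface=eth1            # internal VM switch NIC

# TFTP is served on every interface dnsmasq listens on
enable-tftp
tftp-root=/tftpboot

# Full DHCP only on the internal VM network
# (dnsmasq matches this range to eth1 by its subnet)
dhcp-range=10.0.0.100,10.0.0.200,12h
dhcp-boot=undionly.kpxe,,10.0.0.1

# proxyDHCP on the production subnet: the existing DHCP server keeps
# assigning addresses, dnsmasq only supplies the PXE boot information
dhcp-range=192.168.1.0,proxy
pxe-service=X86PC,"Boot to FOG",undionly.kpxe
```

dnsmasq picks the right dhcp-range for each interface by matching the interface's subnet, so no per-range interface tags are needed in the simple case.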
RE: HyperV Gen1 Hangs on iPXE Initializing Devices
I know this is an old post, but I too have issues booting Gen 1 Hyper-V with iPXE. It turns out the message I get (error 1c25e002) is related to a bug in iPXE when used with Gen 1 Hyper-V. I tried recompiling as you suggested, but it didn’t make a difference. The error code appears right after default.ipxe is downloaded, during the boot.php download, and says “Invalid Argument”. It works fine in VirtualBox via legacy boot, though.
-
Limit disk space that FOG can use
Is it possible to limit the amount of disk space FOG can use without partitioning? I built a new Hyper-V server with about 13 TB that will host VMs, FOG included, and I will be mounting the images from the host to the VM over SMB via an internal switch. I understand that the FOG UI shows the total size available wherever /images is located, even if a network share is mounted at /images. So without partitioning the volume, is there a way to tell FOG to limit the space available for images?
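One possible workaround, assuming the UI simply reports the free space of whatever filesystem backs /images: loop-mount a fixed-size image file at /images, which caps usable space without repartitioning the volume. A hypothetical sketch; the path and size are placeholders, and the commands need root:

```shell
# Hypothetical sketch: cap /images at 2 TB with a loopback file.
truncate -s 2T /srv/fog-images.img       # sparse file; only used blocks consume real space
mkfs.ext4 -F /srv/fog-images.img         # -F: allow mkfs on a regular file
mount -o loop /srv/fog-images.img /images
# Persist across reboots:
echo '/srv/fog-images.img /images ext4 loop 0 0' >> /etc/fstab
```

df on /images (and so, presumably, the FOG UI) should then report the 2 TB cap rather than the full 13 TB volume.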
-
RE: Very slow imaging performance on XCP-NG guest vm
@Milheiro Zstd level is the compression level, right? I’m aware of that one. It makes sense that increasing compression makes capture/deploy take longer. What I was experiencing were Linux kernel-level errors, and only when deploying to a virtual disk that resides on the XenServer’s local storage array. I wonder if it’s something like TrueNAS, where performance is worse when you let the storage controller handle the array versus a soft RAID controlled by the OS.
-
RE: Very slow imaging performance on XCP-NG guest vm
@Sebastian-Roth The devel kernel version 6.1.22 didn’t help, BUT I was using the server’s local storage to store the virtual disk. I mounted an SMB share on the Xen server, stored the virtual disk on that, and voilà: no more errors.
Conclusion: Nothing to do with FOG.
-
RE: Very slow imaging performance on XCP-NG guest vm
@Sebastian-Roth FOG version 1.5.10. Not sure how to find the FOS kernel version. Are you referring to the bzImage kernel version? If so, the kernel version I’m on is 5.15.93.
-
Suggestion for install.sh script
I see that the install script still tries to set up tftpd-hpa. I have my FOG server set up as a next-server, and my DHCP server forwards PXE requests to it. I use dnsmasq instead of tftpd-hpa. The installer errors out at the very end with:
Mar 29 20:08:29 fogserver systemd[1]: Starting LSB: HPA's tftp server...
Mar 29 20:08:29 fogserver tftpd-hpa[3413975]: * Starting HPA's tftpd in.tftpd
Mar 29 20:08:29 fogserver systemd[1]: tftpd-hpa.service: Control process exited, code=exited, status=71/OSERR
Mar 29 20:08:29 fogserver systemd[1]: tftpd-hpa.service: Failed with result 'exit-code'.
Mar 29 20:08:29 fogserver systemd[1]: Failed to start LSB: HPA's tftp server.
TFTP still serves files via dnsmasq, though. To be honest, I forget why I moved to dnsmasq; it’s been so long. Would it be possible to modify the install script to ask whether you want to use tftpd-hpa or dnsmasq?
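In the meantime, a workaround sketch (service names here assume a Debian/Ubuntu-style system) is to disable the installer-managed tftpd-hpa after the install finishes and let dnsmasq keep serving TFTP:

```shell
# Hypothetical sketch, assuming Debian/Ubuntu service names; run as root.
systemctl disable --now tftpd-hpa    # stop it and keep it off port 69 at boot
# dnsmasq TFTP fragment (e.g. in /etc/dnsmasq.d/fog.conf):
#   enable-tftp
#   tftp-root=/tftpboot
systemctl restart dnsmasq
```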
-
Very slow imaging performance on XCP-NG guest vm
Not a critical issue, as I only use XCP-NG to image VMs for testing the images. Just wondering if anyone else has this issue.
Whenever I deploy an image to a VM, it crawls down to less than 1 GB/min with several delays between screen updates. Eventually I see these messages appear:
The host is a Dell PowerEdge R710. I give the guest 4 cores and 8 GB RAM. I tried 1 socket with 4 cores and also 4 sockets with 1 core each. The network connection is 1 Gbps. Imaging the same VM with Acronis True Image took about 8 minutes; deploying an image with FOG to the same VM can take 20+ minutes.
-
RE: ipxe boot slow after changing to HTTPS
@Sebastian-Roth said in ipxe boot slow after changing to HTTPS:
never got UEFI PXE booting to work in vbox on Linux
Even when using the Paravirtualized Network Adapter in VBox?
-
RE: ipxe boot slow after changing to HTTPS
@Sebastian-Roth said in ipxe boot slow after changing to HTTPS:
The default on Linux virtualbox: Intel PRO/1000 MT Desktop (82540EM)
Hmm. I’m assuming you’re booting legacy PXE rather than UEFI, since UEFI PXE boot in VBox requires the Paravirtualized Network Adapter (virtio-net). Not that it made a difference, for me at least, once iPXE was loaded.
-
RE: ipxe boot slow after changing to HTTPS
@Sebastian-Roth Cool, thanks, much appreciated. By the way, this isn’t an operation-breaking critical issue, so take your time.
-
RE: ipxe boot slow after changing to HTTPS
@Sebastian-Roth Definitely looks like it is isolated to ipxe.
@Sebastian-Roth said in ipxe boot slow after changing to HTTPS:
I have never seen this on my HTTPS setups
Out of curiosity, what NICs do you typically run iPXE on?
-
RE: ipxe boot slow after changing to HTTPS
@Sebastian-Roth said in ipxe boot slow after changing to HTTPS:
- like a packet from a different connection (but on the same ports!)
This could be the NAT’d VM IP. I ran Wireshark on the Default Hyper-V Switch adapter.
-
RE: ipxe boot slow after changing to HTTPS
@Sebastian-Roth I PM’d you a pcap.
Ran these tests on my hyper-v and xcp vms:
- In the FOG debug console (both Hyper-V and XCP showed this result):
wget --no-check-certificate https://fogserverip/fog/service/ipxe/bzImage
wget: not an http or ftp url: https://fogserverip/fog/service/ipxe/bzImage
- Fetching the kernel bzImage took about 3-4 seconds on Hyper-V and 10 seconds on XCP, then returned with
bzImage...ok
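If I understand that first error correctly, “wget: not an http or ftp url” is what BusyBox wget typically prints for any https:// URL when it was built without TLS support, so --no-check-certificate never even comes into play; the URL scheme is rejected outright. Assuming curl is present in the debug environment, it could serve as a cross-check (fogserverip is the placeholder from the command above):

```shell
# curl -k skips certificate verification, analogous to
# wget --no-check-certificate, but independent of BusyBox's wget build
curl -k -o /tmp/bzImage https://fogserverip/fog/service/ipxe/bzImage
```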
-
RE: ipxe boot slow after changing to HTTPS
I just want to reiterate that when I say slow/fast, I’m referring to the time it takes to initiate a download (GET) of a file via HTTPS. Once the download starts, the speed is fine.