• Issue with creating hooks

    Unsolved
    8
    0 Votes
    8 Posts
    2k Views
    F

    @Tom-Elliott That did not have an impact; I'm still seeing ${serial} in the host list. If it helps, I am also seeing the following at the bottom of my host management page:

    ${pingstatus} ${host_name} ${deployed} ${image_name} ${serial}

  • 0 Votes
    9 Posts
    2k Views
    J

    Sorry to resurrect an old thread. I ran into this issue today and have found a resolution (at least in my case).

    I had a quite old image that I wanted to update. The client I wanted to image was from a completely different system (a VM). I cloned the VM over the top of the existing image, got the same exact 504 error, and it would eventually fail to update the database.

    I created a new image in FOG and assigned that new image to the new host. It went through flawlessly.

    It's weird because I've done this in the past on multiple occasions and never had an issue. More than likely, I created the older image (the one I was trying to clone over) on v1.5.9; I updated to v1.5.10 a couple of months ago.

    So if you have this same problem, try creating a new image if overwriting an old image fails with the 504 error.

  • FOG DHCP server assigns different addresses to the same MAC address

    Unsolved
    2
    0 Votes
    2 Posts
    425 Views
    Y

    @helian

    Hi,

    Which distribution do you use?
    There are many solutions for this problem.

    For example:
    1. Edit your DHCP range.

    I do not use the FOG server as the DHCP server, but I can help you if you run a Debian-based distribution 😉

    nano /etc/dhcp/dhcpd.conf

    find the block that looks like this:

    subnet 192.168.14.0 netmask 255.255.255.0 {
        range 192.168.14.10 192.168.14.100;
        option routers 192.168.14.1;
        option domain-name-servers 8.8.8.8, 8.8.4.4;
        default-lease-time 600;
        max-lease-time 7200;
    }

    You can set the DHCP range however you want; I would suggest 150-230. Be careful to edit only the last octet, and do not change your whole network.

    Restart your DHCP server:

    systemctl restart isc-dhcp-server

    Now you can check the current leases in the DHCP leases file; it lists every host on this network. You can use any IP address below 192.168.14.150, but do not use the FOG server's IP address or any address that is already in use.

    nano /var/lib/dhcp/dhcpd.leases
    systemctl restart isc-dhcp-server

    I hope this helps 🙂
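    Since the original question was about one MAC address getting different IPs, it may help to know that isc-dhcp-server also supports static reservations directly in dhcpd.conf. A minimal sketch, where the host name, MAC, and IP are placeholders you would replace with your own:

    ```conf
    # /etc/dhcp/dhcpd.conf -- pin one client to one address by its MAC
    host client01 {
        hardware ethernet 00:11:22:33:44:55;   # the client's MAC address
        fixed-address 192.168.14.50;           # must be outside the dynamic range
    }
    ```

    Restart isc-dhcp-server after editing, and keep the fixed address outside the range declared in the subnet block.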

  • Can't "create new image"

    Unsolved
    7
    0 Votes
    7 Posts
    1k Views
    S

    I was able to resolve this by recreating the Storage Management Node on the master server. My master node does not host any images but it was required to have a node present for the Create New Image page to be displayed.

  • WOL - bug or feature ?

    Unsolved
    1
    0 Votes
    1 Posts
    221 Views
    No one has replied
  • Active Tasks tab http error 500 (only this tab)

    Unsolved
    1
    0 Votes
    1 Posts
    187 Views
    No one has replied
  • kernel panic when imaging

    Solved
    9
    0 Votes
    9 Posts
    2k Views
    george1421G

    @tlehrian OK, so you get a BSOD with Windows and a kernel panic under Linux (same but different). That points to a hardware issue with this computer.

    I would swap memory with a known-good computer, even if the memory test comes back OK. By moving parts, hopefully the problem will move with the hardware.

    Reseat or move other PCIe-attached devices such as GPUs (if discrete) or other add-in riser cards.

    Now that you have updated the BIOS, go in and reset the firmware settings back to factory defaults, in case there was a setting change in the new firmware that corrects a known issue.

    Sorry, I can only give random ideas, but since this seems to be a hardware issue I can only make logical guesses here.

  • Red dot in Fog

    Unsolved
    10
    0 Votes
    10 Posts
    2k Views
    T

    @george1421 Thanks a lot, it works !
    Have a nice day and thank you again !

  • FOG Main Server cannot determine free storage from a fog storage node

    11
    0 Votes
    11 Posts
    5k Views
    Tom ElliottT

    @kamburta I am not sure this is the right topic to be trying to bring back from the dead. Quite a lot (almost everything) has changed about FOG since 2014.

    Registering nodes to the master is now automated during the installation process of a new storage node.

  • Specifying the target disk works with HDD but not with NVMe.

    Unsolved
    4
    0 Votes
    4 Posts
    745 Views
    Tom ElliottT

    @mashina hd isn’t a “true” kernel argument, and as such isn’t built into an environment variable at boot. As the OS loads, the getHardDisk function gets called and makes its best guess at which drive to use.

    If you want a specific host’s disk to be /dev/nvme1n1, then you should set it on the host under kernel device (or, if it’s always global, set fdrive=/dev/nvme1n1 in the extra args instead of hd=/dev/nvme1n1).

  • What's the best way to rename the computer before joining the domain

    Unsolved
    4
    0 Votes
    4 Posts
    1k Views
    george1421G

    @professorb24 Here is a wiki page on the fog client install and setup: https://docs.fogproject.org/en/latest/installation/client/install-fog-client/

    The unattend.xml file is a Windows thing. There are many resources on the internet that discuss its setup: https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/update-windows-settings-and-scripts-create-your-own-answer-file-sxs?view=windows-11

    The unattend.xml file is an auto-answer file used by the Windows setup program to pre-answer all of the questions the installer might ask during installation. There are even answer-file generators on the internet where you answer a few simple questions and they create the answer file in the proper format, like this one: https://www.windowsafg.com/win10x86_x64_uefi.html (I would be careful entering your actual license key on an internet web page; just edit the answer file by hand to include your key once you have it).

    I also have some tutorials on FOG post-install scripts. This one has code snippets at the bottom of the post that discuss the unattend.xml file and how to potentially update it with the script: https://forums.fogproject.org/topic/7740/the-magical-mystical-fog-post-download-script The way the forum works, read the first post and then scroll to the end to read the second and third posts in the series.

  • Multicast works, but gets stuck before finishing

    Unsolved
    1
    0 Votes
    1 Posts
    197 Views
    No one has replied
  • 0 Votes
    1 Posts
    199 Views
    No one has replied
  • FOG + PXE / SNPonly.efi + ipxe.efi

    Unsolved
    1
    0 Votes
    1 Posts
    388 Views
    No one has replied
  • Wake on LAN not working after deploying with shutdown

    Solved
    3
    0 Votes
    3 Posts
    929 Views
    K

    Thanks for clarifying it.
    The default bzImage kernel already had the network drivers needed for my setup (onboard Realtek NIC). I checked the WoL status of the ethernet adapter with ethtool. It showed:

    Supports Wake-on: pumbg
    Wake-on: d

    So WoL was supported, but disabled.
    Then I followed this guide to enable WoL on boot: https://wiki.archlinux.org/title/Wake-on-LAN
    I created the /etc/udev/rules.d/81-wol.rules file in init.xz and init_32.xz, and added this rule:

    ACTION=="add", SUBSYSTEM=="net", ATTR{address}!="00:00:00:00:00:00", RUN+="/usr/sbin/ethtool -s $name wol g"

    I use ATTR{address} instead of NAME, so this rule would apply to any network interface (except loopback), regardless of the interface name.
    With this rule in place I can wake up computers after a shutdown task (deploy with shutdown or capture with shutdown).

    TL;DR: I only needed to add a udev rule to the pre-built init.xz and init_32.xz (https://github.com/FOGProject/fos/releases/latest) to enable WoL.

  • Isolated Network Install - No Internet

    Solved
    5
    0 Votes
    5 Posts
    960 Views
    george1421G

    @atlas When it comes to opensource, the only wrong answer is one that doesn’t work. Well done!

    Another hackish way, instead of changing the programming, would be to enter a fake but valid entry in the /etc/hosts table to point the DNS name at your internal server. That way you can use FOG’s native code when the next version comes out. But again, if it worked for you, it was the right answer.
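    As a sketch of that /etc/hosts override, where the hostname and internal IP are placeholders for your own environment:

    ```conf
    # /etc/hosts on the FOG server: resolve the external DNS name
    # to an internal mirror so the installer never needs the internet
    10.0.0.5    downloads.example.com
    ```

    Entries in /etc/hosts take precedence over DNS for most resolvers, so this redirects only the machine you edit.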

  • Storage node in different subnet - deployment

    Unsolved
    5
    0 Votes
    5 Posts
    569 Views
    D

    Never mind, I renamed the file /opt/fog/.fogsettings and ran the installer again with SSL enabled. It’s working just fine.
    Problem solved; this topic can be closed. 👍

  • Accidentally Deleted /Images/Dev.Help

    Unsolved
    3
    0 Votes
    3 Posts
    347 Views
    Tom ElliottT

    @fadi you can try re-running the installer.

    I don’t know why you have /images2 and /images2/dev, but re-running the installer should re-associate the missing directories, permissions, and the .mnt file (I cannot remember the full name of the tester file, sorry).

  • Run sth on server after imaging?

    Unsolved
    5
    0 Votes
    5 Posts
    970 Views
    george1421G

    @flodo First, let me say I don’t know Ansible; I know of it, but that is about it. So I always force my way through things.

    But with imaging there are two approaches to take.

    Leave bread crumbs (deploy-time configuration info) behind for the target OS’s internal setup program to consume. In the MS Windows realm, that might mean mounting the C drive and updating the unattend.xml file with deployment-time settings like computer name, calculated target OU, timezone, etc.

    Use a FOG postinstall script to mount the root partition on the target computer and run a bash script to make system changes, like setting the hostname in /etc/hostname and other parameters the target system will need at first boot. If you can script it with bash, you can probably update it in the target Linux OS. When you are done, unmount the mounted partition and let FOG finish the deployment and reboot the target computer. I have an example of how to do this for Windows; it can easily be translated to Linux.
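    A minimal sketch of that second approach for a Linux target. The mount point, device name, and the $hostname variable here are assumptions; adapt them to your image layout and to what FOG actually exports in your postinstall environment:

    ```shell
    #!/bin/bash
    # Sketch of a FOG postinstall step for a Linux target image.

    # Write the hostname into the mounted target root and keep /etc/hosts in sync.
    set_hostname() {
        local root="$1" name="$2"
        echo "$name" > "$root/etc/hostname"
        if [ -f "$root/etc/hosts" ]; then
            # Replace the 127.0.1.1 line, the convention Debian-family systems use
            sed -i "s/^127\.0\.1\.1.*/127.0.1.1\t$name/" "$root/etc/hosts"
        fi
    }

    # In a real postinstall script you would mount the deployed root partition
    # first, e.g. (device name is an assumption):
    #   mkdir -p /mnt/target && mount /dev/sda2 /mnt/target
    #   set_hostname /mnt/target "$hostname"
    #   umount /mnt/target
    ```

    The same pattern extends to any file on the target root: mount, edit with bash/sed, unmount, and let FOG reboot the machine.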

    If you wanted to go the Ansible route, then you need to identify what you need in the FOS Linux OS, and then we/you will need to include that plus the Ansible code when FOS Linux is compiled. It is not as hard as it sounds: find the dependencies Ansible needs, then update the buildroot configuration to include them. FWIW, the FOS engine does have an SSH server installed, but with only one user, root. So to log in remotely to the FOS Linux engine you need to give root a password; then you can connect to FOS Linux over SSH.

  • Adding storage - failed to open stream: no such file or directory

    Unsolved
    5
    0 Votes
    5 Posts
    1k Views
    george1421G

    @ian77 That is strange that the storage variable points to /images2 and yet /images gets mapped.

    Now understand that on the FOS Linux side it will be /images/dev, but what it’s mapped to should be /images2/dev.

    So you are in debug capture mode. What you will do is confirm that the storage variable is pointing to your /images2/dev directory.

    The first step is to manually try to mount the directory to see whether NFS is working correctly…

    Please post the output of /etc/exports. The fsid value needs to be different between the /images and /images2 paths; if they are the same, you will get exactly what you are seeing: the files land in the wrong directory.

    /images should have fsid of 0
    /images/dev fsid 1
    /images2 fsid 2
    /images2/dev fsid 3

    Once we sort out the fsid values, I’ll continue with debugging instructions.
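    For reference, an /etc/exports with distinct fsid values might look like the sketch below. The export options shown are typical of a FOG install but are an assumption here; keep whatever options your installer wrote and only make sure each path has a unique fsid:

    ```conf
    /images      *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
    /images/dev  *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
    /images2     *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=2)
    /images2/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=3)
    ```

    After editing, run exportfs -ra (or restart the NFS server) so the changes take effect.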
