    george1421

    Posts

    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @nils98 Let's run these commands and see where they lead.

      lspci -nn | more

      We want to look through the output for hardware related to the disk controller. Normally I would have you look for raid or sata, but I think this hardware sits somewhere in between. I specifically need the hex code that identifies the hardware. It will be in the form [XXXX:XXXX], where each X is a hex digit.

      Then post the output of lsblk.

      Then this next part is a bit harder, but let's run these commands.

      grep -i firm /var/log/syslog

      The first one will show us if we are missing any supplemental firmware needed to configure the hardware.

      grep -i -e sata -e raid -e drive /var/log/syslog

      The second one will look for those keywords in the syslog (each pattern needs its own -e flag, or grep treats the extra words as file names).

      If neither outputs anything, you may have to manually look through the log file for messages about missing drivers.
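      For convenience, the checks above can be collected into one helper. This is only a sketch: /var/log/syslog is the Debian-style default path (pass another log file as the first argument if your FOS build logs elsewhere), and the vroc keyword is my guess for this particular hardware.

      ```shell
      #!/bin/sh
      # Collected diagnostics from the steps above, wrapped in one function
      # so the log path can be overridden.
      fos_disk_diag() {
          log="${1:-/var/log/syslog}"

          # Hardware IDs: we want the [XXXX:XXXX] hex pairs for the disk controller.
          command -v lspci >/dev/null && { lspci -nn | grep -iE 'sata|raid|vroc' || true; }

          # Block devices the kernel actually sees.
          command -v lsblk >/dev/null && lsblk || true

          # Missing supplemental firmware, then driver-related keywords.
          if [ -r "$log" ]; then
              grep -i firm "$log" || echo "(no firmware messages)"
              grep -iE 'sata|raid|drive' "$log" || echo "(no sata/raid/drive messages)"
          fi
      }
      ```

      Run it as fos_disk_diag, or fos_disk_diag /path/to/other.log on a copied-off log file.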

      posted in General Problems
    • RE: Run sth on server after imaging?

      @flodo First let me say I don’t know Ansible, I know of it but that is about it. So I always force my way through things.

      But with imaging there are two approaches to take.

      1. Leave bread crumbs (deploy-time configuration info) behind for the target OS's internal setup program to consume. In the MS Windows realm that might mean mounting the C drive and updating the unattend.xml file with deployment-time settings like computer name, calculated target OU, time zone, etc.

      2. Use a FOG postinstall script to mount the root partition on the target computer and run a bash script that makes system changes, like setting the hostname in /etc/hostname and any other parameters the target system will need at first boot. If you can script it with bash, you can probably update it in the target Linux OS. When you are done, unmount the partition and let FOG finish the deployment and reboot the target computer. I have an example of how to do this for Windows; it can easily be translated to Linux.
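      A minimal sketch of what that postinstall edit could look like. The /ntfs mount point, the partition name, and the $hostname variable are assumptions standing in for what FOG's real postdownload environment provides:

      ```shell
      #!/bin/bash
      # Sketch of the Linux half of approach 2. FOG's stock postdownload
      # scripts supply the hostname and mount point; these names are placeholders.
      set_target_hostname() {
          local root="$1" name="$2"
          echo "$name" > "$root/etc/hostname"
          # keep /etc/hosts in step with the new name (Debian-style 127.0.1.1 line)
          sed -i "s/^127\.0\.1\.1.*/127.0.1.1 $name/" "$root/etc/hosts"
      }

      # Typical postdownload flow (commented out; partition is a placeholder):
      #   mount /dev/sda2 /ntfs
      #   set_target_hostname /ntfs "$hostname"
      #   umount /ntfs
      ```

      The same pattern extends to any file under the mounted root: netplan configs, /etc/fstab tweaks, ssh keys, and so on.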

      If you wanted to go the Ansible route, you need to identify what you need in the FOS Linux OS, and then we/you will need to include that plus the Ansible code when FOS Linux is compiled. It is not as hard as it sounds: find the dependencies Ansible needs and update the buildroot configuration to include them. FWIW: the FOS engine does have an ssh server installed, but with only one user, root. So to log in remotely to the FOS Linux engine you need to give root a password; then you can connect to FOS Linux over ssh.

      posted in FOG Problems
    • RE: Run sth on server after imaging?

      @flodo First let me say this, FOG doesn’t typically step into the target system during a deployment. FOG’s function is to move disk blocks from storage location to storage location as fast as it can and then exit, leaving the target computer to do its configuration and setup thing during first boot. This holds true for MS Windows based computers as well as linux based systems.

      With that said, FOG calls bash scripts before image deployment / capture. One script is called just after the FOS Linux kernel boots. We would use that to setup or preconfigure any hardware needed for image capture/deploy. The second script is called on a deployment just after the image has been transferred to the target computer and before the target computer reboots. I’ve used these post deployment scripts to set linux host names, or to update specific configuration files based on user definable fields in the fog web ui. Once the post deployment scripts finish and the target computer reboots then FOG is out of the picture (* the exception is if you use the FOG Client there are some post reboot actions that can be managed from the fog server).

      So the deployment tool FOG uses is based on a customized version of Linux. Ansible runs on Linux, so that is a good start. I don't know what dependencies are needed in FOS Linux to support Ansible, but if we knew that, it's possible to recreate FOS Linux with the required bits.

      So after all of that, I can say it's possible, but it breaks the standard FOG workflow of move the bits quickly and get out.

      posted in FOG Problems
    • RE: Use a serial console

      @rpeterson ok, so the pcbios is sending characters to the serial port and CONSOLE_SERIAL is also sending characters to the serial port. This is what I’m guessing because of this post: https://superuser.com/questions/1358359/putty-serial-port-access-to-rs-232-console-showing-double-character-display-with

      So I have to ask the question: have you tested the stock FOG iPXE boot loader? If the PCBIOS is redirecting what would go to the screen out the serial port, then that is all you should need. Once you get the FOG iPXE menu to display correctly, when you pick a menu item and bzImage and init.xz are loaded, that is where the kernel parameters console=ttyS0,115200n8 come into play. You don't need to edit bootmenu.class… just go into FOG Configuration->FOG Settings->Kernel Parameters and add that line. It will be applied to every computer that pxe boots into fog.

      posted in Feature Request
    • RE: Use a serial console

      @rpeterson This is a little out of my wheelhouse, but do these devices ship with only a serial-port interface to the firmware? Do you access the firmware settings via the serial port? Is this the design by the manufacturer?

      I’m asking for a specific reason that might explain the double characters.

      posted in Feature Request
    • RE: Adding storage - failed to open stream: no such file or directory

      @ian77 That is strange that the storage variable points to /images2 and yet /images gets mapped.

      Now understand, on the FOS Linux side it will be /images/dev, but what it's mapped to should be /images2/dev.

      So you are in debug capture mode. What you will do is confirm that the storage variable is pointing to your /images2/dev directory.

      The first step is to manually try to mount the directory to see if NFS is working correctly…

      Please post the output of /etc/exports. The fsid value needs to be different between the /images and /images2 paths; if they are the same, you will get exactly what you are seeing: it places the files in the wrong directory.

      /images should have fsid of 0
      /images/dev fsid 1
      /images2 fsid 2
      /images2/dev fsid 3
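      Put together, /etc/exports would look something like this. The export options shown are what I recall the FOG installer generating; verify against your own file, the fsid values are the important part:

      ```
      /images      *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
      /images/dev  *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
      /images2     *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=2)
      /images2/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=3)
      ```

      After editing, run exportfs -ra to reload the export table.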

      Once we sort out the fsid values I'll continue with debugging instructions.

      posted in FOG Problems
    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @nils98 from the cat command it doesn’t look like your array was assembled. From a previous post it looks like you found a command sequence with mdadm to assemble that array.

      Schedule a debug deploy (tick the debug checkbox before hitting the submit button), then pxe boot the target computer; that will put you into debug mode. Now type in the command sequence to assemble the array. Once you verify the array is assembled, key in fog to begin the image deployment sequence. You will have to press enter at each breakpoint, which gives you a chance to see and possibly trap any errors during deployment. If you find an error, press ctrl-C to exit the deployment sequence and fix the error, then restart the deployment by keying in fog once again. Once you get all the way through the deployment and the target system reboots into windows, you have established a valid deployment path.

      Now that you have established the commands needed to build the array before deployment, you need to place those commands into a pre-deployment script so that the FOS engine executes them every time a deployment happens. We can work on having the script execute only under certain conditions, but first let's see if you can get 1 good manual deployment, 1 good auto deployment, and then the final solution.
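      As a sketch, that pre-deployment hook could look like this. The fog.postinit path and the /dev/md126 device name are assumptions; substitute the exact mdadm sequence you validated in debug mode:

      ```shell
      #!/bin/bash
      # Sketch of a fog.postinit hook (lives on the server under
      # /images/dev/postinitscripts/ and runs on FOS before deployment).
      assemble_array() {
          local md="${1:-/dev/md126}"     # placeholder md device
          [ -b "$md" ] && return 0        # already assembled, nothing to do
          mdadm --assemble --scan         # or the exact sequence you validated
      }
      ```

      Calling assemble_array unconditionally is safe because it no-ops when the md block device already exists; conditional logic per host can come later.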

      posted in General Problems
    • RE: Adding storage - failed to open stream: no such file or directory

      @ian77 I'm suspecting it's a file permission issue, because everything else looks right.

      There is a debugging dividing line we need to find. So the first question I have is: if you look at /images2/dev after an image capture to the /images2 storage node, is there a directory named like a mac address?

      If no, then the issue is on the NFS side of the image capture.
      If yes, then the issue is on the FTP side of image capture.

      If the issue is on the nfs side, then we will need to start another image capture in debug mode (by selecting debug before you pick image capture when you schedule it in the fog web ui).

      If the issue is on the ftp side, then from a windows computer (like I think you have already done) ftp to the fog server using the fogproject user account. The password is in a hidden file in /opt/fog called .fogsettings [named with a leading dot]. On your computer, create a small text file [test.txt] that we can use for testing; you will transfer that file to the fog server. Now start the ftp program and connect to the fog server as the fogproject user.

      In the ftp client issue the following commands (stock ftp uses rename and delete; if you use lftp, mv and rm work as well):

      cd /images2/dev
      put test.txt
      rename /images2/dev/test.txt /images2/test.txt
      mkdir /images2/dev/aaaa
      rename /images2/dev/aaaa /images2/aaaa
      cd /images2
      delete test.txt
      rmdir /images2/aaaa
      

      The above will test the actions FOG performs during an image capture. If it all works, then we know the permissions are good for the fogproject user. The only gotcha is that when the FOS engine (running on the target computer) captures the image and transfers it to /images2/dev on the FOG server, it does so using the root user account, so the permissions on the captured image directory may not be right.

      But first, let's identify whether the directory is being created in /images2/dev.

      posted in FOG Problems
    • RE: Storage node in different subnet - deployment

      @Daniel_ said in Storage node in different subnet - deployment:

      At the moment I am copying the images with rsync (manually or cronjob). Is there a way to copy images via the web GUI of the main server?

      If the fog server and storage node are part of the same storage group, the fog server will sync the image files from the master node to all storage nodes in the group. That is by design. A storage node doesn't need to be a full FOG storage-node install; a NAS will work with some configuration. The active component for replication lives on the master node of a fog storage group.

      At the moment the storage node is not visible in the dashboard (storage node disk usage) although Graph Enabled (On Dashboard) is enabled. Do I need more ports open so that the main server can reach and get information from the node?

      This sounds suspicious. You need to have the following protocols enabled. http, https, ftp, and mysql between the fog server and storage node.

      How can I configure the node so that clients pull their images from the node only? At the moment they seem to try reaching the main server.

      In this case you need to install the location plugin. When it's installed you can 1) create your locations, 2) assign a storage node to a location, and 3) when you register a new computer, assign it to a location.

      When your target computers pxe boot, you can have your remote storage node configured to send the ipxe menu files, but the target computers will always reach out to the master node to find out about their local storage node. This initial checkin is very quick and has low bandwidth requirements.

      Is there a way to configure the PXE menu which is shown there independently from the main server menu?

      You can if you want to be creative. You would need to intercept the call to default.ipxe with your own default.ipxe, but you would really need to look into the use case before going down that path. It is possible, yes. Will it take some work, yes. Is it plug and play, no.

      posted in FOG Problems
    • RE: New Fog 1.5.10 deployment. Host registration and quick registration hang

      @srcauley If you want to debug the next bit I can help with that too. I suspect it's a nic driver, nic firmware, or raid/disk controller issue. If you set up a debug capture/deploy we can then investigate the errors.

      ip a s
      lspci -nn | grep -i net
      lsblk
      grep -i firm /var/log/syslog

      Will get you started

      posted in FOG Problems
    • RE: Group Kernel Arguments not applied to Host after adding Host to Group while Full Registration

      @Ceregon said in Group Kernel Arguments not applied to Host after adding Host to Group while Full Registration:

      I had the fear that when we have machines without software raid, that will cause problems.

      That setting just enables the possibility of software raid in the kernel; you still need to use the mdadm utility to configure it.

      posted in FOG Problems
    • RE: Group Kernel Arguments not applied to Host after adding Host to Group while Full Registration

      @Ceregon Under FOG Configuration -> FOG Settings, press the expand all button and search for kernel parameters. Add the kernel parameter there; it will be applied to all hosts and all boot methods. For this specific parameter it's OK to have it enabled on all hosts, even ones that don't use software raid.

      posted in FOG Problems
    • RE: New Fog 1.5.10 deployment. Host registration and quick registration hang

      @srcauley Well I found someone with the same issue as you from last summer, but it looks like the poster ghosted us https://forums.fogproject.org/topic/16922/hand-off-to-fos-kernel-fails-on-certain-gen4-xeon-sapphire-rapids-based-systems-dell-r760-supermicro-x13-etc

      This is the one I was thinking of: https://forums.fogproject.org/topic/16993/client-hangs-at-efi-stub; it did have a solution, with the same error message as yours.

      posted in FOG Problems
    • RE: New Fog 1.5.10 deployment. Host registration and quick registration hang

      @srcauley To quickly explain where it's failing.

      If you can get to the FOG iPXE menu then undionly/snp drivers are doing what they should. After you pick a menu item from the fog iPXE menu then FOS linux is transferred to the target computer. This is in the form of the kernel (bzImage) and the virtual hard drive (init.xz). The issue is with the linux kernel failing to start up.

      Issue #1: the FOS kernel is really targeted at workstation-class machines, not servers. Servers have “special” hardware that requires drivers that might not be in the FOS kernel. That said, I know others have imaged Dell servers with FOG.

      Issue #2: the R760s are pretty new hardware. You have to remember that the linux kernel lags behind bleeding-edge hardware. The current kernel might not have the required brains to init this new hardware.

      So what should you do?

      The first thing is to make sure your firmware is up to date (always the first step).

      Second, make sure you are running the latest FOS Linux kernel, from the UI: FOG Configuration -> Kernel Update. Run the latest version 6 release there is.

      Third, go into FOG Configuration -> FOG Settings, hit the expand all button, and search for log. Set the kernel logging level to 7; that will send out the most info during kernel startup. The default logging level of 1 masks all but the most critical errors.

      Now pxe boot the target computer, see if there is any other info we can glean from the kernel startup.

      Also test a linux live boot image, either debian or ubuntu. See if you can live boot the server into linux.

      I know there was some debugging in the fog forum with a server that had one of those new Xeon Scalable processors. I think they were able to get it running.

      posted in FOG Problems
    • RE: Different bios files for different network cards

      @rodrigoantunes said in Different bios files for different network cards:

      wouldn’t it be a dhcp server functionality rather than fog

      FOG could do it with some programming, but it's probably easier to do it on the dhcp server side. It's not super simple, but basically you would set up dhcp reservations keyed on the mac address of the target computer. When the dhcp server sees a known mac address, it sends out the alternate pxe boot loader. You would make the default pxe boot loader (like undionly.kpxe) the default and override it with reservation settings for the troubled computers. It can be done.
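      For ISC dhcpd, that reservation idea might look like the following sketch (addresses, the MAC, and the alternate loader name are placeholders):

      ```
      # dhcpd.conf sketch: undionly.kpxe for everyone, an alternate loader
      # only for the machine where the UNDI driver misbehaves.
      subnet 192.168.1.0 netmask 255.255.255.0 {
          next-server 192.168.1.10;        # FOG server (placeholder address)
          filename "undionly.kpxe";        # default PXE boot loader
      }

      host troubled-pc {
          hardware ethernet 00:11:22:33:44:55;   # placeholder MAC
          filename "ipxe.pxe";                   # alternate loader for this host
      }
      ```

      The host block's filename overrides the subnet default only for that MAC.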

      I find it strange that the undi driver doesn't work on some computers but does on others. The undi specification has been around for 30 years; most firmware has had time to work out the bugs.

      posted in FOG Problems
    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @Ceregon said in Problem Capturing right Host Primary Disk with INTEL VROC RAID1:

      Maybe i can work with groups of hosts to decide when this postinit script is used

      It is possible to access FOG system variables in a preinstall script (which is where I would recommend you do any initialization needed for mdadm arrays). So you could use, say, fog user field 1 or 2 (in the host definition) to contain something that indicates you need to init the mdadm array. You might also be able to key it off the target device name. I have not looked that deep into it, but on the surface it should be possible.

      posted in General Problems
    • RE: capture image : bug at the end to rename tmp folder

      @collins91 I don’t have an answer for you, but I can connect what you see to how FOG works.

      With FOG, the FOS Engine (OS that gets loaded onto the target computer for capturing and deploying images) uses NFS to move disk information between the FOG server and target computer.

      On the fog server there are two NFS shares.

      1. /images that is shared as read only. That is where all of the captured image files end up. The read only attribute keeps the image files from being messed with (i.e. deleted) over the network once they are captured.
      2. /images/dev that is shared as read/write. That is where the files are temporarily stored while being captured.

      So now you have the basis of the design.
      At capture time the FOS engine creates a temporary directory in the /images/dev share, named after the mac address of the target computer for uniqueness.

      Once the NFS upload is complete, the FOS engine connects to the FOG server over the ftp protocol as the fogproject user and instructs the OS to move the file pointer for the image from the temp location /images/dev/<mac_name> to /images/<image_name>. Since only the file pointers are updated, this “move” happens very fast.

      Then the target computer reboots.

      Now to tie this back to what you are seeing: the process appears to be failing at the ftp login and the use of the mv command. That tells me the problem is likely a file permission issue, where the (linux user) fogproject doesn't have the permissions needed to execute the mv command.

      I would start there when trying to understand why it failed. For example, say your dead-link folder was created by root, and the fogproject user doesn't have the rights to change a directory created by root; that would cause the process to fail exactly as you outlined.

      posted in FOG Problems
    • RE: Providing installation media via pxe booting for UEFI systems.

      @mashina said in Providing installation media via pxe booting for UEFI systems.:

      Interestingly, the problem doesn’t occur when Ubuntu is already present, and then Windows is deployed. Anyway, that is not a big problem at this moment.

      But in this case the uefi firmware has already registered ubuntu as a bootable OS, so it just goes, oh hello, I see you again on disk1. If the entry doesn't exist, then it needs to be fixed up. You might be able to test this on a working system: go into the uefi firmware and delete the entry for ubuntu on the second disk, leaving only windows in the uefi boot manager. Upon reboot, does it need to fix itself up again?

      Just be aware that FOG doesn't touch the uefi firmware or boot manager. BUT you can do that within a FOG post install script using the Linux uefi boot-manager tool (efibootmgr, if I recall the name correctly). You can add and remove uefi boot manager entries as needed.

      Your suggestion works well for putting Linux on Disk1, but if the user needs to reinstall Windows, it’ll also go to /dev/nvme1n1, messing everything up

      True, it will mess everything up. But I took your initial post to mean you would load windows once and then potentially reload ubuntu (or whatever OS is on the second drive) multiple times. If you “had to”, you could write a FOG preinstall script that asks which drive to send the image to. That gets a bit messy, but it's possible.

      posted in General
    • RE: PXE Fog Subnet Problem?

      @pilgrimage This problem is connected to your other post: https://forums.fogproject.org/topic/17255/about-dhcp-and-pxe-problem?_=1707994213348

      Your issue is infrastructure, not fog specific. DHCP works on broadcast messages, and broadcasts typically are not passed between subnets. Your subnet router has a service that can be turned on to relay dhcp communications between subnets, and I would expect it is already configured on your network, because you have client computers on one subnet and servers on another.

      Since you are using dnsmasq to provide the pxe boot information (since your dhcp server can't be changed), you need to update the dhcp-helper / dhcp relay agent on your network router to include your dnsmasq server in the list of dhcp servers it notifies of a dhcp request. Once that is done, your dnsmasq server will hear requests from the remote subnet and respond with the pxe boot information.
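      On a Cisco-style router, the change described above is one extra helper line per client-facing interface (addresses are placeholders):

      ```
      ! Relay DHCP broadcasts to both the real DHCP server and the
      ! dnsmasq/FOG server so each one hears the PXE request.
      interface Vlan10
       ip helper-address 10.0.0.10    ! existing DHCP server
       ip helper-address 10.0.0.20    ! dnsmasq on the FOG server (placeholder)
      ```

      Other router vendors have an equivalent dhcp-relay or ip-helper setting; the idea is the same.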

      posted in FOG Problems
    • RE: FOG Project instead of CloneZilla Lite Server

      @Orfeous said in FOG Project instead of CloneZilla Lite Server:

      My goal here is to install Debian or Ubuntu on a PC to be run as a server. I have a couple of NUCs that I want to deploy an image to via an isolated network. Server and client machines connected to the same switch. No router or such in play.

      You can do this on a completely isolated network, or install 2 nics in your FOG server with one connected to your imaging network and one to your business network for remote management of the fog server.

      You can also set this up on your business network without interfering with your business network communications, so it can work either way. In some instances your target computers might need access to the business network for AD integration during first boot; I understand your goal is linux, so AD is not required. Either way FOG will work.

      I want this Server to run a DHCP server and broadcast ips to the client machines that will be netbooting via PXE.

      If you want to run on an isolated imaging network, just pick to include the FOG DHCP server and the installer script will install the dhcp server and configure it for you.

      I want to use those NUCs to boot via PXE and then automatically disk will be restored from image.

      If i get other PC vendors and models I want to use another image for those.

      No problem with multiple vendors. You just need to be mindful of whether the target computers' firmware is in bios or uefi mode, because the captured image is handled a little differently between the two firmware classes. FWIW you cannot deploy an image captured on a bios computer to a uefi based computer, and the same holds true in reverse.

      Is it possible to use my CloneZilla disk image that has already been saved?

      While Clonezilla and FOG both use partclone to capture the disk image, the images are stored and compressed differently on either platform. So you can not share the images between the two environments. You will need to capture with FOG if you want to deploy with FOG, or capture with Clonezilla if you want to deploy with clonezilla.

      Client NUCs uses NVme ssd and Windows 10 or 11 is located on the disk image.

      Now you have introduced Windows 10 into the picture. No problem, but that might also mean needing AD during first boot. Remember that the FOS engine (the OS that boots on the target computer) is linux based, so nvme drives have different device naming than sata drives. You can still capture from a sata drive and deploy to an nvme drive; that is just not a common situation.

      Is this possible with FOG Project?

      Yes it is.

      posted in General