    Posts

    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @nils98 said in Problem Capturing right Host Primary Disk with INTEL VROC RAID1:

      FOS 6.1.63

      OK, good deal. I wanted to make sure you were on the latest kernel so we weren’t dealing with something old.

      I rebuilt the kernel last night with what I thought might be missing; then I saw that mdadm had been updated, so I rebuilt the entire FOS Linux system, but the build failed on the updated mdadm program. It was getting late last night, so I stopped.

      With the Linux kernel 6.1.63, could you PXE boot into debug mode, give root a password with passwd, and collect the IP address of the target computer with ip a s? Then connect to the target computer using root and the password you defined, and download /var/log/messages and/or /var/log/syslog if they exist. I want to see if the 6.1.63 kernel is calling out for firmware drivers that are not in the kernel by default. If I can do a side-by-side with what you posted from the live Linux kernel, I might be able to find what’s missing.
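
      A rough sketch of that sequence (the 192.168.1.50 address is only an example; use whatever ip a s reports):

      # On the target computer, at the FOS debug shell:
      passwd                # set a temporary root password
      ip a s                # note the target's IP address

      # From another machine, pull the logs (example address only):
      scp root@192.168.1.50:/var/log/messages .
      scp root@192.168.1.50:/var/log/syslog .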

      posted in General Problems
      george1421
    • RE: Red dot in Fog

      @Tanguy Just to be clear, you can’t locate the system by using its short name, like ping host1, but you can if you use the FQDN, like ping host1.domain.com? If so, you need to add the search parameter with your domain; the resolver will then use the search list to build the FQDN.
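
      A minimal sketch of what that looks like in /etc/resolv.conf, assuming the hypothetical domain domain.com and DNS server 192.168.1.10:

      # /etc/resolv.conf (domain and address are examples only)
      search domain.com
      nameserver 192.168.1.10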

      posted in FOG Problems
      george1421
    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @nils98 Nothing is jumping out at me as to which module is required. The VMD module is required for VROC, and that is part of the FOG FOS build. Something I hadn’t asked you before: what version of FOG are you using, and what version of the FOS Linux kernel? If you PXE boot into the FOS Linux console and run uname -a, it will print the kernel version.

      posted in General Problems
      george1421
    • RE: Red dot in Fog

      @Tanguy We might need a bit more information, but in general, once FOG deploys a registered target computer, the only way it can find that computer on your network is via its host name. So to that point, the name you register in FOG for the host must be the name of the computer after it’s attached to AD.

      More to the point of your problem, the FOG server needs to be able to resolve the host name via DNS. To test this, open a command window on the FOG server and try to ping the computer by the name it’s registered under in FOG. If the name can’t be resolved, you need to update the DNS server settings so they are correct for your network. I think Fedora has a UI tool for this; the old-school way is to update the /etc/resolv.conf file. I would use the UI tool, though, because it probably writes that file and might overwrite anything you put in there manually.

      posted in FOG Problems
      george1421
    • RE: Hiren BootCD 1.0.2

      @cplemaster Excellent explanation of how to go about solving this.

      I can tell you that PXE booting large files over the TFTP protocol can take time. You can, and probably should, download the WIM file over the HTTP protocol. It’s much faster and scales better than TFTP.

      Such as in this parameter block:

      set tftp-path tftp://${fog-ip}
      set http-path http://${fog-ip}/images/tools/hbcd102
      kernel ${tftp-path}/win/wimboot gui
      imgfetch --name bootmgr.exe ${http-path}/bootmgr.exe bootmgr.exe
      imgfetch --name bootx64.efi ${http-path}/efi/boot/bootx64.efi bootx64.efi
      imgfetch --name BCD ${http-path}/boot/bcd BCD
      imgfetch --name boot.sdi ${http-path}/boot/boot.sdi boot.sdi
      imgfetch --name boot.wim ${http-path}/sources/boot.wim boot.wim
      boot || goto MENU
      

      ref: https://forums.fogproject.org/topic/10944/using-fog-to-pxe-boot-into-your-favorite-installer-images/10

      posted in General
      george1421
    • RE: iPxe boot fails : Coud not boot :Permission denied (https://ipxe.org/0216eb8f)

      @Thierry-carlotti said in iPxe boot fails : Coud not boot :Permission denied (https://ipxe.org/0216eb8f):

      iPxe boot fails : Coud not boot :Permission denied (https://ipxe.org/0216eb8f)

      This really sounds like Secure Boot is enabled, keeping iPXE from loading/running.

      posted in Linux Problems
      george1421
    • RE: How do I get dnsmasq to direct PXE to my fogserver IP instead of my dnsmasq server IP?

      @45lightrain Let’s cover a few things I’m aware of that might help you get to the bottom of the issue.

      1. When you have a ProxyDHCP server and a normal DHCP server on your network, you will get two DHCP OFFERs. (You would also see two OFFERs if you had two DHCP servers, but that is a bit off point.) You can tell an OFFER packet is a ProxyDHCP packet because DHCP option 60 will be set to PXEClient in the ProxyDHCP OFFER. That is a signal to the PXE-booting client to come back and ask the ProxyDHCP server about PXE booting. The PXE-booting client will ignore the PXE boot information from the main DHCP server.
      2. A properly formed DHCP packet will have both header fields, {next-server} and {boot-file}, filled out for the BOOTP part of the protocol, AND should have DHCP options 66 and 67 set for the DHCP booting protocol. Both must be filled out because it’s up to the PXE-booting client to pick either the DHCP or BOOTP method.
      3. Your ltsp files are correct. The last one you posted has the dhcp-range… line commented out; that turns off the ProxyDHCP feature in dnsmasq (see the sketch after this list).
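
      For reference, a minimal dnsmasq ProxyDHCP sketch in the ltsp.conf style (the FOG server address 192.168.2.10 and the subnet here are assumptions, not your actual values):

      port=0                                   # don't act as a DNS server
      dhcp-boot=undionly.kpxe,,192.168.2.10    # boot file and FOG server address (example IP)
      dhcp-no-override
      pxe-prompt="Booting FOG Client", 1
      pxe-service=X86PC, "Boot to FOG", undionly.kpxe
      pxe-service=X86-64_EFI, "Boot to FOG UEFI", ipxe.efi
      dhcp-range=192.168.2.0,proxy             # commenting this line out turns ProxyDHCP off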

      So if you have a Pi-hole server that runs dnsmasq, why are you running an external dnsmasq server? Does your Pi-hole have PXE boot options? I know pfSense has built-in PXE boot fields you can fill out that support dynamic (BIOS/UEFI) boot loaders.

      When trying to debug this, wireshark/tcpdump is your friend; it tells you what is flying down the wire. Just remember that DHCP is based on broadcast messages, so you can “hear” them from any network port, while ProxyDHCP, like other unicast messaging, must be captured at the source, the destination, or via a mirrored port.
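
      A rough capture command, assuming the listening interface is eth0 (substitute your own):

      # record DHCP, TFTP, and ProxyDHCP traffic for review in wireshark
      tcpdump -i eth0 -w /tmp/pxe.pcap port 67 or port 68 or port 69 or port 4011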

      posted in Linux Problems
      george1421
    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @nils98 Nice, this means it’s possible with the FOG FOS kernel. If the Linux live CD did not work, then you would be SOL.

      OK, so let’s start here: under the live image, run these commands.

      lsmod > /tmp/modules.txt
      lspci -nnk > /tmp/pcidev.txt

      Use scp, or WinSCP on Windows, to copy these tmp files out and post them here. Also grab /var/log/messages or /var/log/syslog and post that too. Let me take a look at them to see what dynamic modules are loaded and which kernel modules are linked to the PCIe devices.
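
      For example, pushing them from the live system to your workstation (the user and address here are placeholders):

      scp /tmp/modules.txt /tmp/pcidev.txt user@192.168.1.20:
      scp /var/log/syslog user@192.168.1.20: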

      posted in General Problems
      george1421
    • RE: How do I get dnsmasq to direct PXE to my fogserver IP instead of my dnsmasq server IP?

      @45lightrain The pcaps show confusion. It’s not your dnsmasq configuration at the moment, but more something going on with the Pi-hole DHCP.

      The pcap captured from the FOG server (port 67) shows what I would expect as a normal DORA process. Initially the Pi-hole didn’t respond to the first DISCOVER packet, but it did for the subsequent ones.

      As expected from a SOHO router, it’s announcing that it’s the PXE boot server. That’s not a problem, because your dnsmasq server has also sent an OFFER packet saying it’s a ProxyDHCP server. The client completes the REQUEST and ACK parts of DHCP, and then 3 seconds later it restarts the process all over again. That bit is strange.

      Looking at the Pi-hole pcap, this one doesn’t align with what the FOG server saw. It only shows the Pi-hole responding, without the OFFER from dnsmasq, AND the Pi-hole is now sending out the proper next-server (the FOG server) and boot file names (!!), AND it’s saying it is the ProxyDHCP server (!!). It would have worked except that it said it was a ProxyDHCP server, because it fails on the next step.

      The FOG server port 4011 capture is what I would expect, since ProxyDHCP is unicast messaging and not broadcast like DHCP.

      The Pi-hole port 4011 capture is the ProxyDHCP packet, but now the Pi-hole is back to saying it’s the next-server (PXE boot) and not your FOG server.

      posted in Linux Problems
      george1421
    • RE: How do I get dnsmasq to direct PXE to my fogserver IP instead of my dnsmasq server IP?

      @45lightrain Are you running this environment on VirtualBox? The PXE boot messages don’t look like typical FOG PXE boot messages.

      So what device is 192.168.2.60?

      Let’s find out what actors are at play here. I have a tutorial on using the FOG server to monitor the PXE booting process. You can run this from the FOG server, but since you have dnsmasq running somewhere else, run the probe from that server. I want to capture the ProxyDHCP request on port 4011. That is unicast messaging, so we need to capture it from the dnsmasq server’s perspective.
      https://forums.fogproject.org/topic/9673/when-dhcp-pxe-booting-process-goes-bad-and-you-have-no-clue

      I think there is more going on than we currently know.

      posted in Linux Problems
      george1421
    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @nils98 There are a few interesting things in here, but nothing remarkable. I see this is a server chassis of some kind. I also see there are both SATA and NVMe disks in this server. A quick look at VROC shows it is designed for NVMe drives rather than SATA, and it is on-CPU RAID.

      Is your array built on the SATA drives (/dev/sda and /dev/sdb) or on the NVMe drives?

      I remember seeing something in the forums regarding the Intel xscale processor and VMD. I need to see if I can find those posts.

      For completeness: what is the manufacturer and model of this server? What is the target OS? Did you set up the RAID configuration in the BMC or firmware, so the drive array is already configured?

      And finally, if you boot a Linux live CD, does it properly see the RAID array?
      Lastly, for debugging with FOS Linux, if you do the following you can remote into the FOS Linux system.

      1. PXE boot into debug mode (capture or deploy).
      2. Get the IP address of the target computer with ip a s.
      3. Give root a password with passwd; just make it something simple like hello, since it will be reset at the next reboot.
      4. Now with PuTTY or ssh you can connect to the FOS Linux engine to run commands remotely. This makes it easier to copy and paste commands into the FOS Linux engine.
      posted in General Problems
      george1421
    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @nils98 I guess let’s run these commands and see where they lead.

      Run lspci -nn | more and look through the output. We are specifically looking for hardware related to the disk controller. Normally I would have you look for RAID or SATA, but I think this hardware is somewhere in between. I specifically need the hex code that identifies the hardware; it will be in the form [XXXX:XXXX], where the X’s are hex digits.

      Also post the output of lsblk.

      The next one is going to be a bit harder. Run the command below; if it doesn’t output anything, you will have to look through the log file manually to see if there are any messages about missing drivers.

      grep -i firm /var/log/syslog will show us if we are missing any supplemental firmware needed to configure the hardware.

      grep -iE 'sata|raid|drive' /var/log/syslog will look for those keywords in the syslog.

      If that fails you may have to manually look through this log.

      posted in General Problems
      george1421
    • RE: Run sth on server after imaging?

      @flodo First let me say I don’t know Ansible; I know of it, but that is about it. So I always force my way through things.

      But with imaging there are two approaches to take.

      1. Leave breadcrumbs (deploy-time configuration info) behind for the target OS’s internal setup program to consume. In the MS Windows realm that might mean mounting the C drive and updating the unattend.xml file with deployment-time settings like computer name, calculated target OU, time zone, etc.

      2. Use a FOG postinstall script to mount the root partition on the target computer and use a bash script to make system changes, like setting the hostname in /etc/hostname and any other parameters the target system will need at first boot. If you can script it with bash, you can probably update it in the target Linux OS. When you are done, you unmount the partition and let FOG finish the deployment and reboot the target computer. I have an example of how to do this for Windows; it can easily be translated to Linux (see the sketch after this list).
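
      A very rough sketch of such a postinstall script, with everything hypothetical: the partition (/dev/sda1), the mount point, and the $hostname variable are placeholders for whatever your FOG postdownload environment actually provides (FOG’s postdownload hooks typically live under /images/postdownloadscripts on the server):

      #!/bin/bash
      # Hypothetical FOG postinstall sketch: mount the deployed root FS and drop a breadcrumb.
      target_part=/dev/sda1            # placeholder: the deployed Linux root partition
      mkdir -p /mnt/target
      if mount "$target_part" /mnt/target; then
          echo "$hostname" > /mnt/target/etc/hostname   # placeholder variable for the new name
          umount /mnt/target                            # hand the disk back to FOG for reboot
      fi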

      If you wanted to go the Ansible route, you would need to identify what is needed on the FOS Linux OS, and then we/you would include that plus the Ansible code when FOS Linux is compiled. It is not as hard as it sounds: find the dependencies Ansible needs and update the Buildroot configuration to include them. FWIW, the FOS engine does have an SSH server installed, but with only one user, root. So to log in remotely to the FOS Linux engine, you need to give root a password; then you can connect to FOS Linux over SSH.

      posted in FOG Problems
      george1421
    • RE: Run sth on server after imaging?

      @flodo First let me say this: FOG doesn’t typically step into the target system during a deployment. FOG’s function is to move disk blocks from storage location to storage location as fast as it can and then exit, leaving the target computer to do its configuration and setup during first boot. This holds true for MS Windows-based computers as well as Linux-based systems.

      With that said, FOG calls bash scripts around image deployment/capture. One script is called just after the FOS Linux kernel boots; we would use that to set up or preconfigure any hardware needed for image capture/deploy. A second script is called on a deployment just after the image has been transferred to the target computer and before the target computer reboots. I’ve used these post-deployment scripts to set Linux host names or to update specific configuration files based on user-definable fields in the FOG web UI. Once the post-deployment scripts finish and the target computer reboots, FOG is out of the picture (the exception is if you use the FOG Client, which allows some post-reboot actions to be managed from the FOG server).

      So the deployment tool FOG uses is based on a customized version of Linux. Ansible runs on Linux, so that is a good start. I don’t know what dependencies are needed in FOS Linux to support Ansible, but if we knew that, it’s possible to recreate FOS Linux with the required bits.

      So after all of that, I can say it’s possible, but it breaks the standard FOG workflow of moving the bits quickly and getting out.

      posted in FOG Problems
      george1421
    • RE: Use a serial console

      @rpeterson OK, so the PCBIOS console is sending characters to the serial port and CONSOLE_SERIAL is also sending characters to the serial port. That is what I’m guessing, based on this post: https://superuser.com/questions/1358359/putty-serial-port-access-to-rs-232-console-showing-double-character-display-with

      So I have to ask: have you tested the stock FOG iPXE boot loader? If PCBIOS is redirecting what would go to the screen out the serial port, then that is all you should need. Once you get the FOG iPXE menu to display correctly, when you pick a menu item and bzImage and init.xz are loaded, that is where the kernel parameter console=ttyS0,115200n8 comes into play. You don’t need to edit bootmenu.class… just go into FOG Configuration->FOG Settings->Kernel Parameters and add that line. It will be applied to every computer that PXE boots into FOG.

      posted in Feature Request
      george1421
    • RE: Use a serial console

      @rpeterson This is a little out of my wheelhouse, but do these devices ship with only a serial port interface to the firmware? Do you access the firmware settings via the serial port? Is this by the manufacturer’s design?

      I’m asking for a specific reason that might explain the double characters.

      posted in Feature Request
      george1421
    • RE: Adding storage - failed to open stream: no such file or directory

      @ian77 That is strange, that the storage variable points to /images2 and yet /images gets mapped.

      Now understand, on the FOS Linux side it will be /images/dev, but what it’s mapped to should be /images2/dev.

      So you are in debug capture mode. What you will do is confirm that the storage variable is pointing to your /images2/dev directory.

      The first step is to manually try to mount the directory to see if NFS is working correctly…

      Please post the contents of /etc/exports. The fsid value needs to be different between the /images and /images2 paths; if they are the same, you will get exactly what you are seeing: it places the files in the wrong directory. (A sample /etc/exports layout follows the list below.)

      /images should have fsid of 0
      /images/dev fsid 1
      /images2 fsid 2
      /images2/dev fsid 3
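
      As an illustration, an /etc/exports sketch with those fsid values (the export options shown mirror typical FOG defaults; adjust to your install, then run exportfs -ra):

      /images      *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
      /images/dev  *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
      /images2     *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=2)
      /images2/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=3)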

      Once we sort out the fsid values, I’ll continue with debugging instructions.

      posted in FOG Problems
      george1421
    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @nils98 From the cat command output it doesn’t look like your array was assembled. From a previous post it looks like you found a command sequence with mdadm to assemble that array.

      Schedule a debug deploy (tick the debug checkbox before hitting the submit button) and PXE boot the target computer; that will put you into debug mode. Now type in the command sequence to assemble the array. Once you verify the array is assembled, key in fog to begin the image deployment sequence. You will have to press Enter at each breakpoint, which gives you a chance to see and possibly trap any errors during deployment. If you find an error, press Ctrl-C to exit the deployment sequence and fix it, then restart the deployment by keying in fog once again. Once you get all the way through the deployment sequence and the target system reboots into Windows, you have established a valid deployment path.

      Now that you have established the commands needed to build the array before deployment, you need to place those commands into a pre-deployment script so that the FOS engine executes them every time a deployment happens (a rough sketch follows). We can work on the script so it executes only under certain conditions, but first let’s see if you can get one good manual deployment, then one good automatic deployment, and then the final solution.
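
      Purely as a hypothetical sketch (the device names and mdadm arguments are placeholders; use the exact sequence that worked in debug mode, and the path assumes FOG’s postinit hook under /images/dev/postinitscripts):

      # Added near the top of fog.postinit (sketch only)
      if [ ! -e /dev/md126 ]; then
          mdadm --assemble --scan || mdadm --assemble /dev/md126 /dev/nvme0n1 /dev/nvme1n1
      fi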

      posted in General Problems
      george1421
    • RE: Adding storage - failed to open stream: no such file or directory

      @ian77 I suspect it’s a file permission issue, because everything else looks right.

      There is a debugging dividing line we need to find. So the first question I have is: if you look at /images2/dev after an image capture to the /images2 storage node, is there a directory named like a MAC address?

      If no, then the issue is on the NFS side of the image capture.
      If yes, then the issue is on the FTP side of the image capture.

      If the issue is on the NFS side, then we will need to start another image capture in debug mode (by selecting debug before you pick image capture when you schedule the capture in the FOG web UI).

      If the issue is on the FTP side, then from a Windows computer (as I think you have already done) FTP to the FOG server using the fogproject user account (the password is in a hidden file in /opt/fog called .fogsettings, named with a leading dot). On your computer, create a small text file (test.txt) that we can use for testing; you will transfer that file to the FOG server. Now start the FTP program and connect to the FOG server as the fogproject user.

      In the ftp client issue the following commands:

      cd /images2/dev
      put test.txt
      rename /images2/dev/test.txt /images2/test.txt
      mkdir /images2/dev/aaaa
      rename /images2/dev/aaaa /images2/aaaa
      cd /images2
      delete test.txt
      rmdir /images2/aaaa
      

      The above will test the actions FOG performs during an image capture. If it works OK, then we know the permissions are good for the fogproject user. The only gotcha is that when the FOS engine (running on the target computer) captures the image and transfers it to /images2/dev on the FOG server, it does so using the root user account, so the permissions on the captured image directory may not be right.

      But first, let’s identify whether the directory is being created in /images2/dev at all.

      posted in FOG Problems
      george1421
    • RE: Storage node in different subnet - deployment

      @Daniel_ said in Storage node in different subnet - deployment:

      At the moment I am copying the images with rsync (manually or cronjob). Is there a way to copy images via the web GUI of the main server?

      If the FOG server and storage node are part of the same storage group, the FOG server will sync the image files from the main FOG server to all storage nodes in the storage group. That is by design. The storage node doesn’t need to be a FOG-installed storage node; a NAS will work with some configuration. The active component for the replication is on the master node of a FOG storage group.

      At the moment the storage node is not visible in the dashboard (storage node disk usage) although Graph Enabled (On Dashboard) is enabled. Do I need more ports open so that the main server can reach and get information from the node?

      This sounds suspicious. You need the following protocols enabled between the FOG server and the storage node: HTTP, HTTPS, FTP, and MySQL.

      How can I configure the node so that clients pull their images from the node only? At the moment they seem to try reaching the main server.

      In this case you need to install the location plugin. Once it’s installed you can 1) create your locations, 2) assign a storage node to a location, and 3) assign each computer to a location when you register it.

      When your target computers PXE boot, you can have your remote storage node configured to send the iPXE menu files, but the target computers will always reach out to the master node to find out about their local storage node. This initial check-in is very quick and has low bandwidth requirements.

      Is there a way to configure the PXE menu which is shown there independently from the main server menu?

      You can, if you want to be creative. You would need to intercept the call to default.ipxe with your own default.ipxe, but you would really want to look at the use case before going down that path. Is it possible? Yes. Will it take some work? Yes. Is it plug and play? No.

      posted in FOG Problems
      george1421