• Linux host name change after imaging?

    Unsolved
    19
    0 Votes
    19 Posts
    3k Views
    Tom Elliott

    @adam1972 The AD setting isn’t general. Basically it’s enforcing the reboot happens to change the hostname and/or complete joining the domain.

    I think that because there are multiple points where the hostname can change, this isn’t working correctly (obviously); the hostname shouldn’t need more than one or two changes. Snapins not running is still the end result of the problem, though, and I’m not sure how to approach that. Since a hostname change expects a reboot, maybe that is what’s preventing the snapins from running. We could test that by disabling the hostname-changing option altogether on this host.

  • Massive CPU usage from a service

    Solved
    14
    0 Votes
    14 Posts
    2k Views
    L

    @LLamaPie Everything has been clean now for about a week, so I would consider this at least resolved on our end. Still no answer about exactly when it became compromised. Our hyper-paranoid theory is that it may have been a “time bomb” that sat on the server for months before popping up. Our long-term solution is keeping endpoint protection in place. I have nothing else to add, but if I discover anything I will let everyone know.

  • Can't PXE boot properly once MTU is set to anything over 1500

    Unsolved
    4
    0 Votes
    4 Posts
    1k Views
    george1421

    @45lightrain OK, so let’s start with some basics.

    a 1GbE link (under theoretical conditions) is 1 gigabit per second, or 125MB/sec, or 7.5GB/min of raw data. Understand that there is ethernet overhead, so you will never actually achieve 7.5GB/min.

    So how is it possible to see speeds above 7.5GB/min on a 1GbE link? Simple: data compression. What you are seeing on the partclone screen is a composite speed that includes the FOG server sending the data to the network, the network transfer time, and the client receiving the image, expanding it in memory, and then writing it to disk.

    If you are getting 5.5-6.1GB/min in partclone on a pure 1GbE network, your FOG environment is well designed and your network well managed.

    I wrote an article a few years ago that has some benchmark tools you can use to see where you can get additional speed out of your setup. https://forums.fogproject.org/topic/10459/can-you-make-fog-imaging-go-fast

    So the executive summary, if you want to go fast:

    • Install at least one 10GbE network link. (If you have many computers running the FOG client, run two 10GbE links in a LAG configuration.)
    • If you have many clients hitting your FOG server while you are trying to image, use SSD or NVMe drives in your FOG server. (I would spend here last; typically it’s not the disk that is slow, unless a single spindle HDD is driving your FOG server.)
    • Try to get the 10GbE network as close to the target computers as you can.
    • If you are imaging multiple target computers at the same time, look into multicasting your image to the target computers.
    • When capturing your image, use the zstd compression tool over gzip, and set zstd to compression 11 to start. If your target computers have plenty of horsepower, 16GB of RAM, and fast NVMe disks, you can push more data through your network by compressing it more; this puts a heavier load on the target computer expanding the image and writing it to disk.

    Think of your imaging as a 3-factor triangle: you have the server speed to get the image onto the network, the speed at which your network can move the image to the target computer, and finally the time it takes for the target computer to take in the image, expand it in memory, and write it to disk. In the imaging process the FOG server typically has the least impact of the three.
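    The arithmetic above can be sanity-checked in a couple of lines of shell:

```shell
# Theoretical ceiling of a 1GbE link, ignoring ethernet/IP overhead.
# 1 Gb/s = 1000 Mb/s; divide by 8 for MB/s, multiply by 60 for MB/min.
mb_per_sec=$(( 1000 / 8 ))                                   # 125 MB/s
gb_per_min=$(awk -v s="$mb_per_sec" 'BEGIN { printf "%.1f", s * 60 / 1000 }')
echo "1GbE raw ceiling: ${mb_per_sec} MB/s = ${gb_per_min} GB/min"
```

    Anything partclone reports above that raw ceiling is compression doing the work, not the wire.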

  • An Error detected, fails capture

    Solved
    14
    0 Votes
    14 Posts
    1k Views
    A

    @JJ-Fullmer Good to know, thank you so much 😎💯

  • Unable to capture image with raid1 software array

    Unsolved
    8
    0 Votes
    8 Posts
    711 Views
    T

    @Tom-Elliott

    No worries, all the help is appreciated!

    (four screenshots attached)

  • Issues with capturing an image with a raid0 array.

    Unsolved
    4
    0 Votes
    4 Posts
    547 Views
    george1421

    @45lightrain said in Issues with capturing an image with a raid0 array.:

    Also is fog able to capture both the OS SSD data and raid array data on the NVMe drives?

    FOG captures disks in block mode; it doesn’t care about partitions (mostly). Also make sure that md0 is the true device for these drives. When you are in the FOS Linux shell you can (and should) create a mount point and then mount the /dev/mdX partition to see if you can read the content. I have seen cases where md0 is created but the real RAID array is /dev/md126.

    Also, FOG calls a script before imaging starts. Sometimes it’s necessary to assemble the array in this postinit script, or to do other things to prep the system for imaging. The postinit script (found under the /images directory on the FOG server) is the place to put that code.
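    As a rough sketch (the script path and device names below are the usual defaults on a FOG install, but verify yours; this is not FOG’s stock script):

```shell
#!/bin/bash
# Sketch for /images/postinitscripts/fog.postinit -- runs inside FOS before
# imaging starts. /dev/md0 vs /dev/md126 naming is the thing to confirm.
if command -v mdadm >/dev/null 2>&1; then
    mdadm --assemble --scan || true      # assemble arrays from on-disk metadata
    [ -r /proc/mdstat ] && cat /proc/mdstat   # confirm whether md0 or md126 came up
else
    echo "mdadm not found; this snippet is meant for the FOS environment"
fi
```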

    Also, when you start in debug mode (when debugging the imaging process) you can start imaging from the command line by keying in fog. Imaging will stop at each debugPause command in the imaging code, as a breakpoint. If you notice something wrong, you can hit Ctrl-C to exit back to the command prompt. To restart the imaging process, just key in fog again.
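    To illustrate the breakpoint idea, here is a simplified sketch of how a debugPause-style function works in a shell script (not FOG’s actual implementation):

```shell
# Simplified sketch of a debugPause-style breakpoint in a shell script.
debugPause() {
    echo "Press [Enter] to continue (Ctrl-C aborts back to the shell)"
    read -r _ || true    # at EOF (non-interactive) just keep going
}
echo "step 1: partition the disk"
debugPause
echo "step 2: run partclone"
```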

    While it doesn’t exactly apply here, I wrote an article 8 years ago about imaging using the Intel RST adapter that may contain a nugget of help: https://forums.fogproject.org/topic/7882/capture-deploy-to-target-computers-using-intel-rapid-storage-onboard-raid

  • Ubuntu vmlinuz.efi missing

    Unsolved
    2
    0 Votes
    2 Posts
    316 Views
    george1421

    @theyikes Well, the first thing I would do is look to see what the kernel name is in the casper directory. Ubuntu does change this kernel name from time to time, and the file name listed looks suspicious to me; I would expect something like vmlinuz without the extension. Check the actual kernel name and adjust accordingly.
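    One way to check, assuming you have the Ubuntu ISO on hand (the filename below is an example):

```shell
# List what casper/ actually contains; recent Ubuntu releases ship the
# kernel as plain "vmlinuz", not "vmlinuz.efi".
iso=ubuntu-22.04-desktop-amd64.iso
if [ -f "$iso" ]; then
    mkdir -p /mnt/iso
    mount -o loop "$iso" /mnt/iso
    ls -l /mnt/iso/casper/
    umount /mnt/iso
else
    echo "ISO not found: $iso"
fi
```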

  • FOG Update errors

    Solved
    10
    0 Votes
    10 Posts
    1k Views
    JGeear

    @Tom-Elliott said in FOG Update errors:

    ./installfog.sh -y

    Thanks Tom! I am up and running again!

  • Kernel Panic on Ubuntu 22.04.1

    Unsolved
    2
    0 Votes
    2 Posts
    286 Views
    R

    Disregard, I was not selecting “deploy image.”

  • snapin bash

    Solved
    9
    0 Votes
    9 Posts
    949 Views
    F

    @Tom-Elliott thanks for everything!

  • "Not shrinking (/dev/nvme0n1p1) as it is detected as fixed size"

    Unsolved
    8
    0 Votes
    8 Posts
    1k Views
    Tom Elliott

    @J-Redshaw I suspect yours is likely related to Bitlocker encrypting the free space of your drive.

    https://forums.fogproject.org/post/134625

  • New install (install/update your database schema) fails

    Unsolved
    1
    0 Votes
    1 Post
    205 Views
    No one has replied
  • iPXE Binary Compile Error

    Unsolved
    5
    0 Votes
    5 Posts
    2k Views
    R

    @TaTa Glad that worked for you! There hasn’t been any update to the GitHub thread since 1/31, but I was able to compile binaries today using the original buildipxe.sh script, without the additional line, and hit no errors.

  • iPXE boot fails: Could not boot: Permission denied (https://ipxe.org/0216eb8f)

    Unsolved
    3
    0 Votes
    3 Posts
    1k Views
    george1421

    @Thierry-carlotti said in iPXE boot fails: Could not boot: Permission denied (https://ipxe.org/0216eb8f):

    iPXE boot fails: Could not boot: Permission denied (https://ipxe.org/0216eb8f)

    This really sounds like secure boot is enabled, keeping iPXE from loading/running.

  • How do I get dnsmasq to direct PXE to my fogserver IP instead of my dnsmasq server IP?

    Unsolved
    6
    0 Votes
    6 Posts
    1k Views
    george1421

    @45lightrain Let’s cover a few things I’m aware of that might help you get to the bottom of the issue.

    When you have a ProxyDHCP server and a normal DHCP server on your network, you will get two DHCP OFFERs. (You would see the same thing with two regular DHCP servers, but that is beside the point.) You can tell an OFFER packet comes from a ProxyDHCP server because DHCP option 60 will be set to PXEClient in the ProxyDHCP OFFER. That is a signal to the PXE-booting client to come back and ask the ProxyDHCP server about PXE booting; the client will ignore the PXE boot information from the main DHCP server.

    A properly formed DHCP packet will have both the header fields {next-server} and {boot-file} filled out for the bootp part of the protocol, AND should have DHCP options 66 and 67 set for the DHCP booting protocol. Both must be filled out because it’s up to the PXE-booting client to pick either the DHCP or the bootp protocol.

    Your ltsp files are correct, but the last one you posted has the dhcp-range… line commented out, and that turns off the ProxyDHCP feature in dnsmasq.
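    For reference, a minimal proxyDHCP fragment for dnsmasq generally looks something like this (the IPs and boot-file names are placeholders; substitute your FOG server’s):

```
# /etc/dnsmasq.d/ltsp.conf -- proxyDHCP only; your real DHCP server still
# hands out addresses. 192.168.1.10 stands in for the FOG server IP.
port=0                                   # disable dnsmasq's DNS function
log-dhcp
tftp-root=/tftpboot
dhcp-range=192.168.1.0,proxy             # enables proxyDHCP on this subnet
dhcp-boot=undionly.kpxe,,192.168.1.10    # boot-file plus next-server
pxe-prompt="Booting FOG client", 1
pxe-service=X86PC,"Boot to FOG",undionly.kpxe
pxe-service=X86-64_EFI,"Boot to FOG UEFI",ipxe.efi
```

    Commenting out the dhcp-range line is what disables the proxyDHCP behavior, as noted above.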

    So if you have a Pi-hole server that runs dnsmasq, why are you running an external dnsmasq server? Does your Pi-hole server have PXE boot options? I know pfSense has built-in PXE boot fields you can fill out that support dynamic (BIOS/UEFI) boot loaders.

    When trying to debug this, wireshark/tcpdump is your friend; it tells you what is flying down the wire. Just remember that DHCP is based on broadcast messages, so you can “hear” them from any network port, while ProxyDHCP, like other unicast traffic, must be captured at the source, at the destination, or via a mirrored port.

  • Uninstall FOG completely from RHEL

    Unsolved
    1
    0 Votes
    1 Post
    423 Views
    No one has replied
  • Upgrade to PHP8

    Unsolved
    4
    0 Votes
    4 Posts
    580 Views
    L

    @Tom-Elliott I made a mistake – our current FOG version is actually 1.5.9, but the PHP version is still 7.2.24. Would that change the possibilities of a PHP upgrade?

  • fog 1.5.10 install on rocky linux 8.7 installation error

    Unsolved
    6
    0 Votes
    6 Posts
    1k Views
    L

    @limbooface Is this still an issue? If not, I’ll close this issue.

  • 0 Votes
    4 Posts
    622 Views
    R

    @jfernandz
    No.
    I capture an image to the FOG server and that works; I have several images on the server that we use. When we go to deploy an image, we enable PXE boot, save changes, and go back into the BIOS. Using the quick boot menu, we select PXE boot on the NIC that is connected to the FOG server.

    From there it PXE boots fine and I am able to select quick deployment, log in with the username/password, and select any of the captured images. It goes through the whole deployment process with no errors posted. When it reboots we normally go back into the BIOS, disable PXE boot, save changes, and restart. It then posts “Reboot and Select proper Boot Device or Insert Boot Media in selected Boot device and press a key.”

    If I use Clonezilla to restore an older image to the PC and then use the FOG server with the same steps to deploy, it will then boot into the OS correctly.

  • Problem with grub

    Unsolved
    5
    0 Votes
    5 Posts
    847 Views
    Tom Elliott

    @slawa-jad I don’t think grub_is_lockdown is the issue, just the message presented.

    If you’re able to load up the original machine you captured the image from, look at your /etc/fstab file.

    I suspect you’ll see something to the effect of:

    UUID=a2476193-12cf-4601-a38d-6c798fc42708 /boot ext4 defaults 1 2
    UUID=AACD-BEFE /boot/efi vfat umask=0077,shortname=winnt 0 2

    in your file.

    This is where things are breaking down.

    Change the UUID= portion to the actual drive name.

    As for the /boot/efi part, I’m not 100% certain what to change it to, but I would suspect probably:

    /dev/sda1 /boot ext4 defaults 1 2
    /dev/sda2 /boot/efi vfat umask=0077,shortname=winnt 0 2

    This is just my guess. I would not make these changes on the “originally captured” machine, but rather on one of the machines you deployed to, and fix it there. Once it’s fixed, I’d suggest re-capturing the image from that “now working” machine and deploying as freely as needed.
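    If it helps, you can rehearse the edit on a scratch copy of the file first; /dev/sda1 and /dev/sda2 are guesses here, so confirm the real device names with lsblk -f or blkid before editing the deployed machine:

```shell
# Rewrite the UUID-based fstab entries to plain device names on a scratch copy.
cat > /tmp/fstab.test <<'EOF'
UUID=a2476193-12cf-4601-a38d-6c798fc42708 /boot ext4 defaults 1 2
UUID=AACD-BEFE /boot/efi vfat umask=0077,shortname=winnt 0 2
EOF
sed -i \
    -e 's|^UUID=a2476193-12cf-4601-a38d-6c798fc42708|/dev/sda1|' \
    -e 's|^UUID=AACD-BEFE|/dev/sda2|' /tmp/fstab.test
cat /tmp/fstab.test
```

    Once the copy looks right, apply the same change to /etc/fstab on the deployed machine.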
