• Proofread concept

    10
    0 Votes
    10 Posts
    2k Views
    george1421G

    @Piplup said in Proofread concept:

    I discarded the VLAN idea because it’s too late to implement safely now for me.
    You’re right - there is one L3 10 Gigabit switch and a lot of L2 1 Gigabit switches.
    My question was, is a network as described with the network plan I provided realistic?

    I worried that because everything is in one LAN (192.168.5.0/24) and the ISP router is effectively the DHCP server, this may lead to broadcast storms or other fatal performance loss in the network, because every client has a dynamic IP.

    No worries about the number of hosts. With TCP/IP there is not really an issue with broadcast storms. If you were using an old LAN technology like NetBEUI, SPX, or Banyan VINES then broadcasts would be a concern. With TCP/IP the main type of broadcast is ARP messaging (in general).

    Regarding the HDD - it’s supposed to be 2 SAS HDDs in RAID 1, because these are the only hard drives in the paper server. So effectively 1 HDD. I know 200 Mbit/s is a lot; I’m still debating changing to 2 SSDs. I was just worried they would wear out faster.

    One HDD or 2 in RAID 1 is the same difference, since only one is the leader disk and the other is the mirror or follower disk. If you are using a traditional RAID controller then the onboard cache memory will help a bit with performance. But remember you are dealing with multi-GB files for imaging, so the cache will only help so much. In regards to SSDs, for FOG imaging they will not wear out faster than HDDs. What wears out SSDs is many writes to the drive. In the case of standard FOG imaging it’s write once, deploy (read) many times. SSDs are ideally suited for FOG imaging. I would say the HDD would have a shorter life because of the heads thrashing about the disk when you have multiple imaging sessions going on at the same time.

    Last thing regarding the bottleneck … So, the image server cannot deploy faster than its own read speed and the write speed of the client, right?

    Here are actually the bottlenecks in imaging. Let’s assume a deployment here, server -> client:

    FOG server disk to network
    Network infrastructure
    Network to FOG imaging engine
    FOG imaging engine to disk

    In the case of a FOG deployment, the FOG server does very minimal work. The FOG server only moves data from disk storage to the network adapter and then manages the overall progress of imaging. If you wanted to, you could run the FOG server on a Raspberry Pi 4. The key is getting a fast data path from disk to the network.

    For FOG imaging the target computer does all of the work. The target computer takes in the image from the network, decompresses the image dynamically, and then writes the image to its local hard drive. So the impacts on deployment speed are the network, CPU (GHz and number of cores), memory speed, and the local storage drive.

    So if you were to set up FOG and deploy to a computer, the program that writes the image to disk is called Partclone. Partclone gives a performance number, usually in GB/min. This number is actually a composite number that indicates how fast Partclone can write the image to disk, but behind that number are all of the bottlenecks defined above. Let’s say you take 2 computers: one is a 2010 Core2 Duo with an HDD and the second is a 2019 quad core with an NVMe drive. Using the same FOG server, the Core2 computer will probably deploy in the 4 GB/min range (bottleneck is the CPU or local HDD), while the quad core with the NVMe drive will deploy in the 6.5 GB/min range (bottleneck is the 1 GbE network).
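    As a sanity check on those numbers (my own arithmetic, not from the post), a deploy rate in GB/min can be converted to line rate to see which side of the 1 GbE ceiling it sits on. Note Partclone reports the post-decompression write rate, so the wire actually carries less (compressed) data:

```python
def gbmin_to_gbps(gb_per_min: float) -> float:
    """Convert an imaging rate in GB/min to gigabits per second."""
    return gb_per_min * 8 / 60

# 6.5 GB/min is ~0.87 Gb/s -- right at the ceiling of a 1 GbE link,
# which is why the network is the bottleneck for the fast client.
print(round(gbmin_to_gbps(6.5), 2))  # 0.87

# 4 GB/min is only ~0.53 Gb/s; the link is not saturated, so the
# slow client's CPU or HDD must be the limit.
print(round(gbmin_to_gbps(4.0), 2))  # 0.53
```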

  • Need help for a test setup

    12
    0 Votes
    12 Posts
    3k Views
    george1421G

    @init32 said in Need help for a test setup:

    Your (client) IP address: 192.168.1.229
    Next server IP address: 192.168.1.1
    Relay agent IP address: 0.0.0.0

    Above is the problem with SOHO routers: they put themselves in as the PXE boot server. This is where we use dnsmasq to override this poor behavior.
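    A minimal dnsmasq proxy-DHCP sketch for that override (the file path, 192.168.1.x addressing, and boot-file names are assumptions based on the output above; check the FOG wiki for the exact config for your version):

```conf
# /etc/dnsmasq.d/ltsp.conf -- proxy DHCP: the SOHO router keeps handing out
# IP addresses; dnsmasq only supplies the PXE boot information.
port=0                                    # disable DNS, we only want proxy DHCP
dhcp-range=192.168.1.0,proxy              # run in proxy mode on this subnet
dhcp-boot=undionly.kpxe                   # boot file for legacy BIOS clients
pxe-service=X86PC,"FOG",undionly.kpxe     # BIOS PXE menu entry
pxe-service=X86-64_EFI,"FOG",ipxe.efi     # UEFI clients get ipxe.efi
enable-tftp
tftp-root=/tftpboot
```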

    Well done getting it set up!

  • Storage Node Disk not showing

    Solved
    8
    0 Votes
    8 Posts
    2k Views
    S

    @AlexPDX When I looked at this on the smartphone earlier today it did not load the second picture, so I misunderstood the issue. The master node sends an HTTP request (URL http://x.x.x.x/fog/status/freespace.php?path=/images) to the storage node to get that information. The storage node checks if the given path exists, is readable, and is a directory. If not, it returns no values.

    So your link /images -> /home/fogproject/images is causing the issue. Linking can be somewhat hideous and I would never advise anyone to link to the images directory just to mask that it was set up the wrong way in the first place.

    I suggest you move all the content from /home/ to a temporary location, re-mount that partition (/dev/mapper/centos-home) as /images, and move all the stuff you had in /home/fogproject/images over to the new /images (residing on the extra partition).
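    The check the storage node performs can be reproduced in the shell. This sketch is my own illustration (not FOG code) of why a symlink whose target is missing or unreadable makes freespace.php return nothing:

```shell
#!/bin/sh
# Recreate the freespace.php test: path must be a readable directory.
check_path() {
    if [ -d "$1" ] && [ -r "$1" ]; then
        echo "OK: $1"
    else
        echo "FAIL: $1"
    fi
}

tmp=$(mktemp -d)
mkdir "$tmp/real_images"
ln -s "$tmp/real_images" "$tmp/images"
check_path "$tmp/images"     # link to a valid directory: OK

rm -r "$tmp/real_images"     # target removed: the link now dangles
check_path "$tmp/images"     # FAIL -> freespace.php would return no values
rm -rf "$tmp"
```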

  • weird tftp slow file transfer transferring files to any clients

    Solved
    4
    0 Votes
    4 Posts
    1k Views
    G

    WOW, that solved it.
    Changed it from TFTP to HTTP and instantly saw gigabit transfers.
    Thanks for your help @george1421.
    Also noteworthy: make sure your files are moved over to the default HTTP directory of /var/www/html.

  • Storage node Disk Usage not showing full disk

    5
    0 Votes
    5 Posts
    1k Views
    george1421G

    @dgcortes While I haven’t been following the thread, I see something that is common in a setup if you are not watching for it.

    The root (/) partition has 45 GB allocated to it (which is where /images defaults to) and /home has 435 GB. The /home partition is never used in a FOG setup, so you have 435 GB of wasted space.

  • ISO Boot Ubuntu 18/19 LTS

    14
    0 Votes
    14 Posts
    4k Views
    R

    @xardoniak
    I was using the Ubuntu 18 Server install, but I was able to get it working like this.

    I created an /images/os/ubuntu folder and copied the Ubuntu ISO’s contents in there.

    I created a folder /tftpboot/os/ubuntu.
    There I put the initrd and vmlinuz files from the casper folder in the Ubuntu ISO.

    Here are the parameters I used in FOG.
    kernel tftp://192.168.160.12/os/ubuntu/vmlinuz
    initrd tftp://192.168.160.12/os/ubuntu/initrd
    imgargs vmlinuz initrd=initrd boot=casper root=/dev/nfs netboot=nfs nfsroot=192.168.160.12:/images/os/ubuntu/ ip=dhcp rw
    boot || goto MENU

    This allowed me to start the installer from that usb, but I haven’t got a working live OS running yet, though I hope to have that soon.
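    For the nfsroot= argument above to work, the path has to be NFS-exported. On a stock FOG server /images is already exported, but if the files live elsewhere an /etc/exports entry is needed; this line reuses the addresses from the post, while the export options are my assumption:

```conf
# /etc/exports -- export the extracted ISO contents read-only to the LAN,
# then reload with: exportfs -ra
/images/os/ubuntu  192.168.160.0/24(ro,async,no_subtree_check,no_root_squash)
```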

  • Can you support Chinese?

    2
    0 Votes
    2 Posts
    424 Views
    S

    @xubohead Do you mean language support in FOG software or here in the forums?

    FOG already has a Chinese language pack included, but I am not sure how good the translation is. In the forums we don’t seem to have anyone who understands Chinese, so we prefer English. But sure, you can use online translators and post your request in both Chinese and English at the same time. This way we have the best chance of getting all the details. Our answers will be in English.

  • Size Difference after capture

    5
    0 Votes
    5 Posts
    1k Views
    S

    @Dan_Ansel Ok, now that the other issue is solved and I have a bit more time we might take a look at this one again.

    Though I would not trust the “size on client” value shown in the FOG web UI in all cases, if you really see the image size as 510 GB on the blue Partclone capture screen, I would imagine the disk really is that big.

    Please post the contents of the text file d1.partitions you find in the /images/IMAGENAME/ directory on your FOG server. That will surely give us a clue on the actual disk size.

    Possibly this is just a calculation issue?! e.g. 510 GB * 1000 * 1000 / 1024 / 1024 = 486 GB?!?
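    The decimal-vs-binary unit guess can be checked directly. This is plain arithmetic on the same numbers; note the post applies the 1000/1024 factor twice (an MB-to-MiB conversion), while a full GB-to-GiB conversion applies it three times:

```python
# Drive vendors count in decimal units; partition tools often report binary.
size = 510

# Conversion as written in the post (factor applied twice):
print(round(size * 1000**2 / 1024**2))   # 486

# Full GB -> GiB conversion (factor applied three times):
print(round(size * 1000**3 / 1024**3))   # 475
```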

    @Dan_Ansel said in Size Difference after capture:

    udevd [3760] failed to execute ‘/lib/udev/${exec_prefix}/bin/udevadm’ ‘${exect_prefix}/bin/udevadm trigger -s block -p ID_BTRFS_READY=0’ : No such file or directory

    Where exactly do you see this message? On your FOG server or on the host/client when capturing/deploying? I have not seen this message before. At first I thought you’d see this on your FOG server logs but the more I think about it I can imagine this to be on the host/client and it might point us to something being wrong with your partition layout. Just a wild guess here.

  • Host Screen Resolution

    23
    0 Votes
    23 Posts
    4k Views
    Matthieu JacquartM

    @Sebastian-Roth Hi Seb, no problem, I get it very well; you all did a great job! Keep me in touch if you need any tests 😉

  • Dhcp vendor class question

    8
    0 Votes
    8 Posts
    4k Views
    george1421G

    @george1421 OK, so let’s wrap this thread up nice and neat.

    The HP EliteDesk 705 G5 computers, for whatever reason, do not like the undionly.kpxe iPXE boot loader. undionly.kpxe will issue a DHCP request and the DHCP servers will send an OFFER packet, but iPXE rejects the offer and just sends a DISCOVER packet again. And it continues over and over: sending a DISCOVER, receiving an OFFER, but rejecting the given OFFER.

    We did find that ipxe.kpxe did work correctly on these HP systems, which kind of tells me the UNDI firmware driver in the network adapter is faulty. I’m suspecting a future firmware update will address the issue. In the meantime we had to work out a solution to send ipxe.kpxe to these computers only and undionly.kpxe to all other BIOS-based systems. Luckily the OP had Linux DHCP servers on this subnet, so we set out to see if we could identify these systems by their UUID. Unfortunately, testing showed the UUIDs on these HP systems are globally unique, instead of encoding the model and a unique ID in the UUID field like Dell does. So the OP settled on identifying the systems by MAC prefix. This is the setting we added to the ISC DHCP server on his network:

    class "Legacy-hpbroken" {
        match if (substring(option vendor-class-identifier, 0, 20) = "PXEClient:Arch:00000") and (substring(hardware, 1, 3) = 00:01:02);
        filename "ipxe.kpxe";
    }

  • boot ipxe with 2 LAN

    2
    0 Votes
    2 Posts
    449 Views
    george1421G

    You are going to have to explain a bit more how FOG fits into this picture. What LAN is the FOG server on?
    What LAN is the FOG server’s imaging nic connected to?
    Where is your main dhcp server?
    Do you have full routing between the subnets?
    Do you have a dhcp helper service configured on your subnet router?

  • Cloud FOG Imaging with iPXE boot using USB

    9
    0 Votes
    9 Posts
    3k Views
    george1421G

    @p4cm4n You can do that; there is a tutorial (for UEFI) to create a boot drive the easy way. This will load iPXE from a USB stick and then boot into FOG. https://forums.fogproject.org/topic/6350/usb-boot-uefi-client-into-fog-menu-easy-way

    For those that can’t use iPXE, I have FOS Linux on a USB stick too. You lose about 30% of the functionality of FOG, but you can image with it no problem. https://forums.fogproject.org/topic/7727/building-usb-booting-fos-image

  • LDAP with Access Control, default role assignment at first login

    Solved
    3
    0 Votes
    3 Posts
    506 Views
    Tom ElliottT

    I’ve seen this request but not quite sure how to move forward.

    Please understand, Access controls, with this iteration of FOG Server, are coded after the fact.

    What do I mean by this?

    FOG didn’t really have any real security controls in place. You, indeed, needed to be logged in to do actions of course, but there weren’t any utilities in place for “modifying” access.

    For a period of time, there was a thing called the “mobile” user, which basically just allowed a user to use a mobile interface. This interface was coded alongside the FOG system and was a cumbersome tool to maintain. So when we moved to a responsive design, I removed that “mobile” GUI, as the new GUI is also mobile accessible.

    The Access control plugin is a huge leap toward getting a tool available to limit access based on rules/roles etc…, but it’s not a perfect system as it relies on the User existing in the database first.

    I’m sure we could work to add a utility to enable a “default” role association but right now it doesn’t exist.

  • iPXE open command line

    11
    0 Votes
    11 Posts
    6k Views
    O

    @george1421 Thanks, I really appreciate your help!

  • Inventory

    6
    0 Votes
    6 Posts
    1k Views
  • Image Size differences -Legacy/Uefi

    4
    0 Votes
    4 Posts
    1k Views
    george1421G

    @fatbunny I think I would take this approach. Recreate your master image using the smallest disk possible. Make sure the root LVM volume is the last one allocated on the disk. Capture with FOG; it will still capture as RAW. Then use a post install script that detects either the image name or Linux and issues the LVM commands to extend the volume group to the size of the (new) disk and then extend the root LVM volume to the size of the LVM disk. It’s a bit strange to handle it this way, but it should work.

    The basic idea is to create your source image as small as possible then expand it post deployment.
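    That grow-on-deploy step could be sketched as a FOG postinstall script like the one below. All device, VG, and LV names, the partition number, and the ext4 filesystem are assumptions to adjust for your layout, and growpart comes from the cloud-utils package:

```shell
#!/bin/bash
# Sketch of a FOG postinstall step: grow the partition, PV, LV, and
# filesystem to fill the (possibly larger) target disk.
grow_root() {
    growpart /dev/sda 2                          # extend partition 2 to end of disk
    pvresize /dev/sda2                           # let the physical volume see the new space
    lvextend -l +100%FREE /dev/mapper/vg0-root   # grow the root LV into all free extents
    resize2fs /dev/mapper/vg0-root               # grow the ext4 filesystem to match
}

# A real postinstall script would call grow_root unconditionally; it is
# guarded here so the sketch is harmless outside the FOS deploy environment.
if [ "${FOG_POSTINSTALL:-0}" = "1" ]; then
    grow_root
fi
```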

  • Invalid Storage Group

    Solved
    17
    0 Votes
    17 Posts
    5k Views
    D

    I know this is kind of old, but I had this same issue yesterday. I fixed it by running the installer and upgrading.

    I think it would have been fixed by simply re-running the installer even if I hadn’t needed to upgrade; I was only one version behind. I’ve noticed most issues with FOG can be fixed this way.

  • New Fog server set up

    17
    0 Votes
    17 Posts
    5k Views
    JJ FullmerJ

    I just figured out that sending stuff out with the image doesn’t work as well as one would like. None of the fog hosts seem to be working and the like. I try to make it grab the new image off one of the clients and it doesn’t actually grab them.

    What do you mean by this? Do you see anything in the C:\fog.log on the clients? Or are you saying that they aren’t imaging correctly?

    For that example snapin, if you’re just wanting that file to show up in C:\users, a simple snapin pack will do it. I’d take a look at the link @george1421 gave on SnapinPacks and make a zip with a script that copies that file to C:\users.

    As a simpler test you could create a powershell or batch script that just makes a hello world text file and see if that works.

    For example, in PowerShell:

    "Hello World!" | Out-File -encoding oem -filePath C:\users\public\Desktop\hello.txt -force;

    So put that into a file called hello.ps1, make a new snapin with the PowerShell template, and upload the simple script. The snapin’s read-only command at the bottom should look like this:
    powershell.exe -ExecutionPolicy Bypass -NoProfile -File hello.ps1
    Then add it to a host and deploy it as a single snapin task and see if it works.
    Once you deploy it, if you have access to the host you can run this in PowerShell to open a live view of the fog log and watch what’s happening on the client:

    cat C:\fog.log -Wait  # cat is an alias for Get-Content. You can also do this with Get-FogLog if you install the FogApi PowerShell module

  • FOG In Remote Environment

    6
    0 Votes
    6 Posts
    1k Views
    N

    @george1421 Yeah, I think having a FOG server at the 3 locations is probably the logical option. Weirdly enough, building those out plus ongoing support would still be significantly cheaper than the ongoing subscription we have with SmartDeploy.

  • Unknown character appears; UEFI boot "ipxe.efi�"

    Solved
    7
    0 Votes
    7 Posts
    2k Views
    S

    @george1421
    Thank you very much, that’s all I need to know.
