
    Best posts made by george1421

    • Can php-fpm make fog web-gui fast

      Note: this is more of a documentation blog post than a question. It's intended to serve as the basis for the configuration required to enable php-fpm to speed up php code processing.
      Note: This post is only about looking into options to speed up the fog web-gui front end and will have NO IMPACT on FOG imaging. I have another thread on what is required to make FOG imaging faster.

      For this post I'm focusing on CentOS 7, but I suspect the configuration will be similar for Debian variants.

      By default the fog installer installs php-fpm, but the fog installer doesn't activate apache to use it. The following steps are what I did to enable php-fpm to see if it would speed up the FOG web-gui. This speed-up is not so much for interacting with the web-gui, but for the backend fog client that checks into the master server every XX minutes to look for new tasks. If you have 1000 computers checking in every 5 minutes (the fog default), then averaged out you will have about 3.3 check-ins every second. But we know that the check-in times will be randomized: you might have 20 check-ins within one second, nothing for 3 seconds, then a flood again for the next.
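A quick back-of-the-envelope check of that rate (the 1000-client and 5-minute figures are just the example numbers from above):

```shell
# average check-in rate = clients / check-in interval (in seconds)
clients=1000
interval=$((5 * 60))   # fog's default 5 minute check-in period, in seconds
awk -v c="$clients" -v i="$interval" 'BEGIN { printf "%.2f check-ins per second\n", c / i }'
```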

      1. Edit the mpm configuration file to tell the apache mpm module to use events and not pre-fork
        vi /etc/httpd/conf.modules.d/00-mpm.conf
      2. Comment out
        #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
      3. Uncomment
        LoadModule mpm_event_module modules/mod_mpm_event.so
      4. Save and exit from 00-mpm.conf
      5. Change to the apache configuration directory
        /etc/httpd/conf.d
      6. Copy the standard php.conf file to create a new php-fpm.conf file
        cp /etc/httpd/conf.d/php.conf /etc/httpd/conf.d/php-fpm.conf
      7. Rename the original php.conf file so that apache won’t see it during startup
        mv /etc/httpd/conf.d/php.conf /etc/httpd/conf.d/php.conf.disabled
      8. Edit the copied php-fpm.conf file
        vi /etc/httpd/conf.d/php-fpm.conf
      9. Comment out the following line
        #SetHandler application/x-httpd-php
      10. Insert the following line just below the commented out line
        SetHandler "proxy:fcgi://127.0.0.1:9000"
      11. Save and exit the php-fpm.conf file
      12. Change to the php-fpm directory
        cd /etc/php-fpm.d
      13. Remove the default www.conf in the php-fpm.d directory (this file is created by the fog installer and not used in this setup)
        rm /etc/php-fpm.d/www.conf
      14. Create a new file called fog.conf
        vi /etc/php-fpm.d/fog.conf
      15. Paste in the following
      [fog]
      user = apache
      group = apache
      
      listen = 127.0.0.1:9000
      
      ;listen.owner = apache
      ;listen.group = apache
      ;listen.mode = 0660
      ;listen.acl_users = apache
      ;listen.acl_groups =
      
      listen.allowed_clients = 127.0.0.1
      
      pm = dynamic
      pm.max_children = 50
      pm.start_servers = 5
      pm.min_spare_servers = 5
      pm.max_spare_servers = 35
      ;pm.process_idle_timeout = 10s;
      pm.max_requests = 500
      ;pm.status_path = /status
      ;ping.path = /ping
      ;ping.response = pong
       
      access.log = /var/log/php-fpm/$pool.access.log
      ;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%"
      slowlog = /var/log/php-fpm/$pool-slow.log
      
      ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www@my.domain.com
      ;php_flag[display_errors] = off
      php_admin_value[error_log] = /var/log/php-fpm/fog-error.log
      php_admin_flag[log_errors] = on
      ;php_admin_value[memory_limit] = 128M
      php_value[session.save_handler] = files
      php_value[session.save_path]    = /var/lib/php/session
      
      ; we will use these settings when memcache (d) is configured.
      ;php_value[session.save_handler] = memcached
      ;php_value[session.save_path] = "127.0.0.1:11211"
      
      php_value[soap.wsdl_cache_dir]  = /var/lib/php/wsdlcache
      
      16. Save and exit fog.conf
      17. Create a php information page in the web root
        vi /var/www/html/info.php
      18. Paste in the following
      <html>
      <body>
      
      <?php
       phpinfo();
      ?>
      
      </body>
      </html>
      
      19. Save and exit the info.php page.
      20. Change the owner of that page to apache
        chown apache:apache /var/www/html/info.php
      21. From a web browser call the info page with http://<fog_server_ip>/info.php. The page should look similar to below
        <insert_info_page_image>
      22. Now let's restart both apache and php-fpm
        systemctl restart php-fpm
        systemctl restart httpd
      23. Give it a few seconds for both services to initialize.
      24. Now call the same info.php page again. The page should look similar to below. NOTE: the Server API variable is now 'FPM/FastCGI'
        <insert_info_page_image>
      25. Now access the FOG management console
      26. Access a few of the management pages; you will note that after a few page clicks the pages respond faster to your selections.
      27. To confirm that php-fpm is working you can inspect the log files being created in the /var/log/php-fpm directory. The php errors will now be listed there instead of in the default apache error log (because php-fpm is now handling the php code).
      28. You may remove the info.php page from the apache root directory. It's no longer needed.
      29. Done (for now; I'm currently looking into the memcache option, but more on that later).

      With these updates we have now handed off php code execution to a dedicated php engine, and we are no longer relying on apache to both serve web pages and execute php code. Will this help the sites that have hundreds of clients? I hope so.
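As a quick sanity check after the restart, you can pull the Server API value out of the info.php page from the command line instead of eyeballing the browser. A small sketch; the grep pattern assumes the two handler names phpinfo() normally reports:

```shell
# hedged helper: report which Server API a saved info.php page claims.
# Grab the page first, e.g.: curl -s http://<fog_server_ip>/info.php > /tmp/info.html
server_api() {
    # phpinfo() prints a "Server API" row; pull out the handler name
    grep -oE 'FPM/FastCGI|Apache 2\.0 Handler' "$1" | head -n 1
}

# server_api /tmp/info.html   -> should print FPM/FastCGI once php-fpm is live
```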

      Next steps installing memcache

      posted in General Problems
      george1421
    • RE: WIN10 Multicast Imaging Issues

      I remoted into Joe’s system and looked about for a bit.

      What I found was there were 3 versions of php and php-fpm installed, and all were trying to run at the same time.

      I stopped php-fpm v5 and php-fpm v7.0, and left php-fpm 7.2 running.

      I could not find a handy copy of the official fog configuration for the www.conf file, so I hand edited the distro's version, setting max processes to 50, start and min servers to 5, and spare servers to 6. I also set the php max memory size to 256M.

      After that I asked Joe to multicast deploy an image. He reported that he deployed to 24 systems. I watched the process status screen and the number of active php-fpm processes never went above 6 workers (where 50 is the max). Joe stated that all 24 machines deployed to completion.

      So at this point I'm pretty sure that having multiple versions of php installed was the root cause of the issues on this system. I'm not ready to rule out a www.conf setting that is not right, but I'm happy with what I saw from the performance of the php-fpm service in this multicast deployment.
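If you want to watch the worker pool yourself during a deployment, a one-liner like this (run on the FOG server) shows how many php-fpm processes are alive; the process name is the stock one on CentOS and may differ on other distros:

```shell
# count running php-fpm worker processes (prints 0 if the service isn't running)
ps -C php-fpm --no-headers | wc -l
```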

      posted in Windows Problems
    • Dynamic FOG Replicator transfer rates

      Note: this code has not been completely tested. As it stands right now it's more of a proof of concept than a functioning system

      I have a desire to have different replication rates between storage nodes based on time of day. During the day time I want to cap transfer rates between the FOG Master node and Storage nodes to 1Mb/s and then during the night time transfer as fast as possible (based on the WAN link speed).

      So after hacking about for a while I came up with this solution. It's a bit complex but simple at the same time. The first step is to update the bitrate values in the FOG database and then restart the FOG replicator service.

      For my project I only need 2 different bit rates: day and night. You might expand this concept to any number of transfer rates per day, though I might suggest against having more than 2 transfer periods; that advice is based on speculation.

      For this hack I needed to create 2 bash scripts. One for daytime rates and one for nighttime rates.

      1. Let's start by making a place to put our bash scripts. For this example I'm placing my files in a new directory under the /opt/fog directory.
        sudo mkdir /opt/fog/cron
      2. Next we need to create our daytime bash script
        sudo touch /opt/fog/cron/replicator.daytime
        sudo chmod 755 /opt/fog/cron/replicator.daytime
        sudo vi /opt/fog/cron/replicator.daytime
      3. In the replicator.daytime we’ll paste the following code
      #!/bin/bash
      
      . /opt/fog/.fogsettings
      
      ## Update the array with the storage node [name]=value value pairs ##
      declare -A StorageNodes=( [ATLNode]=1000 [NYCNode]=1000 [LANode]=600 )
      ## don't change anything below this line ##
      
      # Loop through all nodes in the associative array
      for snode in "${!StorageNodes[@]}"
      do 
        sql="mysql -u ${snmysqluser} --password='${snmysqlpass}' -e 'UPDATE nfsGroupMembers SET ngmBandwidthLimit=${StorageNodes[$snode]} WHERE ngmMemberName like \"$snode\" ' fog";
        eval $sql;
      done
      
      service FOGImageReplicator restart;
      

      The notable bit in this script is the line declare -A StorageNodes=( [ATLNode]=1000 [NYCNode]=1000 [LANode]=600 ). This is a set of key value pairs that will be used to update the FOG database with the new bit rates. You can add as many or as few storage nodes to this array as necessary; just ensure you keep the proper formatting, capitalization and spacing. Each key must exactly match the name of a storage node from the FOG management GUI, and each value must be an integer. Again, watch your spacing when adding or removing storage nodes. There may be better ways to go about this, but "it works for me"; YMMV.
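If you want to see exactly what SQL the loop will run before letting it loose on your database, you can dry-run it by printing the statements instead of eval'ing them. A minimal sketch using the same table and column names as the script above (the node names are of course examples):

```shell
#!/bin/bash
# build_sql NODE RATE -> print the UPDATE statement the replicator script would run
build_sql() {
    echo "UPDATE nfsGroupMembers SET ngmBandwidthLimit=$2 WHERE ngmMemberName like \"$1\""
}

# same associative array shape as the daytime script above
declare -A StorageNodes=( [ATLNode]=1000 [NYCNode]=1000 [LANode]=600 )

for snode in "${!StorageNodes[@]}"; do
    build_sql "$snode" "${StorageNodes[$snode]}"
done
```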
      4. Next we need to create our nighttime bash script
        sudo touch /opt/fog/cron/replicator.nighttime
        sudo chmod 755 /opt/fog/cron/replicator.nighttime
        sudo vi /opt/fog/cron/replicator.nighttime
      5. In the replicator.nighttime we'll paste the following code

      #!/bin/bash
      
      . /opt/fog/.fogsettings
      
      ## Update the array with the storage node [name]=value value pairs ##
      ## in our declarations below we'll set the value to 0 (no restrictions on speed)
      declare -A StorageNodes=( [ATLNode]=0 [NYCNode]=0 [LANode]=0 )
      ## don't change anything below this line ##
      
      # Loop through all nodes in the associative array
      for snode in "${!StorageNodes[@]}"
      do 
        sql="mysql -u ${snmysqluser} --password='${snmysqlpass}' -e 'UPDATE nfsGroupMembers SET ngmBandwidthLimit=${StorageNodes[$snode]} WHERE ngmMemberName like \"$snode\" ' fog";
        eval $sql;
      done
      
      service FOGImageReplicator restart;
      
      6. We should probably test our bash scripts to ensure we didn't make a typo.
      7. Key in the following from the command line:
        sudo /opt/fog/cron/replicator.nighttime
        sudo /opt/fog/cron/replicator.daytime
      8. The last bit we need to do is create an entry for the cron service to run the appropriate bash script at the right time of day.
      9. We need to edit the crontab file using this command:
        sudo crontab -e
      10. Add this command for the night time schedule (I want to start at 5:30pm)
      30 17 * * * /opt/fog/cron/replicator.nighttime
      
      11. Add this command for the day time schedule (I want to start at 5:00am)
      0 5 * * * /opt/fog/cron/replicator.daytime
      
      12. Exit out of the crontab editor
      13. Restart cron with the latest data:
        sudo service crond restart

      To help you understand the cryptic crontab format, here is the field position structure.

      0 5 * * * /opt/fog/cron/replicator.daytime
      
      MIN HOUR DOM MON DOW CMD
      
      Format definitions and allowed value:
      MIN     Minute field    0 to 59
      HOUR    Hour field      0 to 23
      DOM     Day of Month    1-31
      MON     Month field     1-12
      DOW     Day Of Week     0-6
      CMD     Command to be executed.
      
      Note: the star ( * ) is a position holder that matches 'any' 
      

      This concludes the setup required to change the replication rates based on time of day. Is this an ideal solution? No. Will it work? Yes. Are there some caveats in the design? Yes. So in short, it worked for me.

      posted in Tutorials
    • RE: New to Fog + Tough scenario = Mobile FOG Server

      @george1421 I had a lengthy chat session with the OP. I also remoted in with teamviewer to look at his install. His install of Ubuntu was a bit confused, and Ubuntu's desire to have NetworkManager take over the management of dnsmasq added a bit of complexity to getting this up and running. Per our discussion I was going to look to see if linux mint was a better choice than ubuntu 16.04. In a way it is, and in a way it's the same.

      I was able to install FOG on linux mint 18.2 without any issues. I was able to unhook dnsmasq from NetworkManager without breaking dnsmasq. So I have a process on how to upgrade the native dnsmasq 2.75 to 2.77 that will work reliably.

      In the end the OP's goal is to make a mobile deployment server he can take to remote sites, plug in, and image computers. He won't have access to the remote site's dhcp server, so he does need dnsmasq to override any settings that the remote site has for pxe booting. There is still a chance it won't work in all situations, but I feel confident that it should work in most.

      So the recommendation I have for the OP is that you can install Ubuntu 16.04 or 17.04, or Linux Mint 18.2 and we can make it work. We will also need Wayne’s mobile fog script from here: https://github.com/FOGProject/fog-community-scripts to complete the mobile FOG setup.

      posted in General Problems
    • RE: Windows reimaging rights questions

      @OrKarstoft Well, let's start off with this: licensing rights are out of scope for the FOG Project. It is the IT admin's responsibility to ensure their company is in compliance with all EULAs. FOG is only a tool to aid in image deployment.

      With that said, if you haven't read this article from Spiceworks it's worth a read: https://community.spiceworks.com/how_to/124056-reimaging-rights-for-windows-10-licensing-how-to

      In a nutshell, Windows OEM licenses don't cover reimaging rights. You may only install an OEM image from authorized OEM media. You may not adjust the OEM image, recapture it, and redeploy that image. OEM images must be installed as they came from the OEM manufacturer without any adjustments.

      What you can do is buy a single Windows 10 (Pro) VL license. That single license allows you to create a custom image for deployment. You would use the VL media to create your golden image, not OEM media, and you will use your VL key to activate the workstation. As long as your OEM windows version is exactly the same as your VL media you are allowed to do this. You are not allowed to purchase 1 VL key for Windows 10 Pro (version does not matter) and upgrade your OEM Windows 10 Home to a Windows 10 Pro license; that is a platform upgrade. If this is your case then you need to purchase one VL key for each machine you are converting from Win10 Home to Win10 Pro. The same holds true upgrading from Win10 Pro OEM to Win10 Enterprise: in this case you will need 1 Win10 Ent license for every Win10 Pro OEM you want to upgrade.

      Your VL key will not activate an OEM image.

      posted in Windows Problems
    • FOG Postinit scripts, before the magic begins...

      Part 1 Postinit Scripts

      The FOG developers added a new feature with the release of FOG 1.3.4. This feature is called “postinit scripts”. These scripts are akin to the FOG postdownload scripts, which are called after the system has been imaged but before the target computer reboots. In contrast, the postinit scripts are called just after the FOS engine starts and before it starts working on the target computer. More precisely, these user created scripts are called just before the main fog selector program executes. Please be aware that the postinit hook script fog.postinit is called every time the FOS engine (the high performance customized Linux OS that runs on the target hardware) is loaded. To state this another way, the postinit scripts are run before the following actions: disk wipe, compatibility tests, quick registration, full registration, inventory, quick imaging, capture, and deploy. Your custom postinit scripts must take this into account and act accordingly so that your code only runs on specific actions (more on this later).

      This feature was created to give the FOG admin the ability to interact with the target hardware just before the FOS engine starts to interact with the target computer. For example, let's say you had a raid controller that required specific commands to be executed before image capture can take place. You would use a FOG postinit script to prepare the raid array for imaging (flush cache to disk, quiesce the array, etc). Another example: some hardware assisted software raid (commonly called “fake-raid”) controllers require specific drivers to be loaded into the operating system before the OS can actually “see” the raid array as an array. FOG does provide a software raid controller (mdadm) that can see MS Windows software raid and some intel software raid controllers [ https://forums.fogproject.org/topic/7882/capture-deploy-to-target-computers-using-intel-rapid-storage-onboard-raid ] without any custom commands. But there are other software raid controllers that need specific setup parameters to ensure FOS is connected to the array properly. This is where the postinit scripts come into play.

      In the example below, this command will initialize a specific hardware assisted software raid array. Without this command the FOS engine sees /dev/sda and /dev/sdb as just a bunch of (independent) disks [JBOD] and not a real raid array.

      mdadm --build /dev/md0 --raid-devices=2 --chunk=16 --level=0 /dev/sda /dev/sdb
      

      So how do I use this fancy new feature?

      As I stated above, beginning with FOG 1.3.4 the developers created a directory under /images/dev to hold your postinit scripts. The full path to the FOG hook script is /images/dev/postinitscripts/fog.postinit for image capture and /images/postinitscripts/fog.postinit during all other FOS engine operations. Since this location changes based on the type of operation being performed, it's probably best for your script to use the FOG supplied variable $postinitpath, which will always point to the proper location. You can either add the required commands directly to the fog.postinit script (not recommended), or create a new bash script and then call your new script from the fog.postinit script (recommended).

      The following is an example framework you can use to write your own postinit script. In this example let's call our script fog.LenovoP50
      touch /images/dev/postinitscripts/fog.LenovoP50
      chmod 755 /images/dev/postinitscripts/fog.LenovoP50
      vi /images/dev/postinitscripts/fog.LenovoP50
      Then paste in the following text:

      #!/bin/bash
      
      # place script commands here that should execute every time for every FOS action
      
      ## We need this command to run to enable the software raid on the Lenovo P50
      mdadm --build /dev/md0 --raid-devices=2 --chunk=16 --level=0 /dev/sda /dev/sdb
      
      # I added some additional check here just in case you wanted to highly customize the postinit script's actions. This section is not mandatory.    
      if [[ -n $mode && $mode != +(*debug*) && -z $type ]]; then
          case $mode in
              wipe)
                  # fog wipe disk
                  ;;
              checkdisk)
                  # fog check disk
                  ;;
              badblocks)
                  # fog disk surface test
                  ;;
              autoreg)
                  # fog quick registration
                  ;;
              manreg)
                  # fog full registration
                  ;;
              inventory)
                  # fog full inventory
                  ;;
              quickimage)
                  # fog quick image
                  ;;
              *)
                  # all other generic operations
                  ;;
          esac
          # place script commands here that should be run for any of the utility functions
      else
          case $type in
              down)
                  # fog image deploy
                  ;;
              up)
                  # fog image capture
                  ;;
              *)
                  # the code should never get here, we'll just add so the script doesn't break
                  ;;
          esac
          # place script commands here that should be run for either image capture or deploy
      fi
      
      

      Now, to call our custom postinit script fog.LenovoP50, we need to append the following line to the fog hook bash script fog.postinit.

      . $postinitpath/fog.LenovoP50
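If you'd rather the hook not blow up when the custom script is missing (for example on a node where you never copied it), a slightly defensive variant of that append is a file-existence guard; $postinitpath is the FOG-supplied variable mentioned above:

```shell
# in fog.postinit: only source the custom script if it actually exists
if [[ -f "$postinitpath/fog.LenovoP50" ]]; then
    . "$postinitpath/fog.LenovoP50"
fi
```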
      

      ref: https://forums.fogproject.org/topic/9056/dmraid-and-mdadm

      posted in Tutorials
    • RE: FOG MENU FILE

      @breit See, here is where we have the problem. Can we assume you migrated from FOG 0.32 or earlier?

      FOG 1.3.0 or newer (possibly 1.1.x and newer) doesn't use syslinux (i.e. pxelinux.0) for anything, so building a pxelinux.cfg doesn't do you any good. FOG uses iPXE for its boot loader now, not syslinux, and the menu structures are not compatible. You can build a pretty complex iPXE menu structure, but that needs to be done using the tools given to you: the way to manage the FOG iPXE menus is via the FOG web gui. I suppose you might be able to chainload to syslinux from FOG, but the iPXE menuing system is substantially more robust than syslinux.

      posted in General Problems
    • RE: Question about Kaspersky

      If you know the unattended command line switches to install Kaspersky then it should be no problem via a snapin, as long as the fog service is installed.

      The only caveat (in general) is that the fog service runs as the local computer account SYSTEM, which has no desktop but full authority on the local computer. If you have applications that have to interact with the desktop, or reach outside the computer to a network share, you will have to make adjustments.

      posted in Windows Problems
    • RE: File to file network backup (Not a tutorial yet)

      Software like FOG and Clonezilla do backups at the block level; they image the entire drive, block by block. For what you want to back up, you really need a file level backup so that you can restore the files post imaging instead of the entire disk (as you would with FOG).

      Now you could pxe boot a live image of linux (i.e. puppy linux) and copy off the user files to an nfs share, then image the system, then pxe boot again using a live linux and copy them back.

      I can tell you that when we migrate users at our office, we use USMT to copy their profile and files to a network share, clone the system, and then at the end use USMT to copy the files back to the new system. This is all done in the windows realm.

      posted in Tutorials
    • RE: Fog general questions

      @rabus The client communicates with the server over http for command and control functions, and then NFS and FTP during imaging.

      The client uses http and tftp during the pxe booting process.
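For reference, those protocols map to firewall openings on the FOG server. This is a hedged sketch for a firewalld based distro (e.g. CentOS 7); the service names are firewalld's, and which ones you actually need depends on the roles the box performs:

```shell
# open the services FOG traffic relies on: web/check-in (http), pxe boot file
# transfer (tftp), imaging transport (nfs), and image management (ftp).
# The command guard makes this a clean no-op where firewalld isn't installed.
if command -v firewall-cmd >/dev/null; then
    firewall-cmd --permanent --add-service={http,tftp,nfs,ftp}
    firewall-cmd --reload
fi
```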

      I feel there is some undisclosed meaning in your questions. If you were clearer about what you need, we might be able to provide a better quality answer for you.

      posted in General Problems
    • RE: Issue joining domain & activating Windows after deployment

      @Sebastian-Roth said in Issue joining domain & activating Windows after deployment:

      @george1421 has an idea on why this is making problems?

      @ckasdf was the golden image created from OEM media or did you use the MS VLK media to create this image?

      Was this golden image sysprep'd?

      I find it strange that setupcomplete.cmd is only running when someone logs in. This should not happen (ever), since there should be no connection between setupcomplete.cmd and the login process. This batch file is run by WinSetup at the end of OOBE, just before the first login prompt is displayed.

      I could see a case, if someone used OEM media, where setupcomplete.cmd would not run, and they were using a first-run section of the unattend.xml file to run it, where windows would be confused and wait to start the fog service until someone logged in.

      That also brings me to the error message about starting the fog service. When the setupcomplete.cmd batch file is run at the end of OOBE, it is executed in the SYSTEM user context. When it is run after login, it runs in the context of the current user. Even if the current user is a local admin, it would need to be run from an elevated command window to interact with service settings. So I understand why fog is failing to start when the user logs in.

      posted in Windows Problems
    • RE: FOG-casting across VLANs (subnets)

      (document placeholder)

      0_1495237622562_Firewall_ Rules_01.png

      0_1495237634074_Firewall_ Rules_02.png

      posted in Tutorials
    • RE: Bitlocker network unlock (WDS) and FOG

      Right now your only option is to image on an isolated network, away from your production network. I suspect that WDS is using proxydhcp, which will override your settings in dhcp options 66 and 67. There is no way around this AFAIK.

      It would be interesting to see what WDS is actually doing. This tutorial tells you how to capture the pxe booting traffic with the fog server: https://forums.fogproject.org/topic/9673/when-dhcp-pxe-booting-process-goes-bad-and-you-have-no-clue

      Or you can use wireshark with a capture filter of port 67 or port 68 or port 69 or port 4011

      If you want us (or me) to look at it, upload the pcap to a google drive and either post the link here or IM me the link, and I'll review it. It would be interesting to know exactly what WDS is doing here. But in the end, having an isolated (but routable) imaging network is probably your only solution. You just need a network where you can limit the broadcast domain to only that subnet.

      posted in General Problems
    • RE: Because? and who? change Sequence boot UEFI?

      Just be aware that with uefi machines, windows has the ability to (and will) change the boot order to make the windows boot manager first in the list. This is a Microsoft thing and not something that FOG is doing.

      How can you tell? Right after imaging, but before windows OOBE starts, go into the firmware and look at the boot order. If FOG were doing it, right then it should be pointing to the windows boot manager. But since FOG is not doing this (AFAIK), it should still be pxe boot. Then look at it after windows OOBE runs; my bet is it will be the windows boot manager, thank you Microsoft. So why doesn't this happen on bios computers? Because bios doesn't give the operating system the ability to change the boot order.
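If you want to verify (or undo) this yourself from Linux on a uefi box, efibootmgr can show and set the boot order. A sketch; the entry numbers below are made-up examples, so check your own efibootmgr -v output first:

```shell
# list the current UEFI boot entries and their order (needs an EFI system and
# root; the guard makes this a no-op where efibootmgr isn't installed)
if command -v efibootmgr >/dev/null; then
    efibootmgr -v || true   # "|| true": don't abort if the system isn't EFI
fi

# to put the network/PXE entry (say Boot0003) back in front of Windows Boot
# Manager (say Boot0001), you would run:
#   efibootmgr -o 0003,0001
```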

      posted in Windows Problems
    • Adding additional image storage space to FOG server

      This tutorial will cover adding an additional hard drive to your FOG server to store more images. We will format and mount the new hard drive on the FOG server, and then create a second storage node definition in the FOG server configuration that points at the new drive.

      In this example I’ve added an additional vmdk (hard drive) to my testing FOG server. In the example below this new hard drive is connected to the fog server as /dev/sdb

      ( note1: for testing I also added a 3rd vmdk just in case I needed it later. That one is connected as /dev/sdc. That hard drive will not be used in this tutorial, so you may ignore it for the rest of this document )

      ( note2: my testing fog server's OS is CentOS 7, so the instructions are going to be CentOS centric. You should be able to translate them to other linux distributions pretty easily with a little Google-fu )

      1. First we’ll use lsblk to understand what block devices are connected to our fog server. You’ll notice that there are 2 “new” hard drives attached to my fog server without any partitions (sdb and sdc).
      # lsblk
      NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sda               8:0    0   30G  0 disk
      ├─sda1            8:1    0  500M  0 part /boot
      └─sda2            8:2    0 29.5G  0 part
       ├─centos-root 253:0    0 26.5G  0 lvm  /
       └─centos-swap 253:1    0    3G  0 lvm  [SWAP]
      sdb               8:16   0   40G  0 disk
      sdc               8:32   0   50G  0 disk
      sr0              11:0    1 1024M  0 rom
      
      2. Now let's create a partition on /dev/sdb using the fdisk command. I'm just going to post the keystrokes needed to create the partition using fdisk
        fdisk /dev/sdb

      And now the required keystrokes

      n
      p
      1
      <enter>
      <enter>
      w
      

      The actual fdisk actions will look like this:

      Welcome to fdisk (util-linux 2.23.2).
      
      Changes will remain in memory only, until you decide to write them.
      Be careful before using the write command.
      
      Command (m for help): n
      Partition type:
         p   primary (0 primary, 0 extended, 4 free)
         e   extended
      Select (default p): p
      Partition number (1-4, default 1): 1
      First sector (2048-83886079, default 2048):
      Using default value 2048
      Last sector, +sectors or +size{K,M,G} (2048-83886079, default 83886079):
      Using default value 83886079
      Partition 1 of type Linux and of size 40 GiB is set
      
      Command (m for help): w
      The partition table has been altered!
      
      Calling ioctl() to re-read partition table.
      Syncing disks.
      #
      
      3. Use the lsblk command to confirm the partition is now visible on /dev/sdb
        lsblk
      NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sda               8:0    0   30G  0 disk
      ├─sda1            8:1    0  500M  0 part /boot
      └─sda2            8:2    0 29.5G  0 part
        ├─centos-root 253:0    0 26.5G  0 lvm  /
        └─centos-swap 253:1    0    3G  0 lvm  [SWAP]
      sdb               8:16   0   40G  0 disk
      └─sdb1            8:17   0   40G  0 part
      sdc               8:32   0   50G  0 disk
      sr0              11:0    1 1024M  0 rom
      
      4. Now let's format the partition (/dev/sdb1)
        mkfs.ext4 /dev/sdb1
        ( note: you may want to consider the xfs file system instead of ext4, because xfs handles huge image files better )
      mke2fs 1.42.9 (28-Dec-2013)
      Filesystem label=
      OS type: Linux
      Block size=4096 (log=2)
      Fragment size=4096 (log=2)
      Stride=0 blocks, Stripe width=0 blocks
      2621440 inodes, 10485504 blocks
      524275 blocks (5.00%) reserved for the super user
      First data block=0
      Maximum filesystem blocks=2157969408
      320 block groups
      32768 blocks per group, 32768 fragments per group
      8192 inodes per group
      Superblock backups stored on blocks:
              32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
              4096000, 7962624
      
      Allocating group tables: done
      Writing inode tables: done
      Creating journal (32768 blocks): done
      Writing superblocks and filesystem accounting information: done
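
      If you follow the editor note above and choose XFS instead, the format step looks like this (it requires the xfsprogs package, and the fstab entry added later must then say xfs rather than ext4). This sketch runs against a scratch image so it can be tried safely; on the real server the target would be /dev/sdb1.

```shell
# XFS alternative to the mkfs.ext4 step (needs xfsprogs installed).
# Demo target is a scratch image; use /dev/sdb1 on the real server.
TARGET=/tmp/xfs-demo.img
truncate -s 512M "$TARGET"
mkfs.xfs -q -f "$TARGET"
```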
      
      1. Next we will create a new mount point (directory) to attach our hard drive partition (/dev/sdb1) to.
        mkdir /images2
      2. Edit the fstab so our new drive is mounted to our mount point every time we reboot the fog server.
        vi /etc/fstab
      3. Insert this line at the bottom of the fstab
      /dev/sdb1 /images2 ext4 defaults 0 1
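
      Device names like /dev/sdb1 can shift if disks are later added or removed. A more robust variant of the line above keys the mount on the filesystem UUID instead; the UUID below is a placeholder, the real value comes from running blkid /dev/sdb1 on your server:

```
# /etc/fstab fragment; the UUID shown is illustrative only
UUID=0b1f3c5d-1111-2222-3333-444455556666 /images2 ext4 defaults 0 1
```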
      
      1. Use the df command to confirm the new drive is not yet mounted (just to show a before-and-after example)
        df -h
      Filesystem               Size  Used Avail Use% Mounted on
      /dev/mapper/centos-root   27G  9.5G   18G  36% /
      devtmpfs                 1.9G     0  1.9G   0% /dev
      tmpfs                    1.9G     0  1.9G   0% /dev/shm
      tmpfs                    1.9G  8.6M  1.9G   1% /run
      tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
      /dev/sda1                497M  217M  281M  44% /boot
      tmpfs                    380M     0  380M   0% /run/user/0
      
      1. Notice that /dev/sdb does not appear in the printout above
      2. Tell the file system to mount all devices listed in the fstab. I’m doing it this way to ensure that when the FOG server reboots, the drives are mounted correctly. We could use the mount command directly, as in mount -t ext4 /dev/sdb1 /images2, but that wouldn’t guarantee that we keyed things into the fstab file correctly.
        mount -a
      3. Now repeat the df command
        df -h
      Filesystem               Size  Used Avail Use% Mounted on
      /dev/mapper/centos-root   27G  9.5G   18G  36% /
      devtmpfs                 1.9G     0  1.9G   0% /dev
      tmpfs                    1.9G     0  1.9G   0% /dev/shm
      tmpfs                    1.9G  8.6M  1.9G   1% /run
      tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
      /dev/sda1                497M  217M  281M  44% /boot
      tmpfs                    380M     0  380M   0% /run/user/0
      /dev/sdb1                 40G   49M   38G   1% /images2
      
      1. Note that /dev/sdb1 is now mounted on /images2
      2. Let’s create the required directory structure on our new drive
      mkdir /images2/dev
      mkdir /images2/dev/postinitscripts
      mkdir /images2/postdownloadscripts
      
      cp /images/dev/postinitscripts/* /images2/dev/postinitscripts
      cp /images/postdownloadscripts/* /images2/postdownloadscripts
      touch /images2/dev/.mntcheck
      touch /images2/.mntcheck
      chown -R fogproject.root /images2 
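
      For reference, the mkdir and touch steps above collapse into two commands with mkdir -p. This sketch defaults to a scratch path so it can be tried harmlessly; on the real server set IMAGES2=/images2 and still run the cp and chown steps shown above.

```shell
# IMAGES2 is the new images root; the /tmp default keeps this demo harmless.
IMAGES2="${IMAGES2:-/tmp/images2-demo}"
mkdir -p "$IMAGES2/dev/postinitscripts" "$IMAGES2/postdownloadscripts"
touch "$IMAGES2/.mntcheck" "$IMAGES2/dev/.mntcheck"
```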
      
      1. We need to create NFS shares for our new disk. The FOS engine will mount these shares to capture and deploy images.
      2. Use the showmount command to list the existing (current) shares
        showmount -e 127.0.0.1
      Export list for 127.0.0.1:
      /images/dev *
      /images     *
      
      1. You will notice that both /images and /images/dev are currently shared. Now we will add our /images2 directories to the share list by editing the exports file (similar to the fstab file, but for shared directories)
        vi /etc/exports
      2. Append the following lines to the end of the /etc/exports file:
        Note: Ensure you update the fsid values if you copy and paste existing lines.
      /images2 *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=3)
      /images2/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=4)
      
      1. Save and exit the editor
      2. Now tell the OS to reload the exports file.
        exportfs -ra
      3. Rerun the showmount command to confirm we have our new drive shared
        showmount -e 127.0.0.1
      Export list for 127.0.0.1:
      /images2/dev *
      /images2     *
      /images/dev  *
      /images      *
      
      1. That concludes the operating system setup steps. Refer to Part 2 For the FOG server configuration to finish the setup.
      posted in Tutorials
      george1421G
      george1421
    • RE: One StorageNode for multiple Masters

      @stefan-hanke I still don’t understand your logic here mixed with how I know FOG works.

      Each FOG Master node will have its own database. It will not know about other master nodes or about images stored on the shared storage node. The FOG server (in normal mode) is a supervisory computer responsible for managing the deployment process. It creates the iPXE boot menu and sends the FOS system (FOG’s customized Linux OS that captures and deploys images) to the target computer. The FOS engine does all of the work of imaging; the FOG server only “watches” what happens. So the idea of a central storage node accessible to all areas of your network, without a single FOG Master node available, is a bit confusing.

      Now with that said, it is possible to do what you want. You just have to keep in mind that the target computer must be able to access at least one FOG Master node and the common storage node, or everything will fall down.

      How you would set this up is to configure one FOG Master node, then set up your central storage node. This storage node can be a FOG server in storage node mode, a standard Linux server, or a NAS like a Synology or QNAP. On your first FOG Master node you will have a storage group containing your FOG Master node and your storage node. Change the roles so that the storage node is the master node and the FOG server is a storage node, then set the max clients on the FOG server to zero. This tells FOS to capture and deploy using the storage node only. Confirm that this setup works as you need, then add your next FOG Master node using the same storage node as before. Again, demote the FOG Master node to a storage node and promote the storage node to the master role in the storage group on the second FOG server. Hopefully you can see that each new FOG server is added as a storage node while your central storage node remains the master. Since your central storage node is not a full FOG server, no replication can happen: FOG replication is a push, and there is no service on the storage node to push the images.

      Understand this is not a supported configuration for FOG, but it should work. I have not set up this type of environment, so I can only guess that it will work.

      posted in General Problems
      george1421G
      george1421
    • RE: Windows 10 Recovery Partition - Beginning of Drive?

      First of all, the beginning of the disk is not the best place to put the recovery partition. Microsoft recommends the EFI partition be on disk 1, partition 1, and it has been that way for years. I would also question the logic of needing a recovery partition at all. With FOG it’s much faster to rebuild the system than to try to recover it. I understand (as someone recently pointed out) that if the computer is at a location remote from the FOG server, then the only option is to try to recover it. On my campus I don’t use/create the recovery partition at all. I use MDT to build the golden image, and with MDT I can choose how to create the disk layout for both BIOS and UEFI systems. If I had a choice I would place the recovery partition before the C: drive partition; that would make the biggest resizable partition last on the disk, where it can be easily resized.

      posted in Windows Problems
      george1421G
      george1421
    • RE: Using FOG to PXE boot into your favorite installer images

      Centos 7

      1. First we’ll create the required directories:
      mkdir -p /images/os/centos/7
      mkdir -p /tftpboot/os/centos/7
      
      1. Now we’ll mount the Centos 7 installer ISO over the loop mount point, then copy the contents of the DVD to the directory we created above.
      mkdir -p /mnt/loop
      mount -o loop -t iso9660 /{full path where you have the iso stored}/CentOS7-x86_64.iso /mnt/loop
      
      cp -R /mnt/loop/* /images/os/centos/7
      umount /mnt/loop
      
      1. Finally we’ll copy the PXE boot kernel and initrd to the tftpboot directory.
      cp /images/os/centos/7/images/pxeboot/vmlinuz /tftpboot/os/centos/7
      cp /images/os/centos/7/images/pxeboot/initrd.img /tftpboot/os/centos/7
      
      1. The last bit of magic is to set up a new FOG iPXE boot menu entry for this OS.
      2. In the fog WebGUI go to FOG Configuration->iPXE New Menu Entry
        Set the following fields
        Menu Item: os.Centos7
        Description: Centos 7 v1607 {or what ever version you are building}
        Parameters:
        kernel tftp://${fog-ip}/os/centos/7/vmlinuz
        initrd tftp://${fog-ip}/os/centos/7/initrd.img
        imgargs vmlinuz initrd=initrd.img root=live:nfs://${fog-ip}:/images/os/centos/7/LiveOS/squashfs.img ip=dhcp inst.repo=nfs:${fog-ip}:/images/os/centos/7 splash quiet
        boot || goto MENU
        Menu Show with: All Hosts
      3. That’s it: just PXE boot your target system and pick Centos 7 from the FOG iPXE boot menu.

      References:
      https://forums.fogproject.org/topic/8488/how-to-pxe-boot-cent-os-7/63
      https://www.tecmint.com/install-pxe-network-boot-server-in-centos-7/

      posted in Tutorials
      george1421G
      george1421
    • RE: Sending client machine files using Snap-Ins

      @zacadams You can create a snapin pack (think zip file). That snapin pack will contain a batch file (to do the move locally on the system) and the target file you want to install.

      To do this, your snapin pack will contain the file you want to move and a batch file similar to the one below.

      Rem Copy the bundled file to its destination (%~dp0 expands to the
      Rem folder the batch file is running from, with a trailing backslash)
      copy "%~dp0File_to_move.txt" "c:\Windows"
      

      The thing you have to remember is that snapins run as the local SYSTEM account. This account has no domain rights. If you need to reach outside the target system for a file, you will need a drive-mapping command like the one below to connect as a user.

      net use t: \\server\filepath /user:domain\user password
      

      Understand there are risks to leaving a plain-text password in your batch file. Bundling the file to move is a much cleaner and more secure path to take, but either way works.

      posted in General Problems
      george1421G
      george1421
    • RE: Bitlocker issues

      @technicaltroll So what is partition #4? Is it what would be the C drive?

      If yes, Windows sometimes does something a bit problematic in that it will encrypt free/unused space on a drive where BitLocker is not enabled.

      From an elevated Windows cmd prompt, key in

      manage-bde -off C:
      
      manage-bde -status C:
      

      and confirm that BitLocker has been disabled.

      posted in Windows Problems
      george1421G
      george1421