    jburleson

    @jburleson


    Best posts made by jburleson

    • FOG in LXC Container - How to configure NFS Server

      I am currently working on converting my FOG server from OpenVZ to LXC. I am no expert, but here is what I did to get the NFS Server running inside the container.

      I run Proxmox 4.x but this should work for LXC in general. These instructions are from a post on the Proxmox Forum (https://forum.proxmox.com/threads/advice-for-file-sharing-between-containers.25704/#post-129006) and tweaked just a little.

      LXC OS: Ubuntu 16.04
      FOG Version: 1.3.0 (pulled from git)

      By default LXC has Apparmor enabled. There are two choices here: disable Apparmor or create a profile that allows NFS. I do not recommend disabling Apparmor, but it can be helpful for testing purposes.

      Option 1 - Disable Apparmor:

      • Edit the container configuration file and add the line lxc.aa_profile: unconfined.
        On Proxmox the configuration file is located at /etc/pve/lxc/CTID.conf, where CTID is the ID number of the container.
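
      For illustration, the relevant part of the container configuration then looks something like this (a minimal sketch; the first few lines are placeholders for whatever your container already defines, only the last line is the addition):

      # /etc/pve/lxc/CTID.conf
      arch: amd64
      hostname: fog
      ostype: ubuntu
      lxc.aa_profile: unconfined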

      Option 2 - Create an Apparmor profile that allows NFS:

      • Create the file /etc/apparmor.d/lxc/lxc-default-with-nfs with the following content.
      # Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
      # will source all profiles under /etc/apparmor.d/lxc
      
      profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
        #include <abstractions/lxc/container-base>

        # allow NFS (nfs/nfs4) mounts.
        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
      }
      
      • Reload Apparmor: apparmor_parser -r /etc/apparmor.d/lxc-containers
      • Edit the container configuration file and add the line lxc.aa_profile: lxc-container-default-with-nfs.
        On Proxmox the configuration file is located at /etc/pve/lxc/CTID.conf, where CTID is the ID number of the container.

      Make sure to restart your container after you make any changes to the configuration file.
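
      On Proxmox that can be done from the host, for example (CTID as above; the aa-status check is optional and assumes the Apparmor utilities are installed):

      pct stop CTID && pct start CTID

      # optional sanity check: the new profile should show up as loaded
      aa-status | grep lxc-container-default-with-nfs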

      Hope this helps!

      posted in Tutorials
    • RE: Dell 3040

      @JMacavali I can confirm that the Dell Optiplex 3040 works with the latest kernel 4.6.2. Here is my server info:

      OS: Ubuntu 16.04
      Fog:
      Running Version 8515
      SVN Revision: 5882

      I was able to inventory, register, and deploy an image using UEFI boot (Secure Boot disabled). However, after it finished deploying the image and rebooted, it got stuck in a boot loop. Switching EFI Exit types did not have an impact either. Changing back to legacy boot did fix the issue.

      I also tried updating the 3040 BIOS to the latest version available on the Dell site, but it did not resolve the UEFI booting issue.

      My recommendation is to disable Secure Boot and switch to legacy boot. If you do that, it should work fine. I have imaged 3 OP3040s with these settings.

      posted in Hardware Compatibility
    • RE: Migration from 0.32 to 1.3.0

      @Tom-Elliott I have to say, FOG has some of the speediest devs 🙂 Y’all do great work. The migration from 0.32 to 1.3.0 was pretty straightforward and worked quite well. To be honest, I had expected more issues since I made such a big jump in versions, but the ones I hit were minor and easy to fix. This is a testament to how dedicated the FOG team and community are to this project.

      For anyone still running 0.32 (if I was not the last holdout), the migration is straightforward. Follow the directions on the wiki and you will be up and running in no time.

      Thanks for the help everyone.

      posted in FOG Problems
    • RE: FOG in LXC Container - How to configure NFS Server

      Update
      For Option 2, you do not need to create a new profile. Instead, you can modify the file /etc/apparmor.d/lxc/lxc-default-cgns. Here is the content of the file after I added the NFS mount options.

      # Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
      # will source all profiles under /etc/apparmor.d/lxc
      
      profile lxc-container-default-cgns flags=(attach_disconnected,mediate_deleted) {
        #include <abstractions/lxc/container-base>
      
        # the container may never be allowed to mount devpts.  If it does, it
        # will remount the host's devpts.  We could allow it to do it with
        # the newinstance option (but, right now, we don't).
        deny mount fstype=devpts,
        mount fstype=cgroup -> /sys/fs/cgroup/**,
        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
      }
      

      You will not need to edit your container configuration file using this method.

      If you are not using Proxmox and your host kernel is NOT cgroup namespace aware, you will need to edit the file /etc/apparmor.d/lxc/lxc-default instead.
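
      As in the original post, reload Apparmor after editing the profile and then restart the container:

      apparmor_parser -r /etc/apparmor.d/lxc-containers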

      posted in Tutorials
    • RE: Dell 7040 NVMe SSD Boot Issue

      @chrisdecker

      I have successfully deployed an image to the Optiplex 7040 with the same SSD as yours using UEFI (Secure Boot Disabled).

      FOG Information:
      Running Version 1.3.1-RC-1
      SVN Revision: 6052
      Kernel Version: bzImage Version: 4.9.0

      Host EFI Exit Type: REFIND_EFI
      PXE File: ipxe7156.efi

      Image: Windows 10

      posted in Hardware Compatibility
    • RE: Can php-fpm make fog web-gui fast

      @george1421

      I think I found the issue here. The php session save path on Ubuntu should be /var/lib/php/sessions. Update the php_value[session.save_path] in /etc/php/7.1/fpm/pool.d/fog.conf and you should be good to go.
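
      In other words, the pool file should end up with a line like this (file path as above; a sketch, not the full pool definition):

      ; /etc/php/7.1/fpm/pool.d/fog.conf
      php_value[session.save_path] = /var/lib/php/sessions

      Then restart php-fpm (on my setup that is systemctl restart php7.1-fpm; adjust for your PHP version).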

      posted in General Problems
    • RE: FOG in LXC Container - How to configure NFS Server

      @Sebastian-Roth Thanks for seeing this. I have been meaning to update this post. I noticed the change when I upgraded from Proxmox 5.1 to 5.2. I have added a new post that hopefully will help.

      posted in Tutorials
    • Creating Group from Host Management Page

      FOG:
      Running Version 8355
      SVN Revision: 5805

      Creating groups from the Host Management page adds a phantom Group Member.

      Steps to reproduce:

      1. Go to the Hosts Management Page
      2. Search for a host or hosts
      3. Select the host
      4. Scroll to the bottom and Create a new group
      5. Switch to the Group Management Page and take a look at the group you just created. The number of members listed in the bottom left will be one more than the actual number of members listed on the Membership page.

      This only occurs if you create the group from the Host Management page. If you create the group from the Group Management page and then add hosts from either the Group page or the Host Management page, the member number is correct.

      posted in Bug Reports
    • RE: Dell 7040 NVMe SSD Boot Issue

      @george1421

      I modify the BIOS when the computers come in. One of the settings I change is to switch the SATA operation to AHCI.

      I switched from ipxe.efi since the Surface Pro 4 would not boot from it.

      ipxe7156.efi does not work for RAID mode either (just tested it).

      After my next appointment I will run debug and see if I can get you some additional information on it.

      posted in Hardware Compatibility
    • RE: FOG 1.5.0 RC 12 - Update to Client v0.11.13 not working

      @joe-schmitt Thanks. I downloaded both of them and the clients are now updating.

      posted in Bug Reports

    Latest posts made by jburleson

    • RE: FOG 1.5.6 Officially Released

      @Tom-Elliott For Windows 10, I think this is only true if you use Microsoft Edge. If you use a different browser (Firefox, Chrome, etc.), then you will get the browser warning until you add an exception for the cert in the browser. At least that has been my experience so far on all the Windows 10 machines that I have deployed.

      posted in Announcements
    • RE: FOG 1.5.6 Officially Released

      @Wayne-Workman

      If the FOG client uses HTTP to communicate, is there any reason that we have to use the generated self-signed certificate for HTTPS?

      Why not run both, but allow the admin to change the cert for just HTTPS? There is no need to change the way the client works, and it allows the admin to use a signed certificate if they want to avoid the browser warnings.

      I kind of do this now except I use the FOG generated certificate. I do not really mind the browser warning (as long as they do not start outright blocking it).

      I have not tested this using my signed certificate but I can test it next Monday if there is interest.
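
      To make the idea concrete, here is a minimal sketch assuming a stock Apache HTTPS vhost; the server name and certificate paths are placeholders, not FOG’s actual configuration:

      <VirtualHost *:443>
          ServerName fog.example.com
          DocumentRoot /var/www/fog

          SSLEngine on
          # point the HTTPS side at the signed certificate; the client keeps talking plain HTTP
          SSLCertificateFile    /etc/ssl/certs/fog.example.com.crt
          SSLCertificateKeyFile /etc/ssl/private/fog.example.com.key
      </VirtualHost>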

      posted in Announcements
    • RE: FOG in LXC Container - How to configure NFS Server

      Updated for Proxmox 5.x and LXC 3.x

      LXC OS: Ubuntu 18.04 (should be applicable to others as well)
      FOG Version: 1.5.4 (pulled from git)

      By default LXC has Apparmor enabled. There are three choices here: disable Apparmor, create a profile that allows NFS, or modify the default profile used for all containers. I do not recommend disabling Apparmor, but it can be helpful for testing purposes.

      Starting with LXC 2.1, configuration keys have changed: lxc.apparmor.profile should be used instead of lxc.aa_profile.

      Option 1 - Disable Apparmor:

      • Edit the container configuration file and add the line lxc.apparmor.profile: unconfined.
        On Proxmox the configuration file is located at /etc/pve/lxc/CTID.conf, where CTID is the ID number of the container.

      Option 2 - Create an Apparmor profile that allows NFS:

      • Create the file /etc/apparmor.d/lxc/lxc-default-with-nfs with the following content.
      # Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
      # will source all profiles under /etc/apparmor.d/lxc
      
      profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
        #include <abstractions/lxc/container-base>

        deny mount fstype=devpts,
        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
        mount fstype=cgroup -> /sys/fs/cgroup/**,
      }
      
      • If your host kernel is not cgroup namespace aware, remove the line mount fstype=cgroup -> /sys/fs/cgroup/**,.
      • Reload Apparmor: apparmor_parser -r /etc/apparmor.d/lxc-containers
      • Edit the container configuration file and add the line lxc.apparmor.profile: lxc-container-default-with-nfs.
        On Proxmox the configuration file is located at /etc/pve/lxc/CTID.conf, where CTID is the ID number of the container.

      Option 3 - Modify the default Apparmor profile to allow NFS:

      You should only choose this option if the majority of your containers need access to NFS. If you only have a couple of containers that need access to NFS, you should use Option 2. In addition, if you do decide to modify the default profile, you should create an additional profile without NFS mounts to assign to the containers that do not need access to NFS (see Option 2 for instructions on creating a new profile).

      The default profile for LXC is lxc-container-default-cgns if the host kernel is cgroup namespace aware, or lxc-container-default otherwise (from lxc.container.conf(5) man page).

      Starting with Proxmox 5.2, it seems that LXC is using lxc-container-default as the default Apparmor profile.
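
      A quick way to check which case applies on your host (my own heuristic, not from the LXC docs): if the kernel exposes a cgroup namespace for the current process, it is cgroup namespace aware.

      # prints a cgroup namespace entry on cgroup namespace aware kernels,
      # "No such file or directory" otherwise
      ls -l /proc/self/ns/cgroup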

      • Modify the appropriate default profile.
        lxc-container-default-cgns => /etc/apparmor.d/lxc-default-cgns
        lxc-container-default => /etc/apparmor.d/lxc-default
      • Add these two lines to the end of the file right before the closing brace:
        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
      • Reload Apparmor: apparmor_parser -r /etc/apparmor.d/lxc-containers
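
      For reference, after the change the profile should end up looking roughly like this (shown for lxc-container-default; your copy may contain additional rules, only the last two mount lines are the addition):

      profile lxc-container-default flags=(attach_disconnected,mediate_deleted) {
        #include <abstractions/lxc/container-base>

        deny mount fstype=devpts,

        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
      }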

      Make sure to restart your container after you make any changes to the configuration file or to Apparmor.

      posted in Tutorials
    • RE: NVMe Information for Inventory

      New script using smartctl. Grabs firmware now as well.

      #!/bin/bash
      
      # first disk device reported by lsblk (filtered by major number)
      hd=$(lsblk -dpno KNAME -I 3,8,9,179,202,253,259 | uniq | sort -V | head -1)
      
      #FOG expects the string in the following format
      ##Model=ST31500341AS, FwRev=CC1H, SerialNo=9VS34TD2
      
      disk_info=$(smartctl -i "$hd")
      model=$(echo "${disk_info}" | sed -n 's/.*\(Model Number\|Device Model\):[ \t]*\(.*\)/\2/p')
      sn=$(echo "${disk_info}" | sed -n 's/.*Serial Number:[ \t]*\(.*\)/\1/p')
      firmware=$(echo "${disk_info}" | sed -n 's/.*Firmware Version:[ \t]*\(.*\)/\1/p')
      hdinfo="Model=${model}, FwRev=${firmware}, SerialNo=${sn}"
      
      echo "$hdinfo"
      

      This works for NVMe and SATA. Note that SATA drives use ‘Device Model’ whereas NVMe drives use ‘Model Number’. This could lead to issues if other drives report it differently.

      posted in Feature Request
    • RE: NVMe Information for Inventory

      I missed that smartctl was in FOS. I will switch it over to smartctl then. That can actually be used for all the drive types. It will require more parsing, but it should not be too bad. jq is in FOS, at least it is in 1.5-RC14.

      posted in Feature Request
    • NVMe Information for Inventory

      I have found a method to get at least some of the information about NVMe drives. This will get the model and serial number. I was not able to get the firmware version without using either the nvme-cli or smartctl tools.

      Here is a bash script I used for testing.

      #!/bin/bash
      
      # first disk device reported by lsblk (filtered by major number)
      hd=$(lsblk -dpno KNAME -I 3,8,9,179,202,253,259 | uniq | sort -V | head -1)
      hdinfo=$(hdparm -i "$hd" 2>/dev/null)
      
      #FOG expects the string in the following format
      ##Model=ST31500341AS, FwRev=CC1H, SerialNo=9VS34TD2
      
      if [[ -z $hdinfo ]]; then
        # hdparm returned nothing (e.g. NVMe), fall back to lsblk/jq
        disk_info=$(lsblk -dpJo model,serial "${hd}")
        model=$(echo "${disk_info}" | jq --raw-output '.blockdevices[] | .model' | sed -r 's/^[ \t\n]*//;s/[ \t\n]*$//')
        sn=$(echo "${disk_info}" | jq --raw-output '.blockdevices[] | .serial' | sed -r 's/^[ \t\n]*//;s/[ \t\n]*$//')
        hdinfo="Model=${model},SerialNo=${sn}"
      else
        # hdparm worked (e.g. SATA), keep only the Model=... line
        hdinfo=$(echo "${hdinfo}" | grep Model=)
      fi
      
      echo "$hdinfo"
      
      posted in Feature Request
    • RE: Inventory - Case Asset, HDD Information not being populated

      @tom-elliott The case asset tag issue is fixed. It is displayed while the inventory task is running and it is updating the database now. Thanks Tom.

      Since there is not much we can do for the NVMe drive information, this is solved. I will continue to poke around with it and see if I can get any additional information. If I do, I will start a new topic.

      Thanks again.

      posted in Bug Reports
    • RE: Inventory - Case Asset, HDD Information not being populated

      @sebastian-roth They are the same.

      root@fog:/# sha256sum /var/www/fog/service/ipxe/init*
      7ca8048eadcaf3a408ed9d358b2636fc009dfce25b585fbf989609c87606719d  /var/www/fog/service/ipxe/init_32.xz
      58442c312bd6755bb815ff5c842656175628c99e077a69ad807a6f13a0e5bb1b  /var/www/fog/service/ipxe/init.xz
      
      posted in Bug Reports
    • RE: Inventory - Case Asset, HDD Information not being populated

      @tom-elliott I checked out the working branch and ran the installer but nothing changed. The case asset tag is still not being displayed or recorded in the database. I double-checked that I was on the working branch.

      posted in Bug Reports