
    Best posts made by jburleson

    • FOG in LXC Container - How to configure NFS Server

      I am currently working on converting my FOG server from OpenVZ to LXC. I am no expert, but here is what I did to get the NFS Server running inside the container.

      I run Proxmox 4.x, but this should work for LXC in general. These instructions come from a post on the Proxmox Forum (https://forum.proxmox.com/threads/advice-for-file-sharing-between-containers.25704/#post-129006), tweaked just a little.

      LXC OS: Ubuntu 16.04
      FOG Version: 1.3.0 (pulled from git)

      By default, LXC has AppArmor enabled. There are two choices here: disable AppArmor or create a profile that allows NFS. I do not recommend disabling AppArmor, but it can be helpful for testing purposes.

      Option 1 - Disable AppArmor:

      • Edit the container configuration file and add the line lxc.aa_profile: unconfined.
        On Proxmox the configuration file is located at /etc/pve/lxc/CTID.conf, where CTID is the ID number of the container.
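        As a quick sketch (101 below is just a placeholder CTID), the Proxmox container config would end up containing:

          # /etc/pve/lxc/101.conf -- 101 is an example CTID
          lxc.aa_profile: unconfined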

      Option 2 - Create an AppArmor profile that allows NFS:

      • Create the file /etc/apparmor.d/lxc/lxc-default-with-nfs with the following content.
      # Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
      # will source all profiles under /etc/apparmor.d/lxc
      
      profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
        #include <abstractions/lxc/container-base>
      
        # allow NFS (nfs/nfs4) mounts.
        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
      }
      
      • Reload AppArmor: apparmor_parser -r /etc/apparmor.d/lxc-containers
      • Edit the container configuration file and add the line lxc.aa_profile: lxc-container-default-with-nfs.
        On Proxmox the configuration file is located at /etc/pve/lxc/CTID.conf, where CTID is the ID number of the container.
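        Again as a sketch with a placeholder CTID of 101, the container config would then reference the new profile like so:

          # /etc/pve/lxc/101.conf -- 101 is an example CTID
          lxc.aa_profile: lxc-container-default-with-nfs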

      Make sure to restart your container after you make any changes to the configuration file.
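      On Proxmox, one way to do that from the host shell is with the pct tool (101 is again just an example CTID):

        pct stop 101
        pct start 101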

      Hope this helps!

      posted in Tutorials
    • RE: Dell 3040

      @JMacavali I can confirm that the Dell Optiplex 3040 works with the latest kernel 4.6.2. Here is my server info:

      OS: Ubuntu 16.04
      Fog:
      Running Version 8515
      SVN Revision: 5882

      I was able to inventory, register, and deploy an image using UEFI boot (Secure Boot disabled). However, after it finished deploying the image and rebooted, it got stuck in a boot loop. Switching EFI Exit types did not have an impact either. Changing back to legacy boot did fix the issue.

      I also tried updating the 3040 BIOS to the latest version available on the Dell site, but it did not change the UEFI booting issue.

      My recommendation is to disable Secure Boot and switch to legacy boot. If you do that, it should work fine. I have imaged 3 OptiPlex 3040s with these settings.

      posted in Hardware Compatibility
    • RE: Migration from 0.32 to 1.3.0

      @Tom-Elliott I have to say, FOG has some of the speediest devs 🙂 Y’all do great work. The migration from 0.32 to 1.3.0 was pretty straightforward and worked quite well. To be honest, I had expected more issues since I made such a big jump in versions, but they were minor and easy to fix. This is a testament to how dedicated the FOG team and community are to this project.

      For anyone still running 0.32 (if I was not the last holdout), the migration is straightforward. Follow the directions on the wiki and you will be up and running in no time.

      Thanks for the help everyone.

      posted in FOG Problems
    • RE: FOG in LXC Container - How to configure NFS Server

      Update
      For Option 2, you do not need to create a new profile. Instead, you can modify the file /etc/apparmor.d/lxc/lxc-default-cgns. Here is the content of the file after I added the NFS mount options.

      # Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
      # will source all profiles under /etc/apparmor.d/lxc
      
      profile lxc-container-default-cgns flags=(attach_disconnected,mediate_deleted) {
        #include <abstractions/lxc/container-base>
      
        # the container may never be allowed to mount devpts.  If it does, it
        # will remount the host's devpts.  We could allow it to do it with
        # the newinstance option (but, right now, we don't).
        deny mount fstype=devpts,
        mount fstype=cgroup -> /sys/fs/cgroup/**,
        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
      }
      

      You will not need to edit your container configuration file using this method.
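      One thing that does not change: AppArmor still needs to be reloaded after editing the profile, the same as in the original post:

        apparmor_parser -r /etc/apparmor.d/lxc-containers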

      If you are not using Proxmox and your host kernel is NOT cgroup namespace aware, you will need to edit the file /etc/apparmor.d/lxc/lxc-default instead.
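      As a rough sketch only (your distribution's lxc-default may contain additional rules, such as the devpts deny shown above; the point is just to add the same two mount lines inside the existing profile and then reload AppArmor):

        # /etc/apparmor.d/lxc/lxc-default -- keep whatever rules are already there
        profile lxc-container-default flags=(attach_disconnected,mediate_deleted) {
          #include <abstractions/lxc/container-base>

          # allow NFS (nfs/nfs4) mounts, as in the profiles above
          mount fstype=nfs*,
          mount fstype=rpc_pipefs,
        }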

      posted in Tutorials
    • RE: Dell 7040 NVMe SSD Boot Issue

      @chrisdecker

      I have successfully deployed an image to the Optiplex 7040 with the same SSD as yours using UEFI (Secure Boot Disabled).

      FOG Information:
      Running Version 1.3.1-RC-1
      SVN Revision: 6052
      Kernel Version: bzImage Version: 4.9.0

      Host EFI Exit Type: Refined_EFI
      PXE File: ipxe7156.efi

      Image: Windows 10

      posted in Hardware Compatibility
    • RE: Can php-fpm make fog web-gui fast

      @george1421

      I think I found the issue here. The PHP session save path on Ubuntu should be /var/lib/php/sessions. Update the php_value[session.save_path] setting in /etc/php/7.1/fpm/pool.d/fog.conf and you should be good to go.
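      For reference, a sketch of the relevant pool setting, using the paths from this thread:

        ; /etc/php/7.1/fpm/pool.d/fog.conf
        php_value[session.save_path] = /var/lib/php/sessions

      Then restart the FPM service so the pool change is picked up (the service name below assumes Ubuntu's PHP 7.1 packages):

        systemctl restart php7.1-fpm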

      posted in General Problems
    • RE: FOG in LXC Container - How to configure NFS Server

      @Sebastian-Roth Thanks for seeing this. I have been meaning to update this post. I noticed the change when I upgraded from Proxmox 5.1 to 5.2. I have added a new post that will hopefully help.

      posted in Tutorials
    • Creating Group from Host Management Page

      FOG:
      Running Version 8355
      SVN Revision: 5805

      Creating groups from the Host Management page adds a phantom Group Member.

      Steps to reproduce:

      1. Go to the Hosts Management Page
      2. Search for a host or hosts
      3. Select the host
      4. Scroll to the bottom and Create a new group
      5. Switch to the Group Management Page and take a look at the group you just created. The number of members listed in the bottom left will be one more than the actual number of members listed on the Membership page.

      This only occurs if you create the group from the Host Management page. If you create the group from the Group Management page and then add hosts from either the Group page or the Host Management page, the member number is correct.

      posted in Bug Reports
    • RE: Dell 7040 NVMe SSD Boot Issue

      @george1421

      I modify the BIOS when the computers come in. One of the settings I change is to switch the SATA operation to AHCI.

      I switched from ipxe.efi since the Surface Pro 4 would not boot from it.

      ipxe7156.efi does not work for RAID mode either (just tested it).

      After my next appointment, I will run a debug and see if I can get you some additional information on it.

      posted in Hardware Compatibility
    • RE: FOG 1.5.0 RC 12 - Update to Client v0.11.13 not working

      @joe-schmitt Thanks. I downloaded both of them and the clients are now updating.

      posted in Bug Reports
    • RE: Dell 7040 NVMe SSD Boot Issue

      @george1421 I updated the BIOS on mine to 1.5.7. Same results.

      posted in Hardware Compatibility