
    Posts made by jburleson

    • RE: FOG 1.5.6 Officially Released

      @Tom-Elliott For Windows 10, I think this is only true if you use Microsoft Edge. If you use a different browser (Firefox, Chrome, etc.), then you will get the browser warning until you add an exception for the cert in the browser. At least that has been my experience so far on all the Windows 10 machines that I have deployed.

      posted in Announcements
    • RE: FOG 1.5.6 Officially Released

      @Wayne-Workman

      If the FOG client uses HTTP to communicate, is there any reason that we have to use the generated self-signed certificate for HTTPS?

      Why not run both but allow the admin to change the cert for just HTTPS? No need to change the way the client works but allows the admin to use a signed certificate if they want to avoid the browser warnings.

      I kind of do this now except I use the FOG generated certificate. I do not really mind the browser warning (as long as they do not start outright blocking it).

      I have not tested this using my signed certificate but I can test it next Monday if there is interest.

      posted in Announcements
    • RE: FOG in LXC Container - How to configure NFS Server

      @Sebastian-Roth Thanks for seeing this. I have been meaning to update this post. I noticed the change when I upgraded from Proxmox 5.1 to 5.2. I have added a new post that hopefully will help.

      posted in Tutorials
    • RE: FOG in LXC Container - How to configure NFS Server

      Updated for Proxmox 5.x and LXC 3.x

      LXC OS: Ubuntu 18.04 (should be applicable to others as well)
      FOG Version: 1.5.4 (pulled from git)

      By default LXC has Apparmor enabled. There are three choices here: disable Apparmor, create a profile that allows NFS, or modify the default profile used for all containers. I do not recommend disabling Apparmor, but it can be helpful for testing purposes.

      Starting with LXC 2.1, configuration keys have changed: lxc.apparmor.profile should be used instead of lxc.aa_profile.

      Option 1 - Disable Apparmor:

      • Edit the container configuration file and add the line lxc.apparmor.profile: unconfined.
        On Proxmox the configuration file is located at /etc/pve/lxc/CTID.conf, where CTID is the ID number of the container.
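
      For example, on Proxmox that amounts to something like this (container ID 101 is just a placeholder; use your own CTID):

      # Append the unconfined profile key to the container config, then restart the
      # container so it takes effect (pct is the Proxmox container tool).
      echo 'lxc.apparmor.profile: unconfined' >> /etc/pve/lxc/101.conf
      pct stop 101 && pct start 101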

      Option 2 - Create an Apparmor profile that allows NFS:

      • Create the file /etc/apparmor.d/lxc/lxc-default-with-nfs with the following content.
      # Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
      # will source all profiles under /etc/apparmor.d/lxc
      
      profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
        #include <abstractions/lxc/container-base>
      
        deny mount fstype=devpts,
        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
        mount fstype=cgroup -> /sys/fs/cgroup/**,
      }
      
      • If your host kernel is not namespace aware, remove the line mount fstype=cgroup -> /sys/fs/cgroup/**,.
      • Reload Apparmor: apparmor_parser -r /etc/apparmor.d/lxc-containers
      • Edit the container configuration file and add the line lxc.apparmor.profile: lxc-container-default-with-nfs.
        On Proxmox the configuration file is located at /etc/pve/lxc/CTID.conf, where CTID is the ID number of the container.
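
      Before restarting the container, a quick way to confirm the new profile actually loaded on the host (aa-status comes from the apparmor-utils package):

      # List loaded Apparmor profiles and look for the one created above.
      aa-status | grep lxc-container-default-with-nfs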

      Option 3 - Modify the default Apparmor profile to allow NFS:

      You should only choose this option if the majority of your containers need access to NFS. If you only have a couple of containers that need access to NFS, you should use Option 2. In addition, if you do decide to modify the default profile, you should create an additional profile without NFS mounts to assign to the containers that do not need access to NFS (see Option 2 for instructions on creating a new profile).

      The default profile for LXC is lxc-container-default-cgns if the host kernel is cgroup namespace aware, or lxc-container-default otherwise (from lxc.container.conf(5) man page).

      Starting with Proxmox 5.2, it seems that LXC is using lxc-container-default as the default Apparmor profile.

      • Modify the appropriate default profile.
        lxc-container-default-cgns => /etc/apparmor.d/lxc-default-cgns
        lxc-container-default => /etc/apparmor.d/lxc-default
      • Add these two lines to the end of the file right before the closing brace:
        mount fstype=nfs*,
        mount fstype=rpc_pipefs,
      • Reload Apparmor: apparmor_parser -r /etc/apparmor.d/lxc-containers

      Make sure to restart your container after you make any changes to the configuration file or to Apparmor.
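
      As a final check, here is a rough sketch of restarting a container and testing an NFS mount from inside it (the container ID, server address, and export path are only examples; the container needs nfs-common installed):

      # Restart the container so the new Apparmor profile is applied...
      pct stop 101 && pct start 101
      # ...then try a manual NFS mount from inside it.
      pct exec 101 -- mount -t nfs 192.168.1.10:/export/images /mnt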

      posted in Tutorials
    • RE: NVMe Information for Inventory

      New script using smartctl. Grabs firmware now as well.

      #!/bin/bash
      
      hd=$(lsblk -dpno KNAME -I 3,8,9,179,202,253,259 | uniq | sort -V | head -1)
      
      #FOG expects the string in the following format
      ##Model=ST31500341AS, FwRev=CC1H, SerialNo=9VS34TD2
      
      disk_info=$(smartctl -i $hd)
      model=$(echo "${disk_info}" | sed -n 's/.*\(Model Number\|Device Model\):[ \t]*\(.*\)/\2/p')
      sn=$(echo "${disk_info}" | sed -n 's/.*Serial Number:[ \t]*\(.*\)/\1/p')
      firmware=$(echo "${disk_info}" | sed -n 's/.*Firmware Version:[ \t]*\(.*\)/\1/p')
      hdinfo="Model=${model}, iFwRev=${firmware}, SerialNo=${sn}"
      
      echo $hdinfo
      

      This works for NVMe and SATA. Note that SATA drives use ‘Device Model’ whereas NVMe drives use ‘Model Number’. This could lead to issues if other drives report it differently.
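
      If you want to see which wording a particular drive reports, something like this should show it (the device path is just an example):

      # Print only the model-identifier line from the SMART info.
      smartctl -i /dev/nvme0n1 | grep -E 'Device Model|Model Number'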

      posted in Feature Request
    • RE: NVMe Information for Inventory

      I missed that smartctl was in FOS. I will switch it over to smartctl then. That can actually be used for all the drive types. It will require more parsing but it should not be too bad. jq is in FOS, at least it is in 1.5-RC14.

      posted in Feature Request
    • NVMe Information for Inventory

      I have found a method to get at least some of the information about NVMe drives. This will get the model and serial number. I was not able to get the firmware version without using either the nvme-cli or smartctl tools.

      Here is a bash script I used for testing.

      #!/bin/bash
      
      hd=`lsblk -dpno KNAME -I 3,8,9,179,202,253,259 | uniq | sort -V | head -1`
      hdinfo=$(hdparm -i $hd 2>/dev/null)
      
      #FOG expects the string in the following format
      ##Model=ST31500341AS, FwRev=CC1H, SerialNo=9VS34TD2
      
      if [[ ! -z $hdinfo ]]; then
        disk_info=`lsblk -dpJo model,serial ${hd}`
        model=`echo ${disk_info} | jq --raw-output '.blockdevices[] | .model' | sed -r 's/^[ \t\n]*//;s/[ \t\n]*$//'`
        sn=`echo ${disk_info} | jq --raw-output '.blockdevices[] | .serial' | sed -r 's/^[ \t\n]*//;s/[ \t\n]*$//'`
        hdinfo="Model=${model},SerialNo=${sn}"
      else
        hdinfo=`echo ${hdinfo} | grep Model=`
      fi
      
      echo $hdinfo
      
      posted in Feature Request
    • RE: Inventory - Case Asset, HDD Information not being populated

      @tom-elliott The case asset tag issue is fixed. It is displayed while the inventory task is running and it is updating the database now. Thanks Tom.

      Since there is not much we can do for the NVMe drive information, this is solved. I will continue to poke around with it and see if I can get any additional information. If I do, I will start a new topic.

      Thanks again.

      posted in Bug Reports
    • RE: Inventory - Case Asset, HDD Information not being populated

      @sebastian-roth They are the same.

      root@fog:/# sha256sum /var/www/fog/service/ipxe/init*
      7ca8048eadcaf3a408ed9d358b2636fc009dfce25b585fbf989609c87606719d  /var/www/fog/service/ipxe/init_32.xz
      58442c312bd6755bb815ff5c842656175628c99e077a69ad807a6f13a0e5bb1b  /var/www/fog/service/ipxe/init.xz
      
      posted in Bug Reports
    • RE: Inventory - Case Asset, HDD Information not being populated

      @tom-elliott I checked out the working branch and ran the installer but nothing changed. The case asset tag is still not being displayed or recorded in the database. I double checked that I was on the working branch.

      posted in Bug Reports
    • Inventory - Case Asset, HDD Information not being populated

      Fog Version: 1.5.0 RC 12

      Case Asset Tag:
      I had the client machine re-run the inventory task and as I was watching, the field for Case Asset Number was blank.

      Reviewing the FOS funcs.sh script I saw that the doInventory function assigns the chassis-asset-tag using this:

      casesasset=$(dmidecode -s chassis-asset-tag)
      

      However, in fog.inventory the following is used to display the case asset tag:

      dots "Case Asset Number:"
      echo "$caseasset"
      

      Changing the echo statement to use $casesasset while in FOS Debug mode fixes the echo that occurs during the inventory task.

      The case asset tag, however, is still not being recorded in the database. I think there is just a variable mismatch where some code is using ‘caseasset’ and other code is using ‘casesasset’. I have not been able to confirm this yet.
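
      If it is just a naming mismatch, the fix should simply be to settle on one spelling in both places. A rough sketch (not the actual patch):

      # funcs.sh (doInventory): use one variable name consistently
      caseasset=$(dmidecode -s chassis-asset-tag)
      
      # fog.inventory: echo the same variable
      dots "Case Asset Number:"
      echo "$caseasset"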

      Hard Disk Model, Firmware and Serial Number:
      I think there is a compatibility issue with NVMe drives. I have looked at a couple of different clients and it seems that only the machines that have NVMe drives are impacted. Here is the output I gathered in FOS Debug on a Dell Optiplex 5050 with an M.2 256GB PCIe Class 40 SSD.

      # lsblk -dpno KNAME -I 3,8,9,179,202,253,259 | uniq | sort -V
      /dev/nvme0n1
      
      # hdparm -i /dev/nvme0n1 
      /dev/nvme0n1:
       HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
       HDIO_GET_IDENTITY failed: Inappropriate ioctl for device
      
      posted in Bug Reports
    • RE: FOG 1.5.0 RC 12 - Update to Client v0.11.13 not working

      @tom-elliott Thanks, Tom.

      I removed the binary package and re-ran the installer. Then I uninstalled 0.11.3 on one of my machines and installed 0.11.2. It updated from the server to version 0.11.3.

      posted in Bug Reports
    • RE: FOG 1.5.0 RC 12 - Update to Client v0.11.13 not working

      @joe-schmitt Thanks. I downloaded both of them and the clients are now updating.

      posted in Bug Reports
    • FOG 1.5.0 RC 12 - Update to Client v0.11.13 not working

      Server OS: Ubuntu 16.04
      Client OS: Windows 10 (1709)

      Upgraded to 1.5.0 RC 12 this morning to pull the new client down.

      It does not seem to be updating, however. The 0.11.2 client finds the update, seems to complete it, and then repeats the process.

      Here is a portion of the fog.log file.

      ------------------------------------------------------------------------------
      ---------------------------------ClientUpdater--------------------------------
      ------------------------------------------------------------------------------
       2/8/2018 7:45 AM Client-Info Client Version: 0.11.12
       2/8/2018 7:45 AM Client-Info Client OS:      Windows
       2/8/2018 7:45 AM Client-Info Server Version: 1.5.0-RC-12
       2/8/2018 7:45 AM Middleware::Response Success
       2/8/2018 7:45 AM Middleware::Communication Download: http://fog.cs.astate.edu/client/SmartInstaller.exe
       2/8/2018 7:45 AM Data::RSA FOG Project cert found
       2/8/2018 7:45 AM ClientUpdater Update file is authentic
      ------------------------------------------------------------------------------
      
       2/8/2018 7:45 AM Bus Emmiting message on channel: Update
       2/8/2018 7:45 AM Service-Update Spawning update helper
       2/8/2018 7:45 AM UpdaterHelper Shutting down service...
       2/8/2018 7:45 AM UpdaterHelper Killing remaining processes...
       2/8/2018 7:45 AM UpdaterHelper Applying update...
       2/8/2018 7:45 AM UpdaterHelper Starting service...
       2/8/2018 7:45 AM Main Overriding exception handling
       2/8/2018 7:45 AM Main Bootstrapping Zazzles
       2/8/2018 7:45 AM Controller Initialize
       2/8/2018 7:45 AM Controller Start
      
       2/8/2018 7:45 AM Service Starting service
       2/8/2018 7:45 AM Bus Became bus server
       2/8/2018 7:45 AM Bus Emmiting message on channel: Status
       2/8/2018 7:45 AM Service Invoking early JIT compilation on needed binaries
      
      ------------------------------------------------------------------------------
      --------------------------------Authentication--------------------------------
      ------------------------------------------------------------------------------
       2/8/2018 7:45 AM Client-Info Version: 0.11.12
       2/8/2018 7:45 AM Client-Info OS:      Windows
       2/8/2018 7:45 AM Middleware::Authentication Waiting for authentication timeout to pass
       2/8/2018 7:45 AM Middleware::Communication Download: http://fog.cs.astate.edu/management/other/ssl/srvpublic.crt
       2/8/2018 7:45 AM Data::RSA FOG Server CA cert found
       2/8/2018 7:45 AM Middleware::Authentication Cert OK
       2/8/2018 7:45 AM Middleware::Communication POST URL: http://fog.cs.astate.edu/management/index.php?sub=requestClientInfo&authorize&newService
       2/8/2018 7:45 AM Middleware::Response Success
       2/8/2018 7:45 AM Middleware::Authentication Authenticated
      
      
       2/8/2018 7:45 AM Middleware::Communication URL: http://fog.cs.astate.edu/management/index.php?sub=requestClientInfo&configure&newService&json
       2/8/2018 7:45 AM Middleware::Response Success
       2/8/2018 7:45 AM Middleware::Communication URL: http://fog.cs.astate.edu/management/index.php?sub=requestClientInfo&mac=00:E0:4C:02:49:42|98:5F:D3:31:80:AD|9A:5F:D3:31:84:AC|9A:5F:D3:31:81:AC|98:5F:D3:31:80:AE||02:15:BE:92:C7:F8&newService&json
       2/8/2018 7:45 AM Middleware::Response Success
       2/8/2018 7:45 AM Middleware::Communication URL: http://fog.cs.astate.edu/service/getversion.php?clientver&newService&json
       2/8/2018 7:45 AM Middleware::Communication URL: http://fog.cs.astate.edu/service/getversion.php?newService&json
      
       2/8/2018 7:45 AM Service Creating user agent cache
       2/8/2018 7:45 AM Middleware::Response Module is disabled globally on the FOG server
       2/8/2018 7:45 AM Middleware::Response Success
       2/8/2018 7:45 AM Middleware::Response Module is disabled on the host
       2/8/2018 7:45 AM Service Initializing modules
      
      ------------------------------------------------------------------------------
      ---------------------------------ClientUpdater--------------------------------
      ------------------------------------------------------------------------------
       2/8/2018 7:45 AM Client-Info Client Version: 0.11.12
       2/8/2018 7:45 AM Client-Info Client OS:      Windows
       2/8/2018 7:45 AM Client-Info Server Version: 1.5.0-RC-12
       2/8/2018 7:45 AM Middleware::Response Success
       2/8/2018 7:45 AM Middleware::Communication Download: http://fog.cs.astate.edu/client/SmartInstaller.exe
       2/8/2018 7:45 AM Data::RSA FOG Project cert found
       2/8/2018 7:45 AM ClientUpdater Update file is authentic
      ------------------------------------------------------------------------------
      
       2/8/2018 7:45 AM Bus Emmiting message on channel: Update
       2/8/2018 7:46 AM Service-Update Spawning update helper
       2/8/2018 7:46 AM UpdaterHelper Shutting down service...
       2/8/2018 7:46 AM UpdaterHelper Killing remaining processes...
       2/8/2018 7:46 AM UpdaterHelper Applying update...
       2/8/2018 7:46 AM UpdaterHelper Starting service...
       2/8/2018 7:46 AM Main Overriding exception handling
       2/8/2018 7:46 AM Main Bootstrapping Zazzles
       2/8/2018 7:46 AM Controller Initialize
       2/8/2018 7:46 AM Controller Start
      
       2/8/2018 7:46 AM Service Starting service
       2/8/2018 7:46 AM Bus Became bus server
       2/8/2018 7:46 AM Bus Emmiting message on channel: Status
       2/8/2018 7:46 AM Service Invoking early JIT compilation on needed binaries
      

      I then went to my fog site and downloaded the Smart Installer from the client page, uninstalled the existing client, and installed using the Smart Installer. When I check the Windows Apps Settings page after installing, it lists the FOG version as 0.11.2. After a reboot, the FogService sees a client update on the server and repeats the process that you see in the log file above.

      I also tried installing using the MSI from the client page with the same results.

      Do we need to download the 0.11.3 client separately from the git pull?

      Thanks

      posted in Bug Reports
    • RE: Can php-fpm make fog web-gui fast

      @george1421

      I think I found the issue here. The php session save path on Ubuntu should be /var/lib/php/sessions. Update the php_value[session.save_path] in /etc/php/7.1/fpm/pool.d/fog.conf and you should be good to go.
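
      For reference, a quick way to check and apply this (the PHP version in the paths is just whatever your install uses; mine was 7.1):

      # Check what the fog pool is currently using.
      grep session.save_path /etc/php/7.1/fpm/pool.d/fog.conf
      # After the edit, the line should read:
      #   php_value[session.save_path] = /var/lib/php/sessions
      # Restart FPM so the pool picks up the change.
      systemctl restart php7.1-fpm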

      posted in General Problems
    • RE: Printer Setup Using FOG

      @Raj-G I use FOG to manage our printers. I have the print drivers stored on a separate nfs server but you could use the image directory on the FOG server as well.

      FOG does a good job managing the printers. I have run into some issues where the printer does not show up in the device list, but a reboot of the computer fixes that.

      posted in Tutorials
    • RE: Dell 7040 NVMe SSD Boot Issue

      @george1421 I updated the BIOS on mine to 1.5.7. Same results.

      posted in Hardware Compatibility
    • RE: Dell 7040 NVMe SSD Boot Issue

      @jburleson This seems to be a known issue, or at least it has been reported elsewhere.

      Here is an ArchLinux post from a year ago about the same issue.
      https://bbs.archlinux.org/viewtopic.php?id=204629

      I also found posts on Super User about Linux not finding the NVMe drive under UEFI with RAID on.

      Ultimately the solution was to switch to AHCI.

      posted in Hardware Compatibility
    • RE: Dell 7040 NVMe SSD Boot Issue

      @jburleson Second test: switched from UEFI to Legacy but left SATA Operation in RAID mode.

      You are going to like this.

      lsblk

      ubuntu@ubuntu:~$ lsblk
      NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
      sda           8:0    1  14.9G  0 disk /cdrom
      ├─sda1        8:1    1   1.4G  0 part 
      └─sda2        8:2    1   2.3M  0 part 
      loop0         7:0    0   1.4G  1 loop /rofs
      nvme0n1     259:0    0 238.5G  0 disk 
      ├─nvme0n1p1 259:1    0   450M  0 part 
      ├─nvme0n1p2 259:2    0   100M  0 part 
      ├─nvme0n1p3 259:3    0    16M  0 part 
      └─nvme0n1p4 259:4    0 237.9G  0 part 
      ubuntu@ubuntu:~$ 
      

      Onboard Hardware:

      ubuntu@ubuntu:~$ lspci -nn
      00:00.0 Host bridge [0600]: Intel Corporation Sky Lake Host Bridge/DRAM Registers [8086:191f] (rev 07)
      00:01.0 PCI bridge [0604]: Intel Corporation Sky Lake PCIe Controller (x16) [8086:1901] (rev 07)
      00:02.0 VGA compatible controller [0300]: Intel Corporation Sky Lake Integrated Graphics [8086:1912] (rev 06)
      00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f] (rev 31)
      00:14.2 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Thermal subsystem [8086:a131] (rev 31)
      00:16.0 Communication controller [0780]: Intel Corporation Sunrise Point-H CSME HECI #1 [8086:a13a] (rev 31)
      00:17.0 RAID bus controller [0104]: Intel Corporation SATA Controller [RAID mode] [8086:2822] (rev 31)
      00:1b.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Root Port #17 [8086:a167] (rev f1)
      00:1f.0 ISA bridge [0601]: Intel Corporation Sunrise Point-H LPC Controller [8086:a146] (rev 31)
      00:1f.2 Memory controller [0580]: Intel Corporation Sunrise Point-H PMC [8086:a121] (rev 31)
      00:1f.3 Audio device [0403]: Intel Corporation Sunrise Point-H HD Audio [8086:a170] (rev 31)
      00:1f.4 SMBus [0c05]: Intel Corporation Sunrise Point-H SMBus [8086:a123] (rev 31)
      00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)
      02:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller [144d:a802] (rev 01)
      ubuntu@ubuntu:~$ 
      

      Notice the new addition

      02:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller [144d:a802] (rev 01)
      

      Kernel Drivers

      ubuntu@ubuntu:~$ lspci -k
      00:00.0 Host bridge: Intel Corporation Sky Lake Host Bridge/DRAM Registers (rev 07)
      	Subsystem: Dell Skylake Host Bridge/DRAM Registers
      00:01.0 PCI bridge: Intel Corporation Sky Lake PCIe Controller (x16) (rev 07)
      	Kernel driver in use: pcieport
      	Kernel modules: shpchp
      00:02.0 VGA compatible controller: Intel Corporation Sky Lake Integrated Graphics (rev 06)
      	Subsystem: Dell Skylake Integrated Graphics
      	Kernel driver in use: i915_bpo
      	Kernel modules: i915_bpo
      00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)
      	Subsystem: Dell Sunrise Point-H USB 3.0 xHCI Controller
      	Kernel driver in use: xhci_hcd
      00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem (rev 31)
      	Subsystem: Dell Sunrise Point-H Thermal subsystem
      00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)
      	Subsystem: Dell Sunrise Point-H CSME HECI
      	Kernel driver in use: mei_me
      	Kernel modules: mei_me
      00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode] (rev 31)
      	Subsystem: Dell SATA Controller [RAID mode]
      	Kernel driver in use: ahci
      	Kernel modules: ahci
      00:1b.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Root Port #17 (rev f1)
      	Kernel driver in use: pcieport
      	Kernel modules: shpchp
      00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)
      	Subsystem: Dell Sunrise Point-H LPC Controller
      00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)
      	Subsystem: Dell Sunrise Point-H PMC
      00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)
      	Subsystem: Dell Sunrise Point-H HD Audio
      	Kernel driver in use: snd_hda_intel
      	Kernel modules: snd_hda_intel
      00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)
      	Subsystem: Dell Sunrise Point-H SMBus
      	Kernel modules: i2c_i801
      00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
      	Subsystem: Dell Ethernet Connection (2) I219-LM
      	Kernel driver in use: e1000e
      	Kernel modules: e1000e
      02:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller (rev 01)
      	Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller
      	Kernel driver in use: nvme
      	Kernel modules: nvme
      ubuntu@ubuntu:~$ 
      

      Picked up the Samsung controller here as well.

      posted in Hardware Compatibility
    • RE: Dell 7040 NVMe SSD Boot Issue

      @george1421

      Here is what I got when I booted Ubuntu 16.04 from USB. I ran through the same commands you ran previously in the thread.

      BIOS: UEFI
      SATA Operation: RAID

      lsblk still does not show the hard drive.

      ubuntu@ubuntu:~$ lsblk
      NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sda      8:0    1 14.9G  0 disk /cdrom
      ├─sda1   8:1    1  1.4G  0 part 
      └─sda2   8:2    1  2.3M  0 part 
      loop0    7:0    0  1.4G  1 loop /rofs
      ubuntu@ubuntu:~$ 
      

      Onboard Hardware:

      ubuntu@ubuntu:~$ lspci -nn
      00:00.0 Host bridge [0600]: Intel Corporation Sky Lake Host Bridge/DRAM Registers [8086:191f] (rev 07)
      00:01.0 PCI bridge [0604]: Intel Corporation Sky Lake PCIe Controller (x16) [8086:1901] (rev 07)
      00:02.0 VGA compatible controller [0300]: Intel Corporation Sky Lake Integrated Graphics [8086:1912] (rev 06)
      00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f] (rev 31)
      00:14.2 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Thermal subsystem [8086:a131] (rev 31)
      00:16.0 Communication controller [0780]: Intel Corporation Sunrise Point-H CSME HECI #1 [8086:a13a] (rev 31)
      00:17.0 RAID bus controller [0104]: Intel Corporation SATA Controller [RAID mode] [8086:2822] (rev 31)
      00:1f.0 ISA bridge [0601]: Intel Corporation Sunrise Point-H LPC Controller [8086:a146] (rev 31)
      00:1f.2 Memory controller [0580]: Intel Corporation Sunrise Point-H PMC [8086:a121] (rev 31)
      00:1f.3 Audio device [0403]: Intel Corporation Sunrise Point-H HD Audio [8086:a170] (rev 31)
      00:1f.4 SMBus [0c05]: Intel Corporation Sunrise Point-H SMBus [8086:a123] (rev 31)
      00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)
      ubuntu@ubuntu:~$ 
      

      Kernel Drivers

      ubuntu@ubuntu:~$ lspci -k
      00:00.0 Host bridge: Intel Corporation Sky Lake Host Bridge/DRAM Registers (rev 07)
      	Subsystem: Dell Skylake Host Bridge/DRAM Registers
      00:01.0 PCI bridge: Intel Corporation Sky Lake PCIe Controller (x16) (rev 07)
      	Kernel driver in use: pcieport
      	Kernel modules: shpchp
      00:02.0 VGA compatible controller: Intel Corporation Sky Lake Integrated Graphics (rev 06)
      	Subsystem: Dell Skylake Integrated Graphics
      	Kernel driver in use: i915_bpo
      	Kernel modules: i915_bpo
      00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)
      	Subsystem: Dell Sunrise Point-H USB 3.0 xHCI Controller
      	Kernel driver in use: xhci_hcd
      00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem (rev 31)
      	Subsystem: Dell Sunrise Point-H Thermal subsystem
      00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)
      	Subsystem: Dell Sunrise Point-H CSME HECI
      	Kernel driver in use: mei_me
      	Kernel modules: mei_me
      00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode] (rev 31)
      	Subsystem: Dell SATA Controller [RAID mode]
      	Kernel driver in use: ahci
      	Kernel modules: ahci
      00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)
      	Subsystem: Dell Sunrise Point-H LPC Controller
      00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)
      	Subsystem: Dell Sunrise Point-H PMC
      00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)
      	Subsystem: Dell Sunrise Point-H HD Audio
      	Kernel driver in use: snd_hda_intel
      	Kernel modules: snd_hda_intel
      00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)
      	Subsystem: Dell Sunrise Point-H SMBus
      	Kernel modules: i2c_i801
      00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
      	Subsystem: Dell Ethernet Connection (2) I219-LM
      	Kernel driver in use: e1000e
      	Kernel modules: e1000e
      
      posted in Hardware Compatibility