
    Posts made by JJ Fullmer

    • RE: How to use fog with two different VLANs

      @professorb24
      I use FOG across multiple VLANs, but they are all routable to one another.
      If you have isolated VLANs, you'd at least need storage nodes on those VLANs. You could possibly make that work with multiple network adapters: replication happens over an adapter on the same VLAN as the master node, and imaging happens over a separate adapter on the isolated VLAN.

      The steps to configure routing depend on your network infrastructure and can involve switch config and firewall config.

      posted in FOG Problems
      JJ Fullmer
    • RE: Dell Latitude 3340 with USB-C Ethernet Adapter - bad mac address registered

      @pbriec The specs of my Lenovo adapter also state WOL is supported, but I can't get it to work from full-off or from sleep, whether or not FOG is the one sending the wake packet.
      I am, however, able to get WOL working on a standard desktop from FOG without issue now, so do another pull of working-1.6 to get FOG's WOL task updated.

      I'd suggest running it without -y to make sure you get prompted for any database schema updates via the web UI. It's not often that you have to push that button, but there are definitely some when upgrading from 1.5.x to 1.6.x. Unless @Tom-Elliott has added something to the installer to make -y automate the schema updates.

      # from your existing FOG source checkout
      cd /path/to/fog/installer
      # switch to the working-1.6 branch and pull the latest changes
      git checkout working-1.6
      git pull
      # re-run the installer (leave off -y so you get prompted for schema updates)
      cd bin
      sudo ./installfog.sh
      
      posted in Hardware Compatibility
      JJ Fullmer
    • RE: Dell Latitude 3340 with USB-C Ethernet Adapter - bad mac address registered

      @george1421 I don't often use s0 or s3 sleep on my laptop; I just go full on, full off. I don't have much need for WOL on laptops in our environment, so I'm not too worried about it, but I'll test waking from sleep mode.

      @pbriec The specs say it does WOL from s0/s3, which are sleep states, not full-off states.
      I was able to recreate the issue where the WOL task wouldn't delete, causing computers to boot to FOG and show that error screen, and @Tom-Elliott believes he has that fixed (I'm about to test it).

      I did confirm that adding WOL to a normal task (like an image deploy or inventory) works as expected in the newer FOG, so WOL functionality still works the same. I tested that on a standard desktop with built-in ethernet.

      posted in Hardware Compatibility
      JJ Fullmer
    • RE: Dell Latitude 3340 with USB-C Ethernet Adapter - bad mac address registered

      @pbriec I can confirm that these SQL commands fix the issue with the snapin table AJAX error.

      I am trying to recreate your wol issues too.

      I am unable to get WOL working, with or without FOG, on my Lenovo laptop with the Lenovo-branded USB-C ethernet adapter. Are you using a USB/USB-C adapter for ethernet, or a docking station?

      posted in Hardware Compatibility
      JJ Fullmer
    • RE: Cannot find disk on system (get harddisk) - Dell Latitude 3140

      @luilly23 said in Cannot find disk on system (get harddisk) - Dell Latitude 3140:

      @Kureebow Dell’s website says storage can be UFS, eMMC, or SSD.
      What’s on your laptop?
      https://www.dell.com/en-us/search/latitude 3140

      Perhaps there is still no fog or partclone support for UFS or eMMC storage.

      I’ve previously been able to image with partclone on eMMC. I’ve moved away from such devices as we found them painfully slow for our needs, but if you’re having issues with that I’d be happy to help.

      posted in FOG Problems
      JJ Fullmer
    • RE: Cannot find disk on system (get harddisk) - Dell Latitude 3140

      @Kureebow said in Cannot find disk on system (get harddisk) - Dell Latitude 3140:

      @luilly23 I cant seem to find anything in the fog wiki for UFS being unsupported.

      While not everything from the wiki has been migrated to it just yet, docs.fogproject.org is the new home of our docs. Also posts within this forum are another great place to look.

      posted in FOG Problems
      JJ Fullmer
    • RE: Cannot find disk on system (get harddisk) - Dell Latitude 3140

      @Kureebow UFS is supported in the latest dev branch; you could also just download the latest kernel and init, and that may do the trick.

      The Surface Go 4 has UFS storage and we had to update the kernel config to support UFS drives. See https://forums.fogproject.org/topic/17112/surface-go-4-incompatible/2?_=1716208953314
      and
      https://github.com/FOGProject/fos/pull/78
      and
      https://github.com/FOGProject/fos/commit/71b1a3a46c43b61b692e31de21754dfc55606b64 and https://github.com/FOGProject/fos/blob/dc9656b08f369f9746372020456158d95cd2e0fa/configs/kernelx64.config#L3093-L3100

      In that post you'll also see that UFS, at least on the Surface Go 4, only supports native 4k blocks. That means if you are making your image on a VM (VMware for sure on this), you're partitioning with 512e blocks instead of 4k blocks. 512e (e for emulated) is still the most common block size, as it allows better backwards compatibility while still using “better” 4k block storage on the disk in the background.
      This matters because you won't be able to correctly deploy a 512-block image to a 4k-block disk: the block numbers won't align properly, and it will either fail completely or just not resize the disk correctly at the end.
      I ended up maintaining a separate 4kn image and had to get approval to buy a separate Surface Go 4 to maintain the image on. On the plus side, that gave me the motivation to dial in my image creation process further with lots of automation.

      I imagine there is a VM hypervisor out there that allows setting the block size, but I know for sure that VMware doesn't. I found that bhyve within FreeBSD did have a method for this, but it required other workarounds to get past Windows 11 security requirements, and I didn't want to base my image on something with security workarounds.
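
      If you want to double-check what block size you're dealing with, from a FOS debug session (or any Linux shell) something like this works; the disk name and image path here are placeholders:

      # logical and physical sector size of the target disk (512 = 512e/512n, 4096 = 4kn)
      cat /sys/block/sda/queue/logical_block_size
      cat /sys/block/sda/queue/physical_block_size
      # for an already-captured FOG image, the capture-time sector size is recorded in d1.partitions
      grep sector-size /images/YourImageName/d1.partitions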

      posted in FOG Problems
      JJ Fullmer
    • RE: Surface Go 4 incompatible

      In the end I am maintaining a separate image; I was able to get management to let us buy a separate Surface Go 4 for maintaining the 4k disk image.
      I found that bhyve-based VMs can be set to 4k blocks, but it was cumbersome to get one to boot to FOG to capture the image at the end, and when that image was deployed, it did not expand on the Surface Go.
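
      If memory serves, bhyve exposes this as a block-device option on the emulated disk, roughly like the fragment below (slot number and disk path are made up; check bhyve(8) on your FreeBSD version for the exact syntax):

      # attach the VM disk with 4k logical/physical sectors (illustrative fragment, not a full command)
      -s 4,ahci-hd,/vm/win11/disk.img,sectorsize=4096/4096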

      posted in Hardware Compatibility
      JJ Fullmer
    • RE: Surface Go 4 incompatible

      Some info from the debug session

      cat /images/4KDisk-Base-Dev/d1.original.fstypes
      /dev/vda3 ntfs
      
      [Fri Jan 26 root@fogclient ~]# cat /images/4KDisk-Base-Dev/d1.partitions
      label: gpt
      label-id: 9865AAFC-B984-4860-ACF5-4D6F2513747D
      device: /dev/vda
      unit: sectors
      first-lba: 6
      last-lba: 16777210
      sector-size: 4096
      
      /dev/vda1 : start=         256, size=       76800, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=7C4743B5-7150-4672-B521-7B537528D7E7, name="EFI system partition", attrs="GUID:63"
      /dev/vda2 : start=       77056, size=        4096, type=E3C9E316-0B5C-4DB8-817D-F92DF00215AE, uuid=F6216A84-0172-4445-B616-E36DFA20C731, name="Microsoft reserved partition", attrs="GUID:63"
      /dev/vda3 : start=       81152, size=    16501760, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=B6DA06DC-A5D0-434C-A6FE-494A1EFB515E, name="Basic data partition"
      /dev/vda4 : start=    16582912, size=      193792, type=DE94BBA4-06D1-4D40-A16A-BFD50179D6AC, uuid=7BB90563-4BB7-4281-98EA-3FF4BCF1FCA5, attrs="RequiredPartition GUID:63"
      
      [Fri Jan 26 root@fogclient ~]# cat /images/4KDisk-Base-Dev/d1.minimum.partitions
      label: gpt
      label-id: 9865AAFC-B984-4860-ACF5-4D6F2513747D
      device: /dev/vda
      unit: sectors
      first-lba: 6
      last-lba: 16777210
      sector-size: 4096
      
      /dev/vda1 : start=         256, size=       76800, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=7C4743B5-7150-4672-B521-7B537528D7E7, name="EFI system partition", attrs="GUID:63"
      /dev/vda2 : start=       77056, size=        4096, type=E3C9E316-0B5C-4DB8-817D-F92DF00215AE, uuid=F6216A84-0172-4445-B616-E36DFA20C731, name="Microsoft reserved partition", attrs="GUID:63"
      /dev/vda3 : start=       81152, size=    16501760, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=B6DA06DC-A5D0-434C-A6FE-494A1EFB515E, name="Basic data partition"
      /dev/vda4 : start=    16582912, size=      193792, type=DE94BBA4-06D1-4D40-A16A-BFD50179D6AC, uuid=7BB90563-4BB7-4281-98EA-3FF4BCF1FCA5, attrs="RequiredPartition GUID:63"
      
      [Fri Jan 26 root@fogclient ~]# cat /images/4KDisk-Base-Dev/d1.fixed_size_partitions
      1:2:4
      
      [Fri Jan 26 root@fogclient ~]# cat /images/4KDisk-Base-Dev/d1.
      d1.fixed_size_partitions  d1.minimum.partitions     d1.original.swapuuids     d1.shrunken.partitions
      d1.mbr                    d1.original.fstypes       d1.partitions
      
      
      [Fri Jan 26 root@fogclient ~]# cat /images/4KDisk-Base-Dev/d1.shrunken.partitions
      label: gpt
      label-id: 9865AAFC-B984-4860-ACF5-4D6F2513747D
      device: /dev/vda
      unit: sectors
      first-lba: 6
      last-lba: 16777210
      sector-size: 4096
      
      /dev/vda1 : start=         256, size=       76800, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=7C4743B5-7150-4672-B521-7B537528D7E7, name="EFI system partition", attrs="GUID:63"
      /dev/vda2 : start=       77056, size=        4096, type=E3C9E316-0B5C-4DB8-817D-F92DF00215AE, uuid=F6216A84-0172-4445-B616-E36DFA20C731, name="Microsoft reserved partition", attrs="GUID:63"
      /dev/vda3 : start=       81152, size=    16501760, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=B6DA06DC-A5D0-434C-A6FE-494A1EFB515E, name="Basic data partition"
      /dev/vda4 : start=    16582912, size=      193792, type=DE94BBA4-06D1-4D40-A16A-BFD50179D6AC, uuid=7BB90563-4BB7-4281-98EA-3FF4BCF1FCA5, attrs="RequiredPartition GUID:63"
      
      gdisk -l /dev/sda
      GPT fdisk (gdisk) version 1.0.8
      
      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present
      
      Found valid GPT with protective MBR; using GPT.
      Disk /dev/sda: 31246336 sectors, 119.2 GiB
      Model: KLUDG4UHGC-B0E1
      Sector size (logical/physical): 4096/4096 bytes
      Disk identifier (GUID): 9865AAFC-B984-4860-ACF5-4D6F2513747D
      Partition table holds up to 128 entries
      Main partition table begins at sector 2 and ends at sector 5
      First usable sector is 6, last usable sector is 31246330
      Partitions will be aligned on 256-sector boundaries
      Total free space is 14469877 sectors (55.2 GiB)
      
      Number  Start (sector)    End (sector)  Size       Code  Name
         1             256           77055   300.0 MiB   EF00  EFI system partition
         2           77056           81151   16.0 MiB    0C01  Microsoft reserved ...
         3           81152        16582911   62.9 GiB    0700  Basic data partition
         4        16582912        16776703   757.0 MiB   2700
      

      It's a 128 GB drive and the image was from a 64 GB drive; I expected it to expand to 128 GB.

      ntfsresize info on parts 4 and 3

       ntfsresize --info /dev/sda4
      ntfsresize v2022.10.3 (libntfs-3g)
      Device name        : /dev/sda4
      NTFS volume version: 3.1
      Cluster size       : 4096 bytes
      Current volume size: 793772032 bytes (794 MB)
      Current device size: 793772032 bytes (794 MB)
      Checking filesystem consistency ...
      100.00 percent completed
      Accounting clusters ...
      Space in use       : 14 MB (1.7%)
      Collecting resizing constraints ...
      You might resize at 13193216 bytes or 14 MB (freeing 780 MB).
      Please make a test run using both the -n and -s options before real resizing!
      
      [Fri Jan 26 root@fogclient ~]# ntfsresize --info /dev/sda3
      ntfsresize v2022.10.3 (libntfs-3g)
      Device name        : /dev/sda3
      NTFS volume version: 3.1
      Cluster size       : 4096 bytes
      Current volume size: 67591208960 bytes (67592 MB)
      Current device size: 67591208960 bytes (67592 MB)
      Checking filesystem consistency ...
      100.00 percent completed
      Accounting clusters ...
      Space in use       : 24382 MB (36.1%)
      Collecting resizing constraints ...
      You might resize at 24381546496 bytes or 24382 MB (freeing 43210 MB).
      Please make a test run using both the -n and -s options before real resizing!
      
      posted in Hardware Compatibility
      JJ Fullmer
    • RE: Problem Capturing right Host Primary Disk with INTEL VROC RAID1

      @Ceregon I've never messed with cloning a RAID array. Anything can be done, but whether it's going to work with the built-in stuff is a different question.
      I imagine you already have VROC/VMD enabled in the BIOS on the machine you're deploying to. I've never gotten to play with VROC, but I'm familiar with it; I just wasn't able to convince management to buy me the hardware to try it a few years back.
      My first guess was that /dev/md124 doesn't exist because the RAID volume doesn't exist yet, but it sounds like you found that in a debug session on a host you're trying to deploy to, so that's probably out. I still wonder whether the VROC volume needs to be created beforehand to be deployed to, but I don't have a full understanding of when that volume gets made.
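
      In a debug session you could at least confirm whether the volume is assembled on the Linux side before imaging starts, assuming mdadm is available in the FOS image:

      # see whether any md/VROC volumes are assembled at all
      cat /proc/mdstat
      # details on the specific volume, if it exists
      mdadm --detail /dev/md124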

      My next guess would be that a RAID array is a multiple-disk system, so the image may need to be captured in multiple disk mode (screenshot of the image settings).
      Are the disk sizes different across these RAID volumes? Would capturing with multiple disk mode or dd be an option?
      In theory a RAID is a single volume, so you may be able to capture it as one, and it sounds like you've found others in the forum who have done that?

      Another possibility is the need for different VROC drivers in the bzImage kernel, but I feel like if that were the case, you wouldn't be able to see the disk at all when capturing.

      You could also capture in debug mode and mount the Windows drive before starting the capture to see if you can read anything.
      This is part of a postdownload script that will mount the Windows disk at the path /ntfs:

      . /usr/share/fog/lib/funcs.sh
      mkdir -p /ntfs
      getHardDisk
      getPartitions $hd
      for part in $parts; do
          umount /ntfs >/dev/null 2>&1
          fsTypeSetting "$part"
          case $fstype in
              ntfs)
                  dots "Testing partition $part"
                  ntfs-3g -o force,rw $part /ntfs
                  ntfsstatus="$?"
                  if [[ ! $ntfsstatus -eq 0 ]]; then
                      echo "Skipped"
                      continue
                  fi
                  if [[ ! -d /ntfs/windows && ! -d /ntfs/Windows && ! -d /ntfs/WINDOWS ]]; then
                      echo "Not found"
                      umount /ntfs >/dev/null 2>&1
                      continue
                  fi
                  echo "Success"
                  break
                  ;;
              *)
                  echo " * Partition $part not NTFS filesystem"
                  ;;
          esac
      done
      if [[ ! $ntfsstatus -eq 0 ]]; then
          echo "Failed"
          debugPause
          handleError "Failed to mount $part ($0)\n    Args: $*"
      fi
      echo "Done"
      

      Also, hot tip: once you're in debug mode, you can run passwd to set a root password for that debug session, then run ifconfig to get the IP. You can then SSH into your debug session with ssh root@ip and enter the password you set when prompted. From there it's a lot easier to copy and paste this stuff, grab the output, or take screenshots.
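
      Roughly, the steps look like this (the address is whatever ifconfig reports on the client):

      # on the client, at the FOS debug shell
      passwd       # set a temporary root password for this session
      ifconfig     # note the client's IP address

      # then from your workstation
      ssh root@<client-ip>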

      Another possibility could be using pre- and post-download scripts to fix up the RAID volume on the Linux side. I found this information https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/linux-intel-vroc-userguide-333915.pdf but I didn't dig into that too much.

      posted in General Problems
      JJ Fullmer
    • RE: Surface Go 4 incompatible

      Did a debug session: ntfsresize with -c (check) and with --info shows partition 3 as resizable, but it's not being resized after imaging.

      I'm running the image in a deploy now to see what it says after imaging and whether there are any errors.

      I fear this is going to be a 4kn drive alignment/resize issue.

      posted in Hardware Compatibility
      JJ Fullmer
    • RE: Surface Go 4 incompatible

      @JJ-Fullmer I just took another look, as I've just been maintaining a separate 4kn image, and realized that the disk isn't expanding after imaging.
      I’ll do a debug session tomorrow and report back.

      posted in Hardware Compatibility
      JJ Fullmer
    • RE: Windows Images - Too large.

      @sami-blackkite Looks like you're on the dev branch, which is good. Updating is always a good first step in troubleshooting things like this, in case it's already fixed.
      I'm not seeing anything in the commits since the version you're on (https://github.com/FOGProject/fogproject/commits/dev-branch) that actually looks related, but sometimes a refresh still helps.

      I took a look at the default settings of a new image and they're close to what I use; the only difference is that I set compression to 11.

      Are you familiar with a debug capture/deploy task? If not, simply check the 'debug task' checkbox when queuing up the capture task. It lets you step through the capture process so you can catch any messages that might tell us why it's behaving oddly. Just run the command fog once it's booted up and ready to start the image, and you'll hit enter for each step. You can also watch it and step through over SSH if you want: before running the fog command, get the IP address with ifconfig and set a password with passwd, then SSH into the debug session from your workstation with ssh root@ip.add.re.ss and the password you set. The password only exists for that session on that machine. SSH just makes screenshotting and copying any error messages to share here a bit easier.

      posted in Windows Problems
      JJ Fullmer
    • RE: Capture and deploy image hostname always same

      @Lukaz You could try a postdownload script that takes the hostname you set for that FOG host and puts it in the /etc/hostname file.
      I think there may be other locations that hostnamectl sets on newer Linux builds, but this could still work.

      I haven't done a Linux postdownload script myself, but you can see some other examples in the posts linked below. They're mostly Windows based, but the idea applies to Linux without too much trouble: mount the client disk in FOS (the FOG operating system you boot into over the network for the imaging process) after imaging is done and inject the FOG host's hostname variable (not sure what the variable name is off the top of my head) into /etc/hostname on the client; there's a rough sketch after the links. It's a bit of work initially, but once set up it should just work from then on.

      https://forums.fogproject.org/topic/8889/fog-post-install-script-for-win-driver-injection?_=1682187993801

      https://forums.fogproject.org/topic/7740/the-magical-mystical-fog-post-download-script
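
      A minimal sketch of the idea, assuming the host name is exposed in a variable (shown here as $hostname, which may not be the actual name) and that the client's Linux root filesystem is on /dev/sda2 (adjust for your partition layout):

      #!/bin/bash
      # hypothetical postdownload snippet - variable and partition names are assumptions
      . /usr/share/fog/lib/funcs.sh
      mkdir -p /linuxroot
      mount /dev/sda2 /linuxroot                    # the client's root partition (assumed)
      echo "$hostname" > /linuxroot/etc/hostname    # write the FOG host name into the image
      umount /linuxroot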

      posted in FOG Problems
      JJ Fullmer
    • RE: Windows Images - Too large.

      @sami-blackkite That’s the right option.
      What version of FOG are you on?

      posted in Windows Problems
      JJ Fullmer
    • RE: Can't "create new image"

      @TanguyPSV I would suggest giving the dev-branch a try. See the “choosing a fog version” section of this page:
      https://docs.fogproject.org/en/latest/install-fog-server

      Running the installer on the current server will upgrade your instance without losing the db. This will get you to the latest and greatest version.

      posted in FOG Problems
      JJ Fullmer
    • RE: Can't run chown -R fogproject:root /images/

      @GlaDio Check out https://docs.fogproject.org/en/latest/storage-node

      Your existing FOG server will manage the database. You can set up your NAS, or a separate server, as a storage node by running the FOG installer in storage node mode on that machine.

      Then you can set an image to be synced between the main server and the node, or have an image stored only on the node; clients will still boot to the FOG server, which will point them to the storage node to download the image.

      posted in FOG Problems
      JJ Fullmer
    • RE: Can't run chown -R fogproject:root /images/

      @GlaDio Now, I could be wrong about this, but I don't think this is how you want to go about it.
      Maybe if your NAS can do iSCSI you could use that, since it would appear more like a real disk. But because you'd be backing a file share with another file share, you'll probably run into problems: the client will try to mount the /images share from FOG, and even if that succeeds, the client won't in turn be able to chain to the next file share. Even if it did work, you'd be adding another link in the chain during imaging that could be unreliable.

      I believe there are some old guides around on setting up a NAS as a storage node. It's much better to have a full Linux server where you can install FOG as a storage node, but technically it can be made to work by simply having an NFS share to point to as a storage node. Here's one from the wiki that hasn't been converted to the new docs site just yet: https://wiki.fogproject.org/wiki/index.php?title=NAS_Storage_Node

      I would suggest going that route instead. I've tried making subdirectories of the /images folder that are share paths (like for drivers for driver injection) and found that clients couldn't get to them after mounting the /images share; I believe you'd run into the same issue.

      All that said, if you want to continue down this other path, the permissions probably need to be set on the NAS first and then on FOG. There are also probably some special NFS mount parameters needed to make it read/write and to allow permission changes to traverse.

      posted in FOG Problems
      JJ Fullmer
    • RE: Can't run chown -R fogproject:root /images/

      @GlaDio Can you give the full error?
      And can you also give us the output of

      ls -l /images
      

      Also, what exactly are you trying to configure? Are you wanting to set up a separate storage node on a NAS? Are you trying to mount an nfs share on the fog server that you’re hoping will then mount to clients via the fog server? Are you trying to move the /images directory from the server to a NAS?

      posted in FOG Problems
      JJ Fullmer
    • RE: Could not mount images folder (/bin/fog.download)

      @SOSF2 Oh good.
      I just re-read some of your post and have a few more questions.

      Why did you move /images to the home path? I suppose it should work, as long as the /etc/exports file on the server exports that path as an NFS share and the path has the proper permissions (which I see it does).

      What does /etc/exports say on your server?
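
      For reference, the exports the FOG installer normally writes look something like this (paths adjusted if you've moved /images):

      /images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
      /images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)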

      posted in FOG Problems
      JJ Fullmer