    Tom Elliott

    Posts

    • RE: Windows Recovery Screen after imaging with USB dongle

      @JYost I’m not aware of anything that would have changed regarding the type of network used, especially since it works fine for the on-board network but not when you go through the USB-C dongle adapter.

      The FOG Client can use dongles as well, but the fact that you’re seeing the recovery screen when coming off the USB adapter is strange, and more likely something manufacturer-specific (best I can guess).

      As for the FOG Client software operation: the USB dongle should work just fine if it’s associated to the device and has the expected snapins, etc… If it’s a generic adapter, you may want to enable “ignore for client” on the MAC address, just to ensure all machines using that dongle aren’t getting assigned the same hostname, snapins, printers, etc…

      Though, normally/ideally, when the FOG Client checks in it looks up all the MACs passed in and attempts to find the host using any one of the MAC addresses associated to that machine, even if it’s not the one directly plugged in. It’s possible the on-board MAC (when unplugged and the system initially loaded) isn’t even recognized or sent as part of this authentication process, but without the FOG Client logs I can only make WAGs at this point. We’d probably still be somewhat blind even with the logs, but at least we’d have data to work with that way.
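
      Roughly, the idea looks like this (a hypothetical sketch only; findHostForClient() is an invented name, not actual FOG Server code):

      <?php
      // Hypothetical sketch: the client reports every MAC it has, and the
      // server matches a host if ANY of those MACs is associated with it.
      function findHostForClient(array $reportedMacs, array $knownHosts): ?string
      {
          foreach ($knownHosts as $hostname => $associatedMacs) {
              // Match on any associated MAC, even one not currently plugged in.
              if (array_intersect($reportedMacs, $associatedMacs)) {
                  return $hostname;
              }
          }
          return null; // unknown machine, so no host record is applied
      }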

      As for the recovery screen, I’m not sure I have an answer for that. I can make another guess there: most likely it’s related to the kernel version. You could try some older kernel versions using the kernel updater page and see if one of those doesn’t force your machine into a recovery screen. It still seems very vendor-specific, but I’ve seen odd issues of firmware temporary writes doing weird things to network devices between the FOS and Windows boots, which might be happening in your case as well. Again, just a WAG.

      posted in FOG Problems
      Tom Elliott
    • RE: Updated from 1.5.10.1721 to 1.5.10.1725 and FOG Multicast Manager is creating Zombies again. Will the script eventually be patched?

      @Fog_Newb Does this mean things are working as you expected now?

      posted in FOG Problems
      Tom Elliott
    • RE: Updated from 1.5.10.1721 to 1.5.10.1725 and FOG Multicast Manager is creating Zombies again. Will the script eventually be patched?

      @Fog_Newb Can you try pulling again?

      I have made a slight change to what you did: I’m handling this in /opt/fog/service/lib/service_lib.php, under the Service_Register_Signal_handler function. That should be the central point where all the FOG services get their baseline order and configuration for starting their processes. From your original code, you were only applying it to MulticastManager.

      While I believe that is currently the only service that may spawn children (due to the udpcast calls), I’m still of a mind to handle it more centrally, so a future process-spawning service won’t need the same fix.
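
      For illustration only, a central child-reaping handler in PHP might look like this (a minimal sketch under invented names, not the actual Service_Register_Signal_handler code):

      <?php
      // Minimal sketch: reap exited children (e.g. udpcast processes spawned
      // by MulticastManager) so they never linger as zombies.
      pcntl_async_signals(true);

      pcntl_signal(SIGCHLD, function (int $signo): void {
          // Collect every finished child without blocking the service loop.
          while (pcntl_waitpid(-1, $status, WNOHANG) > 0) {
              // Child reaped; nothing further to do here.
          }
      });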

      I have pushed what I hope will fix this (using your baseline, of course) and hope it should work for your needs.

      Thank you for testing in advance and sorry for missing this before the stable release.

      posted in FOG Problems
      Tom Elliott
    • RE: cron-style scheduled task starts on UTC, not local time

      @RAThomas I won’t make you do a PR. I already pushed it, if you want to use the pushed code. 🙂 Thanks for testing and letting us all know!

      posted in FOG Problems
      Tom Elliott
    • RE: cron-style scheduled task starts on UTC, not local time

      @RAThomas Even (slightly) better:

      $GLOBALS['TimeZone'] = $fog_settings[4] ?? (ini_get('date.timezone') ?: 'UTC');
      

      We lose a storage variable (freeing up a tiny bit of memory) and just get the value as directly as possible.
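
      For anyone following along, the difference between ?? and ?: is why both appear there (a standalone illustration, not FOG code):

      <?php
      // ?? falls back only on null/unset; ?: falls back on any falsy value.
      var_dump(null ?? 'UTC'); // string "UTC"

      // ini_get('date.timezone') returns "" when php.ini leaves it unset,
      // and "" is not null, so ?? alone would keep the empty string:
      var_dump('' ?? 'UTC');   // string ""
      var_dump('' ?: 'UTC');   // string "UTC"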

      posted in FOG Problems
      Tom Elliott
    • RE: cron-style scheduled task starts on UTC, not local time

      @RAThomas What about simplifying the whole stanza?

      I’m not saying I don’t like your suggestions, just that this stanza should do exactly the same thing, just much more simply.

      $GLOBALS['TimeZone'] = $fog_settings[4] ?? ($defTz ?? 'UTC');
      
      posted in FOG Problems
      Tom Elliott
    • RE: cron-style scheduled task starts on UTC, not local time

      @RAThomas So the display time of things should follow the TZ Info setting, but the timezone in /etc/php.ini (or its equivalent) will need to be set as well, since that’s what the service starts up with.
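
      As a standalone illustration of that startup behavior (values here are hypothetical, not FOG code):

      <?php
      // A long-running service resolves its timezone once, at startup.
      // $fogTz stands in for the value of FOG's TZ Info setting.
      $fogTz = 'America/New_York'; // hypothetical setting value

      // Mirror the stanza discussed above: setting, then php.ini, then UTC.
      $tz = $fogTz ?? (ini_get('date.timezone') ?: 'UTC');
      date_default_timezone_set($tz);

      echo date('Y-m-d H:i:s T'), PHP_EOL; // prints in the chosen timezone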

      posted in FOG Problems
      Tom Elliott
    • RE: Wrong target device

      @Floppyrub We have performed the same action (we get all HDDs and only return the unique drives on the system; it falls back to preferring the order in which lsblk returns them instead of sorting lexicographically).

      posted in FOG Problems
      Tom Elliott
    • RE: Wrong target device

      @Floppyrub Have you updated to dev-branch? You are free to run any task as a debug, initially to validate things are working as expected before things get too far and do any actual “data loss activities”.

      The latest FOS code is what gets pulled in by default:

      If it helps you to see how it functions, please review the getHardDisk function, which starts at line 1501 of the code link I provided:

      If it helps to see the function as a whole:

      getHardDisk() {
          hd=""
          disks=""
      
          # Get valid devices (filter out 0B disks) once, sort lexicographically for stable name order
          local devs
          devs=$(lsblk -dpno KNAME,SIZE -I 3,8,9,179,202,253,259 | awk '$2 != "0B" { print $1 }' | sort -u)
      
          if [[ -n $fdrive ]]; then
              local found_match=0
              for spec in ${fdrive//,/ }; do
                  local spec_resolved spec_norm spec_normalized matched
                  spec_resolved=$(resolve_path "$spec")
                  spec_norm=$(normalize "$spec_resolved")
                  spec_normalized=$(normalize "$spec")
                  matched=0
      
                  for dev in $devs; do
                      local size uuid serial wwn
                      size=$(blockdev --getsize64 "$dev" | normalize)
                      uuid=$(blkid -s UUID -o value "$dev" 2>/dev/null | normalize)
                      read -r serial wwn <<< "$(lsblk -pdno SERIAL,WWN "$dev" 2>/dev/null | normalize)"
      
                      [[ -n $isdebug ]] && {
                          echo "Comparing spec='$spec' (resolved: '$spec_resolved') with dev=$dev"
                          echo "  size=$size serial=$serial wwn=$wwn uuid=$uuid"
                      }
                      if [[ "x$spec_resolved" == "x$dev" || \
                            "x$spec_normalized" == "x$size" || \
                            "x$spec_normalized" == "x$wwn" || \
                            "x$spec_normalized" == "x$serial" || \
                            "x$spec_normalized" == "x$uuid" ]]; then
                          [[ -n $isdebug ]] && echo "Matched spec '$spec' to device '$dev' (size=$size, serial=$serial, wwn=$wwn, uuid=$uuid)"
                          matched=1
                          found_match=1
                          disks="$disks $dev"
                          # Remove the matched dev from the pool. Entries in $devs are
                          # newline-separated, so drop the exact matching line rather
                          # than attempting a substring replacement.
                          devs=$(grep -vxF -- "$dev" <<< "$devs")
                          break
                      fi
                  done
      
                  [[ $matched -eq 0 ]] && echo "WARNING: Drive spec '$spec' does not match any available device." >&2
              done
      
              [[ $found_match -eq 0 ]] && handleError "Fatal: No valid drives found for 'Host Primary Disk'='$fdrive'."
      
              disks=$(echo "$disks $devs" | xargs)   # add unmatched devices for completeness
      
          elif [[ -r ${imagePath}/d1.size && -r ${imagePath}/d2.size ]]; then
              # Multi-disk image: keep stable name order
              disks="$devs"
          else
              if [[ -n $largesize ]]; then
                  # Auto-select largest available drive
                  hd=$(
                      for d in $devs; do
                          echo "$(blockdev --getsize64 "$d") $d"
                      done | sort -k1,1nr -k2,2 | head -1 | cut -d' ' -f2
                  )
              else
                  for d in $devs; do
                      hd="$d"
                      break
                  done
              fi
              [[ -z $hd ]] && handleError "Could not determine a suitable disk automatically."
              disks="$hd"
          fi
      
          # Set primary hard disk (first entry only, even when $disks is
          # newline-separated in the multi-disk branch)
          hd=$(awk 'NR==1{print $1}' <<< "$disks")
      }
      

      Ultimately, the part I’m worried about is the sort -u, as that will lexicographically sort the drives regardless of the order lsblk returns them in (which is the part I was stating earlier: there’s no true OS load order, as PCI tends to load faster than serial -> parallel).

      I have adjusted the code slightly and am rebuilding with that adjustment at the beginning of the function, where we get all available drives:

      devs=$(lsblk -dpno KNAME,SIZE -I 3,8,9,179,202,253,259 | awk '$2 != "0B" { print $1 }' | sort -u)
      

      Instead of sort -u I’m going to try:

      devs=$(lsblk -dpno KNAME,SIZE -I 3,8,9,179,202,253,259 | awk '$2 != "0B" && !seen[$1]++ { print $1 }')
      

      Basically, that will get only unique drive entries but keep them in the order in which lsblk sees the drives.

      I doubt this will “fix” the issue you’re seeing, but it’s worth noting.

      I still need to clarify, however, that this isn’t a coding fault. There’s no guaranteed method to ensure we always get the right drive, because on newer systems what is labelled one drive this cycle can easily be labelled something else the next cycle.

      IDE drives will always load in order as hda, hdb, hdc, hdd - this is about the only “guarantee” we can give.

      Serial (USB, SATA, etc…): SATA would (generally) load in the appropriate channel order, but USB might or might not load before it: so something on USB might take /dev/sda on this boot, and on the next, the channel 0 controller might take /dev/sda.

      NVMe: what’s nvme0n1 on this cycle might become nvme1n1 on the next.

      This is why the function you see is “best guess” at best.

      I do want to make this more stable on your side of things, for sure, but I want to be clear about what you’re seeing: there’s no way we can ever guarantee we got the “right” drive.

      posted in FOG Problems
      Tom Elliott
    • RE: Upgrading FOG

      @jfernandz When you see this error (Attempting to check in … Fail) can you also look at your http/php error logs and see if anything is logging there?

      I’m not aware of anything problematic due to the ability to “Force task”. Are you forcing the task and that’s causing the issue, or is just the ability to “Force task” causing it? Just trying to understand.

      posted in General Problems
      Tom Elliott
    • RE: Proper way to reinstall the FOG Client

      @jfernandz This sounds like (at first glance) the security token issue:

      So the FOG server has a security token defined for the host, but the client is new on that same host:

      Host prints out “Bad sec token data failed to authenticate”

      Seeing this once, I think, is fine, since there is an initial exchange; but here the FOG Server believes something is already trusted.

      You should be able to get around this by resetting the encryption data (from the fog server) for those hosts needing the client reinstalled.

      You can do this via a group.

      posted in General Problems
      Tom Elliott
    • RE: Report Download

      @ecoele Given that dev-branch is in the 1700s, this information seems to indicate you’re still running the latest stable.

      Once you switch to dev-branch you need to pull in the changes:

      cd /your/path/to/fogproject
      git checkout dev-branch
      git pull
      cd bin
      sudo ./installfog.sh -y
      

      Should get you installed.

      You may also need to (from the browser) do a “CTRL + SHIFT + R”, what’s called a hard refresh, to get all the latest/new JavaScript.

      posted in FOG Problems
      Tom Elliott
    • RE: Wrong target device

      @Floppyrub The code exists in the FOS system (it runs when you boot a machine for a task, not on your server).

      posted in FOG Problems
      Tom Elliott
    • RE: Wrong target device

      @Floppyrub /usr/share/fog/lib/funcs.sh getHardDisk function is where the code is located.

      You’ll want to look at github.com/fogproject/fos for this under the path https://github.com/FOGProject/fos/tree/master/Buildroot/board/FOG/FOS/rootfs_overlay/usr/share/fog/lib/funcs.sh specifically.

      Recent updates were made attempting to account for the “Host Primary Disk” field, allowing serial/WWN/disk-size lookups to help pinpoint which drive to use for imaging when that field is set.

      As a point of consistency, it now de-duplicates and sorts the drives, so it’s possible:

      /dev/hdX is chosen as the primary drive before /dev/nvmeX because of the sorting feature.

      There’s no real way to consistently ensure NVMe is loaded before HDDs, though, so the potential was always there; it’s just that NVMe runs on the PCI bus directly rather than the ATA buses (which are generally much slower to power on).

      Now, /dev/sdX (in the new layout) would most likely be safe, because lexicographically it would sort after the nvme names, I’d imagine.

      Currently, I’m aware the released version of the inits is likely also pre-sorting by disk size first (assuming the largest drive is the primary disk you’d want to send the image to when you’re not using the Host Primary Disk feature).

      From my viewpoint (limited as it may be), you may need to start using UUID/WWN/serial formatting for these multi-disk setups where you don’t want to accidentally overwrite a disk.
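
      For example, these are generic ways to look up stable identifiers for a drive (illustrative commands, not FOG-specific; the device path is an example):

      # Serial, WWN, and size for one disk, straight from lsblk:
      lsblk -pdno KNAME,SERIAL,WWN,SIZE /dev/nvme0n1

      # Stable symlinks that survive device renaming between boots:
      ls -l /dev/disk/by-id/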

      Easier said than done, but my point is that getHardDisk is a best-guess algorithm at its core. It “seemed” better on older systems, but as new technologies and methods of reading data come about, there’s no real “this is definitely the drive this user wants the OS to sit on” method available to anyone.

      posted in FOG Problems
      Tom Elliott
    • RE: Kernel Versions blank

      @Clebboii Please try a hard refresh (CTRL + SHIFT + R). I believe the reason you’re seeing it stuck is that the cache needs to be reloaded (JavaScript does sometimes get loaded into browser cache).

      I know it’s working from all the testing I’ve done and I believe that’s the true bit you’ll need.

      posted in FOG Problems
      Tom Elliott
    • RE: rocky linux 9.6 quirks & php 8

      @mrowand The whole point of checkAuthAndCSRF is to prevent unauthorized access. Based on the message I’m seeing, the 403 Forbidden is happening because the request is crossing origins to get the data, or the CSRF token isn’t being passed correctly:

      Here’s the code that validates:

          // Optional defense-in-depth: Origin/Referer check for state-changing requests
          public static function checkOrigin(array $allowedOrigins): void
          {
              $method = strtoupper($_SERVER['REQUEST_METHOD'] ?? 'GET');
              if (!in_array($method, ['POST','PUT','PATCH','DELETE'], true)) {
                  return;
              }
              $origin = $_SERVER['HTTP_ORIGIN'] ?? null;
              $referer = $_SERVER['HTTP_REFERER'] ?? null;
              if ($origin) {
                  foreach ($allowedOrigins as $allowed) {
                      if (stripos($origin, $allowed) === 0) {
                          return;
                      }
                  }
                  http_response_code(403);
                  echo _('Forbidden (disallowed Origin)');
                  exit;
              } elseif ($referer) {
                  foreach ($allowedOrigins as $allowed) {
                      if (stripos($referer, $allowed) === 0) {
                          return;
                      }
                  }
                  http_response_code(403);
                  echo _('Forbidden (disallowed Referer)');
                  exit;
              }
              // If neither header is present, you can decide to be strict or lenient.
              // Often lenient to avoid breaking weird client setups.
          }
      

      I suspect your console has more information leading to the specific error that was hit.
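
      For what it’s worth, calling a helper like that would look something like this (hypothetical: the class name and origin list are my own examples):

      <?php
      // Run the check before handling any state-changing request.
      Security::checkOrigin([
          'https://fog.example.com', // the FOG web UI's own origin
          'http://fog.example.com',
      ]);
      // Execution continues only for allowed (or header-less) requests;
      // anything else has already received a 403 and exited.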

      Ultimately, the code is working as expected, and something in your environment is causing the issue. Now, to be fair, you said you installed stable, and dev-branch has a fix which I admit I had missed.

      If you’re willing/able to install the dev-branch I suspect you’ll see this is working much better.

      posted in Bug Reports
      Tom Elliott
    • RE: rocky linux 9.6 quirks & php 8

      @mrowand This is strange to me, since I installed Rocky yesterday so I could do some testing.

      I ran an install of Rocky 9 and Rocky 10 and didn’t have to make any single change to the installer for things to install perfectly fine.

      I’m not sure why you’re having issues, but on both Rocky 9 and 10 the default PHP version is 8, so I’m unsure what you’re asking about.

      I’m not able to replicate the issues you’re describing. To be fair, I’m not behind all your same firewalls, but based on your information I can’t reproduce any issues on Rocky 9 or 10 with PHP 8.0.

      I didn’t need to manually install any packages. I installed dev-branch right away because of security and whatnot, but as far as package changes go, there haven’t been any between stable and dev-branch.

      Now, tftp starts via fog-tftp.socket (so it only turns on when requested rather than constantly running in the background).
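
      If you want to confirm that socket-activation behavior, something like this should show it (assuming the matching service unit is named fog-tftp.service):

      systemctl status fog-tftp.socket    # should be active (listening)
      systemctl status fog-tftp.service   # starts only when a request arrives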

      It looks like your firewall is preventing outgoing access to the internet or to other devices on your network; likely firewall-cmd needs to have the http/https services added:

      sudo firewall-cmd --zone=public --add-service=http
      sudo firewall-cmd --zone=public --add-service=https
      

      (That way you can see the version of your FOG server.)

      Then again I don’t know your environment.

      posted in Bug Reports
      Tom Elliott
    • RE: Export image doesn't seem to work anymore

      @boombasstic This is known and will be fixed automatically on the 15th, but if you need it sooner, please switch to the dev-branch and install it. Then you should be able to export reports.

      posted in FOG Problems
      Tom Elliott
    • RE: Kernel Versions blank

      @rbusdom71 Please switch to dev-branch (git checkout dev-branch; git pull), then install. This should be fixed.

      posted in FOG Problems
      Tom Elliott
    • RE: Resizable Linux

      @slawa-jad A single-disk resizable image can be put on smaller or larger drives, and it should expand once the image completes.

      The “Fixed size” partitions wouldn’t be touched as far as size goes because that’s expected.

      I think we need more details to understand and help with the issue. Resizable does what it seems you’re saying it’s not doing. Please help us help you with more details, such as images of the errors you’re seeing and what you’ve tried.

      posted in General Problems
      Tom Elliott