    Tom Elliott
    • Profile
    • Following 27
    • Followers 82
    • Topics 116
    • Posts 18,850
    • Groups 0

    Posts

    • RE: Wrong target device

      @Floppyrub Have you updated to dev-branch? You are free to run any task as a debug task first, to validate that things are working as expected before things get too far and any actual “data loss activities” happen.

      The latest FOS code is the default pull-in.

      If it helps you to see how it functions, please review the getHardDisk function, which starts at line 1501 of the Code link I provided you.

      If it helps to see the function as a whole:

      getHardDisk() {
          hd=""
          disks=""
      
          # Get valid devices (filter out 0B disks) once, sort lexicographically for stable name order
          local devs
          devs=$(lsblk -dpno KNAME,SIZE -I 3,8,9,179,202,253,259 | awk '$2 != "0B" { print $1 }' | sort -u)
      
          if [[ -n $fdrive ]]; then
              local found_match=0
              for spec in ${fdrive//,/ }; do
                  local spec_resolved spec_norm spec_normalized matched
                  spec_resolved=$(resolve_path "$spec")
                  spec_norm=$(normalize "$spec_resolved")
                  spec_normalized=$(normalize "$spec")
                  matched=0
      
                  for dev in $devs; do
                      local size uuid serial wwn
                      size=$(blockdev --getsize64 "$dev" | normalize)
                      uuid=$(blkid -s UUID -o value "$dev" 2>/dev/null | normalize)
                      read -r serial wwn <<< "$(lsblk -pdno SERIAL,WWN "$dev" 2>/dev/null | normalize)"
      
                      [[ -n $isdebug ]] && {
                          echo "Comparing spec='$spec' (resolved: '$spec_resolved') with dev=$dev"
                          echo "  size=$size serial=$serial wwn=$wwn uuid=$uuid"
                      }
                      if [[ "x$spec_resolved" == "x$dev" || \
                            "x$spec_normalized" == "x$size" || \
                            "x$spec_normalized" == "x$wwn" || \
                            "x$spec_normalized" == "x$serial" || \
                            "x$spec_normalized" == "x$uuid" ]]; then
                          [[ -n $isdebug ]] && echo "Matched spec '$spec' to device '$dev' (size=$size, serial=$serial, wwn=$wwn, uuid=$uuid)"
                          matched=1
                          found_match=1
                          disks="$disks $dev"
                          # remove matched dev from the pool
                          devs=${devs// $dev/}
                          break
                      fi
                  done
      
                  [[ $matched -eq 0 ]] && echo "WARNING: Drive spec '$spec' does not match any available device." >&2
              done
      
              [[ $found_match -eq 0 ]] && handleError "Fatal: No valid drives found for 'Host Primary Disk'='$fdrive'."
      
              disks=$(echo "$disks $devs" | xargs)   # add unmatched devices for completeness
      
          elif [[ -r ${imagePath}/d1.size && -r ${imagePath}/d2.size ]]; then
              # Multi-disk image: keep stable name order
              disks="$devs"
          else
              if [[ -n $largesize ]]; then
                  # Auto-select largest available drive
                  hd=$(
                      for d in $devs; do
                          echo "$(blockdev --getsize64 "$d") $d"
                      done | sort -k1,1nr -k2,2 | head -1 | cut -d' ' -f2
                  )
              else
                  for d in $devs; do
                      hd="$d"
                      break
                  done
              fi
              [[ -z $hd ]] && handleError "Could not determine a suitable disk automatically."
              disks="$hd"
          fi
      
          # Set primary hard disk
          hd=$(awk '{print $1}' <<< "$disks")
      }
      

      Ultimately, the part I’m worried about is the sort -u, as that lexicographically sorts the drives regardless of the order lsblk returns them in (which is the part I was stating earlier: there’s no true OS load order, as PCI tends to come up faster than serial or parallel buses).

      I have adjusted the code slightly and am rebuilding with that adjustment in the beginning of the function where we get all available drives:

      devs=$(lsblk -dpno KNAME,SIZE -I 3,8,9,179,202,253,259 | awk '$2 != "0B" { print $1 }' | sort -u)
      

      Instead of sort -u I’m going to try:

      devs=$(lsblk -dpno KNAME,SIZE -I 3,8,9,179,202,253,259 | awk '$2 != "0B" && !seen[$1]++ { print $1 }')
      

      Basically that will get only unique drive entries, but keep them in the order in which lsblk sees the drives.

      I doubt this will “fix” the issue you’re seeing, but it’s worth noting.
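      The difference between the two pipelines is easy to see with a contrived input (the device names here are made up for illustration, not taken from any real system):

      ```shell
      # sort -u: de-duplicates, but reorders lexicographically
      printf '%s\n' /dev/sda /dev/nvme0n1 /dev/sda | sort -u
      # -> /dev/nvme0n1, /dev/sda

      # awk '!seen[$1]++': de-duplicates while preserving first-seen order
      printf '%s\n' /dev/sda /dev/nvme0n1 /dev/sda | awk '!seen[$1]++'
      # -> /dev/sda, /dev/nvme0n1
      ```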

      I still need to clarify, however, that this isn’t a fault in the code. There’s no guaranteed method to ensure we always get the right drive, because on newer systems the device that gets a given label this boot cycle can easily be labelled something else the next cycle.

      IDE drives will always load as hda, hdb, hdc, hdd: this is about the only “guarantee” we can give.

      With serial buses (USB, SATA, etc.), SATA would generally load in channel order appropriately, but USB might or might not enumerate first: so something on USB might take /dev/sda on this boot, and on the next, the channel 0 controller might take /dev/sda.

      NVMe: what’s nvme0n1 on this cycle might become nvme1n1 on the next.

      This is why the function you see is “best guess” at best.

      I do want to make this more stable on your side of things, for sure, but I also want to be clear about what you’re seeing: there’s no way we can ever guarantee we got the “right” drive.

      posted in FOG Problems
      Tom Elliott
    • RE: Upgrading FOG

      @jfernandz When you see this error (Attempting to check in … Fail) can you also look at your http/php error logs and see if anything is logging there?

      I’m not aware of anything problematic due to the ability to “Force task”. Are you forcing the task and it’s causing the issue or just the ability to “Force task” is causing the issue? Just trying to understand.

      posted in General Problems
      Tom Elliott
    • RE: Proper way to reinstall the FOG Client

      @jfernandz This sounds (at first glance) like the security token issue:

      So the FOG server has a security token defined to a host, but the client is new on the same host:

      Host prints out “Bad sec token data failed to authenticate”

      Failing one time, I think, is fine, as there is an initial exchange, but here the FOG server believes the host is already trusted.

      You should be able to get around this by resetting the encryption data (from the fog server) for those hosts needing the client reinstalled.

      You can do this via a group.

      posted in General Problems
      Tom Elliott
    • RE: Report Download

      @ecoele The fact that dev-branch is in the 1700s: this information seems to indicate you’re still running the latest stable.

      Once you switch to dev-branch you need to pull in the changes:

      cd /your/path/to/fogproject
      git checkout dev-branch
      git pull
      cd bin
      sudo ./installfog.sh -y
      

      Should get you installed.

      You may also need to (from the browser) do a “CTRL + SHIFT + R” to do what’s called a hard refresh in the browser to get all the latest/new JavaScript information.

      posted in FOG Problems
      Tom Elliott
    • RE: Wrong target device

      @Floppyrub The code exists in the FOS system (when you boot a machine for a task, not on your server)

      posted in FOG Problems
      Tom Elliott
    • RE: Wrong target device

      @Floppyrub /usr/share/fog/lib/funcs.sh getHardDisk function is where the code is located.

      You’ll want to look at github.com/fogproject/fos for this under the path https://github.com/FOGProject/fos/tree/master/Buildroot/board/FOG/FOS/rootfs_overlay/usr/share/fog/lib/funcs.sh specifically.

      Recent updates have been made that were attempting to account for “Host Primary Disk” field allowing serial/wwn/disk size lookups to help pinpoint what drive to use for imaging when this is set.

      As a point of consistency, it now de-duplicates and sorts the drives, so it’s possible:

      /dev/hdX is chosen as the primary drive before /dev/nvmeX because of the sorting feature.

      There’s no real way to consistently ensure NVMe loads before HDDs, though, so there was always that potential; it’s just that NVMe runs on the PCI bus directly rather than on the ATA buses (which are generally much slower to power on).

      Now, /dev/sdX (in the new layout) would most likely be safe, because lexicographically speaking it would fall after the nvme names in sorting, I’d imagine.
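      That sort order is quick to demonstrate with made-up device names:

      ```shell
      # Lexicographic sort puts hdX first, then nvmeX, then sdX
      printf '%s\n' /dev/sda /dev/nvme0n1 /dev/hda | sort
      # -> /dev/hda, /dev/nvme0n1, /dev/sda
      ```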

      Currently, I’m aware that the released version of the inits likely also presorts by disk size first (assuming the largest drive is the primary disk you’d want to send the image to when you’re not using the Host Primary Disk feature).

      From my viewpoint (limited as it may be), you may need to start using UUID/WWN/serial identifiers more for these multi-disk setups where you don’t want to accidentally overwrite a disk.
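      As a rough sketch of how such a spec can be compared against a device, here is a minimal example; the `normalize` helper below is my own illustration (FOG’s actual helper may differ), showing why a WWN typed in uppercase with stray whitespace can still match lsblk’s output:

      ```shell
      # Illustrative normalization: strip whitespace and lowercase,
      # so user-entered identifiers compare equal to tool output.
      normalize() { tr -d '[:space:]' | tr '[:upper:]' '[:lower:]'; }

      spec='0x5000C500A1B2C3D4'      # hypothetical Host Primary Disk value
      wwn=' 0x5000c500a1b2c3d4 '     # hypothetical value reported by lsblk -pdno WWN
      if [ "$(printf '%s' "$spec" | normalize)" = "$(printf '%s' "$wwn" | normalize)" ]; then
          echo "matched"
      fi
      # -> matched
      ```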

      Easier said than done, but my point is the getHardDisk feature is a best guess algorithm at its core. It “seemed” better in older systems, but as new technologies and methodologies of reading data come about, there’s no real “this is definitely the drive this user wants the OS to sit on” method available to anyone.

      posted in FOG Problems
      Tom Elliott
    • RE: Kernel Versions blank

      @Clebboii Please try a hard refresh. (CTRL + SHIFT + R). I believe the reason you’re seeing it stuck is because cache needs to be reloaded (and javascript does sometimes get loaded into browser cache).

      I know it’s working from all the testing I’ve done and I believe that’s the true bit you’ll need.

      posted in FOG Problems
      Tom Elliott
    • RE: rocky linux 9.6 quirks & php 8

      @mrowand The whole point of checkAuthAndCSRF is to prevent unauthorized access. Based on the message I’m seeing, the 403 Forbidden is happening because the request is crossing origins to get the data, or the CSRF token isn’t passing correctly:

      Here’s the code that validates:

          // Optional defense-in-depth: Origin/Referer check for state-changing requests
          public static function checkOrigin(array $allowedOrigins): void
          {
              $method = strtoupper($_SERVER['REQUEST_METHOD'] ?? 'GET');
              if (!in_array($method, ['POST','PUT','PATCH','DELETE'], true)) {
                  return;
              }
              $origin = $_SERVER['HTTP_ORIGIN'] ?? null;
              $referer = $_SERVER['HTTP_REFERER'] ?? null;
              if ($origin) {
                  foreach ($allowedOrigins as $allowed) {
                      if (stripos($origin, $allowed) === 0) {
                          return;
                      }
                  }
                  http_response_code(403);
                  echo _('Forbidden (disallowed Origin)');
                  exit;
              } elseif ($referer) {
                  foreach ($allowedOrigins as $allowed) {
                      if (stripos($referer, $allowed) === 0) {
                          return;
                      }
                  }
                  http_response_code(403);
                  echo _('Forbidden (disallowed Referer)');
                  exit;
              }
              // If neither header is present, you can decide to be strict or lenient.
              // Often lenient to avoid breaking weird client setups.
          }
      

      I suspect your console has more information leading to the specific error that was hit.

      Ultimately, the code is working as expected and something in your environment is causing the issue. Now, to be fair, you said you installed stable, and dev-branch has a fix which I admit I missed.

      If you’re willing/able to install the dev-branch I suspect you’ll see this is working much better.

      posted in Bug Reports
      Tom Elliott
    • RE: rocky linux 9.6 quirks & php 8

      @mrowand This is strange to me, since I installed Rocky yesterday so I could do some testing.

      I ran an install of Rocky 9 and Rocky 10 and didn’t have to make any single change to the installer for things to install perfectly fine.

      I’m not sure why you’re having issues but in both rocky 10 and 9, php version is by default 8 so I’m unsure what you’re asking about.

      I’m not able to replicate the issues that you’re describing. To be fair, I’m not behind all your same firewalls, but based on your information I’m unable to replicate any issues between Rocky 9 or 10 and with PHP 8.0

      I didn’t need to manually install any packages. I installed dev-branch right away because of security and what not, but as far as package changes, there haven’t been any between stable and dev-branch.

      Now, tftp starts using fog-tftp.socket (so it will only turn on as requested rather than constantly running in the background).

      It looks like your firewall is preventing outgoing access to the internet or to other devices on your network (likely firewall-cmd needs the http/https services added):

      sudo firewall-cmd --zone=public --add-service=http
      sudo firewall-cmd --zone=public --add-service=https
      

      (So you can see the version of your FOG server.)
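      One caveat worth noting (my addition, standard firewalld behavior): as written, those rules apply only to the running firewall and are lost on reboot or reload; to keep them, they also need to be added permanently and the firewall reloaded:

      ```shell
      # Persist the http/https services across reboots, then reload
      sudo firewall-cmd --permanent --zone=public --add-service=http
      sudo firewall-cmd --permanent --zone=public --add-service=https
      sudo firewall-cmd --reload
      ```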

      Then again I don’t know your environment.

      posted in Bug Reports
      Tom Elliott
    • RE: Export image doesn't seem to work anymore

      @boombasstic This is known and will be fixed automatically on the 15th, but if you need it sooner, please switch to dev-branch and install it. Then you should be able to export reports.

      posted in FOG Problems
      Tom Elliott
    • RE: Kernel Versions blank

      @rbusdom71 Please switch to dev-branch (git checkout dev-branch; git pull), then install. This should be fixed.

      posted in FOG Problems
      Tom Elliott
    • RE: Resizable Linux

      @slawa-jad Single-disk resizable images can be put on smaller/larger drives, and they should expand once the image completes.

      The “Fixed size” partitions wouldn’t be touched as far as size goes because that’s expected.

      I think we need more details to understand and help with the issue. Resizable does what it seems you’re saying it’s not doing. Please help us help you with more details, such as images of the errors you’re seeing and what you’ve tried.

      posted in General Problems
      Tom Elliott
    • RE: Resizable Linux

      @slawa-jad I’m not sure I fully understand your question.

      The exact information you posted tells you what is needed for resizing linux images. So yes, it is possible.

      posted in General Problems
      Tom Elliott
    • RE: Report Download

      @ecoele What version of FOG are you running? I presume “stable”, in which we learned of a problem with the report downloads.

      Please check out dev-branch and install it, and you should have the ability to download the reports again.

      Or wait until Oct 15th when the next release will be rolled out including these fixes.

      posted in FOG Problems
      Tom Elliott
    • RE: [Problem] Storage Node connection issues after updating to FOG 1.6

      @Fog_Newb Yep, it’s as I suspected:

      The Line:

      Subsystem	sftp	/usr/lib/openssh/sftp-server
      

      should be changed to:

      Subsystem	sftp	internal-sftp
      

      Then restart ssh services: systemctl restart sshd

      Then your Storage Node testing should succeed!
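      One quick way to confirm the change took effect (my suggestion, assuming a reasonably recent OpenSSH):

      ```shell
      # Dump the effective sshd configuration and check the sftp subsystem line;
      # it should now show internal-sftp rather than a path to sftp-server
      sudo sshd -T | grep -i subsystem
      ```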

      posted in Bug Reports
      Tom Elliott
    • RE: Stuck at resizing after successful capture.

      @Fog_Newb Yep, it’s as I suspected:

      The Line:

      Subsystem	sftp	/usr/lib/openssh/sftp-server
      

      should be changed to:

      Subsystem	sftp	internal-sftp
      

      Then restart ssh services: systemctl restart sshd

      Then your Storage Node testing should succeed!

      posted in FOG Problems
      Tom Elliott
    • RE: [Problem] Storage Node connection issues after updating to FOG 1.6

      @Fog_Newb 1.6 prefers using SSH access for SFTP connectivity to the Nodes:

      Can you output your /etc/ssh/sshd_config file? I suspect the sftp line is screwy at this point.

      posted in Bug Reports
      Tom Elliott
    • RE: Group Export

      @Richarizard504 I’m not really sure how to help:

      Can you do a hard refresh in your browser and see if that helps anything?

      (CTRL + SHIFT + R in chrome generally)

      I have a screencast where I show you all I did, and it all works, so I’m thinking maybe the cache isn’t loading the updated JavaScript elements necessary for this to function appropriately.

      posted in FOG Problems
      Tom Elliott
    • RE: Windows on ARM

      @MarkG Can you increase the Loglevel and see if the kernel is spewing out anything?

      I believe on the GUI there’s a loglevel under FOG Configuration->FOG Settings->Kernel Loglevel or something like it.

      It’s defaulted to 4, but if you increase it, it should make the logging a bit more verbose.

      I’m not holding my breath, but it’s worth a shot at least. Hopefully there’s more information we might glean if anything is hidden due to the lower log level.

      posted in Hardware Compatibility
      Tom Elliott
    • RE: Help Setting up replication across storage groups

      @Clebboii You would need to assign the image with the groups you want to transfer between.

      Having multiple groups is fine, but those groups also need nodes. Is this not working?

      Basically, find your image, associate the groups you want with it, and indicate which group is the master group, and it should start replicating between groups.

      posted in FOG Problems
      Tom Elliott