
    Tom Elliott

    @Tom Elliott

    Reputation: 5.1k
    Profile views: 38.9k
    Posts: 18.9k
    Followers: 82
    Following: 27


    Best posts made by Tom Elliott

    • Gratitudes

      I know I’ve been out of this for a little bit. I check in here or there, but just been extremely busy.

      I don’t want to stop contributing; I’m just taking time for myself after my work duties.

      I have to give a big gratitude and thanks for everyone here trying to help out whether by code, by helping the rest of the community, or documentation.

      @Sebastian-Roth I know you’re busy but you’ve kept the project rolling even with the minimal availability you have. Thank you.
      @george1421 I’m sure you’re busy, but I still see you posting and helping where possible and amenable. Thank you.
      @Wayne-Workman I know you’re helping where you can as well. (Of course I can’t exactly post everybody because I’ve been busy and honestly not keeping up with the forums as much as I probably should.)

      @everyone Thank you. Thank you for still believing in this project. We’re doing the best with what we have. Please understand, if we’re lacking, it’s most likely unintentional. I know I’m just busy.

      posted in Announcements
      Tom Elliott
    • FOG 1.3.5 and Client 0.11.11 Officially Released

      https://news.fogproject.org/fog-1-3-5-and-client-0-11-11-officially-released/

      posted in Announcements
      Tom Elliott
    • FOG 1.5.0 RC 11

      https://news.fogproject.org/fog-1-5-0-rc-11/

      posted in Announcements
      Tom Elliott
    • Ubuntu is FOG's enemy

      TL;DR: Rerun the fog installer if you have lost “Database Connectivity” to your fog server, or run the ALTER USER syntax shown below.

      So Ubuntu 16, among others I suppose, enables “security updates” to be applied automatically by default. Why? Well, it makes it simpler to ensure your Ubuntu systems are in compliance and patched against any potential exploits. It also causes unknown and unexpected issues.

      I figured it would be worth pointing out (as many of you have already experienced) that when these updates are applied, with or without your knowledge, they can break functionality in unexpected and inopportune ways.

      The quickest fix is to simply rerun the fog installer which should correct the problem.

      As a note, it seems this problem occurs only when the mysql account is the 'root' user AND the password is blank.

      The “fix” if you must do it manually is to open a terminal and obtain root:
      Super (Windows Key) + T then sudo -i (in most cases).

      From there, open mysql with mysql -u root

      NOTE: mysql MUST be run as root.

      Run:

      ALTER USER 'root'@'127.0.0.1' IDENTIFIED WITH mysql_native_password BY '';
      ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '';

      It’s okay if one of them fails. This will fix most people’s issues.
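      For convenience, the manual fix can also be scripted. This is a sketch only, assuming the affected setup described above (MySQL 'root' account with a blank password); the mysql client's `--force` flag keeps it going if one of the two statements fails:

```shell
# The two ALTER USER statements from the post, bundled for a one-shot fix.
# Assumption: the affected setup is MySQL 'root' with a blank password.
FIX_SQL="ALTER USER 'root'@'127.0.0.1' IDENTIFIED WITH mysql_native_password BY '';
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '';"

# As root (sudo -i), you would pipe the statements into the client:
#   printf '%s\n' "$FIX_SQL" | mysql -u root --force
printf '%s\n' "$FIX_SQL"
```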

      I would highly recommend removing unattended-upgrades, as many of these “sudden” issues came from a security patch Ubuntu pushed out. By default, Ubuntu typically sets this to enabled, and it can wreak havoc even though you (the admin) may not have “done” anything.

      To prevent this problem from happening in the future you could run:

      apt-get -y remove unattended-upgrades (as root again).
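      Alternatively, if you would rather keep the package installed but stop the automatic runs, the stock Ubuntu toggle lives in /etc/apt/apt.conf.d/20auto-upgrades; setting both values to "0" disables the periodic unattended upgrades:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Unattended-Upgrade "0";
```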

      posted in Announcements
      Tom Elliott
    • FOG Activity - Status

      FOG is still actively being developed. It’s not necessarily readily apparent, but we can assure you things are still being worked on. These updates may not be communicated in a way that everybody just knows, but they can easily be seen if one looks at our repository site.

      Between our own schedules and lives, we can get very busy. We try to keep things updated and help out on the forums even during lull periods. This might mean we aren’t pushing an RC or release as frequently. It may mean we’re working on other things for the project, as can be seen by looking at our GitHub site.

      Our forums are heavily active, and this should serve as an indicator of our “status” as well.

      If anybody would like to see an increase in developers donating their time to making this free software, consider donating either with monetary support or by spending personal time to help with development.

      FOG is an open source project - it’s even in the name. It is driven by people donating their time and resources. FOG releases revolve around when developers can spare a few hours throughout the week. Sometimes that will mean releases come further apart, sometimes closer together. That’s just the nature of our project, and of many other open source projects.

      posted in Announcements
      Tom Elliott
    • I'm away, but back?

      Hey everybody,

      I know you see me here from time to time. Life decisions have made it more difficult for me to do the things I would normally be doing. Rest assured, I am still around, and while I’m not quite as active as I was in the past, it’s not because I don’t want to be.

      I had to move, and as part of that I have none of my normal development stuff readily available. Part of the move made me not have a laptop, until today.

      I need to set up my dev environment again, so it may take a little bit, but I will be back up.

      posted in Announcements
      Tom Elliott
    • RE: Release plan for FOG

      That’s correct. The main reason FOG is constantly moving forward is because the codebase is improved upon. Major bugs tend to be addressed for the next release. We don’t do an LTS because there are really only two people working on FOG in a consistent manner. Those two are @Joe-Schmitt and myself. Debian and LibreOffice have teams able to perform such a feat. Their products are open source, but they have an employed team which can afford them that luxury. FOG has a team, but we make no money and as such are required to work full-time jobs. We work on FOG in our free time. I’ve even had the ability to work on it from work, because we used the software.

      Maintaining many different versions is difficult, and we don’t have a support team. WYSIWYG, and I think we’ve done pretty well on support, even if we don’t have the ability to do dedicated support for our product. 1.5 was a major step toward modernizing the GUI. 1.6 will vastly improve on this. It was only recently that we kind of came up with a road map on how best to proceed. Of note, 1.5 will be maintained until 1.6 is released. 1.6 is focused on making the GUI much more modern. 1.7 will be focused mostly on fixing and refactoring the FOG client. 1.8 will focus on making the FOS system more modular and usable. I don’t know yet for 1.9. 2.0 will bridge the gap for our rewrite based on the work from 1.5 and up. While we do plan to try to do backports where possible, it’s much easier to ask people to update to the latest version than it is to try to maintain many different versions with backports in mind. At least for what FOG does.

      I doubt this will appease anybody, but it’s what I think needs to be said. We are working hard and provide support for our product as best we can. The community makes FOG’s support system, I think, one of the best around. Add to that that you can almost always have a developer working side by side with you to help fix issues as they come up, and I don’t think it’s unfair to ask users to update to a specific version. Even if there are bugs, we will always try to correct what we can, when we can. (And normally it’s a pretty quick turnaround.)

      I’m not perfect, and I’ll give you that. We don’t even have a test suite to know if things are working as intended. We have to rely on the community, and suggestions are great; just understand our answers won’t always be what people want to hear.

      posted in Feature Request
      Tom Elliott
    • FOG 1.4.0 Officially Released

      https://news.fogproject.org/fog-1-4-0-officially-released/

      posted in Announcements
      Tom Elliott
    • FOG 1.4.4 Officially Released

      https://news.fogproject.org/fog-1-4-4-officially-released/

      posted in Announcements
      Tom Elliott
    • FOG 1.5.0 RC 12 and FOG Client v0.11.13 Released

      https://news.fogproject.org/fog-1-5-0-rc-12/

      posted in Announcements
      Tom Elliott

    Latest posts made by Tom Elliott

    • RE: Updated from 1.5.10.1721 to 1.5.10.1725 and FOG Multicast Manager is creating Zombies again. Will the script eventually be patched?

      @Fog_Newb Does this mean things are working as you expected now?

      posted in FOG Problems
      Tom Elliott
    • RE: Updated from 1.5.10.1721 to 1.5.10.1725 and FOG Multicast Manager is creating Zombies again. Will the script eventually be patched?

      @Fog_Newb Can you try pulling again?

      I have made a slight change to what you performed, as I’m handling this in /opt/fog/service/lib/service_lib.php under the Service_Register_Signal_handler function. This should be the central point where all the FOG services get their baseline order and configuration for starting their processes. So, from your original code, you were only applying it to MulticastManager.

      While I believe that, currently, is the only service that may spawn children (due to the udpcast calls), I’m still of the mind to handle it more centrally so maybe a future process spawning thing will not need the same action.
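      The underlying zombie mechanics are the usual POSIX ones: a child that exits but is never waited on lingers in the process table until its parent reaps it. The shell sketch below is only an analogy for what a centralized signal handler does on the PHP side via pcntl:

```shell
# A child exits with status 7 but stays a zombie until reaped.
( exit 7 ) &
child=$!

# Reaping (wait) collects the exit status and removes the zombie entry.
wait "$child"
reaped_status=$?
echo "$reaped_status"   # 7
```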

      I have pushed what I hope will fix this (using your baseline of course) and hope this should work for your needs.

      Thank you in advance for testing, and sorry for missing this before the stable release.

      posted in FOG Problems
      Tom Elliott
    • RE: cron-style scheduled task starts on UTC, not local time

      @RAThomas I won’t make you do a PR. I already pushed it, if you want to use the pushed code. 🙂 Thanks for testing and letting us all know!

      posted in FOG Problems
      Tom Elliott
    • RE: cron-style scheduled task starts on UTC, not local time

      @RAThomas Even (slightly) better:

      $GLOBALS['TimeZone'] = $fog_settings[4] ?? (ini_get('date.timezone') ?: 'UTC');
      

      We lose a storage variable (freeing up a tiny bit of memory) and just get the value as directly as possible.
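      For illustration only, here is the same fallback chain in shell parameter-expansion terms. `FOG_TZ_SETTING` and `PHP_INI_TZ` are hypothetical stand-ins for `$fog_settings[4]` and `ini_get('date.timezone')`; note that shell’s `:-` also treats empty strings as unset, which PHP’s `??` does not:

```shell
# Hypothetical stand-ins for the DB setting and the php.ini value.
FOG_TZ_SETTING=""   # no timezone stored in FOG settings
PHP_INI_TZ=""       # date.timezone not set in php.ini

# Fall back: FOG setting, else php.ini value, else UTC.
TimeZone=${FOG_TZ_SETTING:-${PHP_INI_TZ:-UTC}}
echo "$TimeZone"   # UTC
```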

      posted in FOG Problems
      Tom Elliott
    • RE: cron-style scheduled task starts on UTC, not local time

      @RAThomas What about simplifying the whole stanza?

      I’m not saying I don’t like your suggestions, just that this stanza should effectively do exactly the same thing, just much more simplified.

      $GLOBALS['TimeZone'] = $fog_settings[4] ?? ($defTz ?? 'UTC');
      
      posted in FOG Problems
      Tom Elliott
    • RE: cron-style scheduled task starts on UTC, not local time

      @RAThomas So the display time of things should follow the TZ info, but the /etc/php.ini (or its equivalent) timezone will need to be set as well, as that’s what the service starts up with.
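      For reference, the directive in question looks like the fragment below (the timezone value is just an example, and the file path varies by distro: /etc/php.ini on RHEL-family systems, /etc/php/<version>/apache2/php.ini or the fpm equivalent on Debian-family). Restart the web server and FOG services after changing it:

```ini
; /etc/php.ini (or equivalent)
date.timezone = "America/New_York"
```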

      posted in FOG Problems
      Tom Elliott
    • RE: Wrong target device

      @Floppyrub We have performed the same action (we get all HDDs and only return the unique drives on the system; it will fall back to preferring the order in which lsblk returns them instead of lexicographically sorting).

      posted in FOG Problems
      Tom Elliott
    • RE: Wrong target device

      @Floppyrub Have you updated to dev-branch? You are free to run any task as a debug task first, to validate things are working as expected before anything gets far enough to do any actual “data loss activities”.

      The latest FOS code is the default pull in:

      If it helps you to see how it functions, please review the getHardDisk function, which starts at line 1501 of the code link I provided:

      If it helps to see the function as a whole:

      getHardDisk() {
          hd=""
          disks=""
      
          # Get valid devices (filter out 0B disks) once, sort lexicographically for stable name order
          local devs
          devs=$(lsblk -dpno KNAME,SIZE -I 3,8,9,179,202,253,259 | awk '$2 != "0B" { print $1 }' | sort -u)
      
          if [[ -n $fdrive ]]; then
              local found_match=0
              for spec in ${fdrive//,/ }; do
                  local spec_resolved spec_norm spec_normalized matched
                  spec_resolved=$(resolve_path "$spec")
                  spec_norm=$(normalize "$spec_resolved")
                  spec_normalized=$(normalize "$spec")
                  matched=0
      
                  for dev in $devs; do
                      local size uuid serial wwn
                      size=$(blockdev --getsize64 "$dev" | normalize)
                      uuid=$(blkid -s UUID -o value "$dev" 2>/dev/null | normalize)
                      read -r serial wwn <<< "$(lsblk -pdno SERIAL,WWN "$dev" 2>/dev/null | normalize)"
      
                      [[ -n $isdebug ]] && {
                          echo "Comparing spec='$spec' (resolved: '$spec_resolved') with dev=$dev"
                          echo "  size=$size serial=$serial wwn=$wwn uuid=$uuid"
                      }
                      if [[ "x$spec_resolved" == "x$dev" || \
                            "x$spec_normalized" == "x$size" || \
                            "x$spec_normalized" == "x$wwn" || \
                            "x$spec_normalized" == "x$serial" || \
                            "x$spec_normalized" == "x$uuid" ]]; then
                          [[ -n $isdebug ]] && echo "Matched spec '$spec' to device '$dev' (size=$size, serial=$serial, wwn=$wwn, uuid=$uuid)"
                          matched=1
                          found_match=1
                          disks="$disks $dev"
                          # remove matched dev from the pool (the list is
                          # newline-separated, so ${devs// $dev/} would not
                          # match; filter exact lines instead)
                          devs=$(printf '%s\n' $devs | grep -Fxv "$dev")
                          break
                      fi
                  done
      
                  [[ $matched -eq 0 ]] && echo "WARNING: Drive spec '$spec' does not match any available device." >&2
              done
      
              [[ $found_match -eq 0 ]] && handleError "Fatal: No valid drives found for 'Host Primary Disk'='$fdrive'."
      
              disks=$(echo "$disks $devs" | xargs)   # add unmatched devices for completeness
      
          elif [[ -r ${imagePath}/d1.size && -r ${imagePath}/d2.size ]]; then
              # Multi-disk image: keep stable name order
              disks="$devs"
          else
              if [[ -n $largesize ]]; then
                  # Auto-select largest available drive
                  hd=$(
                      for d in $devs; do
                          echo "$(blockdev --getsize64 "$d") $d"
                      done | sort -k1,1nr -k2,2 | head -1 | cut -d' ' -f2
                  )
              else
                  for d in $devs; do
                      hd="$d"
                      break
                  done
              fi
              [[ -z $hd ]] && handleError "Could not determine a suitable disk automatically."
              disks="$hd"
          fi
      
          # Set primary hard disk
          hd=$(awk '{print $1}' <<< "$disks")
      }
      

      Ultimately, the part I’m worried about is the sort -u, as that will lexicographically sort the drives regardless of how lsblk returns them (which is the part I was stating earlier: there’s no true OS load order, as PCI tends to load faster than serial -> parallel).

      I have adjusted the code slightly and am rebuilding with that adjustment at the beginning of the function, where we get all available drives:

      devs=$(lsblk -dpno KNAME,SIZE -I 3,8,9,179,202,253,259 | awk '$2 != "0B" { print $1 }' | sort -u)
      

      Instead of sort -u I’m going to try:

      devs=$(lsblk -dpno KNAME,SIZE -I 3,8,9,179,202,253,259 | awk '$2 != "0B" && !seen[$1]++ { print $1 }')
      

      Basically that will get only unique drive entries but keep them in the order in which lsblk sees the drives.
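      The ordering difference between the two pipelines can be seen with a small mock of lsblk output (the device names and sizes below are made up; the 0B entry stands in for an empty card reader):

```shell
# Mock "lsblk -dpno KNAME,SIZE" output: SATA enumerated before NVMe,
# with a duplicate entry and a 0B device to filter out.
sample='/dev/sda 1T
/dev/nvme0n1 512G
/dev/sda 1T
/dev/sdb 0B'

# sort -u dedupes but reorders lexicographically (nvme before sda):
sorted=$(printf '%s\n' "$sample" | awk '$2 != "0B" { print $1 }' | sort -u)

# awk's seen[]++ dedupes while preserving first-seen (lsblk) order:
stable=$(printf '%s\n' "$sample" | awk '$2 != "0B" && !seen[$1]++ { print $1 }')

echo $sorted   # /dev/nvme0n1 /dev/sda
echo $stable   # /dev/sda /dev/nvme0n1
```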

      I doubt this will “fix” the issue you’re seeing, but it’s worth noting.

      I still need to clarify, however, that this isn’t a coding fault. There is no guaranteed method to ensure we always get the right drive, because on newer systems what is labelled one drive this cycle can easily be labelled something else the next cycle.

      IDE (hdX) drives will always load as hda, hdb, hdc, hdd - this is about the only “guarantee” we can give.

      Serial (USB, SATA, etc.): SATA generally loads in channel order appropriately, but USB might or might not load first, so something on USB might take /dev/sda on this boot, and on the next, the channel 0 controller might take /dev/sda.

      NVMe: what’s nvme0n1 on this cycle might become nvme1n1 on the next.

      This is why the function you see is “best guess” at best.

      I do want to make this more stable on your side of things, for sure, but to be clear about what you’re seeing: we can never guarantee we’ve got the “right” drive.

      posted in FOG Problems
      Tom Elliott
    • RE: Upgrading FOG

      @jfernandz When you see this error (Attempting to check in … Fail) can you also look at your http/php error logs and see if anything is logging there?

      I’m not aware of anything problematic due to the ability to “Force task”. Are you forcing the task and it’s causing the issue or just the ability to “Force task” is causing the issue? Just trying to understand.

      posted in General Problems
      Tom Elliott
    • RE: Proper way to reinstall the FOG Client

      @jfernandz This sounds like (at first glance) the security token issue:

      So the FOG server has a security token defined for a host, but the client is new on the same host:

      Host prints out “Bad sec token data failed to authenticate”

      A one-time failure, I think, is fine, as there is an initial token exchange; but here the FOG server believes something is already trusted.

      You should be able to get around this by resetting the encryption data (from the fog server) for those hosts needing the client reinstalled.

      You can do this via a group.

      posted in General Problems
      Tom Elliott