
    nrg (@nrg)

    Reputation: 2 · Profile views: 533 · Posts: 26 · Followers: 0 · Following: 0

    Best posts made by nrg

    • RE: Password changed on fog trunk update

      You guys are right. I fixed the password in the .fogsettings config (pretty sure it's generated by FOG itself) and matched it to the password I had changed it to. Now, while updating, it will read the right password.

      Thank you!
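For anyone hitting the same thing, here is a minimal sketch of the fix, run against a throwaway copy so it is self-contained. On a real server the file is `/opt/fog/.fogsettings` (assumed default install path), and the exact field name may vary by FOG version, so check your file first and back it up:

```shell
# Demo against a scratch copy; on a real server edit /opt/fog/.fogsettings
# (assumed default path) instead, and back it up first.
FOGSETTINGS=/tmp/fogsettings.demo
printf "password='oldpass'\n" > "$FOGSETTINGS"

# Replace the stored password with the one you actually set, so the
# next update run reads the right value.
NEWPASS='s3cret'
sed -i "s/^password=.*/password='${NEWPASS}'/" "$FOGSETTINGS"
grep '^password=' "$FOGSETTINGS"
```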

      posted in Bug Reports

    Latest posts made by nrg

    • RE: Very slow cloning speed on specific model

      @Middle said in Very slow cloning speed on specific model:

      nvme set-feature -f 0x0c -v=0 /dev/nvme0

      So the solution was to put that under FOG general settings? I tried it and it didn't work on my end.

      I'm also facing the same issue. I have 5 HP 840 G6 laptops, and 4 of them have this issue; one does not. They're all identical in hardware and software; even the BIOS is the same.

      I can fog the rest of my 800 computers perfectly fine at normal speed. It's just these new G6s.

      disk drive: Samsung MZVLB256HAHQ-000H1
      NIC: Intel Ethernet I219-LM
      BIOS ver: R70 Ver. 01.03.04 11/06/2019

      FOG server: 1.5.6 on Ubuntu 18.04.3 LTS
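For reference, the command quoted above is an nvme-cli call: feature 0x0c is APST (Autonomous Power State Transition), and value 0 disables it. It isn't a FOG web-UI setting; one way people run per-boot tweaks like this is from a FOS postinit script. A hedged sketch, assuming the default `/images/dev/postinitscripts/fog.postinit` location and that nvme-cli is available in the init image:

```shell
#!/bin/bash
# Hedged sketch of a FOG postinit snippet (assumed path:
# /images/dev/postinitscripts/fog.postinit); disables APST on the
# first NVMe drive before imaging, if one exists.
if [ -e /dev/nvme0 ]; then
    # Feature 0x0c = Autonomous Power State Transition; value 0 = off
    nvme set-feature /dev/nvme0 -f 0x0c -v 0
fi
APST_STEP_DONE=1   # marker so the snippet's control flow can be checked
```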

      posted in FOG Problems
    • RE: FOG Unresponsive under "heavy" load

      @tom-elliott Not sure how the FOG version got mixed up… I just ran git pull and reran installfog.sh from the command line to update to the latest dev-branch.
      Maybe something from the 1.5.0 RC10 install I had screwed it up going to 1.5.4?
      BTW, I've been running since 1.4.4 and updating from the command line ever since.

      Coming in this morning, the replicator looks normal. The node's usage pie chart is normal compared to the main server's chart; before, it would show 20-40% as it was constantly deleting and replicating. The replicator logs are saying "No need to sync", which is good.

      Interesting to see that others in here experienced the same issue updating to 1.5.4.

      I appreciate the offer, but I have to get these computer labs up and running soon. Love the support; FOG has been a life saver. 😃

      posted in FOG Problems
    • RE: FOG Unresponsive under "heavy" load

      I went back to the 1.5.0 final release, and so far it has fixed my issue. Imaging is back to normal. The node is now copying things over; I won't know until tomorrow whether it fixed the replication cycle. I'm imaging 10 clients right now and the web GUI is smooth. iPXE boot is back to normal too, and the command-line screen during imaging doesn't sit at "deleting mbr/gpt" anymore. New feature?

      Current load average: 18.16, 10.60, 4.86

      With 1.5.4, loads were up around 19 across all three, and stayed around 9 while nothing was imaging.
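For context, those three numbers are the 1-, 5-, and 15-minute load averages; on Linux they can be read straight from /proc/loadavg:

```shell
# /proc/loadavg holds: 1min 5min 15min running/total last_pid
read one five fifteen rest < /proc/loadavg
echo "1m=$one 5m=$five 15m=$fifteen"
```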

      Will wait for 1.8.0 next year =D. Anyway, thanks!

      posted in FOG Problems
    • RE: FOG Unresponsive under "heavy" load

      @george1421 Yes, it's exactly identical hardware/software to my master FOG server. I disabled the node server and there's still an unresponsive issue with the main server. The repeated replicating was giving the main server a load average of 10; after shutting it off and disabling it, the main server's load average was normal. Then I tried imaging up to 10 computers, and by the 7th computer the web GUI was unresponsive, and the waiting-in-line slot message on the client computer stopped responding. I believe there's a huge issue with this release.

      I'll try the PHP tweaks; if that doesn't work, I'll figure out a way to go back to 1.5.0 RC10 and redo the node fresh.

      edit:
      I set the memory to 256 but could not find "/etc/httpd/conf.d/fog.conf" anywhere to edit.
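That path is likely the reason: /etc/httpd/conf.d/ is the RHEL/CentOS layout, while on Ubuntu the Apache config lives under /etc/apache2 instead (an assumption based on stock package layouts). A search like this should turn up whichever file the FOG installer actually wrote:

```shell
# Look for an Apache config mentioning fog; searches both the
# Debian/Ubuntu (/etc/apache2) and RHEL (/etc/httpd) layouts.
FOG_CONF=$(grep -rls fog /etc/apache2 /etc/httpd 2>/dev/null | head -n 1)
echo "${FOG_CONF:-no fog apache config found}"
```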

      posted in FOG Problems
    • RE: FOG Unresponsive under "heavy" load

      @george1421 Will try it, but I think it might be an issue with my node. Going to rebuild my node (or disable it) to stop it from replicating over and over. Before the 1.5.4 update, my node was stable at 70% hard drive use. Now I can see it constantly deleting images because it says they don't match; it swings between 20% and 40% hard drive usage as it cycles through images it thinks don't match.
      Will report back.

      posted in FOG Problems
    • RE: FOG Unresponsive under "heavy" load

      I've been having the exact same issues ever since updating to 1.5.4.

      server: Ubuntu 14.04 LTS

      I had 1.5.0 RC10 and it was running smoothly. I only updated because I was having issues with 1709 not restarting after getting a host name; I needed the .10.16 client, and the latest 1.5.3 had it.
      Before, with 10 slots open, I could image all 10 computers at an average of 1.5 GB/min. Now it's barely holding 200 MB/min on each client, and the entire web GUI is unresponsive. I also have some computers queued up in line waiting for a slot to open, and they would time out midway.
      (screenshot: fog_error_gateway.jpg)

      I have a server node; I updated both the main server and the node to 1.5.4, and ever since, it's been busy replicating over and over.

      Also, the PXE boot bzImage part takes much longer to do. I'm going to go back to 1.5.0 RC10.

      How would I go about downgrading?
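For what it's worth, a hedged sketch of one way to downgrade, assuming FOG was installed from a git clone (the clone path below is hypothetical; adjust it). Keep in mind the installer may not downgrade the database schema cleanly, so take a backup first:

```shell
# Hypothetical clone location; adjust to wherever fogproject was cloned.
FOG_SRC=${FOG_SRC:-/root/fogproject}
if [ -d "$FOG_SRC/.git" ]; then
    cd "$FOG_SRC"
    git fetch --tags           # release tags match version numbers, e.g. 1.5.0
    git checkout 1.5.0
    cd bin && ./installfog.sh  # rerun the installer for that version
else
    echo "no git clone found at $FOG_SRC"
fi
DOWNGRADE_SKETCH_DONE=1   # marker so the snippet's flow can be checked
```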

      posted in FOG Problems
    • RE: Can't image from storage node

      Not sure what happened, but it seems to be working now. Imaging from the storage node worked when I tried it last week on another group of computers, from a separate image stored on the server; I saw the storage node IP on the client computer while imaging.
      I have not touched or updated either the main or node servers.
      I have not tried to image LABGEN from the storage node, so the main issue is 50% resolved. Will follow up or capture a new image for LABGEN soon.

      thank you everyone!

      posted in FOG Problems
    • RE: Can't image from storage node

      @sebastian-roth said in Can’t image from storage node:

      memtest

      I ran a badblocks test on all three sda partitions on the storage node. No errors.

      administrator@31-FOGNODE1:~$ sudo badblocks -v /dev/sda1
      Checking blocks 0 to 524287
      Checking for bad blocks (read-only test): done                                                       
      Pass completed, 0 bad blocks found. (0/0/0 errors)
      administrator@31-FOGNODE1:~$ sudo badblocks -v /dev/sda2
      Checking blocks 0 to 483710975
      Checking for bad blocks (read-only test): done
      Pass completed, 0 bad blocks found. (0/0/0 errors)
      administrator@31-FOGNODE1:~$ sudo badblocks -v /dev/sda3
      Checking blocks 0 to 4149247
      Checking for bad blocks (read-only test): done
      Pass completed, 0 bad blocks found. (0/0/0 errors)
      

      BTW, I have my FOG storage node running Ubuntu installed in EFI mode. Could that be an issue?
      Also, I couldn't find memtest in the GRUB menu; I tried putting it back and reinstalling it without success. Guess I'm SOL.

      I don't think there's anything wrong with the client I tried to image from the storage node, since I was able to image it from the master node just fine.

      Everyone, thank you for all your efforts 😃

      edit: I ran memtest86 and there are no errors; it passes perfectly fine. So no hard drive or memory issue… don't know what's wrong =\

      posted in FOG Problems
    • RE: Can't image from storage node

      @sebastian-roth

      master node:

      822a5ed907cb41c30631dc6c160f243f  /images/LABGEN/d1.fixed_size_partitions
      dc9ea3b81f67be37d7d63b297dec1941  /images/LABGEN/d1.mbr
      69d95b8f7a25b7cb62095f3cd358e55c  /images/LABGEN/d1.minimum.partitions
      b293627989626a35e2d6631747b45faf  /images/LABGEN/d1.original.fstypes
      d41d8cd98f00b204e9800998ecf8427e  /images/LABGEN/d1.original.swapuuids
      72c1e188a1f4c3594d7712c41657d227  /images/LABGEN/d1.original.uuids
      2c72b745cb040e72a8f0de0cabc18120  /images/LABGEN/d1p1.img
      9fa1331ab13e0b4f0798289b26752109  /images/LABGEN/d1p2.img
      2021c4a6186077ef7e52320ff7718ef5  /images/LABGEN/d1p3.img
      1fd435efeeb55203f8277a883ff1c17c  /images/LABGEN/d1.partitions
      

      Storage Node:

      822a5ed907cb41c30631dc6c160f243f  /images/LABGEN/d1.fixed_size_partitions
      dc9ea3b81f67be37d7d63b297dec1941  /images/LABGEN/d1.mbr
      69d95b8f7a25b7cb62095f3cd358e55c  /images/LABGEN/d1.minimum.partitions
      b293627989626a35e2d6631747b45faf  /images/LABGEN/d1.original.fstypes
      d41d8cd98f00b204e9800998ecf8427e  /images/LABGEN/d1.original.swapuuids
      72c1e188a1f4c3594d7712c41657d227  /images/LABGEN/d1.original.uuids
      2c72b745cb040e72a8f0de0cabc18120  /images/LABGEN/d1p1.img
      9fa1331ab13e0b4f0798289b26752109  /images/LABGEN/d1p2.img
      2021c4a6186077ef7e52320ff7718ef5  /images/LABGEN/d1p3.img
      1fd435efeeb55203f8277a883ff1c17c  /images/LABGEN/d1.partitions
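For anyone reproducing this check: the two lists above come from running md5sum over the image directory on each node and comparing. A self-contained sketch of the comparison, using scratch copies here instead of the real /images/LABGEN paths:

```shell
# Scratch stand-ins for the two nodes' /images/LABGEN directories.
mkdir -p /tmp/master/LABGEN /tmp/node/LABGEN
echo data > /tmp/master/LABGEN/d1.mbr
cp /tmp/master/LABGEN/d1.mbr /tmp/node/LABGEN/d1.mbr

# On each real node: md5sum /images/LABGEN/* > sums.txt, then diff the files.
(cd /tmp/master && md5sum LABGEN/*) > /tmp/master.sums
(cd /tmp/node   && md5sum LABGEN/*) > /tmp/node.sums

# No diff output means the replica's checksums match the master's.
diff /tmp/master.sums /tmp/node.sums && echo "images match"
```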
      

      The logs look fine.
      [12-01-17 8:10:21 pm] | LABGEN: No need to sync d1p3.img file to 31-FOGNODE1
      [12-01-17 8:10:21 pm] | LABGEN: No need to sync d1p3.img file to 31-FOGNODE1
      [12-01-17 8:10:21 pm] | LABGEN: No need to sync d1p2.img file to 31-FOGNODE1
      [12-01-17 8:10:21 pm] | LABGEN: No need to sync d1p2.img file to 31-FOGNODE1
      [12-01-17 8:10:20 pm] | LABGEN: No need to sync d1p1.img file to 31-FOGNODE1
      [12-01-17 8:10:20 pm] | LABGEN: No need to sync d1p1.img file to 31-FOGNODE1
      [12-01-17 8:10:20 pm] | LABGEN: No need to sync d1.partitions file to 31-FOGNODE1
      [12-01-17 8:10:20 pm] | LABGEN: No need to sync d1.partitions file to 31-FOGNODE1
      [12-01-17 8:10:19 pm] | LABGEN: No need to sync d1.original.uuids file to 31-FOGNODE1
      [12-01-17 8:10:19 pm] | LABGEN: No need to sync d1.original.uuids file to 31-FOGNODE1
      [12-01-17 8:10:19 pm] | LABGEN: No need to sync d1.original.swapuuids file to 31-FOGNODE1
      [12-01-17 8:10:19 pm] | LABGEN: No need to sync d1.original.swapuuids file to 31-FOGNODE1
      [12-01-17 8:10:19 pm] | LABGEN: No need to sync d1.original.fstypes file to 31-FOGNODE1
      [12-01-17 8:10:19 pm] | LABGEN: No need to sync d1.original.fstypes file to 31-FOGNODE1
      [12-01-17 8:10:19 pm] | LABGEN: No need to sync d1.minimum.partitions file to 31-FOGNODE1
      [12-01-17 8:10:18 pm] | LABGEN: No need to sync d1.minimum.partitions file to 31-FOGNODE1
      [12-01-17 8:10:18 pm] | LABGEN: No need to sync d1.mbr file to 31-FOGNODE1
      [12-01-17 8:10:18 pm] | LABGEN: No need to sync d1.mbr file to 31-FOGNODE1
      [12-01-17 8:10:18 pm] | LABGEN: No need to sync d1.fixed_size_partitions file to 31-FOGNODE1
      [12-01-17 8:10:18 pm] | LABGEN: No need to sync d1.fixed_size_partitions file to 31-FOGNODE1
      [12-01-17 8:10:17 pm] | Image Name: LABGEN

      For the LABGEN image: it was first captured June 15th 2016 on the master node, and it works fine.
      It was replicated to the storage node on Nov 9th 2017. The reason you see Dec 1st on the two files on the storage node is that I manually deleted d1p3.img there and had the master node replicate it again. The replicator service runs every 30 mins(?), so when the server did not see the file, it replicated it again. That explains the date.

      NOTE: why does the log show every line duplicated? Bug?

      posted in FOG Problems
    • RE: Can't image from storage node

      This is from the storage node 10.31.1.16:

      total 18148960
      drwxrwxrwx  2 fog fog         4096 Dec  1 09:07 .
      drwxrwxrwx 15 fog root        4096 Nov  9 15:14 ..
      -rwxrwxrwx  1 fog fog            4 Dec  1 09:07 d1.fixed_size_partitions
      -rwxrwxrwx  1 fog fog      1048576 Nov  9 15:14 d1.mbr
      -rwxrwxrwx  1 fog fog          629 Nov  9 15:14 d1.minimum.partitions
      -rwxrwxrwx  1 fog fog           15 Nov  9 15:14 d1.original.fstypes
      -rwxrwxrwx  1 fog fog            0 Nov  9 15:14 d1.original.swapuuids
      -rwxrwxrwx  1 fog fog          215 Nov  9 15:14 d1.original.uuids
      -rwxrwxrwx  1 fog fog     11614954 Nov  9 15:14 d1p1.img
      -rwxrwxrwx  1 fog fog      2171231 Nov  9 15:14 d1p2.img
      -rwxrwxrwx  1 fog fog  18569660631 Dec  1 09:10 d1p3.img
      -rwxrwxrwx  1 fog fog          629 Nov  9 15:14 d1.partitions
      

      this is from the master node 10.31.1.15:

      total 18148960
      drwxrwxrwx  2 fog root        4096 Jun 15  2016 .
      drwxrwxrwx 15 fog root        4096 Aug 17 12:41 ..
      -rwxrwxrwx  1 fog root           4 Jun 15  2016 d1.fixed_size_partitions
      -rwxrwxrwx  1 fog root     1048576 Jun 15  2016 d1.mbr
      -rwxrwxrwx  1 fog root         629 Jun 15  2016 d1.minimum.partitions
      -rwxrwxrwx  1 fog root          15 Jun 15  2016 d1.original.fstypes
      -rwxrwxrwx  1 fog root           0 Jun 15  2016 d1.original.swapuuids
      -rwxrwxrwx  1 fog root         215 Jun 15  2016 d1.original.uuids
      -rwxrwxrwx  1 fog root    11614954 Jun 15  2016 d1p1.img
      -rwxrwxrwx  1 fog root     2171231 Jun 15  2016 d1p2.img
      -rwxrwxrwx  1 fog root 18569660631 Jun 15  2016 d1p3.img
      -rwxrwxrwx  1 fog root         629 Jun 15  2016 d1.partitions
      
      

      You can see the date change on d1p3 on the storage node because I deleted the file and had FOG replicate it again today. Replication works, and MySQL and FTP work.

      I believe FOG is designed to jump to the storage node for the 2nd computer when imaging.
      Reading through the wiki, it says computer 1 = master node, computer 2 = storage node.
      I can see the IP of the storage node on the 2nd computer in the Partclone screen.
      (screenshot: error_1.png)
      (screenshot: error_2.png)

      kernel parameters:
      (screenshot: error_3.jpg)

      posted in FOG Problems