
    Posts made by dolf

    • WOL after moving HDD

      This is not FOG-related, but the audience here might relate to this question…

      After moving the HDD from PC with MAC *:18 to a PC with MAC *:f3, computer *:f3 turns on whenever I try to WOL *:18. Has anyone seen this behaviour? How is it even possible?

      Both PCs are Dell Optiplex 990.

      posted in General
    • RE: How to manually upload an existing image

      If the CloneZilla image is not compressed, it will not have the .gz extension (easy to check for), and the last command becomes:

      cat /home/user/czimg/sda2.ntfs-ptcl-img.* | pigz --stdout > /images/fogimg/d1p2.img
      

      If CloneZilla was invoked using dd, we can concatenate and pipe to partclone:

      cat /home/user/czimg/sda2.ntfs-dd-img.* | partclone.$fstype -fsck-src-part -c -s - -O - | pigz -c > /images/fogimg/d1p2.img
      

      etc… There might be a few more cases to consider.
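
      To automate the .gz check mentioned above, a small shell sketch along these lines should work (my own illustration; the paths and image names follow the examples in this post):

      # Pick the right pipeline depending on whether CloneZilla compressed the
      # partclone image. Paths follow the examples above; adjust to your setup.
      src=/home/user/czimg/sda2
      if ls "${src}".ntfs-ptcl-img.gz.* >/dev/null 2>&1; then
        # Already gzip-compressed: just concatenate the split pieces.
        cat "${src}".ntfs-ptcl-img.gz.* > /images/fogimg/d1p2.img
      else
        # Uncompressed partclone image: compress while concatenating.
        cat "${src}".ntfs-ptcl-img.* | pigz --stdout > /images/fogimg/d1p2.img
      fi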

      posted in General
    • RE: PC unbootable after capture fails

      I didn’t defrag, but I analyzed the fragmentation, and it reported 1% fragmented.
      However, last night the hard disk of the PC I originally used to develop this image started acting up. chkdsk /R /F /V /X on reboot returned no errors or bad sectors, but the Dell Pre-boot System Assessment reports Error Code 2000-0142 for the HDD. I couldn’t find what that code means, other than that the HDD has failed. I suspect a problem with the HDD’s electronics rather than the disk surface, because the diagnostic only took a minute, so it clearly didn’t scan the surface. I’m replacing the disk now to check.

      posted in FOG Problems
    • RE: PC unbootable after capture fails

      Just to show that it does work if you make the wiggle room a tad (where tad=4GB) bigger: gparted_details_70GB.htm

      That’s using the same “broken” image. Everything works perfectly on that image, so I wouldn’t really call it broken. chkdsk agrees with me. It does, however, contain massive software packages with millions of files.

      posted in FOG Problems
    • RE: How to manually upload an existing image

      Feel free to put this in the wiki if you think it’s worthy.

      I’m working with a Windows 7 installation, where sda1 is a small boot partition and sda2 is the large partition called C:.

      1. Use GParted (from your favourite Linux disc or GParted Live) to resize sda2 to a minimum.
      2. Use CloneZilla to capture the disk. I used beginner mode with the savedisk option. I transferred it to /home/user/czimg/… on the FOG server over ssh, but you could use a USB HDD or any other method.
      3. Create a new resizable image on the FOG web interface. Note the image location.
      4. Create the location specified in the previous step, e.g. mkdir /images/fogimg
      5. Do magic:
      cp /home/user/czimg/sda-pt.sf /images/fogimg/d1.minimum.partitions
      cp /home/user/czimg/sda-pt.sf /images/fogimg/d1.partitions
      cp /home/user/czimg/sda-mbr /images/fogimg/d1.mbr
      echo "1" > /images/fogimg/d1.fixed_size_partitions
      echo "/dev/sda2 ntfs" > /images/fogimg/d1.original.fstypes
      cp /home/user/czimg/sda1.ntfs-ptcl-img.gz.aa /images/fogimg/d1p1.img
      cat /home/user/czimg/sda2.ntfs-ptcl-img.gz.* > /images/fogimg/d1p2.img
      

      The last command will take the longest. If you want to see something happening, install pipe viewer (pv) and replace the last command with:

      cat /home/user/czimg/sda2.ntfs-ptcl-img.gz.* | pv > /images/fogimg/d1p2.img
      
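      If you want a quick sanity check afterwards (my own suggestion, not part of the original steps), pigz can verify the concatenated gzip stream:

      # Test the integrity of the compressed image without decompressing to disk.
      pigz -t /images/fogimg/d1p2.img && echo "d1p2.img OK"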

      Have fun

      posted in General
    • RE: PC unbootable after capture fails

      I know it’s a long thread, but here it is: https://forums.fogproject.org/topic/8059/pc-unbootable-after-capture-fails/10

      posted in FOG Problems
    • RE: PC unbootable after capture fails

      @Wayne-Workman I’m not great at shell scripting. I google about 5 pages for every line I write. I mostly do Python, PHP and C.

      @Tom-Elliott I’ll have to disappoint you 😛

      posted in FOG Problems
    • RE: PC unbootable after capture fails

      Sorry, actually no: the image where the resize succeeded has the same MBR, but fewer files in sda2 (about 10 GB less data than the one that fails to resize).

      The suggestion for making the capture process safer still holds, though 🙂

      I even tested it: If I resize to 70GB instead of the minimum (about 66GB), it works just fine. I suspect that it isn’t possible to know exactly what the minimum size of an NTFS partition will be without simulating. That’s probably why the authors of ntfsresize include messages like this (emphasis mine):

      • Estimating smallest shrunken size supported …
      • You might resize at 71189536768 bytes or 71190 MB (freeing 178764 MB).
      • Please make a test run using both the -n and -s options before real resizing!

      Luckily, simulation takes about 10 seconds for a 250GB drive, so it won’t be a large performance hit.
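
      For reference, that dry run looks something like this (using the size from the message above; substitute your own partition):

      # -n = no action (simulate only), -s = target size in bytes.
      ntfsresize -n -s 71189536768 /dev/sda2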

      posted in FOG Problems
    • RE: PC unbootable after capture fails

      @Wayne-Workman Good to hear that it works for you. The fact that it usually works but didn’t work for me is the definition of an edge case. And things should not break when edge cases happen.

      I just realized that I unknowingly tested exactly what you suggested, and that’s probably why it worked. When I try to resize the problematic image, however, I get this: gparted_details_bad.htm

      Still, GParted wins, because it safely terminates before destroying the disk. FOG should, too.

      This discussion shows that most people aren’t really sure why this happens. We could use the following algorithm to work around the problem (expanding on what GParted does):

      increment := "1GB or a certain percentage of the disk size"
      partition := /dev/sda2
      
      calibrate partition
      
      target_size := check file system on partition for errors and fix them and get estimate of smallest supported shrunken size
      
      if there are errors
        stop
      
      do
        simulate resizing to target_size
        if simulation fails
          target_size += increment
      while simulation fails and target_size < disk_size
      
      if simulation succeeded
        // target_size still holds the size that passed the simulation
        actually resize the file system
        actually resize the partition
        // note that file systems and partitions are not the same thing, and are not necessarily the same size... TODO: this is yet another edge case to consider
      
      // if all simulations failed, we just don't resize the disk, and the capture process can still continue uninterrupted
      
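      A rough shell rendering of that loop, as a sketch only (it assumes ntfsresize and an NTFS partition; the variable names are mine and this is not FOG’s actual code):

      part=/dev/sda2
      increment=$((1024 * 1024 * 1024))           # grow the target by 1 GiB per retry
      disk_size=$(blockdev --getsize64 "$part")   # current partition size in bytes
      
      # Ask ntfsresize for its estimate of the smallest supported size.
      target_size=$(ntfsresize --info --force "$part" | awk '/You might resize at/ {print $5}')
      
      resized=no
      while [ "$target_size" -lt "$disk_size" ]; do
        # -n = dry run: simulate the shrink without touching the disk.
        if ntfsresize -n -s "$target_size" "$part"; then
          echo "Simulation succeeded at $target_size bytes; safe to resize to that size."
          # (The real ntfsresize call, plus the matching partition-table change, would go here.)
          resized=yes
          break
        fi
        target_size=$((target_size + increment))
      done
      # If every simulation failed, we simply leave the partition alone and carry on with the capture.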
      
      posted in FOG Problems
    • RE: PC unbootable after capture fails

      Back at it! I tried resizing with GParted, which is known to check everything very carefully before touching the drive. I simply booted GParted Live and resized the big partition, sda2, to a minimum. Here is the log: gparted_details.htm

      Maybe FOG could learn from (or even directly use) GParted in this regard 🙂

      posted in FOG Problems
    • Snapins do not replicate to storage node

      Using the latest FOG server (8540) and client (0.11.3). One storage group with two nodes: the main server and one slave.

      On the client, in C:\fog.log:

      ------------------------------------------------------------------------------
      ---------------------------------SnapinClient---------------------------------
      ------------------------------------------------------------------------------
       2016/07/13 09:18 PM Client-Info Client Version: 0.11.3
       2016/07/13 09:18 PM Client-Info Client OS:      Windows
       2016/07/13 09:18 PM Client-Info Server Version: 8540
       2016/07/13 09:18 PM Middleware::Response Success
       2016/07/13 09:18 PM SnapinClient Snapin Found:
       2016/07/13 09:18 PM SnapinClient     ID: -1
       2016/07/13 09:18 PM SnapinClient     Name: 
       2016/07/13 09:18 PM SnapinClient     Created: -1
       2016/07/13 09:18 PM SnapinClient     Action: 
       2016/07/13 09:18 PM SnapinClient     Hide: False
       2016/07/13 09:18 PM SnapinClient     TimeOut: -1
       2016/07/13 09:18 PM SnapinClient     RunWith: 
       2016/07/13 09:18 PM SnapinClient     RunWithArgs: 
       2016/07/13 09:18 PM SnapinClient     File: 
       2016/07/13 09:18 PM SnapinClient     Args: 
       2016/07/13 09:18 PM SnapinClient ERROR: Snapin hash does not exist
      ------------------------------------------------------------------------------
      

      The ID is NOT -1 in the database. In my experience, error messages relating to FOG are usually confusing or misleading, which is unfortunate. However, after reading this, I started investigating:

      http://mymainserver/fog/status/getsnapinhash.php?filepath=/opt/fog/snapins/res1360x768.ps1 returns a hash, but http://mystoragenode/fog/status/getsnapinhash.php?filepath=/opt/fog/snapins/res1360x768.ps1 returns 0.

      There were no files in /opt/fog/snapins/ on the storage node. The Snapin Replicator log reported:

      [07-13-16 10:18:36 pm] * Starting Sync Actions
      [07-13-16 10:18:36 pm] | CMD:
      			lftp -e 'set ftp:list-options -a;set net:max-retries 10;set net:timeout 30; mirror -c -R -i res1360x768.ps1 --ignore-time -vvv --exclude 'dev/' --exclude 'ssl/' --exclude 'CA/' --delete-first /opt/fog/snapins /opt/fog/snapins; exit' -u fog,[Protected] ip.ip.ip.ip
      [07-13-16 10:18:36 pm] * Started sync for Snapin Resolution 1360x768
      mirror: Access failed: 553 Could not create file. (res1360x768.ps1)
      

      Then I read this and tried:

      # chown -R fog:root /opt/fog/snapins
      

      So it works now.

      BUT: This was on a fresh install of the latest trunk on a brand new Debian Jessie machine, following the installation instructions and “best practices” (sudo -i and what not) as best I could. So maybe that chown command should go in the installer?
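
      A quick way to confirm the fix on the storage node (my own check, not from the FOG docs):

      ls -ld /opt/fog/snapins                 # should now show the fog user as the owner
      sudo -u fog touch /opt/fog/snapins/.write_test && echo "fog user can write here"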

      posted in Bug Reports
    • RE: Fog Services not starting on Server

      Check your server’s timezone:

      cat /etc/timezone
      

      Find your php.ini files:

      sudo find /etc -name php.ini
      

      Example output:

      /etc/php5/cli/php.ini
      /etc/php5/apache2/php.ini
      /etc/php5/fpm/php.ini
      

      Open each one and check the value of date.timezone.
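
      To check them all in one go (a convenience one-liner of mine, not from the original post):

      # Print the date.timezone line from every php.ini that find located above.
      sudo find /etc -name php.ini -exec grep -H "date.timezone" {} \;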

      posted in FOG Problems
    • RE: Fog Services not starting on Server

      I also had this problem. Read here. YMMV.

      posted in FOG Problems
    • RE: Imaged Laptops and Desktops not connecting to the domain.

      @Zirushton I’m having the same problem. Which boxes did you tick?

      posted in FOG Problems
    • RE: Bandwidth graph: Transmit and Receive swapped?

      Hi Tom, what do you mean by “on the same setup”? These are two physically distinct boxes in two different rooms.

      posted in Bug Reports
    • Bandwidth graph: Transmit and Receive swapped?

      It looks like the transmit and receive rates are swapped in the bandwidth graph:

      0_1468405903001_Screenshot from 2016-07-13 12:29:45.png

      All of that bandwidth is due to FOGImageReplicator copying my images from lab2-server to Gleeble, but the graph (as far as I can see) shows that Gleeble is the one transmitting lots of data.

      Fog version ??? (SVN 5892) is running on Ubuntu 14.04 on lab2-server, which is also the master node.

      posted in Bug Reports
    • RE: Fog version not displayed.

      Not in South Africa, though 🙂

      posted in Bug Reports
    • Fog version not displayed.

      0_1468401956689_Screenshot from 2016-07-13 11:25:43.png

      posted in Bug Reports
    • RE: How to manually upload an existing image

      It works! 🙂 The minimum partition size is probably wrong, but I will only deploy to larger drives in any case.

      posted in General
    • RE: How to manually upload an existing image

      I created a new image in the Fog web interface, copied the d1.* files from a very similar image, and replaced d1p*.img with concatenated versions of the CloneZilla images. Currently deploying, and partclone seems happy thus far. Just wondering whether the resize will work…

      By the way:
      Sorry for the delayed responses. It typically takes me 45 minutes to deploy a 65GB image. Without compression, the shortest possible time would be around ((65×1024) MB ÷ (100÷8) MB/s) ÷ 60 ≈ 88 minutes. There’s a 100 Mbit/s switch somewhere between here and the server 😞

      posted in General