    BardWood’s profile: Following 0 · Followers 0 · Topics 14 · Posts 74 · Best 2 · Controversial 0 · Groups 0

    Topics created by BardWood

    • Unsolved: Upgraded to FOG 1.5.2. Now, Parallels 13 VMs hard crash with ACPI (blah blah) 176 errors.

      FOG Problems · BardWood · 0 Votes · 1 Posts · 277 Views

      No one has replied

    • Unsolved: MacBook(s) and FOG:

      FOG Problems · BardWood · 0 Votes · 20 Posts · 2.7k Views

      Last reply:

      @BardWood Sorry, no change there. I don’t see any vendor class identification in the PCAP. I have searched the web up and down to see if others report similar issues, but couldn’t find anything yet.

      I really wonder if there is some kind of intermediate switch/router/firewall modifying the DHCP packets?!? Have you had the FOG server and Mac client on the same switch yet?
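
      For anyone retracing this, a quick way to double-check for option 60 (the vendor class identifier) is to capture the DHCP exchange on the FOG server while the Mac PXE boots and then inspect it. This is only a sketch: eth0 and the capture path are placeholders, and the exact option label in the decode may vary by Wireshark version.

        # capture DHCP/BOOTP traffic on the FOG server during the Mac's PXE attempt
        tcpdump -i eth0 -nn -s 0 -w /tmp/mac-dhcp.pcap port 67 or port 68
        # decode the capture and look for a vendor class identifier (option 60)
        tshark -r /tmp/mac-dhcp.pcap -V | grep -i "vendor class"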

    • Unsolved: FOG UEFI image sync with storage node seems to be looping\failing on UUID info.

      FOG Problems · BardWood · 0 Votes · 10 Posts · 1.4k Views

      Last reply:

      @bardwood said in FOG UEFI image sync with storage node seems to be looping\failing on UUID info.:

      When I was capturing these images, all the disk info would get captured but it would error on ‘d1.original.swapuuids’.

      We need to know the exact error, otherwise it’s just guesswork going the wrong way.

      Though I am not an expert on the replication logic yet, I think the issue could be that you’ve deleted that file on the master. Please move that d1.original.swapuuids on your storage node out of the way (so you still have a backup copy of it, just in case): mv /images/X1CG2-UEFI-FC-V1/d1.original.swapuuids /root

      Then see if replication stops looping.

      Replication is a very tricky thing to get right and so messing with it (deleting files by hand) makes it even harder.
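
      A minimal sketch of that suggestion, assuming /images/X1CG2-UEFI-FC-V1 is the image path on the storage node; the replication log path below is an assumption about a typical FOG install, so adjust it to wherever your image replicator actually logs.

        # on the storage node: move the file aside so a backup copy still exists
        mv /images/X1CG2-UEFI-FC-V1/d1.original.swapuuids /root
        # on the master: watch the image replicator log to see whether the looping stops
        tail -f /opt/fog/log/fogreplicator.log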

    • Unsolved: UEFI PXE on Dell Optiplex 7010 hangs

      FOG Problems · BardWood · 0 Votes · 34 Posts · 14.0k Views

      Last reply:

      @BardWood Bump… 🙂 Any news on this?

    • Unsolved: Need advice managing images on multiple storage nodes/groups

      FOG Problems · BardWood · 0 Votes · 2 Posts · 603 Views

      Last reply from Wayne Workman:

      One server can be the master of several groups. This is how I set up exactly what you’re doing at my old job.

      So say you have servers A, B, C, and D, each one geographically separated at their own site. Say that A is the master and has uber amounts of space. Say that B, C, and D have limited space.

      Site A’s fog server would be the main server & the master of four groups.

      Group 1 - has all images in it and would be the primary group for all images. The master of group 1 is server A, and Server A is the only member of this group.

      Group 2 would be for site B. You’d create another ‘storage node’ using FOG’s web interface. You’d use the same IP address, same user & pass, same /images directory. All of this would be the same, but you would name it something like Site B Master. Then you’d configure Site B’s storage node out at the remote location to be a non-master and a member of Group 2. With this setup, only images shared with Group 2 would replicate to site B.

      You would repeat this sort of setup for C and for D.

      Make sure the main server has plenty of space and compute power. At my old job, with most locations using the same image, and with the sheer number of images we had, we burned through 400GB in a flash. I’d suggest you shoot for 1TB or larger, even 2 or 4TB, because you’ll eventually get that one model where no image type works except for RAW and you wind up with a 500GB image file just to support that one dumb model.
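
      On the sizing point, a quick way to keep an eye on how much of that headroom is left on the main server is to check the /images volume and the per-image usage; this is just a generic sketch, nothing FOG-specific.

        # free space on the volume holding the images
        df -h /images
        # rough on-disk size of each captured image
        du -sh /images/*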

    • Unsolved: Storage node issues (or not?)

      FOG Problems · BardWood · 0 Votes · 2 Posts · 727 Views

      Last reply from george1421:

      FOG will only replicate with storage nodes in its group. Each group must have a master FOG node, and then X number of storage nodes.

      If your storage node is in a different storage group than your master FOG node, no replication will happen.

      Use the replication controls on the images to control what gets sent to the storage nodes from the master node.

      You will probably want to use the location plugin to direct the appropriate clients to the storage node vs the master node.

    • Unsolved: Please point me to 1.2.0 -> 1.3.4 for Centos 6.7 upgrade docs/guide.

      FOG Problems · BardWood · 0 Votes · 47 Posts · 18.0k Views

      Last reply from Tom Elliott:

      @BardWood If you still have the original backup file from when you upgraded the server originally, this would be your best bet. It should be in /home/fogDBbackups

      It is typically time-stamped.

      Once you reinstall 1.2.0, you’ll need to use the backup SQL file to restore it, as a lot changed in the DB between the two.
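
      A rough sketch of that restore, assuming FOG’s default database name (fog); the backup file name below is only an example, so substitute the actual time-stamped dump from /home/fogDBbackups.

        # list the time-stamped backups created by earlier installs/upgrades
        ls -l /home/fogDBbackups
        # restore the chosen dump into the fog database (file name is an example)
        mysql -u root -p fog < /home/fogDBbackups/fog_sql_backup_example.sql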

    • Solved: Need some clarity on how 'Image Export/Import' is supposed to work please.

      FOG Problems · BardWood · 0 Votes · 7 Posts · 1.7k Views

      Last reply from sudburr:

      It exports the definitions (the pointers) as an image_export.csv (Comma Separated Value) text file.

      This file can be edited!

      If, after exporting your complete list of images from one server, you want to import just one of those images to another server, you edit the original exported .csv. I use Microsoft Excel.

      Delete the lines for the images you are not interested in, then save that new .csv.

      On the destination server you then import the updated .csv, the contents of which are merged into that server’s database.

      One of the nicer bits about the exported .csv is that the creation date and image size are saved and imported along with all the other pertinent fields. Do not worry about the image ID #; that is not saved or imported.

      Of course you will still need to copy the actual image files referenced by the image definition onto the destination server.
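
      A rough sketch of those two steps, using a hypothetical image named MyImage stored under /images/MyImage on the source server; trimming the .csv can just as easily be done in Excel or a text editor as described above.

        # keep only the row(s) for the image you want to move, then double-check the file by hand
        grep 'MyImage' image_export.csv > image_export_single.csv
        # copy the actual image files to the destination server's /images directory
        rsync -av /images/MyImage/ destination-server:/images/MyImage/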

    • FOG works great in local office. TFTP timeout over long distance link.

      FOG Problems · BardWood · 0 Votes · 7 Posts · 1.7k Views

      Last reply from Wayne Workman:

      @BardWood do you have a hub (as opposed to a switch)? I’m just asking, because there’s something you can do to help solve this.
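
      For anyone hitting the same timeout, a simple check from a machine at the far end of the link is to pull the boot file over TFTP by hand. A sketch only: it assumes the tftp-hpa client, a placeholder server address, and the BIOS boot file name undionly.kpxe (use whatever boot file your setup serves).

        # from a client across the WAN link: fetch the boot file directly
        tftp 192.0.2.10 -c get undionly.kpxe
        # if this stalls or times out, the link/firewall is the likely culprit rather than FOG itself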

    • FOG Trunk boot loop when PXE booting from storage node.

      FOG Problems · BardWood · 0 Votes · 9 Posts · 2.1k Views

      Last reply from george1421:

      @Tom-Elliott is there a way to do a QC check on the storage node during the install to ensure it can connect to the master node’s SQL server? That would avoid this type of error. Just a suggestion.
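
      Until something like that exists in the installer, a hand-rolled version of the check can be run from the storage node. A sketch only: it assumes the fogstorage database account that FOG typically creates for storage nodes and a placeholder master address; take the actual credentials from your own install’s settings.

        # from the storage node: confirm it can reach and query the master's MySQL service
        mysql -h 192.0.2.10 -u fogstorage -p -e 'SELECT 1;' fog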

    • Solved: FOG 1.2.0 - Is there a way to limit bandwidth in FOG?

      FOG Problems · BardWood · 0 Votes · 3 Posts · 1.1k Views

      Last reply from Wayne Workman:

      Might be faster to ship them a hard drive in the meantime…

      But I agree with Tom, you can set up a storage node there and use the location plugin.

      FOG doesn’t need anything super powerful. An old P4 or Core 2 Duo would work fine. You can even get an Intel NUC to use as a FOG storage node.

    • Are Fog trunk server + 1.2.0 Storage Nodes compatible?

      FOG Problems · BardWood · 0 Votes · 2 Posts · 744 Views

      Last reply from Wayne Workman:

      @BardWood I don’t think it will work. The FOG Trunk methods involved in synchronizing storage nodes with their masters have greatly changed, along with the ability to now cross-share images between storage groups. I would not advise mixing versions as you would like; I think it’ll only lead to disaster. The location plugin has also been modified so that hosts at a remote location PXE boot directly from their local node. Also, the 1.2.0 nodes probably can’t support this functionality.

      You’d be better off just updating everything to FOG trunk, OR just staying at 1.2.0 till 1.3.0 is released.

    • How to sync StorageGroup masters with default group?

      FOG Problems · BardWood · 0 Votes · 8 Posts · 2.7k Views

      Last reply from BardWood:

      @Wayne-Workman Thank you, Wayne. That really does clear things up. It sounds like the easiest thing to do (short of upgrading to trunk) is what I’ve been doing: do a round of image updates and assign them to default, clear ‘is master’ from all other storage nodes and move them to the default group, watch the logs for the sync to complete, then move them back to their respective groups and recheck ‘is master’ on the SNs (since I only have a single SN per group). I only update these images a few times per year, so that’s really not that painful. I could manage rsync scripts or a manual process, but I’d rather let FOG do it, especially if 1.3.0 will be along sometime in the not-too-distant future. Much appreciated!
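
      For reference, the manual rsync alternative mentioned above would look roughly like the sketch below (placeholder storage-node address). It copies the files but bypasses FOG’s own replication bookkeeping, which is exactly why letting FOG handle it is the cleaner route.

        # push updated images from the master straight to a storage node
        rsync -avz --progress /images/ 192.0.2.20:/images/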

    • Solved: I'm stuck! FOG 1.2.0 issues Centos 7. Both server and StorageNode

      FOG Problems · BardWood · 0 Votes · 9 Posts · 2.7k Views

      Last reply from george1421:

      @Wayne-Workman said:

      https://wiki.fogproject.org/wiki/index.php?title=CentOS_7

      Great job!! Especially since a picture is worth a thousand words (as they say).
