
    Posts made by george1421

    • RE: FOG Web GUI speed and default storage activity

      For Ubuntu, I’m still working on the php-fpm config. For CentOS it works. BUT if you only have 60 hosts hitting your FOG server and you have a performance issue, then something else is going on.
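
      As a rough sketch of the kind of php-fpm tuning I mean (the pool file path is the CentOS 7 default, and the numbers are only a starting point you would size to your server’s RAM):

        ; /etc/php-fpm.d/www.conf -- pool sizing, values are illustrative
        pm = dynamic
        pm.max_children = 50       ; cap on concurrent PHP workers (each one eats RAM)
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15
        pm.max_requests = 500      ; recycle workers so memory doesn't creep

        # then restart the service so the new pool settings take effect
        sudo systemctl restart php-fpm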

      What is your client check-in interval (on the FOG settings page)? The default is every 5 minutes. With only 60 hosts that shouldn’t hit the server too hard; with 200 hosts I might think differently.

      What are the stats on your FOG server? How much memory, how many vCPUs, and what does the process load look like in top?

      posted in FOG Problems
    • RE: 2 primary fog plan

      @msi said in 2 primary fog plan:

      What if the master node went down for some odd reason, can we still be able to deploy from the branch storage

      No, in this setup if the master node is offline all of the storage nodes would be down too. There is one part I left out in my wall of text from before. One of the differences between a storage node and a normal node is that the storage node doesn’t have a local database; it connects to the master node’s (normal mode) database for scheduling and reporting. If the master node is down, then there is no database for the storage nodes. There is an unsupported way to sort of make this scenario work, but it’s not 100% clean.
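
      If you want to see that dependency for yourself, you can try reaching the master’s database straight from a storage node (the fogstorage account is the FOG default; 10.0.0.10 is just a placeholder for your master’s IP):

        # run on the storage node; 10.0.0.10 stands in for the master's IP
        mysql -h 10.0.0.10 -u fogstorage -p fog -e "SELECT COUNT(*) FROM hosts;"
        # if the master is down this fails, and with it all scheduling/reporting on that storage node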

      In your Elizabeth example: did you install the FOG Location plugin? Did you assign a storage node to a location? Did you assign the target computer to a location?

      posted in FOG Problems
    • RE: 2 primary fog plan

      First let me say (yes, I know where it came from) that this is a pretty old drawing that may not reflect the current FOG design.

      A FOG server has two different modes. These modes are selected when you first install FOG. The first mode is “Normal” and the second mode is “Storage”.

      A Normal mode install is what you would do if you had a single stand-alone FOG server.

      A Storage mode install is what you would do if you needed to place one or more FOG servers closer to the target computers than the master (normal mode) server.

      In this setup you will have a master node and one or more slave (storage) nodes. These storage nodes can be local to the master node (to share the imaging load) or remote beyond some WAN link.

      The master node replicates all of the snapins and images to all of the storage nodes in its storage group.
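
      If you want to confirm that replication is actually running, on the master you can check the replicator service and watch its log (the service and log names below are from my CentOS install and may differ slightly between FOG versions):

        systemctl status FOGImageReplicator
        tail -f /opt/fog/log/fogreplicator.log   # shows each image file as it is pushed to the storage nodes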

      Target computers can PXE boot from local storage node servers, BUT they must be able to reach the master node during PXE booting. Also, if you are using the FOG client on each computer, the FOG client will only interact with the master FOG server and not the local storage nodes. One final caveat: when capturing an image, only master nodes can accept an image. To say it another way, master nodes can capture and deploy images and snapins; storage nodes can only deploy images.

      posted in FOG Problems
    • RE: Setting up the right FOG Environment

      @sebastian-roth The issue is (only guessing here) that the OP needs to set up a multicast router, or allow multicast data to pass through their VLAN router. Most routers have this disabled by default. The other thing is to have IGMP snooping enabled on the switches so that the switches know who is an IGMP subscriber and who doesn’t care about the data stream (i.e. PIM sparse mode vs. dense mode).
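
      If you want to prove whether multicast actually makes it across the router, omping is a quick test (install it on both ends; the IPs below are just placeholders for the FOG server and a client in the other subnet):

        # run the same command on both machines, listing both IPs
        omping 192.168.10.5 192.168.20.50
        # unicast at 0% loss but multicast at 100% loss points straight at the router/IGMP config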

      posted in General Problems
    • RE: PXE connection Using Windows 2008 as DHCP Server

      @techadmin There is a wiki page that covers this: https://wiki.fogproject.org/wiki/index.php?title=Multicasting

      The other thing is your networking infrastructure. If all of your target machines are on the same subnet as your FOG server, then you are good to go. If they are on different subnets, then you need to get with your infrastructure team and discuss setting up a multicast router or allowing multicast to traverse your subnets. This is not something specific to FOG, but a general property of multicast data paths.

      posted in Windows Problems
    • RE: Apache Issue

      @avaryan Interesting, how many computers do you have checking into this FOG server?

      posted in Linux Problems
    • RE: Hyper-V Generation 2 VMs Aren't Booting Into Network -

      On your FOG server, can you open a command prompt and key in sudo dnsmasq -v to ensure it’s 2.76 or later?
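
      For reference, the version is on the first line of that output, something like this:

        sudo dnsmasq -v | head -1
        # Dnsmasq version 2.76  Copyright (c) 2000-2016 Simon Kelley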

      posted in FOG Problems
    • RE: Setting up the right FOG Environment

      @tomcatkzn said in Setting up the right FOG Environment:

      So a vhdx of about a 250Gb for /Images should be a good starting point then.

      Yes that is a good size since you need storage for 30 computer images.

      posted in General Problems
    • RE: Setting up the right FOG Environment

      @tomcatkzn said in Setting up the right FOG Environment:

      Can you elaborate on how you: “use FOG to place the required driver files in a predefined location on the target system during imaging.”

      Sure…
      https://forums.fogproject.org/topic/7740/the-magical-mystical-fog-post-download-script
      https://forums.fogproject.org/topic/7391/deploying-a-single-golden-image-to-different-hardware-with-fog
      https://forums.fogproject.org/topic/4278/utilizing-postscripts-rename-joindomain-drivers-snapins
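
      The short version of what those tutorials build is below. This is only a rough sketch; the real fog.postdownload scripts in the links handle partition detection, model-name cleanup and a lot more, and the partition, paths and driver-share layout here are assumptions you would adjust for your environment:

        #!/bin/bash
        # runs on the target computer right after the image is laid down
        machine=$(dmidecode -s system-product-name)       # e.g. "OptiPlex 7040"
        ntfs_part="/dev/sda2"                             # Windows volume -- adjust for your disk layout
        mkdir -p /ntfs
        ntfs-3g -o force,rw "$ntfs_part" /ntfs            # mount the freshly imaged Windows volume
        mkdir -p /ntfs/Drivers
        cp -r "/images/drivers/${machine}/." /ntfs/Drivers/   # drivers staged per model on the FOG server
        umount /ntfs
        # an unattend/SetupComplete step on the Windows side then points PnP at C:\Drivers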

      IMO building your reference image on a VM is the only proper way to do it. I would dislike doing what I do on a physical system; it would really slow down our workflow for building new reference images.

      posted in General Problems
    • RE: Setting up the right FOG Environment

      @tomcatkzn said in Setting up the right FOG Environment:

      Any idea what the size of a basic win7 and win10 image is with the new compression code?

      We’ve been rolling out fat images lately, but I think our thin Win7 image is about 8GB on the target disk.

      When setting up the VM I would create 2 vmdk files: one for the OS and one for images. If you are comfortable with the Linux installer you can do this when you manually partition the disks. Create an OS disk of about 20GB in size and then create the image disk as a single disk, with a standard partition (not LVM) formatted with ext4. Then make its mount point /images. That way everything is already set up when you install FOG.

      The other option is to install Linux first on that single 20GB disk, then after the OS is installed add in the imaging disk and mount it at /images BEFORE FOG is installed. Again you will want to make it a single vmdk, with a standard partition (not LVM) formatted with ext4. The reason I say a standard partition is that, if you need to, you can expand that vmdk later and then grow the partition and file system. It’s a bit harder if LVM is involved.
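
      If it helps, this is roughly the manual version of that second option, assuming the image disk shows up as /dev/sdb (adjust the device name to match your VM):

        # partition, format, and mount the dedicated image disk BEFORE running the FOG installer
        parted -s /dev/sdb mklabel msdos mkpart primary ext4 1MiB 100%
        mkfs.ext4 /dev/sdb1
        mkdir -p /images
        echo '/dev/sdb1  /images  ext4  defaults  0 0' >> /etc/fstab
        mount /images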

      posted in General Problems
    • RE: Setting up the right FOG Environment

      @tomcatkzn said in Setting up the right FOG Environment:

      I was thinking that I need for each operating system/hardware combination :

      A base image with drivers and updates but no other software
      then a pre-sysprepped image of each of the above with other software installed.
      then a final sysprepped image for deployment.

      In our case we have a generic OS image and then use FOG to place the required driver files in a predefined location on the target system during imaging. Granted, we only use Dells in our company, but we have 3 images (Win7x64, Win10x64, Win10x64EFI) for 15 different hardware models. It takes a while to set this up, but we can add new models without needing to recapture images for each new model. Also, with this design we recapture these 3 images each quarter with the latest updates (we may abandon this since Win10 will change the way updates are applied anyway).

      We build our reference images on a VM so that we have capabilities like snapshotting and hardware independence during reference system creation. We use Microsoft MDT to automate our reference image build. That way we get a consistently built reference image each quarter.

      posted in General Problems
    • RE: Setting up the right FOG Environment

      @tomcatkzn said in Setting up the right FOG Environment:

      Could you offer the reasons why you prefer Centos over Ubuntu?

      Based on recent changes to Ubuntu, they seem to alter the code in ways that cause FOG issues. CentOS has always been a stable build, and that is what I run. As Wayne said, CentOS and Debian seem to be the most stable OS platforms. With that said, FOG does support and is tested against a large number of Linux distributions. Wayne runs install checks every morning against the supported platforms: http://theworkmans.us:20080/fog_distro_check/installer_dashboard.html

      posted in General Problems
    • RE: Setting up the right FOG Environment

      Really, either of the two hardware options will work just fine for FOG. In your environment either would be a bit of overkill. I would stick with one of the mainstream Linux distributions (RHEL/CentOS, Debian, Ubuntu) and not go with one of the variants unless you have a specific reason.

      As long as your network is set up for multicasting you can image those 35 lab machines in one push, provided all of the systems use the same image.

      For your setup I would work on developing a single image (or group of images) that will deploy to all systems on your campus. This will give you the most flexibility as new models are brought on board.

      In my environment the FOG server is running CentOS 7 on a virtual machine with 2 vCPUs and 5GB of RAM. Image push time for a 25GB Win10 image is about 4.5 minutes.

      posted in General Problems
    • RE: Apache Issue

      I guess I would have to ask: can you update to FOG 1.4.4?

      Also, let’s collect some background information here:

      How many client computers are contacting this fog server?
      What is your client check in time/interval?

      posted in Linux Problems
    • RE: Path is unavailable?

      @george1421 From your image of the DHCP settings, can we assume that 192.168.9.6 is the IP address of your FOG server?

      Also, since you are passing undionly.kpxe to the PXE booting computer, is that target computer in BIOS (legacy) mode?

      posted in FOG Problems
    • RE: Path is unavailable?

      Can you post a clear picture of the iPXE error taken with a mobile phone? I’m not sure what’s up with the link in the OP, but I need to see what the actual error is.

      At this point I’m not worried about any image size stuff. This is a pure DHCP/PXE issue in my mind (as Tom also posted).

      posted in FOG Problems
    • RE: Solus Linux

      @reub_rester said in Solus Linux:

      Solus Linux

      Well, I guess I would have to ask: what is your reasoning for using Solus? It’s not a mainstream OS or even a repackaged Ubuntu variant. It doesn’t use a common package manager either. It’s also a fairly new OS with a lot of churn (not a bad thing, but it is changing a lot). For a server OS that isn’t something you would really want.

      Will FOG work with it? Probably not. Will FOG ever work with it (not speaking for the developers here)? Probably not. There are other mainstream Linux distributions that FOG works with today; there is no real value in supporting something new.

      posted in General
    • RE: "Permission denied": another Nas Issue (Qnap TS-231)

      @iarwayn said in "Permission denied": another Nas Issue (Qnap TS-231):

      the fact that the Nas size and occupation doesn’t appear on the main menu, but, hey! that won’t ruin my joy.

      You will have to live with good enough. For those settings to be populated, you need a real FOG server as the storage node; a few PHP pages have to execute on the target NAS to return what FOG expects. You could probably get there if you installed Apache and PHP on the NAS and then copied over the FOG web site to the storage node. You may need a few PHP libraries beyond what the NAS supports by default. It probably can be done, but the results (values on the FOG homepage) may not be worth the effort.

      I’m pretty sure it could be done with a Synology NAS because it does have packages for Apache and PHP 5.6 that could be installed… but I never had the motivation to try it or to document it.

      BTW: Well done on the QNAP integration. If you feel motivated at some point, please document your steps for the QNAP in the tutorials section like I’ve done, so the next guy can build upon what YOU have created. The French screenshots are no problem either; we are all professionals who know how to use Google Translate 😉

      posted in FOG Problems
    • RE: Recommendations for Server

      @imagingmaster21 That is probably a bit of overkill for the FOG server. But as they say, more is always better.

      Just make sure you order it with an array controller that is compatible with Linux. The cheap Windows-only RAID controllers (SXXX series adapters) won’t work with Linux, so don’t even try if you are installing Linux on bare metal.

      posted in Hardware Compatibility
    • RE: Recommendations for Server

      Really it depends not so much on how many you will image at a time, but on how many total systems will be managed by your FOG server.

      A circa-2010 desktop with an SSD would be more than enough to image systems. Just as a matter of scale, I can image 2 computers at the same time with FOG installed on a Raspberry Pi 3. The heavy lifting (so to speak) during imaging is not done by the FOG server but by the target computer being imaged. All the FOG server needs to do during imaging is move the image file from its disk to the network adapter and manage the process. It’s not a heavy workload.

      Now, when you say 20-30 machines at a time: that can be done with a multicast image (one to many) as long as your network is capable of multicasting. If you are trying to deploy to 20 computers using unicast (one to one), then you will run into a problem in that your infrastructure probably can’t handle that much data (not necessarily a limit of the FOG server itself).
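
      Quick back-of-the-envelope to show why, assuming a 25GB image and a single 1Gb/s uplink out of the FOG server (both numbers are only for illustration):

        20 unicast deploys ≈ 20 x 25GB = 500GB pushed over that one link
        1Gb/s ≈ 125MB/s, so 500,000MB / 125MB/s ≈ 4,000s, roughly 67 minutes best case, with every stream fighting for bandwidth
        one multicast stream ≈ 25GB total on that link, no matter how many clients are listening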

      Let’s start out with this: how do you plan on using FOG in your environment?

      posted in Hardware Compatibility