Generic Questions about FOG Project
-
Background: My company is looking for new imaging software because “ShadowProtect” is not working properly for us.
- Is the FOG server required for just imaging and deployment?
- What sets FOG apart from its competitors like “CloneZilla,” “WDS,” “Acronis,” and “SmartDeploy?”
-
@agray said in Generic Questions about FOG Project:
Background: My company is looking for new imaging software because “ShadowProtect” is not working properly for us.
- Is the FOG server required for just imaging and deployment?
Yes, a FOG server is required for imaging. The FOG server holds the images set for deployment and manages the entire imaging process, from PXE boot to rebooting after imaging completes. The FOG server requires a Linux host to run the application code, but once set up, no Linux experience is required.
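For a rough sense of what that server setup involves, this is the general shape of a FOG install per the project’s documentation (a sketch only; the repository URL is the official one, but verify the current steps against the FOG wiki, since distro packages and installer prompts change between releases):

```shell
# Sketch: standing up a FOG server on a Debian/Ubuntu-family host.
# Verify against the current FOG Project wiki before running.
sudo apt-get update
sudo apt-get install -y git
git clone https://github.com/FOGProject/fogproject.git
cd fogproject/bin
sudo ./installfog.sh   # interactive installer; asks about DHCP, DNS, and the network interface
```

After the installer finishes, the web management interface is served from the FOG server itself, which is where the “no Linux experience required” day-to-day work happens.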
- What sets FOG apart from its competitors like “CloneZilla,” “WDS,” “Acronis,” and “SmartDeploy?”
In simplest terms, licensing costs.
You could say FOG and Clonezilla are distant cousins. They share a lot in common: both are built on FOSS applications, and they share the same disk imaging subsystem but handle the images very differently. Both run Linux inside their capture and deployment processes, and both are target-OS agnostic: you can deploy more than just MS Windows with the FOSS imaging tools. Where the two tools diverge is in snap-in (application) deployment, multicasting, image management, groups, and so on.
As for the others, there are licensing costs as well as a mix of functionality. If you are looking for a paid-for solution (to one degree or another), MDT/WDS can be hard to deny. Snapdeploy offers quite a few features not found in the other solutions.
Regardless of the final deployment tool you use (FOG, Clonezilla, WDS, SCCM), if you plan on deploying a Windows OS to your target computers, I would start the entire process by building your reference image using the lite-touch approach in MDT (Microsoft Deployment Toolkit). Once set up, MDT makes golden image creation a breeze, especially if you need to release a new version of Windows 10 every 6 months.
-
@george1421 said in Generic Questions about FOG Project:
The FOG server requires a Linux host to run the application code. Once set up, no Linux experience is required.
So, would it be possible to just run the server in a VM on a windows server?
Unfortunately I’m the only one with any Linux experience (about a year), and I’ll only be here till I graduate. You say “to run the application code”; would there be a way to put the server on a Windows Server without a Linux VM?
@george1421 said in Generic Questions about FOG Project:
Regardless of the final deployment tool you use (FOG, Clonezilla, WDS, SCCM), I would start the entire process by building your reference image using the lite touch approach in MDT (Microsoft Deployment Toolkit) if you plan on deploying windows OS to your target computers. Once setup, MDT makes golden image creation a breeze, especially if you need to release a new version of Windows 10 every 6 months.
I didn’t even think about the recreation every 6 months. Thanks for the bit of “foresight.”
Is FOG only remote? If so, how does that play into us getting new PCs via Dell/Amazon? Would we just need to plug them into the network and then use the Management Interface?
[Mod Note] I changed the formatting for readability -Geo
-
Yes. FOG works using PXE network booting, plus a server that provides a front-end, web-based GUI (Graphical User Interface) to manage images, hosts, storage, printers, and much more. The web side of things is what tells the machines what they’re to be doing.
-
FOG and Clonezilla are very similar in that we both use Partclone as our imaging medium. Where FOG differs from Clonezilla, however, is the web GUI and the fully scripted-out elements that automate the processes of capturing/deploying images to machines. WDS (and typically MDT) is Microsoft’s native utility for capturing/deploying in a lite-touch scenario; when bundled into SCCM, they cost money. MDT/WDS are (typically) “file-based” imaging solutions, whereas Clonezilla and FOG are block-level imaging solutions. I don’t know much about Acronis or SmartDeploy other than that they are typically paid-for solutions. FOG and Clonezilla are 100% free to use. MDT/WDS are free (I believe) except for the licensing that may be required for the servers that support the process. SCCM is a paid product of Microsoft’s own making that leverages MDT and WDS. Acronis, I believe, can do both block-level and file-based imaging.
File-based imaging is useful for maintaining specific configurations and mass-deploying new changes to those configurations without requiring a full wipe of the drive and the information on it. It does typically take longer to go from “nothing” to “complete and ready to use” than block-level imaging does.
Block-level imaging is useful for putting down a specific layout and keeping machines in a fresh, clean, uniform state. It is also much faster from “nothing” to “complete and ready to use” than file-based imaging, as it doesn’t have to find each file and place it; it just captures the raw data from the disk. The drawback is that you shouldn’t use block-level methods if you want to preserve user data on the disk.
There’s not one thing I can say to just tell you that “FOG is better than xyz software because…” FOG is just a utility designed to make managing machines, and imaging many of them at once, easy to do in an easy-to-use format. I think, if you pitted Clonezilla against FOG, I’d choose FOG. Not because I’ve developed it for so long, but because of its ease of use compared to Clonezilla. That’s not to say Clonezilla doesn’t have its uses; just that, if I had to pick one or the other, I’d pick FOG.
I like @george1421’s method of imaging, however: use the MDT utility to build a reference image quickly, then use FOG to capture the reference for easy deployment.
I’ve had to use SCCM, and for a lite-touch base system it was great. I had to use it to image many systems at once, though, and it just sucked. Again, not saying it doesn’t have its use cases, but I hated it; 4 hours per machine is pretty crappy.
I’ve never used Acronis or SmartDeploy so I have no reference to give. Sorry.
-
@agray said in Generic Questions about FOG Project:
So, would it be possible to just run the server in a VM on a windows server?
Unfortunately I’m the only one with any Linux experience (about a year) and I’ll only be here till I graduate. You say “to run the application code,” would there be a way to put the server on a Windows Server without a Linux VM?
FOG only runs on Linux. You can run it as a VM in a Hyper-V environment, but FOG requires Linux under the hood. The target computer’s OS can be Linux, Windows, or ChromeOS.
Is fog only remote? If so, how does that play into us getting new PCs via Dell/Amazon? Would we just need to plug them into the network and then use the Management Interface?
FOG uses the PXE booting process: you bring in a bare-metal computer, connect it to your network, and then net-boot it into the FOG iPXE menu. From there you can register the target computer with FOG and manage it from the FOG server, or just pick the deploy menu if it’s a load-and-go environment where the FOG server will never see the target computer again, such as at a PC remanufacturer.
If you are going to have a mix of hardware, such as buying the cheapest computer every time, you will have a hard time managing your fleet of computers with whatever solution you pick. The issue is the drivers required. If you follow the lite-touch approach, you only load the drivers necessary to get the system booted; then, during Windows OOBE, the proper drivers are loaded into the system. You can do this by including every possible driver in your golden image or by having your deployment tool inject the proper drivers during the image push.
-
@Tom-Elliott said in Generic Questions about FOG Project:
I like @george1421 method of imaging however. Use the MDT utility to build a reference image quickly. Then use FOG to capture the reference for easy deploying.
This is the approach we’ve been using in house since 2013. If you are smart with your task sequences and organize them in folders, you can quickly import a new Win10 release, copy over your task sequence folder, and deploy the next release in about 10 minutes. MDT has saved us a bunch of time not having to do things by hand in Audit mode.
Tom, you also hit on a very good difference between MDT/WDS/SCCM and FOG/Clonezilla: the former use file-level imaging, which is surely slower than the block-level imaging FOG and Clonezilla use. Both methods have their advantages, but if your main constraint is time, FOG/Clonezilla is much faster at pushing an image than MDT/WDS/SCCM. With FOG I can push a 25GB reference image to a bare-metal target computer in about 4 minutes. That’s going from bare metal to Windows OOBE running in about 5 minutes. At the end of OOBE I have a computer connected to AD and ready to move to the desktop, with all of the common SOE applications already installed.
-
@Tom-Elliott said in Generic Questions about FOG Project:
There’s not one thing I can say to just tell you that “FOG is better than xyz software because…” FOG is just a utility designed to help make managing the machines and imaging many in a nice to use in an easy to use format. I think, if you had to pin me against Clonezilla and FOG, I’d choose FOG. Not because I’ve developed it for so long, but rather because of its ease of use compared to Clonezilla. That’s not to say Clonezilla doesn’t have it’s uses, just that if I had to pick one or the other.
I tried using CloneZilla for a bit but, like you said, it is not very user-friendly; it also doesn’t look very nice.
@george1421 said in Generic Questions about FOG Project:
From there you can register the target computer with FOG and then manage it from the FOG server or just pick the deploy menu if its a load and go environment where the fog server will never see the target computer again, such as in a PC remanufacturer.
If you are going to have a mix of hardware, such as buying the cheapest computer every time, you will have a hard time managing your fleet of computers with whatever solution you pick. The issue is the drivers required. If you follow the lite-touch approach you only load the drivers necessary to get the system booted; then during Windows OOBE the proper drivers are loaded into the system. You can do this by including every possible driver in your golden image or by having your deployment tool inject the proper drivers during the image push.
Would our company using a static network pose any problem with the PXE booting process?
We usually buy a batch of 5-8 Dell OptiPlexes. I read on the website that FOG is hardware-independent; would that allow an image of an OptiPlex 3040 to be put on a 3050?
I appreciate you both for the information! I get to present this “research” to my boss Monday.
-
@agray said in Generic Questions about FOG Project:
Would our company using a static network pose any problem with the PXE booting process?
Yes and no. FOG (actually, PXE booting) requires a DHCP server to send out the proper net-boot information; this is a requirement of PXE booting specifically. FOG leverages this to send a boot kernel to the target computer over the network, and that boot kernel loads the imaging operating system, called FOS. Once imaging is complete, your target OS can use static IP addresses. If you can’t (by some regulatory requirement) use DHCP on your business network, then set up a dedicated, isolated imaging network where you can PXE boot computers to capture and deploy images. It’s a bit more work to do it this way, but it will meet your regulatory requirements. An isolated imaging network can consist of just a FOG server and an isolated network switch.
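For reference, on an ISC DHCP server the PXE hand-off is just two options pointing clients at the FOG server. The addresses below are placeholders, not anything your network will actually use:

```
# dhcpd.conf fragment (ISC DHCP) - point PXE clients at the FOG server.
# 192.168.1.10 stands in for your FOG server's IP address.
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;     # option 66: TFTP server (the FOG server)
  filename "undionly.kpxe";     # option 67: iPXE boot file for legacy BIOS clients
}
```

UEFI clients need a different boot file (ipxe.efi) than BIOS clients, so mixed fleets typically use a conditional option 67; the FOG wiki covers that setup.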
We usually buy a batch of 5-8 Dell OptiPlexes. I read FOG was hardware-independent on the website; would that allow an image of an OptiPlex 3040 to be put on a 3050?
Typically, what some will do is create a single golden image for all target hardware; then all of your systems are exactly alike in configuration. The only thing you need to work out is ensuring the 3040 drivers are loaded on the 3040 hardware, and the same for the 3050. This is an issue you will need to address regardless of the final imaging solution you pick. Some deployment tools will let you inject model-specific drivers post-imaging. There are ways to do this with FOG; it’s not a native solution, but it’s not too hard to set up, and I have a few tutorials on how to do it. Another option some people pick is to just install all drivers for all models into the golden image. This makes a very fat, bloated image, but it’s one way to go about it. On my FOG server I have about 15GB of drivers for about 20 different models of Dell computers; if that was in my golden image, I would have to push out all of those drivers, needed or not, on every deployment. The last way I’ve seen people do this is to have an image per hardware model. So in your case you would have one image for the 3040 and one for the 3050, with the proper drivers loaded in each. It gets a bit messy having to manage one image per model, but that way can work too.
In my company’s case, we have one golden image for all models and then inject the proper drivers based on the target hardware using FOG’s post-install scripts.
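To make that concrete: FOG’s postdownload scripts are plain bash run after the image is laid down, and the core of a driver-injection script is just mapping the machine’s DMI model string to a driver folder. The sketch below is illustrative only; the folder layout and model names are assumptions, not FOG conventions, and the actual mount/copy steps are shown as comments since they only make sense inside a real FOG deployment:

```shell
#!/bin/bash
# Illustrative sketch of the model-to-drivers lookup inside a FOG
# postdownload-style script. Paths and model names are hypothetical.

# Map a hardware model to a driver folder on the FOG server.
driver_dir_for_model() {
    case "$1" in
        "OptiPlex 3040") echo "/images/drivers/optiplex-3040" ;;
        "OptiPlex 3050") echo "/images/drivers/optiplex-3050" ;;
        *)               echo "/images/drivers/generic" ;;
    esac
}

# In a real postdownload script the surrounding steps would look roughly like:
#   model=$(dmidecode -s system-product-name)   # e.g. "OptiPlex 3050"
#   src=$(driver_dir_for_model "$model")
#   rsync -a "$src/" "/ntfs/Windows/DRV/"       # copy onto the mounted Windows partition;
#                                               # Windows setup then finds them via a
#                                               # DevicePath registry entry in the image
```

The tutorials george links below this thread walk through the real thing, including mounting the deployed Windows partition and wiring the script into FOG.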
-
@george1421 said in Generic Questions about FOG Project:
Its a bit more work to do it this way but it will meet your regulatory requirements. An isolated imaging network can consist of just a fog server and isolated network switch.
I’m pretty sure we have a small section of our network that is DHCP that we could expand to accommodate this project. Once the machines are imaged using DHCP and moved back to static addressing, FOG still won’t have any problems with management, right?
It gets a bit messy having to manage one image per model but that way can work too.
In my company’s case we have one golden image for all and then inject the proper drivers based on the target hardware using FOG’s post-install scripts.
We did that in the past, but usually we get a batch and then never get that model again, so we could dispose of the image after a year or so.
How complicated is it to set up FOG to run said scripts, and does MDT offer the option to make a driverless golden image?
-
-
Static IP address management post-install: not a problem. DHCP is only required to PXE boot into the FOG iPXE menu and for capture/deploy activity. After image deployment the target computer checks in to the FOG server using the FOG Client service; that is how the FOG server knows about the target computer.
-
Driver install for a single golden image: it’s not easy, but not hard either. If you use Dell computers, Dell provides the driver cab files as you need them. You can look over these tutorials for guidance: https://forums.fogproject.org/topic/11126/using-fog-postinstall-scripts-for-windows-driver-injection-2017-ed and https://forums.fogproject.org/topic/8889/fog-post-install-script-for-win-driver-injection. You will need a little Linux experience to set it up, but it’s not too hard.
-
MDT driverless: Yes, you can. We build our golden image using a virtual machine. We use ESXi virtualization, but you could use Hyper-V or VirtualBox to build your golden image, then capture from that virtualized machine. I do recommend that you install the Dell Win10PE drivers in the Out-of-Box Drivers section to make your deployment life a bit easier; these are generic drivers that allow your target computer to initially get connected to the network and talk to the storage device. There are plenty of how-tos out there on how to set up an MDT environment, and the Deployment Research website is pretty handy too.
-