FOG as a DRS Server Deployment
Ok, I’ve asked for help on getting some of the FOG features configured to support my FOG as a DRS proposal at work and it’s been ACCEPTED! Thank you all for your help. Now I’m onto the logistics. The number of computers looking to this server for support is 230. I don’t know the upload frequency yet, but we can assume each of them will have its own cron task created for it, and potentially multiple cron tasks if I run into weird scheduling requirements. At an average on-server image size of 40 GB, that puts me at 9.2 TB, so I plan on requesting 12 TB to allow for growth. I am fuzzy on what the processing hardware requirements should be. This will be going on an ESX server. My question to the community: what kind of power will I need behind this system?
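For reference, the sizing above works out like this (a quick sketch using the post’s own numbers; the 12 TB request leaves roughly 30% headroom over the 9.2 TB estimate):

```python
# Back-of-the-envelope capacity check; hosts and average image size
# come from the proposal above.
hosts = 230
avg_image_gb = 40

total_gb = hosts * avg_image_gb   # 9200 GB
total_tb = total_gb / 1000        # 9.2 TB (decimal TB)

print(total_tb)  # 9.2
```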
Setup FTP and NFS on the storage array and use it directly, rather than trying to NFS into the VM and NFS out of it again.
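As a concrete sketch of what exporting from the array directly might look like (the path and subnet below are placeholders, not anything from this thread; adjust to your environment):

```
# Hypothetical /etc/exports entry on the storage array, sharing the
# image store straight to the FOG server and clients instead of
# re-exporting it through the VM:
/export/fog/images  10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
```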
@Deastrom The storage reporting is an FTP thing. It does work in the current trunk. Inside the Storage Node’s /opt/fog/.fogsettings, the FTP credentials in there should be the ones the main FOG server will accept.
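A hedged sketch of the lines in question (the values are placeholders; the point is only that they must match what the main server has recorded for this node under Storage Management):

```
## Hypothetical excerpt from /opt/fog/.fogsettings on the storage node.
## username/password here must match the FTP credentials the main FOG
## server uses when it contacts this node.
username='fog'
password='<same password as set for this node on the main server>'
```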
For the bandwidth monitoring, I think (although am not sure) it’s a combination of FTP and getting the interface name correct in the storage node settings on the main FOG server for the remote node…
The FTP stuff with nodes is really confusing to me… I really need to sit down and explore it.
But to my knowledge, all those features work… and it’s likely the credentials being used that are goofed.
The storage is an Oracle storage array of some sort that will most likely be attached to the FOG server as an NFS share. I will be pushing for some storage nodes, but mainly in a couple of remote locations. I still haven’t found a way to get the dashboard items in the web GUI to see the storage nodes, but since that’s cosmetic and the storage nodes work great regardless, I may wait to see what 1.3 brings to the table before focusing on that issue.
Also, scheduling will be important. I’ll be asking my customers for a frequency for their computer uploads and a window then mapping that on a calendar. I expect to hear something along the lines of ‘once a month’, ‘once a quarter’, ‘twice a month’, etc… and windows of weekend evenings being the most restrictive and others being just off duty hours. Not everyone is running 24 hour production, so bandwidth shouldn’t be much of an issue.
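To illustrate how those frequencies could map onto a calendar, here is a hypothetical crontab-style sketch. The `fog-schedule-upload` command is a placeholder invented for illustration; in practice these would be recurring Scheduled Tasks created in the FOG web UI, which uses the same five-field cron syntax:

```
# 'once a month': 1st of the month, 10 PM (off-duty hours)
0 22 1 * *    fog-schedule-upload host-a
# 'twice a month': 1st and 15th, 11 PM
0 23 1,15 * * fog-schedule-upload host-b
# 'once a quarter': weekend evening in Jan/Apr/Jul/Oct
0 22 1 1,4,7,10 * fog-schedule-upload host-c
```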
@VincentJ You’d be surprised how much space you have if you only buy 6TB drives. And I agree, multiple storage nodes would increase the number of simultaneous uploads that can be done. Depending on the link speed between switches, they could all be in one spot, or, if the links are slow, you could connect a storage node per switch and just assign the images for the computers connected to that switch to that particular storage node.
Also, because many of these systems are legacy and probably have bare bones OSs and tiny HDDs, it might not be an issue to upload them all in one night.
What storage in your ESXi are you using that is giving you 12TB of space?
Something to also test would be if you split your storage into small boxes in storage groups, can you better accommodate multiple uploads at once?
Since this is a lot of machines, you need to figure out how you are going to get them all uploaded repeatedly without too much issue. Also if you use a storage system with snapshots, you could have more recent versions available in case the latest becomes bad.
@Developers What are the RAM requirements for a host ?
@Deastrom Legacy RAM should be cheap…
@VincentJ That’s something I’ll be checking as we register each computer. I’ve been testing most of my oddballs; only a few were limited by BIOS, and others by RAM (128 MB is not enough).
If you could put the images on a FreeNAS-based machine, you could keep up performance and make sure the images are stored safely with checksums etc. It also means you can have multiple FreeNAS boxes managed by one FOG Server VM, giving you multiple upload targets.
One thing to check, do you know that FOG works with the currently installed OS and BIOS settings on your machines?
@Deastrom If some are not windows, then that means some might need a RAW image type - these are not compressed and take significantly longer to upload and download. Also, the extremely legacy OSs might have extremely legacy hardware that might not even support network booting… or any of the recent FOG client kernels.
But, sounds like you’ve thought it through and it sounds like this is the best choice. But I seriously doubt you won’t run into issues with the legacy OSs and legacy hardware - so just be prepared for that.
I agree, the better solution is AD based and updated base images. We have that for the majority of the company, but these 230 are one-offs. Specially configured computers for controlling machinery and creating recipes for said machines. Some of them aren’t even Windows; there are some incredibly legacy operating systems as well.
The only potential issue I see is scheduling. A 40 GB image in FOG Trunk takes 15-ish minutes to upload. 15 x 230 = 3450 minutes.
3450 / 60 = 57.5 hours.
And that’s on a fairly speedy core i5 with a gig connection.
Additionally, you can’t upload images during production (during working hours), so it all has to happen at night. This means you’ll also need to have WOL working properly on all hosts, and on your switches / router. It also means you’re limited to around 10 or 12 hours a night for uploading.
So, what you’d end up with is about 1 upload per host per week… and again, that’s on speedy computers.
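A quick sketch of that arithmetic, using the figures above (serialized uploads, a 12-hour overnight window):

```python
# Rough nightly-throughput check: 15 minutes per 40 GB upload,
# uploads one after another, 12 usable hours per night.
minutes_per_upload = 15
hosts = 230

total_minutes = hosts * minutes_per_upload            # 3450 minutes
total_hours = total_minutes / 60                      # 57.5 hours

uploads_per_night = (12 * 60) // minutes_per_upload   # 48 per 12-hour night
nights_for_all = -(-hosts // uploads_per_night)       # 5 nights, rounded up

print(total_hours, uploads_per_night, nights_for_all)  # 57.5 48 5
```

Five nights to cycle through every host is what drives the "about one upload per host per week" conclusion.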
I’m not trying to discourage you at all, but there are better ways to instantly recover from disaster, using Active Directory folder redirection and just having an updated base image at all times. However, what you’re wanting to do is the simplest and surest of all the options… but that’s a lot of hosts and uploads and data lol.
I won’t be doing any multicast downloading. This is purely 1:1 host to image ratio with cron Uploads and instant Downloads in case of system failure.
I do just fine with 4 cores assigned and 4 gigs of ram using Hyper-V.
For Unicasting (upload and download), the compression and decompression happens client side. The only thing the server needs for that is a fast network connection and HDDs that can keep up.
Multicasting download decompression happens server side - that’s where horsepower comes into play.
You might find interest in this thread: https://forums.fogproject.org/topic/5116/imaging-transfer-rates-vm-vs-physical-machine
Multicast decompression also happens client side.