FOG as a DRS Server Deployment
-
@Deastrom If some are not windows, then that means some might need a RAW image type - these are not compressed and take significantly longer to upload and download. Also, the extremely legacy OSs might have extremely legacy hardware that might not even support network booting… or any of the recent FOG client kernels.
That said, it sounds like you’ve thought it through and this is the best choice. But I’d be surprised if you don’t run into issues with the legacy OSs and legacy hardware - so just be prepared for that.
-
If you could put the images on a FreeNAS-based machine you could keep up performance and make sure that the images are stored safely with checksums etc. It also means that you can have multiple FreeNAS boxes managed by one FOG server VM to upload to multiple targets.
One thing to check: do you know that FOG works with the currently installed OSs and BIOS settings on your machines?
-
@VincentJ That’s something I’m checking as we register each computer. Most of my oddballs have passed testing; only a few were limited by BIOS, and others by RAM (128 MB is not enough).
-
@Deastrom Legacy RAM should be cheap…
-
@Developers What are the RAM requirements for a host?
-
What storage in your ESXi are you using that is giving you 12TB of space?
Something else to test: if you split your storage into smaller boxes in storage groups, can you better accommodate multiple simultaneous uploads?
Since this is a lot of machines, you need to figure out how you are going to get them all uploaded repeatedly without too much issue. Also if you use a storage system with snapshots, you could have more recent versions available in case the latest becomes bad.
-
@VincentJ You’d be surprised how much space you have if you only buy 6TB drives. And I agree, multiple storage nodes would increase the number of simultaneous uploads that can be done. Depending on the link speed between switches, they could all be in one spot, or, if the links are slow, you could connect a storage node per switch and just assign the images for the computers connected to that switch to that particular storage node.
Also, because many of these systems are legacy and probably have bare bones OSs and tiny HDDs, it might not be an issue to upload them all in one night.
-
The storage is an Oracle storage array of some sort that will most likely be attached to the FOG server as an NFS share. I will be pushing for some storage nodes, but mainly in a couple of remote locations. I still haven’t found a way to get the dashboard items on the web GUI to see the storage nodes. But since that’s cosmetic and the storage nodes work great regardless, I may wait to see what 1.3 brings to the table before focusing on that issue.
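Attaching the array over NFS might look something like the sketch below. The hostname and export path are placeholders for whatever your Oracle array actually exposes - substitute your own values:

```shell
# Mount the array's NFS export at FOG's images directory
# (oracle-array.example.com and /export/fog_images are hypothetical names).
sudo mkdir -p /images
sudo mount -t nfs oracle-array.example.com:/export/fog_images /images

# Make the mount persist across reboots:
echo 'oracle-array.example.com:/export/fog_images /images nfs defaults 0 0' \
  | sudo tee -a /etc/fstab
```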
Also, scheduling will be important. I’ll be asking my customers for a frequency for their computer uploads and a window, then mapping that on a calendar. I expect to hear something along the lines of ‘once a month’, ‘once a quarter’, ‘twice a month’, etc… with windows of weekend evenings being the most restrictive and others being just off-duty hours. Not everyone is running 24-hour production, so bandwidth shouldn’t be much of an issue.
-
@Deastrom The storage reporting is an FTP thing. It does work in the current trunk. Inside the Storage Node’s /opt/fog/.fogsettings, the FTP settings in there should be the credentials that the main FOG server will accept.
For the bandwidth monitoring, I think (although am not sure) it’s a combination of FTP and getting the interface name correct in the storage node settings on the main FOG server for the remote node…
The FTP stuff with nodes is really confusing to me… I really need to sit down and explore it.
But to my knowledge, all those features work… and it’s likely the credentials being used that are goofed.
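One quick way to rule out goofed credentials is to read them off the node and try them by hand. This is a sketch, assuming .fogsettings uses the usual shell-style username=/password= lines and that the node’s address (storage-node.example.com here) is a placeholder you’d replace:

```shell
# On the storage node: .fogsettings is shell-sourceable, so pull the
# FTP credentials FOG recorded at install time.
source /opt/fog/.fogsettings
echo "FTP user recorded on this node: $username"

# From the main FOG server: try those exact credentials against the node.
# If this login fails, the storage reporting will fail too.
ftp -n storage-node.example.com <<EOF
user $username $password
ls
quit
EOF
```

If the manual login works but the dashboard still shows nothing, compare these values against the username/password fields in the storage node’s entry on the main FOG server’s web GUI - they have to match.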
-
Set up FTP and NFS on the storage array and use it directly, rather than NFS-mounting into the VM and then NFS-exporting out of it again.
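If the array’s management OS gives you shell access to a standard Linux NFS server, exporting the images directory directly could look like this - the path and subnet are illustrative, not values from the thread:

```shell
# Hypothetical /etc/exports entry serving the images share straight off
# the array to the imaging subnet (adjust path and network to taste):
echo '/export/images 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)' \
  | sudo tee -a /etc/exports

# Re-read the exports table without restarting the NFS service.
sudo exportfs -ra
```

Clients then mount the array directly, and the FOG server VM stops being a double-hop in the data path.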