Need advice managing images on multiple storage nodes/groups
BardWood:
- FOG Version: 1.3.5
- OS: CentOS 7.3 (server), CentOS 6.8 (storage nodes)
Doing a full replication of all images to the storage nodes isn't possible because each node has only about 200GB of usable space (minus the OS). What I'm trying to work out is a scenario where each node holds only the images for its own storage group. I know you can assign images to groups, but since I'd have to move potential group masters into the default group, they'd sync all images and fill up the 200GB disk. No bueno.
Wayne Workman:
One server can be the master of several groups. This is how I set up exactly what you’re doing at my old job.
So say you have servers A, B, C, and D, each one geographically separated at their own site. Say that A is the master and has uber amounts of space. Say that B, C, and D have limited space.
Site A’s fog server would be the main server & the master of four groups.
Group 1 - has all images in it and would be the primary group for all images. The master of group 1 is server A, and Server A is the only member of this group.
Group 2 would be for site B. You’d create another ‘storage node’ using FOG’s web interface. You’d use the same IP address, same user & pass, same /images directory. All of this would be the same, but you would name it something like Site B Master. Then you’d configure Site B’s storage node out at the remote location as a non-master member of Group 2. With this setup, only images shared with Group 2 would replicate to site B.
You would repeat this sort of setup for C and for D.
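The layout above can be sketched as a small model. To be clear, the group and image names here are illustrative only, not FOG internals; the point is that a non-master node only receives images from groups it belongs to.

```python
# Hypothetical model of the storage-group layout described above.
# "Site B/C/D Master" entries in FOG would all be aliases of server A,
# so A is effectively the master of every group.
groups = {
    "Group 1": {"master": "A", "members": ["A"], "images": ["win10", "win7", "linux"]},
    "Group 2": {"master": "A", "members": ["A", "B"], "images": ["win10"]},
    "Group 3": {"master": "A", "members": ["A", "C"], "images": ["win7"]},
    "Group 4": {"master": "A", "members": ["A", "D"], "images": ["linux"]},
}

def images_replicated_to(node):
    """Images a non-master node receives: only those in groups it belongs to."""
    return sorted({img
                   for g in groups.values()
                   if node in g["members"] and node != g["master"]
                   for img in g["images"]})

print(images_replicated_to("B"))  # only Group 2's images reach site B
print(images_replicated_to("A"))  # the master replicates out, never in
```

Site B ends up with just its own group's images instead of the full library, which is what keeps it under the 200GB cap.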
Make sure the main server has plenty of space and compute power. At my old job, with most locations using the same image - and with the sheer number of images we had - we burned through 400GB in a flash. I’d suggest you shoot for 1TB or larger - even 2 or 4TB - because you’ll eventually get that one model where no image type works except for RAW and you wind up with a 500GB image file just to support that one dumb model.
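A quick sanity check before assigning images to a group: total up the image sizes against the node's capacity. The sizes below are made-up placeholders; on a real node you'd read them from `du -s` on the /images directory.

```shell
#!/bin/sh
# Hypothetical capacity check for a space-limited storage node.
NODE_CAPACITY_GB=200
# Placeholder sizes (GB) for the images assigned to this node's group:
IMAGE_SIZES_GB="25 40 18"

total=0
for s in $IMAGE_SIZES_GB; do
  total=$((total + s))
done

echo "total=${total}GB capacity=${NODE_CAPACITY_GB}GB"
if [ "$total" -le "$NODE_CAPACITY_GB" ]; then
  echo "fits"
else
  echo "over capacity"
fi
```

Run it whenever you add an image to the group; if it prints "over capacity", the replicator will fill the disk just like the full-sync scenario in the original question.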