Hi,
I have a similar scenario, with images being built in one place and then transferred out to other independent FOG servers (NOT secondary sites). I do it rather manually, using rsync for the image files and mysqldump for the database records (something like mysqldump fog images --where 'imageId IN (xx,xx,xx)' with the IDs of the images to pull down).
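For example (the IDs and folder name here are made up, and I’m assuming the usual layout where each image lives in its own directory under /images, named after the image’s path setting):
on master: mysqldump fog images --where 'imageId IN (12,13)' --no-create-info --replace > /images/selected.sql
on the other server: rsync -avP fog@master:/images/selected.sql fog@master:/images/Win10Base /images/ && mysql fog < /images/selected.sql
The --no-create-info --replace flags keep mysqldump from dropping and recreating the whole table, so the receiving server keeps the image records it already has.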
I’ve thought about integrating them into FOG, but sometimes we need to pull images back from other servers, and not all sites have the same kind of connectivity, so the transfers have to happen at different times of day… So for now I decide case by case, and we sync with rsync/mysqldump as required. I may end up with a proper file synchronization system some day, but rsync works nicely for this, and the “one-way image pull” is easy to script.
One way (meaning you don’t care what’s on the slave, as far as images go):
on master: mysqldump fog images > /images/images.sql (the dump lands in /images, so the same rsync run below carries it over)
on “slave”: rsync -avP fog@master:/images/ /images && mysql fog < /images/images.sql
I don’t think the storage node feature is the right fit for this, unless we can indeed define a bandwidth limit and a time for the sync to happen, which is easy enough with a script and crontab without touching FOG (see the sketch below). And if you want independent servers (I do), storage nodes won’t do: you need a full master on each site.
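As an illustration, a minimal cron-driven pull on the “slave” could look like this (the hostname, bandwidth cap and schedule are just examples, adjust to taste):
#!/bin/bash
# /usr/local/bin/pull-images.sh -- one-way image pull, run from cron
# --bwlimit caps rsync at roughly 2000 KB/s so the site link stays usable
rsync -avP --bwlimit=2000 fog@master:/images/ /images/ \
  && mysql fog < /images/images.sql
and in the slave’s crontab, to run at 02:30 when the link is idle:
30 2 * * * /usr/local/bin/pull-images.sh >> /var/log/pull-images.log 2>&1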
EDIT: ah, I just saw Tom’s news https://news.fogproject.org/imagesnapin-replication/
Well, that could do it… I guess we’d need a way to flag a “big master”, or maybe the ability to mark an image as shared across “masters”, regardless of which one it came from.
Cheers,
Gilles