How to easily import/export images between systems
-
I need the ability to create images on one system and move them to another system. Both systems are normal installs. Would it be possible to have FOG create and maintain a CSV file for each image and store it in that image's folder? That way, all you would need to do to move an image from one server to another is copy the /images/ImageNameToBeCopied folder, then do an image import on the new server. The import would point to the /images folder of the newly copied image and read in the CSV file.
Right now we do an export on one server, edit the exported CSV file, copy the image folder to the new server, and then import the edited CSV file on the new server.
This feature request would eliminate the need to edit the CSV file each time.
-
I don't have a solid answer for you on the CSV part. I've had an outstanding request open for a while that asks for something similar. Now that the developers have added the FOG API, there may be something that can be done externally.
But in your case, you have two full FOG installs. On your source FOG server, you can set it up as a master node and then add your remote FOG server as a storage node. Make sure you enter the management user ID of FOG; the password will be the one listed in the /opt/fog/.fogsettings file on the remote server. Once that is done, any images enabled for replication will be sent to all storage nodes in the same storage group as your master node. When setting up the remote storage node configuration on your master node, you can even tell the master node to place files on the remote storage node in a different directory: just set the FTP path to the desired root directory in the storage node configuration on the master node.
The gotcha is still the image definitions. You could handle those at the database level by exporting the images table, scrubbing the data, and then importing it into the remote storage node. Since it all happens at the database level, "in theory" it could be automated.
-
I think your request does have merit.
If you want to roll your own solution until the developers can consider your request, I think I have a path. It's not difficult to do (of course it helps to know what you are doing).
- You need to export the image definitions from your master FOG server. The following command will export your images table.
mysql -u root -e "SELECT imageName,imageDesc,imagePath,imageProtect,imageMagnetUri,imageDateTime,imageCreateBy,imageBuilding,imageSize,imageTypeID,imagePartitionTypeID,imageOSID,imageFormat,imageLastDeploy,imageCompress,imageEnabled,imageReplicate,imageServerSize FROM images WHERE imageReplicate='1';" fog | tr '\t' ',' > /tmp/imageimport.csv
Note there is another way to do the same thing; if you run into issues with this command, we can use option 2.
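For what it's worth, one common alternative (not necessarily the "option 2" meant here) is to have MySQL write the CSV itself with SELECT ... INTO OUTFILE. Note that this requires the FILE privilege, is subject to the server's secure-file-priv setting, and writes no header row, so the IGNORE 1 LINES clause used on import later would need to be dropped:
mysql -u root -e "SELECT imageName,imageDesc,imagePath,imageProtect,imageMagnetUri,imageDateTime,imageCreateBy,imageBuilding,imageSize,imageTypeID,imagePartitionTypeID,imageOSID,imageFormat,imageLastDeploy,imageCompress,imageEnabled,imageReplicate,imageServerSize FROM images WHERE imageReplicate='1' INTO OUTFILE '/tmp/imageimport.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n';" fog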
- Move the /tmp/imageimport.csv file to your remote FOG server using the scp command (hint: this can all be scripted into a cron job); a minimal example follows.
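Purely as an illustration, the copy step might look like this (the hostname is a placeholder, and an unattended cron job would need key-based SSH authentication set up):
scp /tmp/imageimport.csv root@remote-fog.example.com:/tmp/imageimport.csv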
- On the remote FOG server you need to create an imagesimport table (once) in your fog database. Key in the following command:
mysql -u root fog
Then paste the following into the mysql command console.
CREATE TABLE imagesimport (
  imageName varchar(40),
  imageDesc longtext,
  imagePath longtext,
  imageProtect mediumint(9),
  imageMagnetUri longtext,
  imageDateTime timestamp,
  imageCreateBy varchar(50),
  imageBuilding int(11),
  imageSize varchar(255),
  imageTypeID mediumint(9),
  imagePartitionTypeID mediumint(9),
  imageOSID mediumint(9),
  imageFormat char(1),
  imageLastDeploy datetime,
  imageCompress int(11),
  imageEnabled enum('0','1'),
  imageReplicate enum('0','1'),
  imageServerSize bigint(20)
);
exit;
- On the target FOG server, create the following file: importimages.sql. This will be the sequence of commands that imports the exported CSV from the master server into the imagesimport table.
DELETE FROM imagesimport; LOAD DATA INFILE "/tmp/imageimport.csv" INTO TABLE imagesimport COLUMNS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"' LINES TERMINATED BY '\n' IGNORE 1 LINES;
- This next section will be appended to the importimages.sql file. There is a lot going on here, so I thought I should explain it. First it collects the list of imageNames that are in the imagesimport table but not in the images table; this is done because we only want to import new records into the destination FOG server. Then only those records are inserted into the images table.
INSERT INTO images (imageName,imageDesc,imagePath,imageProtect,imageMagnetUri,imageDateTime, imageCreateBy,imageBuilding,imageSize,imageTypeID,imagePartitionTypeID,imageOSID,imageFormat, imageLastDeploy,imageCompress,imageEnabled,imageReplicate,imageServerSize) SELECT imageName,imageDesc,imagePath,imageProtect,imageMagnetUri,imageDateTime,imageCreateBy,imageBuilding, imageSize,imageTypeID,imagePartitionTypeID,imageOSID,imageFormat,imageLastDeploy,imageCompress, imageEnabled,imageReplicate,imageServerSize FROM imagesimport WHERE imageName NOT IN (SELECT imageName FROM images);
- This last part updates the image definitions in the images table with the matching records in the imagesimport table. This way, any updates made on the master FOG server will be applied on the remote FOG server.
UPDATE images i, imagesimport m SET i.imageDesc = m.imageDesc, i.imagePath = m.imagePath, i.imageProtect = m.imageProtect, i.imageMagnetUri = m.imageMagnetUri, i.imageDateTime = m.imageDateTime, i.imageCreateBy = m.imageCreateBy, i.imageBuilding = m.imageBuilding, i.imageSize = m.imageSize, i.imageTypeID = m.imageTypeID, i.imagePartitionTypeID = m.imagePartitionTypeID, i.imageOSID = m.imageOSID, i.imageFormat = m.imageFormat, i.imageLastDeploy = m.imageLastDeploy, i.imageCompress = m.imageCompress, i.imageEnabled = m.imageEnabled, i.imageReplicate = m.imageReplicate, i.imageServerSize = m.imageServerSize WHERE i.imageName = m.imageName;
- The last bit is to run this command to execute the script:
mysql -u root fog < importimages.sql
A cron job on the remote computer can call this import function.
The whole idea is this:
- On the master FOG server, a cron job runs at 6:00 pm to export the images table to a CSV file. Then the file is copied to the remote FOG server using scp or FTP.
- On the remote FOG server, a cron job runs at 6:15 pm to import the image CSV file into the imagesimport table and then update the images table on that server. (A sketch of both crontab entries follows.)
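To make that concrete, here is a minimal crontab sketch. The wrapper script name /usr/local/bin/fog-export-images.sh (holding the export command from above), the remote hostname, and the /root/importimages.sql path are all placeholders of mine:
# On the master FOG server: export at 6:00 pm, then push the CSV to the remote node
0 18 * * * /usr/local/bin/fog-export-images.sh && scp /tmp/imageimport.csv root@remote-fog:/tmp/
# On the remote FOG server: import the CSV at 6:15 pm
15 18 * * * mysql -u root fog < /root/importimages.sql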
I know this seems like a lot for a DIY solution. It may be beyond what you are really looking for.
-
@george1421 We have already tried this approach. You still have to deal with the import and export. Hence the request to have the image's SQL record (exported as a CSV file) contained in the image directory; then I would just have to do an image import.
In my case the master and remote nodes do not have all the same images on them.
For our use case, we want to have a set of master base images, created on the master server distributed to the remote nodes. From there the remote nodes could use those masters, modify them and save them as a customized image for their remote location.
-
@jjcjr How many remote FOG servers do you have? My process is still correct here (though the explanation needs some work).
-
@george1421 said in How to easily import/Export images between systems:
The gotcha is still the image definitions.
We could probably work something out for this with bash and cron on the remote nodes. The master server does have an API now, after all… We'd write a little bash script that requests the image definitions belonging to the storage node the script is running on, then just add/delete those definitions locally (rough sketch below). What do you think?
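As a rough sketch of that idea (assuming the token-header authentication described in the FOG API documentation; the server URL and token values are placeholders):
#!/bin/bash
# Hypothetical sketch: fetch the image definitions from the master via the FOG API.
MASTER="http://master-fog.example.com"
API_TOKEN="replace-with-fog-api-token"
USER_TOKEN="replace-with-user-api-token"
curl -s -H "fog-api-token: ${API_TOKEN}" \
     -H "fog-user-token: ${USER_TOKEN}" \
     "${MASTER}/fog/image" > /tmp/imagedefs.json
# From here the script would parse /tmp/imagedefs.json (e.g. with jq) and
# add/update/delete the matching definitions in the local database.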
-
@george1421 Right now I have two remote locations, with up to 6 in the future.
-
@jjcjr If we just think about what features FOG has today:
- You can enable / disable an image from replicating.
- You can assign an image to a single storage group.
- You can define multiple storage groups on your master node.
- You can define multiple storage nodes per storage group.
- The FOG master node can be a member of one or more storage groups.
- You can create multiple image definitions that point to a single image file.
So on your master node you can have a series of images associated with individual storage groups, with your FOG root server a master server in each storage group. The remote FOG servers would be considered slave or storage nodes in this setup. Replication would happen as it should: if you enable replication on image definition AAAA and it's in storage group 123, it will replicate from the root node to all storage nodes in storage group 123. This replication happens whether the remote FOG server is a full FOG server or a storage node. This setup can be done right now.
What needs to happen next is what I described in the database bits (which now need updating with these new details). The FOG root node would export the image definitions that are enabled for replication in storage group 123, and that exported file would be copied to all (real) FOG servers in that storage group. Then each remote server in the 123 storage group would import the new image definitions not currently in its database and update the fields of the images that were exported from the root node. This should work; a sketch of the filtered export follows.
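For illustration only, the per-group export might filter with a join like the one below. The association table and column names (imageGroupAssoc, igaImageID, igaStorageGroupID) are my best guess at the FOG schema and should be verified against your database, and 123 stands in for the real storage group ID:
mysql -u root -e "SELECT i.imageName,i.imageDesc,i.imagePath,i.imageProtect,i.imageMagnetUri,i.imageDateTime,i.imageCreateBy,i.imageBuilding,i.imageSize,i.imageTypeID,i.imagePartitionTypeID,i.imageOSID,i.imageFormat,i.imageLastDeploy,i.imageCompress,i.imageEnabled,i.imageReplicate,i.imageServerSize FROM images i JOIN imageGroupAssoc a ON a.igaImageID=i.imageID WHERE i.imageReplicate='1' AND a.igaStorageGroupID=123;" fog | tr '\t' ',' > /tmp/imageimport.csv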
If an image definition is deleted from a slave node, it will be reinstated by the replication process. If the remote slave node adds new images that are not on the root node, those will not be touched, since the root node knows nothing about them; the replicator won't touch them either.
The only thing that would make this a little easier to manage is if an image definition could be assigned to one or more storage nodes for replication. The way it is now, you will need one image definition per storage group you want that image to replicate to: multiple image definitions, but only one physical set of files.
-
@wayne-workman That could work. But I still think that if each image's folder contained a file holding that image's definition, it could easily be imported, scripted, or automated.
A file containing the equivalent of that image's single line of the images CSV would eliminate the need to export, edit, and import; a sketch of the idea follows.
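To sketch what that could look like (everything here is hypothetical: the loop, the imagedefinition.csv file name, and the assumption that each image lives under /images/<imageName>):
#!/bin/bash
# Hypothetical sketch: write each image's definition as a small CSV inside its
# own image folder, so copying the folder carries the definition with it.
# Note: assumes image names contain no spaces.
COLS="imageName,imageDesc,imagePath,imageProtect,imageMagnetUri,imageDateTime,imageCreateBy,imageBuilding,imageSize,imageTypeID,imagePartitionTypeID,imageOSID,imageFormat,imageLastDeploy,imageCompress,imageEnabled,imageReplicate,imageServerSize"
for name in $(mysql -u root -N -e "SELECT imageName FROM images;" fog); do
    [ -d "/images/${name}" ] || continue    # skip definitions with no folder on disk
    mysql -u root -e "SELECT ${COLS} FROM images WHERE imageName='${name}';" fog \
        | tr '\t' ',' > "/images/${name}/imagedefinition.csv"
done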
-
@jjcjr said in How to easily import/Export images between systems:
then it could easily be imported, scripted or automated.
This is what the API is for.
-
@george1421 This worked well.
I had to make one change to the importimages.sql file. I added LOCAL:
LOAD DATA LOCAL INFILE "/tmp/imageimport.csv"
-
@jjcjr OK, just for clarity: you are following the instructions from my first post? If so, yes, you need LOCAL in there, or in your my.cnf file you need to set the … <something> path to empty.