Selective MySQL export/import
Now that I have two FOG servers, I set up replication from the primary to the secondary. My secondary is not set up for DHCP, but I use it for imaging across networks with the USB boot method, loading iPXE from a local file instead of via PXE boot. This requires a slightly different config from the primary, which has DHCP enabled and serves PXE on an offline network. I believe that when I exported the SQL fog.db from the primary to the secondary, it took all the settings with it, and I had to go back and reconfigure the secondary for my custom setup again.
So my question is: How can I only export Hosts and Images?
(For the actual image files in /images, I have rsync set up.)
@george1421 That worked! I added the two additional tables to the export command and the “test” host was imported.
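For reference, a sketch of what the amended export command might look like with the two extra tables included (the hostname and user are placeholders carried over from earlier in the thread; adjust to your environment):

```shell
# Dump the three host-related tables plus images from the primary's fog DB.
# hostMAC and moduleStatusByHost are the two tables added to the original command.
mysqldump -h masterfog.domain.com -u fogsql -p --single-transaction \
    fog hosts hostMAC moduleStatusByHost images > /tmp/fogexport.sql
```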
@brakcounty I have a few clues for you now. I was able to set up SQL tracing on my FOG dev box: I turned on the tracing and then added a new host definition from the web UI. This allowed me to see exactly what FOG does when it adds a new host.
It looks like you need to pay attention to the hosts, hostMAC, and moduleStatusByHost tables. All three of these tables get inserts when a new host is added through the web UI.
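One way to do this kind of tracing is MySQL's general query log (an assumption here; the post does not name the exact tracing mechanism used):

```shell
# Enable MySQL's general query log to capture every statement FOG issues.
mysql -u root -p -e "SET GLOBAL general_log_file='/tmp/fog-trace.log'; SET GLOBAL general_log='ON';"

# Now add a host in the FOG web UI, then look for the INSERT statements:
grep -iE "INSERT INTO .?(hosts|hostMAC|moduleStatusByHost)" /tmp/fog-trace.log

# Turn the log back off when finished; it grows quickly.
mysql -u root -p -e "SET GLOBAL general_log='OFF';"
```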
@brakcounty I don’t have a good answer for you on either the API or the table copy. I was doing some benchmarking with the MySQL database, and I think there is a way to see which tables FOG uses when I manually insert a host into the database. I have a feeling it’s the hosts table and <something> that need to be exported and imported, because there is a hosts table with certain data and another table that holds things like AD connection settings. I’m saying all of this from a crappy memory; I’m not in front of a FOG server or my dev environment at the moment.
@george1421 I tried restarting apache2 and php, and that didn’t work. I then tried the API export/import method and got this error on the secondary FOG server’s web UI:
But I do not see the “test_host” host that I created on the primary, on the secondary.
As I said, I don’t think this is the best approach here (I don’t know what the right approach is other than using the API, because I’ve never needed to do this). But after you upload the new database files, restart apache and then php-fpm on the remote FOG server to flush out the caching of the database that both apache and php-fpm do. It’s possible you are not seeing it because apache doesn’t know the underlying records have changed in the database.
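On a systemd-based install, that restart sequence would look something like this (the php-fpm unit name varies by distro and PHP version, so check with `systemctl list-units | grep php` first):

```shell
# Restart the web stack so any cached view of the database is discarded.
sudo systemctl restart apache2
# Unit name is distro-specific: e.g. php7.4-fpm on Debian/Ubuntu, php-fpm on RHEL-family.
sudo systemctl restart php7.4-fpm
```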
As for --single-transaction: you can drop that if you are dumping one table at a time. That flag has to do with how the tables are dumped and with timing.
@george1421 Ok what about the --single-transaction? Should I leave that?
UPDATE: I left --single-transaction and removed --no-create-info, and no errors were thrown. But I do not see the “test_host” host that I created on the primary, on the secondary. I also created a “testsync” image (not a captured image, just created in the web ui), and that did get imported into the secondary.
Now that I’m thinking about it, I wonder if it matters if I just create a host in Web UI vs actually registering a host.
@brakcounty Yeah, that was a concern I had. You will probably need to go back and edit the sql dump command to remove this flag:
--no-create-info. With that flag, mysqldump only exports the data, not the table definitions. The issue you are running into is that it’s trying to import just the data, and the key field is getting in the way. I was trying to avoid having to drop and recreate the tables, but it looks like that is what you will have to do.
So the issue is on the export side, not with your import in the cron job, so to speak.
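In other words, re-dump without `--no-create-info`: by default mysqldump then emits `DROP TABLE IF EXISTS` plus `CREATE TABLE` ahead of the data for each table, so the import replaces the tables instead of colliding on the primary key. A sketch with placeholder hostname and credentials:

```shell
# Dump with table definitions included (no --no-create-info), then import.
# The embedded DROP TABLE / CREATE TABLE statements recreate each table.
mysqldump -h masterfog.domain.com -u fogsql -p --single-transaction \
    fog hosts images > /tmp/fogexport.sql
mysql -u fogsql -p fog < /tmp/fogexport.sql
```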
After setting up a script and cronjob to pull the tables from the primary fog db and import into the secondary, the import commands are throwing these errors:
ERROR 1062 (23000) at line 23: Duplicate entry '22' for key 'PRIMARY'
ERROR 1062 (23000) at line 23: Duplicate entry '395' for key 'PRIMARY'
This is my import command:
mysql -D fog images < /root/fog_images.sql
mysql -D fog hosts < /root/fog_hosts.sql
@brakcounty On your master FOG server, look in FOG Configuration -> FOG Settings I think there is a FOG Storage node user account there. The value should exist. I’m not 100% sure at the moment, but try to use those credentials from the remote FOG server to connect back to the master fog server.
If that works then you can automate the process by setting up a cron job at the remote fog server to automate the export and import on a timed basis.
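As a sketch, that cron job might look like this (schedule, user, and password are all hypothetical; for real use, keep the password in a `~/.my.cnf` or a `--defaults-extra-file` rather than in the crontab):

```shell
# /etc/cron.d/fog-table-sync (hypothetical): every hour, pull hosts and images
# from the master and import them into the local fog database in one pipeline.
0 * * * * root mysqldump -h masterfog.domain.com -u fogsql -pPASSWORD --single-transaction fog hosts images | mysql -u fogsql -pPASSWORD fog
```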
@george1421 Ah I see: run the dump from the secondary FOG server directly, instead of dumping on the primary to a share, mounting the share on the secondary, and then importing. I currently don’t have creds set on mysql on either server. I’ll look into setting creds and then try to run mysql -h<hostname> to test.
@brakcounty While I’m not sure this is the right approach (dumping and then importing the tables), there are a few things you can do to make it a bit more compact. Start with this command:
mysqldump -h<hostname> -u<username> -p <databasename> <table1> <table2> <table3> --single-transaction --no-create-info > dumpfile.sql
Now you could take this a bit further by creating a user account that is configured for remote access to mysql, much like how a storage node accesses a master node over the network. Then you could run the mysqldump command on the remote FOG server and reach back to the master FOG server to dump its tables, with a command like this (in pseudo form):
mysqldump -h masterfog.domain.com -u fogsql -ppassword fog hosts images --single-transaction --no-create-info > /tmp/fogexport.sql
Then turn around on the same remote fog server and run the import command:
mysql -u fogsql -ppassword fog < /tmp/fogexport.sql
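Creating that remote-access account on the master might look like this (user, password, and the secondary’s IP are placeholders; read-only access is enough for the dump, since the import runs against the local database):

```shell
# Run on the master FOG server. You may also need to change bind-address in
# my.cnf so MySQL listens on the network interface, not just localhost.
mysql -u root -p -e "
CREATE USER 'fogsql'@'192.168.1.20' IDENTIFIED BY 'PASSWORD';
GRANT SELECT ON fog.* TO 'fogsql'@'192.168.1.20';
FLUSH PRIVILEGES;"
```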
I believe I found it:
- Export image and host list on primary server:
sudo mysqldump fog images > fog_images.sql
sudo mysqldump fog hosts > fog_hosts.sql
- Import image and host list on secondary server:
sudo mysql -D fog < /mnt/fog_images.sql
sudo mysql -D fog < /mnt/fog_hosts.sql
Keep in mind that the paths shown above are unique to my setup and my method for transferring the sql files. Perhaps there’s a way to dump two tables in one command, but I just figured this out seconds ago lol.
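As it turns out, mysqldump accepts multiple table names after the database name, so the two dumps can be combined into one file (paths as in my setup above):

```shell
# Dump both tables into a single file...
sudo mysqldump fog images hosts > fog_tables.sql
# ...and import it on the secondary in one shot.
sudo mysql -D fog < /mnt/fog_tables.sql
```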
@george1421 A continuous sync. I want changes made to the Hosts, Images, and Groups tables (if possible) replicated to the secondary FOG server.
@brakcounty From the web UI you should be able to export and import from a CSV file. Is that what you are asking, or do you need a continuous sync?