Choose the right location on PXE Boot - Is that possible?
-
@Gamienator said in Choose the right location on PXE Boot - Is that possible?:
so it can’t be automated?
So what can’t be automated? Are you talking about existing hosts where you didn’t define a location?
When you use the full registration process, the location is a question you have to answer.
-
@george1421 Yeah, deploying directly without registration. But then we have to register every host and assign it to the location. Thanks for the clarification @george1421
-
@Gamienator Correct, it needs to be registered to know its location.
-
@george1421 Thanks George for your quick answer on that. If I may give a little background on what we are trying to achieve (Gamienator and I are colleagues), which is the source of the question. Our internal employee assets are not an issue; they will likely never need to be re-imaged at a location other than their assignment. However, we are a rental company, and the rental assets do rather often move from one location to another. It is therefore very likely that they would need to be re-imaged at a location other than their assignment. So I can foresee a problem arising as assets attempt to image over the WAN when they try to call home. Soooo…we are brainstorming for a scenario by which the HOST would simply image from its closest storage node (because all nodes will have the image), no matter what location it is in (hence our thought of not assigning HOSTS to LOCATIONS). We thought the idea of a USER tied to a SITE or LOCATION might “georestrict” it, but our initial test did not prove that out. Our current design idea is one server and a STORAGE NODE at each physical location.
Is it possible at all within the existing infrastructure of FOG? If so, are we just not approaching this the right way? And should this be a new topic?
Thank you very much,
James
-
@james-tigert Ok, that gives me a bit of a bigger picture of what you are doing. So you are essentially working as a system remanufacturer (yes, I know, rental, but the concept is the same).
So I would change up how you have things configured, since you are only concerned with the “load and go” method of deployment.
In this case you will place a full fog server at each location. Each fog server will operate independently of the others. You don’t need a central management system (which is how fog was originally designed). Each site will have its PXE boot server configured for the local fog server. If you have AD in your environment you can install the LDAP plugin so you don’t need to create local fog deployment accounts at each location, but that is up to you.
Now the last bit is image management. This part, I’m going to tell you, is not supported by the FOG Project. If you want to set up a central fog server and have that fog server replicate its images to the remote locations, we can do that. You just (manually) create a storage group on the HQ fog server, then add the remote (full) fog servers to that storage group as storage nodes. The replicator doesn’t really care if the endpoint is a full fog server or a storage node; it’s just going to send the image from the HQ fog server to anything in its storage group. So that will take care of the raw image files. As you update images on the HQ fog server it will automatically replicate them to the remote fog servers.
Now here is the manual part. FOG images are built out of two parts. The first part is the raw image files, which the replicator is taking care of for you. The second part is the metadata. In this case you are going to have to manually export the image definitions from the HQ fog server and import them into the remote fog servers using the FOG Web UI. It’s pretty simple but time consuming if you have a lot of FOG servers. You could automate this process using some back-end bash scripting, but a lot depends on the number of remote fog servers and how frequently you update your master HQ images.
-
@george1421 Thank you again for the assistance. We appreciate that insight. We have decided to go that route with a FOG server at each location. Fortunately we are still in the R&D phase, so we only have one node deployed thus far. Gamienator will rebuild it to be a server, so we can retest the process.
We haven’t seen any advanced replication tools, i.e. time and bandwidth limits (which are required), so Gamienator has a plan to manage that outside of FOG.
Just thought you would like to see a resolution to the conversation.
James
-
@james-tigert said in Choose the right location on PXE Boot - Is that possible?:
Gamienator will rebuild it to be a server, so we can retest the process.
No need to rebuild, just delete the control file /opt/fog/.fogsettings and then rerun the fog installer script. It will convert the system from a storage node to a full fog server. It’s a bit more difficult moving the other way.
We haven’t seen any advanced replication tools, i.e. time and bandwidth limits (which are required), so Gamienator has a plan to manage that outside of FOG.
As for the replicator, you can do advanced time and bandwidth control, but it’s outside FOG. You will probably have a bit more success with rsync and a few bash/cron jobs. Again, a lot depends on how often you update your base images. If it’s a once-a-month thing then you could launch the replication by hand when needed.
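To sketch the idea (untested; the host name, paths and bandwidth cap are placeholders you’d adjust to your setup), here it is as a small PHP script called from cron so it only runs during off hours. A plain bash script would do the same job:
<?php
// replicate-images.php -- rate-limited, off-hours replication sketch.
// Crontab entry so it only runs in the maintenance window, e.g. 10pm daily:
//   0 22 * * * /usr/bin/php /var/lib/transfer/replicate-images.php

$source      = '/images/';            // image store on the HQ fog server
$destination = 'fog-site1:/images/';  // remote fog server, SSH key auth assumed
$bwlimitKBps = 5000;                  // rsync caps the transfer at this rate (KB/s)

// -a preserves permissions/timestamps, --partial resumes interrupted
// transfers, --delete mirrors removals so the remote store matches HQ.
$cmd = sprintf(
    '/usr/bin/rsync -a --partial --delete --bwlimit=%d %s %s 2>&1',
    $bwlimitKBps,
    escapeshellarg($source),
    escapeshellarg($destination)
);
echo shell_exec($cmd);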
-
@james-tigert said in Choose the right location on PXE Boot - Is that possible?:
We haven’t seen any advanced replication tools, i.e. time and bandwidth limits (which are required),
There are advanced settings available, like bandwidth limiting, and you can also adjust the time delay between replication runs!
-
@george1421 @Sebastian-Roth Sorry to dig out that older thread. I wasn’t able to work on that change sooner. Wouldn’t it be possible to automate the metadata sync? I’m thinking of using the FOG API. It should be possible to grab all the needed information and then update it on every other FOG server in the different locations. We’ve got 7 locations and update every 3-4 months. So yeah, manually updating an image could be a solution, but automated would be much better imho.
Thanks!
Gamie
-
@Gamienator said in Choose the right location on PXE Boot - Is that possible?:
Wouldn’t it be possible to automate the metadata sync? I’m thinking of using the FOG API. It should be possible to grab all the needed information and then update it on every other FOG server in the different locations.
The short answer is yes. The longer answer is that it will take a programmer to write an external application to query the API on the master node and then update the remote nodes. This could probably be done in PowerShell, but to make the finished product useful to the FOG Project, writing the data migration module in PHP would be best. Then at least we could set up a cron job to run the PHP script and migrate the data that way. It would be a bit of giving back to the FOG Project if you were so inclined.
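To sketch what that module might look like (completely untested; the /fog/image endpoints and the two token headers are from the FOG API documentation as I remember it, so verify them against your FOG version before relying on this):
<?php
// migrate-images.php -- copy image definitions from the master fog server
// to the remote fog servers through the FOG API. Host names are placeholders.

$master  = 'http://fog-hq.example.com';
$remotes = ['http://fog-site1.example.com', 'http://fog-site2.example.com'];

function fogRequest(string $server, string $path, string $method = 'GET', ?array $body = null) {
    $ch = curl_init($server . $path);
    $opts = [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CUSTOMREQUEST  => $method,
        CURLOPT_HTTPHEADER     => [
            'fog-api-token: REPLACE_WITH_SYSTEM_TOKEN', // FOG Configuration -> FOG Settings -> API
            'fog-user-token: REPLACE_WITH_USER_TOKEN',  // from the user's own settings page
            'Content-Type: application/json',
        ],
    ];
    if ($body !== null) {
        $opts[CURLOPT_POSTFIELDS] = json_encode($body);
    }
    curl_setopt_array($ch, $opts);
    $response = curl_exec($ch);
    curl_close($ch);
    return json_decode($response, true);
}

// Pull every image definition from the master node...
$data   = fogRequest($master, '/fog/image');
$images = $data['images'] ?? [];

// ...and push each one to every remote node. The create endpoint name is an
// assumption -- check it against your FOG version's API reference.
foreach ($remotes as $remote) {
    foreach ($images as $image) {
        unset($image['id']); // let the remote server assign its own ID
        fogRequest($remote, '/fog/image/create', 'POST', $image);
    }
}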
-
@george1421 Ok, the good thing is we’ve got a programmer on our team who can do PHP. I’ll talk with him about that and maybe we can contribute it too.
-
@george1421 I promised to keep you in the loop about this topic. First the bad news: our full stack developer just doesn’t have the time to support me with this. So what did I do? Did it on my own 🤪 I guess if @Sebastian-Roth sees this code he’ll be puking or cringing at how bad it is. But I’m a first-time PHP coder, so please have mercy on me.
So at first I created a new table with the information needed to connect to the other hosts. After that, a simple PHP file with a form that queries all of my available images and destination nodes:
<html>
<form method="post" action="transmitter.php" id="transmitting">
<?php
//connect to the database
$link = mysqli_connect("127.0.0.1", "secret", "damnsecret", "fog");

//check if the connection was successful and print an error if not
if (!$link) {
    echo "Error: could not connect to MySQL." . PHP_EOL;
    echo "Debug error number: " . mysqli_connect_errno() . PHP_EOL;
    echo "Debug error message: " . mysqli_connect_error() . PHP_EOL;
    exit;
}

//gather all images and create a dropdown menu for the user
if ($result2 = $link->query("SELECT `imageID`,`imageName`,`imageDesc`,`imagePath`,`imageProtect`,`imageMagnetUri`,`imageDateTime`,`imageCreateBy`,`imageBuilding`,`imageSize`,`imageTypeID`,`imagePartitionTypeID`,`imageOSID`,`imageFormat`,`imageLastDeploy`,`imageCompress`,`imageEnabled`,`imageReplicate`,`imageServerSize` FROM `images`")) {
    echo "Transfer image ";
    echo "<select id=image name=image class='form-control' style='width:300px;'>";
    while ($row = $result2->fetch_assoc()) {
        echo "<option value=$row[imageID]>$row[imageName]</option>";
    }
    echo "</select>";
} else {
    echo $link->error;
}

//gather all installed nodes the user can select from
if ($result3 = $link->query("SELECT `ID`, `NodeName`, `IPAddress`, `Localmount`, `dbuser`, `dbpw`, `bwlimit` FROM `externalNode`")) {
    echo " to node ";
    echo "<select id=destination name=destination class='form-control' style='width:300px;'>";
    while ($row = $result3->fetch_assoc()) {
        echo "<option value=$row[ID]>$row[NodeName]</option>";
    }
    echo "</select>";
} else {
    echo $link->error;
}

mysqli_close($link);
?>
<input id="Senden" type="submit" name="senden" value="Send">
</form>
</html>
Then I made every storage node available and mounted them via NFS into separate folders under /mnt. After sending the form, the following PHP script gets called:
<?php
//connect to the database
$link = mysqli_connect("127.0.0.1", "secretsauce", "damnsecretsauce", "fog");

//check if the connection was successful and print an error if not
if (!$link) {
    echo "Error: could not connect to MySQL." . PHP_EOL;
    echo "Debug error number: " . mysqli_connect_errno() . PHP_EOL;
    echo "Debug error message: " . mysqli_connect_error() . PHP_EOL;
    exit;
}

//get all POST information into usable variables (cast to int to keep the queries safe)
$gotimage = intval($_POST["image"]);
$gotdestination = intval($_POST["destination"]);

//query the main table to retrieve all information about the image that has to be transferred
if ($result = $link->query("SELECT `imageID`,`imageName`,`imageDesc`,`imagePath`,`imageProtect`,`imageMagnetUri`,`imageDateTime`,`imageCreateBy`,`imageBuilding`,`imageSize`,`imageTypeID`,`imagePartitionTypeID`,`imageOSID`,`imageFormat`,`imageLastDeploy`,`imageCompress`,`imageEnabled`,`imageReplicate`,`imageServerSize` FROM `images` WHERE `imageID`=$gotimage")) {
    $resultexp = $result->fetch_array();
} else {
    echo $link->error;
}

//query the second table to retrieve the needed information for the selected destination host
if ($result2 = $link->query("SELECT `ID`, `NodeName`, `IPAddress`, `Localmount`, `dbuser`, `dbpw`, `bwlimit` FROM `externalNode` WHERE `ID`=$gotdestination")) {
    $resultexp2 = $result2->fetch_array();
} else {
    echo $link->error;
}

//make the needed information usable
$imageID = $resultexp['imageID'];
$imagepath = $resultexp['imagePath'];
echo 'Transferring image ' . $resultexp['imageName'] . ' to destination ' . $resultexp2['NodeName'];

//query the storage group table to find out on which storage the image is located
if ($result3 = $link->query("SELECT `igaStorageGroupID` FROM `imageGroupAssoc` WHERE `igaImageID`=$imageID AND `igaPrimary`='1'")) {
    $resultexp3 = $result3->fetch_array();
} else {
    echo $link->error;
}
$storageID = $resultexp3['igaStorageGroupID'];

//set the matching storage path
if ($storageID == 1) {
    $storagepath = "/mnt/localnode/";
} elseif ($storageID == 2) {
    $storagepath = "/mnt/nvme/";
} else {
    echo "Something went wrong, storage ID not found.";
    exit;
}

$destpath = $resultexp2['Localmount'];
echo "<p>";

//exec shell command to transfer the image via rsync to the destination and send a mail notification after completion
echo shell_exec("/usr/bin/screen -d -m /var/lib/transfer/transfer.sh " . $storagepath . " " . $imagepath . " " . $destpath . " 2>&1");

//make the destination variables usable
$destdbaddress = $resultexp2['IPAddress'];
$destdbuser = $resultexp2['dbuser'];
$destdbpw = $resultexp2['dbpw'];

//connect to the destination database
$link2 = mysqli_connect($destdbaddress, $destdbuser, $destdbpw, "fog_dev");

//check if the connection was successful and print an error if not
if (!$link2) {
    echo "Error: could not connect to MySQL." . PHP_EOL;
    echo "Debug error number: " . mysqli_connect_errno() . PHP_EOL;
    echo "Debug error message: " . mysqli_connect_error() . PHP_EOL;
    exit;
}
The reason I didn’t use the replication service in FOG is that I just want a simple browser input: when I click “send”, it just sends. With the built-in replication service I would have to log into the server and enable the service. Therefore I decided to use rsync and let PHP call a little bash script that copies the image and sends a mail when it’s done. The current status can always be looked up via SSH and the screen resume command:
#!/bin/sh
#start rsync with the given variables
/usr/bin/rsync -a -P "$1$2" "$3$2"

#send mail after the image got transferred
subject="Image transfer $2 finished"
/usr/sbin/sendmail mailadress@lol.de <<EOF
subject:$subject
Good luck!
EOF
echo "Sync finished"
What I’m now struggling with is how to put the data into the destination database. At the moment I’m not even sure how to get the data over. I already made the SQL statement and even put every SQL result into a single variable:
$vresultexpimageID = $resultexp['imageID'];
$vresultexpimageName = $resultexp['imageName'];
$vresultexpimageDesc = $resultexp['imageDesc'];
$vresultexpimagePath = $resultexp['imagePath'];
$vresultexpimageProtect = $resultexp['imageProtect'];
$vresultexpimageMagnetUri = $resultexp['imageMagnetUri'];
$vresultexpimageDateTime = $resultexp['imageDateTime'];
$vresultexpimageCreateBy = $resultexp['imageCreateBy'];
$vresultexpimageBuilding = $resultexp['imageBuilding'];
$vresultexpimageSize = $resultexp['imageSize'];
$vresultexpimageTypeID = $resultexp['imageTypeID'];
$vresultexpimagePartitionTypeID = $resultexp['imagePartitionTypeID'];
$vresultexpimageOSID = $resultexp['imageOSID'];
$vresultexpimageFormat = $resultexp['imageFormat'];
$vresultexpimageLastDeploy = $resultexp['imageLastDeploy'];
$vresultexpimageCompress = $resultexp['imageCompress'];
$vresultexpimageEnabled = $resultexp['imageEnabled'];
$vresultexpimageReplicate = $resultexp['imageReplicate'];
$vresultexpimageServerSize = $resultexp['imageServerSize'];

//echo shell_exec("/var/lib/transfer/dbupdate.sh $destdbaddress $destdbuser $destdbpw $imageID");
?>
But even then, the transmission to the destination database isn’t successful. My next approach was to do it in a second batch file that queries the main database, exports the result into a file and then imports it into the destination database, without success at the moment.
But after 12 hours of straight PHP coding I need a little break. I never coded PHP before, and I can imagine I’m forgetting a lot of things, for example escaping. After the progress I already made yesterday I thought I could finish today, but it looks like I won’t.
If I’m successful I’ll answer again. But I can imagine that the approach I took isn’t quite helpful for the FOG Project.
-
Well, after two days I was finally able to make the transfer successful with a PDO object. I learned quite a lot over the last two days, and I’m updating the code again. After that I’ll have a look at whether I can write it in a way that could be used for the FOG Project as well (like the right UPDATE query and so on).
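To illustrate the idea (not my exact code; trimmed to a few columns for readability, in the real script I copy the full column list from the SELECT above, and the host and credentials are placeholders), the core looks roughly like this:
<?php
// Copy one image definition into the destination database with PDO.
// $resultexp is the row fetched from the source `images` table as in the
// earlier script.
$destination = new PDO(
    'mysql:host=fog-site1;dbname=fog;charset=utf8',
    'dbuser',
    'dbpassword',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// REPLACE INTO overwrites an existing row with the same imageID, so the
// script can be re-run after every image update. The prepared statement
// takes care of all the quoting/escaping I was worried about.
$stmt = $destination->prepare(
    'REPLACE INTO `images`
        (imageID, imageName, imageDesc, imagePath, imageOSID, imageTypeID)
     VALUES
        (:id, :name, :descr, :path, :os, :type)'
);
$stmt->execute([
    ':id'    => $resultexp['imageID'],
    ':name'  => $resultexp['imageName'],
    ':descr' => $resultexp['imageDesc'],
    ':path'  => $resultexp['imagePath'],
    ':os'    => $resultexp['imageOSID'],
    ':type'  => $resultexp['imageTypeID'],
]);
The matching imageGroupAssoc row has to be copied the same way, otherwise the image isn’t tied to any storage group on the destination.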
-
@Gamienator I will have a look at this tomorrow.
-
@Gamienator Sorry, just didn’t get to take a closer look at this up until today. I think you’ve done a pretty good job considering this is your first time using PHP!
Thinking more about your scenario, I was wondering if there is a more appropriate way to do this. I don’t understand why you’d want to have users (I suppose you mean admins) start the transfers manually. What I mean is: someone needs to have image X on the FOG server in location Y, so he opens your special website, selects the image, clicks send and needs to wait for it to finish before a host can be deployed at location Y. But why is this interaction needed at all, I wonder? If I were you I’d try to automate it all as much as possible.
One way would be to just replicate all images to the servers in all locations. Though some images might not be needed, and it would therefore waste a lot of bandwidth to transfer the huge images to locations where they might not be used at all. So I thought mis-using FOG’s concept of storage groups could be useful for you. Define a storage group for every location you have. Then edit the image settings -> tab “Storage Group” and assign the location/storage group where you want the image to be used. I’d suggest you leave the predefined “default” storage group as is for every image and only add the new “location storage groups” as needed.
Now combine the stuff you’ve come up with already with the settings I mention above: create a cronjob on your main server that will query the “Storage Group” information of every image from the database and do the replication to the other servers automatically based on this information. So when people create a new image definition and add the correct “Storage Group” to it, the image will be automatically replicated as soon as it is captured.
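Roughly sketched (untested; the group-to-server mapping, credentials and paths are placeholders, and please double-check the storage group table and column names against your schema):
<?php
// replicate-by-group.php -- replicate every image only to the locations
// (storage groups) it has been assigned to. Run from cron, e.g. nightly:
//   0 1 * * * /usr/bin/php /var/lib/transfer/replicate-by-group.php

$link = mysqli_connect('127.0.0.1', 'fogstorage', 'password', 'fog');

// Which remote server serves which storage group - maintained by hand here.
$groupToServer = [
    'Location-A' => 'fog-site-a:/images/',
    'Location-B' => 'fog-site-b:/images/',
];

// One row per image/storage group assignment (what you set on the image's
// "Storage Group" tab). nfsGroups holds the storage group names.
$sql = "SELECT i.imagePath, g.ngName
          FROM images i
          JOIN imageGroupAssoc a ON a.igaImageID = i.imageID
          JOIN nfsGroups g ON g.ngID = a.igaStorageGroupID";

$result = $link->query($sql);
while ($row = $result->fetch_assoc()) {
    if (!isset($groupToServer[$row['ngName']])) {
        continue; // no remote server mapped, e.g. the "default" group
    }
    shell_exec(sprintf(
        '/usr/bin/rsync -a --partial %s %s 2>&1',
        escapeshellarg('/images/' . $row['imagePath']),
        escapeshellarg($groupToServer[$row['ngName']])
    ));
}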
This is all about the raw image data replication. Now for the database there are two scenarios I can imagine. One would be to use MySQL’s capability of replicating (syncing) databases automatically. One of our users just wrote a tutorial on this topic. I’d suggest to not replicate all tables but only sync the images table by using replication filter rules. The other option is to add a simple mysqldump-through-SSH-tunnel command that grabs all the information from the images table and pushes it to the other server’s database. It would be wise to also base this on the “Storage Group” information described above so you’d only have the image definitions needed in all the locations.
That’s just my point of view. See what you think and let me know if you need help with this.
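PS: The replication-filter variant would be something like a replicate-do-table=fog.images line in the replica’s MySQL configuration. The mysqldump variant could be as simple as this (untested sketch; host names and credentials are placeholders, SSH key authentication is assumed):
<?php
// Dump only the images table on the main server and replay it on a remote
// server's database through SSH. mysqldump emits DROP TABLE / CREATE TABLE /
// INSERT statements, so the remote table is replaced wholesale.
$dump = '/usr/bin/mysqldump -u fogstorage -pPASSWORD fog images';
$load = '/usr/bin/ssh fog-site1 "mysql -u fogstorage -pPASSWORD fog"';
shell_exec($dump . ' | ' . $load . ' 2>&1');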