Choose the right location on PXE Boot - Is that possible?
-
Hello everybody,
Our project to replace our current Symantec Ghost servers with FOG is making progress. Today we installed our first storage node at a different location. When we tried to deploy an image, we noticed that the client was pulling the image over the WAN from the main storage node. After we set the specific location on that host, it used the local node.
So to confirm: every node is in the same storage group, the Location plugin is enabled, and the nodes are assigned to their specific locations. Is there anything else I have to take care of? Thanks for your help
Cheers,
Gamie
-
With the location plugin, the clients need to be assigned to a location so they know which server is their local storage server.
So you assign a storage node to a location and a target computer to a location. The remote target computers can pxe boot from the local storage node. It will chat with the master node a few times, then it "should" load the image files from the local storage node.
-
@george1421 Okay, so it can't be automated? Because when I was deploying without the host being assigned to a specific location, it took the main storage node.
Could a solution be to use the Site plugin, so that every location has a specific user?
-
@Gamienator said in Choose the right location on PXE Boot - Is that possible?:
so it can't be automated?
So what can't be automated? Are you talking about existing hosts where you didn't define a location?
When you use the full registration process, the location is a question you have to answer.
-
@george1421 Yeah, deploying directly without registration. But then we'd have to register every host and assign it to a location. Thanks for the clarification @george1421
-
@Gamienator Correct, it needs to be registered to know its location.
-
@george1421 Thanks George for your quick answer on that. If I may give a little background on what we are trying to achieve (Gamienator and I are colleagues), that is the source of the question. Our internal employee assets are not an issue. They will likely never need to be re-imaged at a location other than their assignment. However, we are a rental company, and the rental assets do rather often move around from one location to another. It is, therefore, very likely that they would need to be re-imaged at a location other than their assignment. So I can foresee a problem arising as assets attempt to image over the WAN when they try to call home. Soooo… we are brainstorming for a scenario in which the HOST would simply image from its closest storage node (because all nodes will have the image), no matter what location it is in (hence our thought of not assigning HOSTS to LOCATIONS). We thought the idea of a USER tied to a SITE or LOCATION might "georestrict" it, but our initial test did not prove that. Our current design idea is one server and a STORAGE NODE at each physical location.
Is it possible at all within the existing infrastructure of FOG? If so, are we just not approaching this the right way? And should this be a new topic?
Thank you very much,
James
-
@james-tigert Ok, that gives me a bigger picture of what you are doing. So you are essentially working as a system remanufacturer (yes, I know, rental, but the concept is the same).
So I would change up how you have things configured, since you are only concerned with the "load and go" method of deployment.
In this case you will place a full FOG server at each location. Each FOG server will operate independently of the others. You don't need a central management system (which is what FOG was originally designed around). Each site will have its PXE boot server configured for the local FOG server. If you have AD in your environment you can install the LDAP plugin so you don't need to create local FOG deployment accounts at each location, but that is up to you.
Now the last bit is image management. This part, I'm going to tell you, is not supported by the FOG Project. If you want to set up a central FOG server and have that FOG server replicate its images to the remote locations, we can do that. You just (manually) create a storage group on the HQ FOG server, then add the remote (full) FOG servers to that storage group as storage nodes. The replicator doesn't really care if the endpoint is a full FOG server or a storage node; it's just going to send the image from the HQ FOG server to anything in its storage group. That will take care of the raw image files. As you update images on the HQ FOG server it will automatically replicate them to the remote FOG servers.
Now here is the manual part. FOG server images are built out of two parts. The first part is the raw image files, which the replicator is taking care of for you. The second part is the metadata. In this case you are going to have to manually export the image definitions from the HQ FOG server and import them into the remote FOG servers using the FOG Web UI. It's pretty simple but time consuming if you have a lot of FOG servers. You could automate this process with some back-end bash scripting, but a lot depends on the number of remote FOG servers and the frequency with which you update your master HQ images.
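If you do script it, a minimal sketch of the idea might look like the following, assuming the metadata lives in the images table of a database named fog and that passwordless SSH and MySQL access are already set up (hostnames and credentials are placeholders to adapt):

#!/bin/bash
# Rough sketch only: replay the HQ image definitions onto each remote
# full FOG server. Database/table names are assumptions to verify.
REMOTES="fog-site1 fog-site2 fog-site3"

for remote in $REMOTES; do
    # dump only the images table and import it into the remote fog database
    mysqldump fog images | ssh "$remote" 'mysql fog'
done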
-
@george1421 Thank you again for the assistance. We appreciate that insight. We have decided to go that route with a FOG server at each location. Fortunately we are still in the R&D phase, so we only have one node deployed thus far. Gamienator will rebuild it to be a server, so we can retest the process.
We haven't seen any advanced replication tools, i.e. time and bandwidth limits (which are required), so Gamienator has a plan to manage that outside of FOG.
Just thought you would like a resolution to the conversation.
James
-
@james-tigert said in Choose the right location on PXE Boot - Is that possible?:
Gamienator will rebuild it to be a server, so we can retest the process.
No need to rebuild, just delete the control file /opt/fog/.fogsettings and then rerun the FOG installer script. It will convert the system over from a storage node to a full FOG system. It's a bit more difficult moving the other way.
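For reference, the steps look roughly like this, assuming a default git-based install (adjust the path to wherever your installer checkout lives):

# back up the settings file rather than deleting it, just in case
sudo mv /opt/fog/.fogsettings /opt/fog/.fogsettings.bak

# rerun the installer and choose the full/normal server option
cd ~/fogproject/bin
sudo ./installfog.sh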
We haven't seen any advanced replication tools, i.e. time and bandwidth limits (which are required), so Gamienator has a plan to manage that outside of FOG.
As for the replicator, you can do advanced time and bandwidth control, but it's outside FOG. You will probably have a bit more success with rsync and a few bash/cron jobs. Again, a lot depends on how often you update your base images. If it's a once-a-month thing then you could launch the replication by hand when needed.
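A minimal sketch of that rsync/cron approach, assuming images live in the default /images path and the remote hostname is a placeholder:

#!/bin/bash
# fog-replicate.sh (hypothetical): push the raw image files with a
# bandwidth cap; rsync's --bwlimit is in KB/s, so 10000 is about 10 MB/s
rsync -a --partial --bwlimit=10000 /images/ fog-site1:/images/

# example cron entry for time control, running it nightly at 01:00:
# 0 1 * * * root /usr/local/sbin/fog-replicate.sh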
-
@james-tigert said in Choose the right location on PXE Boot - Is that possible?:
We haven't seen any advanced replication tools, i.e. time and bandwidth limits (which are required),
There are advanced settings available, like bandwidth limiting, and you can also adjust the time delay between replication runs!
-
@george1421 @Sebastian-Roth Sorry to dig out this older thread. I wasn't able to work on that change sooner. Wouldn't it be possible to automate the metadata sync? I'm thinking of using the FOG API. It should be possible to grab all the needed information and then update it on every other FOG server in the different locations. We've got 7 locations and update every 3-4 months. So yeah, manually updating an image could be a solution, but automated would be much better imho
Thanks!
Gamie
-
@Gamienator said in Choose the right location on PXE Boot - Is that possible?:
Wouldn't it be possible to automate the metadata sync? I'm thinking of using the FOG API. It should be possible to grab all the needed information and then update it on every other FOG server in the different locations.
The short answer is yes. The longer answer is that it will take a programmer to write an external application to query the API on the master node and then update the remote node. This probably could be done in PowerShell, but to make the finished product useful to the FOG Project, writing the data migration module in PHP would be best. Then at least we could set up a cron job to run the PHP script and migrate the data that way. It would be a bit of giving back to the FOG Project if you were so inclined.
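Just to sketch the shape of such a tool, the API round trip could look something like this with plain curl (the token headers and the /fog/image endpoints should be double-checked against your FOG version, and a proper contribution would wrap the same calls in PHP as described above):

#!/bin/bash
# Hypothetical sketch: copy one image definition (ID 1) from the master
# to a remote server via the FOG API. The tokens come from each server's
# FOG Configuration -> FOG Settings -> API section.
MASTER="http://fog-hq"
REMOTE="http://fog-site1"

# read the image definition as JSON from the master...
json=$(curl -s -H "fog-api-token: <master-system-token>" \
            -H "fog-user-token: <master-user-token>" \
            "$MASTER/fog/image/1")

# ...and create it on the remote server
curl -s -H "fog-api-token: <remote-system-token>" \
     -H "fog-user-token: <remote-user-token>" \
     -H "Content-Type: application/json" \
     -X POST -d "$json" "$REMOTE/fog/image"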
-
@george1421 Ok, the good thing is we've got a programmer on our team who knows PHP. I'll talk with him about it and maybe we can contribute it too.
-
@george1421 I promised to keep you in the loop on this topic. First the bad news: our full stack developer just doesn't have the time to support me with this. So what did I do? I did it on my own 🤪 I guess if @Sebastian-Roth sees this code he'll be puking or cringing at how bad it is. But I'm a first-time PHP coder, so please have mercy on me.
So at first I created a new table with the information needed to connect to the other hosts. After that, a simple PHP file with a form that queries all of my available images and locations:
<html>
<form method="post" action="transmitter.php" id="transmitting">
<?php
// connect to the database
$link = mysqli_connect("127.0.0.1", "secret", "damnsecret", "fog");

// check if the connection was successful and print an error if not
if (!$link) {
    echo "Error: could not connect to MySQL." . PHP_EOL;
    echo "Debug error number: " . mysqli_connect_errno() . PHP_EOL;
    echo "Debug error message: " . mysqli_connect_error() . PHP_EOL;
    exit;
}

// gather all images and create a dropdown menu for the user
if ($result2 = $link->query("SELECT `imageID`,`imageName`,`imageDesc`,`imagePath`,`imageProtect`,`imageMagnetUri`,`imageDateTime`,`imageCreateBy`,`imageBuilding`,`imageSize`,`imageTypeID`,`imagePartitionTypeID`,`imageOSID`,`imageFormat`,`imageLastDeploy`,`imageCompress`,`imageEnabled`,`imageReplicate`,`imageServerSize` FROM `images`")) {
    echo "Transfer image ";
    echo "<select id='image' name='image' class='form-control' style='width:300px;'>";
    while ($row = $result2->fetch_assoc()) {
        echo "<option value='$row[imageID]'>$row[imageName]</option>";
    }
    echo "</select>";
} else {
    echo $link->error;
}

// gather all installed nodes so the user can select a destination
if ($result3 = $link->query("SELECT `ID`, `NodeName`, `IPAddress`, `Localmount`, `dbuser`, `dbpw`, `bwlimit` FROM `externalNode`")) {
    echo " to node ";
    echo "<select id='destination' name='destination' class='form-control' style='width:300px;'>";
    while ($row = $result3->fetch_assoc()) {
        echo "<option value='$row[ID]'>$row[NodeName]</option>";
    }
    echo "</select>";
} else {
    echo $link->error;
}

mysqli_close($link);
?>
<input id="Senden" type="submit" name="senden" value="Send">
</form>
</html>
Then I made every storage node available and mounted it via NFS into separate folders under /mnt. After sending the form, the following PHP script gets called:
<?php
// connect to the database
$link = mysqli_connect("127.0.0.1", "secretsauce", "damnsecretsauce", "fog");

// check if the connection was successful and print an error if not
if (!$link) {
    echo "Error: could not connect to MySQL." . PHP_EOL;
    echo "Debug error number: " . mysqli_connect_errno() . PHP_EOL;
    echo "Debug error message: " . mysqli_connect_error() . PHP_EOL;
    exit;
}

// get all POST information into usable variables (cast to int so the
// values are safe to embed in the queries below)
$gotimage = (int)$_POST["image"];
$gotdestination = (int)$_POST["destination"];

// query the main table to retrieve all information about the image to transfer
if ($result = $link->query("SELECT `imageID`,`imageName`,`imageDesc`,`imagePath`,`imageProtect`,`imageMagnetUri`,`imageDateTime`,`imageCreateBy`,`imageBuilding`,`imageSize`,`imageTypeID`,`imagePartitionTypeID`,`imageOSID`,`imageFormat`,`imageLastDeploy`,`imageCompress`,`imageEnabled`,`imageReplicate`,`imageServerSize` FROM `images` WHERE `imageID`=$gotimage")) {
    $resultexp = $result->fetch_array();
} else {
    echo $link->error;
}

// query the second table to retrieve the needed information for the
// destination host (note the added WHERE clause, so the selected node
// is used instead of always the first row)
if ($result2 = $link->query("SELECT `ID`, `NodeName`, `IPAddress`, `Localmount`, `dbuser`, `dbpw`, `bwlimit` FROM `externalNode` WHERE `ID`=$gotdestination")) {
    $resultexp2 = $result2->fetch_array();
} else {
    echo $link->error;
}

// make the needed information usable
$imageID = $resultexp['imageID'];
$imagepath = $resultexp['imagePath'];
echo 'Transferring image ' . $resultexp['imageName'] . ' to node ' . $resultexp2['NodeName'];

// query the storage group table to find out on which storage the image is located
if ($result3 = $link->query("SELECT `igaStorageGroupID` FROM `imageGroupAssoc` WHERE `igaImageID`=$imageID AND `igaPrimary`='1'")) {
    $resultexp3 = $result3->fetch_array();
} else {
    echo $link->error;
}
$storageID = $resultexp3['igaStorageGroupID'];

// map the storage group to the matching local mount path
if ($storageID == 1) {
    $storagepath = "/mnt/localnode/";
} elseif ($storageID == 2) {
    $storagepath = "/mnt/nvme/";
} else {
    echo "Error: storage ID not found.";
    exit;
}

$destpath = $resultexp2['Localmount'];
echo "<p>";

// run the shell command that transfers the image via rsync to the
// destination and sends a mail notification after completion
echo shell_exec("/usr/bin/screen -d -m /var/lib/transfer/transfer.sh " . $storagepath . " " . $imagepath . " " . $destpath . " 2>&1");

// make the destination variables usable
$destdbaddress = $resultexp2['IPAddress'];
$destdbuser = $resultexp2['dbuser'];
$destdbpw = $resultexp2['dbpw'];

// connect to the destination database
$link2 = mysqli_connect($destdbaddress, $destdbuser, $destdbpw, "fog_dev");

// check if the connection was successful and print an error if not
if (!$link2) {
    echo "Error: could not connect to MySQL." . PHP_EOL;
    echo "Debug error number: " . mysqli_connect_errno() . PHP_EOL;
    echo "Debug error message: " . mysqli_connect_error() . PHP_EOL;
    exit;
}
The reason I didn't use the replication service in FOG is that I just want a simple browser input, and when I click "send" it just sends. With the built-in replication service I would have to log into the server and enable the service. Therefore I decided to use rsync and let PHP call a little bash script that copies it and sends a mail when it's done. The current status can be looked up at any time via SSH and the screen resume command:
#!/bin/sh
# start rsync with the given variables:
# $1 = source storage path, $2 = image path, $3 = destination path
/usr/bin/rsync -a -P "$1$2" "$3$2"

# send a mail after the image got transferred
subject="Image transfer $2 finished"
/usr/sbin/sendmail mailadress@lol.de <<EOF
Subject: $subject

Good luck!
EOF
echo "Sync finished"
What I'm now struggling with is how to put the data into the destination database. At the moment I'm not even sure how to get the data there. I already made the SQL statement and even put every SQL result into a single variable:
$vresultexpimageID = $resultexp['imageID'];
$vresultexpimageName = $resultexp['imageName'];
$vresultexpimageDesc = $resultexp['imageDesc'];
$vresultexpimagePath = $resultexp['imagePath'];
$vresultexpimageProtect = $resultexp['imageProtect'];
$vresultexpimageMagnetUri = $resultexp['imageMagnetUri'];
$vresultexpimageDateTime = $resultexp['imageDateTime'];
$vresultexpimageCreateBy = $resultexp['imageCreateBy'];
$vresultexpimageBuilding = $resultexp['imageBuilding'];
$vresultexpimageSize = $resultexp['imageSize'];
$vresultexpimageTypeID = $resultexp['imageTypeID'];
$vresultexpimagePartitionTypeID = $resultexp['imagePartitionTypeID'];
$vresultexpimageOSID = $resultexp['imageOSID'];
$vresultexpimageFormat = $resultexp['imageFormat'];
$vresultexpimageLastDeploy = $resultexp['imageLastDeploy'];
$vresultexpimageCompress = $resultexp['imageCompress'];
$vresultexpimageEnabled = $resultexp['imageEnabled'];
$vresultexpimageReplicate = $resultexp['imageReplicate'];
$vresultexpimageServerSize = $resultexp['imageServerSize'];

//echo shell_exec("/var/lib/transfer/dbupdate.sh $destdbaddress $destdbuser $destdbpw $imageID");
?>
But even then, the transmission to the destination database isn't successful. My next approach was to do it in a second batch file that queries the main database, exports it into a file, and then imports it into the destination database, without success so far.
But after 12 hours of straight PHP coding I need a little break. I've never coded PHP before, and I can imagine I'm forgetting a lot of things, for example escaping. After the progress I had already made yesterday I thought I could finish today, but it looks like no.
If I'm successful I'll answer again. But I can imagine that my approach isn't all that helpful for the FOG Project
-
Well, after two days I was finally able to make the transfer successful with a PDO object. I learned quite a lot in the last two days, and I'm updating the code again. After that I'll have a look at whether I can write it in a way that could be used for the FOG Project as well (like the right UPDATE query and so on).
-
@Gamienator I will have a look at this tomorrow.
-
@Gamienator Sorry, I just didn't get to take a closer look at this until today. I think you've done a pretty good job considering this is your first time using PHP!
Thinking more about your scenario, I was wondering if there is a more appropriate way to do this. I don't understand why you'd want to have users (I suppose you mean admins) start the transfers manually. What I mean is: someone needs to have image X on the FOG server in location Y, so he opens your special website, selects the image, clicks send, and needs to wait for it to finish before a host can be deployed at location Y. But why is this interaction needed at all, I wonder? If I were you I'd try to automate it all as much as possible.
One way would be to just replicate all images to the servers in all locations. Though some images might not be needed there, and it would therefore waste a lot of bandwidth to transfer the huge images to locations where they might not be used at all. So I thought misusing FOG's concept of storage groups could be useful for you. Define a storage group for every location you have. Then edit the image settings -> tab "Storage Group" and assign the location/storage group where you want the image to be used. I'd suggest you leave the predefined "default" storage group as is for every image and only add the new "location storage groups" as needed.
Now combine the stuff you've come up with already with the settings I mention above: create a cronjob on your main server that queries the "Storage Group" information of every image from the database and does the replication to the other servers automatically based on this information. So when people create a new image definition and add the correct "Storage Group" to it, the image will be replicated automatically as soon as it is captured.
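A rough sketch of what such a cronjob could run, reusing the table names from Gamienator's script above (the group ID, hostname and /images paths are placeholders/assumptions):

#!/bin/bash
# Hypothetical: replicate only the images assigned to this site's storage group
SITE_GROUP=2
SITE_HOST="fog-site1"

# ask the fog database which image paths belong to the site's storage group
mysql -N fog -e "SELECT imagePath FROM images JOIN imageGroupAssoc \
    ON igaImageID = imageID WHERE igaStorageGroupID = $SITE_GROUP" |
while read -r path; do
    rsync -a --partial "/images/$path" "$SITE_HOST:/images/"
done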
This is all about the raw image data replication. Now for the database there are two scenarios I can imagine. One would be to use MySQL's capability of replicating (syncing) databases automatically. One of our users just wrote a tutorial on this topic. I'd suggest not replicating all tables but only syncing the images table by using replication filter rules. The other option is to add a simple mysqldump-through-SSH-tunnel command that grabs all the information from the images table and pushes it to the other server's database. It would be wise to also base this on the "Storage Group" information described above, so you'd only have the image definitions needed in each location (see the sketch below).
That's just my point of view. See what you think and let me know if you need help with this.
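For the mysqldump option, a minimal sketch reusing the table names from the scripts above (the WHERE clause, credentials and hostname are assumptions to verify):

#!/bin/bash
# Hypothetical: dump only the image definitions belonging to storage group 2
# and replay them into the remote server's fog database over SSH
mysqldump fog images --no-create-info --replace \
    --where="imageID IN (SELECT igaImageID FROM imageGroupAssoc
                         WHERE igaStorageGroupID = 2)" |
ssh fog-site1 'mysql fog'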