
    Posts made by george1421

    • RE: FOG Storage node add time of day bandwidth restrictions

      @Wayne-Workman said:

      @george1421 You can. You can get very specific with cron-tab events…

      Ugh, sorry, I am guilty of reading too fast. I read "cron-tab" as "cross-tab," so I was stuck trying to understand what you meant by a cross-tab query.

      Yes, you are correct: it can be done with cron. But these jobs would need to be managed from within the FOG console; you wouldn't want most users poking around setting up cron jobs. And doing it with cron wouldn't abort a current transfer or notify any of the FOG services that something happened, because you are poking right into the database.
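
      To make that concrete, here is a minimal sketch of the cron approach. The table and column names (nfsGroupMembers, ngmBandwidthLimit, ngmMemberName) are my assumptions about where the replication cap might live, not a confirmed FOG schema, so treat this as illustration only:

      # /etc/crontab -- throttle the GA node during business hours (Mon-Fri)
      # Assumed table/column names; verify against your own FOG database first.
      0 8  * * 1-5  root  mysql -u fog -pPASS fog -e "UPDATE nfsGroupMembers SET ngmBandwidthLimit=1024 WHERE ngmMemberName='GA-Node';"
      0 17 * * 1-5  root  mysql -u fog -pPASS fog -e "UPDATE nfsGroupMembers SET ngmBandwidthLimit=10240 WHERE ngmMemberName='GA-Node';"

      Even if the names were right, though, flipping the value in the database does nothing to a transfer that is already in flight.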

      posted in Feature Request
    • RE: FOG Storage node add time of day bandwidth restrictions

      @Wayne-Workman said:

      I think one could write a cron-tab event to run two scripts…

      That would work, as long as you could fire that script at a specific time of day and then revert the setting to the default transfer rate once the premium time range has passed.

      posted in Feature Request
    • RE: FOG Storage node add time of day bandwidth restrictions

      I don’t know if the transfer percentage is available in the code. If it were, then sure, we would want it to continue. But then where do you draw the line? What happens if we are at 90%, or 80%? When would you decide to abort vs. continue?

      My recommendation would be to use rsync, because even if we were at 94%, aborting rsync and starting it up again will skip ahead to where it left off and continue at the changed transfer speed.
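
      For example (a sketch, with a placeholder host name):

      # Business hours: cap at ~1 MB/s; --partial keeps partly-copied files
      rsync -a --partial --bwlimit=1024 /images/ storagenode:/images/
      # At the time boundary, kill the transfer and relaunch with a higher cap;
      # rsync resumes roughly where it left off instead of starting over.
      rsync -a --partial --bwlimit=10240 /images/ storagenode:/images/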

      posted in Feature Request
    • Create the concept of a ForeignMasterStorage (deployment) node

      I’ve looked into the possibility of creating a slave deployment node by setting up a master node in the traditional manner, then creating the proposed slave node as you would in the traditional way, but at the end of the process pointing the slave node at the master node’s database. This will work for most of the tables, except for the FOG-server-specific tables like globalSettings. Those settings are unique to the individual FOG server. I can see that if your FOG slave server is located in a different subnet, or if there are conflicting settings between the master node and the slave node, there will be a settings clash. If the globalSettings table had an additional field representing a unique FOG installation ID, the (global) settings could be scoped to each individual FOG server. I didn’t check many other tables for FOG settings clashes, but it looks like the current FOG system could be extended to a master-slave configuration.
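
      To sketch the schema change I mean (the column name settingInstallID is just a placeholder for the idea, not anything that exists today):

      mysql -u root -p fog -e "ALTER TABLE globalSettings ADD COLUMN settingInstallID varchar(36) NOT NULL DEFAULT '';"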

      The other way I thought about is to keep the FOG databases isolated and then just send JSON or other types of IPC messages between the master and slave FOG servers (they could be done as HTTP POST calls between the systems, for that matter). This would allow the FOG installations to run stand-alone if needed but also communicate with a master node. Personally I like this approach a bit better from a scalability and robustness standpoint.
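
      As a rough sketch of what such a message could look like (the endpoint and payload here are invented for illustration; no such API exists in FOG today):

      curl -s -X POST http://ga-fog.example.com/fog/api/notify \
           -H 'Content-Type: application/json' \
           -d '{"event":"image_updated","imageName":"Win7x64Base"}'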

      posted in Feature Request
    • FOG Storage node add time of day bandwidth restrictions

      Add the ability to set bandwidth restrictions for replication based on time of day. I can see a need to define a single time range (such as 8a to 5p) where we might want to restrict replication traffic to 1 MB/s, but outside that range replicate data at 10 MB/s. This would be needed in a WAN/VPN setup where the storage node might be located behind a congested link. To extend this concept a bit more, setting 0 MB/s would disable replication within the time range. You would want to do this if your storage node was behind a slow MPLS link.

      I can see issues, since we are moving large files: if a time boundary passes but the file hasn’t finished transferring, the file would continue to send at the old transfer rate until it finishes copying. If rsync were used to move files instead of the current ftp process, then when a time boundary was passed the current transfer could be aborted and restarted with the new bandwidth restrictions; rsync would continue moving the file from where it left off before the transfer was aborted.

      posted in Feature Request
    • RE: Fog user rights

      I think we need to get a bit of clarity from the Developers on this one.

      On my main node the snapins folder is owned by fog.apache but on my storage node the snapins folder is owned by root.root.

      As I posted below, on my storage node the /images folder is owned by fog.fog, while on my main node /images is owned by root.root.

      There doesn’t seem to be any consistency between the user/group ownership and a functioning system (so to speak). But since we are having this discussion, something must be amiss here.
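
      For anyone who wants to compare their own nodes, checking is quick (paths per a stock install):

      ls -ld /images /opt/fog/snapins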

      posted in FOG Problems
    • RE: NAS problem iomega NFS mount ?

      (Taking FOG out of the equation for a second) OK, and you have that box exported (on the iomega) and then mounted onto a folder on your Linux box?

      posted in FOG Problems
    • RE: Fog user rights

      Just to add a bit of correlation to your post.

      I noticed this when I was working on a POC setup.

      On my master FOG server the files and folders under /images are owned by root.root with a file mode of 777 (realize I may have set mode 777 myself when trying to get the master node to work many months ago). I installed 1.2.0 on this server and then upgraded to the latest trunk builds over the months.

      On my storage node the replicator created the image files (the same ones as on the master node) owned by fog.fog with a mode of 755. The storage node was just set up with 1.2.0 and then immediately updated to the latest trunk build.

      To answer your question about fog being a sudoer: from a security standpoint I would say no. This should remain a low-level account.

      posted in FOG Problems
    • RE: NAS problem iomega NFS mount ?

      Could you clarify this a bit? Do you have a FOG storage node set up, or do you just have external storage connected via NFS?

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      At this screen you would normally do this during setup: press the install/upgrade now button, then go back to the installer and press Y to finish the setup.

      Just to recap: in the command window you run the install program. The install program will pause during the install and instruct you to go to the website and press the install/upgrade now button (which will fix up/update the database schema to the 5080 build). Then you go back to the installer and answer yes, and the setup will finish. If you forget this database update step, I’ve seen the installer fail with errors.

      Once the installer is done, go to the FOG management interface. If you get thrown back to the database update page, then something went wrong. In that case, go to the Apache error log at /var/log/httpd/error_log (on CentOS) and tail that file. If there was a problem with the update program, it will be listed there.
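
      For example:

      tail -f /var/log/httpd/error_log    # CentOS/RHEL; on Debian/Ubuntu it's /var/log/apache2/error.log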

      After I applied one svn update, the database update page itself had an error and would not complete, so each time I tried to log in after the svn update I was presented with the database update page again.

      posted in FOG Problems
    • RE: Cant find Config.PHP

      OK, that tells me you only have /Data exported from your FOG server, so I understand why the client can’t connect to the /images folder.

      For full disclosure, this is what I get when I run the following commands.

      showmount -e localhost

      Export list for localhost:
      /images/dev *
      /images     *
      

      cat /etc/exports

      /images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
      /images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
      

      I’m going to recommend that you add the above two lines to your /etc/exports file.

      Then run:

      exportfs -r
      

      Then run the showmount -e localhost command again. This should now show you the exported /images directory.
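
      If you want to double-check from the client side, a quick manual mount (from any Linux box, with your FOG server’s address in place of <fog_server>) should now work:

      mount -t nfs <fog_server>:/images /mnt -o ro,nolock
      ls /mnt
      umount /mnt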

      posted in FOG Problems
    • RE: Cant find Config.PHP

      On my CentOS 6.7 system there is a file of that name at /opt/fog/service/etc/config.php, but it only points to the web root folder.

      I suspect that the setting you are looking for has moved to a different location.

      But to address your NFS issue: what does this command show you?

      showmount -e localhost
      

      And if you run this command do you see the nfs exports?

      cat /etc/exports
      
      posted in FOG Problems
    • RE: FOG storage node and data replication

      @Gilou

      Very nice feedback. I’m glad I’m not the only one trying to do this setup. From the testing that I’ve done I can say the FOG replicator does work, and it copied all files except what was included in the lftp command Tom posted. My (personal) preference would be to use the built-in tools if available. I can say the bandwidth restrictions do work as defined. And I think it would not be hard (looking from the outside) to add a time-of-day function to the replicator so that it would only replicate images during a specific window. While I haven’t looked, I assume that the FOG Project has some kind of feature request system where this function could be requested.

      While an image pull-back function would be nice, I would assume a do-not-deploy feature could be added as a flag on the image at the slave server. The one-way push of the image would be mandatory; that way we would know all nodes in the FOG deployment cloud 😃 are an exact copy of the master node.

      Instead of copying the databases all over the place, some kind of inter-process communication would be very light on the WAN links. I also thought about configuring mysql to replicate the FOG database out to all of the slave nodes, but I think that would make a mess of things, since FOG wasn’t designed to use unique record numbers. Your mysqldump solution may be the only way to keep things in sync without breaking the whole environment.
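
      The mysqldump route could look something like this (hypothetical host name; note it clobbers the slave’s copy, so server-specific tables like globalSettings would have to be excluded or re-applied afterwards):

      mysqldump -u root -p fog --ignore-table=fog.globalSettings > /tmp/fog.sql
      mysql -h ga-fog.example.com -u root -p fog < /tmp/fog.sql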

      Thanks for the link to Tom’s news. I’ll have to read up on this too.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      Sorry, I got derailed because of the replication issues and didn’t fully describe my issue.

      Here is a made up scenario (names changed to protect the innocent) based on a project that I’m working on.

      Let’s say we have 2 sites, in New York [NY] and Georgia [GA]. The NY site is the HQ for the company. The Windows image design team is at NY, as are the package developers. The NY site has what I will call the master FOG server. The GA site only has basic IT support staff. All images and snapins will be created by the NY IT teams. With plans to expand in the near future to Kansas [KS] and Los Angeles [LA], they want to set up a central deployment console (one way of doing things) that the remote sites can use to deploy images at their local site. Each site is connected to NY over a VPN link on the internet. While they have sufficient bandwidth on these VPN links, they don’t want to deploy images across them.

      Now as I see it I could do the following.

      1. Set up a fully independent FOG server at each site. This server would be used to deploy images. Set up something like rsync to copy the master images and snapins from NY to each site (see the sketch after this list). This would be the easiest to set up, but management would take more work because we would have to manually create settings on the remote systems as new images and snapins were created and replicated.

      2. Set up a full FOG server at each site, but link the remote servers to the master server in NY using something like a master-slave setup. Since each site would have a full FOG server (sans database), they would have the tftp and PXE boot services there (something I feel is missing on a storage node). The remote site admins could use the master node to deploy images via their local FOG server, or some corporate weenie could deploy a new image to all lab computers in GA with the push of a button. I like this concept, since all systems from all sites would be recorded in this single master node, and we could write reports against this single node for documentation purposes. There are a number of issues I can see with this setup, though, from database latency, to the master node attempting to ping all devices to check whether they are up, to the fog clients mistakenly contacting the master FOG node instead of their local FOG deployment server.
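
      For option 1, the rsync piece could be as simple as a nightly cron job on each remote server (a sketch with placeholder names; it assumes ssh keys are set up between the servers, and the FOG settings would still have to be created by hand):

      # /etc/cron.d/fog-image-sync on the GA server
      0 1 * * *  root  rsync -a --delete nyfog.example.com:/images/ /images/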

      As I said, I don’t know if FOG is capable of doing this, since what I want was never in its core design. On the surface it appears to have the right bits in place to make this setup work. Looking back on my original post, I should have created a new one, because the full picture is a bit broader than just a storage node syncing data with the master node.

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      Well, I guess I’m a bit red-faced. I thought Clonezilla was its own thing. I guess I need to download a copy and see if the versions of partclone are the same. While I’m not with the FOG Project (so this is only a guess), I could understand how the version of partclone could be newer/different with Clonezilla than with a traditional Linux OS, because the traditional OS version depends on the OS packager to update it. Clonezilla is its own Linux OS variant where the developer has control over the precise version used in the distribution.

      Understand, I’m not saying this is the issue, but I would wonder why, if using the same cloning engine, you would get different results.

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      Does Clonezilla work correctly for this task? (Just trying to contrast and compare.) FOG uses partclone, where Clonezilla, well, uses Clonezilla to copy the image.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      While this thread has run on a bit, I think a lot of great info has been covered.

      I’m thinking I need to rebuild my whole POC setup, because I think the function of the storage node is just storage (just a guess at this time), and the master node is my current production instance (blindly upgrading the svn version has disabled my production instance in the past). But that’s part of running on the bleeding edge.

      What I need for this project is two (or more, depending on scope creep) functioning FOG instances managed from a single master node. These may be connected across a VPN link or an MPLS link. At this time I’m not sure if FOG is the right tool. I’m not saying FOG is bad, just that I’m trying to do something with it that it wasn’t really designed to do. The idea of linking the master and secondary nodes via the database connection might prove problematic over a WAN connection with latency issues. Again, this is just me thinking about things I’ve seen in the past on other projects. I do have the CPU bandwidth to spin up two new FOG instances to set up the POC in a controlled manner without breaking our production system (which works really well).

      posted in FOG Problems
    • RE: FOG storage node and data replication

      While I can’t comment on the FOG code, a lot of systems will launch a process and then keep track of that process via a handle until it stops. In the destructor for the instance, they will kill off the task based on the handle that was created when the process was launched, if the application instance dies before the launched process does. I think the intent of the replicator was to have only one instance of the lftp process running at a time, so it wouldn’t be too difficult to keep track of the process handle (as opposed to several hundred processes).
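
      In shell terms the pattern looks like this (just a sketch of the idea, not FOG’s actual code):

      lftp -e "mirror /images /images; quit" remote_node &   # launch the worker
      LFTP_PID=$!                                            # remember its handle (PID)
      trap 'kill $LFTP_PID 2>/dev/null' EXIT                 # "destructor": kill the child if we exit first
      wait $LFTP_PID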

      With the current design you normally wouldn’t have to start and stop the replicator multiple times, so having multiple instances of the lftp process running should never happen. I’m not seeing the value in putting energy into fixing a one-off issue.

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      I guess I’ll have to leave this for FOG support, because what you’ve done thus far is exactly what I would have done.

      FOG should copy and put back the same image to the same hardware no problem.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      I was able to update the system to SVN 4070 this AM.

      The FOG replicator service is behaving much better now. The CPU utilization is better, at about the same level as the lftp process itself, so well done, Tom.

      The image files are still syncing between the servers. One thing I did notice about the FOG replicator service: if you stop and restart the replicator several times, multiple lftp processes are left running. Based on this I assume that when the replicator is stopped, it doesn’t kill off the running lftp process. Not an issue under normal conditions, just an observation.
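
      Easy to confirm (the service name matches my trunk install; yours may differ):

      service FOGImageReplicator restart
      ps -ef | grep [l]ftp    # any orphaned lftp processes will still be listed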

      posted in FOG Problems