
    Posts made by george1421

    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      @Wayne-Workman said:

@george1421 What do you mean by global information? Or reports?

I’m trying to think big picture here, but let’s say I want to see all deployments on both the master and slave servers across the company. If the FOG servers are not linked in some manner, I would have to log into each FOG server and run the built-in report to get the deployments. Or suppose I wanted an inventory of systems vs. deployed images for every computer on every FOG server. How could I go about that with the current capabilities?

      posted in Feature Request
    • RE: FOG variables available during postinstall script execution

I think I understand what you are saying about the kernelargs, but I think those would be static entries. For example, how would I access the defined MAC address or hostname (as defined in FOG’s database) from a postinstall script?
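      For context, here is a minimal sketch of the kind of thing I’m after, assuming the values are passed on the kernel command line (the parameter names here are my guesses, not a confirmed FOG interface):

      # Sketch: pull host values out of /proc/cmdline in a postinstall script.
      # Assumes arguments like mac=00:11:22:33:44:55 hostname=NYC-PC01 exist
      # on the kernel command line; the names are illustrative guesses.
      for arg in $(cat /proc/cmdline); do
          case "$arg" in
              mac=*)      mac="${arg#mac=}" ;;
              hostname=*) host="${arg#hostname=}" ;;
          esac
      done
      echo "Imaging ${host:-unknown} (${mac:-no mac found})"
      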

      posted in FOG Problems
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

Unless I’m missing something, I think FOG is pretty close to what I’m looking to implement.

My perspective is looking at the Master/Slave setup as two FOG servers at different sites, separated by a VPN connection. We would want each site’s clients to contact their own local FOG server. All images and snapins would be created and managed from the master node and then replicated via the storage node transfer that is already built in. The bits that are missing are the ability to get global information/reports about all defined hosts from a single console and to schedule deployments from the master or the site-specific slave node to any client computer. This is a bit more than the storage node is capable of doing right now.

      posted in Feature Request
    • RE: FOG Storage node add time of day bandwidth restrictions

      It would be interesting to see if this could be managed from within the application.

I could see just adding a TOD range to the GUI for the storage node, then updating the FOG Replicator service to look at that time range when it starts the replication transfer to that storage node. From the outside it looks trivial to add. 😉
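      To illustrate the idea (just a sketch, not FOG code; in FOG proper this check would live inside the replicator, with the window read from the storage node settings):

      # Sketch: skip replication during the premium window (08:00-17:00 here).
      now=$((10#$(date +%H%M)))   # e.g. "0930" -> 930, forced to base 10
      if (( now >= 800 && now < 1700 )); then
          echo "Inside premium hours, deferring replication"
          exit 0
      fi
      echo "Outside premium hours, starting transfer"
      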

      posted in Feature Request
    • FOG variables available during postinstall script execution

I have several post-deployment scripts that run once the image has been pushed to the client. I’m trying to find out if any FOG host information is available as variables that can be used in these postinstall scripts. One such variable that would be handy is location. Some registry settings are configured based on the location of the system; for example, one install location could be NYC and another could be ATL. There are certain changes we need to make to the image before the OS is loaded that depend on its functional location. This is just one example, but I’m wondering whether other deployment variables are available to these postinstall scripts.
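      To make that concrete, here is roughly what I would want to write. $location is the hypothetical variable I am asking about, and the paths are made up for the example:

      # Sketch: apply location-specific registry settings after imaging.
      # $location is hypothetical; /ntfs stands in for wherever the deployed
      # Windows partition is mounted, and the .reg paths are invented.
      case "$location" in
          NYC) cp /images/postinstall/nyc.reg /ntfs/Windows/Setup/site.reg ;;
          ATL) cp /images/postinstall/atl.reg /ntfs/Windows/Setup/site.reg ;;
          *)   echo "No site-specific settings for '$location'" ;;
      esac
      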

      posted in FOG Problems
    • RE: FOG Storage node add time of day bandwidth restrictions

      @Wayne-Workman said:

      @george1421 You can. You can get very specific with cron-tab events…

Ugh, sorry, I am guilty of reading too fast. I read cron-tab as cross-tab, so I was stuck trying to understand what you meant by a cross-tab query.

Yes, you are correct, it can be done with cron. But these jobs would need to be managed from within the FOG console; you wouldn’t want most users poking around setting up cron jobs. And doing it with cron wouldn’t abort a current transfer or notify any of the FOG services that something happened, because you are poking right into the database.
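      That is what I mean by poking the database. The cron approach would look something like this; the table and column names below are guesses for illustration, I have not checked FOG’s schema, and credentials are omitted:

      # Sketch crontab: throttle the GA node at 8 AM, open it up at 5 PM.
      # 'nfsGroupMembers'/'ngmMaxBitrate' are assumed names, not verified.
      0 8  * * 1-5  mysql fog -e "UPDATE nfsGroupMembers SET ngmMaxBitrate='1024'  WHERE ngmMemberName='GA-node'"
      0 17 * * 1-5  mysql fog -e "UPDATE nfsGroupMembers SET ngmMaxBitrate='10240' WHERE ngmMemberName='GA-node'"
      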

      posted in Feature Request
    • RE: FOG Storage node add time of day bandwidth restrictions

      @Wayne-Workman said:

      I think one could write a cron-tab event to run two scripts…

      As long as you could fire that script at a specific TOD and then revert the setting to the default transfer rate once the premium time range has passed.

      posted in Feature Request
    • RE: FOG Storage node add time of day bandwidth restrictions

I don’t know if the transfer percentage is available in the code. If it were, then sure, we would want it to continue. But then where do you draw the line? What happens if we are at 90%, or 80%? When would you decide to abort vs. continue?

My recommendation would be to use rsync, because even if we were at 94%, if you abort rsync and start it up again it will skip ahead to where it left off and continue at the changed transfer speed.

      posted in Feature Request
    • Create the concept of a ForeignMasterStorage (deployment) node

I’ve looked into the possibility of creating a slave deployment node by setting up a master node in the traditional manner, then creating the proposed slave node as you would in the traditional way, but at the end of the process pointing the slave node at the master node’s database. This will work for most of the tables except for the FOG-server-specific tables like globalSettings. These settings are unique to the individual FOG server. I can see that if your FOG slave server is located in a different subnet, or if there are conflicting settings between the master node and slave node, there will be a settings clash. If the globalSettings table had an additional field that represented a unique FOG installation ID, the (global) settings could be scoped to each individual FOG server. I didn’t check many other tables for FOG settings clashes, but it looks like the current FOG system could be extended to a Master/Slave configuration.

The other way I thought about is to keep the FOG databases isolated and then just send JSON or other types of IPC messages between the master and slave(s) FOG servers (they could be done as HTTP POST calls between the systems, for that matter). This would allow the FOG installations to run standalone if needed but also communicate with a master node. Personally I like this approach a bit better from a scalability and robustness standpoint.
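      Something along these lines (the endpoint and payload are entirely invented, just to show the shape of the idea; no such API exists in FOG today, which is the feature being requested):

      # Sketch: master asks a slave to schedule a deployment over plain HTTP.
      # The /fog/api/task endpoint and the JSON fields are hypothetical.
      curl -s -X POST http://ga-fog.example.com/fog/api/task \
           -H "Content-Type: application/json" \
           -d '{"action":"deploy","hostMAC":"00:11:22:33:44:55","imageID":7}'
      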

      posted in Feature Request
    • FOG Storage node add time of day bandwidth restrictions

Add the ability to set bandwidth restrictions for replication based on time of day. I can see a need to define a single time range (such as 8 AM to 5 PM) where we might want to restrict replication traffic to 1 MB/s, but after that time range replicate data at 10 MB/s. This would be needed in a WAN/VPN setup where the storage node might be located behind a congested link. To extend this concept a bit more, setting 0 MB/s would disable replication during that time range. You would want to do this if your storage node was behind a slow MPLS link.

I can see issues since we are moving large files: if a time boundary passes but the file hasn’t finished transferring, the file would continue to send at the old transfer rate until the copy is finished. If rsync were used to move files instead of the current FTP process, the current transfer could be aborted when a time boundary is passed and then restarted with the new bandwidth restrictions; rsync would continue moving the file where it left off before the transfer was aborted.
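      For example (a rough sketch with made-up host names; --bwlimit is in KB/s):

      # Sketch: replicate with a bandwidth cap and resume support.
      # If this is killed at a time boundary and rerun with a new --bwlimit,
      # --partial lets rsync pick each file up where it stopped.
      rsync -a --partial --bwlimit=1024  /images/ ga-node:/images/   # premium hours, ~1 MB/s
      rsync -a --partial --bwlimit=10240 /images/ ga-node:/images/   # off hours, ~10 MB/s
      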

      posted in Feature Request
    • RE: Fog user rights

      I think we need to get a bit of clarity from the Developers on this one.

      On my main node the snapins folder is owned by fog.apache but on my storage node the snapins folder is owned by root.root.

      As I posted below on my storage node the /images folder is owned by fog.fog and on my main node /images is owned by root.root.

There doesn’t seem to be any consistency between the user/group and a functioning system (so to speak). But since we are having this discussion, something must be amiss here.
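      For anyone wanting to compare their own install, the quick check I ran on both boxes was simply (paths are the defaults on my install):

      # Show owner, group and mode for the two directories in question.
      stat -c '%U:%G %a %n' /images /opt/fog/snapins
      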

      posted in FOG Problems
    • RE: NAS problem iomega NFS mount ?

(Taking FOG out of the equation for a second.) OK, and you have that box exported (on the Iomega) and then mounted onto a folder on your Linux box?
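      In other words, something equivalent to this on the Linux side (the host name and share path are placeholders for your device’s values):

      # Sketch: mount an Iomega NFS share onto the Linux box.
      mkdir -p /mnt/iomega
      mount -t nfs iomega-nas:/nfs/backup /mnt/iomega
      df -h /mnt/iomega   # confirm it is actually mounted
      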

      posted in FOG Problems
    • RE: Fog user rights

      Just to add a bit of correlation to your post.

      I noticed this when I was working on a POC setup.

On my master FOG server the files and folders under /images are owned by root.root with a file mode of 777 (I realize I may have set mode 777 myself when trying to get the master node to work many months ago). I installed 1.2.0 on this server and then upgraded to the latest trunk builds over the months.

On my storage node the replicator created the image files (same as on the master node) owned by fog.fog with a mode of 755. The storage node was just set up with 1.2.0 and then immediately updated to the latest trunk build.

To answer your question about fog being a sudoer: from a security standpoint I would say no. This should remain a low-privilege account.

      posted in FOG Problems
    • RE: NAS problem iomega NFS mount ?

      Could you clarify this a bit. Do you have a FOG Storage node setup or do you just have external storage connected via NFS?

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

At this screen you would normally do this during setup: press the install/upgrade now button, and then go back to the installer and press Y to finish the setup.

Just to recap: in the command window you run the install program. The install program will pause during the install and instruct you to go to the website and press the install/upgrade now button (which will fix up/update the database schema to the 5080 build). Then you go back to the installer and answer yes, and the setup will finish. If you skip this database update step, I’ve seen the installer fail with errors.

Once the installer is done, go to the FOG management interface. If you get thrown back to the install/update database page, then something went wrong. In that case, go to the apache error log at /var/log/httpd/error_log (on CentOS) and tail that file. If there was a problem with the update program, it will be listed there.
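      On CentOS that boils down to:

      # Watch the apache error log while re-running the update page.
      tail -f /var/log/httpd/error_log
      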

I hit this myself after applying one svn update: the database update page had an error and would not complete, so each time I tried to log in after the svn update I was presented with the database update page again.

      posted in FOG Problems
    • RE: Cant find Config.PHP

OK, that tells me you only have /Data exported from your FOG server, so I understand why the client can’t connect to the /images folder.

      For full disclosure this is what I get when I run the following commands.

      showmount -e localhost

      Export list for localhost:
      /images/dev *
      /images     *
      

      cat /etc/exports

      /images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
      /images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
      

      I’m going to recommend that you add the above two lines to your /etc/exports file.

      Then run:

      exportfs -r
      

      Then run the showmount -e localhost command. This should then show you the exported /images directory.

      posted in FOG Problems
    • RE: Cant find Config.PHP

On my CentOS 6.7 install there is a file of that name at /opt/fog/service/etc/config.php, but that only points to the web root folder.

      I suspect that the setting you are looking for has moved to a different location.

But to address your NFS issue: what does this command show you?

      showmount -e localhost
      

And if you run this command, do you see the NFS exports?

       cat /etc/exports
      
      posted in FOG Problems
    • RE: FOG storage node and data replication

      @Gilou

Very nice feedback. I’m glad I’m not the only one trying to do this setup. From the testing that I’ve done, I can say the FOG replicator does work; it copied all files except those covered by the lftp command Tom posted. My (personal) preference would be to use the built-in tools when available. I can say the bandwidth restrictions do work as defined. And I think it would not be hard (looking from the outside) to add a time-of-day function to the replicator so that it would only replicate images during a specific window. While I haven’t looked, I assume the FOG Project has some kind of feature request system for requesting this function.

While an image pull-back function would be nice, I would assume a do-not-deploy feature could be added as a flag on the image on the slave server. The one-way push of the image would be mandatory; that way we would know all nodes in the FOG deployment cloud 😃 were exact copies of the master node.

Instead of copying the databases all over the place, some kind of in-process communication would be very light on the WAN links. I also thought about configuring mysql to replicate the FOG database around to all of the slave nodes, but I think that would make a mess of things since FOG wasn’t designed to use unique record numbers. Your mysqldump solution may be the only way to keep things in sync without breaking the whole environment.
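      A rough sketch of the dump-and-restore approach (host name made up, credentials omitted); note it clobbers the slave’s own records, which is exactly the unique-record problem above:

      # Sketch: push the master's FOG database to a slave wholesale.
      mysqldump fog | mysql -h ga-fog.example.com fog
      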

      Thanks for the link to Tom’s news. I’ll have to read up on this too.

      posted in FOG Problems
    • RE: FOG storage node and data replication

Sorry, I got derailed by the replication issues and didn’t fully describe my issue.

      Here is a made up scenario (names changed to protect the innocent) based on a project that I’m working on.

Let’s say we have 2 sites, in New York [NY] and Georgia [GA]. The NY site is the HQ for the company. The Windows image design team is at NY, as are the package developers. The NY site has what I will call the master FOG server. The GA site only has basic IT support staff. All images and snapins will be created by the NY IT teams. With plans to expand in the near future to Kansas [KS] and Los Angeles [LA], they want to set up a central deployment console (one way of doing things) that the remote sites can use to deploy images at their local site. Each site is connected to NY over a VPN link on the internet. While they have sufficient bandwidth on these VPN links, they don’t want to deploy images across them.

      Now as I see it I could do the following.

1. Set up a fully independent FOG server at each site. This server would be used to deploy images. Set up something like rsync to copy the master images and snapins from NY to each site (see the sketch after this list). This would be the easiest to set up, but management would take a bit more effort because we would have to manually create settings on the remote systems as new images and snapins were created and replicated.

2. Set up a full FOG server at each site, but link the remote servers to the master server in NY using something like a master/slave setup. Since each site would have a full FOG server (sans database), they would have the tftp and PXE boot services there (something I feel is missing on a storage node). The remote site admins could use the master node to deploy images via their local FOG server, or some corporate weenie could deploy a new image to all lab computers in GA with the push of a button. I like this concept since all systems from all sites would be recorded in this single master node. We could write reports against this single node for documentation purposes. There are a number of issues I can see with this setup, from database latency, to the master node attempting to ping all devices to check whether they are up, to the FOG clients mistakenly trying to contact the master FOG node instead of their local FOG deployment server.
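      For option 1, the nightly sync could be as simple as the following crontab entries (hosts and paths invented; --delete keeps GA an exact mirror of NY):

      # Sketch crontab for option 1: one-way image/snapin sync from NY to GA.
      0 1  * * *  rsync -a --delete /images/ ga-fog:/images/
      30 1 * * *  rsync -a --delete /opt/fog/snapins/ ga-fog:/opt/fog/snapins/
      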

As I said, I don’t know if FOG is capable of doing this, since what I want was never part of its core design. On the surface it appears to have the right bits in place to make this setup work. Looking back on my original post, I should have created a new one, because the full picture is a bit broader than just a storage node syncing data with the master node.

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

Well, I guess I’m a bit red-faced. I thought Clonezilla was its own thing. I guess I need to download a copy and see if the versions of partclone are the same. While I’m not with the FOG Project (so this is only a guess), I could understand how the version of partclone could be newer/different in Clonezilla than in a traditional Linux OS, because the traditional OS version depends on the OS packager to update the package. Clonezilla is its own Linux OS variant where the developer has control over the precise version used in the distribution.

Understand I’m not saying this is the issue, but I would wonder why, using the same cloning engine, you would get different results.

      posted in FOG Problems