Posts by george1421
    • RE: Active Directory & Specific OU

      I can tell you yes, the OU needs to be in LDAP format, and it appears you have the right format. As long as the Maintenance OU exists under your main OU (like Computers does), it should place the computer in the right spot. I can say from one of my installs I have (ou=Desktops,ou=Computers,ou=NYC,ou=US,dc=domain,dc=local).

      Since your target computer is ending up in the Computers OU, you must have the right information to join the computer to the domain, so the FogCrypt part is right too.

      When you make a change to the group… everything disappears. That one got me too. FOG applies the information to the host based on the group, but the group doesn’t keep the settings. It would be logical for the group to retain this setting, but it doesn’t; it is just used to apply the values to the host. I would go into the target host you are interested in and check the AD settings there. Make sure the proper OU settings are there, then redeploy the host again.

      posted in FOG Problems
    • RE: Images not being Deployed

      I guess I should clarify. We use MDT to create our reference image on a VM with a 40GB disk. During the MDT deploy task we apply all of the Windows updates. If we are making a fat image, then we install the additional software at that time (using a fat-image task sequence in MDT). When we have the image the way we want it, we sysprep it and capture the image. At this capture point we still only have a 40GB disk. So we deploy that 40GB disk to the target computer, and then once on the target computer we extend the disk with diskpart.

      We do it this way because we rebuild the golden image each quarter. If you have everything set up to create your reference image automatically, the actual hands-on time is very small. What takes the most time from start to finish is Windows updates. So far with the Windows 7 updates it takes overnight to apply them all (~14 hrs).

      While I got a bit off point, the key is to deploy to your client computers an image smaller than their smallest disk size, then extend their logical disk to the physical size during the cleanup process.

      posted in FOG Problems
    • RE: Images not being Deployed

      @jquilli1 said:

      I was told to use “Multiple Partition Image - Single Disk (Non-resizable)” if I’m capturing an image that’s Windows 7 or above. Have I been mistaken this whole time?

      While I quickly scanned this thread, I didn’t see what client OS you are deploying. I can say that we deploy “Multiple Partition Image - Single Disk (Non-resizable)” to all of our Win7 and Win8.x (and soon Win10) systems (MBR only). The one thing that we DO is create our reference image on a VM with a small hard drive (40GB); that way we are sure it will deploy correctly to any hardware we might have in the future. 40GB is sufficient for Windows + updates + core applications. When we deploy that 40GB image to a computer with a 128GB (or larger) drive, initially the logical hard drive will be 40GB. In the SetupComplete.cmd file we launch a command script through Windows’ diskpart.exe utility to extend the logical drive to the size of the physical disk. While we haven’t had to do that with Linux, there are commands to do that too.

      To date we’ve deployed several hundred systems using this method, with FOG and a few other deployment tools.
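The diskpart step described above can be sketched as a SetupComplete.cmd fragment. This is a minimal sketch only: disk 0, volume C, and the temp-file path are assumptions about a typical single-disk, single-OS-volume layout, not something FOG generates for you.

```bat
@echo off
rem Hypothetical sketch: extend the deployed 40GB logical drive to the full
rem physical disk during the Windows setup-complete phase. Disk/volume
rem numbers are assumptions; verify them with "list disk" / "list volume".
(
  echo select disk 0
  echo select volume c
  echo extend
) > "%TEMP%\extend.txt"
diskpart /s "%TEMP%\extend.txt"
del "%TEMP%\extend.txt"
```

Because SetupComplete.cmd runs as SYSTEM at the end of Windows setup, diskpart has the privileges it needs at that point.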

      posted in FOG Problems
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      At the risk of extending this feature request even more…

      Please understand I’m not trying to be difficult, I truly want to understand if what I want to do is possible. I think we have a communication misalignment. I’m not doing a very good job explaining the situation because I keep seeing the same results (maybe that is the only answer, I don’t know).

      But I’m assuming from your context that in my drawing below there is one full deployment server in that network, with the rest being storage nodes. Is that a correct assumption?

      I understand the function of the location plugin: it allows you to assign storage groups and storage devices to a location and then link a host to a location so it knows where to get (and, if necessary, put) an image. I get that. I’ve been using FOG for quite a while.

      The issues I’m seeing here are these:

      1. The storage nodes are not fully functional deployment servers. They are missing the tftpboot directory. While they do have the PXE boot kernel and file system, they alone cannot provide PXE booting services for a remote site.
      2. The storage nodes do not appear to have a SQL server instance running, so I assume they reach out to the master node’s database for each transaction. Historically I’ve seen this be an issue with other products as they try to reach across WAN links for transactional data.
      3. There is no local web interface on the storage nodes, so all deployment techs from every site must interface with the HQ master node. This shouldn’t be an issue, since the web interface is very light as opposed to some other Flash- or Silverlight-based management consoles.
      4. While this is not a technical issue, it’s more of a people issue: since you will have techs from every site interfacing with a single management node, it is possible for one tech to mistakenly deploy to (i.e. mess up) hosts at another site, since there is no built-in location awareness in regards to their user accounts.
      5. On the deployed hosts, where does the FOG service connect to? Is it the local storage node or the master node?
      6. Storage nodes can only replicate with the master node; i.e. if there are two storage nodes at a remote site, one storage node cannot get its image files from the other storage node at that site. All images must be pulled across the WAN for each storage node.
      7. Multicasting is only functional from the master node. So in the diagram below only HQ could use multicasting to build its clients. (edit: added based on a current unrelated thread)

      The fog system is very versatile and you guys have put a LOT of effort into it since the 0.3x days. And you should be acknowledged for your efforts. Understand I’m not knocking the system that has been created or your time spent on the project.

      Working through this post, I can see that having a single master node with the rest being storage nodes would work if:

      1. The /tftpboot directory was included in the files replicated from the master node, and the tftp service was set up in xinetd. (Actually this could be built in as part of a storage node deployment by default, by having the service and tftpboot folder set up even if it isn’t used in every deployment. There is no downside IMO.)
      2. The user profile was location aware to keep them from making changes to hosts in other locations. The location awareness must have the ability to assign users who have global access for administration purposes.
      3. The storage nodes would have to be aware of latency issues with slow WAN links. And/or not break completely with momentary WAN outages.
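For item 1 above, the tftp side of a storage node is standard plumbing. A typical /etc/xinetd.d/tftp fragment would look like the following (server path assumes the stock in.tftpd/tftpd-hpa package; adjust to your distribution):

```
service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
        disable         = no
}
```

With that in place (and /tftpboot replicated from the master), a storage node could answer PXE requests for its own site.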
      posted in Feature Request
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      @Joseph-Hales said:

      If you are not updating images that often it might be more logical to sneaker-net images to the other site when you make changes.

      Good point; it just may be easier and quicker to throw the image on a flash drive and overnight it to the other sites if transfer speed is required. But then there are more hands-on steps at each site to import the image and create the DB entries.

      While it’s clear that the current FOG trunk can do this, right now the how is missing from this discussion.

      posted in Feature Request
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      @Wayne-Workman said:

      But I wanted to point out that a typical 16GB (compressed size) image, pushing one copy of the image to one other node across a 1.5Mb/s link will take roughly 24 hours, and that’s if you have 100% of the 1.5Mb/s dedicated to the transfer.

      Have you thought about this? How big are your images?

      I selected a network connection for the POC that was artificially slow. I see network latency being a real issue with a distributed design.

      Our thin image (Win7 only + updates) is about 5GB in size and our fat image is over 15GB. At 1.5Mb/s I would suspect that we would have FTP transfer issues with file moves that take longer than 24 hrs to complete. But that is only speculation.

      It’s good to hear that FOG could do this without any changes.

      posted in Feature Request
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      Excellent…

      posted in Feature Request
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      Knowing what you know about the new features built into the SVN trunk, can I do this without any new “stuff” being added to FOG?

      posted in Feature Request
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      @Tom-Elliott said:

      @george1421 I’m still confused.

      It’s highly possible that I’m ignorant of the features you have added to the trunk builds, plus I’m not doing a good job of explaining the current situation, where I think FOG is highly capable of accomplishing this with a few adjustments. I’ve looked through the wiki to see if there was something similar to what I need to do. The only thing that came close was https://wiki.fogproject.org/wiki/index.php/Managing_FOG#Storage_Management (the second graphic, which shows the multiple storage groups). This is the POC concept used to set up my test environment.

      I took that previous drawing and built this sample layout:
      [attached image: storage_network.JPG]

      In this scenario I have these requirements (almost sounds like a school project):

      1. Will be constructed with 3 or more sites
      2. Connection to each site will be via an MPLS 1.5Mb/s link
      3. Because of the slow link each site must have its own FOG Deployment server to provide PXE booting
      4. Each of the sites could have one or more VLANs each with their own subnets isolated by a router.
      5. Corporate images will be created at the HQ site and distributed to all sites. There is a potential that each site could have their own images for specific purposes. So each site must be able to capture images to their local deployment server.
      6. On a corporate deployed image there may be a reason to recall or block deployment of a specific image across the organization (such as a detected flaw in the image).
      7. The location plugin is installed on all FOG servers. The only site that will have more than one locally defined location is LA

      To clarify the above picture:
      In the HQ location there is only one deployment server HQMasterNode
      The LAMasterNode and ATLMasterNode are connected back to HQ via an MPLS link (right now this is all done in a single virtual environment)
      In the LA site there are 3 FOG servers: one FOG deployment server, one FOG storage server, and one FOG storage server with PXE booting enabled (I think that is an option). The LA site also has two VLANs with about 700 nodes distributed across them. There are two defined locations for the LA site (LA_BLD01 and LA_BLD02)
      The ATL site only has one FOG Deployment server and one storage node on a single subnet.

      This is how I have the test environment built in my test lab.

      As I posted before, I seeded the images on the HQMasterNode with images from my production FOG server. No replication happened between the HQMasterNode, LAMasterNode, or ATLMasterNode until I created the first image definition on the HQMasterNode. Once that first image definition was created, all images that were seeded on the HQMasterNode were replicated to the other two nodes in the HQ storage group. This worked great; now all images created on the HQMasterNode were located at the site FOG deployment servers. The images did not get distributed beyond each site’s MasterNode, though. On the ATLMasterNode I created a single image definition and then the images were replicated to the ATLSlaveNode01.

      The first issue I ran into was that even though I created all of the image definitions on the HQMasterNode, those definitions were not copied to the LAMasterNode or the ATLMasterNode. Somehow I need to get those definitions (I’ll assume the same for the snapins) from the HQ deployment server to each site’s deployment server. This could be accomplished with a mysqldump of the tables before the replication starts, picked up at the remote end with a mysqlimport run. Or by making URL calls to each site’s deployment server to update its database with the image information.
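The mysqldump idea could be sketched roughly as below. This is a sketch only: the database name ("fog"), the table names ("images", "snapins"), and the site hostnames are assumptions for illustration, and a real version would need credentials and conflict handling. The commands are echoed rather than executed so the sketch is safe to run without a MySQL server present.

```shell
#!/bin/sh
# Hypothetical sketch: push FOG image/snapin definitions from the HQ master's
# database to each site's deployment server before file replication begins.
# Database, table, and host names are assumptions, not confirmed FOG schema.

DB="fog"
DUMP="/tmp/fog-defs.sql"

# Build (echo) the dump command for the definition tables only.
dump_cmd() {
    echo "mysqldump --single-transaction $DB images snapins > $DUMP"
}

# Build (echo) the load command for one remote site.
load_cmd() {
    echo "mysql -h $1 $DB < $DUMP"
}

dump_cmd
for site in lamasternode atlmasternode; do
    load_cmd "$site"
done
```

The URL-call alternative mentioned above would avoid shipping SQL around, but would require an API on each site's deployment server.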

      posted in Feature Request
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      @Wayne-Workman point well taken.

      I’m not really interested in creating a mishmash of scripts to do crazy things. I can see what needs to be done to make this work as FOG is currently designed.

      I’ve spent some time recreating my POC environment and have a mostly workable system using the current SVN. Based on the results of my testing I changed a word in the title of this feature request, from slave to foreign master storage node, because it sounds much cooler and is a bit more accurate.

      All joking aside: I found that if I create 3 storage groups representing 3 different sites, each with its own master storage node, and then in the center storage group make the master storage nodes from the left and right storage groups a “storage node” (or, to use my made-up name, a “Foreign Master Storage node”), I can send the images from a central master storage node to all other storage nodes in the other storage groups. (It’s a bit hard to explain with just words, but it does work.) Eventually each storage group will be located at a different site, so I need a fully functional master node in each storage group.

      I did find an interesting fact: I seeded the center master storage node with images from my production server, but the replication did not start until I created the first image entry in the database. Then the files were replicated from the center Master Storage node to the other Foreign Master Storage nodes. The issue I’m at right now is that I need to get the content from the images and snapins tables to both the left and right Foreign Master Storage nodes, or they won’t start replicating to their storage nodes.

      posted in Feature Request
    • RE: No Username and Password Populated After Storage Node Install

      While Tom may have to confirm, as far as I know you don’t log into a storage-only node. Only the main node has the ability to log in. I think the last time I tried to connect to a storage node’s /fog folder there was a basic message saying you can’t log in here.

      posted in FOG Problems
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      @Wayne-Workman said:

      @george1421 What do you mean by global information ? or reports?

      I’m trying to think big picture here, but let’s say I want to see all deployments on both the master and slave servers across the company. If the FOG servers are not linked in some manner, I would have to log into each FOG server and run the built-in report to get the deployments. Or say I wanted to get an inventory list of systems vs. deployed images for every computer on every FOG server. How could I go about it with the current capabilities?

      posted in Feature Request
    • RE: FOG variables available during postinstall script execution

      I think I understand what you are saying about the kernelargs, but I think those would be static entries. For example, how would I access the defined MAC address or hostname (as defined in FOG’s database) from a postinstall script?

      posted in FOG Problems
    • RE: Create the concept of a ForeignMasterStorage (deployment) node

      Unless I’m missing something I do think FOG is pretty close to what I’m looking to implement.

      My perspective is looking at the Master-Slave setup as two FOG servers separated by a VPN connection at different sites. We would want each site’s clients to contact their own local FOG server. All images and snapins would be created and managed from the master node and then replicated via the storage node transfer that is already built in. The bits that are missing are getting global information/reports about all defined hosts from a single console, and scheduling deployments from the master or the site-specific slave node to any client computer. This is a bit more than the storage node is capable of doing right now.

      posted in Feature Request
    • RE: FOG Storage node add time of day bandwidth restrictions

      It would be interesting to see if this could be managed from within the application.

      I could see just adding a TOD range to the GUI for the storage node, then updating the FOG replicator service to look at that time range when it starts the replication transfer to that storage node. From the outside it looks trivial to add. 😉

      posted in Feature Request
    • FOG variables available during postinstall script execution

      I have several post deployment scripts that run once the image has been pushed to the client. I’m trying to find out if any FOG host information is available as variables that can be used in these post install scripts. One such variable that would be handy is location. Some registry settings are configured based on the location of the system. For example one location could be NYC and another install location could be ATL. There are certain changes we need to make to the image before the OS is loaded that are dependent on its functional location. This is just one example, but I’m wondering if other deployment variables are available to these post install scripts.
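One hedged starting point, while waiting for an authoritative answer on what FOG exposes: read whatever the deploy environment passes on the kernel command line. The parameter names below (hostname=, mac=) are assumptions for illustration, not confirmed FOG variable names; check /proc/cmdline in your own environment to see what is really there.

```shell
#!/bin/sh
# Hypothetical sketch: pull host settings out of the kernel command line
# from inside a postinstall script. The hostname=/mac= parameter names are
# assumptions; inspect /proc/cmdline in your deploy environment to confirm.

get_arg() {  # print the value of key=value from a cmdline string
    key="$1"
    cmdline="$2"
    for tok in $cmdline; do
        case "$tok" in
            "$key"=*) printf '%s\n' "${tok#*=}" ;;
        esac
    done
}

# In a real postinstall script you would use: cmdline="$(cat /proc/cmdline)"
# A sample string stands in here so the sketch is self-contained.
cmdline='initrd=init.xz hostname=NYC-PC01 mac=00:11:22:33:44:55 type=down'
get_arg hostname "$cmdline"
get_arg mac "$cmdline"
```

If FOG doesn't pass a value like location on the command line, the fallback would be a lookup against the FOG server keyed on the MAC address.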

      posted in FOG Problems
    • RE: FOG Storage node add time of day bandwidth restrictions

      @Wayne-Workman said:

      @george1421 You can. You can get very specific with cron-tab events…

      Ugh, sorry, I am guilty of reading too fast. I read cron-tab as cross-tab, so I was stuck trying to understand what you meant by a cross-tab query.

      Yes, you are correct, it can be done with cron. But these jobs would need to be managed from within the FOG console; you wouldn’t want most users poking around setting up cron jobs. And doing it with cron wouldn’t abort a current transfer or notify any of the FOG services that something happened, because you are poking right into the database.

      posted in Feature Request
    • RE: FOG Storage node add time of day bandwidth restrictions

      @Wayne-Workman said:

      I think one could write a cron-tab event to run two scripts…

      As long as you could fire that script at a specific TOD and then revert the setting to the default transfer rate once the premium time range has passed.
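A minimal sketch of that cron approach, in /etc/crontab format. The fog database name, the nfsGroupMembers table, and the bandwidth column name are assumptions that would need checking against the real FOG schema; the values are illustrative Kb/s limits.

```
# 07:00: throttle replication to the LA node for premium daytime hours
0 7  * * * root mysql fog -e "UPDATE nfsGroupMembers SET ngmBandwidthLimit=500 WHERE ngmMemberName='LAMasterNode'"
# 19:00: restore unrestricted transfers
0 19 * * * root mysql fog -e "UPDATE nfsGroupMembers SET ngmBandwidthLimit=0 WHERE ngmMemberName='LAMasterNode'"
```

As noted above, this only changes the stored value; it would not interrupt a transfer already in progress.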

      posted in Feature Request
    • RE: FOG Storage node add time of day bandwidth restrictions

      I don’t know if the transfer percentage is available in the code. If it were, then sure, we would want it to continue. But then where do you draw the line? What happens if we are at 90%, or 80%? When would you decide to abort vs. continue?

      My recommendation would be to use rsync, because even if we were at 94%, if you abort rsync and start it up again it will skip ahead to where it left off and continue at the changed transfer speed.

      posted in Feature Request
    • Create the concept of a ForeignMasterStorage (deployment) node

      I’ve looked into the possibility of creating a slave deployment node by setting up a master node in the traditional manner, then creating the proposed slave node as you would in the traditional way, but at the end of the process pointing the slave node at the master node’s database. This will work for most of the tables, except for the FOG-server-specific tables like globalSettings. These settings are unique to the individual FOG server. I can see that if your FOG slave server is located in a different subnet, or if there are conflicting settings between the master node and slave node, there will be a settings clash. If the globalSettings table had an additional field representing a unique FOG installation ID, the (global) settings could be scoped to each individual FOG server. I didn’t check many other tables for FOG settings clashes, but it looks like the current FOG system could be extended to a Master-Slave configuration.

      The other way I thought about is to keep the FOG databases isolated and then just send JSON or other types of IPC messages (they could be done as HTTP POST calls between the systems, for that matter) between the master and slave FOG servers. This would allow the FOG installations to run stand-alone if needed but also communicate with a master node. Personally I like this approach a bit better from a scalability and robustness standpoint.

      posted in Feature Request