
    Posts made by george1421

    • RE: Fog user rights

      Just to add a bit of corroboration to your post.

      I noticed this when I was working on a POC setup.

      On my master FOG server the files and folders under /images are owned by root.root with a file mode of 777 (I realize I may have set the mode to 777 myself many months ago while trying to get the master node to work). I installed 1.2.0 on this server and then upgraded to the latest trunk builds over the months.

      On my storage node the replicator created the image files (same as on the master node) owned by fog.fog with a mode of 755. The storage node was set up with 1.2.0 and then immediately updated to the latest trunk build.

      To answer your question about fog being a sudoer: from a security standpoint I would say no. This should remain a low-privilege account.
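
      If you want to compare or normalize the ownership yourself, here is a quick check from the shell (a sketch only; the fog.fog owner and 755 mode mirror what the replicator created on my node, so adjust to your setup):

      ls -ld /images /images/*    # show the current owner and mode
      chown -R fog.fog /images    # assumption: the fog service account exists on your server
      chmod -R 755 /images        # tighter than 777 but still readable for deploys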

      posted in FOG Problems
    • RE: NAS problem iomega NFS mount ?

      Could you clarify this a bit? Do you have a FOG Storage node set up, or do you just have external storage connected via NFS?

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      At this screen, during setup you would normally press the install/upgrade now button and then go back to the installer and press Y to finish the setup.

      Just to recap.
      In the command window you run the install program. The install program will pause during the install and instruct you to go to the website and press the install/upgrade now button (which will fix up / update the database schema for the 5080 build). Then you go back to the installer and answer yes, and the setup will finish. If you forget this database update step, I've seen the installer fail with errors.

      Once the installer is done, go to the FOG management interface. If you get thrown back to the install/update database page then something went wrong; in that case go to the Apache error log at /var/log/httpd/error_log (on CentOS) and tail that file. If there was a problem with the update program it will be listed there.
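
      For example, on a stock CentOS/Apache layout:

      tail -f /var/log/httpd/error_log    # watch for PHP errors while you re-run the schema update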

      I hit this myself after one svn update: the database update page threw an error and would not complete, so each time I tried to log in after that update I was presented with the database update page again.

      posted in FOG Problems
    • RE: Cant find Config.PHP

      Ok, that tells me you only have /Data exported from your FOG server, so I understand why the client can't connect to the /images folder.

      For full disclosure this is what I get when I run the following commands.

      showmount -e localhost

      Export list for localhost:
      /images/dev *
      /images     *
      

      cat /etc/exports

      /images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
      /images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
      

      I’m going to recommend that you add the above two lines to your /etc/exports file.

      Then run:

      exportfs -r
      

      Then run the showmount -e localhost command. This should then show you the exported /images directory.
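
      Once the exports show up, you can sanity-check them from a client as well (a sketch; <fog-server-ip> is a placeholder for your FOG server's address):

      showmount -e <fog-server-ip>                  # should now list /images and /images/dev
      mount -t nfs <fog-server-ip>:/images /mnt     # test-mount the read-only export
      umount /mnt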

      posted in FOG Problems
    • RE: Cant find Config.PHP

      On my CentOS 6.7 system there is a file of that name at /opt/fog/service/etc/config.php, but it only points to the web root folder.

      I suspect that the setting you are looking for has moved to a different location.

      But to address your NFS issue: what does this command show you?

      showmount -e localhost
      

      And if you run this command do you see the nfs exports?

       cat /etc/exports
      
      posted in FOG Problems
    • RE: FOG storage node and data replication

      @Gilou

      Very nice feedback. I'm glad I'm not the only one trying to do this setup. From the testing that I've done I can say the FOG replicator does work, and it copied all files except what was included in the lftp command Tom posted. My (personal) preference would be to use the built-in tools if available. I can say the bandwidth restrictions do work as defined. And I think it would not be hard (looking from the outside) to add a time-of-day function to the replicator so that it would only replicate images during a specific window. While I haven't looked, I assume that the FOG Project has some kind of feature request system where this function could be requested.

      While an image pull-back function would be nice, I would assume a do-not-deploy flag could be added to the image on the slave server. The one-way push of the image would be mandatory; that way we would know all nodes in the FOG deployment cloud 😃 are an exact copy of the master node.

      Instead of copying the databases all over the place, some kind of in-process communication would be very light on the WAN links. I also thought about configuring mysql to replicate the FOG database around to all of the slave nodes, but I think that would make a mess of things since FOG wasn't designed to use unique record numbers. Your mysql dump solution may be the only way to keep things in sync without breaking the whole environment.

      Thanks for the link to Tom’s news. I’ll have to read up on this too.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      Sorry, I got derailed because of the replication issues and didn't fully describe my issue.

      Here is a made up scenario (names changed to protect the innocent) based on a project that I’m working on.

      Let's say we have 2 sites, in New York [NY] and Georgia [GA]. The NY site is the HQ for the company. The Windows image design team is at NY, as are the package developers. The NY site has what I will call the master FOG server. The GA site only has basic IT support staff. All images and snapins will be created by the NY IT teams. With plans to expand in the near future to Kansas [KS] and Los Angeles [LA], they want to set up a central deployment console (one way of doing things) that the remote sites can use to deploy images at their local site. Each site is connected to NY over a VPN link on the internet. While they have sufficient bandwidth on these VPN links, they don't want to deploy images across them.

      Now as I see it I could do the following.

      1. Set up a fully independent FOG server at each site. This server would be used to deploy images. Set up something like rsync to copy the master images and snapins from NY to each site (see the sketch after this list). This would be the easiest to set up, but management would take a bit more work because we would have to manually create settings in the remote systems as new images and snapins were created and replicated.

      2. Set up a full FOG server at each site, but link the remote servers to the master server in NY using something like a master-slave setup. Since each site would have a full FOG server (sans database) they would have the tftp and PXE boot services there (something I feel is missing on a storage node). The remote site admins could use the master node to deploy images via their local FOG server, or some corporate weenie could deploy a new image to all lab computers in GA with the push of a button. I like this concept since all systems from all sites would be recorded in this single master node, and we could write reports against this single node for documentation purposes. There are a number of issues I can see with this setup, from database latency, to the master node attempting to ping all devices to check whether they are up, to the fog clients mistakenly contacting the master FOG node instead of their local FOG deployment server.
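
      For option 1, the replication piece could be as simple as a scheduled rsync from the NY master to each remote server (a rough sketch; the hostname and paths are assumptions, and it only copies files, not database records):

      # run from cron on the NY master, one pair of lines per remote site
      rsync -avz --delete /images/ fog-ga.example.com:/images/
      rsync -avz --delete /opt/fog/snapins/ fog-ga.example.com:/opt/fog/snapins/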

      As I said, I don't know if FOG is capable of doing this since what I want was never in its core design. On the surface it appears to have the right bits in place to make this setup work. Looking back on my original post, I should have created a new one because the full picture is a bit broader than just a storage node syncing data with the master node.

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      Well I guess I'm a bit red-faced; I thought clonezilla was its own thing. I guess I need to download a copy and see if the versions of partclone are the same. While I'm not with the FOG Project (so this is only a guess), I could understand how the version of partclone could be newer/different in clonezilla than in a traditional Linux OS, because the traditional OS version depends on the OS packager to update it. Clonezilla is its own Linux OS variant where the developer has control over the precise version used in the distribution.

      Understand, I'm not saying this is the issue, but I would wonder why you would get different results if both use the same cloning engine.

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      Does clonezilla work correctly for this task? (Just trying to compare and contrast.) FOG uses partclone, whereas clonezilla, well, uses clonezilla to copy the image.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      While this thread has run on a bit I think a lot of great info has been covered.

      I'm thinking I need to rebuild my whole POC setup, because I think the function of the storage node is just for storage (just a guess at this time), and the master node is my current production instance (blindly upgrading svn versions has disabled my production instance in the past). But that's part of running on the bleeding edge.

      What I need for this project is two (or more, depending on scope creep) functioning FOG instances managed from a single master node. These may be connected across a VPN link or an MPLS link. At this time I'm not sure if FOG is the right tool. I'm not saying FOG is bad, just that I'm trying to do something with it that it wasn't really designed to do. The idea of linking the master and secondary nodes via the database connection might prove to be problematic over a WAN connection with latency issues. Again, this is just me thinking about things I've seen in the past on other projects. I do have the cpu bandwidth to spin up two new fog instances to set up the POC in a controlled manner without breaking our production system (which works really well).

      posted in FOG Problems
    • RE: FOG storage node and data replication

      While I can't comment on the FOG code, a lot of systems will launch a process and then keep track of that process via a handle until it stops. In the destructor, they will kill off the task based on the handle created at launch if the application instance dies before the launched process does. I think the intent of the replicator was to have only one instance of the lftp process running at a time, so it wouldn't be too difficult to keep track of the process handle (as opposed to several hundred processes).
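
      In shell terms the idea looks something like this (a sketch of the concept only, not the actual FOG code; MIRROR_CMD is a placeholder for the mirror command):

      lftp -e "$MIRROR_CMD" &                      # launch the copy in the background
      LFTP_PID=$!                                  # remember the handle (PID) of what we launched
      trap 'kill "$LFTP_PID" 2>/dev/null' EXIT     # if we die first, take the child down with us
      wait "$LFTP_PID"                             # otherwise block until the transfer finishes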

      With the current design you normally wouldn't have to start and stop the replicator multiple times, so having multiple instances of the lftp process running should never happen. I'm not seeing the value in putting energy into fixing a one-off issue.

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      I guess I'll have to leave this for FOG support, because what you've done thus far is exactly what I would have done.

      FOG should copy and put back the same image to the same hardware no problem.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      I was able to update the system to SVN 4070 this AM.

      The FOG Replicator service is behaving much better now. Its CPU utilization is now at about the same level as the lftp process, so well done Tom.

      The image files are still syncing between the servers. One thing I did notice about the FOG replicator service is that if you stop and restart the replicator several times, multiple lftp processes end up running. Based on this I assume that when the replicator is stopped, it doesn't kill off the running lftp process. Not an issue under normal conditions, just an observation.
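
      An easy way to confirm the leftovers (just a standard shell check):

      ps -ef | grep [l]ftp    # lists any lftp processes still running after the replicator is stopped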

      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      Ok, that's a bit clearer: you are using FOG as a deep-freeze style system to archive the image, not to deploy a master image to many machines.

      So for this I would use single disk, multiple partitions, not resizable (since you are copying from and restoring to the same machine every time).

      While this is an obvious statement, this (FOG) should work no problem. It's just using partclone to copy the hard drive to an image and then using the same tool to put the image back on the client.

      You say that it gets to classpnp.sys and freezes. Are there any warning messages at all?

      You did make reference to AHCI and legacy, so I can assume that this hardware/disk is not in UEFI mode at all?

      posted in FOG Problems
    • RE: Imaging fails on one machine but worked previously.

      I would also say it's the physical disk, based on what you have posted. I have seen a bad sector on a hard drive cause Ghost and Clonezilla to fail. I have not yet experienced this with FOG (only because with the other tools I've deployed units in the 1000s).

      I would ask 2 questions here.

      1. What is the hardware you are deploying to (i.e. Lenovo M93, OptiPlex 9010, etc.)?
      2. Have you physically replaced the hard drive in this system? (I noted that you mentioned hard drives; do you have more than one in this system?)
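
      If you want to rule out the disk itself before swapping it, a quick SMART check from any Linux boot works too (the device name /dev/sda is just an example):

      smartctl -a /dev/sda | grep -i -E "reallocated|pending|overall-health"    # failing or pending sectors show up here
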
      posted in FOG Problems
    • RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG

      I would make sure to rule out issues with your reference image before focusing on FOG. I can say that I deploy Win7 using FOG without issue.

      We capture using single disk, not resizable, but then expand the partition to consume the entire disk post-deployment using diskpart in the unattend.xml or via setupcomplete.cmd. Either way works well.

      I can also say that when we build the reference image we use a VM client to capture a hardware neutral reference image.

      From a deployment standpoint I would ensure that the reference systems and target systems (hardware) are set up properly. It sounds like you've ruled out the hardware since you are building your reference image and then deploying to the same hardware.

      While this takes time, I would build a reference image, sysprep it, and instead of capturing at this point just reboot the reference image to ensure that it builds correctly on the same hardware. This will make it clear whether FOG is at issue. Use the exact same process, just don't capture and deploy with FOG.

      Based on your error I would say there is a driver issue with your deployed system.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      After about 12 hours of running, the FOG Replicator service is still at 100% CPU utilization. It appears to be working as it should by moving files from the master node to the storage node, so it IS working, just with high CPU usage. I tried to poke around in the code a bit and add 20-second sleep statements to see if I could find where it's looping uncontrolled (just a guess). I suspect it's somewhere after lftp is launched, when it enters a task-wait function that should wait until the lftp file copy is done. But from there I lost the trace (and btw I'm not a programmer, only a good guesser).

      I think I’ll need to leave this to the developers to take a peek at.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      @Wayne-Workman said:

      @george1421 I’m curious how you’re making the clients get said drivers from the storage nodes ? It’s exported as read-only via NFS and the other available option without any changes is FTP.

      Well, that's the bit I haven't worked out yet. I needed to get the drivers to the storage node. On the master node today I'm running a post-install script to copy the correct drivers to the target computer. It's possible that I may not understand the concept of the storage node just yet; I may have to rethink my position. Without the files I can only guess.

      But if I run a full install at the remote site that may address the driver deployment issue.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      Well I guess I just need to set it up and go away for the weekend.

      I just ran out of disk space on the storage node. Looking to see where the space went, I looked into the drivers folder, and the sub-folders and driver files were there. So, circling back to Wayne's first comment about creating a faux drivers image: given enough time, the system as-is will replicate the drivers folder and all sub-files and folders over to the storage node. That still doesn't explain the 100% CPU usage of the FOG replicator service, but the system does work as is.

      Do I think the rsync method is better than ftp? Yes. Do I think I can set up this POC system as-is without much hassle? Yes.

      posted in FOG Problems
    • RE: FOG storage node and data replication

      @Wayne-Workman said:

      @george1421 said:

      (Actually from another recent thread it gave me an idea how to change this multi site POC by doing full installs at each site then pointing them back to the master node for the database information).

      It was originally Tom’s idea. And it’s proven to work. I’ve just been spreading the word.

      Well, what I'm looking at is a multi-site setup where each site would have a local storage node and a central master FOG server at HQ. The idea is to start the deploy from the master node but have the clients deploy from the local storage nodes. But looking into a storage node a bit more, it doesn't look like the PXE environment is set up, or the install isn't complete (though I only did a few quick checks). The idea of doing a full install at the remote sites but having them reference the master node's database is brilliant. That way I have a full FOG install at each site but only one database where everything exists.
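
      The plumbing for that is mostly MySQL access: the master's database has to accept connections from the remote FOG servers. Something along these lines on the master (a sketch; the user name, password, and subnet are assumptions for your environment):

      # allow the remote FOG servers' subnet to reach the fog database
      mysql -u root -p -e "GRANT ALL ON fog.* TO 'fogremote'@'10.20.0.%' IDENTIFIED BY 'SomePassword'; FLUSH PRIVILEGES;"
      # then point each remote server's database host setting at the master's IP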

      If I can get the replication bits to work like I need, I think I’ll have a solid solution.

      posted in FOG Problems