The FOGImageReplicator runs as a service, not a cron job. Look in the /etc/init.d directory for the service script. (The next command is for RHEL-based systems.) Run chkconfig --list FOGImageReplicator to see whether the image replicator service is set to start automatically. On Debian-based systems chkconfig may not be installed by default.
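If it isn't set to autostart, the commands below are a sketch of how to check and enable it. The service name FOGImageReplicator is from the stock installer; the Debian lines assume sysvinit and are rough equivalents, so verify against your install:

```
# RHEL/CentOS: list autostart state, enable, and check the service
chkconfig --list FOGImageReplicator
chkconfig FOGImageReplicator on
service FOGImageReplicator status

# Debian/Ubuntu (sysvinit): rough equivalents
update-rc.d FOGImageReplicator defaults
/etc/init.d/FOGImageReplicator status
```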

Posts
-
RE: Multiple TFTP servers multi subnet fog 1.2.0
-
RE: Windows7 restarts at bootup when it reaches classpnp.sys after being imaged with FOG
I guess I have to just throw this out since you are starting with a new deployment.
Is there any thought of changing your OS from Ubuntu to CentOS? In my test environment I’ve spun up about 10 CentOS 6.7 systems with FOG in the last 2 weeks and deployed all of the trunk images without any deployment issues. Monitoring this forum for the last few weeks, the majority of the install issues appear to be with the Debian-based systems. I’m not saying that one OS is better than the other, but if the goal is to test FOG, then switch OS for the test and work out the kinks with the specific OS deployment later.
-
RE: Upload image to FOG not working
Another quick command you can run from a linux command shell would be
showmount -e localhost
That should show you the nfs shares currently enabled. If the /images share is available, then check into directory file ownership and permissions.
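If the export is listed but uploads still fail, a quick check of the usual suspects might look like this. The /images path is the FOG default and .mntcheck is the marker file FOG looks for in the share; verify both against your install:

```
showmount -e localhost       # confirm the /images export is visible
ls -ld /images /images/dev   # inspect directory ownership and permissions
ls -l /images/.mntcheck      # FOG expects this marker file in the share
```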
-
RE: Trunk Update Failing
For clarity: I found those commands in <fog_trunk>/lib/common/functions.sh, in case you want to trace your way to an answer.
-
RE: Trunk Update Failing
I’m just pecking in the dark here, but looking at the installer script (actually a called subfunction), this is what is going on in that area:
add-apt-repository -y ppa:ondrej/php5-5.6
if [ "$?" != 0 ]; then
    apt-get update
    apt-get -yq install python-software-properties
    add-apt-repository -y ppa:ondrej/php5-5.6
fi
I might suggest that you try the apt commands one at a time and see which one fails. Again, I’m just guessing here; these commands will not do anything destructive.
-
RE: Trunk Update Failing
I guess I’m going to have to defer to one of the developers on this one. It almost sounds like there is a bad setting in the .fogsettings file, because it’s trying to use a resource that it doesn’t have access to.
Was this a fully functional FOG server at any time?
-
RE: Trunk Update Failing
Interesting (just questioning): you say you have direct internet access, yet svn did not work?
SVN should work right out of the box without any changes; if you had a proxy server between your FOG server and the internet, you would need to make a few adjustments. Something is up here. The first thing the update tries to do (at this point) is talk to your distribution repository server(s).
-
RE: Trunk Update Failing
Sorry for the 20 questions,
What OS and version is this on?
Can you try to use subversion to check out the current trunk? It sounds like the installer script can’t reach the correct repositories. Have you done an OS update (apt-get/yum)?
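A manual checkout is a quick way to test this. I believe the SourceForge URL below is the one the installer used at the time, but treat it as an assumption and double-check it for your release:

```
svn checkout https://svn.code.sf.net/p/freeghost/code/trunk/ fog_trunk
```

If this fails with a connection error, the problem is network reachability rather than FOG itself.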
-
RE: Trunk Update Failing
Just for clarity how do you have internet access on this box? Does it have direct internet access or is this box sitting behind a proxy server of some sort?
-
RE: Active Directory & Specific OU
If I understand what you are saying, then in the fog settings I would set the OU to your computer’s OU; then, when you set up the host, change it to the proper location for that host. You can do it one by one in each host’s active directory settings, or by applying the setting to a group of computers all at once. (As you noted, the group does not retain the new setting, but all hosts that are members of that group have the new setting applied.)
-
RE: SVN 5195 not booting correctly via ipxe
I guess I’ll have to defer to the developers on this one, sorry. This is a bit beyond what I know.
-
RE: Active Directory & Specific OU
I would suggest that you get one working. Once you have one working then you can use the update group function to change all the rest to the correct OU, and finally update the defaults in the fog settings so any new systems will have the right settings.
-
RE: SVN 5195 not booting correctly via ipxe
Ok so they are not installed in the path I posted? You can pick them up from the sourceforge site or just rerun the installer again.
-
RE: SVN 5195 not booting correctly via ipxe
I have two questions for you:
- Do you have the kernels installed in the right path?
- Is this the first time you upgraded from 1.2.0 to a SVN trunk release?
The reason why I ask: I had an issue where the FOG server was behind a proxy server. When I applied the svn trunk update, almost everything installed fine, except the new fog client wouldn’t download and I couldn’t pxe boot because the boot images were missing. The issue was that the svn installer uses curl to get the fog client and boot image files, but since curl didn’t know about the proxy server settings it would just fail to get them. I had to set the environment variables to point to the proxy server, and then everything worked OK.
(edit) The boot files need to be in this path: /var/www/html/fog/service/ipxe and the file names are bzImage and init.xz. If they are missing FOG will display the boot menu but if you select anything the target computer will just restart (/edit)
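For anyone hitting the same thing, the workaround was along these lines; the proxy address below is a placeholder, so substitute your site's proxy host and port:

```shell
# Placeholder proxy address: replace with your own proxy host and port
export http_proxy=http://proxy.example.com:3128
export https_proxy=$http_proxy
# curl honors these variables, so re-running the installer after
# setting them should let it fetch the client and boot files
echo "$https_proxy"
```

Setting them in the same shell before re-running the installer was enough in my case.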
-
RE: Active Directory & Specific OU
I can tell you yes, the OU needs to be in LDAP format, and it appears you have the right format. As long as the maintenance OU exists under your main OU (as Computers does), it should place the computer in the right spot. From one of my installs I have (ou=Desktops,ou=Computers,ou=NYC,ou=US,dc=domain,dc=local).
Since your target computer is ending up in the Computers OU you must have the right information to join the computer to the domain so the FogCrypt part is right too.
When you make a change to the group… everything disappears. That one got me too. FOG applies the information to the host based on the group, but the group doesn’t keep the settings. It would be logical for the group to retain this setting, but it doesn’t; it is just used to apply the values to the host. I would go into the target host you are interested in and check the AD settings there. Make sure the proper OU settings are there, then redeploy the host again.
-
RE: Images not being Deployed
I guess I should clarify. We use MDT to create our reference image on a VM with a 40GB disk. During the MDT deploy task we apply all of the windows updates. If we are making a fat image then we install the additional software at that time (using a fat image task sequence in MDT). When we have the image the way we want it, then we sysprep it and capture the image. At this capture point we still only have a 40GB disk. So we deploy that 40GB disk to the target computer and then once on the target computer we extend the disk with diskpart.
We do it this way because we rebuild the golden image each quarter. If you have everything set up to create your reference image automatically, the actual hands-on time is very small. What takes the most time from start to finish is Windows updates. So far with the Windows 7 updates it takes overnight to apply them all (~14 hrs).
While I got a bit off point, the key is to deploy a smaller image to your client computers than their smallest disk size then extend their logical disk to the physical size during the cleanup process.
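As a sketch of that extend step, a SetupComplete.cmd snippet driving diskpart might look like the following; the volume letter and temp-file path are assumptions, so adjust for your image:

```
rem Build a diskpart script that grows C: to fill the physical disk
(
  echo select volume c
  echo extend
) > "%TEMP%\extend_c.txt"
diskpart /s "%TEMP%\extend_c.txt"
```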
-
RE: Images not being Deployed
@jquilli1 said:
I was told to use “Multiple Partition Image - Single Disk (Non-resizable)” if I’m capturing an image that’s Windows 7 or above. Have I been mistaken this whole time?
While I quickly scanned this thread, I didn’t see what client OS you are deploying. I can say that we deploy “Multiple Partition Image - Single Disk (Non-resizable)” to all of our Win7 and Win8.x (and soon Win10) systems (MBR only). The one thing that we DO is create our reference image on a VM with a small hard drive (40GB); that way we are sure it will deploy correctly to any hardware we might have in the future. 40GB is sufficient for Windows, updates, and core applications. When we deploy that 40GB image to a computer with a 128GB (or larger) drive, initially the logical hard drive will be 40GB. In the SetupComplete.cmd file we launch a command script that uses Windows’ diskpart.exe utility to extend the logical drive to the size of the physical disk. While we haven’t had to do that with Linux, there are commands to do that too.
To date we’ve deployed several hundred systems using this method, with FOG and a few other deployment tools.
-
RE: Create the concept of a ForeignMasterStorage (deployment) node
At the risk of extending this feature request even more…
Please understand I’m not trying to be difficult, I truly want to understand if what I want to do is possible. I think we have a communication misalignment. I’m not doing a very good job explaining the situation because I keep seeing the same results (maybe that is the only answer, I don’t know).
But I’m assuming from your context that in my drawing below there is one full deployment server in that network, with the rest being storage nodes. Is that a correct assumption?
I understand the function of the location plugin: it allows you to assign storage groups and storage devices to a location and then link a host to a location so it knows where to get (and, if necessary, put) an image. I get that. I’ve been using FOG for quite a while.
The issues I’m seeing here are these:
- The storage nodes are not fully functional deployment servers. They are missing the tftpboot directory. While they do have the pxe boot kernel and file system, they alone cannot provide pxe booting services for a remote site.
- The storage nodes do not appear to have a sql server instance running, so I assume they are reaching out to the Master node’s database for each transaction. Historically I’ve seen this be an issue with other products as they try to reach across WAN links for transactional data.
- There is no local web interface on the storage nodes, so all deployment techs from every site must interface with the HQ Master node. This shouldn’t be an issue, since the web interface is very light as opposed to some other Flash or Silverlight based management consoles.
- While this is not a technical issue, it’s more of a people issue: since you will have techs from every site interfacing with a single management node, it’s possible for one tech to mistakenly deploy to (i.e. mess up) hosts at another site, since there is no built-in location awareness in regards to their user accounts.
- On the deployed hosts, where does the fog service connect to? Is it the local storage node or the Master node?
- Storage nodes can only replicate with the Master node; i.e. if there are two storage nodes at a remote site, one storage node cannot get its image files from the other storage node at that site. All images must be pulled across the WAN for each storage node.
- Multicasting is only functional from the Master node. So in the diagram below only the HQ could use multicasting to build its clients. (edit: added based on a current unrelated thread)
The fog system is very versatile and you guys have put a LOT of effort into it since the 0.3x days. And you should be acknowledged for your efforts. Understand I’m not knocking the system that has been created or your time spent on the project.
Having worked through this post, I can see that having a single master node with the rest being storage nodes would work if:
- The /tftpboot directory was included in the replication files from the master node and the tftp service was set up in xinetd. (Actually this could be built into a storage node deployment by default, by having the service and tftpboot folder set up even if it isn’t used in every deployment. There is no downside IMO.)
- The user profile was location aware, to keep users from making changes to hosts in other locations. The location awareness must have the ability to assign users who have global access for administration purposes.
- The storage nodes would have to be aware of latency issues with slow WAN links, and/or not break completely with momentary WAN outages.
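For reference, the xinetd piece would be roughly the stock tftp service definition. This is a sketch assuming the usual /tftpboot layout and in.tftpd server path, not the installer's exact file:

```
# /etc/xinetd.d/tftp (sketch)
service tftp
{
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /usr/sbin/in.tftpd
    server_args = -s /tftpboot
    disable     = no
}
```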
-
RE: Create the concept of a ForeignMasterStorage (deployment) node
@Joseph-Hales said:
If you are not updating images that often it might be more logical to sneaker-net images to the other site when you make changes.
Good point; it just may be easier and quicker to throw the image on a flash drive and overnight it to the other sites if transfer speed is required. But then there are more hands-on steps at each site to import the image and create the DB entries.
While it’s clear that the current FOG trunk can do this, right now the how is missing from this discussion.
-
RE: Create the concept of a ForeignMasterStorage (deployment) node
@Wayne-Workman said:
But I wanted to point out that a typical 16GB (compressed size) image, pushing one copy of the image to one other node across a 1.5Mb/s link will take roughly 24 hours, and that’s if you have 100% of the 1.5Mb/s dedicated to the transfer.
Have you thought about this? How big are your images?
I selected a network connection that was artificially low specifically for the POC. I see network latency being a real issue with a distributed design.
Our thin image (Win7 only + updates) is about 5GB in size and our fat image is over 15GB. At 1.5Mb/s I would suspect that we would have ftp transfer issues with file moves taking longer than 24hrs to complete. But that is only speculation.
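To sanity-check the transfer-time figure quoted above, the best-case arithmetic (ignoring protocol overhead) can be done right in the shell; the numbers below use the 16GB example over a dedicated 1.5Mb/s link:

```shell
# best-case seconds = GB * 1024 (MB) * 8 (bits per byte) / link Mb/s
# the 1.5 Mb/s rate is scaled by 10 to stay in integer arithmetic
gb=16
mbps_x10=15
seconds=$(( gb * 1024 * 8 * 10 / mbps_x10 ))
hours=$(( seconds / 3600 ))
echo "${hours} hours"   # a 16GB image needs roughly a day on this link
```

Swapping in gb=5 gives the thin-image case, still several hours on a link that slow.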
It’s good to hear that FOG could do this without any changes.