SF = San Francisco
Posts made by BardWood
-
FOG works great in local office. TFTP timeout over long distance link.
Greetings!
I’ve read the TFTP timeout entry here: https://wiki.fogproject.org/wiki/index.php?title=Tftp_timeout… I don’t think it applies in this situation, and I did successfully retrieve ‘undionly.kpxe’ via a Windows TFTP client on the remote subnet, so that bit is working.
Brief topology: I’ve got cross-subnet booting working in SF using MS DHCP to hand out options 66/67, with an IP helper address on the router. The server is x.x.70.x and the clients are x.x.40.x. No issues. In Dublin we replicated the existing SF setup, but we don’t have an MS DHCP server there and use a Cisco switch to handle DHCP instead. The master FOG server is in SF, but there is a storage node in Dublin; the clients and the storage node are part of their own ‘DUBFOG’ group. So the Cisco DHCP scope points its PXE clients at the SF FOG server’s address for TFTP.
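For reference, here is roughly what I mean on the Cisco side. This is only a sketch of an IOS-style DHCP pool; the pool name, subnet, and addresses are placeholders, not our real config:

ip dhcp pool DUB-PXE
 network 10.10.40.0 255.255.255.0
 default-router 10.10.40.1
 ! point PXE clients at the SF FOG server for TFTP (option 66/67 equivalents)
 next-server 10.10.70.10
 bootfile undionly.kpxe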
The attached pic is what happens when it fails: the client sits on this screen for ~30 seconds, then times out and boots from disk.
There’s no actual error displayed; I think it’s just taking too long. I’ve read some articles related to Cisco trunking and STP and am currently investigating that angle. Looking for any advice I can get. I realize 1.3.0 does have the ability for clients to PXE boot off of storage nodes, but that wasn’t working when I last tested it a couple of months ago. ‘/fog’ didn’t make a difference.
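On the STP angle, the thing I’m planning to test is enabling portfast on the client-facing access ports so the port starts forwarding before the PXE ROM gives up. A minimal sketch; the interface name is just an example:

interface GigabitEthernet1/0/12
 switchport mode access
 spanning-tree portfast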
-
RE: FOG Trunk boot loop when PXE booting from storage node.
I didn’t customize anything when installing this but did add ‘/fog’ as my Web root so that should be right.
-
FOG Trunk boot loop when PXE booting from storage node.
To be clear, this doesn’t happen if I’m pointing at the default master. This is trunk build 7102; my FOG portal tells me there are newer builds available, but this was cloned from git about 2 hours ago. Both the default master and the storage node are on the same subnet/VLAN. If I have DHCP hand out the IP of the default master, it works great. Pointing at the storage node’s IP results in ‘Chainloading failed’:
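If I understand the chain correctly, undionly.kpxe hands off to a boot.php on the web server it was fetched from, so my next test will be hitting that URL on both boxes to see what each one actually serves. The path below is my assumption of where it chains, and the IPs are placeholders:

curl -I http://x.x.70.10/fog/service/ipxe/boot.php   # default master
curl -I http://x.x.70.20/fog/service/ipxe/boot.php   # storage node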
-
FOG 1.2.0 - Is there a way to limit bandwidth in FOG?
I can do this via other means, but is there a way to limit the amount of bandwidth FOG uses for unicast imaging? I’m in San Francisco, but I’m going to attempt to image a machine in Singapore. We have quite a bit of bandwidth in SF, but the small sales office in SG has very little. Again, there are other standard ways to do this, but a configuration in FOG would be tidier. Thanks in advance.
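P.S. For completeness, the ‘other means’ I had in mind is just throttling the FOG server’s NIC for the duration of the job with tc. A rough sketch; the interface name and rate are placeholders and would need tuning:

tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms   # cap egress while imaging the SG machine
tc qdisc del dev eth0 root                                              # remove the cap afterwards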
-
Are Fog trunk server + 1.2.0 Storage Nodes compatible?
Greetings,
Just a quick inquiry: can I run FOG trunk on the default master but 1.2.0 on the SNs? Any downsides? Does upgrading the SNs to trunk bring any other gains? I ask because I like some of the features in trunk, but I plan on having 4 SNs globally and upgrading them poses a number of logistical issues (unrelated to FOG).
–bw
-
RE: How to sync StorageGroup masters with default group?
@Wayne-Workman Thank you, Wayne. That really does clear things up. It sounds like the easiest thing to do (short of upgrading to trunk) is what I’ve been doing: do a round of image updates and assign them to the default group; clear ‘is master’ from all other storage nodes and move them into the default group; watch the logs for the sync to complete; then move them back to their respective groups and re-check ‘is master’, since I only have a single SN per group. I only update these images a few times per year, so that’s really not that painful. I could manage rsync scripts or a manual process, but I’d rather let FOG do it, especially if 1.3.0 will be along sometime in the not-too-distant future. Much appreciated!
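P.S. For anyone following along, the ‘watch the logs’ step is just tailing the replicator log on the default master; I’m assuming the default log location here:

tail -f /opt/fog/log/fogreplicator.log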
-
RE: How to sync StorageGroup masters with default group?
@Wayne-Workman Maybe I’m missing something. I created a new image for ‘Optiplex7010’ with no group assignment (so, default). The storage node, ‘Singapore’, is master of its own group but never got the updated ‘Optiplex7010’ image from the default master (verified it exists on the default master and wrote it back to a few machines). Intra-group syncs are working. If I clear ‘is master’ and move ‘Singapore’ into the default group, I can see replication happening in the logs and indeed the image appears on ‘Singapore’. But the default master (the ‘normal mode installed’ FOG server) doesn’t do any replication once I move the ‘Singapore’ SN back to its own group and re-check ‘is master’. In this example, group-to-group replication isn’t happening. Is it supposed to?
It sounds like you are suggesting I create a new group (let’s call it ‘masters’) where I’d leave the default as master and add the storage nodes as members. If you were just telling me how to sync my custom dirs: that was just a byproduct of adding more storage volumes in VMware, and I have moved /images to /fog_images on the default master without issue. That includes the NFS share, perms on /images & /images/dev, the /etc/exports setup, and changing the storage path in the web portal. Does this mean a StorageNode can be a member of multiple groups where it is the master of one of them?
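In case it’s useful, here is roughly what my /etc/exports ended up looking like after the move; I’m assuming the same option list the installer originally wrote for /images, so double-check against your own install:

/fog_images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/fog_images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)

followed by an ‘exportfs -ra’ to re-read it.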
-
RE: I'm stuck! FOG 1.2.0 issues Centos 7. Both server and StorageNode
@Sebastian-Roth Yes and no. I reverted to CentOS 6.7 and all is good. To be fair, there is a big disclaimer at the top of the ‘FOG on CentOS 7’ page, but the fix described there didn’t work for me.
–bw
-
RE: How to sync StorageGroup masters with default group?
@Tom-Elliott Are the multiple groups a feature of trunk? In 1.2.0 I only have a single drop-down. Or do I need to maintain X exact copies of each image, one per storage group, each owned by a different master? The reason I ask is that as soon as I moved my SN to ‘Singapore’ and set ‘is master’, the replication log stopped showing any replication, but it was replicating when all hosts were in the same group.
-
How to sync StorageGroup masters with default group?
From what I’ve read at https://wiki.fogproject.org/wiki/index.php?title=Managing_FOG#Storage_Management, it appears that FOG will only replicate images to other members of a given storage group. If that’s true, I’m looking for a way to sync the /images dir among masters. I know there’s rsync, but…
We are a global company with offices in 6+ locations around the globe but minimal IT staff at the other offices. What I’d like to do with FOG is centrally create all images in SF and push them out to all ‘is master’ SNs, which are the masters of geographically designated StorageGroups (e.g. dublin, zurich, etc.). This should keep remote clients from downloading images from SF, so that all clients get their images locally.
The long-winded question: does FOG have a way to do this natively, or am I stuck managing rsync scripts against the ‘default’ /images store? There will never be a location-specific image, so the image I build in SF for ‘Lenovo X1 Carbon Gen3’ will be used globally.
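If the answer turns out to be rsync, the kind of job I’d end up scheduling from SF is roughly this; the hostname is a placeholder, and note that --delete removes anything on the remote side that no longer exists in SF:

rsync -avz --delete /images/ root@dublin-storage-node:/images/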
-
RE: After deploye grub fog 1.2.0
@Junkhacker This fixed it for me. I changed my exit type from the default (Sanboot style) to ‘GRUB style’; the ‘Exit’ style didn’t work either. Oddly, this FOG server is a new install, post-development (in the ‘IT service’ prep sense). During dev on the old test server we didn’t have this issue. Not sure what changed.
-
I'm stuck! FOG 1.2.0 issues Centos 7. Both server and StorageNode
Greetings all,
Here is an outline of issues plaguing me at the moment. Any assistance or pointers to other references appreciated.
Server:
Unicast image upload/download is working fine over multiple VLANs. I can re-run the install script without issue. I can connect to the ‘fog’ (MariaDB) database from the command line as root, but not as user ‘fog’. The DB DOES have a password, but even when I specify the fog user’s password it won’t let me in.
All services can be started from the command line without error, but when I do a ‘systemctl status FOGxxxx’ (any of the FOG services) I get a bunch of noise about mysql, à la:
[root@SF-FOG-V ~]# systemctl status FOGImageReplicator -l
● FOGImageReplicator.service - SYSV: Startup/shutdown script for the FOG Multicast Management service.
Loaded: loaded (/etc/rc.d/init.d/FOGImageReplicator)
Active: active (exited) since Wed 2016-02-17 16:47:29 PST; 33min ago
Docs: man:systemd-sysv-generator(8)
Process: 1062 ExecStart=/etc/rc.d/init.d/FOGImageReplicator start (code=exited, status=0/SUCCESS)
Feb 17 16:47:28 SF-FOG-V systemd[1]: Starting SYSV: Startup/shutdown script for the FOG Multicast Management service…
Feb 17 16:47:29 SF-FOG-V FOGImageReplicator[1062]: Starting FOGImageReplicator: [ OK ]
Feb 17 16:47:29 SF-FOG-V systemd[1]: Started SYSV: Startup/shutdown script for the FOG Multicast Management service…
Feb 17 16:47:29 SF-FOG-V FOGImageReplicator[1062]: PHP Warning: mysqli::mysqli(): (HY000/2002): Connection refused in /var/www/html/fog/lib/db/MySQL.class.php on line 64
Feb 17 16:47:29 SF-FOG-V FOGImageReplicator[1062]: PHP Warning: mysqli::select_db(): Couldn’t fetch mysqli in /var/www/html/fog/lib/db/MySQL.class.php on line 165
Feb 17 16:47:29 SF-FOG-V FOGImageReplicator[1062]: PHP Warning: mysqli::query(): Couldn’t fetch mysqli in /var/www/html/fog/lib/db/MySQL.class.php on line 89
Feb 17 16:47:29 SF-FOG-V FOGImageReplicator[1062]: PHP Warning: MySQL::sqlerror(): Couldn’t fetch mysqli in /var/www/html/fog/lib/db/MySQL.class.php on line 180
Feb 17 16:47:29 SF-FOG-V FOGImageReplicator[1062]: PHP Warning: array_key_exists() expects parameter 2 to be array, null given in /var/www/html/fog/lib/db/MySQL.class.php on line 150
Feb 17 17:20:24 SF-FOG-V systemd[1]: Started SYSV: Startup/shutdown script for the FOG Multicast Management service…
I upgraded to PHP 5.6 on both master and storage node hoping that would fix things, but alas, no change. The errors for the other FOG services, when invoked with ‘status’, are similar. In all cases, ‘systemctl start FOGxxxx’ starts the service(s) without complaint.
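For what it’s worth, this is how I’ve been sanity-checking the fog DB user from the shell. The GRANT line is only a sketch with a placeholder password, and I believe the password the web code uses lives in /var/www/html/fog/lib/fog/Config.class.php, so the two would need to match:

mysql -u fog -p fog                              # prompts for the fog user's password
mysql -u root -p -e "GRANT ALL PRIVILEGES ON fog.* TO 'fog'@'localhost' IDENTIFIED BY '<fogpassword>'; FLUSH PRIVILEGES;"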
On the master, I’ve added the storage node and am able to see its host stats and disk info as expected. But no replication ever shows up in the logs.
On the storage node, also CentOS 7, no images ever populate /images or /fog_images (which I created). .mntcheck exists in /images and /images/dev (ditto for my second set of folders). I’ve tried it as a member of the default group, and as a member (‘is master’ unchecked) of a second storage group.
I can see the NFS shares of both machines over the network and mount them, and I can FTP to and from both without issue.
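For reference, these are the checks I’m running against the storage node (the IP is a placeholder); they all succeed, which is what makes the lack of replication so confusing:

showmount -e x.x.70.20
mount -t nfs x.x.70.20:/images /mnt
ls -la /mnt/.mntcheck
umount /mnt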
If anyone can give me an educated guess of where to check next, or tell me what other info you need from me, please let me know. Even a pointer to better logging would help; what I’ve found is surprisingly light on details, so I must be looking in the wrong place.
THX all,
Bard