@cjp82placer Do you have a web filter of any kind? You should check it to see if it’s blocking github.
Hello people - let me entertain you with a TV Pilot that I made called Hitchhikers:
RE: Can no longer update using GIT
@cjp82placer OK, since the time has changed, rerun those commands to see whether the issue is resolved or not. Also, add some ping commands, which will test DNS resolution:
# Just check if the curl succeeds or not.
curl https://github.com/ > /dev/null 2>&1; echo $?
# Just check if the curl succeeds or not.
curl https://www.google.com/ > /dev/null 2>&1; echo $?
# Test DNS resolution of google.
ping -c 4 google.com
# Test DNS resolution of github.
ping -c 4 github.com
# Directly ping one of github's IPs.
ping -c 4 126.96.36.199
RE: FOG Storage Replication
Also, what port/protocol is used for storage replication or needed in this scenario?
Depends on how you choose to set it up. Using the multi-master configuration that George explained, you just need FTP for replication, plus port 80 open so the PHP scripts on the remote nodes can listen for requests from the master server.
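For example, on a node running firewalld, opening those two services might look like the following (a sketch, not an official recipe; firewalld itself is an assumption, so adjust for your distro and firewall):
# Hypothetical firewalld example; adapt to your own firewall tooling.
firewall-cmd --permanent --add-service=ftp    # FTP for image replication
firewall-cmd --permanent --add-service=http   # port 80 for the PHP scripts
firewall-cmd --reload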
But with a 20 to 40 Mbps link between your sites, that’s enough to just use FOG in the standard way - one master server with storage nodes at your remote sites - and you can still send specific images to specific locations. You’d set up the location plugin and use FOG’s group-to-group image sharing. There’s no need to export/import anything in this setup, and it’s officially supported.
RE: Fog Installer - Distro check
@developers I’ve looked into the Ubuntu 16.04 problems for the last two hours and have determined that they are intermittently caused by a problematic ppa.launchpad.net server:
The apt-get update command times out, and because it does not complete successfully, many packages fail to install. I’ve also looked over all of the commits in the working, dev-branch, and master branches of the fogproject github repository around the days the Ubuntu 16.04 failures started happening; none of them would cause this.
The workaround: If the installer fails for you, manually run apt-get update until it succeeds. Once it has succeeded, run the installer again and it should work.
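If you’d rather not re-run it by hand, a simple retry loop does the same thing (a sketch, assuming a bash shell and root privileges):
# Keep retrying apt-get update until it exits successfully.
until apt-get update; do
    echo "apt-get update failed, retrying in 10 seconds..."
    sleep 10
done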
RE: FOG Storage Replication
is it possible to have multiple FOG servers connecting to the same storage node?
I think perhaps, because you are new to FOG, you have the terms mixed up. This is understandable. A FOG storage node is one that does not have a web interface. It can respond to TFTP requests (network booting) and imaging tasks. It connects to the FOG database remotely, and it is able to carry out tasks as requested by the ‘master’ FOG server. For example, if the master FOG server deploys an image to a host that is local to a storage node (and the location plugin is configured), that host would use the remote storage node for imaging.

Another thing to understand is that all image captures always go to the master node of that image’s primary storage group, except when the location plugin is configured. That’s a mouthful, yes. This article helps explain.

So you can control everything from a single FOG server - this is part of FOG’s design. But a ‘read only’ node is not. All nodes are under the command of the master FOG server, and any one of them could be configured as a master in one storage group, a non-master in another storage group, or a member of TWO storage groups at once. The way you organize your groups and your masters determines the direction and behavior of replication. Images can also belong to multiple storage groups, which makes the replication model even more flexible. There’s also the ‘multi-master’ implementation, which is not officially supported but which many people have chosen to use.
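If you want to watch replication actually happening, one place to look is the replicator log on the master server (a sketch; the path below is typical of FOG 1.x installs and may differ on yours):
# Follow the image replicator's activity on the master server.
tail -f /opt/fog/log/fogreplicator.log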
I should ask you some specific questions that can help us understand exactly what your needs are. Please try to answer each.
- Is the replication link slower than 1Gbps?
- Do you plan to capture all images from one location and use them at all locations?
- Do you want to limit what images are replicated where, or do you want them all replicated to all locations?
- Do you have control of DHCP at all locations?
- Do you have employees at these various locations that you want to restrict to only their locations?
RE: Connection time out (4c0a6035)
@tlyneyor Glad I could help. I’ve seen duplicates a lot, so I’m really familiar with them and very wary of them.
An easy way to detect rogue DHCP services is to run a packet capture on a Linux box while issuing the commands to release the current lease and get a new one:

dhclient -r; dhclient

In the packet capture, you would see the rogue DHCP server’s IP address and MAC address.
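For example, with tcpdump (a sketch; eth0 is an assumption, substitute your own interface name):
# In one terminal, capture DHCP traffic (ports 67/68) on eth0:
tcpdump -i eth0 -n port 67 or port 68
# In another terminal, release and renew the lease:
dhclient -r; dhclient
# Every server that answers the request will show up in the capture.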
Also, there is a way on Linux to check a specific IP address to see if there are duplicates:
arping -D -q -I eth0 -c 2 192.168.1.250 ; echo $?

If you get a 0 back from that command, it means there are no duplicates; 1 or anything else that isn’t 0 means there are duplicates. Note that eth0 in the command is the name of the interface you are sending the request out of, and 192.168.1.250 is the IP being tested. This does not need to be run on the affected computer, and should probably be executed on a normally functioning system.
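A small wrapper makes the exit code easier to read (a sketch; eth0 and 192.168.1.250 are just the placeholders from the example above):
#!/bin/bash
# Duplicate-address check using arping's -D (duplicate detection) mode.
# Exit code 0 means nothing answered; anything else means a conflict.
if arping -D -q -I eth0 -c 2 192.168.1.250; then
    echo "No duplicates found for 192.168.1.250"
else
    echo "Duplicate detected for 192.168.1.250"
fi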