
    Posts made by george1421

    • RE: Routing and installation problem

      @DZKeeper Well, this is disappointing; it's still not working, when it should be.

      The reason I wanted to see the ipconfig /all output was just to confirm that the default gateway for the LAB LAN is the LAB LAN NIC of the FOG server.

      Also, from the business side, I wanted to make sure there was a route telling business computers how to reach the computers beyond the FOG server. My intuition is telling me it's a routing issue and not related directly to the FOG upgrade, since FOG doesn't mess with iptables (actually, one of the setup prerequisites is that you must disable the firewall altogether, as well as SELinux).

      If I had to go with my intuition (assuming routing was working before you upgraded FOG), I would almost suspect that the ip_forward setting is disabled. This command should return 1 (enabled): cat /proc/sys/net/ipv4/ip_forward
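      The check above can be scripted; a minimal sketch (reading the flag needs no special rights, but enabling it does need root):

      ```shell
      # Read the kernel's IPv4 forwarding flag: 1 = routing enabled, 0 = disabled
      val=$(cat /proc/sys/net/ipv4/ip_forward)
      echo "ip_forward = ${val}"

      # If it came back 0, enable it at runtime (needs root):
      #   sysctl -w net.ipv4.ip_forward=1
      ```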

      Can you confirm that the IP addresses of the FOG server haven't changed for either NIC adapter?

      posted in Linux Problems
    • RE: Routing and installation problem

      @DZKeeper OK, now we can rule out forwarding (sorry this is so drawn out; solving problems as a thread discussion adds a certain amount of delay).

      OK, now that iptables is out of the way: from a computer on your LAB network, can you ping the business LAN interface (NIC) of the FOG server? This will test routing on the Linux box.

      Also do the same from the business side: ping the LAB LAN network interface of your FOG server. I'll assume that from the business LAN you can already ping and get a response from the business LAN NIC of your FOG server.

      Also, from a computer on the LAB LAN, can you post the output of ipconfig /all here? And could you also post the IP addresses of both the LAB LAN and business LAN interfaces of the FOG server?

      posted in Linux Problems
    • RE: Routing and installation problem

      @DZKeeper That FORWARD policy is still DROP.

      Let's try this one: iptables -P FORWARD ACCEPT. That should change the FORWARD policy to ACCEPT and pass all data through your FOG/Linux router.
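      As a sketch, the policy change and a scripted check against the iptables -L header might look like this (the sample line is hard-coded below so the snippet is self-contained; the real commands need root):

      ```shell
      # The fix itself (needs root): make the FORWARD chain pass routed traffic
      #   sudo iptables -P FORWARD ACCEPT
      #   sudo iptables -L FORWARD    # header should now read "Chain FORWARD (policy ACCEPT)"

      # Sample header line as printed while the policy is still DROP:
      sample='Chain FORWARD (policy DROP)'

      # Scripted check: flag a DROP policy on the FORWARD chain
      echo "$sample" | grep -q 'policy DROP' && echo 'FORWARD chain is dropping routed traffic'
      ```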

      posted in Linux Problems
    • RE: Routing and installation problem

      @DZKeeper OK, how about iptables -F?

      I want those firewall rules to have a default policy of ACCEPT so we can continue testing. The FORWARD chain governs data passing through the Linux router.

      posted in Linux Problems
    • RE: Routing and installation problem

      @DZKeeper Maybe Xubuntu is systemd-based and not SysV. (Sorry, I'm a RHEL guy, not a Debian/Ubuntu one.)

      sudo service firewalld stop

      I just found these instructions for Ubuntu 14.04 too:
      sudo ufw disable

      Sorry for the runaround, but RHEL and Ubuntu are just a bit different.

      posted in Linux Problems
    • RE: Routing and installation problem
      • tracert -d will say destination host unreachable at first hop

      Then, just to be clear: the target computers on the LAB LAN can ping the FOG server, just not through it (that may be governed by the FORWARD chain).

      posted in Linux Problems
    • RE: Routing and installation problem

      @DZKeeper That FORWARD chain basically accepts everything by rule, even though the default policy is DROP.

      If you issue the command sudo service iptables stop and then rerun iptables -L, all chains should show policy ACCEPT, or it may tell you that iptables is not running.

      posted in Linux Problems
    • RE: Routing and installation problem

      OK, as for the routing issue:

      Can the FOG server (which is acting as a router) reach the internet?

      From the FOG server, make sure its default route points to a router that has internet access; confirm that with traceroute.

      Make sure the FOG server can ping in both directions.

      To turn a multi-homed (more than one NIC) Linux box into a router, you need to enable the ip_forward kernel parameter. With ip_forward set to 1, the Linux computer will pass traffic between its interfaces.
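      A minimal sketch of checking and persisting that setting with sysctl (the persistent edit and reload need root, and the file paths shown are the usual defaults, which may differ per distro):

      ```shell
      # Show the current value (equivalent to reading /proc/sys/net/ipv4/ip_forward)
      sysctl -n net.ipv4.ip_forward

      # To make it survive a reboot, add this line to /etc/sysctl.conf
      # (or a file under /etc/sysctl.d/):
      #   net.ipv4.ip_forward = 1
      # then reload with:
      #   sudo sysctl -p
      ```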

      If the FOG server (acting as a gateway) can ping an internet device, and it can ping devices on the lab LAN, then I would check whether (for some reason) the firewall has been enabled on the FOG server. The command sudo iptables -L should return three chains, all with policy ACCEPT. If you aren't sure whether the firewall is enabled, post the output here and I will tell you.
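      For reference, here is roughly what a wide-open iptables -L looks like when no firewall rules are loaded: three empty chains, each with policy ACCEPT (printed from a heredoc here so the snippet stands alone):

      ```shell
      # Roughly what `sudo iptables -L` prints when no firewall is in the way:
      # three empty chains, each with policy ACCEPT.
      cat <<'EOF'
      Chain INPUT (policy ACCEPT)
      target     prot opt source               destination

      Chain FORWARD (policy ACCEPT)
      target     prot opt source               destination

      Chain OUTPUT (policy ACCEPT)
      target     prot opt source               destination
      EOF
      ```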

      From a computer on your LAB LAN, do a tracert -d <ip_address_on_business_LAN> to see where your data packets are really heading.

      posted in Linux Problems
    • RE: Scripting keeping multi-site-master-setup images updated.

      Tom, understand this is just a brainstorming session right now to throw ideas around. At this point we are only talking; nothing is being started.

      I’m fairly sure that replication does have a hook now. Mind you, it’s based on the nodes/groups rather than the replication process itself.

      The hook (or attachment point) I was looking for is the moment when the object has been fully transmitted to the remote storage node, before the replicator loops back to the top to start on the next object. At that point the replicator knows what it just replicated and to which storage node, so it would seem trivial to make a URL call against that remote storage node (understand I'm using the term storage node interchangeably with a remote FOG server at this point). If the remote storage node is a full FOG server, it would process the URL message and update its local database.

      That all said, hooks work at the server they’re running from. While it is possible to do a hook to perform this, I think it might make more sense to use the “Full server method” but connect to the “main” server. On the main server, create the groups and nodes you need and make your adjustments.

      Yeah, I agree. I kind of covered that point above.

      What will this do? It will allow any entry on the “Main” server (images, groups, hosts, printers, etc…) to be immediately available to ALL storage servers at ALL sites.

      It will make images and snapins available to all storage nodes in the storage group after replication. Today I have a remote FOG server connected using the existing storage node technology, and everything replicates just fine with the current methodology. The issue is that the remote FOG server's technicians can't see the images sent from HQ until I export the image information from the root FOG server's web GUI and then import it into the remote FOG server's web GUI. The goal is to eliminate this step.

      The second (different but connected) issue is with image replication. Some of these remote FOG servers sit beyond a slow WAN link, where it might take several hours for an image to reach the remote node (storage node or FOG server). It's not so much of an issue for a new image, but if we update an image, the way the replicator currently works means someone at the remote FOG server could try to deploy that image even though it is only partially replicated to the remote site. Using the hook points and remote URL calls, we could disable the image on the remote FOG server by setting it to disabled in the database, and then re-enable it once the replication is done. (Ideally we would want a replication-in-progress flag, but I'm trying to stay within the framework that has already been set up.)

      Because the other nodes are “full on servers”, the ipxe and default.ipxe will be loaded from the proper node.

      Right. In this multi-master setup the remote FOG servers may or may not be aware there is a superior node in their configuration. The nodes would operate independently of the superior node. All images and snapins can be constructed, tested, and then released from the superior node, letting the replication process send the [object] and its database information to the subordinate servers. Expanding this out, you could build a massive FOG server structure with each node operated independently of the others, while still retaining the concept of traditional storage nodes, because they have value for on-site load balancing.

      I hope I haven't made this too complex; while I tabled the idea for a while, it has been rolling around in the back of my head. There are others who could benefit from this type of setup, and (in my mind) getting to this place from where we are today is just a small jump without any major code alterations; 90% of what we need is already in the box today.

      Ugh, this editor has mangled my response; I'm working on getting the whole post to show.
      It doesn't like square brackets around words. It ate half of my post because I used square brackets! Ugh.

      posted in General
    • RE: Scripting keeping multi-site-master-setup images updated.

      @Tom-Elliott We were talking about how to solve a condition where, for technical or political reasons, a company has two or more standalone FOG servers: how can images be developed on a root FOG server and replicated to all other FOG servers in the storage group? This can happen today with the current replication setup by adding the remote FOG servers as “storage nodes” even though they are full FOG servers. The singular issue is how we update the images and snapins tables on the remote FOG servers. I can do this today by exporting the image settings from the root FOG server and importing that exported file into the remote FOG servers, but that is very manual. So we were discussing how the root FOG server could send messages instructing the remote FOG servers to update their local images and snapins tables.

      When I was actively thinking about this, the idea was to write a hook for the replicators that would be called when a replicator finished moving one object to the remote nodes. That hook could make a remote URL call to the remote FOG server to update its database with the associated image/snapin settings.

      At this point we were talking about the communication format of this remote URL call; the thinking is that the call should send its data in JSON format.
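      As a sketch of what such a JSON message might look like: the field names and the endpoint URL below are entirely hypothetical (nothing like this exists in FOG today), and the payload is validated locally before any send:

      ```shell
      # Hypothetical replication-complete notification the master could POST
      # to a remote FOG server. All field names are invented for illustration.
      payload='{
        "event": "replication_complete",
        "object": "image",
        "name": "win10-base",
        "node": "FOG-NYC"
      }'

      # Sanity-check that the payload is valid JSON before sending it
      echo "$payload" | python3 -m json.tool >/dev/null && echo 'payload is valid JSON'

      # The actual call might then be something like (hypothetical endpoint):
      #   curl -s -H 'Content-Type: application/json' -d "$payload" \
      #        http://remote-fog/fog/management/replication.php
      ```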

      Understand this is just a general discussion of what could be done. I think the last time I talked with you about this idea, you said the replicators didn't have that hook available, so that is where my idea stayed (in dreamland).

      posted in General
    • RE: Scripting keeping multi-site-master-setup images updated.

      @Wayne-Workman There should be PHP libraries to encode and decode this format. Node.js uses this format quite a bit too.

      posted in General
    • RE: Scripting keeping multi-site-master-setup images updated.

      I can say I'm a bit out of my element when it comes to this RPC-like communication (I would assume the messages should use JSON for future-compatibility reasons), but I understand the intent of what you are saying. In my mind (back when I was thinking about it), I thought that if I could write a hook keyed to when the image replicator finished moving a file to the remote node, it could call a web page on the remote node to insert the DB record for the image it just transferred. For a traditional storage node the web page might just return true or ignore the URL call, but a real FOG server would process the call and add the image definition to the remote FOG server.

      Along the same lines, if you are working with a multi-site FOG install and your site-to-site links are slow (i.e. 1.5 Mbps MPLS links), it may take days for an image to replicate from your HQ site to a remote site; somehow that image needs to be blocked from deployment while it is being replicated. The master node could send a web page call to the remote node to disable the image during replication and then re-enable it once the replication job has completed.

      posted in General
    • RE: Fog Replication and best config setup for multisite masters

      Back on point to the OP's questions.

      Your goal is not attainable with the current FOG design. Replication happens only one way: FOG Server -> storage node, or FOG Server A -> FOG Server B. It is not an n-way replication model.

      1. Wayne has a script for that. You can reinstall by updating the .fogsettings file and then running the installfog.sh script, but you may have to manually edit a few settings via the GUI, because I don't think the installer will update all DB records if they already exist.
      2. If you use the traditional model you will have one master node and a storage node at each site. That storage node can be the site's tftp/PXE boot/imaging server. There will just be no local GUI to manage that storage node; all management happens on the master node at HQ.
      3. For the log files, I don’t have an answer.

      Issue:
      There is a replication/replicator log file on the master node. You need to ensure the storage nodes are defined correctly on the master node, with the proper Linux fog user ID and password for each storage node.
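      To sanity-check replication from the master node's side, a quick look at the replicator log helps. A sketch, assuming the usual log location (adjust the path if your install differs):

      ```shell
      # Typical replicator log location on a FOG master node (may vary by version)
      LOG=/opt/fog/log/fogreplicator.log

      if [ -f "$LOG" ]; then
        tail -n 50 "$LOG"    # recent replication activity, including auth errors
      else
        echo "no replicator log found at $LOG"
      fi
      ```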

      posted in FOG Problems
    • RE: Fog Replication and best config setup for multisite masters

      @Wayne-Workman I think this is a valid discussion (probably not in this thread, since we both can post walls of text). But my best guess is that with more than one full FOG server involved, each FOG server will own its own SSL key, and that may not allow the cross-site communications, since it will be two FOG servers talking and not a FOG server and a storage node (which uses the FOG server's key).

      posted in FOG Problems
    • RE: 7156 Uefi pxe DHCP error

      @dureal99d OK, post the pcap file here and I'll look at it. I don't often recommend that people use Wireshark, because it is powerful but super confusing if you don't know what you are looking at.

      posted in FOG Problems
    • RE: Fog Replication and best config setup for multisite masters

      @Wayne-Workman You are right, I need to watch my words. A multi-master setup is not a supported configuration, but it worked for me. YMMV.

      I can explain how I use this setup and a bit of why, but let me say this: FOG was originally developed and intended (IMO) as a single-site local image deployment tool. Moving it to the enterprise level was not (and is still not) the focus of the developers. (Understand this is my opinion only and not a reflection of the current state of FOG, its developers, my role as a Moderator, or FOG's usefulness.) To make FOG really enterprise ready, the FOG Project needs a few more developers (i.e. the current core team can't do everything) who are skilled in multi-site organizations. The current structure of a single master node and multiple storage nodes is really geared towards a single (possibly large) site and not a dispersed organization. The storage nodes don't have their own databases and cannot stand alone if needed; they need direct and immediate access to the master node (typically at HQ, connected over a low-bandwidth link).

      The second issue with FOG is that the level of access control is very limited: you are either an administrator with full access to everything, or a mobile user who can only deploy. In a multi-site situation you may need to limit certain administrators to specific deployment servers, or to limit which systems they can deploy images to. I would hate to have an IT tech at site A accidentally deploy a new image to an unsuspecting computer/user at site B; the current level of access control would allow that.

      To address those concerns, I (myself, not as a Moderator) took the existing FOG system and twisted it a bit to work the way I needed it to in my environment. Eventually this multi-master setup could become a real thing with a little coding, but today it is just something I concocted that is not supported by the FOG Project.

      (Be aware that this is a totally made-up organization.) To keep things simple, let's say I have three locations, NYC, ATL, and LA, with ATL as HQ. At each location I have a fully functional FOG server; it is the master node for its site. At LA (since it is a big campus) I also have a storage node, so on the LA FOG server (master node) I have a storage group called… “LA storage group”. Now at ATL (HQ) I have two fully functional FOG servers, FOG-ATL and FOG-DEV. On the FOG-ATL server (the FOG deployment server for the ATL site) there is a storage group called “Biz storage group” that includes FOG-ATL, FOG-LA, and FOG-NYC. And finally at HQ I have a storage group created on FOG-DEV, “Dev storage group”, that includes FOG-DEV and FOG-ATL. So now you see I basically have three storage group rings.

      1. FOG-DEV (master node) and FOG-ATL (marked as a storage node even though it is a full FOG server)
      2. FOG-ATL (master node), FOG-NYC, and FOG-LA (both marked as storage nodes even though they are full FOG servers)
      3. FOG-LA (master node) and STORAGE-LA (a traditional storage node setup)

      This setup functions just as FOG intended for replication. Replication always flows from the master node of a storage group to the storage nodes in that group. It only happens one way; this is not two-way replication. You must always use the top-down model.

      So now, with this setup, if I drop an image on FOG-ATL, that image will be replicated to all FOG servers and storage nodes in my network with the exception of FOG-DEV. That is because in the “Dev storage group” FOG-ATL is a slave node and FOG-DEV is the master node. As I said before, replication only happens top down: master to slave or storage nodes.

      I'll try to wrap this up quickly, because I see this is more at the level of a tutorial than something embedded in a post.

      In my environment I create the master images on FOG-DEV. I configure the images (when I create them) not to replicate from FOG-DEV until they have been approved. We do all of the development and certification of images on FOG-DEV (we also deploy to the test lab from FOG-DEV). Once an image has been approved, we update it to allow replication. This triggers the image to be replicated first to the FOG-ATL server and then from FOG-ATL to FOG-NYC and FOG-LA.

      Now here is the manual part. We need to synchronise the image databases between all of the FOG servers. Just because replication happens doesn't mean the FOG database knows about the images. The images will be copied to all FOG servers in this setup, but we have to export the image definitions on the FOG-DEV server and then import them into all the other FOG servers on our network via the web GUI. We can update images on our FOG-DEV server and those changes will replicate, no problem. We only have to log into each FOG server when we add a new image, because that means we need a new record in each FOG server's database.

      This environment does work, as long as you accept the caveat of the limited manual intervention needed to update the image table. As I think about it, I should create a tutorial to explain this a bit better (as I did with the FOG-PI setup), because I think this could be a supported “thing” without much programming. I also realize the developers are working hard on FOG 2.0, so taking time to create this multi-master configuration is not in their area of focus right now.

      posted in FOG Problems
    • RE: 7156 Uefi pxe DHCP error

      @dureal99d OK then, I guess you get to learn how to run tcpdump and then read the output.pcap file with Windows-based Wireshark.

      This is an example of a successful PXE boot (almost: I was testing and sent a file name, 9snp.efi, which did not exist on the FOG server, but again, I was testing something; the important part is seeing the DHCP proxy request on port 4011 and then the target attempting to pull 9snp.efi using tftp). The bits you are interested in are the flow of communication between the target computer 192.168.112.16, the FOG/dnsmasq server 192.168.112.24, and the SOHO router 192.168.112.1.

      0_1476031191187_uefi_02.pcap
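      A capture along these lines can be taken on the FOG/dnsmasq server itself; a sketch (the interface name eth0 is an assumption, and the capture itself needs root):

      ```shell
      # DHCP (67/68), TFTP (69) and proxyDHCP (4011) cover the start of the PXE
      # conversation. Note TFTP data replies move to ephemeral ports, so this
      # filter may only catch the first packet of each transfer.
      FILTER='port 67 or port 68 or port 69 or port 4011'
      echo "capture filter: $FILTER"

      # Actual capture (needs root; replace eth0 with your interface):
      #   sudo tcpdump -i eth0 -w output.pcap $FILTER
      # Then copy output.pcap to a Windows box and open it in Wireshark.
      ```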

      posted in FOG Problems
    • RE: Compiling dnsmasq 2.76 if you need uefi support

      @Wayne-Workman I updated #11 to hopefully clarify what my intent was.

      posted in Tutorials
    • RE: Compiling dnsmasq 2.76 if you need uefi support

      @Wayne-Workman said in Compiling dnsmasq 2.76 if you need uefi support:

      Steps 11 - 13 are confusing to me. You found that your dnsmasq binary is installed at /usr/sbin/dnsmasq but you changed the makefile’s prefix to be /usr

      Also, wiki worthy

      If you look in the Makefile, the prefix is the base of where stuff gets installed. The lines just below it show that PREFIX is used for the BINDIR and MANDIR variables. I used to do this kind of stuff all the time back in the early days (before the internet), so I sometimes forget that I need to add a bit of detail that I just intrinsically know.

      I needed to find where the current dnsmasq binary is located, because the default for the dnsmasq source code would have been /usr/local/sbin instead of where the distribution package placed it, /usr/sbin. The issue is that if I had not changed this line, two dnsmasq binaries would have been installed, and only the search path would determine which one actually got called when the service started. That is a bit too random for me. So that is why I updated the script to just overwrite the existing dnsmasq program.

      PREFIX        = /usr
      BINDIR        = $(PREFIX)/sbin
      MANDIR        = $(PREFIX)/share/man
      LOCALEDIR     = $(PREFIX)/share/locale
      BUILDDIR      = $(SRC)
      DESTDIR       =
      CFLAGS        = -Wall -W -O2
      LDFLAGS       =
      COPTS         =
      RPM_OPT_FLAGS =
      LIBS          =
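      The PREFIX-to-BINDIR relationship is plain variable expansion; a tiny shell illustration (the make commands are the usual override pattern, shown commented and not verified against every dnsmasq release):

      ```shell
      # Mirror the Makefile's expansion: BINDIR = $(PREFIX)/sbin
      PREFIX=/usr
      BINDIR="$PREFIX/sbin"
      echo "$BINDIR"   # /usr/sbin, matching where the distro package put dnsmasq

      # Rather than editing the Makefile, make also accepts overrides on the
      # command line, e.g.:
      #   make PREFIX=/usr && sudo make install PREFIX=/usr
      ```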
      
      posted in Tutorials
    • RE: Advanced dnsmasq techniques

      @Wayne-Workman said in Advanced dnsmasq techniques:

      wiki worthy

      Maybe once I have a quorum of people to prove or disprove my thesis. Right now there are a lot of assumptions being made (like that I know what I'm doing) that need to be proven out through testing.

      posted in Tutorials