
    Posts by DBCountMan

    • Storage Node Disk Usage (tags: alternative disk usage, read-source)

      I think that the DefaultMember is tied to the IP address of the NFS share. My primary FOG server has two network interfaces: one for imaging offline using its DHCP service, and another for remote management. The primary does not show disk usage, saying “Node offline”, when viewed from a PC or VLAN not on the imaging interface. My secondary has only one interface, and I use the USB boot method for imaging, so with that single network interface I can see disk usage. My question is: is there a way for the FOG web UI to read disk usage from a different source instead of DefaultMember?

      posted in General
      DBCountMan
    • RE: Does FOG use or install the log4s?

      @george1421 said in Does FOG use or install the log4s?:

      Again, don’t listen to a dude on the internet; prove it to yourself.

      “Think for yourself, question authority.” -Tim Leary

      posted in General
      DBCountMan
    • RE: Selective mysql export/import

      @george1421 That worked! I added the two additional tables to the export command and the “test” host was imported.

      posted in General
      DBCountMan
    • RE: Selective mysql export/import

      @george1421 I tried restarting apache2 and php; that didn’t work. I then tried the API export/import method and got this error in the secondary FOG server’s web UI:
      [Screenshot attached: Screenshot from 2021-12-29 14-46-23.png]

      posted in General
      DBCountMan
    • RE: Selective mysql export/import

      @george1421 OK, what about --single-transaction? Should I leave that?

      UPDATE: I left --single-transaction and removed --no-create-info, and no errors were thrown. But the “test_host” host that I created on the primary does not show up on the secondary. I also created a “testsync” image (not a captured image, just created in the web UI), and that did get imported into the secondary.

      Now that I’m thinking about it, I wonder if it matters whether I just create a host in the web UI vs. actually registering a host.

      posted in General
      DBCountMan
    • RE: Selective mysql export/import

      After setting up a script and cronjob to pull the tables from the primary FOG database and import them into the secondary, the import commands are throwing these errors:

      ERROR 1062 (23000) at line 23: Duplicate entry '22' for key 'PRIMARY'
      ERROR 1062 (23000) at line 23: Duplicate entry '395' for key 'PRIMARY'
      
      

      These are my import commands:

      mysql -D fog images < /root/fog_images.sql
      mysql -D fog hosts < /root/fog_hosts.sql
      
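      One way around duplicate-key failures like these (a sketch, not necessarily the right fix for this setup) is to have mysqldump emit REPLACE statements instead of plain INSERTs, so re-imported rows overwrite the existing ones rather than collide with them:

      # Export so that re-imports overwrite rows sharing a primary key:
      mysqldump --replace fog images > /root/fog_images.sql
      mysqldump --replace fog hosts > /root/fog_hosts.sql
      # Alternatively, --insert-ignore keeps the existing rows and skips duplicates.
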
      posted in General
      DBCountMan
    • RE: Selective mysql export/import

      @george1421 Those creds worked, but I was prompted for the password. Is there a way to put the password inline with the command so it runs without interaction?

      Never mind, found it here.
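
      For anyone else looking, a sketch of both common approaches (the user and password are placeholders):

      # Inline password (no space after -p; visible in shell history and the process list):
      mysqldump -u fog -pMySecret fog images > fog_images.sql

      # Safer: store credentials in ~/.my.cnf (chmod 600) so no prompt is needed:
      # [client]
      # user=fog
      # password=MySecret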

      posted in General
      DBCountMan
    • RE: Newly captured images are being owned by "root" instead of "fogproject"

      @tom-elliott Well, all of my other images in /images are owned by fogproject:root. If this isn’t a problem, then I’ll leave it alone.
      [Screenshot attached: Screenshot from 2021-12-28 14-38-51.png]

      posted in General
      DBCountMan
    • Newly captured images are being owned by "root" instead of "fogproject"

      As the title states. The last two images I captured/uploaded to the FOG server are owned by root:root instead of fogproject:root. Not sure why this happened. I’m sure I can change ownership back to fogproject but something changed somewhere. Maybe I did something by accident? Where would I have to look?
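
      If it turns out ownership does need fixing, a one-liner along these lines would restore it (the image directory name here is a placeholder):

      # Hypothetical example; substitute the actual image directory:
      sudo chown -R fogproject:root /images/win10-base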

      posted in General
      DBCountMan
    • RE: Selective mysql export/import

      @george1421 Ah, I see. Run the dump from the secondary FOG server directly, instead of dumping on the primary to a share, mounting the share on the secondary, and then importing. I currently don’t have creds set on mysql on either server. I’ll look into setting creds, then try running mysql -h <hostname> to test.
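
      Something like this, assuming the primary allows remote connections to its database (the hostname and user are placeholders):

      # Pull both tables straight from the primary and import locally:
      mysqldump -h primary-fog -u fog -p fog images hosts > /root/fog_sync.sql
      mysql -D fog < /root/fog_sync.sql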

      posted in General
      DBCountMan
    • RE: Selective mysql export/import

      @brakcounty
      I believe I found it:

      1. Export image and host list on primary server:
      sudo mysqldump fog images > fog_images.sql
      
      sudo mysqldump fog hosts > fog_hosts.sql
      
      2. Import image and host list on secondary server:
      sudo mysql -D fog < /mnt/fog_images.sql 
      
      sudo mysql -D fog < /mnt/fog_hosts.sql 
      

      Keep in mind that the paths shown above are unique to my setup and my method for transferring the sql files. Perhaps there’s a way to dump two tables in one command. But I just figured this out seconds ago lol.
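
      As it turns out, mysqldump does accept multiple table names after the database name, so both dumps can collapse into one command (the paths here are illustrative):

      sudo mysqldump fog images hosts > fog_tables.sql
      sudo mysql -D fog < /mnt/fog_tables.sql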

      posted in General
      DBCountMan
    • RE: Selective mysql export/import

      @george1421 A continuous sync. I want changes made to the Hosts, Images, and Groups tables (if possible) replicated to the secondary FOG server.
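
      A minimal sketch of that as a cron job on the secondary, assuming creds are stored in root’s ~/.my.cnf as above (the host and the groups table name are assumptions):

      # /etc/cron.d/fog-sync: re-pull the tables from the primary every 15 minutes
      */15 * * * * root mysqldump -h primary-fog fog hosts images groups | mysql -D fog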

      posted in General
      DBCountMan
    • Selective mysql export/import

      Now that I have two FOG servers, I set up replication from the primary to the secondary. My secondary is not set up for DHCP; I use it for imaging across networks with the USB boot method, loading iPXE from a local file instead of via PXE boot. This requires a slightly different config from the primary, which has DHCP enabled and serves PXE on an offline network. I believe that when I exported the whole fog database from the primary to the secondary it took all the settings with it, and I had to go back and reconfigure the secondary for my custom setup again.

      So my question is: How can I only export Hosts and Images?

      (For the actual image files in /images, I have rsync set up.)
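
      For reference, that rsync job can be as simple as this (the hostname and the --delete choice are assumptions):

      # Mirror the image store to the secondary, removing files deleted on the primary:
      rsync -a --delete /images/ secondary-fog:/images/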

      posted in General
      DBCountMan
    • RE: Install FOG on Ubuntu Server 21.10 issues

      @sebastian-roth Sounds good. My NEW secondary FOG server is up and running on 20.04.

      posted in FOG Problems
      DBCountMan
    • RE: Install FOG on Ubuntu Server 21.10 issues

      @george1421 Oh well that explains it! 😅

      Cool thanks!

      posted in FOG Problems
      DBCountMan
    • Install FOG on Ubuntu Server 21.10 issues

      The first issue I ran into was php7.0 not being available for 21.10, and the FOG install script kept failing there, so I had to manually add the repository:

      add-apt-repository ppa:ondrej/php
      
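      A newly added PPA isn’t visible until the package index is refreshed, so this generic step follows:

      sudo apt update
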

      Then it fails at the Install Package: php-gettext step. The php-php-gettext package seems to rely on php8.0, which the FOG install.sh script doesn’t want; it wants 7.0.

      Is FOG 1.5.9 not compatible with 21.10?

      posted in FOG Problems
      DBCountMan
    • RE: UEFI Boot

      @AvivKaplan1
      You said you have FOG running on Ubuntu without DHCP, so you already have a DHCP server. You’d have to tell your existing DHCP server where the TFTP server is, and specify the boot file names, if your DHCP server supports that function. If it doesn’t, then you can use the USB boot method for booting iPXE from a USB drive that points to your FOG server.
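
      On an ISC dhcpd server, for example, the two relevant settings would look like this (the IP and boot file are placeholders; other DHCP servers expose the same thing as options 66/67):

      # Point PXE clients at the FOG server's TFTP service and boot file:
      next-server 192.168.1.10;
      filename "ipxe.efi";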

      posted in General Problems
      DBCountMan
    • RE: Boot from hard drive if connection to fog server fails

      So, to anyone else who wants to do what I’m doing or something similar, here is what my final ipxeconfig script looks like. Also keep in mind that while I followed the USB boot method instructions, I adapted the method to drop the files onto the EFI partition of the primary drive, with the same directory structure.

      #!ipxe
      isset ${net0/mac} && ifopen net0 && dhcp net0 || goto dhcperror
      echo Received DHCP answer on interface net0 && show ip && goto netboot
      
      :dhcperror
      prompt --key s --timeout 3000 DHCP failed, 's' !!!I.T. ONLY!!!; or continue to Windows in 3 seconds && shell || goto refind
      
      :netboot
      chain http://*fogip*/html/default.ipxe || goto netbooterror
      
      :netbooterror
      prompt --key s --timeout 3000 Connection failed, 's' !!!I.T. ONLY!!!; or continue to Windows in 3 seconds && shell || goto refind
      
      :refind
      imgfetch file:///EFI/Boot/refind.conf
      chain -ar file:///EFI/Boot/refind.efi
      
      
      posted in General
      DBCountMan
    • RE: Boot from hard drive if connection to fog server fails

      @george1421 Pretty much there, man! I still want to play with echoes and hiding/masking command outputs. I also want to throw in “echo show ip” so I can see what IP is grabbed. This can be useful.
      Thanks for your help!

      :netbooterror
      prompt --key s --timeout 3000 Connection failed, hit 's' for the iPXE shell; continue to Windows in 3 seconds && shell || goto refind
      
      :refind
      imgfetch file:///EFI/Boot/refind.conf
      chain -ar file:///EFI/Boot/refind.efi
      
      
      posted in General
      DBCountMan
    • RE: Boot from hard drive if connection to fog server fails

      @george1421 Following your suggestion, here’s what I ended up doing, with success:
      ipxeconfig:

      :netboot
      chain http://*fogip*/html/default.ipxe || goto netbooterror
      
      :netbooterror
      prompt --key s --timeout 10000 DHCP failed, hit 's' for the iPXE shell; reboot in 10 seconds && shell || goto refind
      
      :refind
      imgload file:///EFI/Boot/refind.efi
      boot
      
      

      I also placed refind.efi and refind.conf on the EFI partition in EFI/Boot/ along with the custom bootx64.efi. Now, I don’t know if refind.efi will automatically read refind.conf or if I have to tell iPXE to load the conf as well.

      Now when the connection fails, I get the 10-second prompt (which I will shorten and whose message I will change), then it goes to the rEFInd menu, where the first option, the Windows EFI Boot option, is selected with a 20-second timeout. It boots into Windows!

      I just have to clean things up and shorten the timeouts, since our end users will see this if their internet goes out. Otherwise our Helpdesk will get flooded with calls about “weird messages on screen when starting up”.

      posted in General
      DBCountMan