Did someone mess with the linux service account (user) called fog? That’s not the default web admin with the same name. What I’m talking about is the linux user called fog. That account is (should be) managed by FOG and not used for system maintenance.
Best posts made by george1421
-
RE: TFTP/FTP Issues after cloning success
-
RE: dnsmasq issues with tftp
Well, I see a conflict here: you have the isc-dhcp server loaded in your configuration AND you are using dnsmasq. Which one do you want to use?
In regards to dnsmasq, first confirm you are running dnsmasq version 2.76 or newer by keying this in at the fog server’s linux command prompt:
dnsmasq -v
Hopefully the response looks like this:
Dnsmasq version 2.76  Copyright (c) 2000-2016 Simon Kelley
Compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
If so then please use my ltsp.conf, completely replacing yours.
# Don't function as a DNS server:
port=0

# Log lots of extra information about DHCP transactions.
log-dhcp

# Set the root directory for files available via TFTP.
tftp-root=/tftpboot

# The boot filename, Server name, Server IP Address
dhcp-boot=undionly.kpxe,,<fog_server_IP>

# Disable re-use of the DHCP servername and filename fields as extra
# option space. That's to avoid confusing some old or broken DHCP clients.
dhcp-no-override

# inspect the vendor class string and match the text to set the tag
dhcp-vendorclass=BIOS,PXEClient:Arch:00000
dhcp-vendorclass=UEFI32,PXEClient:Arch:00006
dhcp-vendorclass=UEFI,PXEClient:Arch:00007
dhcp-vendorclass=UEFI64,PXEClient:Arch:00009

# Set the boot file name based on the matching tag from the vendor class (above)
dhcp-boot=net:UEFI32,i386-efi/ipxe.efi,,<fog_server_IP>
dhcp-boot=net:UEFI,ipxe.efi,,<fog_server_IP>
dhcp-boot=net:UEFI64,ipxe.efi,,<fog_server_IP>

# PXE menu. The first part is the text displayed to the user. The second is the timeout, in seconds.
pxe-prompt="Booting FOG Client", 1

# The known types are x86PC, PC98, IA64_EFI, Alpha, Arc_x86,
# Intel_Lean_Client, IA32_EFI, BC_EFI, Xscale_EFI and X86-64_EFI
# This option is first and will be the default if there is no input from the user.
pxe-service=X86PC, "Boot to FOG", undionly.kpxe
pxe-service=X86-64_EFI, "Boot to FOG UEFI", ipxe.efi
pxe-service=BC_EFI, "Boot to FOG UEFI PXE-BC", ipxe.efi

dhcp-range=<fog_server_ip>,proxy
Don’t forget to replace the <fog_server_ip> tags with the IP address of your fog server.
Ref: https://forums.fogproject.org/topic/8725/compiling-dnsmasq-2-76-if-you-need-uefi-support/5
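If you would rather not hand-edit every placeholder, a quick sed pass can fill them in. The demo below runs against a scratch copy in /tmp so it is safe to try anywhere; the address 192.168.1.10 is a made-up example, not a value from this thread.

```shell
# Demo on a scratch file; in practice point sed at your real ltsp.conf
printf 'dhcp-boot=undionly.kpxe,,<fog_server_IP>\ndhcp-range=<fog_server_ip>,proxy\n' > /tmp/ltsp.conf

# case-insensitive match so both <fog_server_IP> and <fog_server_ip> get replaced
sed -i 's/<fog_server_[Ii][Pp]>/192.168.1.10/g' /tmp/ltsp.conf

cat /tmp/ltsp.conf
```

After the sed pass every placeholder carries the real server address, so dnsmasq will hand out a usable next-server value.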
Now for the isc-dhcp server. You need to decide what really will be your dhcp server for the subnet where the fog server is. It can be the fog server if you are imaging on an isolated network, or it can be your building dhcp server if you want to image using your existing infrastructure.
If you have an isolated imaging network, then you can use the isc-dhcp server for everything; dnsmasq is not required and will actually confuse things. If you want to image on your current production network and your production network’s dhcp server isn’t capable of sending out the pxe boot options, then you can use dnsmasq in concert with your existing dhcp server.
You just need to pick a path and we can help you get there.
-
RE: FOG fails to reboot target computer after imaging
@jeromecandau said in Problems with exit boot on M.2 SSD drive:
The only time when the boot occurs is in case whe just make a network boot until the fog menu appears and then select boot to hard drive (first item) (or do nothing : then the boot occurs after timeout).
OK, let’s start with the above. If this happens, then you have the right exit mode in FOG; the exit mode values are only used in the FOG iPXE boot menu. When you start imaging, bzImage and init.xz are copied to the target computer, then FOS (FOG Operating System, the customized Linux that runs on the target computer for capturing and deploying images) starts. So what is happening here: after imaging (capture or deploy), FOS tells the computer to reboot and then nothing happens… Do I understand that correctly?
PS: You should probably start your own thread since your issues are different from the original poster’s. I don’t want to give mixed information in this thread.
-
RE: Adding more storage space to fog.
@kjoslin Ok you want to physically add a second drive.
Then you need to decide how you want to do it.
One option: if your computer uses LVM (logical volume manager), you can just add the new drive to the disk pool and linux will start using the new disk space. ref: https://askubuntu.com/questions/458476/adding-disks-with-lvm
Second, you can add it as a new disk and set up a new image directory. In this configuration you will have to create a second storage node on the main fog server. Then, when you define new images, you will have to pick which location (disk) you want to store the images on. ref: https://forums.fogproject.org/topic/10450/adding-additional-image-storage-space-to-fog-server
-
RE: SysPrep
You typically sysprep your golden image so it may be deployed to any number of computers and hardware. The words “should sysprep them first” makes me think you are doing something else.
-
RE: CAnnot upload image to Fogserver
@greg-plamondon The instructions were in both documents: the one I linked and the one that Wayne linked. Unfortunately it happens more often than you might think. There are some documents out in the wild that recommend you set up the fog account to install the software and manage the system (which is the wrong way).
-
RE: Pxe Input/output error
@sourceminer There has been a rash of these in the last 2 days on the FOG forums; Canonical must have changed something to trigger this again.
https://forums.fogproject.org/topic/10006/ubuntu-is-fog-s-enemy
-
RE: Storage Nodes Not Providing Images
Outside of your issue of FOG not picking the proper storage node for deployment, you have a different issue here.
Since you are imaging 15-30 machines at once, you are using the wrong technology. If they all receive the same image, you should be using multicast and not unicast deployments. Each fog server will fill its 1GbE network uplink connection with 3-4 simultaneous unicast sessions, so to deploy 30 unicast streams you would need to add quite a few more storage nodes.
The same goes for the switch to switch links. So how can you mitigate this?
- Upgrade to a 10GbE network
- Add more links in your network (LAG link groups)
- Use a multicast stream.
Just for a baseline number, for a single unicast stream, what does partclone indicate your transfer rates are in GB/min?
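To put some rough numbers behind that (these are back-of-envelope figures, not measurements from your setup): a saturated 1GbE uplink moves at most about 125 MB/s, so splitting it across unicast streams looks like this:

```shell
# ~125 MB/s on a saturated 1GbE link, converted to GB/min, then divided
# across N simultaneous unicast streams (N=4 and N=30 as examples)
awk 'BEGIN {
  gbmin = 125 / 1024 * 60            # ~7.3 GB/min raw
  printf "link: %.1f GB/min\n", gbmin
  printf "4 streams: %.1f GB/min each\n", gbmin / 4
  printf "30 streams: %.2f GB/min each\n", gbmin / 30
}'
```

Which is why 3-4 unicast sessions is about the ceiling per 1GbE uplink, and why a single multicast stream serving all 30 machines at once is the better tool here.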
-
RE: Kernel Update fails ... Oh no, not again!
@sudburr Does the linux user fog have write permissions to /var/www/html/fog/service/ipxe and all files under it?
-
RE: Kernel Update fails ... Oh no, not again!
@sudburr Let’s start out with
ls -la /var/www/html/fog/service/ipxe
bzImage needs to be owned by fog or have world write access.
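If the ls output shows the wrong owner or mode, a chown/chmod is the usual fix. The snippet below demonstrates the check on a scratch file so it can be run anywhere without root; in practice you would substitute the real /var/www/html/fog/service/ipxe path and the fog user.

```shell
# Scratch demo of the ownership/mode check; the real target would be
# /var/www/html/fog/service/ipxe/bzImage owned by the fog user
mkdir -p /tmp/ipxe_demo
touch /tmp/ipxe_demo/bzImage
chmod 644 /tmp/ipxe_demo/bzImage
stat -c '%U %a' /tmp/ipxe_demo/bzImage   # prints owner and octal mode
# the real-world fix would look something like:
#   sudo chown -R fog:fog /var/www/html/fog/service/ipxe
```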
-
RE: Chromium Image Issues after Updating Server
@rstockham23 The most notable thing I saw in the second video was this error message
Can not open file '/images/CloudRead61/d1.original.swapuuids'.
So what I would like to see is the output of this command.
ls -la /images/CloudReady61
also
cat /images/CloudRead61/d1.partitions
-
RE: HTTP 500 Internal Server Error
We’ve seen a sharp uptick in these issues related to ubuntu in the last few weeks. There is a document that should give you guidance: https://forums.fogproject.org/topic/10006/ubuntu-is-fog-s-enemy
You can confirm by checking the apache error log:
tail /var/log/apache2/error.log
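If the log is long, grep can pull out just the tell-tale lines. The demo below runs against a fabricated sample line so it is safe to try anywhere; in practice point it at the real error log (on Ubuntu usually /var/log/apache2/error.log).

```shell
# Fabricated sample log line for the demo; substitute the real error log path
echo 'PHP Warning: PDOException pdodbc insert field failed in /var/www/html/fog' > /tmp/apache_error.sample

# count lines that mention either tell-tale string, case-insensitively
grep -icE 'pdodbc|insert field failed' /tmp/apache_error.sample
```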
If you see errors that mention pdodbc and insert field failed, that is a good indication that ubuntu has been tweaked.
-
RE: NFS mount options needed.
@wombathuffer In regards to the storage node, can we remove that from the equation? Power it off: do you still have the same issue? I’ve seen some pretty creative uses of a NAS with fog. I just want to ensure you are not cross-mounting the NAS onto the fog server and trying to re-share the mounted nfs share, because that won’t work. As long as the NAS is a stand-alone storage node, then we are OK.
So now I wonder what is unique about your setup where the standard nfs connect is failing, since you say it’s the same with VirtualBox and VMware. Understand I’m not poking at your issue; I’m trying to understand what might be different in your setup.
-
RE: NFS mount options needed.
To answer your question: yes, you can change how the FOS engine mounts the fog server. You will need to unpack the inits (init.xz) and then edit the file /bin/fog.mount. In there you shall find the mount command you seek. Once you update the fog.mount script, just repack the inits and move them back to the /var/www/html/fog/service/ipxe directory.
Here is how you edit the inits: https://wiki.fogproject.org/wiki/index.php?title=Modifying_the_Init_Image
-
RE: NFS mount options needed.
@wombathuffer said in NFS mount options needed.:
The only thing I am asking in this thread is - How do I specify NFS options when the PXE booted server is mounting the NFS-service?
I was replying to your question just as you posted. See below
But I can say many people (including myself) run FOG in a vm on vmware and I’ve never seen this issue.
-
RE: Upgraded an Existing Server to 1.4.4 and Now Interface is Very Slow and Chromium Images are not working
@rstockham23 How many client computers are hitting this FOG server? What is your check-in interval?
-
RE: Upgraded an Existing Server to 1.4.4 and Now Interface is Very Slow and Chromium Images are not working
@rstockham23 Let’s try this: go to FOG Configuration->FOG Settings->FOG Client->FOG_CLIENT_CHECKIN_TIME, note the value, and then set the value to 900. Wait whatever your check-in interval was and see if the response is better.
-
RE: Upgraded an Existing Server to 1.4.4 and Now Interface is Very Slow and Chromium Images are not working
@rstockham23 OK, so every 30 seconds all 500 systems “ping” the fog server looking for any new instructions. So wait 5 minutes and see what your load is like.
Now understand we’ve set the check-in interval to 15 minutes. That means if you schedule a snapin deployment to these computers, it will take up to 15 minutes for the target computer to get the job.
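The difference is easy to see in requests per second (500 clients is the count from this thread; the rest is simple division):

```shell
# 500 clients checking in every 30s vs every 900s
awk 'BEGIN {
  printf "30s interval:  %.1f check-ins/sec\n", 500/30
  printf "900s interval: %.1f check-ins/sec\n", 500/900
}'
```

Going from roughly 17 hits per second to well under 1 is why the web UI stops crawling, at the cost of slower snapin/job pickup.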
-
RE: Fog Not Resizing Captured image
Just guessing, but if you watch the capture that takes hours, does it say Raw somewhere on the partclone screen? If so, is this a win10 (or I guess also win7) instance where bitlocker is enabled? If so, partclone can’t read the partitions (because they are encrypted), so the only option it has is to read the entire disk, possibly taking hours.
ref: https://forums.fogproject.org/topic/10824/image-upload-deploy-taking-a-long-time/54
-
RE: Fog Not Resizing Captured image
@creative1204 If you look at this post: https://forums.fogproject.org/topic/10824/image-upload-deploy-taking-a-long-time/41 the poster also said that bitlocker was not enabled. In the post by @themcv he provides the commands to ensure that bitlocker is off.
Then if you look at this post: https://forums.fogproject.org/topic/10824/image-upload-deploy-taking-a-long-time/59
It’s stated: “The problem is Microsoft is pushing for more BitLocker, so while it is technically off, it is encrypting the free space on the drive causing FOG to want to make it a raw image. I don’t know if this is just on pre installs or with the latest version of Windows, but I ran into this with the latest Surface Pro.”
I’m not saying this is your exact issue, but it does line up with what was said and done in that thread.