@GorkaAP I hate to begin with this, but that referenced document deals with a 10 year out of date version of FOG.
Let's see if we can work out a solution using a current release. Please state the problem you are trying to resolve.
@alexpolytech94 The shrinking of the disk when using single disk resizable is a bit of black magic. Sometimes, because of the actual data size or the location of the data on the disk, it's not possible to shrink the volume down enough to make it fit on a disk that is half the size of the source disk. I won't go into too much detail, but if you have a partition that is fixed in size and can't shrink, preceded on the disk by a partition that can, FOG will shrink the one that can be shrunk but leave the fixed one as it was. If you then deploy that image to a computer with a half-size disk, the non-shrunk partition would technically sit beyond the last sector of that disk.
To put it another way: always build your mother image on the smallest disk possible, because an image can expand to a larger target disk far more reliably than it can shrink to fit on a smaller one.
When I was building golden images I would build them on a VM with a 50GB hard drive (smaller than anything I would deploy it to) and then let FOG expand the disk to match the target disk size. That always worked.
@atlas You need to have internet access to install FOG. I have seen some people install 2 network adapters in the fog server, one on the business network and one on the isolated network. The nic on the business network is for management and (install time only) internet access. This keeps the isolated network isolated.
FWIW, that wiki page you referenced is 10 years old and no longer applies to a current version of FOG.
FWIW: You can manually download the inits and kernels from here: https://github.com/FOGProject/fos/releases/
@atlas When it comes to opensource, the only wrong answer is one that doesn’t work. Well done!
Another hackish way would be, instead of changing the programming, to enter a fake but valid entry in the /etc/hosts table that points the DNS name at your internal server. That way you can keep using FOG's native code when the next version comes out. But again, if it worked for you, it was the right answer.
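As a sketch of that /etc/hosts idea (the hostname and IP below are placeholders for illustration, not the actual host FOG contacts):

```shell
# Point a public download hostname at an internal mirror instead.
# "downloads.example.org" and 10.0.0.5 are hypothetical values.
echo "10.0.0.5  downloads.example.org" | sudo tee -a /etc/hosts

# Verify the override is what the resolver now returns:
getent hosts downloads.example.org
```

Remember to remove the entry later, or the server will keep hitting the internal mirror after the real fix ships.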
You are going to have to draw a picture with IP addresses of how this infrastructure is connected. Use fake public addresses, but real internal addresses.
I can tell that the way FOG is designed, with a master node, storage nodes, and FOG clients, the storage nodes and clients are expected to be able to reach the master node 100% of the time to remain operational. So if you have a fully routable site-to-site VPN then everything will work as designed. If you have an intermittent connection then things won't work quite as well. The storage node needs to be able to contact the master node because the database only exists on the master node, so this link needs to be up 100% of the time. PXE booting is local, then jumps to the master node to load boot.php.
While I can't comment on the FOG code, a lot of systems will launch a process and then keep track of it via a handle until it stops. In the destructor, those instances kill off the task based on the handle created at launch, in case the application instance dies before the launched process does. I think the intent of the replicator was to have only one instance of the lftp process running at a time, so it wouldn't be too difficult to keep track of that one process handle (as opposed to several hundred processes).
With the current design you normally wouldn't start and stop the replicator multiple times, so having multiple instances of the lftp process running should never happen. I'm not seeing the value in putting energy into fixing a one-off issue.
My preference would be to not do something out of band if possible. It does appear that creating a fake image with its path set to /image/drivers is choking the FOG replicator because of the sub folders, so I'm going to back out that change, since that error is stopping replication entirely.
I haven't dug into the FOG replicator code yet, but I'm wondering if rsync wouldn't be a better method to replicate the images from the master node to the other storage nodes. Rsync would give us a few more advanced options, like data compression and syncing only the files that changed, compared to a plain file copy.
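A replication pass along those lines might look like this (the storage node hostname is illustrative, and FOG's actual replicator uses lftp, so this is only a sketch of the rsync idea, not how FOG works today):

```shell
# Mirror the master node's image store to a storage node over ssh:
#   -a  archive mode, preserves permissions/times and recurses sub folders
#   -z  compress data in transit
#   --delete  keep the storage node an exact mirror of the master
rsync -az --delete /images/ storagenode:/images/
```

Because rsync only transfers changed blocks, re-running it after a small image update would move far less data than a full file copy.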
It's a trunk build, 5040.
Looking at the drivers folder, I have a combination of files and sub folders. Depending on how smart the replicator is, it may not handle or traverse the sub folders.
The structure of the drivers folder is as follows:
/images/drivers/OptiPlex7010.zip
/images/drivers/OptiPlex7010/audio
/images/drivers/OptiPlex7010/audio/<many files and sub folders>
/images/drivers/OptiPlex7010/video/<many files and sub folders>
<…>
I suspect that the replicator was only designed to copy the image folder and one level of files below.
Rebooting the storage node appears to have started the replication of /images/drivers, but so far only the first file has replicated.
Looking at /opt/fog/logs/fogreplicator.log on the master node, I see this error:
[10-22-15 8:19:52 pm] * shvstorage - SubProcess -> mirror: Fatal error: 500 OOPS: priv_sock_get_cmd
[10-22-15 8:21:08 pm] * shvstorage - SubProcess -> Mirroring directory `drivers'
[10-22-15 8:21:08 pm] * shvstorage - SubProcess -> Making directory `drivers'
[10-22-15 8:21:08 pm] * shvstorage - SubProcess -> Transferring file `drivers/DN2820FYK.zip'
The zip file is the only thing in /images/drivers on the storage node.
OK that sounds like a plan. I’ll set that up right away.
Do you know what the replication cycle interval is, or where to find the setting? Under "normal" production once a day is sufficient, but I can see that during development we might need to shorten it to just a few hours.
@fhhowdy This error looks similar to what I might expect when secure boot is enabled. Check the firmware settings to ensure that secure boot is disabled, which will allow the FOG boot manager (iPXE) to load.
Something else to keep in mind is that FOG's imaging operating system (FOS) is really targeted towards laptop/desktop computers and not servers. Servers often use hardware not commonly found on workstation class computers. I'm not saying it won't work, we will just need to be mindful if things act abnormally. Machine class isn't the issue here, though, because your server is not even reaching the boot manager. The problem I mentioned may come later, when you pick an action from the boot menu.
@Cire3 The short answer is that it’s possible, but it depends on how nextboot.xyz handles dhcp information.
The simplest form is to add this to the FOG iPXE menu builder parameter block:
chain tftp://192.168.1.12/nextboot.xyz || goto Menu
If nextboot.xyz uses dhcp information (which will point to the fog server unless we alter it), use this instead:
set newserver:ipv4 192.168.1.12
set newbootfile nextboot.xyz
set net0.dhcp/next-server ${newserver}
set net0.dhcp/filename ${newbootfile}
set proxydhcp/filename ${newbootfile}
chain tftp://${newserver}/${newbootfile} || goto Menu
@Thiago-Ryuiti The error is saying that the service account fogproject doesn't have rights to the /images directory.
This typically happens for two reasons.
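A hedged sketch of the usual fix, whatever the underlying reason turns out to be. The fogproject account name, the /images path, and the .mntcheck marker files are FOG defaults; verify them against your install. The scratch directory below just makes the sketch runnable anywhere — on the real server operate on /images itself, as root:

```shell
# On a real FOG server: STORE=/images, and run the chown as root.
STORE=$(mktemp -d)
mkdir -p "$STORE/dev"
# chown -R fogproject:root "$STORE"   # needs root and the fogproject account
chmod -R 775 "$STORE"
# FOG refuses to use a storage location missing its marker files:
touch "$STORE/.mntcheck" "$STORE/dev/.mntcheck"
ls -ld "$STORE" "$STORE/.mntcheck"
```

If the error persists after ownership and the marker files are correct, the NFS/FTP export settings for /images are the next place to look.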
@1337darkin My initial reaction is don’t use virtual box, my second thought is don’t use virtual box…
The thing is that VB uses iPXE as its own internal boot loader, and the issue is chaining to FOG's version of iPXE. The screenshot you show is VB's version of iPXE running and trying to call snponly.efi, which is where it's failing. This is an issue with VB and not specifically with FOG.
I think there is a fix for this but I can't seem to find it at the moment; my google-fu is weak today.
On a totally abstract note: you can capture a uefi image with FOG in bios mode, and on the flip side you can capture a bios disk image in uefi mode. BUT to be able to boot that image after deployment, the target computer's firmware needs to match the disk image.
@professorb24 From 192.168.52.X, can you ping devices on 192.168.54.X? If yes, then you have network routing set up.
Once you have confirmed full routing, what is your dhcp server for 192.168.52.X and for 192.168.54.X?
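To sketch that routing check from a host on 192.168.52.x (the subnets are from this thread, but the .1 gateway address on the far side is an assumption — substitute any known-up device on 192.168.54.x):

```shell
# Confirm the far subnet is reachable from 192.168.52.x:
ping -c 4 192.168.54.1        # assumed device on the far subnet
traceroute 192.168.54.1       # show which router(s) the path crosses
```

If ping fails but traceroute dies at your local gateway, the problem is a missing route rather than the FOG server.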
@EuroEnglish I'm finding it hard to believe that there is a connection between having win11 on the computer and this behavior. When the FOG menus are being displayed, we are running the iPXE boot loader, which in a way is an OS by itself. The OS on the target computer is not even known about at this point in the booting process.
The only thing I can think of is that the disk format or structure is not allowing iPXE to fully initialize correctly.
Could you run this test for us? Take 2 computers of the same model (make sure that bitlocker and secure boot are disabled on both systems), one with win11 and one with win10. Verify whether they both behave the way you mentioned above.
On the win11 computer remove the hard drive/nvme drive. Boot into the FOG iPXE menu. Does it still have the slow speed?
Move the hard drive from the win10 computer to the win11 computer. Does the system act slow or normal?
@MonsterKaos There are 3 issues (well, really 4) that are probably impacting your deployment.
For issue 2, I have instructions on how to compile the latest version of iPXE here: https://forums.fogproject.org/topic/15826/updating-compiling-the-latest-version-of-ipxe Your fog server will need to have internet access to get the latest source code for iPXE. This should address the "no configuration methods" error.
For issue 3, in the fog ui go to fog configuration-> kernel update. Download the latest kernel 6.x series to get support for the newest hardware.
Let's see if that works for you when deploying images.
@maximefog said in Deploy Image:
it is mandatory to register the workstation
This is not a requirement under certain conditions. There is a method I call "load and go". It is a process that system builders use where, once they load the OS on the target computer, they never see the computer again. In this method you cannot use the FOG Client for any of its functions; the install process must be self contained, or use a FOG post install script to make the install-time adjustments to the target computer. Using this method you do not need to register the computer with FOG: once the image is deployed, FOG forgets it ever saw that target computer. Once you have the master image set up as needed, you deploy it from the "Deploy Image" entry of the FOG iPXE menu. You never have to touch the FOG web ui for image deployment.
@bmick10 said in Fog stops at init.xz...18% and other percentages:
So it will load to Fog stops at init.xz…and different % each time.
This sounds like an iPXE issue, where it's not loading FOS Linux's virtual hard drive completely for some reason. Let's start out by having you rebuild/compile the latest version of iPXE using these instructions: https://forums.fogproject.org/topic/15826/updating-compiling-the-latest-version-of-ipxe Let's see if there is an update to iPXE that solves this issue.
@Tom-Elliott Thank you for the update. I should have looked at the code.
So the OP needs to update the current curl call with this?
base64mac=$(echo $mac | base64)
token=$(curl -Lks --data "mac=$base64mac" "${web}status/hostgetkey.php")
curl -Lks -o /tmp/hinfo.txt --data "sysuuid=${sysuuid}&mac=$mac&hosttoken=${token}" "${web}service/hostinfo.php" -A ''
To make it work now?