Now, time flies and it’s been another two months. I’ve had very little time due to other projects, but things are calming down a little bit right now, so I hope we can get this done in the next few weeks. I can take the lead on this, but I can’t do it all on my own; I would really need some help from others with testing and so on.
Another thing I just thought of: versioning the partition file we keep with the images. We can start out with v1.0.0, and if we change how we store that file, we simply increment the version. This would give us some more flexibility with regard to future changes.
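For what it’s worth, the check could be a simple guard in the restore path. A minimal sketch, assuming the file becomes JSON with a top-level version field (the filename and field names here are made up for illustration, not the current FOG format):

```shell
# Hypothetical versioned partition file (name and fields are assumptions):
cat > /tmp/d1.partitions.json <<'EOF'
{
  "version": "1.0.0",
  "partitions": []
}
EOF
# A restore script can then refuse formats it does not understand:
ver=$(sed -n 's/.*"version": *"\([^"]*\)".*/\1/p' /tmp/d1.partitions.json)
case "$ver" in
  1.*) echo "partition-file format v$ver supported" ;;
  *)   echo "unknown partition-file version: $ver" >&2; exit 1 ;;
esac
```

A major-version bump would then signal an incompatible layout change, while minor/patch bumps stay readable by older restore code.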
Thinking further, the scripts that collect the physical partitions are fine; we can just add JSON output to those.
It’s really the LVM-describing tools that need JSON output, so the effort there is targeted at just those.
I think I’m going to take a swing at adding JSON output to one of the LVM-describing tools (there are several).
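Some of that tooling already exists upstream, which keeps the effort small. A sketch of what’s already available (the lvs output shape below is illustrative, written from the lvm2 docs rather than captured from a live system):

```shell
# Physical partitions: lsblk has emitted JSON since util-linux 2.27:
#   lsblk --json -o NAME,SIZE,TYPE,FSTYPE /dev/sda
# The LVM reporting tools grew native JSON in lvm2 2.02.158:
#   lvs --reportformat json
# The lvs JSON looks roughly like this (illustrative shape):
cat > /tmp/lvs-sample.json <<'EOF'
{"report": [{"lv": [{"lv_name": "root", "vg_name": "fedora", "lv_size": "50.00g"}]}]}
EOF
cat /tmp/lvs-sample.json
```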
One option for limiting support to reduce workload would be to limit it to LTS branches only: for Ubuntu, only 18.04 and 20.04, skipping the intermediary releases and dropping those that have gone out of support with their vendor (e.g. Ubuntu 14.04 and 16.04).
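If we went that route, the installer could gate on the release ID up front. A rough sketch using the Ubuntu LTS releases from the example above (VERSION_ID is hard-coded here for illustration; a real installer would source it from /etc/os-release):

```shell
# Hypothetical installer gate; normally: . /etc/os-release
VERSION_ID="20.04"
case "$VERSION_ID" in
  18.04|20.04) echo "supported LTS release: $VERSION_ID" ;;
  *)           echo "unsupported release: $VERSION_ID" >&2; exit 1 ;;
esac
```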
It gets a server 500 response from /fog/management/, and the second thing it attempts to load is favicon.ico, which returns a 404 error.
I should give some more specifics.
This is a first server build on physical hardware.
I used the RHEL 7 install instructions.
I did not use any specialized command-line options when kicking off the bash script to install FOG.
I did disable SELinux.
I did disable the firewall.
I can access the login page after clicking through the SSL warning, but there is nothing to log into.
@george1421 I meant to reply to this a long time ago, but here goes.
Testing on deduping of those images has been done; they dedup quite well. The dedup changes affect both zstd- and pigz-compressed images. pigz-compressed images actually dedup better, but the compression ratio and performance are worse, so it’s a tradeoff to be evaluated by the individual.
Dedup is only possible with the newer version of partclone: earlier versions integrate a rolling checksum into the image format, while the newer version lets us choose no checksum.
The compressed binary file is dedupable thanks to the --rsyncable flag on compression, which is supported by both pigz and zstd.
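For anyone who wants to see the flag in isolation without a partclone image handy: plain GNU gzip has also accepted --rsyncable since version 1.7, and pigz and zstd take the same flag. This is just the compression step, not the actual FOG capture invocation:

```shell
# Stand-in for a captured partclone image:
seq 1 5000 > /tmp/part.img
# --rsyncable periodically resets the compressor state so that identical
# input regions produce identical compressed byte runs, which block-level
# dedup (and rsync) can then exploit:
gzip -c --rsyncable /tmp/part.img > /tmp/part.img.gz
gzip -t /tmp/part.img.gz && echo "rsyncable stream OK"
```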
Like George said, any deduping would be the responsibility of the underlying filesystem or storage; it is not built into FOG itself.
@alomarh Well, it depends on what version of Linux you are using. This appears to be a RHEL-compatible OS.
You can restrict the NFS server in FOG to limit access to a specific subnet range, without needing a firewall. Look at the /etc/exports file: each share line starts with a star ( * ); replace that with the subnet range you want to share to (look up nfs and exports for the exact syntax). It’s hard to limit NFS to a single port range; you can do it, but it requires some configuration changes. NFSv4 is the way to go, but FOG is not there yet. I did experiment with it, and it works with a few changes to the FOG server and FOS Linux.
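For example, a share line scoped to a single subnet might look like this (the subnet is an example, and the option list mirrors a typical FOG install; run `exportfs -ra` after editing so the change takes effect):

```
# /etc/exports — wildcard replaced with the subnet allowed to mount
/images 192.168.1.0/24(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,fsid=0)
```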
@vtl.victor I am not saying it’s impossible to do. It just needs work, and we don’t seem to find the time to work on this. Your quickest way to achieve what you want is to capture/deploy an image at your FOG network site. Don’t let the deployed host boot into the OS (add the kernel parameter shutdown=1 to that host). Then boot that same host with Clonezilla (for example) and take a backup that can be deployed at an offline site from USB; no FOG server is needed there, just the Clonezilla boot media and the image on a portable drive. (cross references: 1, 2, 3)
FOG image capture and deploy are multi-step processes; each consists of several commands that rely on scripts and programs in FOG OS (FOS) and that change based on which options you have selected. FOG images are not designed to be used standalone. Using FOG images without a FOG server will require steps that were expected to be automated, and there really isn’t a quick way around it.
So if you or anyone else is keen, you want to start looking at the commands to do this here:
@londonfog Just wondering out loud, but if that Python script can hit the FOG API, register the new host, and select an image deploy, then when the target computer boots up for the first time and PXE boots, it will boot right into imaging. So if you already have the image defined, you would make one curl API call to register the new server’s name and MAC address with FOG, then a second curl API call to deploy your image.
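A dry-run sketch of those two calls, written to a script instead of fired at a server (the endpoint paths and token headers follow the FOG API wiki, but the server name, tokens, MAC address, image ID, and host ID below are all placeholders; check the wiki for your version):

```shell
# Write the two hypothetical API calls out, then syntax-check them:
cat > /tmp/fog-api-calls.sh <<'EOF'
FOG="http://fog-server/fog"
HDRS="-H fog-api-token:APITOKEN -H fog-user-token:USERTOKEN"

# 1) register the host by name + MAC and associate an image by ID
curl $HDRS -H 'Content-Type: application/json' -X POST \
  -d '{"name":"new-host","macs":["aa:bb:cc:dd:ee:ff"],"imageID":1}' \
  "$FOG/host/create"

# 2) queue a deploy task for the new host (taskTypeID 1 = deploy)
curl $HDRS -H 'Content-Type: application/json' -X POST \
  -d '{"taskTypeID":1}' \
  "$FOG/host/1234/task"
EOF
sh -n /tmp/fog-api-calls.sh && echo "sketch parses"
```

With that in place, the host PXE boots straight into the queued deploy task on first power-on, exactly as described above.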