I appreciate what you do for FOG. If you need VM space, you know you can give me a holler.
I've used FOG at a past job pretty intensely. During that time I contributed a lot to the FOG forums and its documentation, submitted a handful of pull requests, and contributed to the fog-community-scripts repo.
I've built automated tests for FOG's installer which run daily against many operating systems, as well as an external reporting tool that lets the community see what versions of FOG and OSs are out there in-use. Links are in my signature.
My fog time has slowed down a lot in the last couple years, but I still try to help as I can. I've got a lot of knowledge about FogProject in general and I can help you gear up or contribute if you would like.
Best posts made by Wayne Workman
FOG is in GitHub's arctic code vault
There’s a copy of fog and fog-community-scripts stored in the arctic printed on film that will last over a thousand years.
I think that is simply awesome.
RE: SORRY, but I give up testing FOG
@WalterT This post is completely unhelpful to yourself and to the fog community, and seems rash as well. If you need help getting FOG set up, create a thread about your specific problem and provide details, screenshots, logs, and information. The community will help you as best as possible after you provide basic details about your specific issue.
RE: No network interfaces found (verifyNetworkConnection)
I’m feeling pretty ignorant at the moment.
I got to messing with this again and was able to try out a new unmanaged 1Gbps Cisco switch with it. I went through several different configurations in my tests and kept getting inconsistent results.
I have finally found out what the issue was. It was a bad patch cable the whole time.
That’s pretty shameful on my part as a technician, but it would be more shameful to conceal my mistake and not report what the issue was.
I do believe I exhausted every single other possible option before I realized it was the patch cable. Checking simple things first is hammered into all of us as troubleshooters, and the lesson has definitely been reinforced in me.
RE: School : A couple of questions
I come from a Symantec Ghost background.
Fog is MUCH faster, supports queuing, renaming, joining to the domain, and there is ample support and high-responsiveness on the forums, with ample materials available in the wiki as well.
FOG images in general compress very well. 40GB compresses down usually to about 19GB on the server’s disk.
It's free - not just free as in free beer, but free as in freedom: you may freely examine the code, freely make copies, freely make changes to your copies, and freely distribute it under the GNU GPLv3 License. You're even free to charge for it, if you can (although I doubt you'd be successful)! The GNU GPLv3 allows all of these things, as long as the License is respected and provided with copies and changes, and as long as all changes are completely open source and available to the public.
FOG can serve as a reliable DHCP server for you, offering more control and more options than Windows Server 2008 and below did (see our article on BIOS and UEFI Co-Existence).
FOG bridges the imaging gap for OSX, Linux, and Windows, and provides a management client for all three that can name them, join them to the domain, and run snapins on - all from a common web interface.
FOG can manage printers for you, allowing you to avoid cluttering up your domain controllers and group policy.
I use WOL to wake computers up on a schedule easily, and during breaks like spring break and winter break, I can easily disable it.
I use the fog client to push out Chrome updates regularly - with absolute ease. Using snapins also keeps group policy on computers and domain controllers less cluttered.
FOG logs logins for me, which I was previously logging using advanced scripting techniques that only I understood in my organization. Now, just using the web interface technicians can see login history for a computer or individual.
Fog supports wiping HDDs, and I can integrate ISOs into fog without much trouble.
Used to be, imaging a lab was a two to three person job for several hours with Ghost, and now it takes one single technician under 30 minutes - all of which are spent standing around and making sure things go smoothly. For example, we don’t have to name computers because fog does this. We don’t have to join to the domain because fog does this.
Please don't disrespect CloneZilla in your report. Comparing it to FOG is unfair - it's comparing apples to oranges. CloneZilla has strengths where FOG has weaknesses, and vice versa. For instance, if there are strict regulations on a network that an individual technician is not allowed to change, CloneZilla could be the winner in that scenario. If the network performs poorly, has problems, is slow, or is non-existent, CloneZilla is the clear winner. If a technician does not have a server or old computer to dedicate as a FOG server, then CloneZilla is the winner. Also, CloneZilla is the simplest way to clone a FOG server! Where CloneZilla has weaknesses, FOG far excels - and where FOG excels is using your network to get work done, fast. Bottom line: CloneZilla is free open source software, it has its place in the computer imaging industry, and it should be respected for what it is.
RE: DNS Name Goes to Old FOG Installation
Ubuntu moved the default location for web pages in 14.04 from /var/www to /var/www/html. FOG is designed to do a symlink back to /var/www, but maybe something broke in that.
I think that statement there is what’s going on.
So, if you use the host name, you are taken to the 1.2.0 interface, but if you use the IP you are taken to the fog trunk interface.
This means that the web files for 1.2.0 obviously still exist, and the trunk files are there too.
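To confirm which directory is actually being served, you could check whether the symlink mentioned above exists and where it points. A minimal sketch (the paths are the Ubuntu defaults discussed above, and check_link is just an illustrative helper, not part of FOG):

```shell
#!/bin/sh
# Illustrative helper: report whether a path is a symlink and,
# if so, where it points.
check_link() {
  if [ -L "$1" ]; then
    printf 'symlink -> %s\n' "$(readlink "$1")"
  else
    printf 'not a symlink\n'
  fi
}

# e.g. check_link /var/www/html
```

If the result doesn't line up with where the installer put the new web files, that mismatch would explain the two different interfaces.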
If it were me, I'd delete everything in /var/www EXCEPT for the html directory, then delete everything INSIDE of /var/www/html, and re-run the installer. That should fix it.
So for instance, if you saw some-folder inside /var/www, you'd do rm -rf /var/www/some-folder. That's a recursive delete command. The same goes for everything inside /var/www/html.
You can list the contents of a directory, including hidden files, with ls -la.
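The cleanup described above can be sketched as a small script. This is a hedged sketch, not part of FOG's installer; clean_webroot is a hypothetical helper, and the web root is a parameter so you can try it against a scratch directory first:

```shell
#!/bin/sh
# Hypothetical helper sketching the cleanup above: remove everything
# directly under $1 except the html directory, then empty html itself.
clean_webroot() {
  webroot="$1"
  for entry in "$webroot"/* "$webroot"/.[!.]*; do
    # skip globs that matched nothing
    [ -e "$entry" ] || [ -L "$entry" ] || continue
    # keep the html directory itself
    [ "$entry" = "$webroot/html" ] && continue
    rm -rf "$entry"    # recursive delete, as above
  done
  # now delete everything INSIDE html
  rm -rf "$webroot/html"/* "$webroot/html"/.[!.]*
}

# e.g. clean_webroot /var/www   # then re-run the FOG installer
```

Double-check the path you pass in before running anything with rm -rf in it.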
RE: Deploy automatically ?
People that are new to fog don’t see the value in registering normally - and that’s OK. But fog comes to life with registered computers - automatic host naming, automatic domain joining, automated startups, shutdowns, reboots, software & script deployments, printer management, tracking of who logs into and out of said computers, inventory reports, imaging history, and many other things. Many of FOG’s features, you cannot use without registering.
And after you try out registering & using these features, you will start to understand how unnecessarily hard you were working before.
RE: Wiki news page?
The wiki SVN article somewhat promotes upgrading to the developmental revisions…
I really think that the other upgrading methods should be ditched, IMHO. But others here feel otherwise.
At the least, the Upgrade To Trunk article and the SVN article ought to be merged. I've thought about doing this, but the SVN portion would be huge compared to the others, and I just haven't given it much time or thought.
And I’m not “In” enough to maintain the news section.
Sad truth is, although Tom is fuc**** awesome at what he does, he is largely a one-man-army and he has a full time job and wife and so on. He’s the driving force behind FOG.
JBob comes in 2nd, with massive improvements to the new FOG client.
The other developers aren’t active enough (IMHO) to be able to keep the news section updated.
I’m a forum troll, and I help people as I can, but I’m not “in” enough to keep it updated (IMHO).
I’m more than willing to try, but I may fall short sometimes…
RE: Fresh clean Ubuntu 16 with FOG Trunk
Over the last few weeks, working with Tom, I was able to test changes back and forth for Ubuntu 16 and Debian 8.
Both now install without modifications, without special commands.
Install Debian 8, just pull down fog and run the installer as normal. It works.
Install Ubuntu 16, just pull down fog and run the installer as normal. It works.
RE: Undionly.kpxe and ipxe.efi
Just created this article:
Latest posts made by Wayne Workman
RE: RHEL 8, CentOS 8, master branch, July 22, 2021
@sebastian-roth Yeah, today's tests look like a mess. Though, about what you're saying regarding this GPG key: all the tests use the same OS at the same patch level each day. If there were an issue with a GPG key in the master branch, wouldn't the same problem exist in the dev branch? Most days, this is not the case. Most days, only the master branch fails for CentOS 8 and RHEL 8.
There are ways to inject commands before installation, though the point of these tests is just to see in an automatic way if there are any issues that arise for a typical installation.
RE: RHEL 8, CentOS 8, master branch, July 22, 2021
Seems like the REMI GPG key changed. You need to manually run dnf update once and confirm the GPG key import.
Interesting this is not happening for the dev-branch tests.
@Sebastian-Roth I have done this. Which is why I’m bringing it up again.
RHEL 8, CentOS 8, master branch, July 22, 2021
I've been monitoring & have once cleaned up the daily installation tests, and for a while now RHEL 8 and CentOS 8 have been failing every day on the master branch, while the dev branch passes tests every day.
Given the dev branch is working fine, it would seem the fix is to release the current dev as a new version. What sort of shape is the dev branch in concerning all the other aspects of FOG? Is it in shape to release?
The external reporting graphs show 50 servers on 220.127.116.11, with overall about 460 servers operating on the dev branch.
There is another aspect to this - maybe we don't prioritize a release just for fixing CentOS 8 and RHEL 8. It's a legitimate question to ask, at least. External reporting shows CentOS 8 and CentOS Stream 8 being around 15 total for the dev branch. Pretty low figure. RHEL doesn't even register in the top 20 OS versions in use. Raspbian has a higher adoption on the dev branch than RHEL.
RE: Cloning with image from a real workstation to a VM ?
If you’ve sys-prep’d the system correctly and the image has drivers for the VM, then it should work fine. The easy route is just to build the image you want inside the VM, sysprep it, and capture it.
RE: New install failing
I changed the settings to manual, as it was already set to DHCP.
I’ve seen this behavior lots of times with Linux. If you do a DHCP release / renew with these commands:
dhclient -r && dhclient
it produces the same behavior: the old address is held on the interface along with the new one. I've found that rebooting clears it; I've not tried other things.
I'll work on a check for detecting two IPs on the chosen interface. If there are two, the installer should note this and exit, rather than falling on its face while people don't know what's going on. It can also offer advice on how to fix it (like rebooting).
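That check could be sketched like this. To be clear, this is an illustrative sketch, not the actual installer code; count_ipv4 and check_iface are hypothetical helpers, and the parsing assumes the usual `ip -4 addr show` output format:

```shell
#!/bin/sh
# Count IPv4 addresses on an interface by counting "inet " lines.
# Reads `ip -4 addr show <iface>` style text on stdin, so it can be
# exercised without a live interface.
count_ipv4() {
  grep -c '^[[:space:]]*inet '
}

# Bail out with advice if the chosen interface holds more than one IPv4
# address (e.g. a stale lease left behind by dhclient -r && dhclient).
check_iface() {
  n=$(ip -4 addr show "$1" | count_ipv4)
  if [ "$n" -gt 1 ]; then
    echo "Error: $1 has $n IPv4 addresses; a reboot usually clears the stale one." >&2
    return 1
  fi
}

# e.g. check_iface eth0
```

The installer would call something like check_iface early on and exit with that message instead of proceeding with a confusing half-broken setup.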