@Ph3noM said:
No it’s ok now, i just made a mistake.
I am very happy!
Thanks all for the help!
What was the mistake?
Firstly - this is not a fog problem.
You cannot delete the account you are logged into. Log into the other account you created and then in Control Panel, go to User Management -> Manage other users and delete the administrator account there.
@ITSolutions said:
@GFm Now to carve out the time to update those over to Partclone. Just a quick FYI in case you are not aware of this: to convert them, you don’t have to have the actual machine they go on. I found a spare machine, pulled the image down, and shut off the machine (DO NOT LET IT BOOT AFTER IMAGING), then changed the type in FOG and re-uploaded to the server. It doesn’t matter what hardware you put it on, as it never boots into Windows.
But by the same token, if you have the correct hardware for them, it is a perfect time to update the images; I am sure Windows updates are way behind. lol
Very nice.
There is a new snapin that allows editing of the tasks, and custom task creation.
I think changing the end from a reboot to a shutdown would make this a lot easier.
I’m going to quote one of @Tom-Elliott 's old posts:
Why not use the Location Plugin to do the transfers for you?
Heck, if you update to the Development versions, you don’t even have to set up rsync tunnels.
You can install the Nodes how you see fit. I’d recommend, for your case, installing all the servers as “Full Servers”, and then once the installation is complete, editing the /opt/fog/.fogsettings file to use:
snmysqluser='fogstorage'
snmysqlpass='fogstoragepasswordfromfogsettings'
snmysqlhost='IP.OF.Main.Server'
This way, all the fog servers at all of the buildings communicate to a single server.
Then you create your storage nodes based on the information of the other fog servers.
Create the appropriate groups as necessary.
Assign the images to the groups you want the images to “cross” between.
That way you have a centrally managed server, with pxe boot setup locally at each building.
The location plugin will attach to the hosts that belong at that particular building.
Please give us details on your experience or thoughts, and please feel free to ask questions. We are here to help.
@george1421 you never cease to amaze.
@danilopinotti Can you please provide us with a short video (use your smartphone, upload to YouTube, and then post here) showing what sort of error happens at the 500MB mark? Photos would work too, but it may be much more difficult to capture the error with photographs.
Can you tell us how many partitions you have on your base computer? Is it MBR or GPT? What model of computer is it? Is the HDD an SSD?
What is the image type? Single disk? Resizable? Raw?
We really need many more details to help; we are at the mercy of what details you provide to us. The more details you can give us, the better.
The way @Arrowhead-IT posted looks legit.
I’d probably name the path something along the lines of /disk2 personally, then mount the HDD to that directory (using /etc/fstab), and then modify /etc/exports to export that directory. You’d of course create the dev subdirectory, and create .mntcheck files both in the new directory and the dev directory. Then finally, create your storage management entries.
Note that the IDs (fsid values) inside /etc/exports must each be unique, and you’d need (at least at first) 777 permissions recursively on the directory once everything is built.
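As a sketch, assuming the new disk is mounted at /disk2, the resulting /etc/exports might look roughly like this (the /disk2 paths are illustrative; the option set mirrors what the FOG installer writes for /images, and the fsid values just need to be unique per export):

```
/images      *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/images/dev  *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
/disk2       *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=2)
/disk2/dev   *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=3)
```

After editing the file, exportfs -ra re-reads it without restarting the NFS server.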
@mtmulch
Welcome to FOG! Don’t hesitate to ask questions, we are here to help. And hopefully you’ll find some way of giving back to the project. See my signature for details. 
@need2 said:
go with Debian instead of Ubuntu for server installs.
Or Fedora 22.
https://wiki.fogproject.org/wiki/index.php/Fedora_21_Server
@Kiweegie said:
You’ll need to visit each desktop, I’d imagine, to set them to boot from NIC for PXE booting, unless they are set that way already. If they are, and you have some sort of inventory system (we use Lansweeper), you might be able to upload all the hosts and MAC addresses via a CSV file rather than having to manually register them.
Newer systems will allow you to set firmware settings over the network, I hear. I haven’t done it myself yet but I know it would save me a ton of foot-work in the future.
@Kiweegie said:
Zero-touch is the buzzword you want to mention to your senior team members.
Yup.
For some reason, people at my organization want to walk around hitting F12 to network boot. I couldn’t care less about making my job difficult for “job security”. Part of my goal in I.T. is to make my job easy, and to make my replacement’s job easy; anything less than that isn’t fair or right to your employer or to yourself. You should always push yourself to find better ways to manage more systems more efficiently. How did the Enterprise Administrator responsible for 20,000 computers get into his position? It darn sure wasn’t by walking around hitting F12 every time imaging needed to happen, or by walking around manually uninstalling one antivirus just to install a different one on 500 computers.
So yes, the buzzword is Zero-touch, “I’ll make EVERYTHING Zero-Touch, and the next guy can walk in behind me and easily pick up the ball due to my simple naming conventions, ample & well written documentation and resource citation, and well-configured infrastructure” … is what you should really be aspiring to.
@JJ-Fullmer Well, those were some refreshing things you just said. I’ll have to try this when I find some time.
@drose807 NFS mounting at the command line is as simple as:
mount x.x.x.x:/my/nfs/share /my/local/directory
For instance, to mount my home FOG server’s /images directory locally to a directory called /tempMount, it’d be:
mount 192.168.1.10:/images /tempMount
But that’s just at the command line. You would want something permanent, that’s where /etc/fstab comes in.
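For something permanent, the same mount from the example above can be expressed as a single /etc/fstab line (the IP and paths are the example ones from this post; adjust to your own server and mount point):

```
# /etc/fstab -- mount the FOG server's /images share at boot
192.168.1.10:/images  /tempMount  nfs  defaults  0  0
```

After adding the line, mount -a applies it without a reboot.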
However, an already-remote directory cannot be re-exported in Linux.
You’re best off creating a node of some sort, like what @Arrowhead-IT and I have suggested.
ALSO, Google is pretty awesome, if you guys haven’t noticed… 

@JJ-Fullmer said in Image Prep Script:
pnputil
Can you tell us more about that? I’m thinking I’m going to have to build a universal image now.
@PageTown You can create them like this, but substitute your new path for /images
touch /images/.mntcheck
touch /images/dev/.mntcheck
That’s it. They are literally just blank files; you don’t need to do any copying.
@mtmulch I would recommend just putting it on the network, setting it up, and then removing it from the network. That way is much simpler.
Just tell FOG you don’t want to use DHCP.
Then, later on, just manually set up DHCP. It’s easy; we have lots of examples here.
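As a rough sketch of what that manual DHCP setup looks like with ISC dhcpd (all addresses here are placeholders; undionly.kpxe is the usual iPXE binary for legacy BIOS clients):

```
# /etc/dhcp/dhcpd.conf -- point PXE clients at the FOG server
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  next-server 192.168.1.10;       # FOG server (TFTP)
  filename "undionly.kpxe";       # iPXE boot file for BIOS clients
}
```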
wiki worthy
I’m working on a new dnsmasq article, it’s nice not to have to look up everything. This is nice and short too.
@dolf Not really. It’s safer if you have FOG virtualized and take snapshots regularly, but fog trunk is pretty solid at the moment. I’ve been running fog trunk in production for about a year now.
I’ve been thinking A LOT about the many many problems with all the Vendor Class identifiers that Apple has…
Because PC vendor classes are so standard (like PXEClient:Arch:00000 and PXEClient:Arch:00007), and because Apple are extreme non-conformists, it makes no sense to try to define a class for each Apple device. It’s stupid.
I say: make ipxe.efi the default, and then make classes for the various PC PXEClient architectures.
Abandon Macs that are 32 bit. Just don’t worry about them.
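In dnsmasq terms, that approach might look roughly like this (a sketch only; the filenames and server IP are placeholders, and it matches on DHCP option 93, the client architecture, rather than on vendor class strings):

```
# Sketch: tag the standard PC PXE architectures, default everything else to ipxe.efi
dhcp-match=set:bios,option:client-arch,0    # PXEClient:Arch:00000 (legacy BIOS)
dhcp-match=set:efi64,option:client-arch,7   # PXEClient:Arch:00007 (x64 UEFI)

dhcp-boot=tag:bios,undionly.kpxe,,192.168.1.10
dhcp-boot=tag:efi64,ipxe.efi,,192.168.1.10
dhcp-boot=ipxe.efi,,192.168.1.10            # default: Macs and anything unmatched
```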