The node=storage and node=about pages are solved.
Solution: increase Timeout and ProxyTimeout to 120 seconds.
Currently only the list-all-hosts page still fails.
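For reference, a minimal sketch of that change, assuming a stock Apache setup (the config file path is an assumption and differs per distro, e.g. /etc/httpd/conf/httpd.conf on CentOS/RHEL or /etc/apache2/apache2.conf on Debian/Ubuntu):
Timeout 120
ProxyTimeout 120
Restart Apache afterwards (systemctl restart httpd or systemctl restart apache2).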
These days I did a migration and upgrade from FOG 1.4.2 to FOG 1.5.5, and on this version I have two problems.
First, I can't list my 88 storage nodes; the node=storage page fails with "service unavailable".
Second, I can't open the node=about page, with the same result as the first.
The node=host&sub=list page fails with a "This page isn't working" error while trying to list my 14,500 PCs, but node=group&sub=membership&id= works without problems.
Thanks in advance.
We have our images configured to install and configure the fog-client on first boot.
We use a custom service to do it, but it's also possible to do it as a scheduled task triggered when a NetworkProfile event is raised.
Our service configuration is:
sc.exe config FOGService start= delayed-auto
sc.exe failure FOGService actions= restart/60000/restart/60000/restart/ reset= 120
The SYSTEM account has problems with network access since it is a local system account.
And I have read that wusa.exe has some remoting problems.
Try this:
Use a PowerShell Snapin Pack with an installation script and the packages you want to install as its contents.
In that installation PowerShell script you must:
1. Elevate the script to an administrator user.
2. Launch the installation with admin credentials using Start-Process or Invoke….
Maybe it works.
I can't write an example right now; I'll do it later tonight.
It seems like you have an NVMe RAID 0; some Linux-based imaging solutions have problems with that.
As you say, you must change from RAID to SATA mode to capture/deploy; Clonezilla's partclone makes the same suggestion.
I can't see the PC that you linked, so I can't help you with the BIOS.
The computer should have an option to change it. Search a bit more.
Hi again @khalid:
Again, my suggestion is PowerShell.
Take a look at this post:
https://community.spiceworks.com/scripts/show/4045-install-standalone-updates-msu-files-via-powershell
Configure the snapin to be replicated to the location nodes, configure FOG to get the snapin from the location, and make sure this is configured in general, as the default and in the client settings.
Make sure the snapin has replicated to the server you need, and try it.
You may also need to change the quoting/separator characters in the Snapin Pack Arguments field.
@Sebastian-Roth
Yes, I meant that I corrected it in the post.
Add it to /etc/exports as an rw share,
then execute as root:
echo "/images/userbackup *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=2)" >> /etc/exports
exportfs -a
systemctl restart rpcbind   # I don't remember the exact service name; it may differ by distro
systemctl restart nfs-server
Try it if you can.
@willian
Yes, that's right.
If you read the screen carefully, partclone reports:
Device size (partition size)
Space in use
and
Free space.
I do it by creating an rw NFS share and rsyncing folders from fog.postinit.
You can do it against the FOG server or against another file server.
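A minimal sketch of what the fog.postinit part can look like (the server address, device name and destination folder below are just examples to adapt):
# mount the rw NFS share exported by the file server
mkdir -p /backup
mount -o nolock -t nfs 192.168.1.10:/images/userbackup /backup
# mount the Windows partition read-only and copy the folders you want
mkdir -p /ntfs
mount -t ntfs-3g -o ro /dev/sda1 /ntfs
rsync -a /ntfs/Users/ /backup/myhost/Users/
# clean up
umount /ntfs
umount /backup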
I usually do it.
I use PowerShell scripts, Chocolatey commands and .msi/.exe files as snapins for that, and I usually have no problems.
Sometimes it's not easy to write the scripts, but it's possible.
If the snapin pack download is not complete you see 0 KB sizes unless you look at the file properties; in the log it appears as downloaded.
I had the same log a few months ago with another client version, 0.11.12.
In my case the zip format was the problem.
In my experience, the most compatible zip format for snapin packs is created by sending the files to a compressed (zipped) folder from Windows 10.
Can you try it?
I'm not sure I've explained it correctly; I'll try to answer any questions.
Hi @willian
From this screenshot I think you have a problem with the dirty bit or the hibernation flag on the NTFS block device /dev/sda1.
You can do something like this in the fog.postinit script:
if [[ ${osid} -eq 9 ]]
then
    # clear the NTFS dirty bit
    ntfsfix -d /dev/sda1
    ## Begin: repeat for the rest of the NTFS partitions if you want
    ## End
    # mounting with remove_hiberfile drops the hibernation flag/file
    mount -t ntfs-3g -o remove_hiberfile /dev/sda1 /mnt
    umount /mnt
fi
This is the idea. I have code like that in my fog.postinit and fog.postdownload scripts to get access to the file system so I can deploy drivers and host-info data with FOG.
Repeat it for every NTFS partition.
Maybe it works.
This is a simple example; security is on your side:
#!/bin/bash
# positional arguments passed from the FOG snapin configuration
SERVER=$1
USERNAME=$2
PASSWORD=$3
FILE=$4
USER_TO_LAST=$5
# dump the login history of the given user to a file
echo "Result from last $USER_TO_LAST" &> "$FILE"
last "$USER_TO_LAST" &>> "$FILE"
# upload the result to the FTP server
lftp -u "$USERNAME,$PASSWORD" "$SERVER" << EOF
put $FILE
EOF
Configure it as in the image:
The snapin template is Bash.
Tested on Linux Mint 19.
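For reference, the script above expects five positional arguments, so the Snapin Arguments field would hold something like this (all values are hypothetical examples):
ftp.example.com backupuser backuppass /tmp/lastlogins.txt jdoe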
We have implemented that on Linux and Windows machines with the fog-client.
We have a small collection of snapins that create configuration templates/backups on some clients and upload them to a dedicated FTP server.
We have another small collection that retrieves/modifies information via web services too.
The secret is the recipe: the script code that will be executed by the fog-client.
My (limited) experience so far is: if you can script it, then FOG can do it. (Sometimes it can be very hard, and sometimes you simply shouldn't.)
Hi:
We have an implementation with 1 master, 77 nodes (growing) placed in different cities, and nearly 14,500 clients (growing).
We are using the ControlAccess, Site, Location and TaskTypeEdit plugins, and in the future FileIntegrity.
We have a little problem with the Site and Location interface: every time we update a group's site or location information, FOG returns a white page with the text {"msg":"Group updated!","title":"Group Update Success"} instead of the green dialog.
To solve this I had to change:
In hooks/addsitegroup.hook.php
Line 147:
. 'id="updatesite">' changed to . 'id="group-edit">'
In hooks/addlocationgroup.hook.php
Line 153:
. 'id="updateloc">' changed to . 'id="group-edit">'
It seems to work correctly now, but I don't know if this could cause problems in the future.
Thanks to everyone for your time and work on this project.