DB cleanup did the trick.
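For anyone who finds this later, the cleanup was roughly along these lines. A sketch only: the `fog` database name and the tasks table come from a stock 1.5.x install, the state IDs may differ on your schema, and you should dump the database first.

```
# Sketch only: back up, then remove the stuck queue entries.
mysqldump fog > /root/fog-backup.sql
# Which states count as "in queue" may differ on your schema;
# check the task states table before deleting anything.
mysql fog -e "DELETE FROM tasks WHERE taskStateID IN (1,2,3);"
```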
I’m using FOG 1.5.7 on Ubuntu Server 16.04.
This is not a big issue, but I always have something in the queue under Storage Group Activity on the dashboard.
Any idea what it is and how to remove it?
Here is a screenshot:
I looked at everything in Tasks and nothing appears.
Many thanks in advance for your help.
Hi, I’m using FOG 1.5.7 (upgraded from 1.5.6 and/or 1.5.5, I can’t remember) on Ubuntu 16.04.3.
I used the script in the tar file to install it.
Thanks for your answer. Sorry, but I don’t know how to do this.
I have exactly the same behaviour.
I mean, if I create a multicast session for 2 machines and join it with 2 already-registered PCs, the session won’t start,
whereas it does with 2 unregistered ones.
This is what I see under Images > Multicast:
This is what I have in Active Tasks:
And this in Active Multicast:
In a shell, this is what “ps aux | grep udp” gives:
root      3571  0.0  0.0   4500   744 ?      S    09:57   0:00 sh -c /usr/local/sbin/udp-sender --interface ens32 --min-receivers 2 --max-wait 300 --portbase 64776 --full-duplex --ttl 32 --nokbd --nopointopoint --file /mnt/FOG_iSCSI/FOG/W10_Remote_20191029/d1p1.img;/usr/local/sbin/udp-sender --interface ens32 --min-receivers 2 --max-wait 10 --portbase 64776 --full-duplex --ttl 32 --nokbd --nopointopoint --file /mnt/FOG_iSCSI/FOG/W10_Remote_20191029/d1p2.img;
root      3572  0.0  0.0   8704   828 ?      S    09:57   0:00 /usr/local/sbin/udp-sender --interface ens32 --min-receivers 2 --max-wait 300 --portbase 64776 --full-duplex --ttl 32 --nokbd --nopointopoint --file /mnt/FOG_iSCSI/FOG/W10_Remote_20191029/d1p1.img
root      7774  0.0  0.0   4500   844 ?      S    10:35   0:00 sh -c /usr/local/sbin/udp-sender --interface ens32 --min-receivers 2 --max-wait 300 --portbase 62458 --full-duplex --ttl 32 --nokbd --nopointopoint --file /mnt/FOG_iSCSI/FOG/W10_Remote_20191029/d1p1.img;/usr/local/sbin/udp-sender --interface ens32 --min-receivers 2 --max-wait 10 --portbase 62458 --full-duplex --ttl 32 --nokbd --nopointopoint --file /mnt/FOG_iSCSI/FOG/W10_Remote_20191029/d1p2.img;
root      7775  0.0  0.0   8704   828 ?      S    10:35   0:00 /usr/local/sbin/udp-sender --interface ens32 --min-receivers 2 --max-wait 300 --portbase 62458 --full-duplex --ttl 32 --nokbd --nopointopoint --file /mnt/FOG_iSCSI/FOG/W10_Remote_20191029/d1p1.img
root     11289  0.0  0.0   4500   852 ?      S    11:09   0:00 sh -c /usr/local/sbin/udp-sender --interface ens32 --min-receivers 2 --max-wait 300 --portbase 55058 --full-duplex --ttl 32 --nokbd --nopointopoint --file /mnt/FOG_iSCSI/FOG/W10_Remote_20191029/d1p1.img;/usr/local/sbin/udp-sender --interface ens32 --min-receivers 2 --max-wait 10 --portbase 55058 --full-duplex --ttl 32 --nokbd --nopointopoint --file /mnt/FOG_iSCSI/FOG/W10_Remote_20191029/d1p2.img;
root     11290  0.0  0.0   8704   780 ?      S    11:09   0:00 /usr/local/sbin/udp-sender --interface ens32 --min-receivers 2 --max-wait 300 --portbase 55058 --full-duplex --ttl 32 --nokbd --nopointopoint --file /mnt/FOG_iSCSI/FOG/W10_Remote_20191029/d1p1.img
fog-adm+ 11896  0.0  0.0  14228   868 pts/0  S+   11:20   0:00 grep --color=auto udp
I have only one session running. Is it normal to get so many processes?
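For what it’s worth, each session spawns a wrapper `sh -c` plus the `udp-sender` itself, so two processes per session are expected; the older 09:57 and 10:35 pairs look like leftovers from earlier sessions. A quick way to check and clean up (sketch; make sure no session is actually running before killing anything):

```shell
#!/bin/sh
# Show any udp-sender processes still around; say so if there are none.
pgrep -af udp-sender || echo "no udp-sender processes"
# Once you are certain no multicast session is active, leftovers can
# be stopped (left commented out on purpose):
# pkill -f udp-sender
```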
Using ipxe.kpxe instead of undionly.kpxe solved the issue. Thanks for your help.
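For anyone else hitting this: the change is just pointing the DHCP boot filename at the other binary, which also ships with a standard FOG install. A sketch for ISC dhcpd (the address is an example; adapt to your setup):

```
# /etc/dhcp/dhcpd.conf fragment (sketch)
next-server 10.33.0.11;   # example FOG server address
filename "ipxe.kpxe";     # instead of undionly.kpxe
```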
Thank you! I’ll keep you informed.
Thanks for your clear answer.
I’ll first update the machines’ BIOS, but Dell computers are less and less open, so I’m pretty sure it won’t help.
Then I’ll give ipxe.kpxe a try, hoping it will be compatible with all the Intel as well as Broadcom hardware we have.
And finally, I’ll set up the 2k12 filters (which is more or less already done, in a way).
I’m using FOG server 1.5.7 on Ubuntu 16.04.3.
We have a little issue with some Intel cards: PCI-E Intel 82574 chips on Dell OptiPlex 5050 and 5060.
I first get an IP and undionly.kpxe is downloaded, but then I hit the error:
DHCP failed: no configuration methods succeeded.
In the iPXE shell I tried to bind an IP manually and then ping; it turned out the IP was already bound,
but the ping did not work.
I tried everything I had in mind without any success; I even disabled the integrated NIC, which did not help.
What makes this strange is that the same card works without any issue in another PC (a Dell OptiPlex 9020, for instance).
I finally made it work using intel.kpxe instead of undionly.kpxe!
Any idea how to make it work with undionly.kpxe?
Thanks in advance,
We have 2 servers that are synced.
Each is the master of its own storage group.
Server 1 is also master for both, so Server 1 replicates itself onto Server 2.
And I export/import from time to time.
This way they are both independent and synced.
The servers run Ubuntu 16.04 / FOG 1.5.6.
On both servers, a du in the images folder gives:
1340230528, i.e. about 1.3 TB
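For comparison, this is roughly how I check usage on each server (the path is our mount point; adjust `IMAGES_DIR` to your own layout):

```shell
#!/bin/sh
# Print the total size of the images directory in bytes; this is the
# number the dashboard pie should roughly reflect.
IMAGES_DIR="${IMAGES_DIR:-/mnt/FOG_iSCSI/FOG}"
du -sb "$IMAGES_DIR" | awk '{print $1}'
```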
On Server 1, the pie:
On Server 2, the pie:
In the shell both servers show the same amount of data, but not in the pie. Any idea?
It worked! Many thanks, I had not even noticed this tab!
My English is only fair, and that may be why my message was not clear enough.
To summarize:
- We had a NAS LUN linked to the legacy FOG server.
- We rarely do unicast deploys; 95% are multicast.
- Running 3 or more multicasts starts causing slowdowns in the WebUI.
- Sometimes we have up to 10 simultaneous multicasts, very rarely more, even with 23 classrooms on the main site.
As the NIC used to access the WebUI is the same one that pushes the images over the network, we thought we could mitigate this by using a storage node.
We are not stuck on one model or another; we are still searching for the best compromise.
And we have/had a second issue: image export/import. We plan to use replication in the end, but for the first load, which represents several TB, syncing over a WAN link is not possible.
It seems this issue is now solved; the remaining “problem” is only cosmetic. But maybe I did something wrong, as this was the first time I moved images from one storage to another and tried the export/import.
I had already tried what you suggested and unfortunately it did not help.
I had not noticed the “deletemulti” in the URL. To be sure, I tried again with one
image only, and strangely the result is exactly the same:
[Wed Jun 26 17:03:26.511411 2019] [proxy_fcgi:error] [pid 28204] [client 10.33.0.200:39938] AH01071: Got error 'PHP message: PHP Warning: trim() expects parameter 1 to be string, array given in /var/www/html/fog/lib/fog/fogbase.class.php on line 1386\n', referer: http://10.33.0.11/fog/management/index.php?node=image&sub=deletemulti
whereas this time I’m sure I did not check more than one image.
For info, my images are not stored in /images but in /mnt/FOG_iSCSI (a mounted iSCSI LUN), so of course I adapted the chown to my configuration.
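Concretely, the adaptation was just pointing the usual ownership fix at the real mount. A sketch: `fogproject` is the service account a standard 1.5.x installer creates, and the path is mine; match both to your install.

```shell
#!/bin/sh
# Re-apply ownership after moving images to a non-default path.
# FOG_USER and IMAGES_DIR are assumptions; adjust them to your install.
FOG_USER="${FOG_USER:-fogproject}"
IMAGES_DIR="${IMAGES_DIR:-/mnt/FOG_iSCSI}"
chown -R "$FOG_USER" "$IMAGES_DIR"
```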
A little bump for this remaining issue.
OK, it finally seems to work.
The error occurred because I deleted the test image thinking it would only remove the info from the database, whereas it had truly deleted the image on the storage node, even though the image description showed it as being in the default storage.
Now, new images are stored in the storage associated with the location of the captured machine, regardless of the storage selected at image creation.
It’s quite confusing, because it means we could think images are stored in the default storage when they are not, and vice versa.
Is this a bug or normal behaviour?
Does the storage setting at image creation have any influence, or does everything depend on the location of the computer that deploys or captures?
No worries. Right now the FOG server is an Ubuntu 16.04 VM built on ESXi 5.5.
The VM has 2 vNICs: one on our education network and one on the storage network.
Each vNIC is backed by 2 physical Ethernet NICs, teamed and configured to use IP hashing to load-balance traffic.
The Ubuntu server is configured to use multipath iSCSI to access the NAS LUNs, and multicast is used massively to install our classrooms.
We have to install between 200 and 300 machines per week. At the start we see no severe issue, but as scheduled tasks start, the WebUI slows down dramatically; with 2 or 3 tasks running we can already see slowdowns.
Using nload and top to find where the issue could be, we only saw very high network load, with no issue on memory or CPU.
We deploy at between 5 GB/min and 27 GB/min depending on the receiving hardware. Nothing other than the FOG server shows slowdowns.
So we thought the multiple multicast sessions were cannibalizing the server’s bandwidth, and that having one server for management plus a storage node for deploys would be a good idea.
Sorry for the long explanation. Are we wrong?
Also, we are multi-site; right now only our main site is testing FOG, and we already have several TB of images.
In the end, syncing via the WAN link will be fine, but for the first load we will have to ship a disk with all the images on it. That’s the other reason we would like to test import.
I have a FOG server installed with the defaults, with images stored on a local mount (from an iSCSI link).
I wanted to move the images from the server to a storage node.
So I built a storage node, unmounted the images LUN from the server and mounted it on the storage node.
I wanted to move the images from one storage node to another using the WebUI, and it does not seem to be possible.
So I exported the image info for a specific image and tried to reimport it, but the storage info remained the same.
So I deleted it again and recreated the image manually,
using exactly the same parameters except the storage node (choosing the new one).
Unfortunately this did not work either; I always get this error:
Could not mount images folder (/bin/fog.download)
On the other hand, I can create new images and deploy them without any issue (they are stored on the new storage).
I tried the Location plugin to enforce the use of the new storage node on my test PC, but it did not help.
Any idea what my problem could be?
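In case it is NFS-related: “Could not mount images folder” usually means the client could not NFS-mount the node’s images path. On a standard install the installer writes exports for /images; with a non-default path they would need to point at the mount instead. A sketch, adapted from the stock layout:

```
# /etc/exports on the storage node (sketch for a non-default path)
/mnt/FOG_iSCSI     *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/mnt/FOG_iSCSI/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
```

After editing, `exportfs -ra` reloads the export table on the node.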
I have only one capture on the branch we are testing with; we will see if the image gets deleted or not.
Do you know if it is possible to schedule replication at specific times? We have about 2.5 TB to sync and I would like to do it during night shifts.
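As far as I know there is no built-in schedule, but since replication is handled by the FOGImageReplicator service on 1.5.x, a crude workaround is to start and stop it from cron (sketch; the times are examples):

```
# root crontab (sketch): replicate only at night
0 22 * * * systemctl start FOGImageReplicator
0 7  * * * systemctl stop  FOGImageReplicator
```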
I read that setting a node with empty storage as master could wipe all the nodes in that storage group. Does this mean a new capture should only be done on the master storage node, otherwise it will be erased on the node where it was created?