FOG Multicast Problems / Partclone fails to finish
-
My problem is back again. I tried stopping and restarting the service, but I'm hanging up again at the Partclone screen.
Ran this on both Storage Node and Server…
root@Fog:/# ps ax | grep udp
12432 pts/1    S+     0:00 grep --color=auto udp
root@Fog:/# service FOGMulticastManager stop; sleep 5; service FOGMulticastManager start
 * Stopping FOG Computer Imaging Solution: FOGMulticastManager          [ OK ]
 * Starting FOG Computer Imaging Solution: FOGMulticastManager          [ OK ]
Also…
Tailed the multicast log file on the Storage Node; output below…
[06-29-16 4:31:48 pm]  [FOG ASCII-art banner]
 ###########################################
 #     Free Computer Imaging Solution      #
 #     Credits:                            #
 #     http://fogproject.org/credits       #
 #     GNU GPL Version 3                   #
 ###########################################
[06-29-16 4:32:07 pm] Interface Ready with IP Address: 172.16.1.22
[06-29-16 4:32:07 pm]  * Starting MulticastManager Service
[06-29-16 4:32:07 pm]  * Checking for new items every 10 seconds
[06-29-16 4:32:07 pm]  * Starting service loop
[06-29-16 4:32:07 pm] | Sleeping for 10 seconds to ensure tasks are properly submitted
[06-29-16 4:32:17 pm] | 0 tasks to be cleaned
[06-29-16 4:32:17 pm] | 1 task found
[06-29-16 4:32:27 pm] | 0 tasks to be cleaned
[06-29-16 4:32:27 pm] | 1 task found
[06-29-16 4:32:37 pm] | 0 tasks to be cleaned
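For reference, the tail command I used on the Storage Node was just a follow on the multicast log. The path below is the stock location as far as I remember, so adjust it if your install differs:

# follow the FOG multicast manager log on the storage node
# (default path on a stock install, if I remember right)
tail -f /opt/fog/log/multicast.log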
I’ll be here all day. I’m working on imaging a lab. Thanks!!
Any suggestions are helpful!
Cheers,
Joe
-
@Developers
Any thoughts here?
-
Re-update and see if it is working?
-
-
Wed Jun 29, 2016 12:17 pm
Running Version: 8140
SVN Revision: 5698
Is that current? I don't see any options under…
FOG Configuration > Kernel Update > Published Kernel
-
@Joe-Gill not even close. We’re not trying to update kernels. You need to update FOG as a whole.
-
So what’s the easiest method to update FOG?
-
-
Well, I updated via SVN. I still have the issue. I'm also experiencing an issue reading the Dashboard disk usage. I can send you a screenshot of that if you'd like.
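(For the record, the update I ran was roughly the following; the SVN URL and paths are from memory, so double-check them against the wiki before relying on this:)

# check out (or update) the FOG trunk working copy
# (repository URL from memory -- verify against the FOG wiki)
svn co https://svn.code.sf.net/p/freeghost/code/trunk /root/fogproject
# re-run the installer from the fresh checkout
cd /root/fogproject/bin && ./installfog.sh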
You cannot delete members of existing groups…
Also cannot add members to new groups…
Any help is greatly appreciated.
Thanks!
-
@Joe-Gill said in FOG Multicast Problems / Partclone fails to finish:
I’m also experiencing an issue reading the Dashboard disk usage. I can send you a screenshot of that if you’d like.
Post that to this thread:
https://forums.fogproject.org/topic/7875/storage-group-activity-inconsistencies
Also, try to clear out the multicast entries in your DB with the commands found here:
https://wiki.fogproject.org/wiki/index.php?title=Troubleshoot_Downloading_-_Multicast
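If it helps, the cleanup on that page boils down to roughly the following. The table names are from memory, so verify them against the wiki before running anything:

# stop the multicast manager so it does not pick the stale task back up mid-cleanup
service FOGMulticastManager stop
# remove the stale multicast session rows from the fog database
# (table names from memory -- confirm against the wiki page above)
mysql -u root fog -e "DELETE FROM multicastSessionsAssoc; DELETE FROM multicastSessions;"
# bring the manager back up
service FOGMulticastManager start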
-
How about adding members to a group? I can’t add any members to new or existing groups.
-
@Joe-Gill That would be another, separate problem in the current FOG trunk that probably doesn't directly impact the multicast process I'm trying to help you with. Please create a bug report about adding hosts to groups, provide your version and the exact click-for-click steps you're doing, and also include the Apache error logs from Web Interface -> FOG Config -> Log Viewer -> Apache error logs.
-
No problem.
I will create a new post for the group problems.
As for the multicast issue, I have followed the Wiki on clearing the DB. That went smoothly. I'll retry the multicast task now. Thanks!
-
@Joe-Gill I know what's wrong with the groups and all the other similar issues, but I need to drop my truck off at the shop. All will be fixed in a couple of hours, OK?
-
OK, this is bizarre! I restarted the servers, cleared the DB of non-essentials, and stopped and restarted the services. Everything appeared to be going well until it got past the first two partitions. Then the session locked up on the last partition…
I tailed the log file and it showed this…
UDP sender for (stdin) at 172.16.1.22 on eth0
Broadcasting control to 224.0.0.1
[06-29-16 9:12:49 pm] | 0 tasks to be cleaned
[06-29-16 9:12:49 pm] | 1 task found
[06-29-16 9:12:49 pm] | Task (9) Multi-Cast Task is already running PID 2209
New connection from 172.16.19.108  (#0) 00000009
New connection from 172.16.19.141  (#1) 00000009
New connection from 172.16.19.110  (#2) 00000009
New connection from 172.16.19.142  (#3) 00000009
New connection from 172.16.19.105  (#4) 00000009
New connection from 172.16.19.131  (#5) 00000009
New connection from 172.16.19.116  (#6) 00000009
New connection from 172.16.19.140  (#7) 00000009
New connection from 172.16.19.144  (#8) 00000009
New connection from 172.16.19.106  (#9) 00000009
New connection from 172.16.19.103  (#10) 00000009
New connection from 172.16.19.111  (#11) 00000009
New connection from 172.16.19.115  (#12) 00000009
New connection from 172.16.19.136  (#13) 00000009
New connection from 172.16.19.113  (#14) 00000009
New connection from 172.16.19.130  (#15) 00000009
New connection from 172.16.19.135  (#16) 00000009
New connection from 172.16.19.118  (#17) 00000009
New connection from 172.16.19.138  (#18) 00000009
[06-29-16 9:12:59 pm] | 0 tasks to be cleaned
[06-29-16 9:12:59 pm] | 1 task found
[06-29-16 9:12:59 pm] | Task (9) Multi-Cast Task is already running PID 2209
[06-29-16 9:13:09 pm] | 0 tasks to be cleaned
[06-29-16 9:13:09 pm] | 1 task found
[06-29-16 9:13:09 pm] | Task (9) Multi-Cast Task is already running PID 2209
...
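One rough sanity check is counting how many receivers the sender has actually accepted and comparing that against the number of machines queued in the lab. I'm assuming the stock log location here, and the count is cumulative across the whole log, so it's only a ballpark:

# count accepted receivers; compare against the number of clients waiting on the session
# (cumulative across the whole log, so treat it as a rough check only)
grep -c "New connection" /opt/fog/log/multicast.log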
Any ideas?? I’ll be around until 5:30 PM MST this evening… Thanks!
-
Awesome! Tom, on that note, the multicast session I had problems with just took off! So I jumped the gun! I apologize! Thanks!!
-
@Joe-Gill said in FOG Multicast Problems / Partclone fails to finish:
the multicast session I had problems with… Just took off!
That’s what we call the multicast wait time. The default is 10 minutes, which I think is a bit much in most cases. At work, mine is set at 2 minutes.
Web GUI -> FOG Configuration -> FOG Settings -> Multicast Settings -> FOG_UDPCAST_MAXWAIT
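For what it's worth, you can also confirm the wait value the server is actually using by looking at the running sender's command line; the flag name below is from the udpcast manpage as I remember it:

# show the running udp-sender and its arguments;
# the configured wait should appear as a --max-wait value (flag name from memory)
ps ax | grep [u]dp-sender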
-
Groups, and all the other originally broken (but largely unknown to most) items, are now fixed.
Please verify on your end, though I have tested it myself to be on the safer side.
-