r993 is out.
Should contain the function that was missing for Multicast and hopefully allow download of the snapin files from the proper location now.
Based on my findings in your log file @KyroDK, it’s because the checkIn function doesn’t exist in any of the files I found. It’s called, but never defined, which is why you’re seeing this problem.
I’ll add this function and try to get this reposted for you today.
While on the 3.12 kernel, does it affect the speeds of the known-good systems?
By format, I’m referring to 4k (Advanced Format), or regular format drives. ext3 or ntfs should work regardless.
I’m assuming this issue is happening on both XP and Windows 7? Or is it limited to only Windows 7 machines?
I ask because it could be a permissions issue. I don’t know much about how to install local printers to a host as I haven’t had to play with them too much. All of our printers are on a printer server, so we use the network printer option for our systems. It works, mostly, but once in a while we get the IDS Message, and these are on Windows 7 systems. Once we acknowledge those messages they install fine. We don’t get those messages or issues on our Windows XP machines though.
It makes me think the printer is trying to install as the local user rather than the Administrator of the system.
I’m looking into the file download issue, but I think I’ve got the fix for that already.
I don’t know if I uploaded the commit for that yet, but I think this should do the trick:
In file:
[code]{fogwebdir}/service/snapins.file.php[/code]
Edit the line that has this: (ON OR AROUND LINE 55)
[php]@readfile($snapinTask->getSnapin()->get('file'));[/php]
Make it say:
[php]@readfile($GLOBALS['FOGCore']->getSetting('FOG_SNAPINDIR').'/'.$snapinTask->getSnapin()->get('file'));[/php]
That should get you back to downloading the file and the system should try installing the file after that.
I’ll try to look into this to make sure this works as expected in the next day or two.
As for the methods, the problem isn’t so much the task itself; because you can schedule it, it actually creates an Image Task based on the current system. I suppose I could add a method to delete the image task after creating it when it’s one of the Snapin types, but I have to figure out the best approach first.
You might try restarting the nfs service and the portmap/rpcbind service
FOR UBUNTU OS:
[code]sudo service portmap restart
sudo service nfs-kernel-server restart[/code]
FOR REDHAT/CENTOS/FEDORA OS:
[code]service rpcbind restart
service nfs restart[/code]
See if this helps you out; sometimes a simple service restart will do it. It may not in this case, but you never know.
Also,
To my knowledge, FTP is used to actually transfer the image from /images/dev to /images, as the NFS export for /images is mounted read-only.
Make sure your server’s fog user password is set, and that that password is what’s set as the Storage Management password and in FOG Settings->FOG_TFTP_FTP_PASSWORD
Hopefully this helps.
Scott,
Check the switch between your client and your server. As these systems’ network drivers never worked until this latest kernel, how were you imaging them before? Are other clients affected by slow speeds?
I don’t know how much the kernel will help, but you can try mine, as it has more drive options available, like SCSI support and the like. Maybe this will work, maybe it won’t, as I don’t know the format the drive is using.
My kernel is on 3.12.0
Scott,
Can you try my new kernel? It is 3.12.0 and had a couple new drivers added to it, I’m keeping my fingers crossed that this is the driver you need.
I don’t think the kernel is the issue here, as it’s able to communicate with your FOG server and gets to the point of trying to mount your NFS share, which comes off the server.
On your server, can you let us know what OS you’re using? It’ll help us with command troubleshooting.
FOR UBUNTU TRY:
[code]sudo service portmap restart
sudo service nfs-kernel-server restart[/code]
FOR CENTOS/REDHAT/FEDORA TRY:
[code]service rpcbind restart
service nfs restart[/code]
Can you please give your Apache logs a look:
[code]/var/log/apache2/error.log[UBUNTU]
/var/log/httpd/error_log[REDHAT][/code]
And see what it’s telling you? That way I can make the proper tweaks to the file to help you further.
If you ever need help, please don’t be afraid to ask
Timikana,
Don’t forget to mount your NFS share to the host and place the file there. The init.gz file that gets loaded is only around 40 megabytes, and I imagine the disk you’re attempting to clone is quite a lot larger.
Maybe try:
[code]mkdir /tmp/nfsshare
mount -t nfs <IP.OF.FOG.SERVER>:/images/dev /tmp/nfsshare
dd if=/dev/sda of=/tmp/nfsshare/windowsubuntu bs=4096 conv=noerror,notrunc[/code]
Can you attempt a multicast task and, while it’s trying to run, attach a copy of the Apache error logs? I can try to see why it’s not working. I haven’t had much time to play with multicast, especially as I don’t create multicast jobs where I work, so it’s not something I’m fully aware of yet.
@Albatros,
If you delete the Active Task for the host but leave the snapin task, for now, it will deploy the snapins as expected. I haven’t yet figured out a good method for making snapin-only deployment work.
There are many ways you can do this. Yes, you can task all jobs individually. Or you can create a group containing the hosts, and tasking that group will create all the tasks individually, but from one location for you. Or you can do the multicast thing.
Your system management queue is telling you how many systems are imaging at the same time. Once that limit is hit, then systems after that number are in queue until a slot opens up.
This means you can (unicast) create a task for 10 systems (10 is usually the default) even if they all use different image IDs. Turn all of those systems on at the same time and they’ll all start imaging. Anything more than 10 (let’s just say you have 15 systems), the extra 5 will wait for their turn to image.
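The queue behavior described above can be sketched in a few lines of Python. This is purely illustrative: the slot limit of 10 matches the usual default mentioned above, and the host names are made up.

```python
from collections import deque

def assign_slots(hosts, max_slots=10):
    """Split hosts into those imaging now and those waiting in the queue."""
    active = hosts[:max_slots]          # these start imaging immediately
    queued = deque(hosts[max_slots:])   # these wait for a free slot
    return active, queued

hosts = [f"host{n}" for n in range(1, 16)]  # 15 systems, 5 over the limit
active, queued = assign_slots(hosts)
print(len(active), len(queued))  # prints: 10 5
```

When an active host finishes, the next queued host would take its slot (a `queued.popleft()` in this sketch).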
The reason, I think, this is faster is because the decompression is being done by the clients (the ones receiving the image), whereas in multicast it’s all done on the server before getting to the client.
In review, in multicast deployment, the server is not only imaging all of the clients, hosting the database and the GUI, but it’s also performing the additional task of decompressing the image and sending it to the clients. In unicast, the clients are doing the majority of the leg work.
Let’s try to put that in perspective. Multicast should theoretically be faster because it only has to decompress the image once, but it also requires that all clients receiving the image are getting the data and placing it on the drive at the same time. They have to continuously sync themselves with what the server is doing (or vice versa) so everything stays on the same page. It may initially start out faster because the server doesn’t have any issue starting off; however, as time drags on, the systems may drift out of sync, so the server has to wait until all systems are matched up.
Theoretically, unicast should be slower because each client creates its own link to the server. It isn’t slowing the server down, though; timewise it would be slightly slower because it’s up to the clients to do the work. However, there’s no requirement to keep everyone in the same sync frame, so each client is free from waiting on other machines.
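To make the sync argument concrete, here’s a toy Python model. All numbers are invented, and it only illustrates the per-chunk waiting cost; it ignores multicast’s real bandwidth savings. In multicast the server waits for the slowest client on every chunk, while in unicast each client just runs at its own pace.

```python
def multicast_time(chunk_times):
    """Server waits for the slowest client on every chunk."""
    return sum(max(per_client) for per_client in chunk_times)

def unicast_makespan(chunk_times):
    """Clients run independently; total time is the slowest client's sum."""
    per_client = zip(*chunk_times)  # regroup chunk times by client
    return max(sum(times) for times in per_client)

# chunk_times[c][i] = seconds client i spends on chunk c (made-up numbers)
chunks = [
    [1.0, 1.2, 0.9],
    [1.1, 0.8, 1.3],
    [0.9, 1.0, 1.2],
]
print(multicast_time(chunks))    # sum of per-chunk maxima
print(unicast_makespan(chunks))  # slowest client's total
```

With these made-up timings, the synced multicast run takes longer than the slowest independent unicast client, which is the waiting effect described above.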
It should be shorter but I’ve never had any luck with multicast. If you set all jobs as unicast it should be about 15 to 20 minutes flat for all 10. They will all image at the same time even on unicast.
I’m rebuilding the kernel. Maybe the new one will work.
It should load now. Again, I don’t know how well it will work, but it’ll be at the same link:
[url]https://mastacontrola.com/fogboot/kernel/bzImageIntelChips[/url]