Capture task not completing after finishing and then loops
-
I think you should go through the basics.
Make sure that
/images/.mntcheck
exists. Set world-writable permissions on /images with
chmod -R 777 /images
Double-check that you can FTP into the /images directory from another computer using the credentials in Storage Management -> [storage node name] -> User & Pass. There are instructions for that here: https://wiki.fogproject.org/wiki/index.php/Troubleshoot_FTP
Make sure you have enough free space on the server’s partition that holds /images using
df -h
Try to do a test upload from another computer and see if it works or not.
If you’re comfortable with deleting the image, try to delete it and then re-create and re-upload.
Report back with what you find. Ask more questions. We are here to help.
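Those basics can also be checked in one pass with a small script. This is just a sketch: the /images path is the one from this thread, everything else is generic POSIX shell.

```shell
#!/bin/sh
# check_images DIR - run the basic storage checks listed above
check_images() {
    dir="$1"

    # 1. The mount-check file must exist
    if [ -f "$dir/.mntcheck" ]; then
        echo "mntcheck: ok"
    else
        echo "mntcheck: MISSING (touch $dir/.mntcheck)"
    fi

    # 2. World-writable permissions: the octal mode should end in 7
    perms=$(stat -c '%a' "$dir")
    case "$perms" in
        *7) echo "permissions: world-writable ($perms)" ;;
        *)  echo "permissions: NOT world-writable ($perms), run chmod -R 777 $dir" ;;
    esac

    # 3. Free space on the partition that holds the directory
    df -h "$dir" | tail -1
}

# On the FOG server you would run: check_images /images
```

The FTP test still has to be done from another computer by hand, per the wiki link above.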
-
-
/images/.mntcheck exists
-
The image directory is 777 and I can FTP to it from other computers.
-
Currently there is 1.7TB of free disk space. I am trying to take an image of at most ~1.1TB.
I will try to do a test upload from a different computer now.
-
-
@Wayne-Workman I did a test from a different PC and the problem still persists. When the upload finishes it immediately starts to reupload again.
Not sure if this matters or not, but I have the following settings set:
OS - Win7
Image Type - Multiple Partition - All Disks
Partition - Everything
Compression - 0
Protected - Unchecked
Image Enabled - Checked
Replicate - Unchecked
This is what I have used in the past and never had a problem.
-
If you use this URL:
x.x.x.x/fog/service/jobs.php?mac=aa:aa:aa:aa:aa:aa
(replace x.x.x.x with FOG server’s IP and aa:aa:aa… with MAC address of the problem host)
You should see either
#!ok
or
#!nj
#!ok means there is a job, and #!nj means no job.
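If you'd rather check that from a script than a browser, here's a sketch; the IP and MAC are placeholders, and only the two reply strings described above are assumed.

```shell
#!/bin/sh
# check_job REPLY - translate the raw jobs.php reply into a verdict
check_job() {
    case "$1" in
        '#!ok') echo "job exists for this host" ;;
        '#!nj') echo "no job for this host" ;;
        *)      echo "unexpected reply: $1" ;;
    esac
}

# On a real network you would fetch the reply first, e.g.:
# reply=$(curl -s "http://x.x.x.x/fog/service/jobs.php?mac=aa:aa:aa:aa:aa:aa")
# check_job "$reply"
```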
I would expect you to see
#!ok
for the host you’re having problems with.
Using the CLI on the FOG server, you can clear out all jobs that are active or queued and whatnot using these steps:
mysql
use fog;
update tasks set taskStateID=5 where (taskStateID=0 or taskStateID=1 or taskStateID=2 or taskStateID=3);
But, these are just to clear the task and to see if the web interface is reporting correctly or not for this problem host… I’m not sure what the problem is, we just have to keep looking and figure it out.
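The same clear-out can be run non-interactively. It's shown below as a dry run so nothing gets cleared by accident; uncomment the mysql line on the FOG server itself (and add -u/-p options if your install needs credentials).

```shell
#!/bin/sh
# Mark every queued or in-progress task (states 0-3) as complete (state 5).
# The IN (...) list is equivalent to the OR chain in the interactive steps.
SQL='update tasks set taskStateID=5 where taskStateID in (0,1,2,3);'

echo "would run against database fog: $SQL"
# mysql fog -e "$SQL"
```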
If you don’t mind, what is the output of this?
ls -lahRt /images
I’m just wanting to see the output from the problem image.
Also, can you figure out what filesystem you have on your server? Ext3 has major issues with deleting and moving large images, and you have a very large image. Easily the largest one I’ve ever heard of on the forums.
-
@arainero I’m wondering if you could do this as a test.
Upload your image again, but when the system should shut down or restart, unplug the computer during the BIOS POST. What I’m interested in seeing is if the task completes AND if the upload is just missing the shutdown command at the end of the image capture. It appears that you tried to capture an image from another machine. What size of image were you trying to capture?
@Wayne-Workman Maybe you can answer this. It was my understanding that you need to have 2 times the free space of the image you are trying to capture in your /images directory. If the OP has 1.7TB free space and is trying to capture a 1.1TB image, that breaks the rules as I understand them. Is my understanding correct?
-
@george1421 I don’t know enough about it to give an answer. I do know that all uploads go to /dev, and then FTP is used to move them from /dev to /images. I don’t know if it’s a copy+delete or a move. I do know that Ext3 is really terrible for FOG.
-
@Wayne-Workman For some reason I thought it was a copy-and-delete function; that’s why you need 2x free space. But in the OP’s case I think something else is going on. I know on one of the early trunk builds FOG was capturing my reference image and then rebooted instead of shutting down, but in the OP’s case it doesn’t appear that the task was marked complete, so on the next reboot it continues to try to capture the image.
-
-
@Wayne-Workman “#!nj” is being reported. There are currently no active tasks, if that makes a difference with this test.
Here is the output of the command: 0_1449550371121_fogout.txt (I had to upload it; it was being marked as spam)
-
@george1421 After the computer finishes taking the capture, the computer does not shut down or reboot. It just starts to take the capture again.
It doesn’t delete the image either, just keeps adding to it. The one drive is 60 GB and the other is 1 TB.
-
@arainero According to the output you provided, the problem computer’s MAC is
fcaa14310ad4
, and it’s not properly moving from the /dev folder to /images. That move is done via FTP at the very end of the upload.
I can see that the image is about 1,010 GB roughly.
I think something is wrong with your FTP credentials. Those are the username and password set in your storage node here: Storage Management -> [node name] -> User and Pass
Use those credentials to FTP into the server from a remote computer. Make sure you can see what’s inside of /images and /images/dev during the FTP session. You should see a folder called
/images/dev/fcaa14310ad4
. Try to upload a file to the /images folder, see if that works. https://wiki.fogproject.org/wiki/index.php/Troubleshoot_FTP#Testing_FTP
What filesystem are you using?
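To answer the filesystem question from the CLI, here's a quick sketch (/images is the path from this thread; df -T is the GNU coreutils flag that adds a filesystem-type column):

```shell
#!/bin/sh
# fs_of DIR - print the filesystem type of the partition holding DIR
fs_of() {
    df -T "$1" | tail -1 | awk '{print $2}'
}

# On the FOG server: fs_of /images   (prints e.g. ext3, ext4, or xfs)
```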
-
and, you could just manually move the folder temporarily. We still need to get this fixed, though.
mv /images/dev/fcaa14310ad4 /images/ExactImageNameGoesHere
Please still try to figure out why the move via FTP is not working.
-
@Wayne-Workman I am taking another image now since the drive was filled up by the previous one. I will update again when I attempt this. As of right now I can FTP to those folders from a different computer.
-
@Wayne-Workman Here is a video of the ending of the capture process. As you can see, it finishes and starts right back up. We turned the computer off when it got to the calculating bitmap screen.
https://www.youtube.com/watch?v=cdoXnM_DHrw&feature=youtu.be&t=1m2s
I then made a folder in /images called 1282015 (the name of the image) and moved the files from /images/dev/fcaa14310b5b/ to /images/1282015. I am now multicasting this to another computer to see if it works.
However, that won’t fix the root problem as you said.
I did notice that the files created in /images/dev/fcaa14310b5b/ were created as user root with rw-r--r-- permissions. I was able to move the files from /images/dev/fcaa14310b5b/ to /images via FTP. I was unable to move the files from /images to /images/1282015 via FTP. I had to use mv to move them.
It may be because I made the folder 1282015 as root and it is owned by root:root.
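If root:root ownership is what's blocking the FTP moves, something like this could reset it. The user fog below is a placeholder; substitute whatever username is set under Storage Management -> [node name] -> User and Pass.

```shell
#!/bin/sh
# fix_owner DIR USER - hand DIR to the FTP user and keep it world-writable
fix_owner() {
    dir="$1"; user="$2"
    chmod -R 777 "$dir"        # matches the world-writable setup used earlier
    chown -R "$user":"$user" "$dir" 2>/dev/null \
        || echo "chown failed, rerun as root"
}

# fix_owner /images/1282015 fog
```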
-
It was short lived. After about 10 minutes this was displayed http://i.imgur.com/WnSPX10.jpg
-
@arainero I watched the video.
I don’t know what an “Earth Shaker” is; you can tell your co-worker.
Why are you using a RAW image type? That’s why your images are so big.
Please tell us more about the system you’re capturing from. Is it Windows? Is it Linux? What OS?
RAW image types should always be the VERY LAST choice!
-
@Wayne-Workman Earthshaker is from a video game lol.
Where do you set what the image type is? I don’t believe I have ever seen a RAW option.
I am capturing Windows 7 machines with two hard drives.
-
@arainero said:
https://www.youtube.com/watch?v=cdoXnM_DHrw&feature=youtu.be&t=1m2s
It’s right there in your video, it says “File system: RAW”
For an example of a non-RAW capture, see this: https://www.youtube.com/watch?v=VZXBCdULUbk
Of course, that is a troubleshooting video I uploaded for another issue, but it does show “File system: ntfs” as an example.
Can you give a screenshot of your image’s settings?
Here’s what I’d choose for a machine with multiple disks:
-
@Wayne-Workman I believe that youtube link is the one I supplied.
Here is a picture of the image settings screen. I used to use the default setting of 6 for compression, but now I have been experimenting when this problem started happening.
-
@arainero said:
@Wayne-Workman I believe that youtube link is the one I supplied.
fixed.
Is the 1TB drive encrypted at a hardware level? BitLocker?