Bad Sectors and Failed Image Upload
-
Hello,
I’m not sure if this should be here or in the Fog related forum.
Fog: 1.2.0
Ubuntu: 10.4.4
I have been searching for a solution but with no results.
I have a Fog Server setup and have uploaded my first Windows 7 image successfully. On my second attempt with another PC I ran into a problem. At the reboot where I expect the image to be uploaded I get a notification that the hard disk has 9 bad sectors and the upload fails. I have run chkdsk and there were no bad sectors found. The message suggested the --bad-sectors option of ntfsresize. I do not know where I would set this option.
Can anyone offer assistance with getting this to work?
-
I guess I don’t fully understand the issue.
One image worked fine, the next one is telling you there’s bad sectors on the disk and won’t upload.
You tried chkdsk, but it didn’t find anything. This is completely possible. Did you run chkdsk with the 5-phase or 3-phase scan? I believe the 5-phase scan actually checks the sectors and disk surface, whereas the 3-phase scan (normally the default) just checks the data and where the data is located.
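For reference, the two would look something like this from an elevated command prompt (assuming C: is the Windows drive):
[CODE]
REM default 3-stage check: file system structures only, no surface scan
chkdsk C:
REM /r runs all 5 stages and scans the surface for bad sectors (implies /f)
chkdsk C: /r
[/CODE]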
-
Hi Tom,
Yes, the first image was from PC#1; it uploaded fine. The second was from a different box, PC#2.
I ran chkdsk /f /r, all 5 phases. It took a very long time as the HD is 1TB. There is only 40-50GB of data on it.
-
And after running chkdsk are you still seeing the same error?
-
Yes, I have run chkdsk twice with the same result.
-
Then there is likely something actually wrong with the drive that chkdsk is missing.
Could you try a different system of the same model? Do you run into the same issues?
-
I don’t have an identical system.
I have a number that are similar, same MB and CPU, but with different amounts of RAM and different HD sizes.
I will try one and see what happens.
-
Could you try taking a non-resizing image of the disk? i.e., create a multiple-partition single-disk image and try uploading to that. This is probably not what you want to do long term, but it might let you pull the image off of that machine before the disk finally dies.
-
I don’t have enough space on the fog server to do that.
I have no reason to believe there is actually anything wrong with the HD on the problem machine. Is it possible the report is incorrect? Is there another way to test it other than Windows chkdsk to verify that there is an issue? It’s a WD 1TB enterprise drive only a few months old.
The image of another machine just completed successfully.
-
Try the Western Digital drive diagnostics available from their website.
-
“The image of another machine just completed successfully.”
Does this not tell you anything?
Why is it so hard to believe that there could potentially be a “REAL” problem with a manufactured drive? These kinds of problems happen all the time.
We know it’s not fog just toying with your emotions here. It’s telling you what the problem is and why it’s failing. I would see if WD can test it.
You could try a ddrescue on the drive and see if anything shows up, or better yet perform a surface scan on the disk using FOG’s vast set of tools.
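If you want to try the ddrescue route, a minimal sketch (assuming /dev/sda is the suspect drive and you are in a debug session) would be a read-only pass that logs unreadable areas to a map file:
[CODE]
# reads the whole disk and discards the data; bad areas are recorded in sda.map
# -f allows writing to the /dev/null "device", -n skips the slow scraping pass
$ ddrescue -f -n /dev/sda /dev/null sda.map
[/CODE]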
-
Running WD diagnostics now. Will follow up with surface scan.
-
[quote=“geoffpeters, post: 34452, member: 25329”]I don’t have enough space on the fog server to do that.[/quote]
The image process still uses partclone.ntfs and not dd, so you are still only storing the “used space” of the disk image. smartctl can also pop out information about the failure status of a disk
[CODE]
$ smartctl -a /dev/sda
[/CODE]
On the upside, if your drive is starting to fail, you caught it early and you are still under warranty.
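If the full smartctl output is a lot to digest, the reallocated/pending sector attributes are the ones that matter for bad sectors; for example (again assuming the drive is /dev/sda):
[CODE]
# non-zero raw values here usually mean the drive really is developing bad sectors
$ smartctl -a /dev/sda | grep -E 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'
[/CODE]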
-
OK. The WD Data Lifeguard extended test did not find any problems.
When I run the surface scan from Fog, can I see the results later? It will take a long time to run and I want to start it when I leave for the day.
I tried running smartctl using the debug option but I get a message “command not found”
Thanks
-
I don’t know that smartctl is installed. You stated the bad-sectors option, but have you defragged the system and then tried chkdsk?
The command to “repair” bad-sectors is already embedded into FOG, but if you go through debug it would be something like:
ntfsresize -b [other args as needed] <partition to clean>
-
I’m a bit confused about what you are recommending.
Should I run the surface scan? If so, is there a way to view the results at a later time? I ran it on a smaller disk and the scan results were only visible for a few seconds. I will not necessarily be viewing the monitor when the scan completes.
Can you clarify what you mean by “The command to ‘repair’ bad-sectors is already embedded into FOG”?
Thanks
-
I would certainly try the surface scan. If you are having problems getting the results out of fog, you could run the test manually. I think it uses badblocks in the background, so booting into a debug session and then running
[CODE]
$ badblocks -sv /dev/sda
[/CODE]
should give you the output you are looking for. As I mentioned above though, I would try one of the non-resizing upload tasks (multi-partition image); you don’t need 1TB to do it and it will give you an image to recover from if you have a problem with the disk.
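On viewing the results later: badblocks can also write its findings to a file instead of only the screen, so a sketch would be something like the following (the output path is just a placeholder; pick somewhere you can read back after the scan):
[CODE]
# -o writes the list of bad blocks to a file you can check once the scan finishes
$ badblocks -sv -o /tmp/badblocks-sda.txt /dev/sda
[/CODE]
-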
Thanks.
I decided to forgo the surface scan for now.
The multi-partition image is currently uploading.
I appreciate the help and thank you both for your time.