Upload task from a failing disk
-
Hi all,
I’m using FOG 1.2.0 on CentOS 6 as advocated but, in a secondary role, I use it as a disaster recovery system.
This is done by scheduling upload jobs on a bunch of PCs, so that an image can be downloaded back in minutes whenever a disk crashes.
Unfortunately one of these PCs now has a seriously failing disk (bad blocks). It would be the perfect case for a disaster recovery tool, but I discovered that the last uploaded image was corrupted: the image file of the relevant partition was simply cut short, because partclone exited when it hit the bad blocks.
Since I had a valid image from the previous week to deploy, I got the system back up & running in minutes through a FOG download task, but this issue forced me to think things over.
Thus, first I made a script to batch-check ALL .img files with partclone.chkimg, to be sure I have at least one usable disk image. Next I noticed that, following the partclone project’s suggestions, I was able to manually get a working image of the failing disk’s partitions by using partclone’s -R (--rescue) option.
Better safe than sorry: so I added this option to all partclone.<fstype> invocations in the fog.upload script, reasoning that imaging a healthy disk wouldn’t be affected by -R, while on a dying one it would make all the difference.
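For what it’s worth, a minimal sketch of such a batch check (not my actual script; the /images root and the *.img naming are the FOG defaults, and it assumes chkimg returns non-zero on a corrupted image):

```shell
#!/bin/bash
# Batch-verify every partclone image under a store directory.
check_images() {
    local root=$1 bad=0 img
    while IFS= read -r img; do
        # chkimg re-reads the image and validates its embedded checksums;
        # a truncated upload (like one cut short by bad blocks) fails here
        if partclone.chkimg -s "$img" >/dev/null 2>&1; then
            echo "OK:      $img"
        else
            echo "CORRUPT: $img"
            bad=$((bad + 1))
        fi
    done < <(find "$root" -type f -name '*.img' 2>/dev/null)
    echo "$bad corrupt image(s) under $root"
}

check_images "${1:-/images}"
```

Run from cron so a corrupted upload is noticed before the next disk dies.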
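The edit itself can be scripted; a sketch (the sed pattern is a guess at the shape of the partclone calls in fog.upload — inspect your copy before running anything like this):

```shell
# Prepend -R to every "partclone.<fstype> -c ..." clone call, keeping a
# .bak copy of the original script. Run next to an extracted fog.upload.
if [ -f fog.upload ]; then
    sed -i.bak 's/partclone\.\([a-z0-9]*\) -c /partclone.\1 -R -c /g' fog.upload
else
    echo "run this next to an extracted fog.upload"
fi
```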
I tried to upload the damaged disk again (I hope it won’t finally die before my tests are over!) but to no avail, since the -R option seems to have no effect in fog.upload.
Where am I going wrong? Are there other places where partclone options are fed to the scripts? How else can I force partclone to use the -R option?
Thanks for your help
lsalv -
Please let us know the OSID and image type you use and we should be able to point you in the right direction within fog.upload. As far as I know there aren’t any other places to set partclone options. Well, maybe in ‘/usr/share/fog/lib/funcs.sh’.
By the way: did you extract & loop-mount the init before editing the file?? Just asking…
-
i’m sure you already know this, but fog isn’t intended to be a disaster recovery tool or backup system. it’s intended to deploy known working images to good hardware. if you intend to use fog for recovery, it would be wise to set up a rotation schedule for image files.
-
[quote=“Uncle Frank, post: 42332, member: 28116”]Please let us know about the OSID and image type you use and we should be able to point you the right way within fog.upload. As far as I know there aren’t any other places to set partclone options. Well, maybe in ‘usr/share/fog/lib/funcs.sh’.
By the way. Did you extract & loopmount before editing the file?? Just asking… :)[/quote]
Hi Uncle Frank,
Thanks for your help.
OS is WXP and it’s a multiple partition, single disk, non-resizable image type. It is cloned by partclone.ntfs, whose only invocations are in fog.upload, where I added the -R option before all the others.
I’m puzzled because adding this option seems to have no effect whatsoever, while a manual partclone.ntfs invocation with the same parameters as in fog.upload does the job.
Not clear to me: which file to edit, and what to extract and loop-mount? I assume fog.upload is the file to edit (I did edit it). The .img file is clearly wrong since:
a) it is 1.8 GB while the partition is 230 GB, of which some 80 GB are used
b) the last valid .img was 23 GB
c) partclone.chkimg fails
Still investigating
Bye
lsalv -
[quote=“Junkhacker, post: 42345, member: 21583”]i’m sure you already know this, but fog isn’t intended to be a disaster recovery tool or backup system. it’s intended to deploy known working images to good hardware. if you intend to use fog for recovery, it would be wise to set up a rotation schedule for image files.[/quote]
Hi Junkhacker,
Yes, I’m fully aware that I’m outside the intended scope (as stated at the very beginning of my first post), but FOG is such an elegant solution that I was fascinated to find a new use for it, and so far I’ve succeeded. I had also already realized a safety policy for image files was needed, and I did the job with a small series of shell scripts and crontab.
Thanks for the help
lsalv
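In case it helps anyone else, a minimal sketch of that kind of rotation (not my actual scripts; paths and retention count are made up for illustration):

```shell
# Keep dated snapshots of an image directory, pruning the oldest ones.
rotate_image() {  # usage: rotate_image <image-dir> <archive-dir> <copies-to-keep>
    local src=$1 dst=$2 keep=$3 name
    name=$(basename "$src")
    mkdir -p "$dst"
    cp -a "$src" "$dst/$name-$(date +%Y%m%d%H%M%S)"   # dated snapshot
    # delete the oldest snapshots beyond the retention count
    ls -1d "$dst/$name"-* 2>/dev/null | head -n -"$keep" | xargs -r rm -rf
}
```

Scheduled from crontab, e.g. weekly right after the upload job finishes.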
-
[quote=“lsalv, post: 42408, member: 27895”]Not clear for me: what file to edit, what to extract and loopmount? I suppose to fog.upload to be edited (I did)[/quote]
Where and how did you edit fog.upload??? Are you aware of this: [url]http://www.fogproject.org/wiki/index.php/Modifying_the_Init_Image[/url]
When you edit fog.upload, search for the string ‘mps’ and you should find this part of the script:
[CODE]...
elif [ "$imgType" == "mps" ]; then
    gptormbr=`gdisk -l $hd | grep 'GPT:' | awk '{$1=""; print $0}' | sed 's/^ //'`
...[/CODE]
First the partition tables and MBR are saved, and then about 40 lines further on from this point you can see partclone being called… -
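The key point of the wiki page is that the edit only takes effect after repacking the init image the clients boot. A dry-run sketch of the cycle (the init path is the usual 1.2.0 location but may differ on your server, and the fog.upload location inside the image is an assumption — set DRY_RUN=0 as root to actually execute):

```shell
INIT=/tftpboot/fog/images/init.gz
MNT=/mnt/fog-init

# print commands by default; run them for real only with DRY_RUN=0
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run cp "$INIT" "$INIT.bak"              # keep a fallback copy
run gunzip "$INIT"                      # leaves the raw filesystem image "init"
run mkdir -p "$MNT"
run mount -o loop "${INIT%.gz}" "$MNT"  # loop-mount to edit the scripts inside
echo "now edit $MNT/bin/fog.upload, then:"
run umount "$MNT"
run gzip "${INIT%.gz}"                  # clients boot the modified init from here on
```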
Hi Uncle Frank,
Gosh, now I realize what you meant. Stupid of me not to get it immediately, but working in the middle of the night causes those headaches o_O . Now I’m more aware of the real FOG internals.
I did change the params in fog.upload as you suggested, and it didn’t work because the change was not incorporated into the init image the kernel boots.
I’ve just rebuilt it and launched a job to check. I’ll keep you updated.
Thanks a lot in the meantime
lsalv -
Hi all,
I made the changes and all works well.
Now all images are made regardless of disk status, so I’m able to restore in any case, as I actually did on the failing drive.
The next step would be to have some more logging of the partclone process, just to be sure. I’ll look around for hints.
Thanks for your help!!
lsalv
-
Partclone has a logging option (-L / --logfile) and you could write that log to the same place as your images (the NFS share mounted at /images on the client)… Just an idea.
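Something like this, sketched (variable names mimic fog.upload’s style but are illustrative, and the log path next to the image is just a suggestion — check the flags against `partclone.ntfs --help` on your init):

```shell
part=/dev/sda1
imgfile=/images/pc01/d1p1.img
# write the partclone log next to the image on the NFS share
logfile="${imgfile%.img}.partclone.log"
echo partclone.ntfs -R -c -s "$part" -o "$imgfile" -N -f 1 -L "$logfile"
```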