Storms, corrupted MBR, save the images?
-
@Mastriani Assuming you can access the data (and the data isn’t damaged/corrupted), it is theoretically possible to get the entire database and the images and put them on a different disk.
-
If the MBRs are “broken” in such a manner, chances are there are damaged sectors as well.
You can try to dd-recover the drives to see if you can at least get the data.
Theoretically it’s possible, but if so many systems lost their drives, chances are the damage is far more wide-reaching than recovery is going to be able to help with.
What concerns me, however, is how so many machines ended up with the same problem. The client machines I can understand. The “servers” worry me. Infrastructure should always be separated, grounded, and have DR available as required. Of course storms aren’t something one can predict, but I’ve not heard of many servers being lost during storms. Power outages are understandable, but a direct hit from lightning… with no grounding… no surge protection… (scary).
-
Just be aware that the computer you use to retrieve the files must be a Linux computer. Windows doesn’t understand the ext4 disk format.
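A rough sketch of what that looks like once the drive is attached to a Linux box (device names like /dev/sdb1 and the mount point are examples only; the /images path is the FOG default):

    # identify the ext4 partition on the attached drive
    sudo fdisk -l
    sudo blkid

    # mount it read-only so nothing on the damaged drive gets written
    sudo mkdir -p /mnt/recovery
    sudo mount -o ro /dev/sdb1 /mnt/recovery

    # copy the FOG image store off to the new disk
    sudo cp -a /mnt/recovery/images /path/to/new/disk/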
-
@Tom-Elliott Thank you Mr. Elliott, and a few things to understand about our situation.
We are a school district in a rural, severely economically depressed county.
The local electric cooperative is “good ole boy” style, and after having some investigation done, our general electrical delivery is off standard by 5 to 12% as a rule of operation (i.e., on a 110 V circuit, the actual voltage can range from 89 to 105 V at any time tested). I definitely agree this is a “shouldn’t ever happen” scenario, but dirty electrical delivery shouldn’t ever happen either.
I have done what I can with the available UPS units and as much separation as possible between mission-critical devices and standard use, but there are quite simply too many things outside my domain and “above my pay grade”.
If I can recover the data, is there a best practices methodology you suggest?
-
@george1421 Thank you sir, yes I understand. It would be from a new drive with a new install of Ubuntu server.
I am looking for a best-practices methodology to ensure my activities do not add to the possibility of further corruption/damage.
-
@Mastriani Well, I totally understand the limitations of economic status. I’m just saying it’s definitely scary (for reasons you are all too easily aware of now). As far as recovering, you can certainly try getting the data, though your best bet would be to use dd-rescue, I think, as it isn’t reliant on knowing the partition layout. Plain dd recovery would also work, but you have to explicitly tell it to keep going past read errors, where dd-rescue handles this relatively seamlessly.
Understand that getting your backup will take a long time.
I’ve had to do dd recovery on a 500 GB drive and it was about 12 hours of waiting for the information to finish copying. After that, I was able to put it on another drive and all seemed to be well.
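For reference, a minimal sketch of that kind of dd-rescue run (GNU ddrescue; the package name, device, and output paths here are assumptions, so adjust to your setup):

    # install GNU ddrescue on the rescue system (package is gddrescue on Ubuntu/Debian)
    sudo apt-get install gddrescue

    # first pass: grab everything readable into an image file, with a map file
    # so the run can be resumed later
    sudo ddrescue -n /dev/sda /mnt/bigdisk/sda.img /mnt/bigdisk/sda.map

    # second pass: retry the bad areas a few more times
    sudo ddrescue -r3 /dev/sda /mnt/bigdisk/sda.img /mnt/bigdisk/sda.map

    # plain dd alternative: keep going past read errors and pad bad blocks with zeros
    sudo dd if=/dev/sda of=/mnt/bigdisk/sda.img conv=noerror,sync bs=64K

Either way the destination needs to be at least as large as the source drive, and you work from the image afterwards rather than from the failing disk.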
-
@Quazz Thank you, currently running diagnostics for RAM/RAID consistency, so I am thinking ahead to the next step.
Obviously, if the data isn’t retrievable, it’s an all-new install of everything and a level-0 start-over. I hope that isn’t the outcome, so I’m trying to plan for a better scenario.
-
@Tom-Elliott Thank you Mr. Elliott, I am not familiar with the operation of dd-rescue. Is there something you can link me to, perhaps?
-
I think we need clear information on what you’ve tried, how you determined what’s happening, and a general overview of how things were set up, though.
Running dd on individual disks that were in a RAID won’t be very useful.
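If those disks were Linux software RAID, a quick read-only check of what you’re dealing with before imaging anything might look like this (a sketch; assumes mdadm and example device names):

    # show any md arrays the kernel currently sees
    cat /proc/mdstat

    # examine the RAID superblock on each member disk
    sudo mdadm --examine /dev/sda /dev/sdb

    # try to assemble the array read-only from whatever members survive
    sudo mdadm --assemble --scan --readonly

Hardware RAID is a different story, since the controller hides the individual members from the OS.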
-
I try to keep a USB-drive backup of the images offline, as well as the configuration save, to permit an easy rebuild from scratch if necessary. Most users update images rarely enough that this is not a huge burden.
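For what it’s worth, that kind of offline copy can be as simple as the following (a sketch; it assumes the stock FOG /images path, a USB drive mounted at /mnt/usb, and the default “fog” database name):

    # copy the image store to the USB drive, preserving permissions and timestamps
    sudo rsync -a /images/ /mnt/usb/fog-images/

    # dump the FOG database as well so the image definitions can be restored
    mysqldump -u root -p fog > /mnt/usb/fog-database.sql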
-
Came in, restarted the UPS, checked status, and gave it 2 hours to restore charge levels in case of further outage. We still have crews working in this county and 4 surrounding ones, so we will likely have more outages.
-
Started the Win DC; it is locked in a boot-options reboot cycle. Started hardware diagnostics and am letting them run.
-
Started the Ubuntu/FOG server; it is locked at a lost/corrupted MBR message. Attempted one restart in case of anomalous behavior. Same outcome.
-
Ran the Ubuntu live CD and could not even mount sda1 (sudo mount /dev/sda1 /mnt). Tried Boot-Repair, same outcome. Assumed it was pointless to go for more utilities/attempts … ?
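For reference, these are the kinds of read-only checks that can confirm from the live CD whether anything of the partition table survives (the device name assumes the disk is /dev/sda as above):

    # list whatever partition table the kernel can read
    sudo fdisk -l /dev/sda

    # show any filesystem signatures still present
    sudo blkid

    # dump the first sector, where the MBR and partition table live, without writing
    sudo dd if=/dev/sda bs=512 count=1 | hexdump -C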
-
@Mastriani Once you get your servers back online, look at Veeam Endpoint Backup (free) to make a DR image of your physical servers, both Windows and Linux. If you have it installed and you have your DR backup, you can bring back the server by booting off the Veeam DR disk and then connecting to your backup repository (files).
ref: https://www.veeam.com/windows-endpoint-server-backup-free.html
-
Well, the “MBR” data includes the partition layout, so if it is lost or corrupt, Linux has no idea where /dev/sda1 is.
You could try a utility such as testdisk, which should potentially be able to find and recover the MBR/partition information.
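A rough outline of how that usually goes (testdisk is interactive, so these are the menu steps rather than a script; run it against the whole disk, and ideally against a dd-rescue image rather than the original):

    sudo apt-get install testdisk
    sudo testdisk /dev/sda

From there it walks you through selecting the disk, picking the partition table type (usually Intel/MBR), running Analyse, then Quick Search / Deeper Search, and finally writing the recovered partition table back if it finds the old layout.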
-
@george1421 That is massively appreciated, especially considering my lack of authority to control the environments for my equipment. Thank you very much sir, that is very much needed.
-
@Tom-Elliott Yes sir, I was just looking at that; I’m having trouble understanding how it is utilized, unless it is a boot CD/DVD?
-
Maybe here:
http://www.cgsecurity.org/wiki/TestDisk_Livecd
-
@Joseph-Hales Thank you sir, yes, it would appear that may be a best practice that has to be instituted. Even though I regularly update images, on no more than a 2-month cycle, it shouldn’t add much extra work to the process overall.