Converting Images from a single SSD to multiple SSDs
-
Here’s a puzzle.
All the images I have are from servers with a single 480GB SSD. I need to improve their disk performance by moving to a stripe across 4x 120GB SSDs.
Is there a way of changing the images? They were captured as Windows 7, single disk, non-resizable.
The reason for this is that the existing Windows servers are imaged one box, one image, to the 480GB disk. Now I’m virtualising the physical box, so it’ll be 1 physical box, 2 virtual machines. Ideally I’d like to deploy the existing images back to the virtual machines. I suspect drivers etc. will stop this.
Does anyone have any suggestions?
-
You have an interesting puzzle here that may be difficult to pull off with or without FOG.
You have a couple of challenges, as I see it:
- Going from a single disk to multiple disks, I assume you will be using a RAID controller now? If so, your captured OS will need the drivers loaded before you capture the image, because at startup Windows needs access to the disk. If it doesn’t have the drivers, you are out of luck for booting.
- If you want to do the image cloning with FOG (Clonezilla, or Ghost for that matter), then since they run a flavour of Linux (or WinPE), you will need to ensure that boot kernel has the necessary drivers so your imaging solution can reach the striped array (see the quick check sketched below).
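Just as a rough illustration (not FOG-specific, and assuming you can drop to a shell inside the booted imaging client, e.g. a debug task), something like this lists the block devices the running Linux kernel can actually see. If the array or its member disks don’t show up here, the kernel is missing the driver:

```python
#!/usr/bin/env python3
# Rough sketch: list the block devices the running Linux kernel can see.
# Assumption: run from a shell inside the booted imaging environment.
import os

SYS_BLOCK = "/sys/block"

for name in sorted(os.listdir(SYS_BLOCK)):
    # Skip RAM disks and loop devices; we only care about real disks/arrays.
    if name.startswith(("ram", "loop")):
        continue
    try:
        with open(os.path.join(SYS_BLOCK, name, "size")) as f:
            sectors = int(f.read().strip())
    except (OSError, ValueError):
        continue
    # /sys/block/<dev>/size is reported in 512-byte sectors.
    print(f"{name}: ~{sectors * 512 // 10**9} GB")
```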
The last bit I’m not clear on. Are you converting physical to virtual? If that is the case what hypervisor will you be using? There may be tools already available for the p2v conversion.
-
Thanks George, the SSDs are not on a RAID controller. The environment is a training room, so I’ll just connect the SSDs to the motherboard, no RAID; the stripe will be done at OS level.
I could P2V the physical machines to virtual machines, that seems the easiest way.
Thanks -
@Julianh You mean striping like this? If yes, I’d say you are better off re-installing the whole system instead of trying to split and convert your current image. Not worth the hassle and it wouldn’t work anyway, I reckon. The disks need to get initialized (some kind of stripe meta information being written to them) and I don’t think there is a tool out there that will do this for you…
Having the striped disk array in your host OS with a VM on top is a different story. That should be fairly easy to achieve. Set up your VM host system on the striped disks, create a VM with a 480GB disk (it does not need to be fully pre-allocated, so this works on a smaller disk or array as well), and then do a P2V from your old system into that VM, or use FOG to image from the physical machine to the virtual machine.
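Just to illustrate the thin-provisioning bit: assuming a KVM/QEMU host (you haven’t said which hypervisor you’ll use, so treat the path and tooling below as examples only), something like this creates a 480GB virtual disk that only consumes space as data gets written:

```python
#!/usr/bin/env python3
# Rough sketch: create a thin-provisioned 480GB qcow2 disk for the new VM.
# Assumptions: a KVM/QEMU host with qemu-img installed; the path is hypothetical.
import subprocess

DISK_PATH = "/var/lib/libvirt/images/trainroom-vm1.qcow2"  # hypothetical path
VIRTUAL_SIZE = "480G"  # matches the original 480GB physical SSD

# qcow2 grows on demand, so the backing stripe can be smaller than 480GB
# as long as the guest never actually fills the disk.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", DISK_PATH, VIRTUAL_SIZE],
    check=True,
)

# Show allocated vs. virtual size to confirm thin provisioning.
subprocess.run(["qemu-img", "info", DISK_PATH], check=True)
```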
As George already mentioned, you need to have all the drivers ready on the physical machine before moving it to the VM.
-
If you want to increase disk performance then software RAID is not the way to go. Also, SATA 3 SSD drives are already bloody fast.
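Some back-of-the-envelope numbers (assumed typical figures, not measurements from your hardware), just to show how quickly a 4-disk stripe runs into other limits:

```python
# Back-of-the-envelope throughput estimate (assumed typical figures, not measurements).
sata3_usable_mb_s = 600     # 6 Gbit/s link minus 8b/10b encoding overhead
typical_ssd_mb_s = 550      # common sequential read for a decent SATA 3 SSD
stripe_width = 4

paper_stripe = typical_ssd_mb_s * stripe_width   # ~2200 MB/s on paper
chipset_uplink_mb_s = 2000  # e.g. DMI 2.0, shared by all onboard SATA ports

print(f"Theoretical 4-disk stripe: ~{paper_stripe} MB/s")
print(f"Practical ceiling of a shared chipset uplink: ~{chipset_uplink_mb_s} MB/s")
```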
If you’re spending the money on solid state drives why not go ahead and get a non-hybrid RAID controller too?
-
@Wayne-Workman said:
If you’re spending the money on solid state drives why not go ahead and get a non-hybrid RAID controller too?
Definitely good advice, but he’d still need to figure out the RAID driver question!
-
@Sebastian-Roth he would just need one that supports Linux plus whatever OS he’s imaging, I guess… I haven’t looked into options in a while.
-
Areca cards work well in both environments; they are all pretty universal. I’ve used quite a few different models in both OSs (Debian Linux and Windows). Debian has support built as a module (arcmsr) for the kernel out of the box, and that module is also in the initrd by default. I’m not sure what the FOG kernel/initrd are based on, but I would imagine it couldn’t be too hard to add if needed.
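If you want to check whether a given Debian-style kernel and initrd have the arcmsr driver available, here is a rough sketch (tool names and paths are assumptions about a stock Debian install, so adjust for whatever the FOG client environment actually ships):

```python
#!/usr/bin/env python3
# Rough sketch: check whether the arcmsr driver is available to a Debian-style
# kernel and whether it is packed into the initrd.
# Assumptions: modinfo and lsinitramfs (initramfs-tools) are installed; the
# initrd path follows the usual Debian naming.
import os
import subprocess

MODULE = "arcmsr"
INITRD = f"/boot/initrd.img-{os.uname().release}"  # typical Debian path

# Is the module known to the running kernel's module tree?
in_kernel = subprocess.run(
    ["modinfo", "-n", MODULE], capture_output=True
).returncode == 0

# Is it also inside the initrd, so the array is reachable at early boot?
listing = subprocess.run(
    ["lsinitramfs", INITRD], capture_output=True, text=True
)
in_initrd = listing.returncode == 0 and MODULE in listing.stdout

print(f"{MODULE} available to the kernel: {in_kernel}")
print(f"{MODULE} present in {INITRD}: {in_initrd}")
```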
I can attest they are fast cards, and fully support 4k sectors and 64-bit LBA, as well as some SSD-specific options.
One thing to be aware of: most OSs aren’t aware of TRIM support for RAID volumes. Some controllers expose emulated TRIM and convert it to drive-level TRIM, but that is rare. Consequently, SSDs will wear out rather quickly in a RAID array with a lot of writes.
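If you want to verify what Linux actually sees on a given box, a rough sketch (device names below are just examples, substitute the RAID volume or SSD you want to test):

```python
#!/usr/bin/env python3
# Rough sketch: check whether Linux exposes TRIM/discard support on a block device.
# Device names below are examples only.
from pathlib import Path

def supports_discard(device):
    # discard_max_bytes == 0 means the kernel will not issue TRIM to this device.
    attr = Path(f"/sys/block/{device}/queue/discard_max_bytes")
    try:
        return int(attr.read_text().strip()) > 0
    except (OSError, ValueError):
        return False

if __name__ == "__main__":
    for dev in ("sda", "sdb", "md0"):  # example device names
        state = "supported" if supports_discard(dev) else "not exposed"
        print(f"{dev}: TRIM/discard {state}")
```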