Seems like you are trying to restore to an empty disk. Be aware this will most probably cause trouble.
-
Can you take a picture of the error? A picture of someone else’s environment doesn’t really help us since their problem seemed to be related to a specific system.
Double check the Primary Disk field in the host settings. Possibly the primary disk identifier for the SSD is not the same, and thus the disk can't be found on deployment.
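If it helps, you can confirm what the SSD enumerates as from a debug task's command line (the columns below are standard lsblk options):
lsblk -o NAME,SIZE,TYPE,MODEL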
-
In addition to what Quazz has suggested, you could also take a snapshot of your VM (as a backup), and then upgrade to the working branch. There are a lot of fixes in there. Here are instructions: https://wiki.fogproject.org/wiki/index.php?title=Getting_FOG If this doesn't work out for you, you can always revert to the snapshot and it'll be like it never happened.
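Roughly, the steps from that page boil down to something like this (a sketch; follow the wiki for the current details):
git clone https://github.com/FOGProject/fogproject.git
cd fogproject
git checkout working
cd bin
sudo ./installfog.sh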
-
@Quazz Here is a picture of the error.
@Wayne-Workman I will snapshot the VM now and get FOG upgraded to the working branch. I've never done that, but I remember reading about it; shouldn't be too hard to figure out. Would you suggest doing anything to Ubuntu as far as upgrading or updating? It's been a few months since any OS updates have been applied.
-
@benc said in Seems like you are trying to restore to an empty disk. Be aware this will most probably cause trouble.:
Would you suggest doing anything to Ubuntu as far as upgrading or updating?
The FOG installer ensures the packages it uses get updated, so you don't need to worry about updates. But if you want an updated OS, you can absolutely update the system (after you snapshot). The one-liner I use for updating Ubuntu and Debian is:
apt-get update; apt-get -y dist-upgrade; apt-get -y autoclean; apt-get -y autoremove; reboot
Updating doesn't hurt anything, and it's good practice, as long as you have a healthy snapshot to go back to should something go wrong.
-
@benc I'd suggest looking into a debug deploy task (tick the debug checkbox just before you click the button to create the deploy task in the web UI). Step through the task until you hit the error, and when you get back to the command line, run
gdisk -l /dev/sda
One thing that could cause this is trying to deploy an image that came from a source disk larger than the destination SSD. Even if the image is set to resizable, there are situations where the partition layout does not allow resizing to fit onto the smaller disk.
Please post the contents of d1.partitions and, if available, d1.original.partitions and d1.fixed_size_partitions (from /images/PiearcyLaptopNoResize).
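For example, from the FOG server you can dump those files like this (path as shown in your image definition):
cat /images/PiearcyLaptopNoResize/d1.partitions
cat /images/PiearcyLaptopNoResize/d1.original.partitions
cat /images/PiearcyLaptopNoResize/d1.fixed_size_partitions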
-
@Sebastian-Roth I'm building a server that has to be ready first thing tomorrow morning, so I haven't been able to put as much time into this as I would like. Based on your suggestion, I tried using a hard drive that was at least as large as the original source drive. This worked perfectly, just like pretty much every time I've tried to do this type of thing before. Looks like something to do with where the partitions were on the original drive not being properly mapped to the new SSD…? I'm going to take a closer look at all the partitions and where they are located on the drive. It might be as simple as getting rid of a little recovery partition at the end of the drive or something like that. I've never had to use a drive as large as the original before: it has always captured the images, and as long as the total contents of each partition are less than the capacity of the destination drive, it deploys fine.
-
Success! The issue all along appears to have been that the original 1TB hard drive had two small partitions at the end of the drive. I know one was a 20GB recovery partition; the other was 2GB and I don't know what it was for. The last things I did were delete those partitions using GParted, make sure the drive still booted Windows, then manually resize the main partition and start a capture. I believe it will work just fine the way I'm used to it working now that the two partitions at the end are gone. I normally look at the size of the data on the drive and choose an SSD based on that, even if the original hard drive was 1TB, 2TB, or more.
In short, if you run into this issue, make sure there are no partitions hanging around at the end of the drive after the main partition. It would be helpful if FOG/Partclone could detect that and give a slightly more detailed error. I appreciate everyone's help on this.
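For anyone who hits this later, here's a rough sketch of spotting and removing trailing partitions from a live environment (the partition numbers 5 and 6 below are hypothetical; verify your own layout before deleting anything, and confirm the drive still boots afterwards):
sgdisk -p /dev/sda
sgdisk -d 5 -d 6 /dev/sda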
-
Very interesting, but I would like to get more info about the issue!
-
@Padcotton FOG saves the starting positions of partitions to a file, which it then uses during deployment.
FOG does not change the order of partitions, which means that ideally you want all of the partitions that won't be resized (such as OEM and recovery) to come before the partitions that will get resized (since then all the start markers are early on the disk).
This should produce far better results overall.
Not sure why you got the empty disk error in this context; I'd sooner expect a bootloader error or something similar…
But I'm assuming your target disk is smaller than the origin disk, thus causing the issue (e.g. a 1TB HDD going to a 960GB "1TB" SSD).
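To make that concrete, here is a made-up sfdisk-style layout (the dump format FOG stores in d1.partitions, as far as I know) for a 1TB source disk with a fixed recovery partition at the end; all numbers are invented for illustration and the type column is trimmed:
/dev/sda1 : start=       2048, size=     204800, type=...   (EFI, fixed)
/dev/sda2 : start=     206848, size= 1907019776, type=...   (Windows, resizable)
/dev/sda3 : start= 1907226624, size=   41943040, type=...   (20GB recovery, fixed)
A 240GB SSD only has roughly 468 million sectors, so sda3's fixed start marker at sector ~1.9 billion simply can't exist on it, no matter how little data the image holds.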
-
@Quazz Nice explanation of the issues regarding the order of the partitions and the starting positions. I think that's exactly what was going on. Since the actual usage of the original 1TB drive was only ~150GB, I went with a 240GB SSD, which has always worked for me before. It gave me that "Seems like you are trying to restore to an empty disk…" error every time I tried to deploy that image to anything smaller than 2TB. Now that the two little partitions at the end of the original drive are gone, I recaptured the image and it deployed just fine to a 240GB SSD.