4k Advanced Format Drives
-
partcat: my steps apply only if you wish to use Single Disk Resizable; if you do not use a resizable option, you do not need to take those extra steps.
-
[code]root@MOB-LIN02:/test/fog_0.33b# svn info
Path: .
URL: https://svn.code.sf.net/p/freeghost/code/trunk
Repository Root: https://svn.code.sf.net/p/freeghost/code
Repository UUID: 71f96598-fa45-0410-b640-bcd6f8691b32
Revision: 1190
Node Kind: directory
Schedule: normal
Last Changed Author: masterzune
Last Changed Rev: 1190
Last Changed Date: 2014-02-04 10:08:42 -0600 (Tue, 04 Feb 2014)[/code]
-
I’m going to look into shipping this BCD inside the init.gz and injecting it into its proper location from the init.gz before the image upload even starts. This way, theoretically, we’ll have a working Single Disk Resizable even for non-sysprepped systems.
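For anyone who wants to experiment before that change lands, here is a minimal sketch of adding a file to the init image. It assumes init.gz is a gzip-compressed, loop-mountable filesystem image (as in FOG 0.32/0.33) and that the paths match a default install; the BCD source and destination paths are hypothetical:
[code]# Back up and unpack the init image (path assumes a default FOG install)
cp /tftpboot/fog/images/init.gz /tftpboot/fog/images/init.gz.bak
gunzip /tftpboot/fog/images/init.gz

# Loop-mount the filesystem image and drop the BCD into place
mkdir -p /mnt/fog-init
mount -o loop /tftpboot/fog/images/init /mnt/fog-init
cp /path/to/BCD /mnt/fog-init/usr/share/fog/BCD

# Repack
umount /mnt/fog-init
gzip /tftpboot/fog/images/init[/code]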
-
Thank you, reefcrazed. I thought this was fixed before that revision, but I could very well be wrong. Let me take a look; in the meantime, can you update to r1210 with:
[code]svn update
cd bin
./installfog.sh[/code]
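To confirm the update took, re-run the info command shown earlier; it should now report the new revision:
[code]svn info | grep Revision[/code]
-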
How can I tell “why” it is failing? The messages on the computers pop up so fast, and then they reboot. I have no way of knowing what just happened.
-
In your database, alter the field that deals with multicast in the taskTypes listing.
The code would be:
[code]UPDATE `fog`.`taskTypes`
SET `ttKernelArgs` = 'mc=yes type=down mode=debug'
WHERE `taskTypes`.`ttID` = 8;[/code]This will do multicast, but in debug mode. From there, when it exits out, you can look at the /var/log/partclone.log file to see what did or didn’t happen.
So my guess is that the multicast parts are what’s failing, or rather the means of getting the image.
I’m going to go out on a limb and guess that the image filenames aren’t being found for some reason.
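Once you have the log, you will probably want to switch debug mode back off. Assuming the row’s original kernel arguments were just the multicast flags (an assumption; check the current value of that row first), the revert would be:
[code]-- Restore ttKernelArgs without mode=debug (original value is an assumption)
UPDATE `fog`.`taskTypes`
SET `ttKernelArgs` = 'mc=yes type=down'
WHERE `taskTypes`.`ttID` = 8;[/code]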
-
You know what I bet it is? I am doing this in a test environment to see how multicast works, and this current test is in VMs. Since the hard drive sizes are smaller, that would do it, right?
-
That very well could be.
-
“Can’t have partition outside of disk”
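That error generally means the partition table being restored describes a partition ending past the last sector of the target disk, which fits the smaller-VM-disk theory: if the VM’s virtual disk is smaller than the disk the image was taken from, the layout cannot fit. In a debug-mode shell you can check the target disk’s size directly; a minimal sketch (the device name is an assumption):
[code]# Target disk size in 512-byte sectors and in bytes
blockdev --getsz /dev/sda
blockdev --getsize64 /dev/sda[/code]Compare the end sector of the last partition in the image’s saved layout against the sector count reported here.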
-
I am derailing this thread, I will start another.