Latest FOG 0.33b
-
Tested Scheduled Snapin tasks with both cron-style and delayed scheduling. All seems to be working properly. Hopefully the same holds for WOL, though my guess is I'll have to do some of the same work for WOL tasks; then again, if you're using WOL, you're probably imaging as well.
-
All,
I'm moving my compression format to xz rather than lzma. Though lzma works rather well, xz compresses much better. The other part of my reasoning is that lzma doesn't seem to behave consistently across different versions of itself. If I compress the init.gz using lzma version 4.2 on one system and decompress it with lzma 4.9, it decompresses fine, but recompressing it then causes issues for some reason unknown to me. As far as I can tell, xz has kept its compression methods consistent and does a very good job (better than lzma). I've built xz decompression support into my kernel, which will be a must.
Please test and see how you like it. As always, gzip compression is still available, and I'm keeping lzma support in the kernel as well, but I've also added xz support to the kernel configuration.
Please test the kernel, and for those of you on 0.33b, feel free to play with the init.gz:
[url]https://mastacontrola.com/fogboot/images/init.gz[/url] <-- goes in default location of /tftpboot/fog/images/init.gz
[url]https://mastacontrola.com/fogboot/kernel/bzImage[/url] <-- goes in default location of /tftpboot/fog/kernel/bzImage
I’m not adding these things into a new revision yet, as I’ve yet to hear back whether the kernel works on the Lenovo T440(s/p/etc…).
I've minimized my kernel, and the configuration now looks much closer to the kitchen-sink kernel configuration (albeit, seemingly, with more options, plus the newer kernel features added since 2010). However, this kernel (I've tested it) natively recognizes VMware SCSI drives (Win 7) and hopefully recognizes all the different motherboard chipsets. Compressed with gzip it comes to about 13 MB, with LZMA it is just shy of 6 MB, and with XZ it is a little more than 5.5 MB.
With this latest kernel configuration, I've also added RAID utilities. It should recognize hardware RAID controllers, for those of you trying to image servers or something of that sort, though I can't verify that for sure as I don't have a system with a hardware RAID setup.
Ultimately, I'm trying to help build a kernel that works across the board. I haven't quite figured out a method of imaging LVM systems yet (the default layout for CentOS, and I'm sure many others), though my default Debian install seems to image just fine (I think that has an LVM-based filesystem as well).
I hope you all are enjoying the latest and greatest I’m attempting to put out there.
-
r1049 re-added the fog.auto.del script into the init.gz file. There was a typo though, so this has been fixed in:
r1050 fixes the typo. Many of the service files have been updated to use the class-based code. I've tested a few elements, but can't ensure all are working as expected. However, the amount of code is much reduced. That doesn't mean much now, but it means easier maintenance later on down the road. The tarball has been updated to the latest revision as well.
Hopefully you guys like and enjoy it. While re-adding fog.auto.del, I've also taken the dive and updated the init.gz to the xz compression method described above. The kernel has also been included to ensure everything works properly out of the box.
Thank you all,
-
I've been thinking about resizable images, since I've been having trouble getting them to work. (I have not tried the latest 0.33b yet, but I'm aiming to do so very soon.)
If Partclone could deploy a partition that is relatively small (only just bigger than the OS and installed apps), you could then use a snapin to extend the drive via Diskpart within Windows.
The only issue I've had so far is that if you start with a certain size HDD and try to deploy to a smaller HDD, it will fail, even with a partition that is smaller than the new HDD. (Doing more tests at the moment, but disk I/O is limited on my test suite.)
-
Vincent,
The issue, as I understand it, is that you're having problems deploying an image created as Single Disk - Resizable or Multi-Partition - Non-Resizable, where the master machine has a larger hard drive than the systems you're trying to image? What is the OS you're referring to? I'm guessing Windows 7.
This isn't, per se, a Partclone/Partimage issue, but rather a consequence of how the images are created.
In Multi-Partition images, there are (typically) three files created during the imaging process: d1.mbr (Master Boot Record), d1p1.img (partition 1), and d1p2.img (partition 2). With this mode of imaging, you CANNOT image a system with a smaller disk than the "master" system, no matter how hard you try. This is, in part, due to how the imaging process creates the images. Generally speaking, images of this type don't require you to sysprep, but even if you did, it wouldn't matter because of the mbr file. The partition table is stored within the MBR of the drive you're working with. This is why, if you deploy to a drive larger than the drive you originally imaged from, the system initially only recognizes the original drive size.
What this means is, if you upload an image from a 160GB drive and deploy it to a 320GB drive, the 320GB system will initially look like it only has 160GB available (until you open Disk Management and extend the partition).
Conversely, the partition table won't work on drives that are smaller. If you take the same image from the scenario above and try to deploy it to a system with only an 80GB drive, the 160GB partition table can't be written to it.
Resizable images, on the other hand, don't copy the master system's MBR; instead they use an MBR file located within the init.gz that has the drive-specific information removed, and can therefore recognize the drive independently.
With Windows XP images, resizable images worked perfectly even without sysprepping, because all that was copied was the boot information, not the partitioning schema. In Windows 7 (and I imagine later), generating a non-hard-drive-specific MBR actually requires a sysprep of the system. You can take the time to work around this issue by editing:
HKLM\System\MountedDevices\ and removing all entries except "Default" before uploading the image. Once that's done, set up your image upload task, then reboot the system and let the upload finish. Once complete, try deploying the image to another machine. Theoretically, everything should work, and it shouldn't matter whether the disk is larger or smaller than the original, so long as the drive is larger than the image's uncompressed size. Seeing as a simple base Windows 7 64-bit image is just over 10GB, you should be fine, as most hard drives are many times larger than that. This should keep out the evil Winload boot error, though I can't guarantee it will fix the boot issue itself.
However, I'd say it's much easier, in the long run, just to sysprep/generalize the master system before upload. It might take a few extra minutes, but it will work much better, based on the tests I've performed.
-
I sysprep at the school where I work, but not usually on my test systems.
Just setting up a new set of tests at the moment.
Will try one set without sysprep and one set with.
Might even create a few more VMs to test your new improvements with the SCSI drives in VMware
Currently the system I run at my school involves a Multi Partition image, created with the smallest drive we use. The only disadvantage is the larger systems don’t use all their HDD space. Easily solvable with Diskpart…
I’ve tried Resizable images once or twice on tests at work and it always seems to end up destroying the operating system and any system deployed from the image doesn’t boot.
Luckily we don't have anywhere near 80GB of applications, so we're fine with that size image. (Although it was annoying when a slightly different 80GB drive in one of our machines didn't work with our 80GB image…)
-
Vincent,
The only problem with sysprep, unless rearmed, is that you can only perform it up to 3 times. As you say you work at a school, I assume (please correct me if I'm wrong) you work similarly to the way my workplace does. We create a base image that has all the software every school in our district needs. (In your case I'm assuming this is one sysprep, unless something went wrong and multiple other syspreps were performed.) However, that base is probably created from the manufacturer's OS install disc, and chances are you don't use Audit mode? That is technically another sysprep, which already means, theoretically, you're at two syspreps for the base image (unless, again, somebody sysprepped another time).
This means it only leaves you with one more sysprep before you’ve even been able to create your image for your school.
The way we create images at our school is we create what I like to call the Super Base. This is the base with all the software common to all of our schools installed. Then we create the base image for the school it is intended for, as software needs change between high school, middle school, and elementary school.
We don't sysprep yet where we work, but we also aren't using drives smaller than our initial image. I'll look into testing some new commands to see whether we have to sysprep a resizable image for Windows 7. This way, I'll know whether or not it's even possible. There's a command I've been waiting to play with called partclone.ntfsfixboot that I imagine may do the trick for us. As far as extending systems with larger drives than the image was created on, I'm going to leave that up to the individual to fix (or not) for now.
I know I’d love to be able to create a resizable, non-sysprepped, image for Windows 7, and maybe even Windows 8 as needed. (though I’m still waiting for Windows 8 to come in.)
-
Just in the middle of a test run with 0.33b r1050
I've got 12 test VMs and an exemplar; one of the VMs has a smaller HDD than the exemplar, two have bigger HDDs, and the rest are identical to the exemplar.
I am deploying a Multi-Partition image (non-resizable) from the exemplar to all 12 VMs via unicast.
I was expecting 1 failure… but the VM I was expecting to fail imaged correctly.
My Windows 7 is installed on a single partition (no 100MB partition) that takes up about 2/3 of the exemplar disk. No sysprep.
So it seems that moving to Partclone may yield a way to deploy images to smaller hardware, by setting the image to use less of the HDD and then expanding later. The difference in my setup is only 1GB (31GB, 32GB, and 33GB disks), so I may need to see if this works with even smaller disks, as my partition is only 19.43GB.
-
If that works, it's an unexpected quirk. It's awesome nonetheless, but I'm a little worried about what effect the MBR is having on the smaller drives. Does the original partition size still show up on both the smaller and larger drives?
-
r1051,
Starting to, try as I might, add a French language translation. As I've updated where the code searches for _()-enclosed strings, I'm also having to add many more entries to the translations we currently have. If anybody wants to have a go at it, I can post the .pot file and have others perform the translations for me (of course, only if you have enough time). If you are interested in doing this, let me know. When the translations are complete, please post the completed file and the language it was translated to.
r1052 released to fix a typo I found in the TaskScheduler service file. Also, it corrects the placement of the $deploySnapin utility so it actually works when it’s supposed to.
In case you're all wondering, I performed a diff of revision 899 against the current release; I've apparently made over 250,000 changes/edits/modifications to get things working. Hope you're all enjoying it.
-
Well, looks like it has successfully deployed to a 25GB HDD as well.
What do you want me to check on the MBR regarding your concerns?
Has anyone else replicated this behaviour? Going to start some tests using the FOG client etc. soon, and may even get around to making a snapin to expand the drive after deployment.
One slightly annoying thing (for me) is that when you set the storage server to Max Clients 1 (which I am doing to try and increase speed on my storage), it still ends up deploying 2 at once sometimes. Yes, I know it seems strange, but with a single disk, if I run too many at once it slows down to double digits per minute, and I would rather not wait a whole day for a test run to complete.
-
All,
I'm currently implementing, in my own fashion, a form of template for all the text that needs translating. Working on this has given me a great amount of insight into how to build the menus and submenus. I've been able to write much less code to handle the actual generation of the menu items and submenu items, as well as notes.
My thought here is to have all required translatable text in one file for easy modification/addition as necessary. The languages folder still handles the translations, but in the future this could easily be switched.
Just giving some insight into what I’m trying to accomplish at the moment. Right now it’s just in testing as I still have to add all the needed menu items.
Bear with me on this, though. Is it the best approach? I don't know yet, but it seems to process things much faster than the original methods, as all of the text is stored in variables, which are quick to access.
-
Turning all the text into variables loaded from a file is probably the easiest way to do the translation.
The language setting determines the file it reads and as long as the variables correspond it should work.
As long as a translation is available, it's easy to add, and you may get things done slightly faster/with less processing power if the same variable is used multiple times.
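Something like this little sketch is what I'm picturing (the folder name, file names, and keys here are only made up to illustrate, not FOG's actual layout):
<?php
// e.g. languages/en.php would contain:  return array('HostUpdated' => 'Host Updated!');
// and  languages/fr.php would contain:  return array('HostUpdated' => 'Hôte mis à jour !');
function loadLanguage($langCode)
{
    $file = __DIR__ . '/languages/' . $langCode . '.php';
    if (!is_file($file)) {
        // Fall back to English if that translation doesn't exist yet.
        $file = __DIR__ . '/languages/en.php';
    }
    return include $file;
}

$text = loadLanguage('fr');
echo $text['HostUpdated'];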
Do you think the PXE menu will be as easy to translate?
Having an option where the PXE boot screen displays but the menu stays hidden unless you push a certain set of keys would be useful as well.
Sometimes people see the custom logo and start pushing buttons and then wonder why their PC doesn't boot. No matter how many times you tell them to let the PC boot up, they will bash things anyway…
-
You could set up the hide menu option, which is on the PXE Boot Menu page under FOG Configuration (the ? symbol).
At least they don’t start bashing things away as they just need to wait for it to boot.
The PXE menu should be easy to translate, if and when you set it up using this same option. The text is ready for gettext translation; as long as we have a corresponding po/mo file in the proper directory with the proper information, the text will be translated before being written to the menu file.
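For reference, here's a rough sketch of how the gettext lookup works in PHP; the text domain, locale, and directory layout below are assumptions for the example, not FOG's real paths:
<?php
$locale = 'fr_FR.utf8';
putenv('LC_ALL=' . $locale);
setlocale(LC_ALL, $locale);

// gettext would expect e.g. ./languages/fr_FR.utf8/LC_MESSAGES/messages.mo
bindtextdomain('messages', './languages');
bind_textdomain_codeset('messages', 'UTF-8');
textdomain('messages');

// If the .po/.mo pair contains a translation for this string it is returned;
// otherwise the original English text falls through untouched.
echo _('Host Updated!');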
With my rewrites, translation will ultimately be broken until we get the files recreated. That's part of why I'm trying to limit the number of strings that need translating. Currently the FOG web pages require 1414 translations, but there are a lot of duplicates, such as:
Snapin Updated!
Host Updated!
Group Updated!
If we use Snapin, Host, and Group as individual variables:
$this->foglang['Group'] = _('Group');
$this->foglang['Host'] = _('Host');
$this->foglang['Snapin'] = _('Snapin');
Then call the "Updated!" part as:
$this->foglang['MsgUpdated'] = _('%s Updated!');
Then, on the corresponding pages, you build the message as:
sprintf($this->foglang['MsgUpdated'], $this->foglang['Group']);
This sounds like more work, but it means one file contains all the text, so changes are that much easier. Translating is still difficult, as it still requires a translated form of the original message, but I can pass around one file that I know contains all the text that needs to be translated.
I could do that with a .pot file as well, but generating the file may pick up some erroneous tags, as it needs to search every file within the entire fog web directory.
With this updated model, I just need to point it at the text file and have it go.
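To tie it together, here's a minimal, self-contained sketch of the pattern; the class and key names mirror the snippets above, but this is not the exact FOG code:
<?php
if (!function_exists('_')) {
    // Fallback so the sketch runs even without the gettext extension loaded.
    function _($string) { return $string; }
}

class FOGLang
{
    public $foglang = array();

    public function __construct()
    {
        // Nouns are translated once...
        $this->foglang['Group']  = _('Group');
        $this->foglang['Host']   = _('Host');
        $this->foglang['Snapin'] = _('Snapin');
        // ...and the shared message body takes the noun as a placeholder.
        $this->foglang['MsgUpdated'] = _('%s Updated!');
    }
}

$lang = new FOGLang();
// Each page then builds its own message from the shared pieces:
echo sprintf($lang->foglang['MsgUpdated'], $lang->foglang['Group']); // Group Updated!
echo sprintf($lang->foglang['MsgUpdated'], $lang->foglang['Host']);  // Host Updated!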
-
The only problem I can see is if certain languages differ and you want a 'perfect' translation.
Having 'Group' next to 'Updated' may work in one language but not another; one language may need 'Updated' before 'Group', or something else entirely. But the question is: are translations that read slightly awkwardly, but are understandable, 'good enough'?
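(Thinking out loud: since the whole '%s Updated!' string is what gets translated, I suppose the translator can move the noun wherever their language needs it. A made-up illustration:)
<?php
$nounEn = 'Group';
$nounFr = 'Groupe';

$msgEn = '%s Updated!';          // English: noun first
$msgFr = 'Mise à jour de %s !';  // a language that wants the noun at the end

echo sprintf($msgEn, $nounEn);   // Group Updated!
echo sprintf($msgFr, $nounFr);   // Mise à jour de Groupe !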
Has there been any discussion with Chuck about a freeze for a release? Adding in translatability is a big job, rewriting basically every file to use variables. It's been 2 years and 5 months since 0.32 was released, and it would be nice to get something stable on the site so that more people can benefit from FOG.
-
r1053 released.
This gives us the preliminary bits I described above. The only pages it makes direct changes to, so far, are the Dashboard and the menu structures (main and sub).
I know it won’t be perfect, but please test it.
Thank you,
-
Just in the middle of a test run… Storage is set to max clients 1, but now 6 are running simultaneously, down to 62MB/min on one VM.
Client hasn’t changed a hostname yet either.
I've tried manually running the client, increasing RAM on the client PC, and manually restarting the service. Nothing is working at all.
Will leave the test to complete and check hostnames after they've sat for a while, to see if any of them do anything.
-
The hostname is changed early by default, meaning before the system reboots after a deploy task. If it's not changing, maybe there's another issue that needs to be looked at. My other worry is the queued client status. I may need to request your FOG server's Apache error log from when the tasks are running, to see what errors are being thrown at us.
-
OK, can it be changed to use the FOG client again? I will also need the Active Directory join part of it as well. Changing the hostname early might be nice, but it seems like it's rebuilding a feature that FOG already has, unless there will be some other way to join AD in FOG and do all of the other nice things the client does.
If you want access I can PM you the details you need.
-
Joining AD still works fine. The early hostname changer was added to make imaging that little bit faster. However, you can turn off the early behaviour under FOG Configuration, FOG Settings, via the hostname_early option.