
    Posts made by Gilou

    • RE: Slave Drive to backup images

      Yes. Plug it in internally. Assuming your first disk for Ubuntu is /dev/sda, the second one will be /dev/sdb. If you used the first partition on your former disk, you can mount it using:
      [CODE] mount /dev/sdb1 /mnt[/CODE]
      Your former files will then appear under /mnt, and you can get to the usual locations, which are:

      • opt/fog
      • var/www/fog
      • images
        As for MySQL, you can try putting a cold copy of var/lib/mysql back in place, but that may not be as easy as it sounds. It should work if you stop MySQL first, copy the data over, then start it again. Not really clean, but it will work in most scenarios.
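      A minimal sketch of that cold copy, assuming the old disk is mounted on /mnt as above and MySQL runs as a regular service (the service name and paths may differ on your setup):
      [CODE]
      # stop MySQL so the data files are consistent before copying
      service mysql stop
      # copy the old data directory over, preserving ownership and permissions
      cp -a /mnt/var/lib/mysql/. /var/lib/mysql/
      # start MySQL again and check its logs if it refuses to come up
      service mysql start
      [/CODE]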

      Good luck.

      posted in Linux Problems
    • RE: Imaging using partclone instead of partimage

      Also, I will try to give more insight into how to resize ext[34] partitions… There is no reason not to do the same thing we do for NTFS partitions, using resize2fs, assuming it’s inside the init.gz; I haven’t even checked for it yet 🙂
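      For reference, the manual equivalent of what FOG does for NTFS, using resize2fs on an ext3/ext4 partition (the device name is just an example, and the fsck pass is mandatory before shrinking):
      [CODE]
      # check the filesystem, then shrink it to its minimal size before upload
      e2fsck -f /dev/sda1
      resize2fs -M /dev/sda1
      # after deploying, grow it back to fill the (possibly larger) partition
      resize2fs /dev/sda1
      [/CODE]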

      posted in Feature Request
    • RE: 64 Bit Stuff

      awesome, will try…

      posted in Tutorials
    • RE: 64 Bit imaging

      64 bits shouldn’t be an issue, as it has nothing to do with the partition table. I’d say the issue is more likely around UEFI/GPT than around 64-bit support?

      posted in FOG Problems
    • RE: Imaging using partclone instead of partimage

      There is a bug if you use my patch; I missed a spot where partimage is still used. I need to work a bit more on that, and we need either a way to say “use partclone or partimage” (say, an image type), or a way to effectively migrate from the partimage+gz format to partclone+gz. But that requires more roadmap thinking than just changing a few lines… The fog shell script is also not consistent; a lot of it should be refactored to make further development on it easier…

      As for “how I did it”, well, I’m a bit familiar with partclone (I use it for MacMinis), and the documentation is rather extensive. The catch might be with pigz / gzip, as pigz doesn’t return properly when reading stdin, but gzip is available in the buildroot environment, so there it goes…
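      For reference, the restore side of such a pipe that behaves well with plain gzip (partclone options as in its documentation; the image path and device are only examples):
      [CODE]
      # decompress with gzip (pigz misbehaves when reading stdin here) and stream into partclone
      gzip -d -c /images/myimage/d1p1.img | partclone.extfs -r -s - -o /dev/sda1
      [/CODE]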

      I have huge issues building buildroot, 32-bit or 64-bit, vanilla or the one described on SVN, or, as a matter of fact, even the one listed on your website (Tom)… And I won’t have immediate time to work on that, but I’ll look into it… My goal is to work on a 64-bit buildroot + kernel, to be able to exploit the 16 GB RAM monsters properly…

      posted in Feature Request
    • RE: Load kernel and init.gz from storage node

      (and then point your local machines to this TFTP server via DHCP, rather than to your master node.)

      posted in FOG Problems
    • RE: Load kernel and init.gz from storage node

      Basically, having more than one tftp server? There you go: [url]http://www.fogproject.org/wiki/index.php/Multiple_TFTP_servers[/url] 😉
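      For the DHCP part mentioned in the reply above, the relevant bit on an ISC dhcpd setup would look something like this (addresses are placeholders):
      [CODE]
      # point PXE clients of this subnet at the local TFTP/storage node
      subnet 192.168.10.0 netmask 255.255.255.0 {
        range 192.168.10.100 192.168.10.200;
        next-server 192.168.10.2;    # TFTP server for this site
        filename "pxelinux.0";
      }
      [/CODE]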

      posted in FOG Problems
    • RE: Boot GRUB Froze

      That’s debian/ubuntu oriented, and I think it is properly documented on the wiki, but he’s using Fedora 😉

      posted in Linux Problems
    • RE: Boot GRUB Froze

      Hi,

      d1p3.img might be the swap partition, so it’s not a problem if it says it can’t find it: FOG recreates swap partitions when you download (deploy) the image. The GRUB issue, however, is mainly due to the fact that GRUB2 doesn’t work too well with FOG. Two solutions:

      • install grub1 (also known as grub legacy).
      • use BCD / bcdedit to chainload GRUB instead of the opposite, and install GRUB in the Linux root partition (rough sketch below). This is actually the better solution, especially since it might be hard to get Fedora working on a grub legacy setup, depending on your install.
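      A rough sketch of that second option, assuming the Linux root is /dev/sda3 (adjust to your layout); the boot-sector dump is what BCD/bcdedit ends up chainloading:
      [CODE]
      # install GRUB into the Linux root partition instead of the MBR
      grub-install --force /dev/sda3
      # copy that partition's boot sector to a file the Windows boot manager can chainload
      dd if=/dev/sda3 of=/boot/linux.bin bs=512 count=1
      # linux.bin then goes onto the Windows system drive and gets registered with bcdedit
      [/CODE]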

      Good luck 😉

      posted in Linux Problems
    • RE: Imaging using partclone instead of partimage

      OK, here’s the patch, quick & dirty, for those interested; it’s being worked on, and will certainly not work in a multicast environment. I’m still interested in being able to work on SVN, by the way.

      [url=“/_imported_xf_attachments/0/443_partclone.diff.txt?:”]partclone.diff.txt[/url]

      posted in Feature Request
    • RE: Latest FOG 0.33b

      Another way to get a tarball of the latest SVN version directly is to use SourceForge and hit “Download snapshot” on [url]http://sourceforge.net/p/freeghost/code/HEAD/tree/trunk/[/url]

      posted in General
    • Imaging using partclone instead of partimage

      Hi,

      I have two use cases in mind (working on the 0.33b base):

      • ext4 support
      • grub2 support

      Let’s talk about ext4, because it appears to be easier (somehow). I’m going to use partclone.extfs instead of partimage. The problem is that the image types are not compatible, and partclone plays less nicely with .gz images than partimage does. But that’s OK.

      So I modified my FOG script in init.gz, and every time the partimage save/restore occurred for $osid == 50, I replaced it with the appropriate command for partclone. And guess what… it works 😉
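      Roughly the kind of substitution involved, on the upload side (the device and image path are only illustrative; the real change lives in the fog script inside init.gz):
      [CODE]
      # where the fog script used to call "partimage save" on the partition,
      # stream it through partclone.extfs and gzip instead:
      partclone.extfs -c -s /dev/sda1 -o - | gzip -c > /images/myimage/d1p1.img
      [/CODE]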

      I’ll test a bit further; in particular, I’ll try not to break multicast more than need be, and I shall provide a patch for /bin/fog. Anyone interested? Remarks?

      As for GRUB2 and other things (like UEFI), I think there are quite a few more things to be done… GPT seems partially supported, so that’s a start…

      Cheers
      Gilou

      posted in Feature Request
    • RE: In need of a working Kernel

      I feel like trying something, but well. Has anyone tried compiling buildroot & the kernel for 64 bits? FOG as it is would benefit a lot from it on systems with more than 4 GB RAM, especially for the upload process… Any input on that? 😉
      I’ll see how long it takes to make buildroot & the kernel 64-bit, and see how that goes…

      posted in FOG Problems
    • RE: FOG Print Management

      Hi,

      What does your C:\fog.log say about it?

      Cheers
      Gilou

      posted in General
    • RE: Mac images

      As I mentioned on another thread, the way I image Macs is to PXE boot Ubuntu (to have a somewhat compatible Linux running on the Mac Minis I had on hand), and use partclone to copy the HFS+ partitions… FOG can PXE boot that, but can’t image it… It might be a lot easier once FOG is made to work with partclone (there is a wiki page about how to PXE boot a Mac).
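      The HFS+ side is the same idea as the ext4 case, just with partclone.hfsp (device and file names are only examples):
      [CODE]
      # save an HFS+ partition from the PXE-booted Ubuntu, compressing on the fly
      partclone.hfsp -c -s /dev/sda2 -o - | gzip -c > /mnt/images/macmini_hfsplus.img
      [/CODE]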

      posted in Tutorials
    • RE: How to: Modify the PXE boot menu to allow the booting of .iso files

      The thing about booting an ISO is that you have to transfer the whole ISO over the wire using TFTP… which can take a while. The usual approach to live booting over PXE is to send a smaller image, then mount the root filesystem off NFS, so that it doesn’t have to wait for everything to be loaded before booting.
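      As an example of that approach, a PXE menu entry for an Ubuntu live image with its root on NFS might look like this (pxelinux syntax; the kernel/initrd paths and the NFS export are placeholders):
      [CODE]
      LABEL ubuntu-live
        MENU LABEL Ubuntu Live (NFS root)
        KERNEL ubuntu/vmlinuz
        APPEND initrd=ubuntu/initrd.img boot=casper netboot=nfs nfsroot=192.168.1.10:/srv/nfs/ubuntu ip=dhcp
      [/CODE]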

      posted in Tutorials
    • RE: How to make deployment quicker with Windows 7 and fog

      Sometimes you can’t use multicast (cheap switches, mixed environment…). But you can make unicast faster, especially if the server’s disks are faster than the clients’ and it has shitloads of RAM. If you deploy a 16 GB image and the server has 16 GB of RAM, chances are that if the tasks start simultaneously, they will hit the cache on the server, so they will only be limited by their own disk speed, which is usually slower than 1 Gbps. So you can get 2-3 hosts deployed at once and still be faster than 1:1 😉

      posted in General
    • RE: How to make deployment quicker with Windows 7 and fog

      What kind of disk setup do you have on your server? How much RAM/CPU? 🙂
      If you were going at wire speed, 35 GB should be deployed in about 5 minutes…
      Assuming 1 Gbps wire speed, you have to account for:

      • NFS overhead (let’s say 5%)

      • FOG host disk (being written to): usually, standard SATA disks nowadays can do 100 MB/s… Older ones… take that down to 50.

      • FOG server disks: it needs to read the data, but if you have standard disks, that should be 100 MB/s at least (you can play with bonnie++ to get an idea, or hdparm -tT /dev/sda, to see what you get)

      • To test the server disk, you can use bonnie++ or smartctl (smartmontools)

      • To test the bandwidth, you can use iperf, see what you get on the link

      • To test if your server is up and connected at 1 Gbps, use mii-tool or ethtool

      But counting 50 MB/s disks on both sides, and even with the overhead, you still get around 400 Mbps, which is roughly 12 minutes for 35 GB. Then there is the switch capacity… but for 1:1, with 1 Gbps on each side, I doubt any Gbps switch will fail you.
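      A few of the quick checks mentioned above, as one-liners (the interface, device and server address are placeholders; note the -tT read test comes from hdparm, not smartctl):
      [CODE]
      # is the server link actually negotiated at 1 Gbps?
      ethtool eth0 | grep Speed
      # raw sequential read speed of the server disk
      hdparm -tT /dev/sda
      # network throughput between a client and the server (run "iperf -s" on the server first)
      iperf -c 192.168.1.10
      [/CODE]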

      Good luck 😉

      posted in General
    • RE: Bit Torrent

      (And if you have 50 PCs in a VLAN, properly isolated, use multicast, that will be way faster… Even if your switch treats it as broadcast, it would probably still be faster anyway…)

      posted in General
    • RE: Bit Torrent

      The current mechanism in init.gz using partimage would have no idea how to download using bittorrent (it mounts the NFS share and uses partimage over it directly). If you want to use bittorrent, you need some space to save the image before dumping it to the disk… That’s not a trivial thing to do given how the process currently works, I’m afraid 😞

      Though it would be awesome… Like: check the size of the image, check that we have size_of_image_decompressed + size_of_image free, deploy with bittorrent onto the extra space, dump the image, then remove the temp partition and extend the main one… Not trivial, but interesting 😉
      It would also probably be slower, as you’d need to write the whole image to the disk twice…
      The upload process gets around this by splitting the image in RAM (and would thus be much faster on a 64-bit kernel) before sending the chunks out, but bittorrent wouldn’t be able to do that (or you’d have to force the transfer to be sequential and find a way to dump the parts of the file that are fully downloaded, but that reduces the benefit of bittorrent, because everyone starts downloading the same part of the image file rather than spreading randomly over the chunks…).
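      A very rough sketch of the space check in that idea (everything here is hypothetical: the image sizes would have to come from the server, and the values are placeholders):
      [CODE]
      #!/bin/sh
      # hypothetical pre-check before a bittorrent-based deploy
      IMG_SIZE=$((8 * 1024 * 1024 * 1024))        # compressed image size (placeholder)
      IMG_RAW_SIZE=$((16 * 1024 * 1024 * 1024))   # decompressed size (placeholder)
      DISK_SIZE=$(blockdev --getsize64 /dev/sda)
      if [ "$DISK_SIZE" -ge $((IMG_SIZE + IMG_RAW_SIZE)) ]; then
          echo "enough room for a temporary download partition"
      else
          echo "not enough space, fall back to the NFS method"
      fi
      [/CODE]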

      posted in General