FOG Torrent
-
In my opinion, and regardless of its performance: if it can be fixed without much effort (and I haven't a clue about that), then let's fix it and keep it. Otherwise, throw it out.
-
The idea of FOG Torrent was to download the image via torrent; the more clients that had to do the task, the more seeders you had spreading the image around.
This actually works pretty well, but it also means you must first create a location to download the image to. That means it would ONLY work for resizable images, and you had to make some pretty weird guesstimates.
I accidentally broke it (although for good reason) by switching the filenames from the resizable rec/sys .img format to the common d1p1-style filenames we use now.
I'm not overly concerned about removing it. It was really just an idea that got a wheel spinning down the road, but it hasn't gotten much traction, as you can see.
-
I'd vote for removal if we don't have anyone really getting into this. I guess we would see a lot of issues coming up in the forums if people started using torrent. It's not as easy as multicast, I think (from what I remember of my tests).
But I don’t want to ruin someone’s work either. Who added torrent to FOG?
-
@Sebastian-Roth Ask and thy will be done.
-
@Sebastian-Roth said:
Who added torrent to FOG?
That was me. It worked through any network connection, so it had that as an advantage over multicast (Tom even imaged a computer at his house from an image hosted in Kansas), and it didn't hammer the server as hard as a bunch of unicasts, but it never had the speed boost I wanted out of it.
-
I always meant to get back to work on it, but haven't had the time.
-
@Junkhacker Thanks for joining in. I didn't mean to offend you by saying that I vote for removal. Just from the tests I did with torrent imaging, I don't feel this is a very useful feature. Again, no offense!
… it didn't hammer the server as hard as a bunch of unicasts, but it never had the speed boost I wanted out of it.
In my case it really hammered the network equipment as I remember. Lots and lots of TCP connections all over the place and - as you said - not much speed…
-
A good question to ask is: if I were imaging 30 to 200 computers, would a torrent imaging task perform better than individual unicast imaging tasks? Also, assume my only seed is my FOG server. Would the individual hosts seed pieces as they received them, the way regular torrenting works?
Let's forget about multicast for the moment.
-
@Wayne-Workman That was the exact intent. While the files were downloading, the clients would also be seeding what they had to everybody.
The problem with torrent imaging is that torrents grab chunks rather than sequential data.
Downloading the image could be faster, but it still had to be written to the HDD once the download finished, and that only happens at HDD speed. So even if the download itself was quicker, you still had to wait for the image to be put on the disk.
-
@Tom-Elliott Ok, I get it. Where is the image stored in the meantime? I guess that's why you were all talking about knowing exactly how large the image is, and the resizing and stuff - because the image would need to be stored temporarily on the HDD, since it's too big to fit in RAM.
I bet this would work a lot better for two-drive systems…
-
@Wayne-Workman Two disks would help for getting the image, but as we all know, cross-drive copying is VERY slow. Slower than same-drive copying.
-
Specifically for spinning disks, of course.
-
@Tom-Elliott said:
Downloading the image could be faster, but it still had to be written to the HDD once the download finished, and that only happens at HDD speed. So even if the download itself was quicker, you still had to wait for the image to be put on the disk.
I can confirm that in my tests the speed kind of picked up as more and more of the image arrived at the clients. But the network/switches also had to handle more and more traffic.
BUT, if I remember correctly, I used a modified torrent client which was able to write the image straight to disk. Yes, I found that link! It's right near the end of the page:
./btdownloadheadless.py /tmp/bittorrent/currentimage.torrent --saveas /dev/sda1
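That command works because a torrent piece index maps to a fixed byte offset, so a client can drop each piece straight onto the raw target as it arrives, with no staging copy. A minimal sketch of the idea (hypothetical illustration, using a plain file in place of /dev/sda1 and a tiny piece size):

```python
PIECE = 4  # tiny piece size for the demo; real torrents use 256 KiB and up

def write_piece(path, index, data):
    """Write one piece at its absolute offset: piece i starts at i * PIECE."""
    with open(path, "r+b") as f:
        f.seek(index * PIECE)
        f.write(data)

# Preallocate the target (stand-in for the block device), then land pieces
# in whatever order they arrive from the swarm.
target = "demo.img"
with open(target, "wb") as f:
    f.truncate(3 * PIECE)

for index, data in [(2, b"CCCC"), (0, b"AAAA"), (1, b"BBBB")]:
    write_piece(target, index, data)

with open(target, "rb") as f:
    print(f.read())  # b'AAAABBBBCCCC' - complete despite out-of-order arrival
```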
@Junkhacker Re-reading the old thread on torrent imaging… I was somehow aware of this in the back of my mind, but I didn't actually remember that you were using it even at the cost of having to download first before dumping the image to disk. It must have been really useful for you. Are you still not using multicast? If you are keen to get this back into FOG (including writing straight to disk - otherwise it doesn't make sense to me), I offer to put in what I know and tested a while ago.
-
@Sebastian-Roth said:
In my case it really hammered the network equipment as I remember. Lots and lots of TCP connections all over the place and - as you said - not much speed…
That is true, but if you were imaging a lab of computers, it didn't take long before most of that traffic was on a relatively local switch instead of all coming from the server room.
@Wayne-Workman said:
Where is the image stored in the meantime? I guess that's why you were all talking about knowing exactly how large the image is, and the resizing and stuff - because the image would need to be stored temporarily on the HDD, since it's too big to fit in RAM.
What it was doing was creating a partition at the end of the drive to put the image on, then deploying the downloaded image files to the other partitions. When imaging was done, it would remove that partition and extend the partition before it to the end of the drive. Works for the 100-350 MB partition 1 / remainder partition 2 setups that Windows uses most of the time.
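The partition shuffle described above comes down to sector arithmetic: reserve a scratch partition at the end of the disk big enough for the download, then delete it and grow the data partition back to the disk end. A hypothetical sketch of just that arithmetic (a real implementation would feed these numbers to a partitioning tool; the 5% margin is an illustrative guess, not FOG's actual value):

```python
SECTOR = 512  # bytes per sector

def plan(disk_sectors, data_start, image_bytes, margin=0.05):
    """Return the partition layout during and after torrent imaging."""
    scratch = -(-int(image_bytes * (1 + margin)) // SECTOR)  # ceil to sectors
    scratch_start = disk_sectors - scratch
    during = {"data": (data_start, scratch_start - data_start),
              "scratch": (scratch_start, scratch)}
    # Afterwards the scratch partition is removed and data takes the whole disk.
    after = {"data": (data_start, disk_sectors - data_start)}
    return during, after

during, after = plan(disk_sectors=976_773_168,   # ~500 GB disk
                     data_start=2048,
                     image_bytes=20 * 1024**3)   # ~20 GB image file
print(during["scratch"])   # temporary download area at the end of the disk
print(after["data"])       # (2048, 976771120) - data partition regains it all
```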
@Sebastian-Roth said:
@Junkhacker Re-reading the old thread on torrent imaging… I was somehow aware of this in the back of my mind, but I didn't actually remember that you were using it even at the cost of having to download first before dumping the image to disk. It must have been really useful for you. Are you still not using multicast? If you are keen to get this back into FOG (including writing straight to disk - otherwise it doesn't make sense to me), I offer to put in what I know and tested a while ago.
I'm still not using multicast. To be honest, unicast is perfectly fast enough for the way we use FOG. The torrent-cast method was mostly an experiment, and I hoped it would be a solution to the problems most people have with multicast. It just never worked out that way.
If your method writes straight to disk, does the end client still seed somehow? One of the biggest advantages of the torrent-cast method was that if you had a slow link to the server, your local hosts could help make up for it (an old building they won't upgrade the uplink to, a remote office, that kind of thing). I also had an idea about how the FOG client on computers that weren't being imaged could be pre-seeded with images for an upcoming imaging task, but that also never came to fruition.
-
@Junkhacker If you could get it working again, that would be great. And it's perfectly fine in my mind if the data is not received sequentially - that's just the nature of torrenting; it's designed to be this way and has many advantages. Torrent imaging should work like torrenting, not like unicasting (sorry, Sebastian).
-
@Junkhacker said:
If your method writes straight to disk, does the end client still seed somehow?
Why not? The modified torrent client thinks it is using a normal file - writing and reading random blocks within that file/block device. If I remember correctly, more and more seeders came online and the speed built up over time…
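In other words, the target disk doubles as the torrent payload store: any piece that has already landed can be read back from the same offsets and served to peers. A toy model of that read/write duality (hypothetical illustration, not the modified client's actual code, with a shrunken piece size):

```python
PIECE = 4  # demo piece size; real clients use much larger pieces

class DiskBackedClient:
    """Writes received pieces straight to the target (file or block device)
    and seeds any piece it already holds by reading it back from disk."""

    def __init__(self, path, n_pieces):
        self.path = path
        self.have = set()  # indices of pieces received so far
        with open(path, "wb") as f:
            f.truncate(n_pieces * PIECE)  # preallocate the target

    def receive(self, index, data):
        with open(self.path, "r+b") as f:
            f.seek(index * PIECE)
            f.write(data)
        self.have.add(index)

    def serve(self, index):
        if index not in self.have:
            return None  # can't seed a piece we don't hold yet
        with open(self.path, "rb") as f:
            f.seek(index * PIECE)
            return f.read(PIECE)

c = DiskBackedClient("target.img", 3)
c.receive(2, b"CCCC")   # pieces arrive out of order
print(c.serve(2))       # b'CCCC' - this piece can already be seeded to peers
print(c.serve(0))       # None - not downloaded yet
```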
-
@Sebastian-Roth Oh. The way you described it before made it seem like the clients only accepted pieces in sequential order… If they just write random pieces to where they belong on the drive, that'd be the best solution.
-
@Sebastian-Roth said:
Why not? The modified torrent client thinks it is using a normal file - writing and reading random blocks within that file/block device. If I remember correctly, more and more seeders came online and the speed built up over time…
Ah, so this is a completely uncompressed block-device torrent file, I see. My method uses the same images as standard FOG imaging.
-
@Junkhacker Good point! Thanks for mentioning it. Now, more and more, I remember why I totally dismissed torrent imaging for myself. I guess there is no way you could combine random-block torrent transfer with partclone (filesystem-aware) imaging that writes to disk on the fly. Only raw would be possible. With current HDs being 1 TB and more even in clients, this would be a huge waste of network traffic…
Reading through some old mails with a colleague, I remembered the HD in the clients being a massive bottleneck as well. Disk IO literally dropped to the floor as clients uploaded (read) and downloaded (wrote) random blocks at the same time.
-
@Sebastian-Roth Well, we could still shrink the partition to the size of the actual data usage, like we do now. That would keep you from having 1 TB images unless there's actually a TB of data on the drive. But you still couldn't compress it, and it would be a torrent-only image - you couldn't do normal unicast/multicast deployments with it.