Imaging computers at 2.6Gbps
-
@Obi-Jon The error is most likely completely unrelated to the compression level. If you can capture more information, we’ll see if we can find the cause.
-
@Junkhacker “locate partclone.log” does not find a log file. I’m thinking a log was not generated, or am I looking for the wrong log file? I tried “locate *.log” but didn’t see anything promising:
/opt/fog/log/fogimagesize.log
/opt/fog/log/fogreplicator.log
/opt/fog/log/fogscheduler.log
/opt/fog/log/fogsnapinhash.log
/opt/fog/log/fogsnapinrep.log
/opt/fog/log/groupmanager.log
/opt/fog/log/multicast.log
/opt/fog/log/pinghost.log
/opt/fog/log/servicemaster.log
/root/fogproject/bin/error_logs/fog_error_1.3.5.log
/root/fogproject/bin/error_logs/foginstall.log
/var/log/alternatives.log
/var/log/auth.log
/var/log/bootstrap.log
/var/log/dpkg.log
/var/log/kern.log
/var/log/php7.1-fpm.log
/var/log/apache2/access.log
/var/log/apache2/error.log
/var/log/apache2/other_vhosts_access.log
/var/log/apt/history.log
/var/log/apt/term.log
/var/log/mysql/error.log
/var/log/unattended-upgrades/unattended-upgrades-shutdown.log
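(Side note: locate only searches a periodically rebuilt database, so a file created since the last updatedb run can be missed. A live search, assuming root on the FOG server, would look like:)

sudo updatedb && locate partclone.log        # refresh the locate database first
sudo find / -name partclone.log 2>/dev/null  # or search the filesystem directly
-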
@Obi-Jon Partclone.log is related to the client.
Anything FOS (the inits) presents is generated on the client itself, not on the server.
You could add isdebug=yes to that host’s Kernel Arguments field. This will drop the host into a terminal prompt where we can step through and obtain more information.
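(For anyone following along, a rough sketch of that debug flow; the menu path reflects the 1.3.x web UI and the exact steps are an assumption:)

# FOG web UI: Host Management > [host] > Host Kernel Arguments field:
isdebug=yes
# PXE boot the host; instead of imaging, FOS drops to a shell prompt.
fog    # steps through the imaging task one stage at a time, pausing between steps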
-
@Tom-Elliott Ah, I must have borked the log then, when I uploaded the image and redownloaded it after the error to compare image file sizes. No errors the second time, since the images matched. If I see that error again I’ll revisit this. Thanks!
-
@Obi-Jon I feel I must add, once again.
“Holy Shit”
-
@Tom-Elliott Lol, my sentiments as well, been pumped all day. Can’t wait to try this over multiple unicasts simultaneously.
-
@Obi-Jon That’s great to see. We see very similar speeds with ours, though it’s just a VM on ZFS-based storage with 4 GB of RAM and 2 cores. I think the key is making sure you’ve got solid clients, and SSDs on the client side. And Intel NICs.
Some of our Broadcom-equipped ThinkPads max out at around 7, the Intel Dells hit the 10-11 mark.
This is all on gzip. Haven’t made a new image since the new compression arrived.
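(If you want to verify what a given client actually has, a quick check from a shell on the client, e.g. a FOS debug session; the interface name eth0 is an assumption:)

ethtool eth0 | grep -E 'Speed|Duplex'   # negotiated link speed and duplex
ethtool -i eth0                         # driver in use (e1000e, tg3, r8169, ...)
lspci | grep -i ethernet                # Intel vs. Broadcom vs. Realtek part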
-
@Bob-Henderson Good point on the Intel NICs, that’s what I’m using as well. My new clients are based on the H110T chipset, which has both Intel and Realtek NICs. I made sure to only enable PXE on the Intel NIC and disabled the Realtek NIC entirely. Now to keep my users from plugging into the wrong jack and generating more help desk tickets. I’m actually contemplating gluing a punched-down RJ45 plug into the Realtek ports, lol.
-
@Obi-Jon eww ya, no, don’t do that.
Color code the jacks and cables: blue jack, blue cable; orange jack, orange cable. It’s what we do for our users. Another school here, but we’re a 1:1 shop so all my stuff is laptops. Snapins are my saving grace, tied with PDQ Deploy.
-
@Bob-Henderson That’s a really good idea. Gives me ideas for audio/video stuff that gets plugged in wrong every time they wax the floors…
-
@Obi-Jon That’s how it started with us too. Labels didn’t work, ’cuz who can be bothered to read something written in big letters on each end…
So we stock 10 colors now, and they all have a reason for them. We have a sign made up for the colors, laminated and posted in every room.
-
From my experience:
Your FOG machine doesn’t need to be that fancy to accomplish this. It will be useful for concurrency and whatnot, of course, but I’ve had similar results with a 2nd-generation i5, 6 GB RAM, a 500 GB HDD (!!!) and a 1 Gbps link.
The key to fast imaging is primarily the target device (assuming you’re imaging 1 device, of course). Devices with modern multi-core CPUs and faster storage tend to deploy quite a bit faster.
-
@Quazz Yes, the target device does appear to be the deciding factor for deployment speed when imaging 1 device. That said, my old FOG box was running 0.32 (didn’t upgrade it due to some customization I had made to it) and was WAY slower than this, even with the same clients. The server was no slouch, even for 5-year-old hardware, but the newer versions of FOG (partclone, etc.) are making a big difference too.
From now on I think the bottleneck (if you can call it that) will be endpoint network bandwidth. We’re pretty much saturating 1Gbps links as this test shows, so going concurrent with a 10Gbps link at the server is the next logical step. For me, 10Gbps is overkill since I have mostly 100Mbps endpoints with 1Gbps uplinks, but as I upgrade endpoint switches client speeds will improve a lot. Heck, with 100Mbps endpoints and an SSD on the server I am thinking I can saturate 50-100 clients simultaneously at 100Mbps each; the back-of-envelope math below checks out. Can’t wait to try it.
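(A quick sanity check on that aggregate, as runnable shell arithmetic; the client counts are the ones quoted above and assume every client runs at full line rate:)

clients=100; per_client_mbps=100
echo "$(( clients * per_client_mbps )) Mbps aggregate"   # 10000 Mbps = 10 Gbps
# 50 clients x 100 Mbps = 5 Gbps; either way a single 10 Gbps server
# uplink (plus an SSD sustaining ~1.25 GB/s reads) covers the load.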
-
@Bob-Henderson My color scheme for network cables has been based on length (5’ = white, 7’ blue, 10’ pink, 14’ yellow, etc). Maintenance disconnects everything every summer, waxes the floors or shampoos the carpets, then hooks everything up again, so the different colors on each row of computers is useful for figuring out which cables go where. Not sure I can do away with that for network cables, but I’ll definitely incorporate your idea for everything else.
Good idea to post laminated signs too.
-
Ah, yeah, I remember that game. We ended up doing a disconnect/reconnect system ourselves, to alleviate the issues.
Good thing there are more colors available, huh?
-
@Obi-Jon @Junkhacker It would be interesting to see the speed difference using our new FOG 2.0 transfer protocols / techniques once they’re more ready for testing. Generally speaking, the new approach we’re working on is far more stable in transit (less packet loss) and capable of much higher throughput / network efficiency.
-
@Joe-Schmitt Any documentation or such available for this? I’m curious about how you’re doing it from a technical perspective.
-
@Bob-Henderson There are really 2 factors going into it (keep in mind this is a very simplified / high-level overview of what we’re doing). First is the removal of NFS (it’s still an image storage option if you want, but not how FOS reads the images now). The server is now the one responsible for streaming an image to the client, which means we can use protocols such as HTTP/2 streams, SPDY, on-the-fly packet compression, and so on.
Secondly is making the server and client more intelligent (giving them a basic AI of sorts). FOS will basically be running a custom build of the Linux FOG client to give us access to our existing framework, and real-time server communication. This means the server / client can automatically take care of throttling on congested networks, and of switching on the fly which server (assuming you have a multi-server setup) FOS receives the image from, in case of high workload or if a more optimal server frees up.
TL;DR By making FOS smarter, and upgrading our transport protocols.
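(To make the first point concrete, a rough sketch of what HTTP streaming to the client could look like at the shell level; the server name, image path, and choice of zstd are all hypothetical, and this is not FOG 2.0’s actual code:)

# Instead of mounting an NFS share, the client pulls the image over HTTP
# and decompresses it on the fly, straight into partclone's restore mode:
curl -s http://fogserver/images/host1/d1p2.img | \
  zstd -d | \
  partclone.ext4 -r -s - -O /dev/sda2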
-
@Joe-Schmitt Just dropping NFS in favor of HTTP/SPDY/etc. is huge.