Imaging transfer rates - VM vs Physical machine
-
Oh, and that’s with the GREATLY improved upload/download speeds that FOG Trunk offers.
1.2.0 is significantly slower.
-
Hmmmm. I’ve been wondering if I should attempt to upgrade to Trunk for a while now, ever since I encountered the hostnamechanger problem in 1.2.0 months ago.
Now that you mention it, I think my main server is only one core and like 2GB of RAM. I didn’t think I would need much horsepower when I built it.
I believe my fastest node is a recent gen i3, so technically 4 threads (2 cores with Hyper-Threading). I really thought disk speed and network latency would be the main bottlenecks.
-
@Neil-Underwood said:
Hmmmm. I’ve been wondering if I should attempt to upgrade to Trunk for a while now, ever since I encountered the hostnamechanger problem in 1.2.0 months ago.
Now that you mention it, I think my main server is only one core and like 2GB of RAM. I didn’t think I would need much horsepower when I built it.
I believe my fastest node is a recent gen i3, so technically 4 threads (2 cores with Hyper-Threading). I really thought disk speed and network latency would be the main bottlenecks.
Well, I never even tried imaging with the Hyper-V build that accidentally had only 1 core. The web UI was unacceptably slow. I just tore it down and rebuilt it with 4 cores like I had in the past.
Oh, and my Hyper-V FOG server has 4 GB of non-dynamic RAM and 500GB of HDD space assigned to it. The disks that FOG uses in the server are SAS12, I think, in a RAID 1 configuration.
You’ll get a ton of other benefits from virtualizing FOG, and it would make using FOG Trunk not so scary.
Start off with 1.2.0. Migrate your images and DB. Get it working, test, test, test. Snapshot it.
Then, install FOG Trunk. Get it working. Snapshot it. KEEP YOUR SNAPSHOTS.
In the future, if an upgrade to a newer Trunk version goes wrong / doesn’t work, revert to the previous snapshot. Easy.
Personally, I take a snapshot EVERY Friday and before EVERY upgrade, and I keep several past snapshots just in case. I also regularly export my DB and images every Friday and before every upgrade, and I label my DB exports with revision and date.
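If you want to script that weekly export, here’s a minimal sketch in Python (the database name “fog” matches a default install; the revision label and the credentials are placeholders to adjust for your setup):

```python
#!/usr/bin/env python3
# Minimal sketch: dump the FOG database to a file labeled with
# revision and date, as described above. Assumes mysqldump is on the
# PATH and the default database name "fog"; the user and revision
# here are hypothetical -- adjust to your install.
import subprocess
from datetime import date

REVISION = "3488"  # hypothetical: whatever revision you're running
outfile = f"fog_db_r{REVISION}_{date.today().isoformat()}.sql"

with open(outfile, "w") as f:
    subprocess.run(["mysqldump", "-u", "root", "fog"], stdout=f, check=True)
print(f"Exported to {outfile}")
```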
-
I had the opportunity today to unicast to 24 machines at once.
The bottleneck is DEFINITELY the network then! The bandwidth chart was reading 2Gbps and was just a flat line at the top of the graph…
I looked at CPU utilization on the FOG vm, it was 1%, and then 0%, and then 1% lol.
Not sure how accurate Hyper-V’s reporting is for CPU usage, but that’s what it said.
-
@Neil-Underwood I host my FOG server on an ESXi VM, and this is the kind of deployment speed I get:
https://www.youtube.com/watch?v=gHNPTmlrccM
I can’t imagine it being much faster than that.
(SVN 3488)
-
Unicasting to 28 machines at once…
-
@Junkhacker This is the kind of speed I get when I deploy an image at one of my remote sites through a node. ~6m30s for a 40GB image.
@Wayne-Workman Nice. I upgraded my server from 1 core to 4 and increased RAM to 4096 MB dedicated. I did notice the web UI was a little snappier. It didn’t seem to affect transfer speeds in the slightest, though.
The PCs I’m imaging have two Gigabit switches between them and the FOG server, and about 30 feet of cable, max. The main switch, however, is an HP ProCurve (shudder). I’m starting to think it may be the culprit rather than the VM. I have yet to investigate, simply due to the awful interface on that thing. I also think that switch is the reason I can’t multicast. Anyone have first-hand experience with FOG & ProCurve switches?
-
I have first-hand experience with under-powered switches and multicasting with Ghost… it sucks.
The switch needs the horsepower (CPU) to replicate the packets to all ports. Cheaper, under-powered switches perform just fine under normal usage, but when you apply the biggest jobs (multicasting at 1Gbps), they suffer.
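For anyone curious what’s happening under the hood: with multicast, the sender emits exactly one copy of each datagram to a group address, and it’s the switch’s job to replicate that datagram out every subscribed port. A toy sketch (the group address and port below are arbitrary):

```python
# Toy illustration of multicast fan-out: the sender puts ONE datagram
# on the wire addressed to a group, and the switch must replicate it
# to every subscribed port. Group address and port are arbitrary.
import socket

GROUP, PORT = "239.255.0.1", 5000
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
sock.sendto(b"one send, many receivers", (GROUP, PORT))
```

One send from the server, N copies out of the switch. That replication work is exactly where a weak switch CPU falls over.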
-
That must be the issue then. I get ~1.5GB/min when imaging one PC through this switch. I only ever image 8 PCs at a time. When I have 8 images deploying (unicast) from the FOG server through this ProCurve switch, each successive deployment transfers progressively slower, until I’m down to about 400-500MB/min. Just awful. It takes about an hour for 8 PCs.
Originally I was pleased with that speed, because my company’s previous method of imaging was doing it manually with a WinPE disc over the network, which took even longer. I see now that this should be moving much quicker. Thanks for the feedback.
-
Ugh. I just discovered the problem. It’s the switch. I thought this whole time that the switch was GB, because it says Gigabit. After poking around, I see that only the uplink/downlink ports are GB. Everything else is 10/100. Kinda hard to multicast through a 10/100 switch. I’ll have to request these switches be replaced.
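For reference, here’s the rough math on why a 10/100 port is a hard ceiling (keep in mind imaging rates are usually reported against the uncompressed data, so the GB/min figure on screen can read higher than the raw link rate):

```python
# Back-of-envelope ceiling for a 100 Mbit/s edge port.
link_mbps = 100
mb_per_s = link_mbps / 8               # 12.5 MB/s raw, before overhead
gb_per_min = mb_per_s * 60 / 1000      # ~0.75 GB/min on the wire
print(f"~{gb_per_min:.2f} GB/min max per 10/100 port")
```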
So it looks like my VM is probably just fine. I just need a doggone switch. Maybe I’ll just put a small 5-port GB switch in at the uplink? We just bought about a dozen Netgear 5-port GB switches. I’ll try it out tomorrow and see.
-
So I moved the server over to a Gb switch and I’ve seen an increase in local deployment speed, but not by much. Transfer rate increased from ~1.5GB/min to ~2.5GB/min. I really think it’s just this weak old server with a Xeon 5130 2GHz CPU that’s holding me back. Obviously I got a little improvement from the new switch, but that’s still way slower than it should be. I think I’m going to migrate this VM to one of our new Dell R420’s. I’m concerned about migrating from VBox to Hyper-V, though. Any pitfalls I should know about?
-
@Neil-Underwood said:
So I moved the server over to a Gb switch and I’ve seen an increase in local deployment speed, but not by much. Transfer rate increased from ~1.5GB/min to ~2.5GB/min. I really think it’s just this weak old server with a Xeon 5130 2GHz CPU that’s holding me back. Obviously I got a little improvement from the new switch, but that’s still way slower than it should be. I think I’m going to migrate this VM to one of our new Dell R420’s. I’m concerned about migrating from VBox to Hyper-V, though. Any pitfalls I should know about?
The new FOG server should get the SAME static IP.
Export your database, AND export your hosts. FOG Configuration -> Configuration Save -> Export. Host Management -> Export Hosts -> Export
And, copy your images outta there. You can use NFS with another Linux machine, or use Samba. (I recommend using Samba.)
I can walk you through Samba setup via chat, it’s easy.
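If you’d rather not set up a share at all, a straight rsync pull over SSH also works. A sketch, with a hypothetical old-server address and FOG’s default /images path:

```python
#!/usr/bin/env python3
# Sketch: pull the image store off the old FOG server over SSH with
# rsync. OLD_SERVER is a hypothetical address; /images is FOG's
# default image store path.
import subprocess

OLD_SERVER = "192.168.1.10"
subprocess.run(
    ["rsync", "-a", "--progress",
     f"root@{OLD_SERVER}:/images/", "/images/"],
    check=True,
)
```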
-
Thanks. I appreciate that. I was referring more to the conversion process from VDI to VHD format though. I intend to use the built-in “VBoxManage clonehd” feature to do the conversion, but it’s a 220GB file and it’s going to take a few hours to complete. Just want to make sure it succeeds on the first try.
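For what it’s worth, the conversion itself should be a one-liner. Wrapped in a script, it would look something like this (the filenames are placeholders, and newer VirtualBox versions rename “clonehd” to “clonemedium”):

```python
# Sketch of the VDI -> VHD conversion step via VBoxManage; the file
# names are placeholders. Newer VirtualBox versions call this
# subcommand "clonemedium" instead of "clonehd".
import subprocess

subprocess.run(
    ["VBoxManage", "clonehd", "fogserver.vdi", "fogserver.vhd",
     "--format", "VHD"],
    check=True,
)
```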
I already have a Hyper-V host with several other VMs on it that we just built as part of our domain migration. If I can just do the conversion, fire up the new Hyper-V VM, and pick right back up where I left off, that would be great. I just worry about the differences in hardware virtualization between the two and whether or not the guest OS (Debian 7.8 in this case) will be tolerant enough.
Truthfully, I would like to migrate away from Debian to Ubuntu. All my nodes are running Linux Mint 17.1 (Ubuntu 14.04 base) and I’d like to make them all uniform. I started with Debian because it’s what I’ve always used for its stability, but the Debian repositories are just too far behind the latest software releases, and lately I’ve found myself having to jump through too many hoops to make things “just work” with it. That’s fine for fun at home, but not at work where I have actual deadlines.
-
@Neil-Underwood You might experience equal difficulty with Ubuntu, but in different areas. Tom explains it best.
I’d recommend an overhaul to Red Hat, CentOS, or Fedora. But of course, the choice is up to you.
-
My two cents: I would suggest Debian or Fedora, and using Hyper-V instead of VirtualBox. Also, ensure the drives on your server are fast, and/or your RAID array is properly set up for fast sequential reads. And if you have AV on the host, make sure it’s told to leave your VHD files alone.
-
I just discovered that my RAID battery is failing. I’m definitely going to have to migrate this to our Hyper-V host.
I’m curious about the Fedora/CentOS recommendations. I’m definitely more comfortable with Debian/Ubuntu, but I’m not opposed to switching. I cut my teeth on Red Hat. Are there noticeable benefits, or is it just personal preference for you guys?
-
@Neil-Underwood You’ll find solid RAID support in CentOS. FOG was originally developed on Fedora. Both of these are based on Red Hat… and Red Hat has longevity and a large ongoing support base.
Beyond that, I “learned” Linux (not really) using Fedora in college, and naturally chose Fedora when I set out to start using FOG. I run Fedora Server for FOG (naturally) both at home and work, and I run Fedora Workstation on my personal computers at home that I use for everyday things.
-
Just wanted to share that I went ahead and moved the server over to a physical box temporarily. The old Dell server was starting to fail left and right. I went with CentOS 7 & FOG from git on a Dell OptiPlex 3020 for the interim, and wow, it’s fast. Uploading a 40GB image takes less than 10 minutes. Deploying the same image takes about 5 minutes. Backing up and importing the database was relatively painless. A few quirks to work out, but they’re minor.
-
bumping this thread…