Specs
-
FOG uses network bandwidth and disk throughput, so the more of each, the better. Give FOG at least a 1Gb NIC and use storage that is RAID 5 or better. Most of FOG’s disk workload is reads: you usually upload from one computer at a time but deploy to multiple clients, so a disk subsystem optimized for read performance is best. If you use a storage system that is already RAID before being presented to the VM controller, then you should be fine on disks.
As far as RAM is concerned, FOG is not a major user of it. 2GB should be fine for most operations. Maybe someone more familiar with multicast can chime in about its RAM usage, but for unicasting up to 30 clients at 100Mb per client, I have no issues with RAM usage.
-
The server consumes most of its CPU and RAM during multicast, because it’s decompressing the image file before pushing it out, which threads nicely and uses a fair amount of buffering. I have dual-core VMs with 4GB of RAM for storage nodes, and they get pretty close to maxing out their resources while they’re multicasting.
-
Would SSDs be better than SATA III HDDs?
I was looking at the Samsung 840 Pro vs the Western Digital VelociRaptor WD1000DHTZ.
We would most likely do RAID 5 or 6. Is there an advantage either way?
-
SSDs are generally faster than HDDs, but there is a point of diminishing returns on performance. I believe the max theoretical throughput on a 1Gb NIC is 7.5 GB/min (1 Gb/s = 1000 Mb/s; 8 bits = 1 byte; 1000 Mb / 8 = 125 MB; 125 MB/s * 60 s = 7.5 GB/min). So once your server is able to read the image file at that speed, and your client is able to write the image at that speed, you’ll see the peak performance without upgrading to 10Gb networking, which is pretty rare outside of the data center.
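That back-of-the-envelope math can be checked with a couple of lines (nothing here is FOG-specific, just unit conversion; protocol overhead is ignored):

```python
# Theoretical max throughput of a 1 Gb/s NIC, ignoring protocol overhead.
link_mb_per_sec = 1000 / 8            # 1 Gb/s = 1000 Mb/s; 8 bits per byte -> 125 MB/s
gb_per_min = link_mb_per_sec * 60 / 1000  # 125 MB/s * 60 s, converted to GB

print(link_mb_per_sec)  # 125.0 MB/s
print(gb_per_min)       # 7.5 GB/min
```

Real-world numbers come in below that once you account for TCP/IP overhead, but it is a reasonable ceiling for sizing the disk subsystem against.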
-
SSDs allow you to use fewer disks for the same performance. You can replace roughly 12 HDDs with 5 SSDs. Fewer disks means a lower chance of array failure, as long as you keep enough disks for redundancy.
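To illustrate the "fewer disks, same performance" point, here is a rough sketch. The per-disk sequential-read figures are assumptions for illustration, not benchmarks, and RAID 5 read scaling is approximated as linear in disk count (reads can stripe across all members):

```python
# Assumed sustained sequential read speeds (MB/s) -- illustrative, not measured.
HDD_READ = 150   # typical 7200rpm SATA HDD (assumption)
SSD_READ = 500   # typical SATA3 SSD (assumption)

def array_read_mb_s(per_disk_mb_s, n_disks):
    # Rough RAID 5 approximation: aggregate read throughput scales with
    # member count, since reads stripe across all disks.
    return per_disk_mb_s * n_disks

print(array_read_mb_s(HDD_READ, 12))  # 1800 MB/s from 12 HDDs
print(array_read_mb_s(SSD_READ, 5))   # 2500 MB/s from only 5 SSDs
```

Either array far exceeds what a 1Gb NIC can push anyway, which is why the diminishing-returns point above matters.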
-
If we have a 10Gb backbone, would FOG work with a fiber card as long as it is natively supported by Ubuntu?
-
I’m going to assume that as long as the OS can drive the device, FOG can make use of it.
-
Thank you for all your help!
-
If we have a 10Gb Juniper switch and connection from our hub site, would it be worthwhile to spend the $500-1000 on a 10Gb NIC?
-
Unless you’re in a very demanding environment, I feel like it would be wasted. If it’s just a matter of load balancing, then I’d suggest setting up storage nodes. There are very few circumstances in which I would expect 10Gb to be the only option.
In the following scenario your bottleneck is probably the HDD:
[COLOR=#999999]HDD[/COLOR] [COLOR=#ff0000]–SATA3–[/COLOR] [B]Server[/B] [COLOR=#339966]–1Gb–[/COLOR] [B]Switch[/B] [COLOR=#339966]–1Gb–[/COLOR] [B]Client[/B] [COLOR=#ff0000]–SATA3–[/COLOR] [COLOR=#999999]HDD[/COLOR]
In the following scenario your bottleneck is probably the 1Gb link:
[COLOR=#800080]SSD[/COLOR] [COLOR=#ff0000]–SATA3–[/COLOR] [B]Server[/B] [COLOR=#339966]–1Gb–[/COLOR] [B]Switch[/B] [COLOR=#339966]–1Gb–[/COLOR] [B]Client[/B] [COLOR=#ff0000]–SATA3–[/COLOR] [COLOR=#800080]SSD[/COLOR]
In the following scenario your bottleneck is probably still the 1Gb link:
[COLOR=#800080]SSD[/COLOR] [COLOR=#ff0000]–SATA3–[/COLOR] [B]Server[/B] [COLOR=#0000ff]–10Gb–[/COLOR] [B]Switch[/B] [COLOR=#339966]–1Gb–[/COLOR] [B]Client[/B] [COLOR=#ff0000]–SATA3–[/COLOR] [COLOR=#800080]SSD[/COLOR]
In the following scenario your bottleneck is probably the SSD, although you might have reached the 6Gb limit of SATA3:
[COLOR=#800080]SSD[/COLOR] [COLOR=#ff0000]–SATA3–[/COLOR] [B]Server[/B] [COLOR=#0000ff]–10Gb–[/COLOR] [B]Switch[/B] [COLOR=#0000ff]–10Gb–[/COLOR] [B]Client[/B] [COLOR=#ff0000]–SATA3–[/COLOR] [COLOR=#800080]SSD[/COLOR]
Without upgrading the network connections all the way to the client, the only performance increase you would see from using 10Gb on the server side is for concurrent imaging tasks, which you could easily get by using storage nodes to load-balance the tasks.
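The chains above boil down to "slowest link wins": throughput of the whole path is the minimum of its segments. A quick sketch of that reasoning (the HDD and SSD sustained-read figures are assumptions for illustration, not benchmarks):

```python
# Segment speeds in MB/s. Bus/link limits are straight unit conversions;
# the disk figures are rough assumptions.
SATA3   = 6000 / 8    # 6 Gb/s SATA3 bus -> 750 MB/s
GBE     = 1000 / 8    # 1 Gb/s Ethernet  -> 125 MB/s
TEN_GBE = 10000 / 8   # 10 Gb/s Ethernet -> 1250 MB/s
HDD, SSD = 150, 500   # assumed sustained reads

def bottleneck(*segments):
    # End-to-end throughput is capped by the slowest segment in the chain.
    return min(segments)

# SSD server with a 10Gb uplink, but the client is still on 1Gb:
print(bottleneck(SSD, SATA3, TEN_GBE, GBE, SATA3, SSD))  # 125.0 -> the 1Gb hop
```

Swap the segment values around to model the other three scenarios; only when every hop exceeds the disks do the disks become the limit again.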