Suggestions for new server
-
The volume of imaging varies quite a bit, from 2 a week to a couple hundred a week, depending on the workload.
-
As far as the server itself, I am still unsure if I need my boss to spring for a server, or just get a desktop with a large-capacity drive.
-
I'd highly recommend a server architecture over a desktop. While a desktop system is capable, the reliability of your FOG server will likely matter more to you than the drive space itself.
You don't have to spend money like it's going out of style, either.
If you have VM systems, you may be better off just buying a storage system to hold the images and creating a simple VM for the FOG server, provided you can give it a dedicated NIC.
If you don't want a VM for the FOG server, a basic server with maybe five 300GB drives in a RAID 5 array, a dual-core processor, and maybe 8GB of RAM should be plenty powerful without breaking the bank. You can likely get around 5 years out of such a system with little to no issues, and you'll likely have a better warranty on it too.
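If you go with Linux software RAID rather than a hardware controller, creating that sort of array is quick with mdadm (a rough sketch; the device names are just examples for the five drives):

# five drives in RAID 5: four drives' worth of capacity, one drive's worth of parity
sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo mkfs.ext4 /dev/md0

With five 300GB drives that works out to roughly 1.2TB usable, which is plenty for an image store.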
With a desktop system, you may expect more issues and a lifespan of only around 3 years.
Just my recommendations. What you do and how you do it is completely up to you.
-
If you're imaging a few hundred a week… maybe it's worth going for something that can push more than 1GbE.
If you have 10GbE networking between your switches, then you could bond together a few gigabit NICs to throw even more data down the pipe at once. You will need a good storage system that can handle serving multiple streams at once at full speed.
Also consider storage nodes… If you have multiple buildings/larger IDFs then maybe a storage node within one of these could help you image more quickly.
The big advantage of VMs is you can move them to new hardware as required and fit many VMs on one host.
As for RAID… RAID 5/6 has lower performance than RAID 10, and you might need the extra performance depending on what specification you settle on.
-
I'm just going through the same process. Currently running an HP N36L MicroServer and imaging around 60 machines a week at one location, but I will need to image a lot more than that per week in the near future… I have just ordered a small Fujitsu Xeon E3-1226 based minitower server with 8GB RAM, along with a pair of 1TB SSDs which I'm intending to run in software RAID 1 for the /images storage. The server has two 1GbE network interfaces, so I've scrounged a redundant Cisco 3750 switch from our networks people and will bond the adapters and run a pair of aggregated ports on the switch.
It came in at around £1000 all in. I'd have it all up and running by now, but the supplier sent the kit to the wrong office last Friday… sigh.
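For the record, the plan for the /images mirror is roughly the following (a sketch; the device names are examples and will depend on the box):

# mirror the two SSDs
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# filesystem, mount point, and a permanent mount for the image store
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /images
echo '/dev/md0 /images ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /images
# watch the initial sync
cat /proc/mdstat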
-
Nice, hopefully the rest of your network has somewhere you can plug the bonded ports into from the switch.
-
I'm lucky in that I don't have to support multiple locations. The switch will be local to the server (which sits under the bench) - I keep everything completely isolated so that our IT Security peeps can sleep soundly at night.
I have been imaging Windows XP machines for a good few years now and FOG has been superb. At the end of this month I need to be imaging Windows 8.1 Panasonic CF-G1 Toughpads, hence the change to FOG 1.2 and the update to the hardware.
-
[quote=“Robin Commander, post: 38241, member: 64”]The server has two 1GbE network interfaces, so I've scrounged a redundant Cisco 3750 switch from our networks people and will bond the adapters and run a pair of aggregated ports on the switch.[/quote]
I’d love to chat with you sometime about how you set up your NIC bonding.
FWIW, I am running a Dell PowerEdge 2900 that I consolidated down from two of that model that were being retired. It has 4GB of RAM, a 250GB RAID 1 array for the OS, and a 2TB RAID 10 array for image storage. I realize that is an excessive amount of storage for normal FOG use, but I have been using FOG as a pre-repair image storage system until I can get our Unified System Image implemented.
That being said, the Dell PowerEdge 2900 is no spring chicken anymore, but it performs more than fast enough for our demands. HOWEVER, I would NOT want to run FOG on a workstation architecture unless it was set up beautifully. I would want extra cooling and an aftermarket RAID card at the very least, so that the system has as much redundancy and reliability as possible. Even then, unless you get premium parts, you are still likely to have problems, because workstations just are not designed to be on 24/7. So with all of the money you would sink into building a potent, redundant, downright beautiful workstation, you could easily get a basic server that would serve you better and for longer.
-
It really comes down to how you will be using it. My university just bought a refurbished server with 16GB RAM, a 130GB OS RAID, and a 20TB RAID for storage, dedicated to FOG.
-
The NIC bonding isn't difficult to achieve; I just followed the guide at [url]https://help.ubuntu.com/community/UbuntuBonding[/url]. There's also a handy guide in a similar vein at [url]http://www.beyondvm.com/2014/03/quick-tip-bonding-lacp-and-vlans-in-linux/[/url] which adds some of the Cisco CLI commands.
My /etc/network/interfaces ended up as:

auto lo
iface lo inet loopback

auto eth1
iface eth1 inet manual
bond-master bond0

auto eth2
iface eth2 inet manual
bond-master bond0

auto bond0
iface bond0 inet static
address 192.168.1.60
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 8.8.8.8 8.8.4.4
bond-mode 4
bond-miimon 100
bond-lacp-rate 1
bond-slaves eth1 eth2

… which provides a single 'bond0' virtual network interface that you set up beforehand and refer to as the main interface when installing FOG. If you want even more network bandwidth you could add another two-port card and bring eth3 and eth4 into the mix too, but the limit would then be the disk throughput I guess - and the network infrastructure. As everything is isolated and local, the latter isn't an issue for me.
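Once it's up, you can sanity-check the bond with:

cat /proc/net/bonding/bond0

… which should report the bonding mode as IEEE 802.3ad dynamic link aggregation and list eth1 and eth2 as slaves.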
Of course you do need a LAN switch that supports LACP (a.k.a. 802.3ad). On my sandpit system at home I have another N36L MicroServer with a two-port PCIe (Intel) network card running into ports 1 & 2 on a D-Link DGS-3324SR switch. The latter was pretty easy to set up for LACP from its web interface. For the Cisco 3750 I may need to do some reading.
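From what I've read so far, the switch side on the 3750 should be something along these lines (untested by me yet, and the port and channel numbers are just examples):

Switch# configure terminal
Switch(config)# interface range GigabitEthernet1/0/1 - 2
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# channel-group 1 mode active
Switch(config-if-range)# end

The 'mode active' bit is what selects LACP rather than Cisco's own PAgP.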
I'm very interested to see what performance the SSDs will achieve on my new box. I figured that as the vast majority of the time I'll be reading from them when imaging back to the G1 tablets, I shouldn't see problems related to TRIM settings. Time will tell!
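If it does turn out to matter, my understanding is you can just trim the filesystem manually (or from cron) with something like:

sudo fstrim -v /images

… where -v makes it report how much it trimmed.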
HTH
Robin