Suggestions for new server
-
My FOG server, which I had originally pieced together as a proof of concept and then ended up just using, has died. I am going to be purchasing a new system for the FOG server to run on. I will be running Ubuntu 14.04 and the latest version of FOG. Other than a large-capacity drive for storage purposes, are there any suggestions for a new machine?
-
I only have one physical FOG server left; virtual machines are great.
If you're set on physical, get good NICs/HDDs.
You won't need much power, but you do need to be able to throw data around. It might help if you posted some info about your image size, how many PCs you're imaging, and how often.
-
Are you planning on building with actual server architecture, or are you going to be using workstation hardware as a server?
-
The volume of imaging varies quite a bit, from 2 a week to a couple hundred a week, depending on the workload.
-
As far as the server itself, I am still unsure if I need my boss to spring for a server or just get a desktop with a large-capacity drive.
-
I'd highly recommend server architecture over a desktop. While a desktop system is capable, the reliability of your FOG server will likely matter more than the drive space itself.
You don't have to spend money like it's going out of style, either.
If you have VM hosts, you may be better off just buying a storage system to hold the images and creating a simple VM for the FOG server, provided you can give it a dedicated NIC.
If you don't want a VM for the FOG server, a basic server with maybe five 300 GB drives in a RAID 5 array, a dual-core processor, and 8 GB of RAM should be plenty powerful without busting the bank. You can likely get around 5 years out of such a system with little to no issues, and you'll likely have a better warranty on it too.
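If you go Linux software RAID rather than a hardware controller, creating that kind of array is basically a one-liner with mdadm. Just a sketch - the device names here are examples and will differ on your hardware:
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf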
With a desktop system, you may expect more issues and a lifespan of only around 3 years.
Just my recommendations. What you do and how you do it is completely up to you.
-
If you're imaging a few hundred a week… maybe it's worth going for something that can push more than 1 GbE.
If you have 10 GbE networking between your switches, then you could bond together a few gigabit NICs to throw even more data down the pipe at once. You will need a good storage system that can handle serving multiple streams at once at full speed.
Also consider storage nodes… If you have multiple buildings or larger IDFs, then a storage node within one of these could help you image more quickly.
The big advantage of VMs is that you can move them to new hardware as required and fit many VMs on one host.
As for RAID… RAID 5/6 has lower performance than RAID 10, and you might need the extra performance depending on what specification you settle on.
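A rough way to check whether the array you end up with can actually feed several unicast sessions at once is a parallel sequential-read test with fio (apt-get install fio). Just an illustration - the file path, size and job count are examples, not anything FOG-specific:
fio --name=imgread --filename=/images/testfile --rw=read --bs=1M --size=4G --numjobs=4 --direct=1 --group_reporting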
-
I'm just going through the same process. I'm currently running an HP N36L MicroServer and imaging around 60 machines a week at one location, but will need to image a lot more than that per week in the near future… I have just ordered a small Fujitsu Xeon E3-1226 based minitower server with 8 GB of RAM, along with a pair of 1 TB SSDs which I'm intending to run in software RAID 1 for the /images storage. The server has two 1 Gb network interfaces, so I've scrounged a redundant Cisco 3750 switch from our networks people and will bond the adapters and run a pair of aggregated ports on the switch.
It came in at around £1000 all in. I'd have it all up and running by now but the supplier sent the kit to the wrong office last Friday… sigh.
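For the software RAID 1 side I'm planning something along these lines (untested, since the kit hasn't arrived yet, and the device names will depend on how the SSDs enumerate):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
mkfs.ext4 /dev/md0
mkdir -p /images
mount /dev/md0 /images
# plus an /etc/fstab entry so it survives a reboot, e.g.:
# /dev/md0  /images  ext4  defaults  0  2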
-
Nice. Hopefully the rest of your network has somewhere you can plug the bonded ports from the switch into.
-
I'm lucky in that I don't have to support multiple locations. The switch will be local to the server (which sits under the bench) - I keep everything completely isolated so that our IT Security peeps can sleep soundly at night.
I have been imaging Windows XP machines for a good few years now and FOG has been superb. At the end of this month I need to start imaging Windows 8.1 Panasonic CF-G1 Toughpads, hence the change to FOG 1.2 and the update to the hardware.
-
[quote="Robin Commander, post: 38241, member: 64"]The server has two 1 Gb network interfaces, so I've scrounged a redundant Cisco 3750 switch from our networks people and will bond the adapters and run a pair of aggregated ports on the switch.[/quote]
I'd love to chat with you sometime about how you set up your NIC bonding.
FWIW, I am running a Dell PowerEdge 2900 that I consolidated down from two of that model that were being retired. It has 4 GB of RAM, a 250 GB RAID 1 array for the OS, and a 2 TB RAID 10 array for image storage. I realize that is an excessive amount of storage for normal use of FOG, but I have been using FOG as a pre-repair image storage system until I can get our Unified System Image implemented.
That being said, the Dell PowerEdge 2900 is no spring chicken anymore, but it performs more than fast enough for our demands. HOWEVER, I would NOT want to run FOG on workstation architecture unless it was set up beautifully. I would want extra cooling and an aftermarket RAID card at least, so that the system has as much redundancy and reliability as possible. Even then, unless you get premium parts, you are still likely to have problems because workstations are just not designed to be on 24/7. So with all of the money you would sink into building a potent, redundant, downright beautiful workstation, you could easily get a basic server that would serve you better and for longer.
-
It really comes down to how you will be using it. My university just bought a refurbished server with 16 GB of RAM, a 130 GB OS RAID, and a 20 TB RAID for storage to dedicate to FOG.
-
The NIC bonding isn't difficult to achieve; I just followed the guide at [url]https://help.ubuntu.com/community/UbuntuBonding[/url]. There's also a handy guide in a similar vein at [url]http://www.beyondvm.com/2014/03/quick-tip-bonding-lacp-and-vlans-in-linux/[/url] which adds some of the Cisco CLI commands.
My /etc/network/interfaces ended up as:
auto lo
iface lo inet loopback

auto eth1
iface eth1 inet manual
bond-master bond0

auto eth2
iface eth2 inet manual
bond-master bond0

auto bond0
iface bond0 inet static
address 192.168.1.60
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 8.8.8.8 8.8.4.4
bond-mode 4
bond-miimon 100
bond-lacp-rate 1
bond-slaves eth1 eth2

… which provides for a single "bond0" virtual network interface that you set up beforehand and refer to as the main interface when installing FOG. If you want even more network bandwidth you could add another two-port card and add those interfaces as eth3 and eth4 into the mix too, but the limit would be the disk throughput I guess - and the network infrastructure. As everything is isolated and local, the latter isn't an issue for me.
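Once the bond comes up you can sanity-check it via the Linux bonding driver's status file, which should list both slaves as active along with the LACP aggregator details:
cat /proc/net/bonding/bond0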
Of course you do need a LAN switch that supports LACP (a.k.a. 802.3ad). On my sandpit system at home I have another N36L MicroServer with a two-port PCIe (Intel) network card running into ports 1 & 2 on a D-Link DGS-3324SR switch. The latter was pretty easy to set up for LACP from its web interface. For the Cisco 3750 I may need to do some reading.
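Based on the second guide above, the Cisco side will probably look something like this - untested on my 3750 as yet, and the port range and port-channel number are just placeholders:
interface Port-channel1
 switchport mode access
!
interface range GigabitEthernet1/0/1 - 2
 switchport mode access
 channel-group 1 mode active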
I'm very interested to see what performance the SSDs will achieve on my new box. I figured that, as the vast majority of the time I'll be reading from them when imaging back to the G1 tablets, I shouldn't see problems related to TRIM settings. Time will tell!
HTH
Robin