FOG and SSDs
We're working on some upgrades to our FOG servers to increase performance and streamline maintenance. We currently run FOG on Ubuntu 10.04 desktops with a single hard drive that houses everything. I’ve had the thought of splitting the FOG server storage and the image storage onto two devices.
Within the server, the images would be stored on a standard 1TB hard drive, giving plenty of space for images and temporary transfers. I know /images can live on a different drive, so I’m not too worried about that part.
Our plan is to install Ubuntu/FOG on a 60GB SSD to really increase performance, especially when network booting one machine while another is already being imaged. This will also let us back up the OS separately without having to worry about backing up the images along with it.
My question to the FOG community: are there any snags with running an SSD under Ubuntu 10.04 that I should be aware of? I would assume the relatively light I/O of a sparingly used Ubuntu install would give the SSD decent longevity.
Any advice is appreciated.
In case anybody goes digging through this, I thought I’d share my experiences.
We finally got around to our upgrade and performed these steps:
Used the server-management program Webmin to make filesystem backups (in tar format) of /tftpboot, /var/www, and our user’s home directory, since some SSH settings and such were stored there. Also used Webmin to export the MySQL database to a .sql file.
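In case anyone wants to skip Webmin, the same backups can be made from a terminal with mysqldump and tar. This is just a sketch; the /home/fogadmin path is an assumption (use whatever your user’s home directory is), and the MySQL credentials are whatever you set during the FOG install:

[CODE]sudo mysqldump -u root -p fog > fog.sql
sudo tar -czf fog-backup.tar.gz /tftpboot /var/www /home/fogadmin[/CODE]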
Did a fresh Ubuntu 10.04 install on the SSD. Performed the steps for optimization listed here: [url]http://www.howtogeek.com/62761/how-to-tweak-your-ssd-in-ubuntu-for-better-performance/[/url]
Installed new 1tb hard drive and mounted as /images, outlined here: [url]http://www.fogproject.org/wiki/index.php/Moving_your_images_directory/Adding_Storage_to_the_Images_directory[/url]
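For reference, making that mount permanent usually boils down to a single /etc/fstab line. The device name and filesystem below are assumptions (check yours with sudo blkid before copying this):

[CODE]/dev/sdb1   /images   ext4   defaults   0   2[/CODE]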
Switched back to original hard drive, remount 1tb drive and transferred image files. (This step could be performed so many ways that I’ll leave it to you)
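One way to do the transfer, assuming the old drive is mounted at /mnt/olddrive (a mount point I’m making up for the example), is rsync, since it preserves permissions and can be resumed if interrupted:

[CODE]sudo rsync -a --progress /mnt/olddrive/images/ /images/[/CODE]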
Switched to the new SSD again and did a fresh install of FOG. Once it finished, installed Webmin to restore the filesystem backups of /tftpboot and /var/www.
Then performed this command in terminal to restore MySQL database:
[CODE]sudo mysql fog < fog.sql[/CODE]
That assumes your exported MySQL file was named fog.sql.
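Likewise, if you made the tar backups by hand instead of through Webmin, they can be restored from a terminal. This sketch assumes a gzipped tar of /tftpboot and /var/www named fog-backup.tar.gz (a name I’m making up) that was created with absolute paths:

[CODE]sudo tar -xzf fog-backup.tar.gz -C /[/CODE]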
All in all, the transition was smooth once I learned the right order in which to import things. The performance difference was noticeable: the system now boots in 6 seconds flat. That speed increase seemed to cause an issue in tftpd-hpa (which drives the FOG TFTP server), in that the service would try to start before eth0 was ready (i.e., had an IP address) and would fail out. Manually starting the service afterward showed no problems. This also seems to be part of a bug in a newer version of tftpd-hpa, as none of my other 10.04 servers have this issue.
It took me several hours of reading, but I finally figured out a fix (not sure if it’s a good fix, but it worked for me).
From the terminal:
[CODE]sudo nano /etc/init/tftpd-hpa.conf[/CODE]
Add this comment after the line beginning with “author” (line 4 for me):
[CODE]#take a nap so eth0 can wake up[/CODE]
Then finally, change the line beginning with “start on” (line 11 after you add the comment above) to resemble this:
[CODE]start on (filesystem and net-device-up IFACE!=bond0.80)[/CODE]
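Pieced together, the edited parts of /etc/init/tftpd-hpa.conf end up looking something like the excerpt below. The surrounding lines vary by package version, so the elided parts are left as dots, and bond0.80 was just the interface on my box:

[CODE]author ...

#take a nap so eth0 can wake up
...

start on (filesystem and net-device-up IFACE!=bond0.80)[/CODE]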
After a reboot, check whether the service auto-started. If so, you’re all set.
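On 10.04, tftpd-hpa is managed by upstart, so one way to check it (and start it by hand if it didn’t come up) is:

[CODE]sudo status tftpd-hpa
sudo start tftpd-hpa[/CODE]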
There is probably a better way to have moved everything around, but I’m kind of new at this kind of stuff; this is just what worked for me. Hopefully it’s helpful to you.
We let go of version 9.X when Ubuntu stopped providing updates. 10.04 is actually very good and runs everything smoothly.
I’m still running version 9.X. When it comes to the FOG install, I found that once the software was installed, upgrading the server (OS updates) sometimes caused problems, so I had to roll back.
Each subsequent version of FOG works great with the newest releases; we just run into problems when the OS is newer than the FOG back-end software.
I thought virtualizing really killed the I/O throughput?
Are you running a version of Ubuntu higher than 10.04? I’ve been wondering if I should make plans to upgrade the Ubuntu version as well.
I’ve got Ubuntu 64-bit Server running on a 128GB SSD and have had good results with the I/O and network speeds. Just make sure that the NIC you end up with in the server is a quality one (not Broadcom, if you can avoid it). Quality NICs really are a night-and-day difference on Linux servers, in my opinion.
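If you’re not sure what NIC actually ended up in the box, a quick way to see the hardware and which driver it’s using (assuming the interface came up as eth0) is:

[CODE]lspci | grep -i ethernet
sudo ethtool -i eth0[/CODE]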
If you virtualized your server setup, you could even use NIC teaming to really push your images out (it’s easy to set up in ESX as well). We’ve had reports of users on the forums here running Fedora with good results - [url]http://sourceforge.net/projects/freeghost/forums/forum/716419/topic/4851544[/url]
As far as snags I haven’t run into any.