Moving FOG's /images files off the root partition
This last weekend I created a new production FOG server, and I thought I would document how I moved the images off the root partition and onto a disk of their own. Personally, I don't like storing any dynamic or user data on the root partition. Too many times in the past I've seen the root partition fill up and the *nix operating system break hard, where the only way to repair it was to rebuild the OS. Putting user files on the root partition is basically the same as putting users' group and home drives on the system drive of a Windows file server.
For this new production server I created a new VM with 3GB of RAM and a 16GB virtual hard drive. For this project a 16GB virtual hard drive is more than sufficient for the OS and the FOG files, but it isn't big enough for the typical image files. We'll fix that a bit later. While I'm going to cover the process for a virtual machine, the process is very similar for a physical server.
So at this point I have a single VM with 1 vCPU, 3GB of RAM, and a 16GB virtual hard drive. The remainder of this post is the process I used to set up this new production server.
- Install CentOS 7 using the minimal vanilla install for the OS
- Upgrade the OS with all of the latest fixes by issuing the following from the command prompt
yum upgrade -y
- Add a new virtual hard drive to the server. Size this new vmdk file appropriately for the number of images you plan on capturing. For my installation I made this new vmdk file 100GB in size.
- Reboot the server
- Now we need to locate this new drive. Since this is the second hard drive in the system it will probably be defined as /dev/sdb. Issue the lsblk command to show you the drive structure.
lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 298.1G  0 disk
├─sda1   8:1    0 294.3G  0 part /
├─sda2   8:2    0     1K  0 part
└─sda5   8:5    0   3.8G  0 part [SWAP]
sdb      8:16   0 100.0G  0 disk
Now that we know our 100GB disk is sdb, let's get down to business prepping the disk for our FOG server.
- We'll use LVM for this volume, but let's first create a partition on this disk that fills the entire disk.
fdisk /dev/sdb
n
p
<press enter 3 times>
w
- With the physical partition created, let's set up the LVM environment. First we'll use the pvcreate command to initialize the physical volume. Remember I only created one partition on the disk, so it should be referenced by /dev/sdb1
pvcreate /dev/sdb1
- Next we'll create the volume group. A volume group can range from a single partition to several disks with many partitions. For this example I'll only attach a single partition to this volume group. But the idea is you can expand your volume group (i.e. add additional storage space to your server) by adding more physical space to this volume group. In the command below I'm creating a volume group named fog_disk and attaching the physical partition /dev/sdb1 to that volume group.
vgcreate fog_disk /dev/sdb1
- Now I’m going to create a logical volume that will be linked to the volume group I just created. You can see in the command below I’m creating a logical volume fog_files that will use 100% of the free space on the fog_disk volume group.
lvcreate -l 100%FREE -n fog_files fog_disk
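As an aside, the /dev/mapper device name used in the later steps is just the volume group and logical volume names joined with a dash (device-mapper escapes any dash that is already inside a name by doubling it). A minimal sketch, where mapper_name is a hypothetical helper function, not an LVM tool:

```shell
# Hypothetical helper: build the /dev/mapper path from a VG and an LV name.
# device-mapper doubles any dash that appears inside either name.
mapper_name() {
  local vg="${1//-/--}" lv="${2//-/--}"
  printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

mapper_name fog_disk fog_files   # prints /dev/mapper/fog_disk-fog_files
mapper_name my-vg data           # prints /dev/mapper/my--vg-data
```

This is why the fstab entry below refers to /dev/mapper/fog_disk-fog_files.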
- With the logical volume created we can go and format the disk. In this case I'm formatting the disk with the ext4 filesystem. You'll notice that the device name is the combination of the volume group and the logical volume name.
mkfs.ext4 /dev/mapper/fog_disk-fog_files
- After formatting the LVM drive we need to attach the new drive to a directory off our root file system. In my situation I want to attach this new disk to the /opt directory. FOG stores some of its files in /opt/fog and the images in /images (off the root partition). When I get done I want to move the /images files to /opt/fog/images so all of FOG's dynamic and user files are on the same physical disk. If we capture more files than our 100GB vmdk drive can hold, it will just fill up that drive. The upload will fail, but it will not take down the OS, since the data files are on the /opt partition and not the root partition. It's a little complicated for the MS Windows folks to grasp, but trust me, it works well this way. To have this partition connected (mounted) each time the OS boots, we need to append the following to the /etc/fstab file. This text needs to be entered on a new line in the fstab file.
/dev/mapper/fog_disk-fog_files /opt ext4 defaults 0 1
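If you want a quick sanity check that the entry is well-formed before mounting it, a valid fstab line has exactly six whitespace-separated fields (device, mount point, filesystem type, options, dump, pass). A small sketch against a scratch file, so nothing touches the real /etc/fstab:

```shell
# Append the new entry to a scratch file and count its fields; fstab lines
# need exactly 6 (device, mountpoint, type, options, dump, pass).
scratch=$(mktemp)
printf '%s\n' '/dev/mapper/fog_disk-fog_files /opt ext4 defaults 0 1' >>"$scratch"
fields=$(awk 'END { print NF }' "$scratch")
echo "fields: $fields"
rm -f "$scratch"
```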
- With the fstab file updated, let's tell the OS to mount all of the entries in fstab.
mount -a
- Issue the df -h command to show you the mounted devices. You should see a line in the df output that looks similar to this:
/dev/mapper/fog_disk-fog_files 99G 0G 99G 1% /opt
- With the new hard drive successfully attached to the /opt directory we can go ahead and install FOG as normal.
- Once FOG is installed we need to do a few clean up steps to get the content of the /images directory into the place I want it.
- Let's move the entire /images directory into its new location in /opt/fog. Issue the following command
mv /images /opt/fog
- Now let's create a new /images folder.
mkdir /images
- Now let's ensure that we have things set up correctly before we proceed. If you key in ls -la /images you should not have any files in that directory (hint: we just created it).
- If you key in the command ls -la /opt/fog/images you should see at least 2 directories (/opt/fog/images/dev and /opt/fog/images/postdownloadscripts). If these commands give us what we need, then we have everything in place to do the magic of the last step.
- In this step we are going to bind two directories together. This is kind of like a symbolic link. The issue with a symbolic link here is that NFS cannot export a symbolic link. We'll get around this by using a bind mount to give the appearance that /images is on the root partition, while in reality the files are in the /opt/fog/images directory. Append the following line to /etc/fstab. As with the other fstab line, this text must be on a new line.
/opt/fog/images /images bind bind 0 0
- With the fstab updated, issue the following command to mount this bind (link).
mount -a
- Great, the command completed successfully, but how can I tell if it worked? The answer is simple: list the contents of the /images folder. If there is stuff in it, then the command worked.
ls -la /images
You should see the same content as if you listed the /opt/fog/images directory.
# ls -la /images
total 40
drwxrwxrwx  10 root root 4096 Feb  7 09:55 .
dr-xr-xr-x. 20 root root 4096 Feb  7 09:45 ..
drwxrwxrwx   2 root root 4096 Oct 14 14:45 dev
-rwxrwxrwx   1 root root    0 Feb  5 18:57 .mntcheck
drwxrwxrwx   3 root root 4096 Jan 21 08:54 postdownloadscripts
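Another way to confirm the bind took is to compare the device:inode pair of the two paths with stat; a bind mount makes them identical. The sketch below demonstrates the idea with a temp directory and a symlink standing in for the two paths, since /images and /opt/fog/images only exist on the FOG server itself:

```shell
# Two paths name the same directory when stat reports the same device:inode.
# On the FOG server the real check would be: stat -c '%d:%i' /images /opt/fog/images
d=$(mktemp -d)
ln -s "$d" "${d}.link"
a=$(stat -c '%d:%i' "$d")
b=$(stat -c '%d:%i' "${d}.link/")   # trailing slash dereferences the symlink
echo "$a $b"
rm -rf "$d" "${d}.link"
```

If the two values match, the paths are backed by the same directory.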
- Just to ensure that everything we did survives a boot, reboot the FOG server.
- Use the df -h command to check to see if the /opt is still mapped to /dev/mapper/fog_disk-fog_files
- Issue the ls -la /images command to confirm you still have files in that location.
- Ensure that NFS still has the proper exports created.
showmount -e 127.0.0.1
Export list for 127.0.0.1:
/images/dev *
/images *
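For reference, those exports live in /etc/exports, which the FOG installer writes for you. A FOG-generated exports file typically looks similar to the fragment below; the exact option list can differ between FOG versions, so treat this as illustrative rather than canonical:

```
/images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
```

Because /images is the bind-mounted directory (not a symlink), NFS can export it normally.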
- If everything looks like it's in place, then you are done.
One thing I did not comment on above: by setting up a separate vmdk file for the /opt directory, we can expand the vmdk file as needed as our captured images increase. I'll follow up a bit later on how to add space to your /opt partition with just a few commands.
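As a preview of that follow-up, the expansion usually boils down to three commands once the vmdk has been enlarged in the hypervisor and the underlying partition grown (e.g. with growpart). The dry-run sketch below only prints the commands rather than running them, and the device and volume names assume the layout from this guide:

```shell
# Dry-run: print the commands that would grow /opt after the vmdk is enlarged.
# Assumes /dev/sdb1 has already been grown (e.g. growpart /dev/sdb 1).
grow_opt() {
  echo "pvresize /dev/sdb1"                             # tell LVM the PV got bigger
  echo "lvextend -l +100%FREE /dev/fog_disk/fog_files"  # grow the LV into the new space
  echo "resize2fs /dev/mapper/fog_disk-fog_files"       # grow ext4 online
}
grow_opt
```

Since the filesystem is ext4 and we are growing (not shrinking), resize2fs can run while /opt stays mounted.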
You just placed a link in a current issue regarding separating images onto a separate volume. Rather than confuse the person whose thread you were responding to, I thought I'd post my questions here since they pertain more to this article than to the other guy's question.
I regret that I'm buried with a project that has taken much longer than I had hoped, so I apologize up front for not having time to thoroughly read and digest your article here, and I completely understand if you don't have time to answer a question that is answered somewhere above. But we're moving forward rapidly with a small change to FOG for which I can foresee no problem, even though we're not doing what you recommend here.
To alleviate the issue of image storage, we chose only to mount a separate disk volume (on VMs, a VHD, on hardware an actual 1-4 TB HD) as /images and configured FOG at install (I believe) to use that folder for images.
Is this a mistake? We are required to upload every PC before re-imaging with Win7 or Win10, so we'll have a boatload of images and will need space accordingly. At some sites we may have to swap out the drive if we need to. We've already tested the process and it appears to work, but are we missing something? It seems you're splitting a lot of things up and splitting images off into a different folder.
Are there any long-term problems or scalability issues you know of if we leave the FOG installation as-is out of the box and move only the /images folder to a separate disk?
@jim-graczyk In short, will what you did work? Most assuredly. That is one way to go about it. And I will tell you it's much easier to do it the way you did it than post FOG install. Unfortunately, some people only realize that they need more disk space after they have set up FOG and fully configured it. So to avoid reimaging the server with the proper disk space, there is a roundabout method to do exactly what you did, but post install. From a performance standpoint for your VM, you will probably be better off with a 2 vmdk/vhd model since you will have 2 disk worker threads (one for the OS disk and one for images). So this route is (IMO) the preferred route.
Now to your point about all of the other stuff I did. I'm an old unix guy, so I have a golden rule: no unregulated (in size) data goes on the OS partition/disk. This is for Windows as well as unix/linux. In unix, if you fill up the root partition the OS will crash, and usually crash so badly that the only solution is to reinstall the OS. With that in mind, I see /images and /snapins as unregulated (uncontrolled) storage. So in the thread I moved both locations to a vmdk different from the root partition's. I could be wrong, but I don't think the FOG management program considers existing disk space availability or disk reserve when you go to upload objects into storage. If you have good control over your snapins, or don't use snapins, then mounting /images to the new vmdk is sufficient.
Thanks for the clarification. We’ve been caught with storage problems frequently enough that we know to use a separate disk volume for /images. For Snapins, we defined our processes when Snaps were much less capable than they are now. We use DFS on separate windows servers and leverage Samba as links under DFS on FOG storage Nodes for smaller sites. Our approach allows for easier tweaks to any Snapin by just editing the contents of some folders. No re-uploading to FOG and re-replicating a big ball of a Snapin. The only thing that replicates are the smaller changed files via DFS.
I get the old linux 'mandate' to separate everything into its own volume, though the guys I've worked with didn't do that at the disk level, but at the partition level (since most had to deal with a storage team, and getting one large chunk of disk and partitioning it was easier than explaining things to the storage team). I'm good with the concept, but don't follow it dogmatically. Instead I consider the use of the server. If the server is an appliance - does one thing for you, as FOG does - then a FOG server that boots Linux but doesn't do FOG is of no value to the business. In this case, I don't split volumes for everything. This goes for Windows and Linux.
I haven't had the problem you describe where the OS won't boot, and with VMs it's exceptionally easy to mount the VHD and free up space. I find that placing a hard limit on a specific folder (volume, disk, whatever) is an act of fortune-telling that will end up shutting down the app sooner than letting all of the folders supporting an app use the space that's allocated. I monitor everything with XYMON so I get alerts on disk consumption, but even without that, the run-time for the app is longer without partitions.
I only partition where there can be rapid growth that necessitates expanding a volume - and the /images folder in FOG is the best example, when uploading is required (several hundred GB from one client is possible).
I know my thinking is contrary to what some feel are best practices in Linux, but I've been happy with the results… I don't tend to lose the service the app provides because I misguessed the space the log files need by 100MB.
Just my opinion…