Adding new storage after new HDD install?
-
Hate to have to ask here in the forums, but I can’t seem to find any documentation on this. I ran out of space on my original hard drive, so I installed a 2nd drive (2 TB) in the unit yesterday. However, I cannot figure out how to configure FOG to use that new drive. Can anyone help? This is a time-sensitive request.
I am running “Channel Alpha | Version 1.5.4.586” -
The answer may be simple or complex depending on how your fog server is setup.
The simple answer is: if your fog server (linux OS) is set up with LVM, just add that new drive to the LVM volume group and then expand the file system to the size of the new LVM volume. Then the OS is responsible for managing that new disk.
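As a rough sketch, assuming the new disk shows up as /dev/sdc, the filesystem is ext4, and the volume group and logical volume names below are placeholders (check yours with `vgs` and `lvs`):

```shell
# Initialize the new disk as an LVM physical volume
pvcreate /dev/sdc

# Add it to the existing volume group (name is a placeholder)
vgextend myvg /dev/sdc

# Grow the logical volume into all of the new free space
lvextend -l +100%FREE /dev/myvg/mylv

# Grow the ext4 filesystem to match (use xfs_growfs for XFS)
resize2fs /dev/myvg/mylv
```

All of these need root and operate on real block devices, so double-check the device names before running anything.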
If you want a bit more control over where your files are stored, you would create a new FOG storage group, and then add that new disk as a new storage node, replicating the settings you have for the default storage node in this new storage node, except for the disk location. I do have bits of a tutorial on how to do this part.
I would say using the LVM method is easier on the FOG configuration because nothing changes inside FOG. You continue to use FOG without any changes because everything is managed in the OS. If you need a 3rd disk with LVM, just add it to the LVM group, expand the filesystem, and move on. That is one of the sweet things about LVM.
Let’s start out by posting the output of these two commands here:
df -h
lsblk
-
@Jim-Holcomb And number three would be to move your current image location to that new disk. It’s not as flexible as what you have with LVM, but it’s pretty straightforward, does not add complexity to your setup, and no FOG config change is needed.
Format the new disk and mount it somewhere. Move everything from /images (including the .mntcheck file) to that new disk, then unmount it and remount it at /images. Done.
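A sketch of that process, assuming the new disk is /dev/sdc with a single partition and you want ext4 (adjust device names to match your system):

```shell
# Format the new partition (DESTROYS any data on /dev/sdc1)
mkfs.ext4 /dev/sdc1

# Mount it somewhere temporary and copy everything over,
# preserving permissions and hidden files like .mntcheck
mkdir -p /mnt/newimages
mount /dev/sdc1 /mnt/newimages
rsync -a /images/ /mnt/newimages/

# Swap the mounts: old disk out, new disk in
umount /mnt/newimages
umount /images            # only if the old disk was mounted there
mount /dev/sdc1 /images
```

Remember to update /etc/fstab so the new disk mounts at /images on reboot, otherwise FOG will come up with an empty directory.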
-
root@fog:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.8G     0  1.8G   0% /dev
tmpfs           376M  9.3M  367M   3% /run
/dev/sda1       226G  8.0G  206G   4% /
tmpfs           1.9G  248K  1.9G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sdb1       917G  866G  4.6G 100% /images
tmpfs           376M   64K  376M   1% /run/user/1000
/dev/sdc1       1.8T   68M  1.7T   1% /mnt/sdc1
root@fog:~#

root@fog:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 232.9G  0 disk
+-sda1   8:1    0 229.1G  0 part /
+-sda5   8:5    0   3.8G  0 part [SWAP]
sdb      8:16   0 931.5G  0 disk
+-sdb1   8:17   0 931.5G  0 part /images
sdc      8:32   0   1.8T  0 disk
+-sdc1   8:33   0   1.8T  0 part /mnt/sdc1
sr0     11:0    1  1024M  0 rom
root@fog:~#
[Mod note] Fixed formatting for readability -Geo
-
@george1421 I would love any tutorial that you might have.
-
@Jim-Holcomb OK, now we see the existing disk structure: you don’t have LVM set up on your server. So we can take that off the table.
The next decision you will need to answer is: Do you want to forfeit (or possibly remove) the current space you have for /images (~1TB) or use both your current space on /dev/sdb in addition to the new disk you added as /dev/sdc1?
The first option amounts to just copying the files from /dev/sdb to /dev/sdc and then removing /dev/sdb from your system.
(My personal opinion would be to add a much bigger [new] drive like 4 TB, copy the contents from /dev/sdb to the new disk, then remove the old disk. But I realize that budget may not be available in your environment.)
-
@george1421 I was hoping to just “add” the space, if at all possible? I most certainly do not want to forfeit any images I currently have. I do not mind moving the current images to the new drive, if that makes the most sense. This is why I am asking you all for help on this. Didn’t know adding a single drive to a linux box would cause this much pain. Once again, open to any best practice suggestions you might have.
-
@Jim-Holcomb said in Adding new storage after new HDD install?:
Didn’t know adding a single drive to a linux box would cause this much pain.
Wouldn’t call that pain. It’s just that you have more options and need to decide which way you wanna go. Compare that to Windows, where you are usually left with one single solution and that might not even suit your needs. Haha.
From my point of view you can still go the most flexible route of using LVM or, let’s say, do a combination of the things suggested by George and me. Start by taking a look at tutorials on LVM, e.g. https://www.tecmint.com/add-new-disks-using-lvm-to-linux/ (don’t just blindly follow this, but you should get the gist of how to configure LVM) and start playing with the newly added disk before you move your precious images over. Make the new disk an LVM physical volume, create a volume group and logical volume on it, then format and mount that.
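Those first steps on the new, empty disk might look like this (a sketch assuming the new disk is /dev/sdc; the volume group and logical volume names here are made up for illustration):

```shell
# Make the whole new disk an LVM physical volume
pvcreate /dev/sdc

# Create a volume group and a logical volume spanning all of it
vgcreate vg_images /dev/sdc
lvcreate -l 100%FREE -n lv_images vg_images

# Format the logical volume and mount it somewhere for testing
mkfs.ext4 /dev/vg_images/lv_images
mkdir -p /mnt/newimages
mount /dev/vg_images/lv_images /mnt/newimages
```

Play with this on the empty disk first; none of it touches the existing /images on /dev/sdb.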
Make sure you have a copy of your images on an external (disconnected) drive/media before you actually get to copy/move your images over!!
After you’ve moved the images to the new disk and mounted it at /images, you might look into adding the old /dev/sdb drive to extend your LVM (combining both disks).
I won’t give you a detailed step-by-step tutorial, as this is something critical where you need to use your brain and understand exactly what you are doing.
-
@Sebastian-Roth Looks like I have some research to do. Thanks for everything! Sure wish Tom were around…
-
@Jim-Holcomb Well, since you don’t have LVM and want to augment your current storage, I would recommend this approach, which takes the least amount of effort: https://forums.fogproject.org/topic/10450/adding-additional-image-storage-space-to-fog-server
This process basically connects the new disk as a pseudo new storage node. When you capture images you will have to select which storage group you will send the image to. Both storage groups will exist on your single fog server.
If you don’t like this approach, search for my handle in the tutorials. I have all three ways mentioned in different articles.
-
@george1421 I think this is exactly what I am looking for. Let me research your options here. If I have two storage nodes, the search function will still search across both nodes, right?
-
@Sebastian-Roth Sebastian, quick question for you. What would be considered “best practice” in this scenario? I have copied all my current (1 TB) files to a NAS device, so now both the 1 TB and the new 2 TB drives are available to do whatever is needed to them. At this point is LVM the best way to go?
-
@Jim-Holcomb Going LVM is definitely the most flexible way. You should be able to add more disks to the disk array if needed.
-
@george1421 I’ll have to check when I get back to work… but I wanted to piggyback off this topic. Our current FOG server and nodes are all running on 1 TB drives and also running out of space. We purchased 10 TB drives, and I was planning on using the method from the Wiki to remap /images to the new 10 TB RAID 1 (hardware level) and expand the root partition to the full 1 TB on the old drives to make more room for snapins.
https://wiki.fogproject.org/wiki/index.php/Adding_Storage_to_a_FOG_Server
If our FOG servers are using LVM, you’re saying I can just mount the raid and add it to the group as a pool storage? How do you keep the old hard drive from running out of space?
-
@jflippen said in Adding new storage after new HDD install?:
If our FOG servers are using LVM, you’re saying I can just mount the raid and add it to the group as a pool storage?
No, you’d add the new disk to the LVM volume group, then extend the logical volume and Linux filesystem to fill the new disk space. But that’s just the theoretical quick route. As I already said, this is a complex topic. There is no simple tutorial or howto to guide you. Every situation might be a bit different, and giving general advice can fail terribly.
To give at least proper advice on which way you should head, may I ask you to run the following commands and post the output here:
df -h
lsblk
-
[root@fog-master ~]# df -h
Filesystem                             Size  Used Avail Use% Mounted on
/dev/mapper/centos_fog--master-root     20G   14G  6.8G  67% /
devtmpfs                               3.8G     0  3.8G   0% /dev
tmpfs                                  3.8G     0  3.8G   0% /dev/shm
tmpfs                                  3.8G  169M  3.6G   5% /run
tmpfs                                  3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/sda1                             1014M  334M  681M  33% /boot
/dev/mapper/centos_fog--master-images  902G  233G  669G  26% /images
tmpfs                                  766M   12K  766M   1% /run/user/42
tmpfs                                  766M  4.0K  766M   1% /run/user/0

[root@fog-master ~]# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0   931G  0 disk
├─sda1                          8:1    0     1G  0 part /boot
└─sda2                          8:2    0   930G  0 part
  ├─centos_fog--master-root   253:0    0    20G  0 lvm  /
  ├─centos_fog--master-swap   253:1    0   7.8G  0 lvm  [SWAP]
  └─centos_fog--master-images 253:2    0 902.2G  0 lvm  /images
sr0                            11:0    1  1024M  0 rom
-
@jflippen This is not a step by step manual. Make sure you read about LVM, understand it and have a backup copy of the data before you start.
So in LVM speak, it seems like you have a physical volume on sda2 with one volume group (centos_fog--master) and three logical volumes. You should be able to assemble the new disk, initialize it as a physical volume, add it to the volume group, and then extend your centos_fog--master-images logical volume and the filesystem on it to span the size of the new disk.
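In concrete terms, that sequence might look roughly like the following. This assumes the new disk appears as /dev/sdb and the /images filesystem is XFS (the usual CentOS 7 default; verify with `df -T` first, and confirm the exact volume group name with `vgs`, since the doubled dashes in the device-mapper names usually mean the real group name has single dashes):

```shell
vgs                                     # confirm the volume group name first
pvcreate /dev/sdb                       # new disk as LVM physical volume
vgextend centos_fog-master /dev/sdb     # add it to the existing volume group
lvextend -l +100%FREE /dev/mapper/centos_fog--master-images
xfs_growfs /images                      # XFS grows via its mount point; use resize2fs for ext4
```

Again, this is a sketch, not a recipe; a wrong device name here can destroy data, so have a backup first.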
Read man pages and articles on LVM on the web before you start.
-
@Sebastian-Roth okay, thanks. I’ll do some research and see which route seems best. I still have a lot to learn about linux and wasn’t sure if there was a “new” way of adding storage to a FOG server, since the wiki didn’t mention LVM and I am not familiar with that yet.