Adding new drive to FOG 1.2 server, upload issue to storage node. Any ideas?
I’ve added a new drive to our existing FOG server and followed the instructions from the post below to get it added as a storage node on the existing server.
Re: “I need to add additional storage on an already running FOG server”
FOG 1.2 installation on Debian 7.6 - in operation since 9/2014
2TB drive with images in the standard /image directory
We have about 450GB free on the existing drive, so I added another 2TB drive and it’s mounted at /images2 (I’m not very creative in that regard). I created empty .mntcheck files in /images2 and /images2/dev.
I added a new storage group and added a storage node, set the IP address to the FOG Server IP, set the path to /images2 and set the Management username to “fog” and the password to the one I used when installing Fog.
I manually edited the /etc/exports file to export /images2 and /images2/dev.
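For anyone following along, the drive prep and export steps above can be sketched as a short shell session. The device name /dev/sdb1 is an assumption (check yours with `lsblk`), and the export options below just mirror typical FOG 1.2 defaults — copy whatever options your existing /images lines use, and give each export a unique fsid:

```shell
# Mount the new 2TB drive at /images2 (device name is an assumption).
mkdir -p /images2/dev
mount /dev/sdb1 /images2
echo '/dev/sdb1  /images2  ext4  defaults  0 2' >> /etc/fstab  # survive reboots

# FOG checks for .mntcheck before it will use a store.
touch /images2/.mntcheck /images2/dev/.mntcheck
chmod -R 777 /images2

# Export the new store over NFS, mirroring the options on your /images lines.
cat >> /etc/exports <<'EOF'
/images2 *(ro,sync,no_wdelay,insecure_locks,no_root_squash,insecure,fsid=2)
/images2/dev *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=3)
EOF
exportfs -ra            # re-read /etc/exports without restarting NFS
showmount -e localhost  # verify both new exports are visible
```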
I rebooted the Fog server for good measure and created a new image to reside on the new storage node. I began an upload. During the upload, I noticed that the temporary MAC-based directory for the upload in the dev directory was in /images/dev rather than in /images2/dev. I thought “hmmmm”, but let it continue. After the transfer completed, the files were not moved to their place in the /images2 directory and are still in /images/dev under the MAC directory.
I’ve checked the permissions on the folders; they’re all set to 777 and root is the owner, which matches what was set on the original /images folder. If I log in via FTP with the fog account and password, it succeeds. I can even see the additional storage on the dashboard. I’ve looked at the FTP debugging info here in the forums, and I think I have it set properly. Is it perhaps a file ownership issue? Any other ideas?
Thanks for any help you can provide!
@Dave-Wolf The article’s figure of minutes for a 100GB+ image isn’t a hard-and-fast number. If your system is older, running on older hardware, under higher load, or has slower RAM, processor, or bus speeds, that figure will vary quite a bit, I imagine - even more so if it’s using a SAN that is less than amazing.
And what you’ve described is exactly what Sebastian describes: credentials check out, it looks like a timeout, and you’re on ext3. I’m almost positive you are affected. Ext3 is the issue - it’s simply how it operates - and there’s no configuration change that fixes it short of moving to ext4 or better.
And - several people here have used FreeNAS with FOG; there’s a wiki writeup about it in two different spots, I think. I’ll see if I can find them.
Newer note and features on the subject:
@Wayne-Workman Hi Wayne - on the FOG server it’s ext3, but the images are typically less than 20GB and I haven’t experienced the error referenced in the link. Right now I’ve archived some of our lesser used images to another server and can restore them if we need them. This has freed up enough space for a bit while I can still try to get this working.
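For anyone checking their own server, the filesystem type is easy to confirm with `df -T`. The mount points below are the ones from this thread; the root filesystem is included so the command still prints something useful if those paths don’t exist on your box:

```shell
# Print the filesystem type (ext3/ext4/...) backing each image store.
# /images and /images2 are the paths from this thread; adjust as needed.
df -T / /images /images2 2>/dev/null || true
```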
In addition, I’m still working on a possible solution of running FOG in a FreeNAS jail so I have access to an easily expandable amount of storage. I’ll need to test the performance, but it might be a better way to go for us. Has anyone else tried to run FOG in a FreeNAS jail?
Thanks for the continued support, Wayne!
@Dave-Wolf What format are the partitions in? I ask because @Sebastian-Roth discovered a fairly serious issue with ext3 several months ago, noted here:
@george1421 It’s the same password used by the default storage node. I used WinSCP to connect via FTP to the FOG server with the same credentials and was able to upload a file, download a file, and delete a file. Not sure what else I can try; it would be nice to see if some error is being spit out in a log file somewhere, or maybe on the client screen? I didn’t see anything that indicated an error, just a rather long delay at the end of the upload (which smells like a timeout to me).
I decided to work on a new server in a jail on our FreeNAS server while I debug this issue. I’m going to experiment with LVM as well, so I can hopefully prevent this issue in the future when we run out of space.
@Dave-Wolf You may need a GPT disk to extend beyond the 2TB limit; MBR is probably out of the question.
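If a fresh virtual disk larger than 2TB is on the table, it would need a GPT label - MBR tops out around 2TiB with 512-byte sectors. A sketch with parted, where /dev/sdc is an assumed, brand-new empty disk (this wipes whatever is on it, so double-check the device name with `lsblk` first):

```shell
# Label a NEW, EMPTY >2TB disk as GPT and create one big ext4 partition.
# /dev/sdc is an assumption - verify with `lsblk` before running anything.
parted -s /dev/sdc mklabel gpt
parted -s -a optimal /dev/sdc mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdc1
```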
As for the files getting stuck in the /images2/dev folder, that is typically because the FTP username and password defined for that storage node can’t log into the FOG server via FTP. Can you confirm that the user/pass defined for the FTP account on the second storage node is valid?
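One wrinkle worth testing explicitly: at the end of an upload, FOG 1.2 moves the image out of dev with an FTP rename, so a plain upload/download test can succeed while the move still fails. A hedged sketch of that check using curl’s `-Q`/`--quote` FTP commands - the IP address and password are placeholders, and the paths assume the fog user’s FTP root maps to /:

```shell
# Upload a scratch file into the dev area, then attempt the same kind of
# FTP rename FOG performs after an upload completes.
# 192.168.1.10 and 'fogpass' are placeholders - use your server and password.
echo probe > /tmp/ftp-probe.txt
curl -sS -T /tmp/ftp-probe.txt ftp://192.168.1.10/images2/dev/ --user fog:fogpass
curl -sS ftp://192.168.1.10/ --user fog:fogpass \
     -Q "RNFR /images2/dev/ftp-probe.txt" \
     -Q "RNTO /images2/ftp-probe.txt"
# If the second command errors, the post-upload move will fail the same way.
```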
@george1421 I don’t think fdisk will allow me to extend the partition beyond 2TB, so I think I’m stuck with that on the existing system.
I’m very close with the second storage node concept; the files just aren’t being moved out of the /images2/dev directory at the end of the transfer. Is there some way I can enable debug and step through the upload on the client system to get more information about why the move is failing? I see a long pause when the upload completes, as if it’s trying to move the files, but they remain in the MAC-address-named directory under dev.
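One way to catch more detail during that pause is on the server side rather than the client: the post-upload move is performed by the web backend, so errors tend to land in the web server’s error log. The paths below are Debian/Apache defaults and an assumption for your setup:

```shell
# Watch the web server error log while the upload finishes.
# /var/log/apache2/error.log is the Debian/Apache default; adjust per distro.
tail -f /var/log/apache2/error.log
# If vsftpd logging is enabled, the raw FTP side shows up here:
#   tail -f /var/log/vsftpd.log
```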
If I create a new image in the default storage group (which points to /images), it uploads and moves the files properly. The permissions on /images and /images2 match exactly; I’m just not sure what’s different or missing.
Thanks again for your help, and if anyone else can chime in, I’d appreciate it very much.
@Dave-Wolf If this is on ESXi, the disk was created as a standard partitioned disk, and your root partition is the last partition on the disk, you still have options (I’m not sure about the 2TB part, but I suspect you are right that the vmdk files are limited to 2TB). You can expand the vmdk file and then - while it sounds scary - delete the partition and immediately recreate it with the additional disk space. You are not changing the start of the partition, only the end of the last partition. Once that is done, you would just expand the file system.
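The grow-in-place steps above can be sketched as follows - heavily hedged: take a VM snapshot or backup first, and note that /dev/sda1 as the last (root) partition is an assumption for illustration:

```shell
# AFTER expanding the vmdk in ESXi and taking a snapshot/backup:
# 1. Delete and recreate the last partition at the SAME starting sector,
#    with a larger end. In fdisk: d (delete), n (new, keep the original
#    start sector), w (write).
fdisk /dev/sda
partprobe /dev/sda   # re-read the partition table (or reboot)

# 2. Grow the filesystem to fill the enlarged partition (works online
#    for ext3/ext4).
resize2fs /dev/sda1
```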
Or you can set up the second storage node as you have already laid it out. I think Wayne has done a similar setup.
@george1421 My FOG server isn’t using LVM at the moment, unfortunately, and I would probably have to rebuild it to convert, since I only have the one 2TB volume. It’s deployed on an older ESXi server that had a little over 2TB left, which I just wanted to add. My existing disk was created with fdisk, and I believe it’s limited to 2TB. So to do things “the right way” I would have to scrub this installation and start over, unless I’m missing an easy way to convert. This server is used daily by my development team to deploy images for automated testing, so I’m looking for a way to add capacity without taking the system offline for any extended period. I will keep LVM in mind when the time comes to move to a better FOG hardware platform - thanks for the suggestion, I do appreciate it.
While this isn’t a direct answer to your post, your post makes me wonder if there is a better way to go about this.
My personal preference is to move the image files off the root partition. I did create a tutorial for doing this. But it doesn’t exactly fit your current setup since it appears you are using a physical fog server.
IMO the easiest way to add another 2TB of storage to your kit is not the way you have it set up. If it were me, and I knew your Debian install was using LVM disk management, I would just add that 2TB disk to the root ( / ) LVM volume group. That puts Linux in charge of managing and spanning the data between the disks, so you don’t have to mess with any funky FOG configuration. Once the disk is added to the LVM group and you expand the file system to take advantage of the new space, you use FOG just as you have for the last two years - no FOG changes needed, you simply have more space in the file system. If you need more space later, you add a third drive to the LVM group, and so on.
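The LVM route above boils down to three or four commands. The volume group name (vg0) and device (/dev/sdb) are assumptions - check yours with `vgs` and `lsblk` first:

```shell
# Add a new 2TB disk (assumed /dev/sdb) to the root volume group and
# grow the root filesystem online. VG/LV names are assumptions - see `vgs`.
pvcreate /dev/sdb                    # initialize the disk for LVM
vgextend vg0 /dev/sdb                # add it to the root volume group
lvextend -l +100%FREE /dev/vg0/root  # hand all the new space to the root LV
resize2fs /dev/vg0/root              # grow ext3/ext4 to fill the LV (online)
```

On most distributions `lvextend -r` will run the filesystem resize for you, which collapses the last two commands into one.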