Setting up new storage node
-
@ITSolutions I did what you said: formatted as Ext4 (same as root). I am getting the same error.
Ext4 does not allow me to create new folders, so this time I just made the location /Fogdrive.
-
After the new drive is formatted and mounted and you can SEE it in the Linux system,
open the CLI, go to the new image directory (wherever you mounted it)
create a .mntcheck file there, then create the dev folder and a .mntcheck file in there.
touch .mntcheck;mkdir dev;touch dev/.mntcheck
The new directory needs 777 permissions assigned to it recursively as well (you may change this later after it’s working)
I’m keeping my instructions and commands here generic so that this can help others.
chmod -R 777 /the/path/to/your/new/hdd/mount/goes/here
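Putting those steps together as one minimal sketch (using /Fogdrive purely as an example mount point, since that’s what was mentioned above; substitute your own):
cd /Fogdrive
touch .mntcheck
mkdir dev
touch dev/.mntcheck
chmod -R 777 /Fogdrive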
Then you need to add the new images and new dev folders to the exports file:
vi /etc/exports
You’ll see the two lines in there already for your old local storage node. They will have IDs, and in 1.2.0 they start at 1.
Copy those two lines and change the IDs to 3 and 4. Modify the paths so they are correct. Then either reboot or restart NFS and RPC.
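As a rough sketch of what the finished file might look like (the exact export options vary by install, so copy whatever flags your existing two lines already use; /Fogdrive again stands in for your actual mount point):
/images *(ro,sync,no_wdelay,no_root_squash,insecure,fsid=1)
/images/dev *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=2)
/Fogdrive *(ro,sync,no_wdelay,no_root_squash,insecure,fsid=3)
/Fogdrive/dev *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=4)
The fsid values are the IDs mentioned above; every line needs a unique one.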
-
And obviously add the new location to your Storage Management area as a new master node (in its own group as well).
And I agree with @ITSolutions , it’s probably a very bad idea to try to use NTFS. I’d recommend Ext4 as he did.
-
@Wayne-Workman OK, so I think I followed you up until the last part.
@Wayne-Workman said:
After the new drive is formatted and mounted and you can SEE it in the Linux system,
open the CLI, go to the new image directory (wherever you mounted it)
create a .mntcheck file there, then create the dev folder and a .mntcheck file in there.
touch .mntcheck;mkdir dev;touch dev/.mntcheck
The new directory needs 777 permissions assigned to it recursively as well (you may change this later after it’s working)
I’m keeping my instructions and commands here generic so that this can help others.
chmod -R 777 /the/path/to/your/new/hdd/mount/goes/here
Then you need to add the new images and new dev folders to the exports file:
vi /etc/exports
You’ll see the two lines in there already for your old local storage node. They will have IDs, and in 1.2.0 they start at 1.
Copy those two lines and change the IDs to 3 and 4. Modify the paths so they are correct. Then either reboot or restart NFS and RPC.
Could you expound on what the last step does?
-
It’s not temp.mntcheck, it’s just
.mntcheck
(Files that begin with a period in Linux are hidden, you can see hidden files with
ls -la
)
For the /etc/exports stuff, look at this: https://wiki.fogproject.org/wiki/index.php/Troubleshoot_NFS. Look at the NFS Settings area in that article.
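To double-check that the hidden files ended up where they should be (again with /Fogdrive standing in for your mount point):
ls -la /Fogdrive
ls -la /Fogdrive/dev
Each listing should show a zero-byte .mntcheck file, plus the dev folder in the first one.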
-
@Wayne-Workman So essentially I am editing the old original .mntcheck files, copying their contents, editing them to reflect my new location (changing the 1 to a 3 and the 2 to a 4), and then saving that data over my new .mntcheck files in the new location under /media/administrator/Fogdrive. Do I have that correct?
-
@Jordonlovik said:
Do I have that correct?
Not really, no. Sorry.
The
.mntcheck
files are empty. They are blank. When NFS mounting occurs on the client, it verifies that mounting was done correctly and is working. It does this by checking for a file called .mntcheck
The clients do not look inside the file, they merely look to see if the file exists. Hence “mnt check”.
The
/etc/exports
file defines what directories are exported. Think of this as sharing. In that file, you should have two lines that describe your existing local storage node. Just copy those lines and paste them at the end of the file, then change those two pasted lines to describe the new exported directories (wherever you mounted your HDD to). You’ll need to update their IDs. You cannot have duplicate IDs in this file. Then save your changes and reboot.
Does this make sense?
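Also, if you’d rather not reboot, on most distros you can re-read the exports file and verify the result with (assuming the standard NFS server tooling is installed):
exportfs -ra
showmount -e localhost
The second command lists what the server is currently exporting, so you can confirm the new lines took effect.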
-
@Wayne-Workman Yes it does, thank you. All I have to do is add two new lines with new IDs that correspond to the new node location.
-
@Jordonlovik said:
All I have to do is add two new lines with new IDs that correspond to the new node location.
Correct.
Plus the .mntcheck files and dev folder, and the matching stuff in the web interface afterwards.
-
@Wayne-Workman Is there any other way to edit the .mntcheck file? The vi editor is really giving me trouble through my remote session.
-
You don’t edit the .mntcheck file. It’s supposed to be a blank file.
You simply create it with
touch .mntcheck
and that’s it, you are done.
http://www.linfo.org/touch.html
The touch Command
The touch command is the easiest way to create new, empty files.
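If you want to confirm the file was created and really is empty:
ls -la .mntcheck
The size column should read 0.
-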
@Wayne-Workman Sorry, I’m saying that wrong. I am actually modifying the /etc/exports file! I don’t know how or why I was thinking .mntcheck still. It looks like I may have F’ed my server over anyways. This is my first run at Linux and it’s been a bumpy one. I was not able to edit the /etc/exports file because it was owned by root, so I ran the 777 permissions command not knowing the consequences. Now I am getting the “sudoers is world writable” error. I wish I had realized my install partitioned so idiotically by default. I have plenty of storage space, it’s just not in the root directory where it needs to be.
-
@Jordonlovik I apologize, I should have realized. You do need “super user” permission when editing /etc/exports.
You can either A. change to super user for the duration of your session like
sudo su
or B. execute only one command with sudo like
sudo vi /etc/exports
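On the “sudoers is world writable” error from the previous post: the usual repair, sketched here on the assumption that you can still get a root shell some other way (pkexec, or logging in as root directly), since sudo refuses to run while /etc/sudoers is world writable:
pkexec chmod 0440 /etc/sudoers
0440 is the mode sudo expects on that file. If the chmod -R 777 hit system directories beyond your mount point, other files may need their permissions repaired too.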
-
@Wayne-Workman said:
And I agree with @ITSolutions , it’s probably a very bad idea to try to use NTFS. I’d recommend Ext4 as he did.
Poking my nose into this little ’ism. The NTFS-formatted image drives, physical and virtual, that I’ve been using for over a year disagree.
-
Maybe I’m seeing the wrong thing.
The DefaultMember is IP address 10.1.1.201, but the new node (which I imagine is on the same server?) is set to IP 10.1.1.102
-
@Tom-Elliott said:
The DefaultMember is IP address 10.1.1.201, but the new node (which I imagine is on the same server?) is set to IP 10.1.1.102
I noticed this myself. I changed that and am still experiencing the same error.
-
At this point I am strongly considering a full reinstall of the server, although it was labor intensive. I was just starting to experience the wonder of FOG imaging too, and I love it.
-
@Jordonlovik said:
At this point I am strongly considering a full reinstall of the server, although it was labor intensive. I was just starting to experience the wonder of FOG imaging too, and I love it.
I’d recommend CentOS or Fedora. These instructions basically work for both: https://wiki.fogproject.org/wiki/index.php/Fedora_21_Server
-
@Wayne-Workman You don’t prefer Kubuntu for FOG setups?
-
@Jordonlovik I don’t prefer things that aren’t Red Hat based. I’m studying for my RHCSA and RHCE, and if an employer wants to use Linux, they generally choose RHEL for the support that Red Hat offers.
If you’d like to create a tutorial for Kubuntu, please go ahead and post it in the tutorials area for everyone. And obviously we try our best to help people use whatever distro of Linux they want. @Tom-Elliott is really the guy who makes it work on just about everything.