Cannot add a storage node
I have a small problem at the “add storage node” step; the storage I want to use is on a NAS (QNAP). I use an NFS share, mounted on my FOG server at /images with this command:
- mount -t nfs 192.168.1.50:/master/images /images >>>>> Success!
I set the following permissions in this (mounted) directory:
- touch /images/.mntcheck
- touch /images/dev/.mntcheck
- chmod -R 777 /images
- chown fog:root /images
- chown fog:root /images/dev
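The marker-file layout those commands create can be sketched like this; a scratch directory stands in for the real /images NFS mount so the commands are safe to run anywhere (the real paths are the ones above):

```shell
# Sketch of the marker files FOG checks for in the image store.
# $IMAGES is a temporary stand-in for the real /images mount point.
IMAGES=$(mktemp -d)
mkdir -p "$IMAGES/dev"
touch "$IMAGES/.mntcheck" "$IMAGES/dev/.mntcheck"   # FOG's mount-check markers
chmod -R 777 "$IMAGES"
chown fog:root "$IMAGES" "$IMAGES/dev" 2>/dev/null || true  # needs a fog user
ls -A "$IMAGES"
```

If the .mntcheck files cannot be created on the real mount, the NFS export or its permissions are the first thing to check.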
My /etc/exports file was not modified and still has the default content:
- /images (ro,sync,no_wdelay,insecure_locks,no_root_squash,insecure,fsid=1)
- /images/dev (rw,sync,no_wdelay,no_root_squash,insecure,fsid=2)
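Note that an /etc/exports entry normally includes a client specification before the option list; a complete sketch of such a file looks like this (the `*` wildcard, meaning any host, is illustrative, not taken from the post above):

```
# /etc/exports — a client field (here "*") precedes each option list
/images      *(ro,sync,no_wdelay,insecure_locks,no_root_squash,insecure,fsid=1)
/images/dev  *(rw,sync,no_wdelay,no_root_squash,insecure,fsid=2)
```

After editing the file, `exportfs -ra` re-exports the shares without restarting the NFS server.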
And I have full rights to create or remove files in my mounted /images (on the NAS).
But when I add the storage node in the FOG web interface like this:
/master/images = Path of my NAS’s share
192.168.1.50 = IP of NAS
Username/password = default (tftp user)
it shows an error (“Failed to connect to”) under Disk Information…
So is it impossible to point the “add storage node” step directly at a NAS, or is my configuration wrong?
Thanks in advance
I finished my lab with a Windows storage node, with a few permission problems at first, such as the need to grant the FTP user (the FileZilla user) permission to create and delete files and DIRECTORIES.
At the end of the capture task, FOG needs to rename the image file. So, if my storage path is:
With FileZilla, I had to create a user with full rights on c:\, c:\images, and c:\images\dev to complete my lab.
Should I do the same thing for the user I created on the NAS? When I create a user on the NAS with RW access to my “images” share, it does not allow me to capture or deploy an image. I am forced to use the administrator account of my NAS to make it work. So, as with FileZilla, must the FTP user on my NAS have full access to the root? (And I did not see this option in QNAP.)
Thanks in advance
I will try Nagios, why not, it's a good idea ^^
That's what I thought too about the disk information, given my tests.
I finally finished my lab with the NAS by activating the FTP service on the NAS and adding an FTP user on it as the management user (used at the “add storage node” step in the web GUI). It's a success!!
Also, I enabled NFS access for my whole network; is that necessary? For capture or deployment tasks, is the FOG server the only one that needs access? Don't the PCs need it?
Last question ^^: Does SMB work too?
I am continuing my tests with a Windows storage node…
Thanks all for your help
It [U]may[/U], but there are no guarantees. It depends on the OS and other configuration on that particular NAS.
I thought my Disk Information error would block me during capture/deployment.
I will continue my lab and let you know if it works.
Should Disk Information work if I connect to the NAS?
Thanks for your help
The only way of really knowing if it is working is to check the replication logs to see if that node is successfully replicating the files. If it is then you can set up a download task for a client and remove queuing from the fog server and other nodes on the network.
One word can solve all your reporting woes… Nagios.
That is all
You can use it to check if a server is alive, and with a proper setup it can even read out disk usage and make custom graphs, as well as a load of other things. I know it's not perfect, but it's a solution.
If you want to try Nagios, I recommend FAN (Fully Automated Nagios), as it comes with Nagios, Centreon (used to create the services and actually do the checking), and NagVis, a visual front end for Nagios.
By no means is it for the faint of heart; I just wanted everyone to know that this can be done and that some Linux servers already do it.
I tried many things to resolve my problem, but nothing works.
To rule out the NAS, I created a Windows storage node with NFS, but I get the same result… (no disk information in the FOG web interface).
I configured the Windows NFS server (ANSI encoding) with full rights for all computers, full NTFS access for Everyone, and anonymous logon to my share c:\images.
I also enabled the GPO to permit anonymous access to my share on Windows Server.
And FileZilla with these directories (and the fog username/password used as the management user):
At this step (add storage node), must the management user be an NFS user? If so, there was no way to create one on the NAS.
I cannot connect my NAS's storage directly in the FOG web interface; maybe I did not understand how it works…
Does this mean that FTP must be enabled, using the login/password configured for these directories?
But if I use FTP, can we still say we are using NFS here? Or is FTP just for authentication?
Thank you in advance
The problem is that when I created the NFS share on my NAS, I only allowed access for my FOG server (192.168.1.50) but never created a user for it. When creating the share, before activating NFS on it, it asked me about a user (probably SMB permissions)… Just in case, I created on the NAS the same fog/password user used in the FOG web interface (at the “add storage node” step) >>> it's no better after creating this local user.
But no, I did not activate FTP on the NAS.
Can you tell me what kind of account I have to put in “Management Username”? (at the add storage node step):
tftp or ftp? Is FTP enabled on the NAS? The problem, as I see it, is that FTP accesses paths differently from NFS.
With NFS you can target paths directly, because it starts at the root level (/), so adding /master/images is the correct way to connect to that particular NFS setup. However, most NASes, in my experience, explicitly redirect the FTP connection to a particular home directory; in your case /master is the FTP user's / directory. Essentially, from FTP's standpoint, it is literally trying to connect to /master/master/images, hence the issue. If you rename the image path from /master/images to /images, FTP will work, but then NFS will not connect.
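The doubled path can be illustrated with plain string logic (the FTP home directory here is an assumption about how the NAS chroots its FTP users):

```shell
# FTP resolves "absolute" paths relative to the FTP user's home directory,
# while NFS resolves them from the real filesystem root.
FTP_HOME="/master"                 # assumed FTP home on the NAS
STORAGE_PATH="/master/images"      # path entered in the FOG web UI
echo "NFS sees:  $STORAGE_PATH"
echo "FTP sees:  ${FTP_HOME}${STORAGE_PATH}"   # the doubled /master prefix
```

This is why a single storage-path value cannot satisfy both protocols unless the FTP home happens to be the filesystem root.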