Location Plugin - ID Must Be Set To Edit Error
gob last edited by gob
- FOG Version: 1.44
- OS: Debian 9
- Service Version: N/A
- OS: N/A
I have a clean install of the FOG server and am currently road-testing it with a view to rolling it out globally across our organisation. The plan is to have a single FOG management server with multiple storage servers dotted around the planet. As I understand it, the Location plugin is crucial to minimising bandwidth use. However, when I try to create a new location I get the message ‘ID must be set to edit’ and the location is not created.
I have found a similar post relating to the LDAP plugin but those troubleshooting steps seem specific to LDAP.
There are no errors posted in the Apache2 log.
Is this a wider problem related to all plugins or am I doing something wrong?
Thanks for any suggestions.
That is exactly the case. I need to investigate further.
@george1421 OK, looking more closely at what you are doing (and not assuming this time), it appears the NAS is not keeping the hidden .mntcheck files between reboots, so you need to recreate them after every NAS reboot. That is pretty strange.
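If the marker files really do vanish on reboot, a small sketch like the following could recreate them after each restart. The /share/images path is an assumption based on the QNAP share discussed in this thread, and recreate_mntcheck is a hypothetical helper name:

```shell
#!/bin/sh
# Recreate the hidden .mntcheck marker files that FOG expects to find in
# the image store and its dev/ subdirectory. The /share/images path is an
# assumption based on the QNAP layout used in this thread.
recreate_mntcheck() {
    images_root="$1"                    # e.g. /share/images on the NAS
    mkdir -p "$images_root/dev"         # dev/ holds in-progress captures
    touch "$images_root/.mntcheck" "$images_root/dev/.mntcheck"
}
# Usage (as root, on the NAS or over the NFS mount):
#   recreate_mntcheck /share/images
```

Running this from a startup script on the NAS would save having to do it by hand.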
@tedd77 Um, why are you doing that? The NAS and the FOG server should be standalone servers. You should not need to cross-mount anything.
It is working now; however, every time I reboot the NAS I have to execute the following from the FOG server:
mount -t nfs <nas_ip>:/share/images /images
mount -t nfs <nas_ip>:/share/images/dev /images/dev
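To avoid re-running those mounts after every reboot, entries along these lines in /etc/fstab on the FOG server would make them persistent. The <nas_ip> placeholder is kept from the post above, and the /images mount points are an assumption based on FOG's default layout:

```
# /etc/fstab on the FOG server (sketch; <nas_ip> is a placeholder)
<nas_ip>:/share/images      /images      nfs  defaults,_netdev  0  0
<nas_ip>:/share/images/dev  /images/dev  nfs  defaults,_netdev  0  0
```

The _netdev option tells the init system to wait for networking before attempting the mount.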
@tedd77 You need to ensure that the user account you created on the QNAP has full rights to that directory structure, and /share/images/dev also needs to be writable by a root user. Not THE root user, but A root user. If you look at my tutorial for the Synology, you will see I needed to enable extra rights on that NFS share. I can’t speak for the QNAP tutorial since I did not write it. I did write one for MS Windows 2012 and had to set similar rights.
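On a plain Linux NFS server, the "writable by a root user" part usually maps to the no_root_squash export option. As a hedged sketch, an exports file in the style of FOG's own defaults might look like this (a QNAP configures the equivalent through its own UI, and the paths are assumptions from this thread):

```
# /etc/exports (sketch modelled on FOG's stock export options)
/share/images     *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/share/images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
```

Without no_root_squash, the NFS server remaps client root to an unprivileged user and capture writes into /share/images/dev fail.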
Very good tutorial; I applied the settings to the QNAP.
Now I can see that FOG is pointing to my QNAP, and FOG attempts to write but fails after a few seconds.
see screen capture
A FOG storage node needs:
- NFS (set up a certain way)
- FTP (FOG uses it to move and rename files on the image share)
- TFTP (if you want to pxe boot your clients)
Once FOG accepts that storage node and you can deploy from it, switch and make the NAS the master node of its storage group. It’s pretty easy to set up if you have only one site, a FOG server, and a NAS. It’s a bit more complicated if you have multiple storage nodes across several sites. You can set that up; it just requires some smoke and mirrors.
@tedd77 I’m sure someone wrote a how-to for the QNAP. I did write one for a Synology NAS. Let me grab that one (and look for the QNAP one).
QNAP 431X with 10G interface
@tedd77 Who makes your NAS? There is some setup required to turn a NAS into a storage node.
Yes, I defined my NAS as a storage node.
I created a user called fog and a folder called fog on the NAS, and gave that user full rights to the folder.
On the FOG server I created a storage node and pointed it to the NAS server.
@tedd77 Do you have the NAS set up as a FOG storage node?
Thank you. What is the best method, then, to capture images to a large storage NAS?
My FOG server resides on a VM with limited storage.
The aim is to have a NAS on the same network to capture and distribute images. The FOG server will be the leader (orchestra conductor) of all operations.
@tedd77 Storage nodes can only send images (normally). You can only send captured images to the master node of each storage group.
I'm still having a hard time figuring out how to make FOG write the images to a separate NAS server.
I looked on YouTube (https://www.bing.com/videos/search?q=fog+location+management+tutorial&&view=detail&mid=ABF5CCD8147F2039673EABF5CCD8147F2039673E&FORM=VRDGAR) with no luck.
I created 1 extra group, 1 new storage node, and 2 locations.
Whenever I divert from the default group, the machine fails to create an image. If I keep the default group it works, but the image is stored on the FOG server itself.
@george1421 I’d also recommend removing the /var/www/fog and /var/www/html/fog directories and rerunning the installation.
In older FOG versions, FOG would try to detect the web root location based on the OS. Since Debian and Ubuntu have now started using the /var/www/html path, similar to Red Hat-based distributions, it was possible on upgrades to end up with two separate instances of FOG installed, with one being the “primary”.
The installer does try to create a softlink to /var/www/fog from /var/www/html/fog in case /var/www/fog is still the default location. Removing both and reinstalling should ensure that, no matter which document root it’s using, it will work without issue.
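For illustration, the softlink the installer creates can be sketched like this. link_docroots is a hypothetical helper name, and which of the two paths is the real directory depends on which document root your installer chose:

```shell
#!/bin/sh
# Point one document root at the other so that both /var/www/fog and
# /var/www/html/fog serve the same FOG tree. link_docroots is a
# hypothetical helper; the FOG installer does the equivalent internally.
link_docroots() {
    real="$1"        # directory that actually holds the FOG files
    alias_path="$2"  # path that should become a symlink to it
    mkdir -p "$real"
    ln -sfn "$real" "$alias_path"  # -n: replace an existing symlink rather than descend into it
}
# Usage (as root): link_docroots /var/www/html/fog /var/www/fog
```

Either direction works as long as the web server's document root ends up resolving to the directory that actually contains the FOG files.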
@tedd77 Well, since you found the issue, there is no need to upgrade to the dev branch unless you really want to.
Here are the steps.
sudo -i
git clone https://github.com/FOGProject/fogproject.git /root/fogproject
cd /root/fogproject
git checkout dev-branch
cd bin
./installfog.sh
Now when 1.5.0 is released all you need to do is this.
sudo -i
cd /root/fogproject
git checkout master
git pull
cd bin
./installfog.sh
git checkout dev-branch and git checkout master are the commands that switch the installer between the two branches.
They are 100% OK with your hash.
First, please accept my apologies for any inconvenience I may have caused.
I discovered that I was writing the file to the wrong directory.
In fact, I have two fog folders.
The correct one is the second in the list above; unfortunately I was writing to the first, and nothing was working for me.
Now that I have dug in properly, things are working for me, and I can confirm the patch is perfect.
@george1421 I still wouldn't mind trying the dev version. Could you please send me the instructions?