Unable to use an NFS share to store images
-
@Junkhacker I believe it has that capability. I will try following that documentation again. Maybe I’m missing something.
-
@vpt I know that there are a few forum posts on how to do it as well, if you haven't already searched. This one, for example: https://forums.fogproject.org/topic/8668/qnap-nas-storage/18
-
@vpt said in Unable to use an NFS share to store images:
With this setup, I get an error message during capturing that states that access is denied when attempting to mount the share to the /images directory. I have verified that the share on the QNAP is wide open to anyone, but I still get an access denied error.
It's probably because FOS Linux connects to the storage node using NFS, and the NFS server (the QNAP in this case) has root_squash enabled, meaning that it rejects root mounting the NFS share. I have some KBs in the tutorials forum that talk about Synology NAS devices and about using a Windows server as a FOG storage node (not recommended) that mention this setting too.
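For reference, this is roughly what that setting corresponds to at the raw NFS level (the path is the one from your setup; on a QNAP you would normally change this through the NFS host access/permission dialog rather than by editing /etc/exports directly):
```
# Illustrative /etc/exports entry only. "no_root_squash" lets root on the
# FOS client stay root on the share; the default "root_squash" maps root to
# an anonymous user, which is what produces the access-denied behavior.
/ImageRepo/FogImages  *(rw,sync,no_root_squash,no_subtree_check)
```
-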
@george1421 Thanks for that piece of information. I had completely forgotten to verify that on the QNAP.
I enabled no_root_squash for that share, and I get a different error message now. See this image:
https://meridiannatl-my.sharepoint.com/:i:/g/personal/mzarvalas_vptitle_net/EaYCC_4OSztCm5Acom_qH3YBhoXBTeBavx1OK-Or9aNMuQ?e=Gq8I3H
I'm not sure what this error message is trying to convey.
(Also, sorry for the OneDrive link. The image upload feature kept telling me the file was too large.)
-
@vpt First, let me say: don't get confused by the paths you see in the picture. When it says "Failed to create ... /images/1866...", that means /images on the local system. The FOS Linux that boots to do the task mounts /ImageRepo/FogImages/dev on your QNAP onto /images inside FOS Linux on capture. So what it is trying to say, as far as I can see, is that it cannot create /ImageRepo/FogImages/dev/1866... on your QNAP. Make sure that path is writable when mounted via NFS.
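If you want to test that mapping by hand from any Linux box, something like this should reproduce the failure (the QNAP address is a placeholder; adjust the export path if yours differs):
```
# Rough manual equivalent of what FOS does on capture. If the mkdir fails
# here too, the problem is on the NAS side rather than in FOG.
mkdir -p /mnt/qnaptest
mount -t nfs <qnap-ip>:/ImageRepo/FogImages/dev /mnt/qnaptest
mkdir /mnt/qnaptest/writetest && rmdir /mnt/qnaptest/writetest
umount /mnt/qnaptest
```
-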
@vpt OK, then let's start out by describing how FOG works (I'm going to describe how FOG was intended to work).
When you upload an image, FOS Linux mounts the FOG server's /images/dev directory onto /images on the target computer via NFS. It does this as the root user. FOS then checks that the hidden mount-check file is present (it can be seen on the FOG server with ls -la /images/dev; this mount file is in both the /images and /images/dev directories). FOS Linux then creates a directory in /images/dev on the FOG server that matches the MAC address of the target computer (you should inspect each step to see where it's failing). FOG then uploads the disk image to /images/dev/<mac_address> on the FOG server. Once the upload is done, FOS logs into the FOG server via FTP and issues a move command to move the uploaded files from /images/dev/<mac_address> to /images/<image_name>. Then FOS updates the database on the FOG server and reboots the target computer.
That is the workflow. Since the error is around preparing the backup location, it's either not able to create the <mac_address> directory or the mount-check files are missing.
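As a rough sketch (not the actual FOS init scripts, just the order of operations, with <fog-server> and <mac> as placeholders), the capture sequence boils down to this:
```
mount -o nolock <fog-server>:/images/dev /images   # mounted as root over NFS
ls -la /images/.mntcheck                           # the hidden mount-check file must be visible here
mkdir /images/<mac>                                # working directory named after the target's MAC
# ... partclone streams the partitions into /images/<mac> ...
# ... FOS then logs into the FOG server over FTP, moves /images/dev/<mac>
#     to /images/<image_name>, updates the database, and reboots ...
```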
-
@george1421
I checked the /images and /images/dev directories like you mentioned, and I confirmed that they both have .mntcheck files in them. However, I don't see a directory with the machine's MAC address in the dev directory, so my guess is that's where my problem lies.
-
@vpt I can tell you a trick/secret. If you schedule a debug deploy/capture and then PXE boot the target computer, after a few screens of text that you clear with the enter key, you will be dropped to the FOS Linux command prompt. From there, key in fog at the command prompt. You will then single-step through the image capture/deployment; you will need to press enter at each breakpoint to continue. Keep stepping through the deployment until you get the error, then hit Ctrl-C to abort it. From there you can look around to see what isn't right (for example, inspect /images from the FOS Linux end to see if you can make directories and such). Once you think you have the issue resolved, you can restart the deployment by keying in fog again at the FOS Linux prompt. This is the method I use to debug post install scripts.
-
@george1421 Thanks for that trick. It looks like it is failing on the "Preparing backup location" task. On your recommendation, I hit Ctrl+C to abort, went to the /images directory in FOS Linux, and attempted to mkdir test. It told me that it couldn't create the directory because it was a read-only file system.
Does this mean that FOS can’t write to the directory on the QNAP?
-
@vpt Yes, you don't have the permissions set right: either you shared the NFS export as read-only, or it's the file-level permissions on that directory on the QNAP that are blocking the write. Once you get the permissions right, you should be able to create that directory.
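Two quick checks from the FOS debug shell will tell you which of the two it is (read-only export vs. directory permissions):
```
mount | grep /images                              # look for "ro" vs "rw" in the mount options
ls -ld /images                                    # owner and mode of the exported directory
touch /images/writetest && rm /images/writetest   # quick write test
```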
-
I realize that you are using a QNAP, but here is the tutorial for the Synology NAS devices: https://forums.fogproject.org/topic/9430/synology-nas-as-fog-storage-node You should review the permissions set on the directories.
And I'm sure you've seen this QNAP tutorial: https://forums.fogproject.org/topic/10973/add-a-nas-qnap-ts-231-as-a-storage-node-fog-v1-4 The same rules apply to 1.5.x as to 1.4.x.
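If you end up adjusting permissions over SSH on the QNAP rather than through the web UI, it amounts to something like this (the path is an assumption; on many QNAP models the shared folder lives somewhere under /share/, so adjust it to wherever yours actually is):
```
# Wide-open permissions are the blunt-but-typical setting for a dedicated
# image store; you can tighten them later once captures work.
chmod -R 777 /share/ImageRepo/FogImages
```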
-
@george1421 Thanks for these guides. I’m going through the QNAP one step-by-step, and I’m just waiting for the advanced folder permissions to apply. Based on how quickly it’s going, it probably won’t finish today, but I will follow up with my results once it finishes.
I appreciate everyone sticking with this and helping out.
-
@george1421 I was able to complete your QNAP guide finally (been out sick the last few days), and I get a bit further in the process now. However, I still get an error when I get to the Partclone part of the process. See this image for the error in Partclone:
https://meridiannatl-my.sharepoint.com/:i:/g/personal/mzarvalas_vptitle_net/ERHZpkPVz7NAtaerPQsvHNAB0qBUmJMNskhfgvGlTrcpGw?e=Nx1shP
And see this image for the error message afterwards, back in FOS Linux:
https://meridiannatl-my.sharepoint.com/:i:/g/personal/mzarvalas_vptitle_net/EdicD0QitxBDsyIuY3JemsoB0XlA97xNczYbRhv_o5rDFw?e=7MQ0AU
I tried researching the error in Partclone, and I attempted to run ntfsfix against both sda1 and sda2, but no luck. I also tried capturing an unsysprepped image, shutting down Windows with the "shutdown /s" option to avoid the Fast Startup issues, but my results were unchanged.
The second error, back in FOS Linux, seems to indicate that there isn't enough space to capture an image, but if FOS is using the NAS share as the error message seems to indicate, there should be more than enough space.
Let me know if you have any further troubleshooting advice to try. Thanks a bunch for your help so far.
-
@vpt That first error is saying that Windows was probably shut down incorrectly for image cloning. This happens when someone just uses the shutdown button on the start menu. With fast startup enabled in Windows, shutdown is really an enhanced sleep state; it's not really "shutdown" as we would think of it.
So what can you do?
Shut down the Windows computer properly for cloning by one of the following:
- If you use sysprep, add the /shutdown switch to properly power off the computer for image cloning.
- If you don't use sysprep, then key in shutdown.exe -s -t 0 from a command window.
To understand what is happening here you can google “windows dirty bit”.
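For what it's worth, the flag itself can also be cleared from the FOS debug shell (a sketch, assuming the Windows partition is /dev/sda2 as in your earlier ntfsfix runs), though that only clears the flag rather than fixing whatever set it:
```
# Clears the NTFS dirty flag. This does NOT repair the filesystem the way
# chkdsk does, so a proper shutdown / chkdsk from Windows is still the real fix.
ntfsfix --clear-dirty /dev/sda2
```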
-
@george1421 I have tried both shutting down the system using shutdown.exe -s -t 0 and sysprep with the /shutdown switch. I've also tried hitting Shift + F10 when the sysprepped image boots to the setup screen, launching the command prompt, and shutting down from there. I still got the same error.
I also tried running through this guide:
https://wiki.fogproject.org/wiki/index.php?title=Windows_Dirty_Bit
I completed all three steps in that guide, and I still get the same error in Partclone.
-
@vpt So did you boot the image you are trying to capture and run check disk “chkdsk /f”? There is something wrong with the structure of that disk you are trying to capture.
-
@george1421 I did not let the full chkdsk complete. I tried method 3 in that guide which says to try to capture the image as soon as chkdsk reboots the system. I will try to let chkdsk run through to completion and see if that works
-
@george1421 I ran a full chkdsk on that machine, and I still received the same error.
To see if the issue was isolated to that machine I grabbed another one, inventoried it, and ran a capture task without issue. I think we can safely call this issue solved at this point.
To all interested: following the guide below solved my initial NFS issue with my QNAP:
https://forums.fogproject.org/topic/10973/add-a-nas-qnap-ts-231-as-a-storage-node-fog-v1-4
Thanks to everyone for their help with this project!