FOG: 1.5.4 -> How to setup NAS - Synology DiskStation as Master Node
-
@jeremyvdv Well then change to a different username: FOG web UI -> Storage -> edit the Storage node settings, scroll down to the bottom and edit username and password.
-
Actually, in my storage node configuration in the FOG web interface the user was of course root.
But after changing it, I still get the same error message with my new user.
Any idea?
-
@jeremyvdv The IP of the client (that it receives when imaging) is possibly in the NAS FTP blocklist due to too many incorrect login attempts.
-
No, it’s not that … and it does upload the files during the creation of the image.
It’s really at the end of the upload that it crashes.
And the task does not stop in the web interface.
-
@jeremyvdv said in FOG: 1.5.4 -> How to setup NAS - Synology DiskStation as Master Node:
No, it’s not that … and it uploads the files during the creation of the image
The upload of the image doesn’t happen via FTP but using NFS. So those are two different things. FTP is only used at the very end to move/rename the image.
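To make the split concrete: the capture writes into `<ftp path>/dev/<mac_address>` over NFS, and the final FTP step renames that directory to the image's own folder. Here is a minimal sketch of that end-of-capture move using Python's ftplib; the host, MAC address, and credentials are placeholders, and this only illustrates the idea, it is not FOG's actual code:

```python
from ftplib import FTP
from posixpath import join as pjoin

def capture_move_paths(ftp_path, mac, image_name):
    """Source/destination of the end-of-capture move: dev/<mac> -> <image_name>."""
    return pjoin(ftp_path, "dev", mac), pjoin(ftp_path, image_name)

def finalize_capture(host, user, password, ftp_path, mac, image_name):
    """Rename the freshly captured dev/<mac> directory via FTP, as FOG does at
    the very end of a capture. A blocked client IP or a wrong FTP Path fails here."""
    src, dst = capture_move_paths(ftp_path, mac, image_name)
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.rename(src, dst)

if __name__ == "__main__":
    # Placeholder MAC; "test10" is the image name seen later in this thread.
    print(capture_move_paths("/images", "00aabbccddee", "test10"))
```

So an FTP login block only ever breaks this last rename step, never the NFS upload that came before it.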
From what I see in the last picture it looks like you are using a Synology NAS. In a German forum I just read about someone having the same “Maximum number of tries exceeded” error message and it turned out that his Synology had added a block for his IP address.
You should be able to find the client IP address in the block list of your Synology NAS. See instructions here: https://mariushosting.com/ip-block-list/
-
Indeed, it was blocked in the Synology security settings. Thank you. But now I still have a problem with the upload: my PC goes all the way to the end, but in the web console the task does not stop, and I get an image error from ./bin/upload.
Thank you
-
Hello,
I admit that I am desperate.
I cannot fix this error message.
Please, does anyone have an idea?
Thank you
-
@jeremyvdv Please test the FTP connection using the following information:
- Server: 10.1.5.8
- Username: fogproject
- Password: the password you set in FOG web UI -> Storage -> edit the Storage node -> Management Password
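That check can also be scripted. Below is a hedged sketch with Python's ftplib: it logs in and looks for a directory whose listing contains the dev/ subdirectory FOG creates. The candidate paths and the placeholder password are assumptions; adjust them to what your NAS actually shows.

```python
from ftplib import FTP

def has_dev_subdir(entries):
    """A FOG image store is recognizable by its dev/ subdirectory."""
    return any(e.rstrip("/").rsplit("/", 1)[-1] == "dev" for e in entries)

def find_images_dir(ftp, candidates):
    """Return the first candidate path whose listing contains dev/, else None."""
    for path in candidates:
        try:
            if has_dev_subdir(ftp.nlst(path)):
                return path
        except Exception:
            continue  # path does not exist on this server
    return None

if __name__ == "__main__":
    # Server and username from this post; the password is a placeholder.
    with FTP("10.1.5.8") as ftp:
        ftp.login("fogproject", "<management password>")
        print(find_images_dir(ftp, ["/images", "/images2", "/volume1/images2"]))
```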
Now see if you can find the directory where the images are being stored. Possibly that is in /volume1/... or something like that. When you’ve found the directory containing dev/308d9914a4e6, note down that path. Go back to FOG web UI -> Storage -> edit the Storage node and set this as FTP Path.
-
I’m sorry, I do not understand.
I have this config, with user fogproject and password fogproject. What do you want me to do as configuration?
Thank you
-
can someone help me?
^ thank you
-
I cannot understand.
When I capture the image … the files are stored correctly on the Synology. But I cannot find the path from FOG …
-
@jeremyvdv said in FOG: 1.5.4 -> How to setup NAS - Synology DiskStation as Master Node:
can someone help me?
^ thank you
Are you telling me that the storage node configuration you have is still creating the error message
ftp_put(/images/dev/308...)
because the storage node configuration clearly states /volume1/images2 for the FTP path. This should not happen. Do you happen to have more than one storage node configuration for this NAS? I can’t explain why it says /volume1/images2 yet the error message still shows the default. When you look on the NAS, are the files for sure in the
/volume1/images2/dev/<mac_address>
directory?
-
Yes, that’s right.
On the NAS the files are indeed in /volume1/images2/dev/<mac_address>.
-
@jeremyvdv Then I don’t understand how this picture was created in this post: https://forums.fogproject.org/topic/12168/fog-1-5-4-how-to-setup-nas-synology-diskstation-as-master-node/26 The path clearly states /images/dev/<mac_address>. I can understand if the FTP Path is different than your Image Path in the host definition, but that is not the case in your picture of the storage node configuration here: https://forums.fogproject.org/topic/12168/fog-1-5-4-how-to-setup-nas-synology-diskstation-as-master-node/28
Did you by chance uncheck the box that says Is Master Node on the default node (the real FOG server)? You can only have one master node in each storage group. You can have more than one storage group, but each storage group can only have one master.
-
Yes, I have disabled Is Master Node on the default storage node.
But I do not know what to do concretely …
My image is created fine, but the task stays stuck in a loop.
-
It’s driving me crazy … I still have the data from the captured image …
And I cannot download it … the capture task keeps looping …
-
@jeremyvdv Well I guess we have 2 things.
-
While this is not a fix, you can save the failed upload by manually moving the folder /volume1/Images2/dev/<mac_address> to /volume1/Images2/<image_name>. Understand this is not a fix, only a workaround.
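A sketch of that salvage move, demonstrated on a throwaway directory so it is safe to run anywhere; on the NAS the images root would be /volume1/Images2, and the MAC address and image name here are placeholders:

```python
import shutil
import tempfile
from pathlib import Path

def salvage_capture(images_root, mac, image_name):
    """Move a stuck capture from dev/<mac> to <image_name>.
    This is a workaround to keep the data, not a fix for the FTP problem."""
    src = Path(images_root) / "dev" / mac
    dst = Path(images_root) / image_name
    shutil.move(str(src), str(dst))
    return dst

if __name__ == "__main__":
    root = Path(tempfile.mkdtemp())               # stand-in for /volume1/Images2
    (root / "dev" / "00aabbccddee").mkdir(parents=True)
    print(salvage_capture(root, "00aabbccddee", "test10"))
```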
-
Do you have TeamViewer capabilities? I would like to see what you are seeing, because it’s impossible for it to happen the way you say it does. Understand I am not doubting you, I only don’t understand how it’s possible.
-
Yes, we can do a TeamViewer session.
Can you set a date and time?
If you can, I will give you a clear diagram of my install.
-
@jeremyvdv If you are ready now, I am. We can switch over to FOG IM (look for the bubble in the upper right corner of this forum) to send the details of the TeamViewer connection.
Here is what I will need you to do before we connect, to make things go quicker:
- Schedule a capture task with a target computer, but before you hit the submit button, check the debug check box, then submit the task.
- PXE boot the target computer.
- On the target computer press enter to clear the several screens of text and end up at the FOS Linux command prompt.
- Key in ip addr show to get the IP address of the target computer.
- Key in passwd and reset root’s password to something simple like hello (no worries, the password will be reset when you next reboot the target computer).
- Now from your TeamViewer host computer, use PuTTY to ssh into the target computer using the IP address you collected in step 4 and the user root with the password you set in step 5.
That will prep us for a debugging session.
-
@george1421 After having a chat session with @jeremyvdv, which we had to cut short, I discovered some things.
I asked him to post the output of the kernel parameters from a debug session:
cat /proc/cmdline
loglevel=4 initrd=init.xz root=/dev/ram0 rw ramdisk_size=275000 web=http://10.16.3.129/fog/ consoleblank=0 rootfstype=ext4 mac=00:00:00:00:00:00 ftp=10.1.5.8 storage=10.1.5.8:/volume1/images2/dev/ storageip=10.1.5.8 osid=5 irqpoll hostname=LP0045 chkdsk=0 img=test10 imgType=n imgPartitionType=all imgid=5 imgFormat=5 PIGZ_COMP=-10 hostearly=1 pct=5 ignorepg=1 isdebug=yes type=up
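Those space-separated key=value parameters are easy to pull apart when you only care about a few of them. A small sketch (the cmdline string here is abridged from the output above):

```python
def parse_cmdline(cmdline):
    """Parse a kernel command line into a dict; bare flags map to True."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

# Abridged from the cmdline posted above.
CMDLINE = ("loglevel=4 web=http://10.16.3.129/fog/ ftp=10.1.5.8 "
           "storage=10.1.5.8:/volume1/images2/dev/ storageip=10.1.5.8 "
           "irqpoll img=test10 type=up")

params = parse_cmdline(CMDLINE)
print(params["ftp"])      # 10.1.5.8
print(params["storage"])  # 10.1.5.8:/volume1/images2/dev/
```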
Where we can see the storage path for NFS is
10.1.5.8:/volume1/images2/dev
but looking on the NAS via FTP there is no reference to /volume1/images2, it’s just /images2, yet there is data in the /images2/dev/<mac_address> directory. So NFS is working and FTP is not. If we look at what he posted as a screenshot of the NAS storage node,
We can see that he is clearly calling out
/volume1/images2
When he connected via FTP to the NAS, the path was /images2/dev/<mac_address>, like we see in the top picture here.
So what I’m thinking is that we leave the storage path at /volume1/images2 and set the FTP path to /images2; then it should work. I think NFS is looking at it from the filesystem view, whereas FTP is looking at the path from a logical view. They technically point to the same location; they just get there via two different paths.
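In other words, the Synology FTP daemon serves paths relative to the volume root, so the same directory has two names. A tiny sketch of that mapping; the /volume1 prefix is an assumption taken from this particular Synology's layout:

```python
def nfs_to_ftp_path(nfs_path, volume_prefix="/volume1"):
    """Strip the volume prefix the FTP server hides, e.g.
    /volume1/images2 (filesystem / NFS view) -> /images2 (FTP view)."""
    if nfs_path.startswith(volume_prefix + "/"):
        return nfs_path[len(volume_prefix):]
    return nfs_path

print(nfs_to_ftp_path("/volume1/images2"))  # /images2
```

This is why the NFS storage path keeps the /volume1 prefix while the FTP Path field should drop it.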