SVN 5909 Optiplex 7440 capture Issue
-
That seemed to have done the trick for that particular issue. The next problem is that it now says “Preparing backup location…Failed”:
Failed to create image capture path (prepareUploadLocation)
Args passed: /images/f48e38d1c6c7
That folder exists in /images/dev, but not in /images.
-
@svalding said in SVN 5909 Optiplex 7440 capture Issue:
Failed to create image capture path (prepareUploadLocation)
Args passed: /images/f48e38d1c6c7
@Tom-Elliott sounds like an init typo?
-
@Wayne-Workman Seeing as I haven’t edited the inits in regard to this, that seems rather unlikely to me. Why? Because /images on the client, for an upload, uses /images/dev.
So failing to create /images/<macofhost> means exportfs is likely screwed up somewhere, or there’s no more space on the disk. Mind you, it only attempts to create the directory if it doesn’t already exist. So something else seems a little odd to me at this point. If I had to guess, the drive is full?
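For reference, the failing check amounts to something like the following. A minimal sketch (not the actual FOG init code), assuming the path reported in the error above:
```
# Hypothetical illustration of the prepareUploadLocation check:
# the capture path is only created if it does not already exist, so a failed
# mkdir points at a broken export or a full disk rather than an init typo.
imagePath="/images/f48e38d1c6c7"   # /images/<macofhost>, NFS-mounted from the server
if [ ! -d "$imagePath" ]; then
    mkdir -p "$imagePath" || echo "Failed to create image capture path (prepareUploadLocation)"
fi
```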
-
@Tom-Elliott ok then.
@svalding
Please provide the output of these four commands:
showmount -e 127.0.0.1
cat /etc/exports
df -h
ls -lahRt /images/dev
-
```
root@Slipstream:~# showmount -e 127.0.0.1
Export list for 127.0.0.1:
/images/dev *
/iscsi      *
root@Slipstream:~# cat /etc/exports
/images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
root@Slipstream:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        29G  4.4G   23G  17% /
udev             10M     0   10M   0% /dev
tmpfs           794M   89M  705M  12% /run
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           397M  4.0K  397M   1% /run/user/116
/dev/sdb1       591G  244G  318G  44% /iscsi
tmpfs           397M     0  397M   0% /run/user/0
root@Slipstream:~# ls -lahRt /images/dev
/images/dev:
total 20K
drwxrwxrwx 22 fog     root    4.0K Jul 15 15:05 ..
drwxrwxrwx  5 netserv netserv 4.0K Jul 15 14:36 .
drwxrwxrwx  2 netserv netserv 4.0K Aug  6  2015 501ac5fe4d07
drwxrwxrwx  2 root    root    4.0K Apr 14  2015 001422d93dfd
drwxrwxrwx  2 root    root    4.0K Mar  4  2015 0025649960e7
-rwxrwxrwx  1 netserv netserv    0 Oct 10  2014 .mntcheck

/images/dev/501ac5fe4d07:
total 595M
drwxrwxrwx 5 netserv netserv 4.0K Jul 15 14:36 ..
-rwxrwxrwx 1 root    root    2.0M Aug  6  2015 d1p4.img
-rwxrwxrwx 1 root    root    2.5M Aug  6  2015 d1p3.img
-rwxrwxrwx 1 root    root     13M Aug  6  2015 d1p2.img
-rwxrwxrwx 1 root    root    238M Aug  6  2015 d1p1.img
-rwxrwxrwx 1 netserv netserv  18K Aug  6  2015 d1.mbr
-rwxrwxrwx 1 netserv netserv    0 Aug  6  2015 d1.original.swapuuids
drwxrwxrwx 2 netserv netserv 4.0K Aug  6  2015 .
-rwxrwxrwx 1 netserv netserv 339M Aug  6  2015 d1p4.img.000
-rwxrwxrwx 1 netserv netserv   61 Aug  6  2015 d1.original.fstypes
-rwxrwxrwx 1 netserv netserv    6 Aug  6  2015 d1.fixed_size_partitions
-rwxrwxrwx 1 root    root     746 Aug  6  2015 d1.minimum.partitions
-rwxrwxrwx 1 root    root     792 Aug  6  2015 d1.original.partitions

/images/dev/001422d93dfd:
total 1.5G
drwxrwxrwx 5 netserv netserv 4.0K Jul 15 14:36 ..
-rwxrwxrwx 1 root    root    1.5G Apr 14  2015 d1p2.img
drwxrwxrwx 2 root    root    4.0K Apr 14  2015 .
-rwxrwxrwx 1 root    root    8.2M Apr 14  2015 d1p1.img
-rwxrwxrwx 1 root    root     512 Apr 14  2015 d1.mbr

/images/dev/0025649960e7:
total 4.4G
drwxrwxrwx 5 netserv netserv 4.0K Jul 15 14:36 ..
-rwxrwxrwx 1 root    root    4.4G Mar  4  2015 d1p2.img
drwxrwxrwx 2 root    root    4.0K Mar  4  2015 .
-rwxrwxrwx 1 root    root    8.2M Mar  4  2015 d1p1.img
-rwxrwxrwx 1 root    root     512 Mar  4  2015 d1.mbr
root@Slipstream:~#
```
-
Is this accurate? If so, what is currently exported doesn’t match what the config file says should be exported: the /images export from /etc/exports (quoted below) is missing from the live export list.
```
Export list for 127.0.0.1:
/images/dev *
/iscsi      *
root@Slipstream:~# cat /etc/exports
/images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
```
-
There are also 3 images present in /images/dev. This could be due to an FTP credentials issue, or /images just being unavailable for whatever reason.
What’s up with the iscsi export?
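If the live export table has simply drifted out of sync with /etc/exports, re-exporting may be enough to bring /images back; a minimal sketch, assuming a standard nfs-kernel-server/nfs-utils install:
```
exportfs -ra               # re-read /etc/exports and re-export everything listed there
showmount -e 127.0.0.1     # verify that /images and /images/dev both appear now
```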
-
You are noticing the same thing I am. I’m usually not the maintainer of this server, but the guy who is is on an extended vacation, so I’m covering for him. I know we are using Open-iSCSI to connect to storage on one of our SAN devices to store images.
I don’t know if he had some kind of symlink going on between /iscsi and /images, but that would be my guess as to what is going on here.
Sorry for bothering the list with what seems like a pretty basic issue. When it comes to troubleshooting Linux issues, I’m definitely not the greatest. We should have some documentation in our ticketing system from when this was set up describing what he has going on, so I’ll have a look at that. But it definitely seems like /images not showing up is the cause of our problem.
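A few quick checks would show whether /images really is a symlink or a mount into the iSCSI storage; nothing FOG-specific, just standard coreutils/util-linux commands:
```
ls -ld /images        # a symlink shows up as "images -> /iscsi/..."
readlink -f /images   # resolves the final target if it is a symlink
findmnt /images       # shows whether a filesystem is mounted at /images
```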
-
@svalding I could be wrong, but I don’t think /iscsi is the issue. If that’s all I have to go on, my guess is that this is an older configuration and you’ve since updated the exports file?
Can you try:
sudo service nfs-kernel-server restart
#if Ubuntu/Debian based
Or:
service {rpcbind,nfsd} restart
#if RedHat based
Then get us the output of:
showmount -e 127.0.0.1
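On a systemd-based install the same restart is usually done through systemctl; a hedged equivalent (unit names vary by distro):
```
systemctl restart nfs-kernel-server    # Debian/Ubuntu
systemctl restart rpcbind nfs-server   # RHEL/CentOS 7+
showmount -e 127.0.0.1                 # confirm the exports took effect
```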
-
```
root@Slipstream:~# service nfs-kernel-server restart
root@Slipstream:~# showmount -e 127.0.0.1
Export list for 127.0.0.1:
/iscsi/dev *
/iscsi     *
root@Slipstream:~#
```
-
Progress!
I fired up a machine and started a capture after running that last command, and it got past that issue.
Now it is just kind of hanging out on “Saving original disk/parts UUIDs” with a blinking cursor underneath. I’m going to let it sit there for the time being and see if it actually goes into Partclone or not.
-
@svalding said in SVN 5909 Optiplex 7440 capture Issue:
Now it is just kind of hanging out on Saving original disk/parts UUIDs
It probably won’t. You probably need to do a debug capture and run
fixparts /dev/sda
and then try to capture with the fog command.
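In practice that means scheduling the capture as a debug task, booting the client into it, and running roughly the following at the debug shell (a sketch; fixparts and the fog script are the two commands named above):
```
fixparts /dev/sda    # repair the MBR partition table (e.g., remove stray GPT data)
fog                  # re-run the scheduled capture task from the debug shell
```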
-
Using the info from here: https://forums.fogproject.org/topic/7987/will-not-capture-windows-10-image/4 it seems that I have a successful capture happening now!
You guys are awesome. Thanks for all your help.