Chainloading failure: Toshiba Tecra C40-C UEFI & Samsung SSD MZNLF128HCHP-000
-
-
@mabarton I hate to be the bearer of bad news but all of your pictures are upside down and it makes it very difficult for me to read them.
-
@Wayne-Workman I’m aware. Even when I flip them, they come in upside down. It’s fun :(
-
@mabarton In the web interface, go to FOG Configuration - iPXE boot menu. Check “no menu” and see what happens. If it fails, then undo it. But post a pic of the failure. It MIGHT have more info.
-
@Wayne-Workman
Fixed the upside-down issue. Working on the config change.
-
With menu off
-
Here is something interesting.
I am able to image the computer with Clonezilla.
Right now I am attempting to deploy the image to a different computer.
-
Have you ever changed your FOG server’s IP address?
If you go to a web browser and visit:
X.x.x.x/fog/service/ipxe/boot.php?mac=mac:with:colons
What do you get? Copy/paste the output. Replace the Xs with your FOG server’s IP and mac:with:colons with the problem computer’s MAC address.
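For example, from any machine that can reach the FOG server (the IP and MAC below are placeholders, not real values):
# ask boot.php what it would hand this host at PXE boot time
curl 'http://x.x.x.x/fog/service/ipxe/boot.php?mac=aa:bb:cc:dd:ee:ff'
A working reply is a plain-text iPXE script (it normally starts with #!ipxe).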
-
I have not changed my Fog server’s IP.
@Wayne-Workman said in Chainloading failure: Toshiba Tecra C40-C UEFI & Samsung SSD MZNLF128HCHP-000:
If you go to a web browser and visit:
X.x.x.x/fog/service/ipxe/boot.php?mac=mac:with:colons
What do you get? Copy/paste the output. Replace the Xs with your FOG server’s IP and mac:with:colons with the problem computer’s MAC address.
I will do that tomorrow after my kids’ swim meet. Sorry for the late reply. I had to get my kids and head home.
Thank you, Wayne, for all of your help.
-
@mabarton Let’s circle back and confirm whether you can now capture and deploy to these Toshiba systems. If that is working, then we can focus on the UEFI exit issue. Initially I thought the two issues were connected, but the picture with the FTP error pointed to a different issue.
-
@george1421 I began an image upload last night at 11:00, but I had to go home. I am at a swim meet now, but I will go to the school afterward, check it, and let you know.
I want to thank you and everyone for your help in solving my problem.
-
@mabarton Excellent, I hope for success on this part. The second issue should (can) get worked out too. I’m focusing on something Tom said in the initial post, but let’s take it one issue at a time.
We are here to help and promote the FOG Project. It will be your turn soon to do it in your world.
-
@george1421 said in Chainloading failure: Toshiba Tecra C40-C UEFI & Samsung SSD MZNLF128HCHP-000:
@Wayne-Workman I guess I have to ask a silly question, why is it trying to use the /home/fog directory for image capture? That’s a bit unusual.
I am not sure, because before this it used /images as the location.
I did a sudo apt-get remove and wonder if it deleted some directories/repositories??? Is that possible?
-
@mabarton It’ll still use /images. It’s just that ftp users need a home directory. Call it just passing through.
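If that home directory ever goes missing again, a minimal sketch of recreating it (assuming the service account is named fog, the name the installer uses by default):
# recreate the missing home directory and hand it to the fog service account
sudo mkdir -p /home/fog
sudo chown fog:fog /home/fog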
-
@mabarton Just so you understand FOG is a complex orchestration of a number of other open source projects. All must work together to produce an effective deployment.
FOG uses PXE, TFTP, FTP, Partclone and NFS to capture and deploy image files. The FOS engine (the software that is downloaded to the target computer to capture and deploy images) moves files over (unix) NFS connections (akin to an MS Windows file share) to the /images/dev directory on the FOG server.
Once the upload is complete and the image information has been updated in the FOG database, the files are moved from the upload directory in /images/dev/<mac_address> to the /images/<image_name> directory. The FOS engine could do this over NFS, but that would mean reading the entire image back into the FOS engine’s memory and then writing it to the proper directory on the FOG server. Instead, the FOG developers are brilliantly using the FTP service on the FOG server to move the files about directly on the FOG server: once the files are captured, the FOS engine connects to the FOG server over the FTP protocol to direct the file moves. (This is the part you ran into yesterday.)
The FOS engine must log into the FTP service using the fog (user) maintenance account. In your case the fog (user) home directory did not exist, so Linux blocked the fog (user) login via the FTP service. The files were on the FOG server; they just couldn’t reach their final destination or be recorded in the database as a completed upload. Once you created the fog (user) home directory, the FOS engine should have been able to move the files from /images/dev to /images/<image_name>.
I know that was a long (and somewhat tedious) explanation, but I hope you understand a little bit more about the sequence of events.
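If you want to spot-check each link in that chain on your FOG server, here is a rough sketch (assuming the default paths and the default fog service account; adjust to your setup):
# NFS: /images and /images/dev should show up in the export list
showmount -e localhost
# captures land here first, in a directory named after the client's MAC address
ls -l /images/dev
# FTP: the fog account needs a valid home directory before it can log in
getent passwd fog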
-
@george1421 iPXE, Apache, in some cases ISC-DHCP or dnsmasq, MySQL, MariaDB, PHP, PartImage, probably others I’m not thinking about.
-
httpd php php-cli php-common php-gd mariadb mariadb-server tftp-server nfs-utils vsftpd net-tools wget xinetd tar gzip make m4 gcc gcc-c++ lftp curl php-mcrypt php-mbstring mod_ssl php-fpm php-process
And, others that are not listed which are directly downloaded and not installed through the repo manager.
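If you ever want to confirm the main ones are present on a RHEL/CentOS-style server (the names above are yum package names; this is only an illustrative spot check, not the full list):
# rpm reports "package ... is not installed" for anything missing
rpm -q httpd php mariadb-server tftp-server nfs-utils vsftpd xinetd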
-
You guys make my head explode. Lol
It amazes me the amount of knowledge both of you have about Fog. Someday, I hope to have a fraction of that knowledge.
I will be at the school in about 2 hours and will check if the image uploaded.
Another question…does it matter where you install Fog and its subsequent updates? For example, do you have to be in /root or can you be in / or another location?
I ask this because I see references to /root/fogproject, and I either do not have access to /root or nothing exists in that directory. Also, I am concerned that I messed up the install and that is the reason I am having this issue.