Chainloading failure: Toshiba Tecra C40-C UEFI & Samsung SSD MZNLF128HCHP-000
-
I have not changed my Fog server’s IP.
@Wayne-Workman said in Chainloading failure: Toshiba Tecra C40-C UEFI & Samsung SSD MZNLF128HCHP-000:
If you go to a web browser and visit:
X.x.x.x/fog/service/ipxe/boot.php?mac=mac:with:colons
What do you get? Copy/paste. Replace the Xs with your FOG server's IP and the mac:with:colons with the problem computer's MAC.
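For example (hypothetical addresses purely for illustration; substitute your own server IP and the Tecra's MAC):
curl "http://10.1.1.50/fog/service/ipxe/boot.php?mac=00:11:22:33:44:55"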
I will do that tomorrow after my kids' swim meet. Sorry for the late reply. I had to get my kids and head home.
Thank you, Wayne, for all of your help. -
@mabarton Let's circle back and confirm whether you can capture and deploy to these Toshiba systems now. If that is working, then we can focus on the UEFI exit issue. Initially I thought the two issues were connected, but the picture with the FTP error pointed to a separate issue.
-
@george1421 I began an image upload last night at 11:00 but I had to go home. I am at a swim meet now, but I will go to the school afterward, check, and let you know.
I want to thank you and everyone for helping to solve my problem.
-
@mabarton Excellent, I hope for success on this part. The second issue should (can) get worked out too. I'm focusing on something Tom said in the initial post, but let's take it one issue at a time.
We are here to help and promote the FOG Project. It will be your turn soon to do it in your world.
-
@george1421 said in Chainloading failure: Toshiba Tecra C40-C UEFI & Samsung SSD MZNLF128HCHP-000:
@Wayne-Workman I guess I have to ask a silly question, why is it trying to use the /home/fog directory for image capture? That’s a bit unusual.
I am not sure, because before this it used /images.
I did a sudo apt-get remove and wonder if it deleted some directories/repositories? Is that possible?
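One way to see exactly what that remove took out (assuming a Debian/Ubuntu server, where apt keeps a history log) is:
grep -i -B1 -A3 remove /var/log/apt/history.log    # show the Remove entries with a little surrounding context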
-
@mabarton It’ll still use /images. It’s just that ftp users need a home directory. Call it just passing through.
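If the home directory for the fog service account went missing, recreating it is usually all it takes (a sketch, assuming the account is named fog; adjust the name if your install differs):
sudo mkdir -p /home/fog         # recreate the missing home directory the FTP login needs
sudo chown fog:fog /home/fog    # typical ownership for the service account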
-
@mabarton Just so you understand FOG is a complex orchestration of a number of other open source projects. All must work together to produce an effective deployment.
FOG uses PXE, TFTP, FTP, Partclone and NFS to capture and deploy image files. The FOS engine (the software that is downloaded to the target computer to capture and deploy images) moves files over (unix) NFS connections (akin to an MS Windows file share) to the /images/dev directory on the FOG server. Once the upload is complete and the image information has been updated in the FOG database, the files are moved from the upload directory in /images/dev/<mac_address> to the /images/<image_name> directory. The FOS engine could do this over NFS, but that would mean reading the entire image back into the FOS engine's memory and then writing it to the proper directory on the FOG server. Instead, the FOG developers are brilliantly using the FTP service on the FOG server to move the files about directly on the FOG server: once the files are captured, the FOS engine connects to the FOG server over the FTP protocol to direct the file moves. (This is the part you ran into yesterday.)

The FOS engine must log into the FTP service using the fog (user) maintenance account. In your case the fog (user) home directory did not exist, so Linux blocked the fog (user) login via the FTP service. The files were on the FOG server; they just couldn't be moved to their final destination or recorded in the database as a completed upload. Once you created the fog (user) home directory, the FOS engine should have been able to move the files from /images/dev to /images/<image_name>.
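A quick way to confirm that FTP leg is healthy (a sketch; 10.1.1.50 stands in for your FOG server's IP, and the password is whatever is set for the fog service account on the server) is to log in as the fog account from any Linux box and list the image directories:
lftp -u fog 10.1.1.50 -e "ls /images; ls /images/dev; bye"    # prompts for the fog account's password, then lists both directories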
I know that was a long (and somewhat tedious) explanation, but I hope you understand a little bit more about the sequence of events.
-
@george1421 iPXE, Apache, in some cases ISC-DHCP or dnsmasq, MySQL, MariaDB, PHP, PartImage, probably others I’m not thinking about.
-
httpd php php-cli php-common php-gd mariadb mariadb-server tftp-server nfs-utils vsftpd net-tools wget xinetd tar gzip make m4 gcc gcc-c++ lftp curl php-mcrypt php-mbstring mod_ssl php-fpm php-process
And others that are not listed, which are downloaded directly rather than installed through the repo manager.
-
You guys make my head explode. Lol
It amazes me the amount of knowledge both of you have about Fog. Someday, I hope to have a fraction of that knowledge.
I will be at the school in about 2 hours and will check if the image uploaded.
Another question…does it matter where you install Fog and its subsequent updates? For example, do you have to be in /root or can you be in / or another location?
I ask this because I see references to /root/fogproject, and either I do not have access to /root or nothing exists in the directory. Also, I am concerned that I messed up the install and that is the reason I am having this issue. -
@mabarton It doesn’t really matter where the installation files are kept, but I recommend /root/git/fogproject or /root/svn/trunk generally. When running the installer you MUST have root properly sourced - meaning you need to BE root. In most cases,
sudo -i
will source root properly.
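For reference, a typical sequence looks like this (a sketch only; the clone location is just the convention mentioned above, and the repository is the FOG Project's public GitHub):
sudo -i                                                   # become root with root's environment sourced
mkdir -p /root/git && cd /root/git
git clone https://github.com/FOGProject/fogproject.git   # grab the FOG installer sources
cd fogproject/bin
./installfog.sh                                           # run the installer as root
-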
@Wayne-Workman I use sudo su and sudo -i, but I have never used /root because when I use ls I see nothing in /root.
-
@mabarton Well, that just means it’s empty.
-
@mabarton So, I am looking at the image of a 128 GB SSD and it shows that the image is 460 MiB. I am going to deploy this image to see what happens.
My concern is that the chainloading error means that Fog is not able to boot the SSD and so it is imaging something else?
-
@mabarton Booting and imaging are two completely separate things. The size is more a convenience thing to let you know how big of a hard disk you'll need. I suspect, however, that simply disabling network boot after imaging is complete will allow the system to boot properly.
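If you want to double-check what actually landed on the server, a quick look at the on-disk sizes (run on the FOG server itself) is enough:
du -sh /images/*    # show the size of each captured image directory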
-
@Tom-Elliott
That is excellent news! All of the other computers I have imaged with Fog have been legacy BIOS with HDDs, and all of their images were almost the same size as the drive. That was why I was so concerned with the size/chainloading error. Thank you, Tom. So, the chainloading error is more of a nuisance than a problem?
-
@mabarton So, is there a way that I can force the computers to shut down after the imaging process is complete? I am going to deploy this image to 100 computers and I don't want them to boot loop until I get back to work.
-
@mabarton The chainloading IS a problem, but I don't have a way to fix it for now, so it is also a nuisance: it means that, at least for the systems having this issue, they won't automatically perform tasks you might need. I think, however, this would be the perfect time to learn how to configure rEFInd so booting can actually happen without a problem.
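As a starting point (a sketch only; timeout and scanfor are documented rEFInd options, and the refind.conf FOG serves to UEFI clients usually lives under the FOG web directory, e.g. /var/www/fog/service/ipxe/refind.conf, so verify the path on your install):
# have rEFInd look only at internal disks and boot the default entry almost immediately
timeout 1
scanfor internal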
-
@mabarton When setting up the task, there is an option that asks whether to shut down after imaging.