Is this a 7400 or an E7400? I assume the former since you said the device is new. I just received a 7400 2-in-1 today that I was able to get into the FOG menu by using one of these:
https://www.dell.com/en-us/shop/dell-adapter-usb-c-to-ethernet-pxe-boot/apd/470-abnd/pc-accessories
I had to enable booting through Thunderbolt to use this. It may be that you can only PXE boot with the adapter I listed; I know we had trouble with some USB-C Precision laptops that had to use the adapter above. Then there is Microsoft and their Surface. Also, mine had an NVMe SSD, so I had to switch the SATA mode from RAID to AHCI to get FOG to see the drive.
I am using 1.5.6 and bzImage Version: 4.19.48.
Best posts made by jflippen
-
RE: New latitude E7400. No internal NIC, Boot to USB-C Puck NIC. Gets IP from DHCP but does not connect to Fog.
-
RE: Option to delete files when deleting multiple images / snapins.
@Tom-Elliott If that is the case, why even include "delete image" on the list-all page? I can’t imagine wanting to delete multiple images but keep their files. I understand wanting to make something idiot-proof, but having a delete option that doesn’t delete the files seems unintuitive to me.
-
RE: Postdownload Scripts
Like @Tom-Elliott said, the initial postdownload script always runs. However, you can use case or if statements to either call a function or call another script. Here is my current postdownload script, which is based on one of @george1421's wonderful driver injection scripts, found here
#!/bin/bash
. /usr/share/fog/lib/funcs.sh
[[ -z $postdownpath ]] && postdownpath="/images/postdownloadscripts/"
# Only run the driver injection for my golden image VMs
if [ $img == "Win7BaseVM" ] || [ $img == "Win10BaseVM" ]; then
    case $osid in
        5|6|7|9)
            # Windows OS IDs: mount the Windows partition and run the post scripts
            clear
            [[ ! -d /ntfs ]] && mkdir -p /ntfs
            getHardDisk
            if [[ -z $hd ]]; then
                handleError "Could not find hdd to use"
            fi
            getPartitions $hd
            for part in $parts; do
                umount /ntfs >/dev/null 2>&1
                fsTypeSetting "$part"
                case $fstype in
                    ntfs)
                        dots "Testing partition $part"
                        ntfs-3g -o force,rw $part /ntfs
                        ntfsstatus="$?"
                        if [[ ! $ntfsstatus -eq 0 ]]; then
                            echo "Skipped"
                            continue
                        fi
                        if [[ ! -d /ntfs/windows && ! -d /ntfs/Windows && ! -d /ntfs/WINDOWS ]]; then
                            echo "Not found"
                            umount /ntfs >/dev/null 2>&1
                            continue
                        fi
                        echo "Success"
                        break
                        ;;
                    *)
                        echo " * Partition $part not NTFS filesystem"
                        ;;
                esac
            done
            if [[ ! $ntfsstatus -eq 0 ]]; then
                echo "Failed"
                debugPause
                handleError "Failed to mount $part ($0)\n Args: $*"
            fi
            echo "Done"
            debugPause
            . ${postdownpath}fog.deletelog
            . ${postdownpath}fog.drivers
            #. ${postdownpath}fog.ad
            umount /ntfs
            ;;
        *)
            echo "Non-Windows Deployment"
            debugPause
            return
            ;;
    esac
fi
You can see that I have an if statement at the beginning so his driver injection script only runs when the image name matches one of my golden image VMs.
-
RE: New latitude E7400. No internal NIC, Boot to USB-C Puck NIC. Gets IP from DHCP but does not connect to Fog.
@buercky So, I see one thing in your screenshot that concerns me… you have two option 66 entries, and one of them is set to ipxe.efi instead of your server IP. After applying the options from the guide, mine boil down to option 66 pointing at the FOG server's IP and option 67 set to the boot file.
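In general terms (the IP below is just a placeholder), the two options should end up as:

Option 066 (Boot Server Host Name) : 10.0.0.5        <- your FOG server's IP
Option 067 (Bootfile Name)         : undionly.kpxe   <- BIOS clients; ipxe.efi for UEFI clients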
-
RE: PXE Boot Dell Optiplex 7050 fails in UEFI works in Legacy
@dholland Hey, I think for UEFI booting you have to set option 67 to ipxe.efi. Also, I would give this wiki page a read… I just set it up last month and it’s amazing!
https://wiki.fogproject.org/wiki/index.php?title=BIOS_and_UEFI_Co-Existence
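The gist of what the guide sets up: the DHCP server looks at the client architecture (option 93) and hands BIOS machines undionly.kpxe and UEFI machines ipxe.efi. A rough sketch of the idea in ISC dhcpd syntax (placeholder subnet and server IP; the exact syntax for other DHCP servers differs):

option arch code 93 = unsigned integer 16;      # RFC 4578 client system architecture

subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;
    next-server 10.0.0.5;                       # FOG/TFTP server (placeholder)
    if option arch = 00:07 or option arch = 00:09 {
        filename "ipxe.efi";                    # UEFI clients
    } else {
        filename "undionly.kpxe";               # legacy BIOS clients
    }
}

-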
RE: Replication: Only working on 1/5 nodes
@george1421 @Tom-Elliott Is this a logic error in this method?
private static function _filesAreEqual($size_a, $size_b, $file_a, $file_b, $avail)
{
    if ($size_a != $size_b) {
        return false;
    }
    if (false === $avail) {
        if ($size_a < 1047685760) {
            $remhash = md5_file($file_b);
            $lochash = md5_file($file_a);
            return ($remhash == $lochash);
        }
        return file_exists($file_b) && file_exists($file_a);
    }
    $hashLoc = self::getHash($file_a);
    $hashRem = $file_b;
    $hashCom = ($hashLoc == $hashRem);
    return $hashCom;
}
It looks like it skips the md5sum for files larger than 1 GB and uses the encode_64 value from getHash instead. The method calls getHash for $file_a but not for $file_b, and I don’t see anywhere in the PHP file that hashes $file_b at all. Wouldn’t that mean $hashCom ends up comparing a hashed file against a non-hashed one?
-
RE: Dell Latitude 5590 issues after imaging
@george1421 VirtualBox does not. @lschnider Make sure you don’t turn off EFI in VirtualBox until after the VM has shut down and just before uploading. Otherwise you have to go back to your snapshot and do the sysprep again.
Then, as @george1421 said, you get to play musical chairs with your boot files, using undionly.kpxe for uploading and ipxe.efi for deploying.
I set up the DHCP rules on one of our DHCP servers (the only one running Server 2012…) and I can vouch that it saves a LOT of headaches.
-
RE: Replication problems 1.5.4 - always copying
-
Your idea sounds like a good compromise. Something like a "delete old file on new upload" checkbox that is checked by default, maybe?
-
Okay, that makes sense. I forgot that the images folder only syncs folders tied to an image in the table.
-
I tried your update and that fixed the file getting set to 0 KB. I’ll test when I can, as FOG has a lot of potential in my eyes, and I know most if not all of you are doing this in your spare time, which means a lot. I’m hoping someday I’ll have time to learn PHP and contribute more to the project, but that’s a battle for another day.
-
Posted in feature request and tagged you and this post for reference.
-
Latest posts made by jflippen
-
RE: Windows 10 Error on deployment only on 1st attempts...
@Sebastian-Roth That’s fine. Things have been pretty swamped over here so I had to put the issue on the back burner. The issue doesn’t always show up, so troubleshooting has been time consuming.
-
RE: Windows 10 Error on deployment only on 1st attempts...
@george1421 Here is what shows up as mounted on the storage node:
Firewalld is stopped and disabled, and SELinux is in permissive mode (per sestatus).
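For anyone reproducing the checks, these are the usual commands (run on the storage node):

systemctl status firewalld    # confirms the service is stopped and disabled
sestatus                      # confirms SELinux is in permissive mode
mount | grep images           # shows what is mounted for the image store

-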
RE: Windows 10 Error on deployment only on 1st attempts...
@george1421 I had a meeting this afternoon and will be off the next few days, so I won’t be able to test this until Monday. I will post results when I get in.
-
RE: Windows 10 Error on deployment only on 1st attempts...
@george1421 Okay. I was issuing the debug task from the web GUI instead of deploying an image with debugging.
So, when I try mounting the volume before running the fog command, I get denied:
However, when I then run the fog command, the imaging goes without a hitch:
After the imaging completes, the share is mapped with no problem.
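The kind of manual test I mean is roughly this (storage node IP from earlier in the thread; the mount options are approximately what FOG itself uses):

# in the debug deploy shell on the client
mkdir -p /images
mount -o nolock,proto=tcp 10.59.181.12:/images /images    # this is the step that gets denied
fog                                                        # kicking off the actual task afterwards works fine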
-
RE: Windows 10 Error on deployment only on 1st attempts...
@george1421 Here is a pic of the first command… the second command came back empty on the Surface:
The storage node IP is 10.59.181.12.
-
RE: Windows 10 Error on deployment only on 1st attempts...
@george1421 They are just load sharing. The devices being imaged are on a different subnet than the storage nodes and the FOG server. The network team said there should be no blocking between sites on the firewall or in the switch configs.
-
RE: Windows 10 Error on deployment only on 1st attempts...
@Sebastian-Roth Here’s what I get on my Surface when I try to run the fog command in a debug session. It doesn’t even make it past the first step.
-
RE: Windows 10 Error on deployment only on 1st attempts...
@Sebastian-Roth Maybe… I know our network team has been tightening up security. Our main FOG server and our storage nodes are all set up with link aggregation (NIC teaming with LACP) on dual gigabit ports. However, the LAG setup didn’t seem to have issues prior to the 1.5.6 update.
-
Windows 10 Error on deployment only on 1st attempts...
So I have had the following issue for the last few months and haven’t been able to figure out what’s going on. It seems to affect only my Windows 10 images, including ones I have just created… and only on FOG’s first attempt to image the machine. After the client machine reboots from the error, it images just fine. The error is that it is unable to locate the image store (/bin/fog.download).
Based on my searches it seems to be an FTP issue… I did notice that somehow my deployment server now has both a “fog” and a “fogproject” user account that it uses. I have to use one for my storage node settings and the other for the FTP username and password under the TFTP settings. If I try to use the same account for both, I can no longer update my kernel and other functions break.
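For reference, a quick way to see which account an FTP login actually accepts (placeholder password; /opt/fog/.fogsettings is where the installer keeps the service account password on my install):

# run on the FOG server itself
curl --user fogproject:'PASSWORD_HERE' ftp://127.0.0.1/    # lists the FTP root if the login works
grep ^password /opt/fog/.fogsettings                       # installer-generated password for the service account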
The only thing I can think of is that somehow things got screwy when I went from being on some test builds for troubleshooting last year back to a stable build when 1.5.6 came out.
Any suggestions for fixing this are greatly appreciated.
-
RE: New latitude E7400. No internal NIC, Boot to USB-C Puck NIC. Gets IP from DHCP but does not connect to Fog.
@buercky Awesome! Glad to hear it’s working now.