Best posts made by danieln
-
RE: [FOG 1.5.10] - Log Viewer displaying blank drop down menu
@Sebastian-Roth Thanks for the update! Again, this feature for us is more of a “nice to have” than an essential, so no immediate rush. I know all of y’all are busy!
-
RE: Images suddenly not replicating to storage nodes from Master Node
@sebastian-roth Thank you very much for the response. Sorry for the delay; I was on a time crunch for delivering this image and did not have enough time to continue troubleshooting. I ended up just deleting the image and recapturing, and it replicated afterwards. Hopefully it was just a fluke. But I know how to check the replication log file now!
Thanks again for taking the time to respond.
Latest posts made by danieln
-
RE: Assigning Snapins to Hosts via FOG API – Proper JSON Structure and Method?
@danieln Figured it out. Posting the answer here for posterity. This is the request that worked:
Request:
PUT http://10.15.0.2/fog/host/7700
JSON body:
{ "snapins": 1 }
It was "snapins" that I had to pass into the JSON body. I found this by just running a general GET request for snapins:
GET http://10.15.0.2/fog/snapin
(another total guess that ended up being correct), and in the JSON response it returned "snapins" with an array of objects, so I just passed that into the PUT request body and it worked. The Snapins are literally numbered in the order in which you create them, and you can reference them here to assign them.
Hope this is helpful to any future person who tries to do this!
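In case it saves someone a copy/paste, here is the whole thing as a rough curl sketch. The two token headers are the usual FOG API auth tokens (the values below are placeholders), and the host/snapin IDs are just the ones from my example above:
# Assign snapin ID 1 to host 7700 (token values are placeholders)
curl -X PUT \
  -H 'fog-user-token: YOUR_USER_API_TOKEN' \
  -H 'fog-api-token: YOUR_SYSTEM_API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{ "snapins": 1 }' \
  http://10.15.0.2/fog/host/7700
# List all snapins to find the ID you want to assign
curl -H 'fog-user-token: YOUR_USER_API_TOKEN' \
     -H 'fog-api-token: YOUR_SYSTEM_API_TOKEN' \
     http://10.15.0.2/fog/snapin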
-
Assigning Snapins to Hosts via FOG API – Proper JSON Structure and Method?
Server OS: Debian 10 Buster
FOG OS: 1.5.10
Hey all,
I’ve been integrating the FOG API with our existing management software, and I have been successful using the API to assign Images to hosts and then queue up deployment tasks. But I’m currently hitting a snag trying to assign a snapin to a host using the FOG API. I haven’t been able to determine the right JSON structure and request type for this action, and it doesn’t appear to be in the API documentation, so I’m really just kinda guessing at this point. I will describe below what I have tried so far:
Request:
PUT http://10.15.0.2/fog/host/7700/
JSON Body:
{ "taskTypeID": 11, "snapinID": 1 }
Response:
{ "id": "7700", "name": "f43909d8047b", "description": "Created by FOG Reg on May 26, 2022, 12:02 pm", "ip": "", "imageID": "529", "building": "0", "createdTime": "2022-05-26 12:02:49", "deployed": "2024-11-19 09:25:39", "createdBy": "fog", "useAD": "", "ADDomain": "", "ADOU": "", "ADUser": "", "ADPass": "", "ADPassLegacy": "", "productKey": "", "printerLevel": "", "kernelArgs": "", "kernel": "", "kernelDevice": "", "init": "", "pending": "", "pub_key": "", "sec_tok": "", "sec_time": "2024-09-26 11:11:04", "pingstatus": "<i class=\"icon-ping-down fa fa-exclamation-circle red\" data-toggle=\"tooltip\" data-placement=\"right\" title=\"Unknown\"></i>", "biosexit": "", "efiexit": "", "enforce": "", "primac": "f4:39:09:d8:04:7b", "imagename": "! HP Elitebook 840 G3", "hostscreen": { "id": null, "hostID": null, "width": null, "height": null, "refresh": null, "orientation": null, "other1": null, "other2": null }, "hostalo": { "id": null, "hostID": null, "time": null }, "inventory": { "id": "7700", "hostID": "7700", "primaryUser": "", "other1": "", "other2": "", "createdTime": "2022-05-26 12:03:01", "deleteDate": "0000-00-00 00:00:00", "sysman": "HP", "sysproduct": "HP EliteBook 840 G3", "sysversion": "", "sysserial": "5CG83651CN", "sysuuid": "caf1b9df-b277-11e8-97a3-6c1072018059", "systype": "Type: Notebook", "biosversion": "N75 Ver. 01.57", "biosvendor": "HP", "biosdate": "07/28/2022", "mbman": "HP", "mbproductname": "8079", "mbversion": "KBC Version 85.79", "mbserial": "PFKZU00WBBA9CO", "mbasset": "", "cpuman": "Intel(R) Corporation", "cpuversion": "Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz", "cpucurrent": "Current Speed: 2900 MHz", "cpumax": "Max Speed: 8300 MHz", "mem": "MemTotal: 7924528 kB", "hdmodel": "INTEL SSDSCKKF256G8H", "hdserial": "BTLA820625EQ256J", "hdfirmware": "LHFH03N", "caseman": "HP", "casever": "", "caseserial": "5CG83651CN", "caseasset": "", "memory": "7.56 GiB" }, "image": { "imageTypeID": "1", "imagePartitionTypeID": "1", "id": "529", "name": "! HP Elitebook 840 G3", "description": "", "path": "HPElitebook840G3", "createdTime": "2024-08-16 14:34:05", "createdBy": "fog", "building": "0", "size": "607121408.000000:12935051.000000:52981968896.000000:10500096.000000:5376049152.000000:176639.000000:723513344.000000:", "osID": "9", "deployed": "2024-11-18 16:16:26", "format": "5", "magnet": "", "protected": "0", "compress": "6", "isEnabled": "1", "toReplicate": "1", "srvsize": "27312695983", "os": {}, "imagepartitiontype": {}, "imagetype": {} }, "pingstatuscode": 6, "pingstatustext": "No such device or address", "macs": [ "f4:39:09:d8:04:7b", "38:ba:f8:c7:32:a1", "38:ba:f8:c7:32:a2", "3a:ba:f8:c7:32:a1", "38:ba:f8:c7:32:a5" ] }
It’s returning all the info about this host, including the Image ID, which I’ve been able to change dynamically with the API, but there doesn’t appear to be any field in the response that indicates a snapin association. It seems like my current JSON body might be incorrect, or perhaps I’m using the wrong request type or endpoint.
So, my questions to the developers are:
-
What is the correct JSON body to use in order to assign specific snapins to a host before deploying?
-
Should I be using a PUT or a POST request for this action?
Assigning an image to a host is typically done using a PUT request, so I assumed snapin assignments would follow the same convention, but this may not be the case.
-
Is there a specific endpoint that I should be using for assigning snapins to a host?
If possible, could you provide an example of the correct JSON body, endpoint, and HTTP method for assigning a snapin to a host?
This is something that can apparently be done via the API documentation. I have tried assigning both "10" and "11" as the taskTypeID and have had no luck.
Thank you so much for your time and assistance!
-
-
Question about Multicast Image Tasking Behavior with FOG API
Server OS: Debian 10 Buster
FOG OS: 1.5.10
Hey everyone,
I am currently using the FOG API to create multicast image deployment tasks for groups of 24 laptops at a time. I have a question about the behavior of these multicast tasks in case of failure:
If one of the laptops in the group fails during tasking or deployment, would the other 23 laptops wait indefinitely until the task is manually canceled? Or is there a mechanism that allows the rest of the group to continue or time out appropriately?
If so, I think unicast may be the way to go.
Thanks in advance!
-
RE: Using Snap Ins to run scripts post imaging...?
@JJ-Fullmer Thanks for the reply! This is helpful.
I feel like maybe the best way to go about it is to assign the Snap In to each host. Just to confirm, this will deploy the Snap In after each time that host is imaged, regardless of which image it receives?
I’m also thinking of separating my inventory into FOG Groups by computer model and then assigning the associated Image and Snap Ins at the group level.
-
Using Snap Ins to run scripts post imaging...?
FOG OS: 1.5.10
Server OS: Debian 10 Buster
I wrote a PowerShell script that will run a QC check on the computer’s hardware post-imaging.
In an ideal world, it would be incredible if I could just add it as a Snap In and have every image in the default storage group automatically get this Snap In after it images. Is this possible? Do you have to task Snap Ins on every instance, or can you assign them to a storage group?
If not, I will probably look into tasking it within Windows and baking it into the image, but was just curious!
Thanks in advance,
-
RE: Read ERROR: No Such File or Directory
@Tom-Elliott Yeah, that’s about the only other thing I can think of too. It’s a brand new Dell PowerEdge server with a new HD, but maybe the HD is just a lemon. I’ll swap out the HD, reinstall FOG, and see if that works.
Thanks again for all your input!
-
RE: Read ERROR: No Such File or Directory
Hi all,
Bumping this with some new findings, some custom code I’ve written to try to mitigate and prevent this issue, and another request to the community for support on this issue:
I have built a custom checksum script that checks all directories in /images for all *.img files on the Master node and compares them with the same .img files on each of my nodes; if the checksums do not match, it returns them as a printed list so I can manually remove them for the Master Node to re-propagate.
For reference (and the curious), here’s the script:
#!/bin/bash

# List of nodes to check
NODES=("10.15.0.3" "10.15.0.4" "10.15.0.5" "10.15.0.6")

# Create temporary files for missing and mismatched files
temp_missing=$(mktemp)
temp_mismatched=$(mktemp)

# Function to handle the comparison for a single node
check_node() {
    local node=$1
    local img=$2
    local master_checksum=$3

    echo "Checking $img on node $node..."
    local node_checksum=$(ssh -oStrictHostKeyChecking=no -l root "$node" "xxhsum '$img'" 2>/dev/null | awk '{print $1}')

    if [ -z "$node_checksum" ]; then
        echo "File $img does not exist on node $node!"
        echo "$img on node $node" >> "$temp_missing"
    elif [ "$master_checksum" != "$node_checksum" ]; then
        echo "Checksums do not match for $img on node $node!"
        echo "Master: $master_checksum"
        echo "Node: $node_checksum"
        echo "$img on node $node" >> "$temp_mismatched"
    else
        echo "Checksums match for $img on node $node."
    fi
}

# Get a list of directories in /images
for dir in /images/*/ ; do
    # Skip certain directories
    if [[ "$dir" == "/images/dev/" ]] || [[ "$dir" == "/images/postdownloadscripts/" ]]; then
        continue
    fi

    echo "Processing directory: $dir"

    # Get a list of image files in the directory
    for img in "$dir"*.img; do
        echo "Processing image file: $img"

        # Calculate the checksum on the master
        master_checksum=$(xxhsum "$img" | awk '{print $1}')
        echo "Checksum for $img on master: $master_checksum"

        # Check the file on each node
        for node in "${NODES[@]}"; do
            check_node "$node" "$img" "$master_checksum" &
        done
        wait
    done
done

# Read results from temporary files
mapfile -t missing_files < "$temp_missing"
mapfile -t mismatched_files < "$temp_mismatched"

# Print results
if [ ${#missing_files[@]} -ne 0 ]; then
    echo "Missing files:"
    for file in "${missing_files[@]}"; do
        echo "$file"
    done
else
    echo "No missing files found."
fi

if [ ${#mismatched_files[@]} -ne 0 ]; then
    echo "Mismatched files:"
    for file in "${mismatched_files[@]}"; do
        echo "$file"
    done
else
    echo "No mismatched files found."
fi

# Clean up temporary files
rm "$temp_missing" "$temp_mismatched"
So far, the biggest offender here is the node 10.15.0.6 (as seen in the original screenshot post). The master (only sometimes) does not seem to want to replicate to this node correctly. After I run this checksum script, I’ll SSH into the node, delete the img files, and wait for replication, which completes with no issues according to the FOG replicator log; but afterwards, the checksum from the Master never matches the image file on this node, and imaging fails when trying to deploy from it.
Could this be a FOG version conflict? They’re all on 1.5.10, so I’m not sure what’s going on here. There seems to be something, maybe with the compression algorithm on this one node, that causes it to sometimes not copy the .img files over correctly. Also, it seems like it’s almost always d1p2.img (which is usually the main data partition).
Any ideas why one particular node wouldn’t (sometimes) copy the img files over correctly?
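For anyone following along, the manual cleanup step described above is roughly this; the image path is just an example, and the replicator log path assumes the default FOG install location:
# Remove the mismatched copy on the problem node so the master's replicator re-copies it
ssh root@10.15.0.6 "rm /images/Dell7480/d1p2.img"    # example image path
# Then watch the replication log on the master to confirm the re-copy
tail -f /opt/fog/log/fogreplicator.log               # default log location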
Thanks again in advance!
-
RE: Read ERROR: No Such File or Directory
Just bumping this with an update to see if anyone else has any ideas:
I decided to cut my losses and completely erase the storage node, reinstall Debian + FOG, and set it up as a brand new additional storage node. I’m still getting the same issue.
I haven’t been able to run tests on any other image than the one pictured on this screen (Dell7480), but every time this machine has booted into the FOG network and gotten this additional storage node assigned to it to deploy, I get this screen and it reboots. I have even tried rebooting the same machine until I get this additional storage node and it still fails.
I have also tried running a checksum on the d1p1.img file on the master node and the storage node, and they appear to be identical:
Image on Master Node:
root@2919-fog-master:~# md5sum /images/Dell7480/d1p1.img
b4daa9d9f1282416511939f801a41e2c  /images/Dell7480/d1p1.img
Image on Storage Node:
root@fog-node-4:~# md5sum /images/Dell7480/d1p1.img
b4daa9d9f1282416511939f801a41e2c  /images/Dell7480/d1p1.img
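(For completeness, the same comparison can be extended to every file in the image directory rather than just d1p1.img; the node hostname below is just an example, so substitute the node's IP if it doesn't resolve:)
# On the master:
md5sum /images/Dell7480/*
# On the storage node, via ssh from the master:
ssh root@fog-node-4 "md5sum /images/Dell7480/*"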
What’s also interesting to me is that the error screen seems to suggest that it’s d1p2.img that is missing/corrupted and not d1p1.img. I’m not sure how these file names play into the full picture of deployment, but could it actually be d1p2 that’s the issue here? If so, what is that?
Additionally, according to the FOG replication log, the files all seem to be replicating to the nodes without any issue:
[07-06-23 10:33:13 am] * Found Image to transfer to 4 nodes
[07-06-23 10:33:13 am] | Image Name: ! Dell 7480
[07-06-23 10:33:14 am] # ! Dell 7480: No need to sync d1.fixed_size_partitions (Node 1)
[07-06-23 10:33:14 am] # ! Dell 7480: No need to sync d1.mbr (Node 1)
[07-06-23 10:33:14 am] # ! Dell 7480: No need to sync d1.minimum.partitions (Node 1)
[07-06-23 10:33:14 am] # ! Dell 7480: No need to sync d1.original.fstypes (Node 1)
[07-06-23 10:33:14 am] # ! Dell 7480: No need to sync d1.original.swapuuids (Node 1)
[07-06-23 10:33:14 am] # ! Dell 7480: No need to sync d1.partitions (Node 1)
[07-06-23 10:33:14 am] # ! Dell 7480: No need to sync d1p1.img (Node 1)
[07-06-23 10:33:15 am] # ! Dell 7480: No need to sync d1p2.img (Node 1)
[07-06-23 10:33:15 am] * All files synced for this item.
[07-06-23 10:33:15 am] # ! Dell 7480: No need to sync d1.fixed_size_partitions (Node 2)
[07-06-23 10:33:15 am] # ! Dell 7480: No need to sync d1.mbr (Node 2)
[07-06-23 10:33:15 am] # ! Dell 7480: No need to sync d1.minimum.partitions (Node 2)
[07-06-23 10:33:15 am] # ! Dell 7480: No need to sync d1.original.fstypes (Node 2)
[07-06-23 10:33:15 am] # ! Dell 7480: No need to sync d1.original.swapuuids (Node 2)
[07-06-23 10:33:15 am] # ! Dell 7480: No need to sync d1.partitions (Node 2)
[07-06-23 10:33:16 am] # ! Dell 7480: No need to sync d1p1.img (Node 2)
[07-06-23 10:33:16 am] # ! Dell 7480: No need to sync d1p2.img (Node 2)
[07-06-23 10:33:16 am] * All files synced for this item.
[07-06-23 10:33:17 am] # ! Dell 7480: No need to sync d1.fixed_size_partitions (Node 3)
[07-06-23 10:33:17 am] # ! Dell 7480: No need to sync d1.mbr (Node 3)
[07-06-23 10:33:17 am] # ! Dell 7480: No need to sync d1.minimum.partitions (Node 3)
[07-06-23 10:33:17 am] # ! Dell 7480: No need to sync d1.original.fstypes (Node 3)
[07-06-23 10:33:17 am] # ! Dell 7480: No need to sync d1.original.swapuuids (Node 3)
[07-06-23 10:33:17 am] # ! Dell 7480: No need to sync d1.partitions (Node 3)
[07-06-23 10:33:18 am] # ! Dell 7480: No need to sync d1p1.img (Node 3)
[07-06-23 10:33:18 am] # ! Dell 7480: No need to sync d1p2.img (Node 3)
[07-06-23 10:33:18 am] * All files synced for this item.
[07-06-23 10:33:19 am] # ! Dell 7480: No need to sync d1.fixed_size_partitions (Node 4)
[07-06-23 10:33:19 am] # ! Dell 7480: No need to sync d1.mbr (Node 4)
[07-06-23 10:33:19 am] # ! Dell 7480: No need to sync d1.minimum.partitions (Node 4)
[07-06-23 10:33:19 am] # ! Dell 7480: No need to sync d1.original.fstypes (Node 4)
[07-06-23 10:33:19 am] # ! Dell 7480: No need to sync d1.original.swapuuids (Node 4)
[07-06-23 10:33:19 am] # ! Dell 7480: No need to sync d1.partitions (Node 4)
[07-06-23 10:33:19 am] # ! Dell 7480: No need to sync d1p1.img (Node 4)
[07-06-23 10:33:20 am] # ! Dell 7480: No need to sync d1p2.img (Node 4)
[07-06-23 10:33:20 am] * All files synced for this item.
Again, this image deploys off of the other three storage nodes just fine with no warnings or errors. We purchased this additional storage node (fog-node-4) for imaging overflow, so it isn’t absolutely necessary to have it functional, and we’re not in dire straits here. But we’d really like to get it running and figure this issue out if we can.
Please let me know if all of this makes sense and if you have any questions for me that would help our team figure this out.
Thanks very much for your time and insight in advance!
-
RE: [FOG 1.5.10] - Log Viewer displaying blank drop down menu
@Sebastian-Roth Thanks for the update! Again, this feature for us is more of a “nice to have” than an essential, so no immediate rush. I know all of y’all are busy!
-
RE: Read ERROR: No Such File or Directory
@Tom-Elliott Thanks for the reply. This is helpful info!
Both machines (storage node and target host) are able to boot, and the target machine usually starts imaging just fine, but after a few minutes it’ll throw this warning and reboot after one minute. I have a master node + 4 additional storage nodes running our imaging operation, so whenever it reboots and we tell it to deploy again, it usually just connects to another node to finish imaging. This one particular storage node, however, doesn’t seem to want to image computers for some reason.
Should I maybe change the compression algorithm for the image? It’s odd that the other 3 nodes and the master are able to deploy it. Would an erase and reinstall of FOG on the node be something worth trying?