HP Elitebook 830 Gen 6: Issues Capturing Images and Deploying Images
-
I have also attempted to use Clonezilla on the Hynix drive. I successfully took an image onto a USB drive and was then able to deploy it, with no issues, to a known G6 with a Hynix drive that had previously failed to image using FOG.
-
@rocksteve69 Which Clonezilla version was this?
@Developers Could this be partition alignment related or was that already fixed ages ago? I can’t remember.
-
-
@rocksteve69 Thanks!
Looks like that version runs on Linux kernel 5.2.9-2, which potentially includes some NVMe fixes for certain devices.
I am compiling that kernel, hopefully that’s the only thing needed, because otherwise I am unsure where to start looking.
Will post a link here when it’s done.
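For reference, building a bzImage from a vanilla kernel tree usually looks roughly like this (a sketch of the generic upstream build process, not FOG's exact build scripts; the 5.2.9 tarball name is inferred from the version mentioned above):

```shell
# Fetch and unpack the kernel source (version assumed from the thread).
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.2.9.tar.xz
tar xf linux-5.2.9.tar.xz && cd linux-5.2.9

# Start from an existing .config (FOG ships its own config; this just
# fills in defaults for any new options).
make olddefconfig

# Build the compressed kernel image.
make -j"$(nproc)" bzImage
# Result lands at arch/x86/boot/bzImage
```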
-
@rocksteve69 https://drive.google.com/open?id=1WLjXQYKDoZCAxF5Gfjeva2O-MLrfQ40N
Copy to /var/www/fog/service/ipxe
Change kernel in WebUI to bzImage529
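For anyone following along, the install steps above might look roughly like this on the FOG server (a sketch; the downloaded filename and the web root are assumptions based on a default install):

```shell
# Place the custom kernel where FOG serves its iPXE boot files
# (default web root assumed; adjust if your install differs).
sudo cp ~/Downloads/bzImage529 /var/www/fog/service/ipxe/
sudo chmod 644 /var/www/fog/service/ipxe/bzImage529

# Then, in the WebUI, change the kernel name to bzImage529
# (globally under the TFTP settings, or per-host if you only
# want to test on one machine).
```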
-
@Quazz said in HP Elitebook 830 Gen 6: Issues Capturing Images and Deploying Images:
Copy to /var/www/fog/service/ipxe
Change kernel in WebUI to bzImage529

Excellent, will test and report back.
Cheers again for the help.
-
@Quazz
Hi, we have attempted the updated bzImage529 with no effect; same issues. We have also started having intermittent issues before Partclone starts: after the iPXE boot it loads the FOG screen, we select Deploy Image, enter credentials, and select an image, and then it hangs on bzImage529 stuck at 0% before returning to the FOG screen after 2 minutes.
-
Now we appear to have another issue that is affecting every device we attempt to image or capture.
After iPXE boots into the FOG client, selecting Deploy and then selecting an image gives the following error:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
We have attempted rolling back the FOG version to 1.5.7 and the kernel to 4.19.64, with no joy.
Any help would be awesome.
-
@rocksteve69 You may have to increase the ramdisk size in the settings on the WebUI.
It’s under the TFTP settings (set it to something like 275000).
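A quick way to sanity-check whether the ramdisk size is the culprit (a sketch, assuming the default init.xz location on the FOG server; the compressed size is only a lower bound, since init is decompressed into the ramdisk):

```shell
# The "unable to mount root fs" panic can occur when the init ramdisk
# doesn't fit in the RAM allocated for it. Compare the size of init.xz
# against the ramdisk-size value (in KiB) set in the WebUI.
INIT=/var/www/fog/service/ipxe/init.xz
echo "compressed init size (KiB): $(( $(stat -c%s "$INIT") / 1024 ))"
# If this is anywhere near the current setting, raise the setting
# (e.g. to 275000 as suggested above) and retry.
```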
-
Sorted the kernel panic issue: we just snapped the server back to a snapshot taken before last Friday's upgrade. All working in that respect, but we still have the original issue.
-
@rocksteve69 Unfortunately I’m unsure what else to try at the moment.