Are they actually separate drives, or only separate partitions on the same drive?
Best posts made by Quazz
-
RE: Large Image sizes
-
RE: Full C:\ After Imaging Inconsistent
The issue is likely similar to, if not the same as:
https://forums.fogproject.org/topic/13440/fog-1-5-6-auto-resize-is-unpredictable
-
RE: Laptop with 2 nvme drives randomly selected so selecting one drive to capture not working
@Sebastian-Roth I think relying on size isn’t the right approach.
Problematic scenario: two NVMe disks of the exact same size and model number. You only wish to deploy to a specific drive; the other shouldn’t be touched. How do we guarantee we get the right one?
Problematic scenario 2: two NVMe disks of different sizes. You wish to deploy to the smaller drive, so both drives can fit the image; how do you guarantee the correct drive is selected?
The only thing unique to a drive is the serial number, as far as I know. On the other hand, that would only work on a system-to-system basis, so that’s not really that appealing.
What George brings up is probably a better way if it works: disks should be recognized in a specific order by their controller and could be identified that way.
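To illustrate the serial-number idea, here is a rough shell sketch of pinning a target disk by serial instead of by size. `pick_by_serial` is a hypothetical helper and the serial value is made up:

```shell
#!/bin/sh
# Sketch: select a target disk by its serial number instead of by size.
# The serial below is a made-up example.

pick_by_serial() {
    # stdin: "name serial" pairs, one per line, as produced by:
    #   lsblk -dno NAME,SERIAL
    # $1: serial of the disk we want
    awk -v want="$1" '$2 == want { print "/dev/" $1 }'
}

# On a live system you would feed it real data:
#   lsblk -dno NAME,SERIAL | pick_by_serial "S4EVNF0M123456"
printf 'nvme0n1 S4EVNF0M123456\nnvme1n1 ZZZZ9999\n' | pick_by_serial "S4EVNF0M123456"
# prints: /dev/nvme0n1
```

As noted above, though, this only works per system, since every drive's serial is different.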
-
RE: Problem capturing image - " no space left on device"?
@Sebastian-Roth Looking over the
shrinkPartition
function, it looks like the new size is calculated as the theoretical minimum size as given by ntfsresize, plus 5% of that size (for safety and consistency, I believe). However, I can foresee situations where the theoretical minimum is small enough that the 5% of “safety space” isn’t enough to accommodate everything. I don’t think this will happen under every circumstance, only when there are certain kinds of fragmentation that it can’t resolve.
Perhaps a preset minimum should be added to ensure a minimum safety boundary (say, 200MB).
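The margin rule suggested above can be sketched in shell; `min_target_mb` is a hypothetical helper for illustration, not FOG's actual code:

```shell
#!/bin/sh
# Sketch: resize target = ntfsresize's reported minimum plus 5%,
# but with the safety margin never below a fixed floor (200MB here).

min_target_mb() {
    # $1: theoretical minimum size in MB, $2: margin floor in MB
    min=$1
    floor=$2
    extra=$(( min / 20 ))                   # 5% of the minimum
    [ "$extra" -lt "$floor" ] && extra=$floor
    echo $(( min + extra ))
}

min_target_mb 100000 200   # big volume: 5% margin wins  -> 105000
min_target_mb 2000 200     # small volume: 200MB floor wins -> 2200
```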
The ntfsresize man page mentions you don’t need to defragment the drive prior to using it, since it kind of does that itself; but obviously on an SSD that logic doesn’t really make sense anyway!
Those are my current thoughts on this, but quite honestly I could be completely wrong.
-
RE: PXE Menu Parameters for Diskless NFS?
NFS addresses have to be written like
192.168.1.2:/nfs
(note the : between the IP and the path)
-
RE: Snapin hash does not exist
If a file doesn’t exist, it won’t get a hash, I believe, so verify that the snapin files still exist.
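A quick sketch for that check; `check_files` is a hypothetical helper, and /opt/fog/snapins is the usual default snapin directory but may differ on your install:

```shell
#!/bin/sh
# Sketch: flag snapin files that are missing or unreadable.

check_files() {
    for f in "$@"; do
        [ -r "$f" ] || echo "missing or unreadable: $f"
    done
}

# On a FOG server (path is the usual default, verify yours):
#   check_files /opt/fog/snapins/*
```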
-
RE: Very slow cloning speed on specific model
@oleg-knysh You may also be interested in trying out an experimental init.xz that uses a newer version of partclone. Though the GitHub issue does not indicate this resolved it for them, perhaps it will in this case. It’s worth a shot anyway.
https://drive.google.com/file/d/1u_HuN5NSpzb7YmQBAsrzDELteNmlWUWU/view
-
RE: Very slow cloning speed on specific model
@oleg-knysh Download the init file from the provided link, put it in /var/www/fog/service/ipxe
Give it a name that isn’t init.xz (e.g. init_partclone.xz) so it doesn’t overwrite the stock file.
Then you can change the host init file in the host settings, or globally under FOG Settings -> TFTP servers (e.g. init_partclone.xz).
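The steps above can be sketched as a small shell helper; `install_init` is hypothetical, and on a real FOG server the destination is /var/www/fog/service/ipxe:

```shell
#!/bin/sh
# Sketch: copy a downloaded init into place under a new name,
# refusing to clobber the stock init.xz.

install_init() {
    # $1: downloaded init file, $2: destination dir, $3: new name
    if [ "$3" = "init.xz" ]; then
        echo "refusing to overwrite the stock init.xz" >&2
        return 1
    fi
    cp "$1" "$2/$3"
}

# On a real server:
#   install_init ~/Downloads/init.xz /var/www/fog/service/ipxe init_partclone.xz
```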
-
RE: Getting Dells to PXE boot with UEFI
@rogalskij ipxe.efi is the default file for UEFI.
However, it may pay to take a look at the default FOG DHCP config so you can serve both legacy and UEFI clients alike.
https://wiki.fogproject.org/wiki/index.php?title=BIOS_and_UEFI_Co-Existence#Using_Linux_DHCP
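For reference, the arch-detection idea from that wiki page looks roughly like this in an ISC dhcpd config; the IPs and boot filenames here are examples and depend on your setup:

```
option arch code 93 = unsigned integer 16;

subnet 192.168.1.0 netmask 255.255.255.0 {
  next-server 192.168.1.2;          # FOG server IP (example)
  if option arch = 00:07 {
    filename "ipxe.efi";            # UEFI clients
  } else {
    filename "undionly.kpxe";       # legacy BIOS clients
  }
}
```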
-
RE: ACPI Errors during host registration
Your images didn’t seem to upload correctly.
You could try adding the kernel argument noacpi in the FOG settings, though the real culprit might not be ACPI-related at all; it’s hard to say without the images being visible.
-
RE: Error while creating new image: No space left on device
@eVal I’ve heard of people having issues capturing when updating existing images like that.
Most people’s workflow consists of spinning up the latest Windows 10 ISOs, installing all their software, sysprepping, and then capturing, I believe.
You might have to use cleanmgr.exe (default windows cleaning utility) to clean up the updates after they’re installed.
Defragging might also be needed, since a large amount of data was moved around.
I don’t know if you tried
dism /online /cleanup-image /startcomponentcleanup
either. But even after all that, it might still act up.
-
RE: Snappins don't work
This error is typically resolved by using the Reset Encryption Data button for the host/group in question in the FOG web UI
-
RE: TFTP need enter manually
The problem here is that you have 2 DHCP Servers.
192.168.6.15 and 192.168.6.152
-
RE: rcu_sched stall OR kernel panic on PowerEdge R640
@george1421 Yes, I think that’s why it was left at 8 in the config, though perhaps some CPUs don’t handle a majority of their cores being ignored very well?
-
RE: Extremely Slow Deploy to NVME drives
@Middle Try setting the kernel argument as a global setting instead of on the host page. (FOG Configuration -> FOG Settings -> General -> Kernel args)
This problem may also be resolved with SSD firmware updates if available.
I’d also be interested in the results of the kernel args
pcie_aspm=off
and
pcie_aspm=force
(do not set the latter as a global setting). Only set one of the three kernel arguments at a time.
This problem is caused by ASPM and how certain devices interact with it. The reason it’s a problem specifically for NVMe devices is their PCIe connection; a lot of these drives have buggy ASPM implementations (sometimes fixed in firmware updates).
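Before experimenting with the args above, it can help to see the kernel's active ASPM policy (the bracketed entry in the sysfs policy string). `active_aspm_policy` is a hypothetical helper:

```shell
#!/bin/sh
# Sketch: extract the active (bracketed) ASPM policy from a string such as
# "default performance [powersave] powersupersave".

active_aspm_policy() {
    sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}

# On a live system with ASPM support:
#   active_aspm_policy < /sys/module/pcie_aspm/parameters/policy
echo 'default performance [powersave] powersupersave' | active_aspm_policy
# prints: powersave
```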
-
RE: rcu_sched self detected stall on CPU when Deploying
@Wolfbane8653 Please try kernel argument
tsc=unstable
Then try kernel argument
clocksource=hpet
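To see which clocksource the kernel actually picked (and whether the TSC was already marked unstable), a small sketch reading the sysfs interface:

```shell
#!/bin/sh
# Sketch: report the active and available kernel clocksources.

show_clocksource() {
    cs=/sys/devices/system/clocksource/clocksource0
    if [ -r "$cs/current_clocksource" ]; then
        echo "active:    $(cat "$cs/current_clocksource")"
        echo "available: $(cat "$cs/available_clocksource")"
    else
        echo "clocksource sysfs interface not available"
    fi
}

show_clocksource
```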
-
RE: rcu_sched self detected stall on CPU when Deploying
@Wolfbane8653 You can safely set this globally, unless you have even older CPUs
-
RE: button "Reset encryption Data" does not appear
@martial Can’t hurt to upgrade, for sure.
-
RE: rcu_sched stall OR kernel panic on PowerEdge R640
@Sebastian-Roth I think it’s fine to leave x86 at 8, since I don’t think they make huge multicore 32-bit CPUs (it wouldn’t really make sense to me anyway). Though I also think it wouldn’t necessarily hurt to change it, why bother if it’s not needed? I also believe 8 is the default for x86 anyway.
As for ARM: https://www.phoronix.com/scan.php?page=news_item&px=ARM64-256-Default-NR_CPUS
The default in Linux 5.1 for ARM64 is 256 now (the previous default being 64).
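To check the NR_CPUS limit compiled into a running kernel, a sketch; where the config lives varies by distro (a file under /boot, or /proc/config.gz):

```shell
#!/bin/sh
# Sketch: look up CONFIG_NR_CPUS for the running kernel, trying the
# common locations in turn.

nr_cpus_config() {
    grep CONFIG_NR_CPUS "/boot/config-$(uname -r)" 2>/dev/null \
        || zcat /proc/config.gz 2>/dev/null | grep CONFIG_NR_CPUS \
        || echo "kernel config not exposed on this system"
}

nr_cpus_config
```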