Posts made by Jedi
-
RE: How to edit drive cloning options
@Sebastian-Roth This worked, you can mark this as solved. Thank you for your help.
-
RE: Failed to read back partitions
Hi George,
Thank you for your reply. My system is Ubuntu 19.04, so I just took the output from the terminal; I hope that’s OK.
The image was taken from /dev/sdd and I want to deploy it to /dev/sdb.
In case it’s relevant, I specified “/dev/sdd” as Host Primary Disk to prevent FOG taking Windows as the image.
See here:
https://forums.fogproject.org/topic/13042/how-to-edit-drive-cloning-options
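As an aside, a quick way to double-check which device the Linux root actually lives on before setting Host Primary Disk; a sketch, and device names will of course differ per machine:
findmnt -n -o SOURCE /                  # prints the device backing the root filesystem
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT    # maps that device back to its parent disk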
root@Ubuntu19:/home/mike# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 35.3M 1 loop /snap/gtk-common-themes/1198
loop1 7:1 0 3.7M 1 loop /snap/gnome-system-monitor/77
loop2 7:2 0 143.5M 1 loop /snap/gnome-3-28-1804/23
loop3 7:3 0 53.7M 1 loop /snap/core18/941
loop4 7:4 0 14.8M 1 loop /snap/gnome-characters/206
loop5 7:5 0 1008K 1 loop /snap/gnome-logs/57
loop6 7:6 0 3.7M 1 loop /snap/gnome-system-monitor/70
loop7 7:7 0 151M 1 loop /snap/gnome-3-28-1804/36
loop8 7:8 0 4M 1 loop /snap/gnome-calculator/406
loop9 7:9 0 91.1M 1 loop /snap/core/6531
loop10 7:10 0 53.7M 1 loop /snap/core18/782
loop11 7:11 0 89.3M 1 loop /snap/core/6673
loop12 7:12 0 14.8M 1 loop /snap/gnome-characters/254
loop13 7:13 0 1008K 1 loop /snap/gnome-logs/61
loop14 7:14 0 4M 1 loop /snap/gnome-calculator/352
sda 8:0 0 596.2G 0 disk
└─sda1 8:1 0 596.2G 0 part
sdb 8:16 0 1.8T 0 disk
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 931.5G 0 part
sdd 8:48 0 1.8T 0 disk
├─sdd1 8:49 0 512M 0 part
└─sdd2 8:50 0 1.8T 0 part
  ├─ubuntu--vg-root 253:0 0 1.8T 0 lvm /
  └─ubuntu--vg-swap_1 253:1 0 976M 0 lvm [SWAP]
sr0 11:0 1 1024M 0 rom
nvme0n1 259:0 0 232.9G 0 disk
├─nvme0n1p1 259:1 0 232.3G 0 part
├─nvme0n1p2 259:2 0 100M 0 part /boot/efi
└─nvme0n1p3 259:3 0 502M 0 part
root@Ubuntu19:/home/mike# fdisk -l
Disk /dev/loop0: 35.3 MiB, 37027840 bytes, 72320 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 3.7 MiB, 3821568 bytes, 7464 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop2: 143.5 MiB, 150470656 bytes, 293888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop3: 53.7 MiB, 56315904 bytes, 109992 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop4: 14.8 MiB, 15458304 bytes, 30192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop5: 1008 KiB, 1032192 bytes, 2016 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop6: 3.7 MiB, 3846144 bytes, 7512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop7: 151 MiB, 158343168 bytes, 309264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme0n1: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Disk model: Samsung SSD 970 EVO 250GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 09583A20-4CD6-11E9-B903-C97FD46B0FAF

Device              Start       End   Sectors   Size Type
/dev/nvme0n1p1       2048 487161329 487159282 232.3G Microsoft basic data
/dev/nvme0n1p2  487161856 487366655    204800   100M EFI System
/dev/nvme0n1p3  487366656 488394751   1028096   502M Windows recovery environment

Disk /dev/sda: 596.2 GiB, 640135028736 bytes, 1250263728 sectors
Disk model: WDC WD6400AAKS-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0000a7b9

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1250263039 1250260992 596.2G 83 Linux

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EFRX-68E
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EZEX-00M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1A9126E1-F57F-4FD8-B67C-35B6AB0A5AF0

Device     Start        End    Sectors   Size Type
/dev/sdc1   2048 1953523711 1953521664 931.5G Linux filesystem

Disk /dev/sdd: 1.8 TiB, 2000315023360 bytes, 3906865280 sectors
Disk model: MARVELL Raid VD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 70FAF413-B8D4-4AD7-BEB0-ACEC78095D8B

Device       Start        End    Sectors  Size Type
/dev/sdd1     2048    1050623    1048576  512M EFI System
/dev/sdd2  1050624 3906865151 3905814528  1.8T Linux LVM

Disk /dev/mapper/ubuntu--vg-root: 1.8 TiB, 1998749433856 bytes, 3903807488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ubuntu--vg-swap_1: 976 MiB, 1023410176 bytes, 1998848 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop8: 4 MiB, 4218880 bytes, 8240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop9: 91.1 MiB, 95522816 bytes, 186568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop10: 53.7 MiB, 56315904 bytes, 109992 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop11: 89.3 MiB, 93581312 bytes, 182776 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop12: 14.8 MiB, 15462400 bytes, 30200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop13: 1008 KiB, 1032192 bytes, 2016 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop14: 4 MiB, 4214784 bytes, 8232 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@Ubuntu19:/home/mike#
-
Failed to read back partitions
I have an OS installed on a hardware RAID controller with two 2TB SSDs.
I have taken an image. It had to be “Multiple Partition Image - Single Disk (Not Resizeable)”; the “Single Disk - Resizable” option would not work for some reason.
Now I want to deploy the image to a different drive on the same computer (a 2TB HDD).
I unplugged the SATA cables for all the drives except the one I want to deploy the image to.
When I attempted to deploy the image I got the following error:
Failed to read back partitions (runPartprobe) Args Passed: /dev/sdd
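For anyone hitting the same message: FOG’s runPartprobe step appears to be re-reading the partition table after writing it, so the equivalent manual check from a live/debug shell would be something along these lines (a sketch, with /dev/sdd as the target as in my case):
partprobe /dev/sdd             # ask the kernel to re-read the partition table
blockdev --rereadpt /dev/sdd   # alternative re-read that reports errors directly
lsblk /dev/sdd                 # confirm the expected partitions actually appear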
Any help with this would be much appreciated.
-
RE: Stalling on FOG splash screen
@Tom-Elliott Hi Tom, thank you for your reply.
OK, I have created an image definition, assigned that image to the host, and can now capture.
“I suspect network boot is using BIOS and MBR but your machine is configured for UEFI.”
By “network boot is using BIOS and MBR” I have interpreted that to mean a reference to Network Devices being set to Legacy OPROM first under CSM in the BIOS.
“Your machine” I have interpreted to be a reference to my Windows 10 operating system.
So if my translation is correct, what you are saying is: if I network boot in BIOS mode, FOG can only facilitate booting into an OS installed in BIOS mode, and if I network boot in UEFI mode, FOG can only facilitate booting into an OS installed in UEFI mode?
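As a sanity check on my own machines, a running Linux install will reveal which mode it was booted in; a one-liner sketch:
[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"
(The /sys/firmware/efi directory only exists when the kernel was started by UEFI firmware.)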
-
RE: Stalling on FOG splash screen
@george1421 “it should find the windows disk no problem”
I decided to test that, to make sure the fundamentals are right.
I set the Windows drive as second in the boot order (after network boot in BIOS mode) and disabled all other boot options, taking the RAID controller right out of the equation.
On reboot I ended up with a blank screen and blinking cursor.
As FOG was installed on a virtual machine, I decided to restore a snapshot taken before the FOG install and do a complete reinstall of FOG (I am using a Bridged Adapter and Promiscuous Mode is set to Allow All). I performed a full host registration. When asked if I would like to deploy an image to this computer now, I said yes. It said the task was complete and it was going to reboot and take an image. It rebooted but did not take an image; I ended up with the blank screen and blinking cursor again (Exit to Hard Drive Type is set to SANBOOT).
I repeated the process with a different ethernet cable connected to a different port on the switch. No change.
I went to tasks --> List All Hosts --> Capture and it said “Failed to create task - Invalid image assigned to host”.
In the logs under Image Replicator it says
Starting image replication
Please physically associate images to a storage group
There is nothing to replicate
In the logs under Image Size it says
[04-06-19 9:58:51 pm] * Completed.
[04-06-19 9:58:51 pm] * No images associated with this group as master.
[04-06-19 9:58:51 pm] * Finding any images associated with this group as its primary group
[04-06-19 9:58:51 pm] * We are node ID: 1. We are node name: DefaultMember
[04-06-19 9:58:51 pm] * We are group ID: 1. We are group name: default
[04-06-19 9:58:51 pm] * Starting Image Size Service.
[04-06-19 9:58:50 pm] * Starting service loop
[04-06-19 9:58:50 pm] * Checking for new items every 3600 seconds
[04-06-19 9:58:50 pm] * Starting ImageSize Service
[04-06-19 9:58:50 pm] Interface Ready with IP Address: tessa-vm <-- this is the host name of the virtual machine FOG runs on
[04-06-19 9:58:50 pm] Interface Ready with IP Address: mail.odysseytours.nz
[04-06-19 9:58:50 pm] Interface Ready with IP Address: 210.54.90.13 <-- this is my IP address assigned to me by my ISP
[04-06-19 9:58:50 pm] Interface Ready with IP Address: 192.168.1.149 <-- this is the IP address of the virtual machine FOG runs on
[04-06-19 9:58:50 pm] Interface Ready with IP Address: 127.0.1.1
[04-06-19 9:58:50 pm] Interface Ready with IP Address: 127.0.0.1
I am surprised to see the reference to mail.odysseytours.nz.
This is an email server I am running on Ubuntu 18.04 on another computer. I have never registered this computer with FOG. Could there be a conflict with having two servers on the same subnet?
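To rule out the two servers answering the same boot requests, one thing I could try is capturing the DHCP/ProxyDHCP traffic during a network boot and watching for replies from more than one address; a sketch, assuming the VM’s interface is enp0s3:
tcpdump -ni enp0s3 port 67 or port 68 or port 4011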
Any suggestions?
-
RE: Stalling on FOG splash screen
@george1421 Hi George, sorry to keep bothering you with this, but it seems that because I have specified ipxe.efi as the chainloader in the router, every computer I want to deploy an image to must also network boot via UEFI. From what I can gather, every computer can network boot in BIOS mode but not every computer can network boot in UEFI mode; VirtualBox virtual machines are one notable example (https://forums.virtualbox.org/viewtopic.php?f=9&t=84349). So I need to be able to network boot in BIOS mode. I have changed the chainloader back to undionly.kpxe in the router. I am using SANBOOT as “Exit to Hard Drive Type”.
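For the record, if the router’s DHCP service happens to be dnsmasq-based, it should be possible to hand out a different loader per firmware type instead of one fixed chainloader; a sketch, assuming the FOG server at 192.168.1.149 and the standard option-93 architecture codes:
# classify clients by DHCP option 93 (client system architecture)
dhcp-match=set:bios,option:client-arch,0    # legacy BIOS
dhcp-match=set:efi64,option:client-arch,7   # x86-64 UEFI
dhcp-match=set:efi64,option:client-arch,9   # x86-64 UEFI (alternate code)
# send each class the matching iPXE binary from the FOG TFTP server
dhcp-boot=tag:bios,undionly.kpxe,,192.168.1.149
dhcp-boot=tag:efi64,ipxe.efi,,192.168.1.149
That way BIOS-only clients (like VirtualBox VMs) would get undionly.kpxe while UEFI machines get ipxe.efi.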
My system has four storage devices:
An NVMe 250GB drive with Windows 10 installed (NTFS)
A 120GB SSD with DOOM installed (ext4)
A 650GB Western Digital hard drive used as a backup (ext4)
A StarTech 4 port PCIe SATA III 6Gbps RAID Controller Card with two 2TB Western Digital hard drives which my Linux OS is installed on (LVM2)
If I specify “IBA GE Slot 00C8 v1547” as the first boot option and “ubuntu” as the second boot option (I can successfully boot into my Linux OS with this option when not network booting in BIOS mode) and disable all other boot options, I end up with a blank screen and a blinking cursor in the top left hand corner.
If I add either the 120GB SSD or the 650GB HDD as a third boot option I end up with the following:
error! no such device: 4a05a32b-f942-4bf2-815a-584d501366a.
Entering rescue mode…
grub rescue>
If I boot into my Linux OS via BIOS, the output of lsblk is:
NAME                 FSTYPE   LABEL      UUID                                   MOUNTPOINT   SIZE OWNER GROUP MODE
sdb                                                                                        111.8G root  disk  brw-rw----
└─sdb1               ext4                85e0230a-9028-4bfd-ae02-770525c04399   /mnt/85e02 111.8G root  disk  brw-rw----
sr0                                                                                         1024M root  cdrom brw-rw----
sdc                                                                                          1.8T root  disk  brw-rw----
├─sdc1               vfat                31BE-A03A                              /boot/efi    512M root  disk  brw-rw----
├─sdc2               ext2                456c2955-b3fc-46e8-9340-484fd24e350a   /boot        732M root  disk  brw-rw----
└─sdc3               LVM2_me             B4wxhE-1z8r-GV9D-j5ov-rOGD-1kty-wzMhM2              1.8T root  disk  brw-rw----
  ├─zorin--vg-swap_1 swap                835f8cbf-8f28-4552-9b40-3f851841f78f   [SWAP]       976M root  disk  brw-rw----
  └─zorin--vg-root   ext4                545dd428-5b28-4ad4-9062-159d5e100767   /            1.8T root  disk  brw-rw----
sda                                                                                        596.2G root  disk  brw-rw----
└─sda1               ext4                ed26adf1-8d42-4af6-b52d-6c037c616847   /media/sdb 596.2G root  disk  brw-rw----
nvme0n1                                                                                    232.9G root  disk  brw-rw----
├─nvme0n1p3          ntfs                FAA41560A41520A5                                    502M root  disk  brw-rw----
├─nvme0n1p1          ntfs     New Volume 01D313DE0D108410                                  232.3G root  disk  brw-rw----
└─nvme0n1p2          vfat                AA05-C22B                                           100M root  disk  brw-rw----
So as you can see, Grub appears to be looking for a UUID that does not exist.
Is the Grub Rescue program being provided by FOG? Because if it’s not, then this is possibly not a FOG-related issue and I need to be looking elsewhere.
But if the Grub Rescue software is being hosted on the FOG server, do you have any insights as to where Grub Rescue is getting the “4a05a32b-f942-4bf2-815a-584d501366a” UUID from? I am thinking that if I can get Grub to look for the correct UUID, that may solve my problem.
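In case it helps, this is what I plan to try from the grub rescue> prompt; a sketch based on the lsblk output above, where (hd2,gpt2) is only my guess at how GRUB numbers the drive holding my /boot partition (sdc2) until ls confirms it:
grub rescue> ls                      # list the drives/partitions GRUB can see
grub rescue> ls (hd2,gpt2)/          # look for a grub/ directory here
grub rescue> set root=(hd2,gpt2)
grub rescue> set prefix=(hd2,gpt2)/grub
grub rescue> insmod normal
grub rescue> normal                  # load the full GRUB menu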
Thank you for your help.
-
RE: Stalling on FOG splash screen
@george1421 Hi George, thank you for the time you have taken to reply to my post.
I am happy to report I have solved my problem.
When I said ‘if I change the “Boot from Network Devices” option to [UEFI first] I lose “IBA GE Slot 00C8 v1547” as a boot option entirely’ what I omitted to add was that I gained two new options:
UEFI: IP4 Intel Ethernet Connection (H) I219-V
and
UEFI: IP6 Intel Ethernet Connection (H) I219-V
I had tried these previously without success but that was with “undionly.kpxe” as the loader. Your post “You will get similar error if the computer is in bios mode and you send it ipxe.efi” was the key. Once I changed the computer to UEFI mode and set “UEFI: IP4 Intel Ethernet Connection (H) I219-V” as the first boot option in the BIOS and changed the loader in the router to ipxe.efi it worked!
During the boot process I get the message:
Waiting for link-up on net0…Down (http://ipxe.org/38086193)
which takes about 15 seconds, and then another 15 seconds or so with
Waiting for link-up on net1…
This adds a lot to the time it takes for booting to complete; is there any way to avoid this?
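One idea I have seen suggested but have not tested: the 15 seconds matches iPXE’s compiled-in link-up timeout, so it may be possible to rebuild the loader with a shorter wait; a sketch, assuming the delay really is LINK_WAIT_TIMEOUT in src/config/general.h of the iPXE source tree:
git clone https://github.com/ipxe/ipxe.git
cd ipxe/src
# edit config/general.h: change LINK_WAIT_TIMEOUT from ( 15 * TICKS_PER_SEC ) to ( 5 * TICKS_PER_SEC )
make bin-x86_64-efi/ipxe.efi
# then copy bin-x86_64-efi/ipxe.efi over the copy the FOG server serves via TFTP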
I have made a small contribution of $50USD to FOG by way of thanks for your help.
Receipt # 4157-1929-2864-5066
-
RE: Stalling on FOG splash screen
I have tried changing the boot loader in the router to ipxe.efi but that produced the error “NBP is too big to fit in free base memory”.
This post indicated the error is caused by the client being “set to PXE boot in legacy BIOS mode but the binary offered to the client is UEFI”; however, if I change the “Boot from Network Devices” option to [UEFI first] I lose “IBA GE Slot 00C8 v1547” as a boot option entirely. Is this to be expected?
https://forums.fogproject.org/topic/11828/nbp-is-too-big-to-fit-in-free-base-memory/2
I then tried pxelinux.0.old but ended up with a blank screen and a blinking cursor, so that went in a big circle.
So I went back to undionly.kpxe then tried all the options, one by one, under FOG Configuration --> iPXE General Configuration --> Boot Exit settings --> Exit to Hard Drive Type(EFI).
No change.
I then tried all the options, one by one, under FOG Configuration --> iPXE General Configuration --> Boot Exit settings --> Exit to Hard Drive Type.
The “GRUB” option produced a blank screen and blinking cursor.
The “GRUB_FIRST_HDD” option produced a
“Launching grub
Begin pxe scan start cmain()”
screen where it stalled.
Google produced this page which did not provide a solution.
https://forums.fogproject.org/topic/9906/ubuntu-image-for-fog-clients/2
The “GRUB_FIRST_FOUND_WINDOWS” option resulted in a grub4dos screen. Typing “ls” resulted in an error indicating it could not locate any drives, so that seems a dead end also.
I have read carefully through the refind.conf file. The only option that looks like it may be of any benefit is “also_scan_dirs boot,ESP2:EFI/linux/kernels”. I have tried uncommenting that with “REFIND_EFI” set as the option under FOG Configuration --> iPXE General Configuration --> Boot Exit settings --> Exit to Hard Drive Type(EFI). No change.
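For reference, the relevant refind.conf lines as I left them; scanfor is shown with what I understand to be rEFInd’s defaults, and both are genuine rEFInd options, but the values are guesses for my layout:
scanfor internal,external,optical,manual
also_scan_dirs boot,ESP2:EFI/linux/kernels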
I have gone as far as I can go.
I chose the “StarTech 4 port PCIe SATA III 6Gbps RAID Controller Card” because I googled “best raid controller”. One site ranked it second, another third. This suggests to me this hardware is not obscure. I do not consider it unreasonable to expect FOG to support hardware as mainstream as this. It’s a bit disappointing really.
Does anyone know of a raid controller FOG supports?
-
RE: Stalling on FOG splash screen
@george1421 Hi George, I apologise for misunderstanding your earlier post, I am new to FOG.
If you are referring to FOG Configuration --> iPXE General Configuration --> Boot Exit settings
Exit to Hard Drive Type is set to SANBOOT
Exit to Hard Drive Type(EFI) is set to REFIND_EFI
If I need to edit the refind.conf file in the /var/www/html/fog/service/ipxe directory, are you able to provide clear and specific information on exactly what edits I need to make?
Thank you for your help with this, I really appreciate it.
-
RE: Stalling on FOG splash screen
Hi George,
Thank you for your reply.
The host computer is Linux Mint 19.1 running as a virtual machine in VirtualBox on a Windows 10 host.
So I guess that would make the boot manager Grub2?
-
Stalling on FOG splash screen
Motherboard:
Maximus VII Ranger
BIOS version 3003
Build date 10/28/2015
baseboard-serial-number 140627739500080
CPU:
Intel i5-4460 @ 3.20GHz
FOG version 1.5.5
Chainloader = undionly.kpxe
I have a dual-boot Windows 10/Linux (Zorin 12.4) installation.
Windows 10 is installed on a NVMe 250GB SSD.
Zorin was installed on a Western Digital hard drive.
Both operating systems were installed in BIOS legacy mode.
I had Network Booting set up and FOG was working fine.
Then the Western Digital hard drive failed so I replaced it with hardware raid using a StarTech 4 port PCIe SATA III 6Gbps RAID Controller Card with two 2TB Western Digital hard drives.
I had to re-install Zorin, so I did so in UEFI mode. Then I had problems dual booting, as BIOS legacy and UEFI are not compatible, so I converted my Windows 10 installation from BIOS legacy to UEFI by following the instructions here:
https://www.maketecheasier.com/convert-legacy-bios-uefi-windows10/
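For anyone following along, conversions like this are typically done with Microsoft’s mbr2gpt tool from an elevated prompt; a sketch, where /disk:0 is an assumption (use whichever disk number diskpart reports for the Windows SSD):
mbr2gpt /validate /disk:0 /allowFullOS
mbr2gpt /convert /disk:0 /allowFullOS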
Current BIOS settings:
Advanced --> Network Stack Configuration --> Network Stack [Enabled]
Ipv4 PXE Support [Enabled]
Ipv6 PXE Support [Enabled]
CSM (Compatibility Support Module):
Launch CSM [Enabled]
Boot Device Control [UEFI and Legacy OPROM]
Boot from Network Devices [Legacy OPROM first]
Boot from Storage Devices [Both, UEFI first]
Boot from PCI-E/PCI Expansion Devices [UEFI driver first]
Advanced --> Onboard Devices Configuration --> Intel LAN Controller --> Intel LAN PXE Option ROM [Enabled]
My current Boot Options (in order) are:
IBA GE Slot 00C8 v1547
ubuntu
MARVELL Raid VD (1907649MB)
Windows Boot Manager (Samsung SSD 970 EVO 250GB)
P2: Asus DRW-24BSST
After the FOG splash screen I now get a blank screen with a blinking cursor in the top right hand corner. I also get this outcome if I make “MARVELL Raid VD (1907649MB)” the second boot option (after “IBA GE Slot 00C8 v1547”), and if I choose “MARVELL Raid VD (1907649MB)” as the Boot Override in the BIOS.
If I go into the BIOS and choose “ubuntu” as the Boot Override, or make “ubuntu” the first boot option, bypassing network booting entirely, the Grub menu appears and I can boot into either Zorin or Windows no problem.
If I remove “MARVELL Raid VD (1907649MB)” as a boot option entirely, then once I get to the FOG splash screen it counts down from 3 with the “Boot from hard disk” option selected, the screen refreshes, and the count starts again, over and over. That’s as far as it gets. For some reason FOG does not want to pass boot control over to “ubuntu”?
As I am able to continue the boot process by choosing “ubuntu” from the BIOS I am thinking there is a setting in FOG somewhere I need to tweak to resolve this?
I have pressed the CLR_CMOS button to reset the BIOS and repeated the entire process. No change.
-
How to edit drive cloning options
Hi,
I am new to FOG. I have successfully installed FOG and cloned my first image. The problem is that I am cloning from a dual-boot Windows/Linux computer: FOG cloned the drive Windows was installed on, when I really wanted the drive Linux is installed on to be cloned. I have not been able to figure out how to remedy this.
Advice on how to do this would be appreciated.
Thank you.