Pigz abort
-
Hello all,
I’m running FOG 0.32 on Ubuntu 12.04.2 LTS, and I’m trying to take a full disk image of an HP 4300 workstation. However, when it gets a minute or two into the second partition (the 500 GB Windows drive), I get the error message:
pigz abort write error on <stdout>
Any suggestions would be greatly appreciated!
-
What is the size of the image to be created? How much space is available to actually store the image?
-
I’m not sure how big the image file will be. The drive is a 500 GB drive. The partitions are as follows:
1 - 100 MB System
2 - 456.70 GB NTFS
3 - 8.86 GB HP Recovery
4 - 100 MB
I have 1.2 TB available on the server.
-
So when the image is being uploaded/created, it tells you the size of the image upload. Is it copying the full size of Partition 2, or just the used space? Next question: what kind of hard drive is the 500 GB? Meaning, is it SATA/AHCI or SATA/IDE in the BIOS? Is it a solid state, solid state hybrid, or regular platter drive?
This is important because, in some of my testing, I’ve found that hybrid SSDs in particular can cause my kernel to crash, since it can’t open a solid read/write stream to the drive. It keeps hanging and will give this pigz error anywhere between 2 and 10 minutes into the image upload cycle.
-
It’s a WD 500 GB SATA 3 (6 Gb/s, 16 MB cache) drive (WD5000AAKX), and it looks like a regular run-of-the-mill platter drive to me.
The only thing that looks odd to me is the “Available space for image: … 1.66 GiB” line.
-
Something odd seems to be happening. The image file size says it’s only 3.97 GB, but the data being copied is 173.88 GB? Does that sound right to you? It looks like it’s copying up to the image file size (3.97 GB) and then aborting the upload because it’s reached that size. What if you try creating a new image as Multiple Partition, All Disks, Non-resizable, assign that image to the machine, and then try to upload?
-
Actually, the “Image file size” continues to grow. I just did the all-disks option, with the same (similar) results.
-
Can you try a system with a smaller image/partition table? Maybe even a different system altogether? That will tell you whether the problem is the client machine or your fog server.
Also, are you sure you have 1.2 T available? This sounds like your storage location is full.
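If it helps, something like this on the fog server will show what’s actually free at the storage location (adjust /images to wherever your images live; it’s the FOG default):
df -h /images # free space on the filesystem holding the image store
du -sh /images # how much the existing images already take up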
-
Yes, Tom, I think that is part of the problem. I’ve looked at the server settings, but I don’t see where it’s misconfigured. The strange thing is that previously I didn’t have any problems; it’s worked great for more than 18 months or so. I guess I got smacked by an update or two.
-Chuck
-
From the sounds of it, your storage node is full, which would explain why it stops imaging at very nearly the same point every time. Maybe try adding some storage space if you can, or add another storage node that has space available.
-
I’ve checked the disks, and they look good. /storage shows 1.2T Available (and / shows 120 G available)
Somehow, the system doesn’t register what’s available. Any ideas where to look on the server? I’ve already gone through Storage Management -> All Storage Nodes and double-checked those settings, along with the config in /opt/fog/service/etc/config.php and /var/www/fog/commons/config.php.
Would the memory_limit, post_max_size, and upload_max_filesize all set at 1900M in /etc/php5/apache2/php.ini cause a problem? (I tried to image a 325 G drive today and I still ran into a problem…)
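For reference, here’s the quick way I pulled those values:
grep -E 'memory_limit|post_max_size|upload_max_filesize' /etc/php5/apache2/php.ini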
Thanks,
Chuck
-
Does your fog server actually use /storage as the location for the images? Or is it the typical /images directory setup? If it’s /images, it’s trying to use your root filesystem (120 G) to store more data than it can hold, which would fail. A workaround would be to move /images to /storage and then link /storage/images back to /images, which could be done with:
mv /images /storage # /images now lives at /storage/images
ln -s /storage/images /images # leave a symlink at the old path
You wouldn’t have to make any configuration changes then. The other thing to check would be the /etc/exports file, to see what your NFS server is exporting for the image store. My guess is it’s still exporting /images even if your configuration is pointing to /storage.
To fix that, you’d simply change the path references to /storage and /storage/dev (assuming that’s how your system is set up).
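For example, keeping the option flags FOG uses by default, the entries would look something like this (adjust the paths to your layout):
/storage *(ro,sync,no_wdelay,insecure_locks,no_root_squash,insecure)
/storage/dev *(rw,sync,no_wdelay,no_root_squash,insecure)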
Also check that your .mntcheck files exist. They don’t actually contain any information, but they need to be present. You can create them by typing:
touch /images/.mntcheck /images/dev/.mntcheck # default /images layout
touch /storage/.mntcheck /storage/dev/.mntcheck # if your store lives under /storage
chmod -R +x /images /storage
Of course, remove the commands you don’t need, as I don’t know what your particular setup requires.
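A quick way to confirm they’re all in place afterwards (again, drop the paths that don’t apply):
ls -a /images /images/dev /storage /storage/dev | grep mntcheck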
Then try again.
-
One last note,
If you do end up changing the /etc/exports file, make sure to restart the NFS server.
I think a simple:
/etc/init.d/nfs-kernel-server restart
(that’s the script name on Ubuntu; on Red Hat-based systems it’s just nfs) will do the trick.
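If you’d rather not bounce the whole service, re-exporting should also pick up the change (exportfs ships with the NFS server tools):
exportfs -ra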
-
Hey Tom,
Yes, I do have everything pointing to /storage… actually /storage/images, and when I create new images, the new files are written there. For example, when I created the all-disks image you suggested, it did the following:
/storage/images/4300AllDisk contains: d1.mbr, d1p1.img, d1p2.img, d1p3.img, d1p4.img
The only thing is that d1p2.img is only 2 GB for a 456.7 GB partition.
(I’ve checked /images and the only files there are the /dev directory and the .mntcheck file)
.mntcheck is present in /storage/images and set to 777 root:root
Oh, and /etc/exports looks ok to me with:
/storage/images *(ro,sync,no_wdelay,insecure_locks,no_root_squash,insecure)
/storage/images/dev *(rw,sync,no_wdelay,no_root_squash,insecure)
Thanks again,
Chuck
-
Then the next question is… is /storage a RAID system with a possible bad drive? Beyond that, I don’t know what else to check, and I’m sorry.
-
Thanks, Tom! That may very well be the problem. It appears I’m getting conflicting data…
vgdisplay shows only 68.70 GiB free, while df -h shows 1.2 T available.
-Chuck
-
So is /storage a part of an LVM?
What is the output of vgdisplay?
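Keep in mind that those two numbers measure different things: vgdisplay’s “Free PE / Size” is space in the volume group that hasn’t been allocated to any logical volume, while df shows free space inside a filesystem on one of those volumes. A quick way to line the pieces up (these are standard lvm2 tools):
vgs # VFree = extents not yet handed to any LV
lvs # the logical volumes carved out of each VG
df -h /storage # free space inside the filesystem on the LV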
-
Here’s what my output looks like!
— Volume group —
VG Name vg_mastavirtual
System ID
Format lvm2
Metadata Areas 5
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 5
Act PV 5
VG Size 4.16 TiB
PE Size 4.00 MiB
Total PE 1089666
Alloc PE / Size 1089666 / 4.16 TiB
Free PE / Size 0 / 0
VG UUID Y86n60-8kTN-Akpl-Ixkm-jcUK-iqNG-UIFXV6
Then a pvdisplay shows:
[root@mastavirtual ~]# pvdisplay
— Physical volume —
PV Name /dev/sda1
VG Name vg_mastavirtual
PV Size 465.76 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 119234
Free PE 0
Allocated PE 119234
PV UUID NQYnUI-WNc1-4usj-uMSO-ZLTc-4O7k-pWWx2M
— Physical volume —
PV Name /dev/sdb1
VG Name vg_mastavirtual
PV Size 698.64 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 178850
Free PE 0
Allocated PE 178850
PV UUID aBQhCX-0A1r-EupN-bJsv-tAWf-HEdA-soXPfC
— Physical volume —
PV Name /dev/sdc1
VG Name vg_mastavirtual
PV Size 298.09 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 76310
Free PE 0
Allocated PE 76310
PV UUID cE19Nu-51wP-UdvP-K8ak-BeFA-PLtd-cr4h2l
— Physical volume —
PV Name /dev/sdd1
VG Name vg_mastavirtual
PV Size 931.51 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID Th8qJh-j1Y8-g20V-xlL9-Jkqv-WsRl-3T5gKy
— Physical volume —
PV Name /dev/sde2
VG Name vg_mastavirtual
PV Size 1.82 TiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 476806
Free PE 0
Allocated PE 476806
PV UUID fUVEMi-fdqY-1QgK-yRVY-oU75-5oBb-XtmCZD
-
Yes, /storage (specifically /storage/images) is part of the LVM.
Here’s the output:
root@kiri1:~# pvdisplay
— Physical volume —
PV Name /dev/sdb1
VG Name kiri2
PV Size 1.36 TiB / not usable 1.29 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 357587
Free PE 17587
Allocated PE 340000
PV UUID glQE6b-qwEH-LQo3-k2an-owfH-OPvy-PXnj1I
— Physical volume —
PV Name /dev/sda5
VG Name kiri1
PV Size 136.46 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 34933
Free PE 6
Allocated PE 34927
PV UUID GUvasn-41Eh-ozeI-bZnZ-UaXV-HsYC-r1C9MI
root@kiri1:~# vgdisplay
— Volume group —
VG Name kiri2
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.36 TiB
PE Size 4.00 MiB
Total PE 357587
Alloc PE / Size 340000 / 1.30 TiB
Free PE / Size 17587 / 68.70 GiB
VG UUID Agux9x-QG3h-O5Gg-WZrc-wuDh-4oGO-twHaDg
— Volume group —
VG Name kiri1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 136.46 GiB
PE Size 4.00 MiB
Total PE 34933
Alloc PE / Size 34927 / 136.43 GiB
Free PE / Size 6 / 24.00 MiB
VG UUID NRCufd-StrG-CQb0-87bS-2ehH-7Yea-PSMhKM
root@kiri1:~# lvdisplay
— Logical volume —
LV Name /dev/kiri2/storage
VG Name kiri2
LV UUID 75YgXN-ucDL-pgvA-E6lm-h6LK-sSTz-urI94a
LV Write Access read/write
LV Status available
# open 1
LV Size 1.30 TiB
Current LE 340000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
— Logical volume —
LV Name /dev/kiri1/root
VG Name kiri1
LV UUID 2Z2aY7-qkTk-gu1y-j8qt-x9gO-IObv-f7Ht3M
LV Write Access read/write
LV Status available
# open 1
LV Size 130.86 GiB
Current LE 33499
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1
— Logical volume —
LV Name /dev/kiri1/swap_1
VG Name kiri1
LV UUID u4ZjMQ-j9IO-n8Pt-GP5p-uGzv-R76q-HQmt0f
LV Write Access read/write
LV Status available
# open 2
LV Size 5.58 GiB
Current LE 1428
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2
root@kiri1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/kiri1-root 129G 3.0G 120G 3% /
udev 5.9G 4.0K 5.9G 1% /dev
tmpfs 2.4G 336K 2.4G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 5.9G 0 5.9G 0% /run/shm
/dev/mapper/kiri2-storage 1.3T 141G 1.2T 11% /storage
/dev/sda1 228M 47M 170M 22% /boot
-
Do you know of any tools to test the disks within the LVM, to find out whether they’re okay?