Pigz abort



  • Hello all,

    I’m running FOG 0.32 on Ubuntu 12.04.2 LTS, and I’m trying to take a full disk image of an HP 4300 workstation. However, when it gets a minute or two into the second partition (the 500 GB Windows C: drive), I get the error message:

    pigz abort write error on <stdout>

    Any suggestions would be greatly appreciated!



  • Hey Tom, I appreciate you following back up on this. Currently, I do have a drive that has failed in the LVM and am waiting on my boss for a replacement.

    However, I don’t think that was the problem. As soon as I get a replacement drive, I’ll finish the upgrade process from Ubuntu 12.04 LTS to 14.04 LTS and from FOG 0.32 to 1.2.0. (The first one of these temporarily caused GRUB to break…)

    Thanks again,
    Chuck


  • Senior Developer

    Am I understanding what you’re saying correctly? The reason things are failing is because the hosts file doesn’t have 127.0.0.1? Is this on the server or client?



  • FOUND THIS…
    https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/911796
    Make sure that “127.0.0.1 localhost” is in the hosts file.
    If localhost cannot be resolved for some reason, that lookup fails, and nfs-kernel-server is then started with the option
    --no-nfs-version 3
    which reverts to a lower NFS version with a 2 GB file-size limit…
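If that bug is in play, it can be spotted before restarting anything. A minimal sketch, assuming a stock Ubuntu /etc/hosts layout (the helper function and its messages are illustrative, not FOG code):

```shell
# Check that "localhost" resolves via the hosts file; if it cannot be
# resolved, nfs-kernel-server may start with --no-nfs-version 3 and fall
# back to an NFS version with a 2 GB file-size limit (the symptom above).
check_localhost() {
    # $1 = path to a hosts file
    grep -qE '^127\.0\.0\.1[[:space:]]+([^[:space:]]+[[:space:]]+)*localhost([[:space:]]|$)' "$1"
}

if check_localhost /etc/hosts; then
    echo "localhost entry OK"
else
    echo "missing entry; add:  127.0.0.1  localhost"
fi
```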


  • Senior Developer

    No problem and I hope all is working for you now.



  • Ah, I see… that’s good to know! Thank you.


  • Senior Developer

    That’s a problem with partimage, not with the fog system. It is rather annoying, but I don’t know how to get that to correct itself.



  • I’m uploading an image right now. I built a new, simple VM (no bonding, no LVM, simple partitioning with the default /images) running Ubuntu 12.04.2 LTS and FOG 0.32 on a 200 GB drive.

    The really funny thing is, it still reads:
    “Available space for image:…1.66 GiB = 1778888704 bytes”
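As a sanity check (nothing FOG-specific), the byte count in that message really does work out to 1.66 GiB, so FOG is genuinely seeing that little free space rather than mis-printing a larger number:

```shell
# 1778888704 bytes expressed in GiB (1 GiB = 2^30 bytes):
awk 'BEGIN { printf "%.2f GiB\n", 1778888704 / (1024 * 1024 * 1024) }'
# prints: 1.66 GiB
```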


  • Senior Developer

    Have you had any luck, or is it still rebuilding?



  • Hey Tom, I appreciate you going through all the steps with me. I just wanted to rule out something silly that I may have missed. I’m currently waiting to see if I can establish another storage node.
    -Chuck


  • Senior Developer

    Well, I hope this works for you. I’m sorry I wasn’t much help, but it sounds like it was probably just a bad drive causing all of this headache. To test, while this one is rebuilding, do you have another storage node that has enough space that you could test with?



  • pvck and vgck? Right now, I’ve replaced one of the drives on sda5 and one of the drives on sdb1. They will need some time to rebuild.
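For reference, those metadata checks (plus a disk-level health check, since the LVM tools only validate metadata) might look like the following. The device and VG names are taken from the pvdisplay/vgdisplay output elsewhere in this thread, and smartctl comes from the smartmontools package:

```shell
# Read-only LVM metadata consistency checks:
pvck /dev/sda5 /dev/sdb1    # physical-volume metadata
vgck kiri1 kiri2            # volume-group metadata

# The LVM checks don't exercise the disks themselves; SMART does:
smartctl -H /dev/sda        # overall health self-assessment
smartctl -H /dev/sdb
```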


  • Senior Developer

    Do you know of any tools to test the disks within the LVM to find out if your disks are okay?



  • Yes, /storage (specifically /storage/images) is part of the LVM.

    Here’s the output:

    root@kiri1:~# pvdisplay
    — Physical volume —
    PV Name /dev/sdb1
    VG Name kiri2
    PV Size 1.36 TiB / not usable 1.29 MiB
    Allocatable yes
    PE Size 4.00 MiB
    Total PE 357587
    Free PE 17587
    Allocated PE 340000
    PV UUID glQE6b-qwEH-LQo3-k2an-owfH-OPvy-PXnj1I

    — Physical volume —
    PV Name /dev/sda5
    VG Name kiri1
    PV Size 136.46 GiB / not usable 2.00 MiB
    Allocatable yes
    PE Size 4.00 MiB
    Total PE 34933
    Free PE 6
    Allocated PE 34927
    PV UUID GUvasn-41Eh-ozeI-bZnZ-UaXV-HsYC-r1C9MI

    root@kiri1:~# vgdisplay
    — Volume group —
    VG Name kiri2
    System ID
    Format lvm2
    Metadata Areas 1
    Metadata Sequence No 2
    VG Access read/write
    VG Status resizable
    MAX LV 0
    Cur LV 1
    Open LV 1
    Max PV 0
    Cur PV 1
    Act PV 1
    VG Size 1.36 TiB
    PE Size 4.00 MiB
    Total PE 357587
    Alloc PE / Size 340000 / 1.30 TiB
    Free PE / Size 17587 / 68.70 GiB
    VG UUID Agux9x-QG3h-O5Gg-WZrc-wuDh-4oGO-twHaDg

    — Volume group —
    VG Name kiri1
    System ID
    Format lvm2
    Metadata Areas 1
    Metadata Sequence No 3
    VG Access read/write
    VG Status resizable
    MAX LV 0
    Cur LV 2
    Open LV 2
    Max PV 0
    Cur PV 1
    Act PV 1
    VG Size 136.46 GiB
    PE Size 4.00 MiB
    Total PE 34933
    Alloc PE / Size 34927 / 136.43 GiB
    Free PE / Size 6 / 24.00 MiB
    VG UUID NRCufd-StrG-CQb0-87bS-2ehH-7Yea-PSMhKM

    root@kiri1:~# lvdisplay
    — Logical volume —
    LV Name /dev/kiri2/storage
    VG Name kiri2
    LV UUID 75YgXN-ucDL-pgvA-E6lm-h6LK-sSTz-urI94a
    LV Write Access read/write
    LV Status available

    # open 1

    LV Size 1.30 TiB
    Current LE 340000
    Segments 1
    Allocation inherit
    Read ahead sectors auto

    - currently set to 256
    Block device 252:0

    — Logical volume —
    LV Name /dev/kiri1/root
    VG Name kiri1
    LV UUID 2Z2aY7-qkTk-gu1y-j8qt-x9gO-IObv-f7Ht3M
    LV Write Access read/write
    LV Status available

    # open 1

    LV Size 130.86 GiB
    Current LE 33499
    Segments 1
    Allocation inherit
    Read ahead sectors auto

    - currently set to 256
    Block device 252:1

    — Logical volume —
    LV Name /dev/kiri1/swap_1
    VG Name kiri1
    LV UUID u4ZjMQ-j9IO-n8Pt-GP5p-uGzv-R76q-HQmt0f
    LV Write Access read/write
    LV Status available

    # open 2

    LV Size 5.58 GiB
    Current LE 1428
    Segments 1
    Allocation inherit
    Read ahead sectors auto

    - currently set to 256
    Block device 252:2

    root@kiri1:~# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/kiri1-root 129G 3.0G 120G 3% /
    udev 5.9G 4.0K 5.9G 1% /dev
    tmpfs 2.4G 336K 2.4G 1% /run
    none 5.0M 0 5.0M 0% /run/lock
    none 5.9G 0 5.9G 0% /run/shm
    /dev/mapper/kiri2-storage 1.3T 141G 1.2T 11% /storage

    /dev/sda1 228M 47M 170M 22% /boot


  • Senior Developer

    Here’s what my output looks like!
    — Volume group —
    VG Name vg_mastavirtual
    System ID
    Format lvm2
    Metadata Areas 5
    Metadata Sequence No 4
    VG Access read/write
    VG Status resizable
    MAX LV 0
    Cur LV 3
    Open LV 3
    Max PV 0
    Cur PV 5
    Act PV 5
    VG Size 4.16 TiB
    PE Size 4.00 MiB
    Total PE 1089666
    Alloc PE / Size 1089666 / 4.16 TiB
    Free PE / Size 0 / 0
    VG UUID Y86n60-8kTN-Akpl-Ixkm-jcUK-iqNG-UIFXV6

    Then a pvdisplay shows:


    [root@mastavirtual ~]# pvdisplay
    — Physical volume —
    PV Name /dev/sda1
    VG Name vg_mastavirtual
    PV Size 465.76 GiB / not usable 3.00 MiB
    Allocatable yes (but full)
    PE Size 4.00 MiB
    Total PE 119234
    Free PE 0
    Allocated PE 119234
    PV UUID NQYnUI-WNc1-4usj-uMSO-ZLTc-4O7k-pWWx2M

    — Physical volume —
    PV Name /dev/sdb1
    VG Name vg_mastavirtual
    PV Size 698.64 GiB / not usable 3.00 MiB
    Allocatable yes (but full)
    PE Size 4.00 MiB
    Total PE 178850
    Free PE 0
    Allocated PE 178850
    PV UUID aBQhCX-0A1r-EupN-bJsv-tAWf-HEdA-soXPfC

    — Physical volume —
    PV Name /dev/sdc1
    VG Name vg_mastavirtual
    PV Size 298.09 GiB / not usable 4.00 MiB
    Allocatable yes (but full)
    PE Size 4.00 MiB
    Total PE 76310
    Free PE 0
    Allocated PE 76310
    PV UUID cE19Nu-51wP-UdvP-K8ak-BeFA-PLtd-cr4h2l

    — Physical volume —
    PV Name /dev/sdd1
    VG Name vg_mastavirtual
    PV Size 931.51 GiB / not usable 4.00 MiB
    Allocatable yes (but full)
    PE Size 4.00 MiB
    Total PE 238466
    Free PE 0
    Allocated PE 238466
    PV UUID Th8qJh-j1Y8-g20V-xlL9-Jkqv-WsRl-3T5gKy

    — Physical volume —
    PV Name /dev/sde2
    VG Name vg_mastavirtual
    PV Size 1.82 TiB / not usable 4.00 MiB
    Allocatable yes (but full)
    PE Size 4.00 MiB
    Total PE 476806
    Free PE 0
    Allocated PE 476806
    PV UUID fUVEMi-fdqY-1QgK-yRVY-oU75-5oBb-XtmCZD


  • Senior Developer

    So is /storage a part of an LVM?

    What is the output of vgdisplay?



  • Thanks, Tom! That may very well be the problem. It appears I’m getting conflicting data…
    vgdisplay shows only 68.70 GiB free, while df -h shows 1.2 T available.

    -Chuck
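For what it’s worth, those two numbers measure different layers, so they can legitimately disagree: vgdisplay’s free figure is volume-group space not yet allocated to any logical volume, while df reports free space inside the filesystem on the LV. The vgdisplay figure can be reproduced from the extent counts in the output above (a quick sanity check, nothing FOG-specific):

```shell
# Free PE (17587) times PE size (4 MiB), converted to GiB:
awk 'BEGIN { printf "Free in VG: %.2f GiB\n", 17587 * 4 / 1024 }'
# prints: Free in VG: 68.70 GiB
```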


  • Senior Developer

    Then the next question is: is /storage a RAID system with a possible bad drive? I don’t know what else to check, and I’m sorry.



  • Hey Tom,

    Yes, I do have everything pointing to /storage… actually /storage/images, and when I try to create new images, the new files are written there. For example, when creating the all-disk image you suggested, it did the following:
    /storage/images/4300AllDisk contains: d1.mbr, d1p1.img, d1p2.img, d1p3.img, d1p4.img

    The only thing is that d1p2.img is only 2 GB for a 456.7 GB partition.

    (I’ve checked /images and the only files there are the /dev directory and the .mntcheck file)

    .mntcheck is present in /storage/images and set to 777 root:root

    Oh, and /etc/exports looks OK to me with:
    /storage/images *(ro,sync,no_wdelay,insecure_locks,no_root_squash,insecure)
    /storage/images/dev *(rw,sync,no_wdelay,no_root_squash,insecure)

    Thanks again,
    Chuck
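Those exports lines look right for FOG 0.32. A small illustrative way to sanity-check them without touching the live config (the file contents are copied from the post; the helper is just an anchored grep, not a FOG tool):

```shell
# Illustrative check that an exports file shares a given path.
exports_has() {
    # $1 = exports file, $2 = exported directory
    grep -qE "^$2[[:space:]]" "$1"
}

tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/storage/images *(ro,sync,no_wdelay,insecure_locks,no_root_squash,insecure)
/storage/images/dev *(rw,sync,no_wdelay,no_root_squash,insecure)
EOF

exports_has "$tmp" /storage/images     && echo "/storage/images exported"
exports_has "$tmp" /storage/images/dev && echo "dev export present"
rm -f "$tmp"
```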


  • Senior Developer

    One last note,

    If you do end up changing the /etc/exports file, make sure to restart the NFS server.

    I think a simple:
    /etc/init.d/nfs restart
    will do the trick.
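One small caveat: on Ubuntu 12.04 the NFS init script is named nfs-kernel-server rather than nfs (the /etc/init.d/nfs path above is the Red Hat-style name), so the equivalent commands, assuming the stock Ubuntu packages, would be:

```shell
sudo exportfs -ra                        # re-read /etc/exports without a full restart
sudo service nfs-kernel-server restart   # restart the NFS server itself
showmount -e localhost                   # verify the exports are actually served
```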

