SOLVED Display error in statusbar of active deployment tasks

  • Hi,

    the amount of data written and the size of the partition/hard drive (160 GB/149 GiB) are displayed twice (in a different order, and truncated after the first digit of the hard disk size, I think) in the status bar on mouse-over:
    0_1457093275036_2016-03-04 - FOG Task details.png

    The image deployed is “Multiple Partition Image - Single Disk (Not Resizable)” in unicast mode.
    The FOG version used for upload and download was svn4929 (git6547), but the display was already like this in earlier (FOG 1.3) trunk versions.

    Maybe it would also be possible to show the total amount of data to be written instead of the disk size, and to display GB and GiB correctly, if that is not too much to change.


  • @Tom-Elliott @Sebastian-Roth sorry for the late reply - I now tested it with git6907 and the display is fine now.

    Also thanks for correcting the hard disk related prefixes to binary.

  • I’ve marked this thread as solved, as I fully believe this is now fixed. Of course, a test and report would be handy to verify, and we can change the “solved” status as necessary.

  • Senior Developer

    @tian Can you please update to the very latest version (plus run the installer) and see if things are fixed? Tom just uploaded the new inits…

  • Senior Developer

    Well, it’s pretty simple now that we know what’s causing this. It’s our good old friend in C, the buffer overflow (a too-small buffer combined with an unsafe format-string function - the classic). It’s not as bad as crashing partclone, but it still overwrites other char arrays and therefore messes things up! It’s actually not native to partclone but was added with the patch we apply to it. 😢

    @Tom-Elliott As it doesn’t make much sense to diff patch files (it looks really confusing), I posted a completely new patch here (raw), kept as similar to the old patch file as possible. Hope you can easily see what changed - look for “snprintf” and “max_len”…

    Possibly this will fix the “progress bar hang on 100 GB issue” as well??
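    For anyone following along, here is a minimal sketch of the bug class described above. This is not partclone’s actual code; the names (`MAX_LEN`, `format_safe`) are illustrative, with `MAX_LEN` standing in for the patch’s `max_len` bound.

    ```c
    /* A too-small buffer plus sprintf() lets a long formatted value run
     * past the end of its array; snprintf() bounds the write. Sketch only,
     * not partclone source. */
    #include <stdio.h>

    #define MAX_LEN 32  /* hypothetical bound, in the spirit of max_len */

    /* Unsafe: sprintf() writes however many bytes the format produces,
     * regardless of the destination's size. */
    void format_unsafe(char *out, double bytes) {
        sprintf(out, "%11.2f GB", bytes / 1e9);
    }

    /* Safe: snprintf() writes at most MAX_LEN bytes, including the '\0',
     * so an unexpectedly large value can no longer clobber neighbours. */
    void format_safe(char out[MAX_LEN], double bytes) {
        snprintf(out, MAX_LEN, "%11.2f GB", bytes / 1e9);
    }
    ```

    With two char arrays adjacent in memory, the unsafe variant can spill the tail of one formatted value into the next array, which is exactly the kind of mashed-together fields seen in the /tmp/status.fog dump quoted elsewhere in this thread.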

  • Senior Developer

    I have been able to partly replicate the issue. It seems to happen with huge images, but I think there is more to it.

    Basically I started uploading a big image and saw this in /tmp/status.fog on the client:

      0.00byte@00:00:01@00:01:39@0 B@511.229 GB%11.20 B@  1.00@548928225280.000000
      2.52GB@00:00:01@03:38:06@40.00 MB@511.229 GB%11.240.00 MB@  0.01@548928225280.000000
      2.58GB@00:00:04@03:32:44@164.00 MB@511.229 GB%11.2164.00 MB@  0.03@548928225280.000000
      2.56GB@00:00:07@03:34:10@285.00 MB@511.229 GB%11.2285.00 MB@  0.05@548928225280.000000
      2.57GB@00:00:10@03:33:40@408.00 MB@511.229 GB%11.2408.00 MB@  0.08@548928225280.000000

    So partclone is actually printing the garbled information; it’s not the parsing of numbers going wrong in the web interface.
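    To illustrate why the web side parses garbage rather than failing outright: the dump above suggests ‘@’ is the field separator, and the overflow mangles a field’s contents rather than the separators. A rough sketch of splitting one such line (names are illustrative, this is not FOG’s actual parsing code):

    ```c
    /* Split a status.fog line in place on '@', assuming '@' separates
     * fields as the dump above suggests. Returns the field count. */
    #include <string.h>

    #define MAX_FIELDS 16

    int split_status_line(char *line, char *fields[], int max_fields) {
        int n = 0;
        for (char *tok = strtok(line, "@"); tok && n < max_fields;
             tok = strtok(NULL, "@"))
            fields[n++] = tok;
        return n;
    }
    ```

    Even the corrupted sample lines still split into the same number of fields; the overflow just leaves junk like “511.229 GB%11.240.00 MB” inside one of them, which would presumably explain the doubled/truncated numbers in the status bar.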

  • Senior Developer

    @tian Bump…? Still an issue?

  • Is this still happening? Adjusting the scripts to change what is output is doable, but it means rewriting the code, as this info is not easily accessible from partclone. I actually pull the data from partclone as it’s presented on the screen (which is why you see it in that state). Adjusting the code to get the unformatted values is ultimately pretty simple, but again, it means rewriting some of the source. That said, I’m not experiencing the same issue of “double” data. Is there an exact way to reproduce when this occurs?