SOLVED: /images “.” is at 265+ GB, keeps growing, and is crashing several nodes

  • Server
    • FOG Version: 1.3.3 (most nodes also on 1.3.3)

    • OS:

  • Client
    • Service Version:
    • OS:

    I have a setup with a master and 16 storage nodes. I came in this morning to find 8 nodes down due to lack of space. On one of the servers (probably all of them), /images contains a “.” entry at 268 GB. I can’t bring the servers back up, because as soon as I free space it gets used again. I think I know the offending image, and I’ve removed it from the group it replicates to. I’m logged into the node that’s causing the issue, but I’m not sure how to clean up the “.” file.

    Any advice would be very welcome.


  • This is solved from a FOG point of view.

    The problem was that a user replicated several large images overnight, and Hyper-V expanded the VM’s disk until it choked the main servers.

    Thank you for all of your help @Sebastian-Roth!

  • Senior Developer

    @Wayne-Workman We are working on this in chat. Cleaned it mostly up already.

  • @lpetelik Look at the output of ls -laht /images on that node; it should tell you where the space is going. You can also try du -sh /images/* for per-entry totals, and check the output of df -h for partitions at 100% use.
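    As a minimal sketch of those triage commands, run here against a throwaway scratch directory rather than a real /images (substitute the real path on the node; the file names below are made up for the demo):

    ```shell
    # Hedged sketch: a scratch directory stands in for /images.
    scratch=$(mktemp -d)
    dd if=/dev/zero of="$scratch/suspect.img" bs=1M count=5 status=none
    dd if=/dev/zero of="$scratch/small.img" bs=1K count=4 status=none

    # Newest entries first, with human-readable sizes:
    ls -laht "$scratch"

    # Per-entry totals, so one huge image stands out immediately:
    du -sh "$scratch"/*

    # Partition-level view; a 100% Use% column marks the full node:
    df -h "$scratch"

    # Biggest files first, to pinpoint the offending image:
    find "$scratch" -type f -printf '%s\t%p\n' | sort -rn | head

    rm -rf "$scratch"
    ```

    Note that find -printf is a GNU find extension, which is what ships on the Linux distributions FOG typically runs on.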

  • Senior Developer

    @LPetelik Trying to contact you on forum chat…

    By the way, “.” is just the current directory, so a growing “.” means the sum of all subdirectories within your /images folder is growing, or possibly there is simply a large file sitting directly in /images.
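    To illustrate the point with a small sketch in a scratch directory: sizing “.” is the same as sizing the directory’s entire contents, while the directory entry itself is tiny. There is no “.” file to delete; space is reclaimed by removing the oversized images inside the directory.

    ```shell
    # "." is the directory itself, so `du` on "." totals everything under it.
    d=$(mktemp -d)
    dd if=/dev/zero of="$d/image.img" bs=1M count=3 status=none

    cd "$d"
    du -sh .     # roughly 3M: the sum of the directory's contents
    ls -ld .     # the directory inode itself, typically only a few KB
    cd / && rm -rf "$d"
    ```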