Database Stress when cloning on Big Environments



  • Hi, @Sebastian-Roth .

    I’m here again. I wasn’t sure where to put this thread, because this is not exactly a bug.

    First of all, I currently have 16,500 computers registered on my FOG system, with 60 storage nodes and growing, located in different buildings in different towns; we are 13 technicians.

    I need to point out that the continuous writes to the database to report cloning progress cause a lot of unneeded stress on the database. It may be a good idea to change the notification system, to reduce stress and make the system more scalable.

    Two ideas:
    Easier: notify only at 33%, 66%, 90%, 100%.
    Best: get that information out of the database and store only the cloning data needed for the history.
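The “easier” idea above can be sketched as a simple milestone filter: report only when progress first crosses one of a few fixed thresholds instead of every few seconds. The `report` function and the loop are illustrative, not FOG code.

    ```shell
    # Hypothetical milestone-based reporting: 4 reports instead of ~100.
    report() { echo "report at $1%"; }

    last=0
    for pct in $(seq 1 100); do          # simulated deploy progress, 1%..100%
        for m in 33 66 90 100; do
            if [ "$pct" -ge "$m" ] && [ "$last" -lt "$m" ]; then
                last=$m                  # remember the highest milestone reported
                report "$m"              # fires once per threshold crossing
            fi
        done
    done
    ```

Run as-is, this prints one line each at 33%, 66%, 90% and 100%, so the database would see four writes per partition instead of one every few seconds.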



  • @Sebastian-Roth said in Database Stress when cloning on Big Environments:

    @EduardoTSeoane I’d be very interested to hear if this quick fix helped to ease the DB load. We might consider making this value configurable through the web UI then.

    Good idea. Once I have run some tests in production I’ll post the results here.


  • Developer

    @EduardoTSeoane I’d be very interested to hear if this quick fix helped to ease the DB load. We might consider making this value configurable through the web UI then.



  • @Sebastian-Roth Yeah, I don’t want to get rid of the status reporting; maybe it can be done in another way that doesn’t use the database. I know my system may be oversized, but I want to help improve performance for everyone.

    I’m trying the solution in production; when I have some solid data I’ll post an update on this solution.


  • Developer

    @EduardoTSeoane Thanks for reporting this. At first I thought there couldn’t be enough simultaneous deployments to cause stress on the DB, but with that many storage nodes I suppose it really can happen.

    I’d suggest you try something to see if it eases the stress. Add the following line to your post init script (if you already use a custom post init script, just add it there; otherwise simply add that line to `/images/dev/…`):

    sed -i 's/usleep 3000000$/usleep 30000000/g' /bin/fog.statusreporter

    Status updates will then happen only every 30 seconds (the default is 3 seconds), and therefore ten times fewer updates hit the DB.
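The substitution can be sanity-checked on a throwaway file before touching the real `/bin/fog.statusreporter` inside the init. The sample file name below is illustrative; the pattern and values are the ones from the line above (3000000 µs = 3 s default interval, 30000000 µs = 30 s).

    ```shell
    # Dry run of the quick fix on a sample line instead of the live init.
    printf 'usleep 3000000\n' > /tmp/statusreporter.sample
    sed -i 's/usleep 3000000$/usleep 30000000/g' /tmp/statusreporter.sample
    cat /tmp/statusreporter.sample
    ```

If the `cat` shows `usleep 30000000`, the expression matched and the same `sed` line should be safe to drop into the post init script.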

    Best: get that information out of the database and store only the cloning data needed for the history.

    I don’t think we want to get rid of the status reporting altogether, because this is what updates the progress bar you see in the web UI task view, and people without physical access to the hosts rely on this status.



  • Hehehe, I verified it again!!! @Quazz The task deploy info in the database is updated continuously during deployment. A few computers are not a problem, but with 16,500 machines and 13 technicians working like slaves this can be a cause of database stress.

    At least taskPercentText, taskDataCopied, taskTimeRemaining, taskTimeElapsed and taskBPM in the task table are updated during partition deploy.

    I think this info can help to improve this great solution.


  • Moderator

    I’d say, only update the database itself if a task fails or is completed.

    Keep the webserver progress-update ping as is.
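The moderator’s split can be sketched as two paths: frequent progress pings go to a cheap channel (a log file stands in for the web server here), and the database is touched only at terminal states. All names below are illustrative, not FOG internals.

    ```shell
    # Hypothetical sketch: cheap progress channel vs. rare DB writes.
    DB_WRITES=0

    db_update() {                 # expensive path: one SQL update per call
        DB_WRITES=$((DB_WRITES + 1))
        echo "DB: task $1 -> $2"
    }

    ping_progress() {             # cheap path: never touches the database
        echo "progress $1%" >> /tmp/task.progress
    }

    : > /tmp/task.progress
    for pct in 10 20 30 40 50 60 70 80 90; do
        ping_progress "$pct"      # frequent updates, zero SQL
    done
    db_update 42 completed        # single DB write when the task finishes
    ```

With this shape, nine progress updates produce exactly one database write; the web UI progress bar would read from the cheap channel instead of the task table.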

