FOG Storage node add time of day bandwidth restrictions


  • Moderator

    Add the ability to set bandwidth restrictions for replication based on time of day. I can see a need to define a single time range (such as 8a to 5p) where we might want to restrict replication traffic to 1MB/s, and outside that range replicate data at 10MB/s. This would be needed in a WAN/VPN setup where the storage node might be located behind a congested link. To extend this concept a bit more, setting 0MB/s would disable replication during the time range. You would want to do this if your storage node was behind a slow MPLS link.

    I can see issues since we are moving large files: if a time boundary passes but the file hasn’t finished transferring, the file would continue to send at the old transfer rate until it finishes copying. If rsync were used to move files instead of the current ftp process, the in-flight transfer could be aborted when a time boundary is passed and then restarted with the new bandwidth restrictions; rsync would continue moving the file from where it left off before the transfer was aborted.



  • I think anything that includes a way of killing existing FTP instances is dangerous…

    lftp is not only used for imaging; it’s used for transferring uploaded images from /images/dev to /images, for reporting how much disk space is being used on the server and the size of images, for deleting images (which takes a long time on Ext3), and for downloading the kernel and init…

    I could just imagine some poor tech coming in to work early to update FOG and… right at the end of the installer, when the new kernel and init are being downloaded, a cron event fires off and destroys the FTP transfer… and the poor technician has no idea it even happened…

    You’d have to schedule this way, way outside of operating hours; having it run AT the start time of the day is dangerous, and having it run AT the exact end of the day is dangerous.

    You might try time-based bandwidth shaping to slow down the fog-to-fog transfers over your WAN; you might get a much more stable and reliable method of controlling it that way.


  • Moderator

    It would be interesting to see if this could be managed from within the application.

    I could see just adding a TOD range to the gui for the storage node, then updating the FOG Replicator service to look at that time range when it starts the replication transfer to that storage node. From the outside it looks trivial to add. :wink:
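    The rate-picking logic the replicator would need is small. A sketch in shell, assuming a hard-coded 8a–5p window and made-up rates (a per-node range stored in the gui would replace the constants; 0 would mean “skip replication entirely”):

```shell
#!/bin/sh
# Hypothetical helper: return the bandwidth cap (KB/s) for a given hour
# of day.  The 8-17 window and both rates are illustrative only.
limit_for_hour() {
    hour=$1
    if [ "$hour" -ge 8 ] && [ "$hour" -lt 17 ]; then
        echo 1000      # business hours: throttled
    else
        echo 10000     # off-hours: full speed
    fi
}

limit_for_hour "$(date +%H)"
```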



  • @george1421

    I think killing any existing lftp instances is possible with scripting.

    for instance,

    kill -9 $(pidof lftp)
    

    Or this way:

    kill $(ps aux | grep '[l]ftp' | awk '{print $2}')
    

    Shamelessly Googled and found here:
    http://askubuntu.com/questions/239923/shell-script-to-9-kill-based-on-name
    http://stackoverflow.com/questions/3510673/find-and-kill-a-process-in-one-line-using-bash-and-regex
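    For what it’s worth, pkill can replace the whole ps/grep/awk pipeline and avoids the classic pitfall of the grep matching its own process. A sketch, assuming you really do want every lftp on the box gone:

```shell
#!/bin/sh
# Ask lftp to exit cleanly first, then force-kill any stragglers.
# -x matches the exact process name, so commands that merely mention
# "lftp" in their arguments are left alone.  pkill returns non-zero
# when nothing matched, hence the || true.
pkill -x lftp || true      # SIGTERM: give lftp a chance to clean up
sleep 5
pkill -9 -x lftp || true   # SIGKILL: anything still hanging around
```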


  • Moderator

    @Wayne-Workman said:

    @george1421 You can. You can get very specific with cron-tab events…

    Ugh, sorry I am guilty of reading too fast. I read cron-tab as cross-tab so I was stuck trying to understand what you meant by a cross tab query.

    Yes, you are correct, it can be done with cron. But these jobs would need to be managed from within the FOG console; you wouldn’t want most users poking around setting up cron jobs. And doing it with cron wouldn’t abort a current transfer or notify any of the FOG services that something happened, because you are poking right into the database.




  • Moderator

    @Wayne-Workman said:

    I think one could write a cron-tab event to run two scripts…

    As long as you could fire that script at a specific TOD and then revert the setting to the default transfer rate once the premium time range has passed.


  • Moderator

    I don’t know if the transfer percent is available in the code. If it was, then sure, we would want it to continue. But then where do you draw the line? What happens if we are at 90%, or 80%? When would you decide to abort vs. continue?

    My recommendation would be to use rsync, because even if we were at 94%, if you abort rsync and start it up again it will skip ahead to where it left off and continue at the changed transfer speed.



  • I think one could write a cron-tab event to run two scripts…

    The scripts would just contain something like this:

    mysql fog -e "UPDATE nfsGroupMembers SET ngmBandwidthLimit = 10000 WHERE ngmMemberName='DefaultMember';"
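    Those two scripts could then be scheduled straight from cron. A hypothetical /etc/cron.d entry (the file name and script paths are made up; each script would run the UPDATE above with its own limit value):

```shell
# /etc/cron.d/fog-bwlimit (hypothetical): throttle at 8a, restore at 5p,
# weekdays only.
# m  h   dom mon dow  user command
0    8   *   *   1-5  root /opt/fog/bin/fog-bwlimit-day.sh
0    17  *   *   1-5  root /opt/fog/bin/fog-bwlimit-night.sh
```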
    


  • Would you want to abort a transfer when it passes a time boundary… say… when the transfer is already 94% complete?


 
