API gives 500 error but only for a specific call

  • I’m running Fog 1.5.6 and PowerShell API 1.6.
    Everything was working fine, then somebody had a fiddle and broke stuff (not me).
    I’ve managed to get stuff working again, for the most part, and can do things like call:
    Invoke-FogApi -Method GET
    Get-FogObject -type object -coreObject tasktype
    Get-FogObject -type object -coreObject taskstate
    Get-FogObject -type object -coreObject image
    Get-FogObject -type object -coreObject host
    I can assign images using:
    Invoke-FogApi -uriPath "host/$FogHostId/edit" -Method Put -jsonData (@{"imageID" = $ImageToDeploy.id} | ConvertTo-Json)
    I can start a deployment using:
    New-FogObject -type objecttasktype -IDofObject $FogHost.id -coreTaskObject host -jsonData (@{"taskTypeID" = $DeployTaskId; "taskName" = "Deploy $ImageName"; "wol" = "1"} | ConvertTo-Json)
    All the above works fine.

    The only thing that doesn’t work (anymore) is:
    Get-FogObject -type object -coreObject task
    which gives me a 500 server error.
    I can see the status of tasks via the Web UI, and cancel them. It just seems to be some kind of strange API issue!

    I’ve tried restarting the Fog server, but it made no difference. I don’t really want to upgrade if I don’t have to - this used to work fine!

    Can anyone suggest anything? I’m not sure where to look to start to diagnose this.

    Thanks in advance (and thanks again Tom for your help just now!!)

  • Moderator

    @robincm Sure you can do a full database maintenance/cleanup manually: https://wiki.fogproject.org/wiki/index.php/Troubleshoot_MySQL#Database_Maintenance_Commands

    Not sure if it’s wise to do this on a regular basis because others might want to keep that history for a long time. You can access all of that in the hosts (edit) view -> Image History tab.

  • @george1421 Thanks, but I still think a more elegant solution would be to purge old tasks from the database; that way, instead of returning 1600+ results, the API call would only return a few hundred and memory wouldn’t be a problem.

    I can’t work out how to do this via the Fog GUI - Tasks only shows active tasks, not historic ones. Or am I not looking in the right place?

    I don’t understand why old tasks are being kept in the database if I can’t interact with them in any way?

    Worst case maybe I can run a SQL DELETE command and purge that way? Any idea if this would have any negative impact?
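    For what it’s worth, a sketch of the kind of statement that purge would involve. The table name (tasks) and the column names here are assumptions about FOG’s schema, not confirmed — inspect the schema and take a backup before deleting anything:

    ```sql
    -- Inspect the schema first to confirm table/column names (these are guesses):
    --   SHOW COLUMNS FROM tasks;
    -- Back up the database before any destructive change:
    --   mysqldump fog > fog-backup.sql

    -- Hypothetical purge of task records older than 30 days
    DELETE FROM tasks
    WHERE taskCreateTime < NOW() - INTERVAL 30 DAY;
    ```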

    Or ideally - one for the devs - perhaps Fog could have an option to periodically delete old tasks from the database, or only store maybe x days worth of old tasks?

    Thanks for the help so far everyone!

  • Moderator

    @robincm said in API gives 500 error but only for a specific call:

    No computers with fog agent, I’m not using it.

    OK, then it’s just the size of your API query that is requiring more memory. We have seen that with many FOG clients hitting the database, the database configuration causes everything to slow down, and adding more memory to the php-fpm workers isn’t typically a fix for FOG. But that is not your case: the returned size of the API call is using up the workers’ memory allocation.
    In your case, watch the amount of free RAM when allocating more memory to the php-fpm workers, and make sure your FOG server is not dipping into swap space. That will impact your performance too.
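    A quick way to keep an eye on that with standard Linux tools (nothing FOG-specific):

    ```shell
    # Overall memory headroom: the 'available' column should stay well above zero
    free -m

    # List any swap devices in use; heavy swap usage means the box is under memory pressure
    swapon --show
    ```
    
    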

  • @george1421 No computers with fog agent, I’m not using it.

  • Moderator

    @robincm said in API gives 500 error but only for a specific call:

    API call is now working, albeit a bit slow

    How many computers, with the fog client installed, are communicating with this fog server?

  • @Sebastian-Roth @Tom-Elliott Increasing the PHP memory from 256M to 512M has fixed it and the tasks API call is now working, albeit a bit slow. But I am seeing >1600 tasks being returned, none of which are showing as active tasks in the GUI, so I imagine Fog must keep a record of all historic tasks. Is there a way to clear those old tasks out?
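    A quick way to see how much the call is actually returning, using the same FogApi module as above. The property names on the response (count, tasks) are assumptions based on the shape FOG’s list endpoints usually return, so check the actual object:

    ```powershell
    # Fetch all task records and inspect the size of the result
    $result = Get-FogObject -type object -coreObject task
    $result.count          # total records the API reports (assumed property name)
    $result.tasks.Count    # number of task objects in the returned array (assumed property name)
    ```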

  • Moderator

    @robincm I’d say you may try monitoring the log files while doing an API call, to see if the memory issue actually stems from those API calls: tail -f /var/log/....... (you can add several log file names here to monitor all of them at the same time!)

  • The www-error.log has many recurring entries of “PHP Fatal error: Allowed memory size of 268435456 bytes exhausted”.

    I believe this value, which equates to 256 MB, is set in the file /etc/php-fpm.d/www.conf:

    php_admin_value[error_log] = /var/log/php-fpm/www-error.log
    php_admin_flag[log_errors] = on
    php_admin_value[memory_limit] = 256M

    I will change this last line from 256M to 512M, then restart the server and try again.

    Do you think this seems sensible? I’ve not had to change any RAM values before, though, which makes me think there may be another underlying cause.

  • I’m not fully familiar with how the PS scripts operate.

    But if you’re seeing a 500, chances are the FOG server is also logging the issue (a 500 typically indicates a server-side error).

    Can you provide the FOG server’s error logs:

    /var/log/php-fpm/www-error.log (or very nearly)
    /var/log/httpd/error_log (or very nearly)

    Thank you,