Database Connection Unavailable
-
@george1421 Hey George,
I assume the command should be systemctl?
When I changed it to that, it's just blank until I quit out of it -
Running Ubuntu version 16.
-
@dylan123 Yeah that was a typo, sorry. So starting mysql doesn't respond with anything?
What about a restart? And starting it after that doesn't put anything in the error.log? If so, something is not right with your system.
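Something along these lines is what I have in mind, assuming mysql is managed by systemd and uses the default log path on Ubuntu:
sudo systemctl restart mysql.service
sudo systemctl status mysql.service
sudo tail -n 50 /var/log/mysql/error.log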
-
@george1421 So I gave it a restart and tried running the command again, but it failed.
Result when running systemctl status mysql.service
Result when running journalctl -xe
The denied entries seem interesting; not that I know why they'd be getting denied, but I imagine that might be the issue.
I get no return at all though when I run cat /var/log/mysql/error.log
Everything had been running fine for a couple of years, so I'm confused as to what's happened. It must have been an update it's done by itself; I just find it strange that restoring hasn't worked.
-
@dylan123 Ok, mysql is not starting anymore. And it seems to be caused by the security framework apparmor, as we see in the log output. I think I have seen this on one of my test machines once as well, but I don't know what causes it. Probably some part of the package updates that are done when you run the FOG installer.
Quite possibly there was a different issue with mysql at first, and re-running the FOG installer made it worse?!
See if you can make it work using this: https://support.plesk.com/hc/en-us/articles/360004185293-Unable-to-start-MySQL-on-Ubuntu-AVC-apparmor-DENIED-operation-open-
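If I remember right, the gist of it is to disable the mysqld apparmor profile, roughly like this (untested here, and the profile path may differ on your install):
sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
sudo systemctl restart mysql.service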
-
@Sebastian-Roth I gave it a crack but no luck. Well, it seemed to get rid of the permission errors, but I just got a generic 'failed to start' -
– The result is failed.
Oct 29 19:19:24 Fog sudo[1883]: pam_unix(sudo:session): session closed for user
Oct 29 19:19:24 Fog systemd[1]: mysql.service: Unit entered failed state.
Oct 29 19:19:24 Fog systemd[1]: mysql.service: Failed with result ‘exit-code’.
Oct 29 19:19:24 Fog systemd[1]: mysql.service: Service hold-off time over, sched
Oct 29 19:19:24 Fog systemd[1]: Stopped MySQL Community Server.
– Subject: Unit mysql.service has finished shutting down
I'm trying to restore from a VM backup taken 37 days ago, hopefully that does the trick… Will report back.
-
@dylan123 said in Database Connection Unavailable:
I'm trying to restore from a VM backup taken 37 days ago, hopefully that does the trick… Will report back.
Just confirming that this backup worked, thankfully, so that's a good result.
Thanks for the time and efforts as always guys, much appreciated!
-
@dylan123 I was just thinking about it on the way into the office this AM. Being out of space on the root partition would keep mysql from starting. I've done it myself. Now that you restored it from backup, that evidence is kind of gone. But if you issue a
df -h
let's see what your free space is now? -
@dylan123 What about the error logs??
If I remember correctly I did reset my VM to an earlier state as well just to get rid of the issue. But I’d really be interested to know how this happens. I find it really strange.
-
@george1421 said in Database Connection Unavailable:
@dylan123 I was just thinking about it on the way into the office this AM. Being out of space on the root partition would keep mysql from starting. I've done it myself. Now that you restored it from backup, that evidence is kind of gone. But if you issue a
df -h
let's see what your free space is now?
Here's the result of that -
I'm not sure which of the mounts the database sits on, so I can't tell whether it's out of space or not.
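I guess something like this would tell me, assuming the database and the fog images live in their default spots:
df -h /var/lib/mysql
df -h /images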
I still have a backup of the broken state, as my first restore had the same issue, so I can bring that back temporarily to do some testing/troubleshooting and try to pin it down further.
@Sebastian-Roth said in Database Connection Unavailable:
@dylan123 What about the error logs??
If I remember correctly I did reset my VM to an earlier state as well just to get rid of the issue. But I’d really be interested to know how this happens. I find it really strange.
If I run cat /var/log/mysql/error.log I still get no output at all. Running journalctl -xe doesn't show any of the errors it previously did; it ends with 'the start-up result is done.'
-
@dylan123 Sometimes log rotation moves the current log information away. So when you get to take a look at the corrupted state again, see if you can gather some log information by running
zcat /var/log/mysql/error.log.1.gz
(...2.gz
and so on)
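Or, to dump all the compressed rotations at once, something like this should work (assuming the standard logrotate naming):
zcat /var/log/mysql/error.log.*.gz | less
-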
@dylan123 Well, looking at the output it looks like everything is mounted on your root partition. You have ~90GB free space on your entire server. It's possible that 2 system uploads could have filled up your root partition, since your fog images are on the same partition as the OS. Roughly equivalent to storing your images on the drive and filling that up.
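If you want to see what is actually eating the space, something like this should show it (assuming the default /images location and the mysql data under /var/lib/mysql):
sudo du -sh /images /var/lib/mysql /var/log
df -h /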
-
@dylan123
typically mariadb defaults to /var/lib/mysql/ for the data files.
It looks like your boot partition may be full. While this wouldn't necessarily cause you issues with day-to-day operation, it can play havoc with apt if you have a kernel update in the queue when running updates. I have had more than a few update sequences bork due to this, and depending on where it borked, I could see packages with a number of dependencies, such as mariadb, being left in a non-startable state. I would suggest doing an
apt autoremove
to remove the unneeded kernels, but be aware you may need to give apt a little room in /boot by manually deleting an old kernel or two (as sketched at the end of this post), then running apt -f install
(I think) to clean up from the dirty state. I believe Ubuntu generally keeps the current and one or two previous kernels by default.
If you want to see if this might be an issue prior to digging around, an
apt upgrade
should complain at you.
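Roughly what I would do, from memory. The kernel version below is only a placeholder, and never remove the kernel you are currently running:
df -h /boot                          # see how full /boot really is
uname -r                             # the running kernel, leave this one alone
dpkg -l 'linux-image-*' | grep ^ii   # kernels currently installed
sudo rm /boot/*-4.4.0-142-generic    # hypothetical old kernel, just to give apt some room
sudo apt -f install                  # let apt finish whatever it was stuck on
sudo apt autoremove                  # then clear out the remaining old kernels
-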
@dylan123 Reading a bit more about the apparmor stuff I found this: https://stackoverflow.com/a/49583958
I read in another SO thread comment that the apparmor=“DENIED” message probably isn’t the reason that MySQL (or in my case MariaDB) wasn’t starting, as it’s only a warning.
Confirmed by this bug report:
On the affected system, there was no noticeable impact (yet?) other than the denials, so I’d say it’s low impact.
Spinning up a VM with Ubuntu 18.04, I saw that this is quite true: it has those log messages in
/var/log/syslog
on my VM as well, but the DB is running perfectly fine. So just for the record: those apparmor=“DENIED” messages don't have an impact on the DB and can be ignored.
Check all the information George and Daniel posted as well as my suggestion on the mysql logs.
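For anyone who wants to check their own machine for those entries, something like this should list them:
grep 'apparmor="DENIED"' /var/log/syslog | grep mysqld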