@ramone As far as I am aware, no one ever volunteered to take up maintenance of the Docker image. It’s essentially dead.
I think it’s possible in theory; you would just need volumes for the FOG directories that must persist between updates, like the database and images, though there would surely be other fun issues with ports to work out. I can see the appeal if you’re in an environment where containers are already a standard part of your infrastructure, but I like having FOG on its own server.
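To make that concrete, here is a rough sketch of what the volume layout might look like in a compose file. This is purely hypothetical: `fogproject/fog` is a placeholder image name (no maintained image exists, as noted above), and the paths just mirror a default FOG install.

```yaml
# Hypothetical sketch only; there is no official FOG image.
services:
  fog:
    image: fogproject/fog:latest   # placeholder image name
    network_mode: host             # FOG wants DHCP/TFTP/PXE ports on the LAN
    volumes:
      - fog-db:/var/lib/mysql      # database must survive container updates
      - /images:/images            # image store, often its own disk
      - fog-config:/opt/fog        # .fogsettings, snapins, etc.
volumes:
  fog-db:
  fog-config:
```

Host networking sidesteps some of the port-mapping fun, but it also means the container competes with the host for those ports, which loops back to the database-port question below.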
Is it not an option to start with a docker image that doesn’t already have a database on the default port? Or are you saying the docker host already has a database on said port?
I’m also sure we could figure out using an external database, since storage nodes already connect to an external database. Using Docker for storage nodes might make some sense, as you could put them all on one server and use volumes to mount disks from different sources.
However, the more virtualization and containerization you add, the more complications arise. Even just on a virtual server you may not be able to use multicast imaging unless you can enable IGMP snooping in your virtual networking. I don’t know if containers have that same limitation, or other limitations that could be introduced.
This isn’t really a great answer I realize, and I apologize for that, but there’s a lot to consider with changing infrastructure.
Anyway, something you might try is to create a /opt/fog/.fogsettings file before installing and put in these settings:
snmysqlpass='password'
snmysqlhost='remoteHost'
snmysqluser='fogmaster'
mysqldbname='fog'
Then try the installer. I have no idea whether it will work, but it’s something to try as far as using an external database goes.
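For example, the file could be seeded from the shell before running the installer. The values are the ones above; `/tmp/fog-demo` is just a stand-in directory so the sketch runs without root, and on a real server you would use /opt/fog instead.

```shell
# Sketch: pre-seed .fogsettings before the FOG installer runs.
# FOG_DIR would be /opt/fog on a real server (run as root there).
FOG_DIR="${FOG_DIR:-/tmp/fog-demo}"
mkdir -p "$FOG_DIR"

cat > "$FOG_DIR/.fogsettings" <<'EOF'
snmysqlpass='password'
snmysqlhost='remoteHost'
snmysqluser='fogmaster'
mysqldbname='fog'
EOF

# The file holds a database password, so lock it down.
chmod 600 "$FOG_DIR/.fogsettings"
```

The installer reads .fogsettings if it exists, so in theory it would pick up the remote host instead of trying to set up a local MySQL, but again, untested.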