Imagine this scenario: you need to practice old-school PHP + MySQL and you have to set up a practice environment.
The first choice is almost obvious (and the one usually mentioned everywhere): installing a local LAMP/MAMP/WAMP stack, which gives you a local Apache server with PHP and MySQL.
But what if you don’t want (or can’t) install that? Well, another obvious option is to use any of the thousands of hosting providers that, for a fee, will give you a working environment with everything ready to host your “apps”.
But what if you want more control? Well, you could run Docker on your own machine and have everything accessible. But, again, another constraint: what if you need access from anywhere and you don’t want to expose your home or work environment to the Internet…? And what if you don’t want Docker running in the background consuming your resources…?
Well, I was in a somewhat similar situation. I needed a practice environment, I wanted full control of it… and it happens that I already had a DigitalOcean project with a Docker Swarm set up there.
So my (initial) approach was: OK, let’s create a Docker stack with all the software I need and configure it in a simple manner (with the goal of improving it later… you’ll see why…), so I came up with this Docker stack file for my “phpenv” service:
```yaml
version: "3.8"

services:
  apache:
    image: php:8.2-apache
    volumes:
      - web_data:/var/www/html
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
    deploy:
      placement:
        constraints: [node.hostname == swarm00]
    networks:
      - internal
    command: >
      bash -c "
      docker-php-ext-install mysqli &&
      apache2-foreground
      "

  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: dont-put-your-root-pass-here
      MYSQL_DATABASE: mydb
      MYSQL_USER: dev
      MYSQL_PASSWORD: dont-put-your-pass-here
    volumes:
      - mysql_data:/var/lib/mysql
    deploy:
      placement:
        constraints: [node.hostname == swarm00]
    networks:
      - internal

  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    environment:
      PMA_HOST: mysql
      PMA_USER: dev
      PMA_PASSWORD: dont-put-your-pass-here
    ports:
      - target: 80
        published: 8081
        protocol: tcp
        mode: host
    deploy:
      placement:
        constraints: [node.hostname == swarm00]
    networks:
      - internal

  sftp:
    image: drakkan/sftpgo:latest
    volumes:
      - web_data:/var/www/html
      - sftp_config:/var/lib/sftpgo
    ports:
      - target: 2022
        published: 2222
        protocol: tcp
        mode: host
      - target: 8080
        published: 9080
        protocol: tcp
        mode: host
    deploy:
      placement:
        constraints: [node.hostname == swarm00]
    networks:
      - internal

volumes:
  web_data:
  mysql_data:
  sftp_config:

networks:
  internal:
    driver: overlay
```
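Deploying the stack is then a one-liner on a manager node. Assuming the file is saved as `phpenv.yml` (the file name and the `phpenv` stack name here are my choices, not anything required):

```shell
# Deploy (or update) the stack; service names become
# phpenv_apache, phpenv_mysql, phpenv_phpmyadmin, phpenv_sftp
docker stack deploy -c phpenv.yml phpenv

# Verify that every service reached its desired replica count
docker stack services phpenv
```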
This kind of worked. I could use SFTP and a nice SFTP web panel to transfer files and manage users (without actually needing SCP). I wanted everything to deploy automatically, but at this point there were items that couldn’t be automated, at least with the resources I had:
- Creating users for SFTPGo. As it was mainly for SFTP access, I just pointed the root directory to /var/www/html, but user creation, including the admin one, had to be done manually.
- Another issue is that for phpMyAdmin to work, you actually need to change one configuration item (set the session to use cookies so you can actually log out) and, most importantly, create several databases, which this stack does not do. This is a little tricky if you only have access via the Docker host: I ended up using docker exec -it <container> sh to run mysql and then the queries to create the phpMyAdmin environment. In that regard, if you log in to phpMyAdmin with a root user, you can skip this: you will get a message indicating that the configuration is not finished, and clicking that link will generate the required tables (probably the easiest solution, but… again, I wanted this to work out of the box).
- I noticed that, when the server restarted, the stack didn’t start right away and I had to start it manually. This is probably unrelated to the stack itself, but it happened.
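For reference, the manual phpMyAdmin fix looked roughly like this. The stack name `phpenv` and user `dev` match the stack file above; the SQL statements are the minimal manual version of what phpMyAdmin’s “create” link does for you, so treat them as a sketch:

```shell
# Find the mysql container running on this node
CID=$(docker ps -q -f name=phpenv_mysql)

# Open a mysql client inside it (prompts for the root password)
docker exec -it "$CID" mysql -u root -p

# From the mysql prompt, create the phpMyAdmin storage database
# and let the dev user manage it, e.g.:
#   CREATE DATABASE phpmyadmin;
#   GRANT ALL PRIVILEGES ON phpmyadmin.* TO 'dev'@'%';
#   FLUSH PRIVILEGES;
```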
Notice I added a constraint so everything runs only on the swarm00 host (which makes zero sense in a Swarm, but I wanted to start simple), and that also explains why the port mode is host. At some point I might (or might not) spread the services across the nodes, as it should be in a Swarm deployment.
Regarding security: THIS IS A TERRIBLE security example. Ports open to the entire Internet will make every single bot, hacker and hacker-wannabe try to do nasty things to your setup. Also, NEVER use plain-text passwords in your Dockerfile, docker-compose files, etc., as I did; if you are working in a Docker Swarm cluster, you should use Docker secrets for this. And lock the swarm, by the way. Also, there is no SSL in my setup.
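As a sketch of the secrets approach: the official mysql image supports `*_FILE` variants of its environment variables, so it can read the password from the file Swarm mounts under /run/secrets instead of from a plain-text value (fragment only, replacing the relevant parts of the mysql service above):

```yaml
# Create the secret first, on a manager node:
#   printf 'my-strong-pass' | docker secret create mysql_root_password -
services:
  mysql:
    image: mysql:8.0
    environment:
      # The image reads the root password from this file at init time
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_password
    secrets:
      - mysql_root_password

secrets:
  mysql_root_password:
    external: true
```

The same pattern applies to MYSQL_PASSWORD; phpMyAdmin would need its own mechanism, since PMA_PASSWORD in the stack above is equally exposed.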
In part II of this project, I might introduce another approach or, even better, try to improve this one. The most important thing is to secure everything better to avoid unwanted traffic (you can investigate this by using docker logs).
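For that log check, something along these lines works against this stack (service names assume it was deployed as `phpenv`):

```shell
# Tail the Apache service logs to see incoming requests, including bot probes
docker service logs --tail 100 phpenv_apache

# Same for the SFTP service, where failed login attempts show up
docker service logs --tail 100 phpenv_sftp
```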