I’m currently working on a new project that houses a framework for multi-tenant applications built on Laravel. The source code is stored in a hosted GitLab instance on GitHost.io – which is run by GitLab themselves. There’s a set of unit tests that I wanted to run via continuous integration, so that every commit is tested quickly.
Throughout this post, the Docker daemon is listening at tcp://192.168.1.254:4243/ instead of on the default Unix socket. This is because I use the API for other things over HTTP.
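If you want the `docker` CLI to talk to that daemon, a one-line environment export is enough; the address below is simply the one used throughout this post:

```shell
# Point the Docker client at the remote daemon over TCP
# instead of the default local Unix socket
export DOCKER_HOST=tcp://192.168.1.254:4243
```

After this, commands like `docker ps` and `docker run` in the same shell all target the remote daemon; `docker info` is a quick way to confirm the connection works.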
The local development environment is handled by Vagrant, using a custom box based upon Ubuntu that includes Composer, MariaDB and a few other components. This works very well: one simple Vagrantfile can be dropped into new projects to immediately get running with a LAMP-style stack:
```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ssx/lamp"
  config.vm.network :private_network, ip: "192.168.13.37"
  config.vm.synced_folder "./", "/vagrant", type: "nfs"
  config.ssh.forward_agent = true

  config.vm.provider :virtualbox do |v|
    v.name = "mult"
    v.customize ["modifyvm", :id, "--memory", 2048]
  end
end
```
This box works quite well, but running `vagrant up` and the tests with it on every commit would be quite slow. Enter Docker. I converted the image that was used in the Vagrant environment to a Docker one, and now tests can be run within seconds instead of minutes.
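The conversion itself isn’t shown here, but as a rough sketch, a Dockerfile for an equivalent image might look like the following. The base image and package names are assumptions on my part, inferred from the services restarted in build.sh; the real hellossx/lamp image may well be built differently:

```dockerfile
# Hypothetical sketch of a LAMP-style image mirroring the Vagrant box
FROM ubuntu:14.04

# Services used by the test run: nginx, MySQL/MariaDB, PHP5-FPM
RUN apt-get update && apt-get install -y \
    nginx mariadb-server php5-fpm php5-cli php5-mysql curl

# Composer, as included in the Vagrant box
RUN curl -sS https://getcomposer.org/installer | php -- \
    --install-dir=/usr/local/bin --filename=composer

CMD ["bash"]
```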
I went the shell route, writing the Docker commands manually to boot up a container and then run our script and tests. In the repo I have a build.sh, which must be set to +x to execute:
```shell
#!/bin/bash

# Restart the services within the container
service mysql restart
service nginx restart
service php5-fpm restart

# Move into the directory (the repo is mounted into the container at /vagrant)
cd /vagrant

# Run our migrations
php artisan migrate
php artisan db:seed

# Execute our test suite
vendor/bin/phpunit
```
A .gitlab-ci.yml file then needs to be created to run the build and the tests:
```yaml
stages:
  - test

test:
  stage: test
  script:
    - composer install
    - cp .env.example .env
    - php artisan key:generate
    - docker run -d -t -v `pwd`:/vagrant hellossx/lamp:1.0.4 > container.pid
    - docker exec $(cat container.pid) /vagrant/build.sh
```
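One way to tidy up after a successful run – an assumption on my part, not something from the original config – is to append a final script line that stops and removes the container:

```yaml
    # Hypothetical final step: only reached when the tests pass,
    # since GitLab CI aborts the script on the first failing command
    - docker stop $(cat container.pid) && docker rm $(cat container.pid)
```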
This does mean that if the tests fail, the container will never get cleaned up. You can add a cronjob to clean up these orphaned containers, with something like this:
```shell
#!/bin/bash

# Set our Docker host
export DOCKER_HOST=tcp://192.168.1.254:4243

# Clean up old containers that failed tests
for i in $(docker ps -a | grep "hour ago" | cut -f1 -d" "); do
    docker stop $i && docker rm $i
done
```
This will remove the containers after an hour, leaving the more recent ones around for a short while in case you want to attach and investigate anything.
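To run that cleanup regularly, the script can be scheduled via cron; the path below is a placeholder for wherever you save the script on the Docker host:

```shell
# crontab entry: run the cleanup script at the top of every hour
# (/usr/local/bin/cleanup-containers.sh is a hypothetical path)
0 * * * * /usr/local/bin/cleanup-containers.sh
```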